I didn’t think I’d sit and watch this whole thing but it is a very interesting conversation. Near the end the author says something like “I know people in the industry who work in these labs who act like they’ve seen a ghost. They come home from work and struggle with the work they do and ask me what they should do. I tell them they should quit then, and then they stop asking me for advice.”
I do wonder at times if we would even believe a whistleblower should one come to light, telling us about the kinds of things they do behind closed doors. We only get to see the marketable end product, the one nobody can quite figure out the inner workings of. We don’t get to see the things on the cutting room floor.
Also, it’s true: these models are more accurately described as grown than built. Which in a way is a strange thing to consider, because we understand what it means to build something and to grow something. You can grow something without understanding how it grows. You can’t build something without understanding how you built it.
And when you are trying to control how things grow, you sometimes get things you didn’t intend to get, even if you also got the things you did intend to get.
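To make the “grown, not built” point a bit more concrete, here’s a deliberately toy sketch (plain NumPy, and everything in it is my own made-up illustration, not anything from an actual lab). You write down the growing conditions, the data, the tiny architecture, the objective, the update rule, but you never write the behavior itself; the weights that end up doing the work fall out of the optimization.

```python
import numpy as np

# You specify the "growing conditions": data, a tiny architecture, an objective, an update rule.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                     # training inputs
y = ((X @ np.array([1.5, -2.0, 0.5])) > 0) * 1.0  # the behavior we want it to pick up

W = rng.normal(size=3) * 0.01                     # weights start as meaningless noise
b = 0.0

for step in range(2000):
    p = 1 / (1 + np.exp(-(X @ W + b)))            # forward pass: the model's current behavior
    grad_W = X.T @ (p - y) / len(y)               # how wrong it is, and in what direction
    grad_b = np.mean(p - y)
    W -= 0.1 * grad_W                             # nudge; nobody writes the final values by hand
    b -= 0.1 * grad_b

# What "grew": weights that do the job, but that no one directly authored.
print("learned weights:", W)
print("training accuracy:", np.mean((p > 0.5) == y))
```

The real systems are vastly larger and the objectives far fuzzier, but the asymmetry is the same: you only ever specify the conditions, not the result, which is exactly how you end up growing things you didn’t intend right alongside the things you did.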


At what point does one explain consciousness? Like what level of detail do you need in an ontological model or whatever you want to call it to say you’ve explained it?
We’re looking at an abstract action, or state of doing an action, that’s made up of many component abstractions, each made up of more abstractions, and so on all the way down to the squishy thinking goop that’s doing something mechanical. We apply this abstract concept of consciousness to basically everything with a nervous system above a fairly small level, and there are arguments over how low that bar should be. At the same time, we can only confidently declare our own consciousness subjectively, while merely trusting that anyone else is being both truthful and accurate when declaring theirs. That fact has driven a whole lot of fancy lads with too much education and too much free time half mad with existential horror, despite the clear answer to such crippling ontological uncertainty being that it doesn’t matter and is a very silly question in the first place, since it’s all just ontology that we define: consciousness isn’t a thing, it’s an abstract concept that we apply at our discretion.
We can also look at other similar abstract actions, like seeing, moving, flying, etc., for points of comparison in how we should approach this ontology. Even before we understood what, physically speaking, an eye did, we could understand sight, and the earliest cameras predate the sort of molecular biology needed to really delve into how and why eyes do the stuff they do. Similarly, flying machines predated a thorough understanding of how birds or insects fly, and also predated a thorough understanding of aerodynamics. Wheeled ground locomotion was achieved thousands of years before we could understand muscles or optimized strides and whatnot.
This is a much more complicated sort of topic than those, certainly, so I’m not at all suggesting “oh you know, carts were easy, we didn’t have to understand what muscles chemically were for that, so this should be as simple as sticking two round things to a stick and putting a box on it!” because that’s silly. I only want to point out how we consistently reach the point of mimicking the abstractly understood action in practical terms long before we understand in depth how the naturally evolved form works.
So where do we actually pass the threshold for what we would consider conscious in a machine? We’re talking about a machine mimicking the various abstract processes that make up what we call consciousness when we look at each other, or a dog, or a bird, or whatever, and say “that, right there, I’m pretty sure that’s conscious and has an internal existence and thinks, even if it’s not all that good at planning and can’t talk in our funny complex words.”
I’m personally drawing that line at the point where those combined abstract systems add up to something that maintains persistent, ongoing thought with internal modeling and the rapid, dynamic acquisition and use of new information. Not just some ten-thousand-word text buffer, or a clever outside scripted heuristic for flagging stuff that seems relevant and mixing it back into the prompt when appropriate, but something that learns and develops its schema as it operates, rather than being a static thing that you occasionally replace with a new version of itself with more baked-in “knowledge”. I don’t think that line requires a comprehensive understanding of the molecular biology and chemistry of neurons, or a ground-up simulation thereof, to pass. I also don’t believe it requires a holistic mimicry of all the non-thinking bits like making a heart beat, intuiting inner ear nerve stuff to balance good, knowing when one is hungry and intuiting what one needs to eat to address a nutritional deficiency, and so on.
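To be clearer about the distinction I’m drawing, here’s a deliberately silly toy sketch with made-up names, not any real product’s API. The first class is the thing I’m saying doesn’t count: a frozen model plus a buffer of text it keeps re-reading. The second is the rough shape of the thing I’d start taking seriously: whatever is doing the modeling actually changes as new information comes in.

```python
class BufferMemoryAgent:
    """A frozen model plus a text buffer: 'memory' is just re-reading old text."""
    def __init__(self, frozen_model):
        self.model = frozen_model           # nothing about this ever changes at runtime
        self.buffer = []                    # the ten-thousand-word text buffer

    def observe(self, text):
        self.buffer.append(text)            # new information just piles up as text

    def respond(self, prompt):
        context = " ".join(self.buffer[-50:])   # heuristic: mix recent stuff back into the prompt
        return self.model(context + " " + prompt)


class SchemaUpdatingAgent:
    """Closer to where I'd draw the line: the internal model itself changes as it operates."""
    def __init__(self, model, learn_step):
        self.model = model
        self.learn_step = learn_step        # some online update rule, whatever form it takes

    def observe(self, text):
        # new information reshapes the internal model / schema,
        # rather than sitting in a buffer waiting to be re-read
        self.model = self.learn_step(self.model, text)

    def respond(self, prompt):
        return self.model(prompt)


if __name__ == "__main__":
    # toy stand-ins just so the sketch runs end to end
    frozen = lambda text: f"[frozen model saw {len(text)} chars]"
    a = BufferMemoryAgent(frozen)
    a.observe("some new fact")
    print(a.respond("a question"))
```

Obviously neither toy is anywhere near the real thing; the point is just that the first one’s “learning” lives entirely outside the model, and the second one’s lives inside it.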
I do think it’s probably at least somewhat deeper in the hierarchy of abstract things that do specific stuff than the extremely shallow approach the tech bros are taking, where they’re basically saying “smart things what think talk good, we will make the biggest and most talking good machine ever and it will be our god as the smartest among us!” That’s complete nonsense, and it’s clearly failing.
If you’re interested in books on this kinda thing, I’d recommend “Hallucinations” by Oliver Sacks. Bodies are strange.