I didn’t think I’d sit and watch this whole thing but it is a very interesting conversation. Near the end the author says something like “I know people in the industry who work in these labs who act like they’ve seen a ghost. They come home from work and struggle with the work they do and ask me what they should do. I tell them they should quit then, and then they stop asking me for advice.”
I do wonder at times if we would even believe a whistleblower should one come to light, telling us about the kind of things they do behind closed doors. We only get to see the marketable end product. The one no one can figure out how it does what it does exactly. We don’t get to see the things on the cutting room floor.
Also, it’s true. These models are more accurately described as grown, not built. Which in a way is a strange thing to consider. Because we understand what it means to build something and to grow something. You can grow something without understanding how it grows. You can’t build something without understanding how you built it.
And when you are trying to control how things grow you sometimes get things you didn’t intend to get, even if you got the things you did intend to get.


I don’t think that’s fundamentally correct. The fundamental dividing line isn’t replicating neurotransmitters and a perfect mimicry of evolved physical structures, but rather it’s a sort of persistence and continuity within a system that can dynamically model scenarios and store, recover, and revise data in a useful manner to facilitate this.
Current models do not and cannot do this, instead just trying to bake it all into this predictive black box thing that gets fired off over and over by an outside script and then just spits out text or pictures or a sound file. They’re each a dead-end in and of themselves, for all that they could theoretically be attached to a more holistic and complete AI to serve narrow specific functions for it (e.g. a small LLM quantifying text into a form it can understand, and converting its actual output into something human readable). We don’t need a perfect model of how to do all this in squishy meat goop to make a machine that’s close enough to conscious to be treated as such, we just need a better approach than these shitty pre-baked black box chat bot models to do it.
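To make the “fired off over and over by an outside script” bit concrete, here’s a minimal sketch of the pattern I mean. The names are made up for illustration; this isn’t any particular library’s API.

```python
def frozen_model(context: str) -> str:
    """Stand-in for a pretrained next-chunk predictor: same weights
    every call, no memory of its own between calls."""
    return " and so on"  # placeholder "prediction"

def outside_script(prompt: str, steps: int = 3) -> str:
    """All the apparent continuity lives out here in the loop: the script
    keeps appending the model's output to the context and calling it again."""
    context = prompt
    for _ in range(steps):
        context += frozen_model(context)
    return context

print(outside_script("Once upon a time"))
# -> "Once upon a time and so on and so on and so on"
```

The model in the middle never changes and never remembers; everything that looks like an ongoing process is the wrapper script feeding it its own output back.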
My problem with the tech people who are saying they’re replicating consciousness is that until we can actually explain consciousness they’re essentially expecting me to believe that they stumbled onto the solution while creating it.
Tech people can’t prove it’s possible to make a conscious program, they can’t describe how consciousness works in forms it already exists in, they can’t even define consciousness. How can they expect me to believe them when they claim they’re close to replicating it?
Consciousness is intimately connected to life, as in we are embodied beings moving around in the world driven by biological needs. In Marxist terms consciousness emerges from the dialectical movement of a specific body and the (social+natural) environment. Unless you are replicating this dialectic, you are not replicating any consciousness whatsoever.
This is not solely Marxist either, cf. Merleau-Ponty, or this Wittgenstein quote:
Saying you can program consciousness with mathematical representations is pure Cartesian dualist idealism.
Of course a good sophist now would say that I’m conflating consciousness with intelligence, and indeed I am. One is dependent on the other.
“Consciousness is intrinsically a property of mushy goop, a magic soul thing made of magic!” - very serious philosophers who should be taken seriously for thinking real hard all day centuries ago
All due respect, comrade, you should get acquainted with a body of thought called dialectical materialism.
total misrepresentation of what I’ve said, I suggest you read it again:
Consciousness starts from biology, in other words the very opposite of a supernatural quality of things. It’s the natural aspect of living matter as it moves around and does things in the material world. This is as radically materialist as it gets. Explaining consciousness from “mental representations” is actually starting from idealism.
The point is this:
Some readings for you:
Evald Ilyenkov - Dialectical Logic
Feliks Mikhailov - The Riddle of the Self - Dreams of the Kurshskaya Sand Bar
Or if you insist on something non-Marxist from the 21st century:
Mind in Life by Evan Thompson
The very concept of “consciousness” is inherently abstract and idealist. It’s looking at the emergent behavior of a complex system and constructing an ontology to describe what it’s doing. It is a bunch of fuzzy little made up words we use to describe a thing we see happening.
My stance is to fundamentally reject the mystical and essentialist takes on this and simply declare that something may as well be considered conscious when it becomes a complex system that does a set of criteria we associate with sentient life, and that it does not need to come from a complete and total mimicry of the existing systems that naturally arrived at a thing that does that. Fetishizing the specific forms that have yielded it in the past would be like holding up something like “blacksmithing” as the emergent property of muscles and sinew driving a heavy metal thing at some heat-softened other metal thing that’s on a rigid surface and rejecting the idea that a hammer could be driven by any other method (like the sorts of mechanical presses many modern blacksmiths use in lieu of or in support of a hammer and anvil) to accomplish the same goal.
I want to narrow in on this though, because I actually agree with the point of this sort of abstract internal modeling being the result of a system that’s operating within some kind of space that is either material or a mimicry of it. I just disagree that it essentially requires holistic organic life to start learning dynamically and producing abstract models to work with. We fundamentally have machines that can do the hard mechanical parts of this already (the senses, data storage, and dynamic responses), and what is lacking is a methodology for squishing this all together, although we also have machines that can fundamentally do that sort of fuzzy data-and-process squishing already too.
This isn’t some kind of intractable problem, it’s one of approach and focus. Machine learning has been hyperfocused on text processing because that was comparatively easy (because there are massive, massive amounts of text around), because language is a comparatively simple system, and because there are actual immediate applications for even primitive machine learning as far as pulling data out of text or transforming text in some way goes, so even before it became a funny speak-and-spell bot that could outsmart your average executive despite being a glorified magic-8-ball there were material reasons to study and develop it. I think that’s a dead end, albeit one that’s yielded some potentially useful technology, but it’s not a dead end on the grounds that machines can’t ever do thinking stuff.
To reframe this a little, we both agree that it is implausible for a human to sit down and design a machine human (or even a template for one to emerge given training) because designing requires simulating in a sense and containing a simulation of itself within itself is silly and infeasible, right? Well, what about something simpler? Could a human design a mechanical rodent brain (or template thereof)? A mechanical dog brain? Does such a mimicry require a full simulation of neurochemistry, or a replication of the functions that these things perform? Does it require a holistic replication of every extant function or just the ones related to processing and modeling the world around it?
My stance is that we can do a whole lot of good-enough abstract replications of the way living things interact with their surroundings, such that if they were fed into a correctly designed fuzzy black-box data-squishing machine it would start doing the sort of sentient dynamism we associate with living things, in a fashion we’d describe as conscious, even if it would not be a synthetic person nor necessarily be easily scaled up to that level. We could probably still cram an artificial speech center onto it so it could articulate its experiences, though, even if it wasn’t particularly smart.
Consciousness is literally the most immediate interaction with the world you are in. You are conscious of something, and that is the world you are in. There is nothing abstract about that; it cannot possibly BE abstract, it’s the most concrete.
Linear algebra is abstract.
All words are “made up”! Even words like “complex” and “system.”
The point is: does the conceptual framework start from material reality or not?
I think you still don’t get it:
In a very fundamental sense, consciousness IS the dynamic interplay of the body and the world it lives IN.
It’s not simply the “result”; that dynamic interplay is exactly what consciousness is.
This is the point: going about replicating a dog brain in order to replicate “dog consciousness” is wrong-headed (heh), you need to replicate dog LIFE
Marxism is a materialist “systems theory” though, so you are on the right path.
e.g. your blacksmithing analogy: surely it’s silly to say that blacksmithing is the emergent property of biceps and all that, and no one would say any such thing, but blacksmithing is emergent from a specific way of life in history that smart apes with ape-hands (as opposed to insect legs, for instance) developed to face and solve very real material needs
To expand more would require a book-length text so I refer back to the previously cited literature. You are clearly interested in this so I don’t know why you don’t engage with the philosophical literature that expanded on this topic in many interesting ways.
Which is what LLMs are. They are not and cannot be true “strong AI” though as I said previously.
Also: “Of course a good sophist now would say that I’m conflating consciousness with intelligence, and indeed I am. One is dependent on the other.”
You are clearly very intelligent, but I think you dismissed philosophy too haughtily.
Hold up a second, that is something we’re inferring[1], based on our senses, and the entirety of how we can conceive it must necessarily be an abstraction that exists in our brains. Now it’s a good inference, obviously, because it is a logically necessary thing that must exist in order for us to be abstractly modeling our understanding of what we’re doing in the first place, but we are still talking about a framework for describing this abstraction of what we can infer is happening.
I’ve realized as I’ve been writing this all out how heavily my understanding and use of object-oriented programming informs the framework with which I try to articulate things, so perhaps I should clarify what I mean when I talk about abstractions or things being abstract. I see the fuzzier, higher-level understandings of them as semantically distinct from the actual bits that make them materially work: the high-level abstract version is more generic and useful to talk about and think about, while the low-level material stuff is only relevant when you need to go in and look at why and how it functions in a particular instance. In that sense (and to keep with the blacksmithing touchstone), something like the action of a hammer striking hot iron is an abstract action: it is a thing that accomplishes a purpose, and which we can talk about and conceptualize conveniently, whereas its literal material truth could take on a myriad of different forms based on the technique, the types of tools being used, how those physically work, what the composition of the iron is, where the iron came from, what the desired end shape is, etc. So it’s both a real material action and process, but also an abstraction used to catch all possible variations thereof.
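Since I’ve brought up the OOP framing, here’s a rough toy sketch of what I mean by the split between the abstraction and its material implementations, using the blacksmithing touchstone (the class names are made up for this example):

```python
from abc import ABC, abstractmethod

class HammerStrike(ABC):
    """The abstract action: drive something heavy at hot metal to shape it.
    Anything that can deliver_blow() counts, however it materially works."""

    @abstractmethod
    def deliver_blow(self, workpiece: str) -> str: ...

class HandHammer(HammerStrike):
    """Muscle and sinew driving a hammer at an anvil."""
    def deliver_blow(self, workpiece: str) -> str:
        return f"{workpiece}, shaped by arm and hammer"

class HydraulicPress(HammerStrike):
    """A mechanical press doing the same abstract job by other means."""
    def deliver_blow(self, workpiece: str) -> str:
        return f"{workpiece}, shaped by hydraulic ram"

def forge(striker: HammerStrike, workpiece: str) -> str:
    # Code written against the abstraction doesn't care which concrete
    # implementation shows up; only the low-level details differ.
    return striker.deliver_blow(workpiece)

print(forge(HandHammer(), "hot iron bar"))
print(forge(HydraulicPress(), "hot iron bar"))
```

The high-level `forge` call stays meaningful no matter which material implementation is plugged in underneath it, which is exactly the sense in which I mean the hammer strike is an abstraction.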
This I take issue with, because there are things that are not conscious that do that, and with things that we recognize as conscious we know that both senses and internal models can be massively detached from material reality. So you have unthinking things that react dynamically within the world they exist in, in a way that can be more accurate than similarly scoped living things we’d still recognize as sentient, and we have consciousnesses that are effectively existing while entirely detached from the world they materially exist in.
So consciousness is something that interacts with fallible inputs and fallible outputs, and it is not required in order for something to receive inputs and respond with actions.
I used it more to set a scope than to talk about literally replicating a high-fidelity synthetic dog. A literal dog has a whole host of things that are very important for a fuzzy social animal running around the woods trying to survive, but which aren’t specifically required for the bits where it’s learning or producing an abstract model of the world around it to guide how it interacts with things beyond that they’re informing what it needs to survive and providing it with tools to use. You wouldn’t need to replicate all the functions of life to get the same degree of social and environmental modeling.
I do feel that we’re sort of talking past each other here, because I do agree with the structural need for some kind of interplay with senses, real or simulated, and motivation systems to provide feedback. It’s just I don’t see the whole “having the capacity for abstract modeling and learning” as being predicated on this specifically being a perfect mimicry of extant life existing within a natural context. Just as the hammer strike is a hammer strike regardless of whether it is driven with muscle and sinew or pneumatic/hydraulic machinery, regardless of whether it is an essential task producing a useful tool or an artistic endeavor producing something purely decorative or anywhere in between, regardless of whether it is the result of a long tradition rooted in historical material contexts or someone trying to figure it out with no experience or guidance beyond knowing that blacksmithing is a thing, it remains something that falls into this abstract category of actions that involve hitting a bit of metal with another for the purpose of shaping it.
Consciousness/sentience seemed like something incredible and ineffable for most of history, built of wondrous parts that we couldn’t dream of replicating, and yet suddenly we have machines that can mimic senses, maintain abstract simulations we design for them, store and retrieve vast amounts of data, follow logical rote processes, and now even just sort of fuzzily produce new machines that perform novel abstract actions provided a proper training regimen is set up. Materially, we have all the component pieces we need to make something that’s at least dimly conscious if limited in scope, and it does not need to be a wholly faithful synthetic recreation of an extant conscious being to do this.
Sure, it has a long historical context serving very real material needs, during which time it was pretty much exclusively the domain of raw human labor and muscle. Now it’s largely removed from that context and exists more as a craft for a certain sort of artist, while new industrial processes have largely supplanted the old blacksmithing trade, and they use machines that reduce the strain on their bodies and speed up their work now too. We still consider it blacksmithing though, removed from its historical context and traditional methods or not.
That’s my point: we can take a big complex abstract sort of set of actions and transfer them out of the material context they emerged in, we can change the component pieces around, and we can still end up with the same functions regardless of how they’ve been transformed in both context and form. We might lose things we valued about the original form in the process, of course, and it can be transformed in other ways, but it’s not like the action of “doing blacksmithing to make a thing from metal” is stuck specifically to the historical contexts and methods where people learned how to do this and relied on it for essential tools, becoming lost without that, or that it can’t be reverse engineered from fragments of the knowledge that went into it as a trade (regardless of whether such attempts are worse than proper education in established methods).
I used to read philosophy when I was younger, including on this topic. I didn’t find it useful or memorable, and there certainly weren’t any answers to be found in it.
In contrast, I have my answer to this matter, which I have articulated as best I can and backed up with examples of the reasoning underpinning it. Now am I correct? Who knows, everything we have to go off of is speculation; I could be wrong, but I’m holding on to my bet until we materially see how it pans out. Half my argument isn’t even about a potential method to achieve a machine with dynamic abstract modeling, just how we should categorize one that displays a specific set of properties and defining a line beyond which I would personally be confident in applying the label of “conscious” to it.
Sort of. They’re like a horribly overgrown and overspecialized version of the bit of our language processing brain goop that handles predicting incoming text or spoken language to facilitate understanding it better. They’re interesting and in some limited ways useful, and a stripped down one could probably work pretty well as part of the interface of a more holistic system, but we’re in complete agreement that they’re not intelligent nor are they a pathway to any sort of conscious machine.
Correctly inferring, I’d say. I categorically reject all the “how can we believe our lying eyes?” wanking that some philosophers have gotten very wrapped up in and been bothered by. ↩︎
We are at an impasse. Basically in every paragraph you keep smuggling back in the old dualism, and say it’s all fine.
I’m not even sure what your goal is in engaging with me. Originally I replied to another user who said:
“My problem with the tech people who are saying they’re replicating consciousness is that until we can actually explain consciousness they’re essentially expecting me to believe that they stumbled onto the solution while creating it.”
I’m not sure what you’ve read, but you clearly betray your unfamiliarity with modern philosophy post-1650: Spinoza, Hegel, Marx, Dewey, Santayana, Husserl, Heidegger, Wittgenstein, Merleau-Ponty… all would take issue with your conceptualising of this problem.
It seems to me that you are unfamiliar with Marxism, pragmatism, and phenomenology (you clearly didn’t understand my brief explication of intentionality). This is fine, no one should be forced to read philosophy. But dismissing it without understanding it is… well… supercilious.
The role of philosophy is not necessarily to give final answers, but to critically examine conceptual frameworks. With that said, Cartesian dualism was criticised and dismantled so thoroughly we can dismiss it as a “final answer”. It’s just tech people keep smuggling it back through the AI backdoor.
The point being that hominid bodies in a certain natural environment that had metal available for mining, the material need for stronger tools, the social development of distributed labour (mining, logistics and so on), and practical imagination: all of these dynamically playing off one another developed blacksmithing. That is intelligence in action. Strong AI needs to achieve something like this. There was no such thing as blacksmithing, and after this there is such a thing. Replicating this after the fact is a different question. That would be weak AI.
If you peel off the modern computer lingo, in every paragraph you’ve written something that was thoroughly dismantled by some silly philosopher a hundred years ago. Again, I think we are going in circles, so just read some theory (even Lenin wrote a book about this). Or don’t, whatever.
At what point does one explain consciousness? Like what level of detail do you need in an ontological model or whatever you want to call it to say you’ve explained it?
We’re looking at an abstract action, or state of doing an action, that’s made up of many component abstractions that are each made up of more abstractions and so on, all the way down to the squishy thinking goop stuff that’s doing something mechanical. We apply this abstract concept of consciousness to basically everything with a nervous system above a very small level, and there are arguments over how low that should be. At the same time, we can only confidently, subjectively declare our own consciousness, while merely trusting that someone else is being both truthful and accurate when declaring their own. That fact has driven a whole lot of fancy lads with too much education and too much free time half mad with existential horror, despite the clear answer to such crippling ontological uncertainty being that it doesn’t matter and is a very silly question in the first place, since it’s all just ontology that we define: consciousness isn’t a thing, it’s an abstract concept that we apply at our discretion.
We can also look at other similar abstract actions like seeing, moving, flying, etc. for points of comparison with how we should approach this ontology: even before we understood what, physically speaking, an eye did, we could understand sight, and the earliest cameras predate the sort of molecular biology needed to really delve into how and why eyes do the stuff they do. Similarly, flying machines predated a thorough understanding of how birds or insects could fly, and also predated a thorough understanding of aerodynamics. Wheeled ground locomotion was also achieved thousands of years before we could understand muscles or optimized strides and whatnot.
This is a much more complicated sort of topic than those, certainly, so I’m not at all suggesting “oh you know carts were easy, we didn’t have to understand what muscles chemically were for that, so this should be as simple as sticking two round things to a stick and putting a box on it!” because that’s silly, I only want to point out how we consistently reach the point of mimicking the abstractly understood action in practical terms long before we understand how the naturally evolved form works in depth.
So where do we actually pass the threshold for what we would consider conscious when we’re talking about a machine that’s mimicking the various abstract processes that make up what we call consciousness when we’re looking at each other or a dog or a bird or whatever and saying “that, right there, I’m pretty sure that’s conscious and has an internal existence and thinks even if it’s not all that good at planning and can’t talk in our funny complex words”?
I’m personally drawing that line at when those combined abstract systems add up to something that maintains a level of persistent, ongoing thought with internal modeling and the rapid, dynamic acquisition and use of new information in a fashion that isn’t just some ten thousand word text buffer or clever outside scripted heuristic for flagging stuff that seems relevant and mixing them back into its prompt when appropriate, something that learns and develops its schema as it operates rather than being a static thing that you sometimes replace with a new version of itself with more baked-in “knowledge”. I don’t think that line requires a comprehensive understanding of the molecular biology and chemistry of neurons or a ground-up simulation thereof to pass. I also don’t believe it requires a holistic mimicry of all the non-thinking related bits like making a heart beat, intuiting inner ear nerve stuff to balance good, knowing when one is hungry and intuiting what one needs to eat to address a nutritional deficiency, etc.
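To gesture at the contrast with toy code (all names here are made up; nothing below is a real system): the first pattern is the “clever outside scripted heuristic” I’m dismissing, a static model plus a script that stuffs seemingly relevant snippets back into the prompt. The second keeps an internal schema that the system itself revises as it operates.

```python
def static_model(prompt: str) -> str:
    """Stand-in for a frozen model: its behaviour never changes between calls."""
    return f"response to: {prompt}"

def prompt_stuffing_loop(query: str, memory_log: list[str]) -> str:
    # Outside heuristic: grab logged snippets that share a word with the
    # query and prepend them. The model itself learns nothing from this.
    relevant = [m for m in memory_log if any(w in m for w in query.split())]
    return static_model(" ".join(relevant + [query]))

class SelfUpdatingAgent:
    """The other side of the line: something that revises its own schema
    (here, a trivially simple word-count dict) as a consequence of what it
    encounters, so later behaviour depends on everything seen so far."""

    def __init__(self) -> None:
        self.schema: dict[str, int] = {}

    def observe(self, event: str) -> None:
        for token in event.split():
            self.schema[token] = self.schema.get(token, 0) + 1

    def act(self, query: str) -> str:
        salience = sum(self.schema.get(w, 0) for w in query.split())
        return f"acting on '{query}' with learned salience {salience}"

agent = SelfUpdatingAgent()
agent.observe("the red ball rolled under the couch")
print(agent.act("where is the red ball"))
# -> acting on 'where is the red ball' with learned salience 4
```

Both halves are deliberately trivial toys; the point is only where the learning lives, outside in a wrapper script or inside the thing itself.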
I do think it’s probably at least somewhat deeper in the hierarchy of abstract things that do specific stuff than the extremely shallow approach that tech bros are doing where they’re basically saying “smart things what think talk good, we will make the biggest and most talking good machine ever and it will be our god as the smartest among us!” which is complete nonsense and clearly failing.
if you’re interested in books on this kinda thing I’d recommend “Hallucinations” by Oliver Sacks. bodies are strange
While persistent modeling and predicting is a very important part of what we do, that alone may not be enough to form a conscious mind. We still don’t know what makes us conscious. The popular thinking is that our minds are fundamentally no different from a computer, that the phenomenon emerges out of some specific organization at some threshold of complexity. But. It’s only popular because we are surrounded by computers. It would be convenient.
Personally, I’m all-in on as-yet undefined quantum effects within microtubules (part of a cell’s cytoskeleton). I can say that because I’m a moron and it doesn’t matter what I think. But to me, a total fucking layman, it looks like the most interesting point of active research.
You put some things in there that I didn’t exactly say, or that at least belittled what I was trying to convey.
My whole point is that I think conceptually a super intelligence would have to be able not only to think but also to feel.
Edit: sorry, I somehow mixed up which reply I was responding to. I didn’t click through and assumed this was about a sort of glib reply I’d given to someone else.
Feeling is a subcategory of thinking that’s just sort of arbitrarily distinguished from reason by rhetorical traditions that try to create this dichotomy between the thoughts that you can sort of round towards formal logic systems by phrasing them the right way and the thoughts that are more intuitive and linked to psychosomatic sensations like the visceral pain that sorrow causes. This gets dressed up and treated as some divine and profound thing, and I think that’s very silly and fundamentally flawed.
That is to say, thoughts do not become irrational or become something other than thoughts just because they’re linked to an instinctive reward/punishment feedback system. For example, grief is neither irrational nor profound simply because it causes physical agony, and something doesn’t need a throat or chest to feel a painful pit in to appreciate or be averse to loss.
This philosophical approach of “what would make a synthetic human? Surely it must be this collection of things we value, things we poetically relate to our experience,” I think is fundamentally just as off base as the techbro approach of thinking their giant speak-and-spell bot just needs one more data center, one more trillion dollar investment, before it becomes an alien superintelligent NHP who will judge us all for sins and virtues that it makes up through some inscrutable alien logic.
In the abstract a system is what it does, and we consider conscious things that are not at all capable of valuing or comprehending all the flowery things we value. So if a system does the things we associate with conscious life, things like dynamic learning and recall within itself, like producing some kind of abstract model of the world and of things around it to be able to operate within that framework, even if that system is not very good at doing those things, even if it is not particularly smart, is it not “conscious”?
To put it another way, is a dog conscious within your ontology? If not, would that change if we somehow crammed a speech center into its brain so it could articulate its experiences? Do you earnestly believe a machine cannot be made that is on the level with even a very silly dog? What if such a silly little robot got given a speech center so it could articulate the way it saw its world?
I don’t draw the line of what becomes “conscious” at a level of “synthetic human with a fully formed human experience” or “inscrutable super intelligent NHP”, merely at the boundary of having functions we associate with sentient life which I think is a whole lot less mystical and inscrutable than people here are rhetorically making it out to be.
I’m not personally stating that it is mystical and inscrutable, but I am asserting that we need to be a lot better at computers in order to simulate the behavior of an organism accurately. By and large I am in agreement with you.
I think the disconnect is where you drive the argument: my original comment did not mystify the human experience. I was saying that the physical elements of feeling that are tied back to the other “systems of consciousness” (just to wrap up what we both have been saying) are, in my view, just as important to mathematically model in a computer program in order for a machine to really achieve what we view as intelligence, not just in ourselves but in other organisms.
I think I take such a hard stance not only because CS as a field isn’t really dedicated to exploring this aspect of AI, but also because part of me enjoys the spiritual and mysterious element of being “alive”