I didn’t think I’d sit and watch this whole thing, but it is a very interesting conversation. Near the end the author says something like “I know people in the industry who work in these labs who act like they’ve seen a ghost. They come home from work and struggle with the work they do and ask me what they should do. I tell them they should quit then, and then they stop asking me for advice.”
I do wonder at times whether we would even believe a whistleblower, should one come to light, telling us about the kinds of things they do behind closed doors. We only get to see the marketable end product, the one that no one can figure out exactly how it does what it does. We don’t get to see the things on the cutting room floor.
Also, it’s true. These models are more accurately described as grown, not built. Which in a way is a strange thing to consider, because we understand what it means to build something and to grow something. You can grow something without understanding how it grows. You can’t build something without understanding how you built it.
And when you are trying to control how things grow, you sometimes get things you didn’t intend to get, even if you got the things you did intend to get.


Consciousness is intimately connected to life, as in we are embodied beings moving around in the world driven by biological needs. In Marxist terms consciousness emerges from the dialectical movement of a specific body and the (social+natural) environment. Unless you are replicating this dialectic, you are not replicating any consciousness whatsoever.
This is not solely Marxist either, cf. Merleau-Ponty, or this Wittgenstein quote:
Saying you can program consciousness with mathematical representations is pure Cartesian dualist idealism.
Of course a good sophist now would say that I’m conflating consciousness with intelligence, and indeed I am. One is dependent on the other.
“Consciousness is intrinsically a property of mushy goop, a magic soul thing made of magic!” - very serious philosophers who should be taken seriously for thinking real hard all day centuries ago
All due respect, comrade, you should get acquainted with a body of thought called dialectical materialism.
total misrepresentation of what I’ve said, I suggest you read it again:
Consciousness starts from biology, in other words the very opposite of a supernatural quality of things. It’s the natural aspect of living matter as it moves around and does things in the material world. This is as radically materialist as it gets. Explaining consciousness from “mental representations” is actually starting from idealism.
The point is this:
Some readings for you:
Evald Ilyenkov - Dialectical Logic
Feliks Mikhailov - The Riddle of the Self - Dreams of the Kurshskaya Sand Bar
Or if you insist on something non-Marxist from the 21st century:
Mind in Life by Evan Thompson
The very concept of “consciousness” is inherently abstract and idealist. It’s looking at the emergent behavior of a complex system and constructing an ontology to describe what it’s doing. It is a bunch of fuzzy little made up words we use to describe a thing we see happening.
My stance is to fundamentally reject the mystical and essentialist takes on this and simply declare that something may as well be considered conscious when it becomes a complex system that does a set of criteria we associate with sentient life, and that it does not need to come from a complete and total mimicry of the existing systems that naturally arrived at a thing that does that. Fetishizing the specific forms that have yielded it in the past would be like holding up something like “blacksmithing” as the emergent property of muscles and sinew driving a heavy metal thing at some heat-softened other metal thing that’s on a rigid surface and rejecting the idea that a hammer could be driven by any other method (like the sorts of mechanical presses many modern blacksmiths use in lieu of or in support of a hammer and anvil) to accomplish the same goal.
I want to narrow in on this though, because I actually agree with the point of this sort of abstract internal modeling being the result of a system that’s operating within some kind of space that is either material or a mimicry of it. I just disagree that it essentially requires holistic organic life to start learning dynamically and producing abstract models to work with. We fundamentally have machines that can do the hard mechanical parts of this already, the senses and data storage and dynamic responses, and what is lacking is a methodology for squishing this all together although we also have machines that can fundamentally do that sort of fuzzy data and process squishing stuff already too.
This isn’t some kind of intractable problem, it’s one of approach and focus. Machine learning has been hyperfocused on text processing because that was comparatively easy (because there are massive, massive amounts of text around), because language is a comparatively simple system, and because there are actual immediate applications for even primitive machine learning when it comes to pulling data out of text or transforming text in some way. So even before it became a funny speak-and-spell bot that could outsmart your average executive despite being a glorified magic-8-ball, there were material reasons to study and develop it. I think that’s a dead end, albeit one that’s yielded some potentially useful technology, but it’s not a dead end on the grounds that machines can’t ever do thinking stuff.
To reframe this a little, we both agree that it is implausible for a human to sit down and design a machine human (or even a template for one to emerge given training) because designing requires simulating in a sense and containing a simulation of itself within itself is silly and infeasible, right? Well, what about something simpler? Could a human design a mechanical rodent brain (or template thereof)? A mechanical dog brain? Does such a mimicry require a full simulation of neurochemistry, or a replication of the functions that these things perform? Does it require a holistic replication of every extant function or just the ones related to processing and modeling the world around it?
My stance is that we can do a whole lot of good-enough abstract replications of the way living things interact with their surroundings that if fed into a correctly designed fuzzy black box data squishing machine it would start doing the sort of sentient dynamism we associate with living things in a fashion we describe as something being conscious, even if it would not be a synthetic person nor necessarily be easily scaled up to that level. We could probably still cram an artificial speech center onto it so it could articulate its experiences, though, even if it wasn’t particularly smart.
Consciousness is literally the most immediate interaction with the world you are in. You are conscious of something, and that is the world you are in. There is nothing abstract about that; it cannot possibly BE abstract, it’s the most concrete thing there is.
Linear algebra is abstract.
All words are “made up”! Even words like “complex” and “system.”
The point is: does the conceptual framework start from material reality or not?
I think you still don’t get it:
In a very fundamental sense, consciousness IS the dynamic interplay of the body and the world it lives IN.
It’s not simply the “result”; consciousness IS exactly that dynamic interplay.
This is the point: going about replicating a dog brain in order to replicate “dog consciousness” is wrong-headed (heh), you need to replicate dog LIFE.
Marxism is a materialist “systems theory” though, so you are on the right path.
e.g. your blacksmithing analogy: surely it’s silly to say that blacksmithing is the emergent property of biceps and all that, and no one would say any such thing, but blacksmithing is emergent from a specific way of life in history that smart apes with ape-hands (as opposed to insect legs, for instance) developed to face and solve very real material needs
To expand more would require a book-length text so I refer back to the previously cited literature. You are clearly interested in this so I don’t know why you don’t engage with the philosophical literature that expanded on this topic in many interesting ways.
Which is what LLMs are. They are not and cannot be true “strong AI” though as I said previously.
Also: “Of course a good sophist now would say that I’m conflating consciousness with intelligence, and indeed I am. One is dependent on the other.”
You are clearly very intelligent, but I think you dismissed philosophy too haughtily.
Hold up a second, that is something we’re inferring[1], based on our senses, and the entirety of how we can conceive it must necessarily be an abstraction that exists in our brains. Now it’s a good inference, obviously, because it is a logically necessary thing that must exist in order for us to be abstractly modeling our understanding of what we’re doing in the first place, but we are still talking about a framework for describing this abstraction of what we can infer is happening.
I’ve realized as I’ve been writing this all out how heavily my understanding and use of object-oriented programming informs the framework with which I try to articulate things, so perhaps I should clarify that when I talk about abstractions or things being abstract it is because I see the sort of fuzzier, higher-level understandings of them as semantically distinct from the actual bits that make them materially work in that the high-level abstract version is more generic and useful to talk about and think about while the low-level material stuff is only relevant when you need to go in and look at why and how it functions in a particular instance. In that sense (and to keep with the blacksmithing touchstone), something like the action of a hammer striking hot iron is an abstract action, it is a thing that accomplishes a purpose and which we can talk about and conceptualize conveniently whereas its literal material truth could take on a myriad of different forms based on the technique, on the types of tools being used, how those physically work, what the composition of the iron is, where the iron came from, what the desired end shape is, etc. So it’s both a real material action and process, but also an abstraction used to catch all possible variations thereof.
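To make that OOP framing concrete, here is a minimal Python sketch (the class and function names are my own illustrative inventions, not from any real codebase): the abstract action “hammer strike” is one interface, while the material mechanisms driving it are interchangeable implementations.

```python
from abc import ABC, abstractmethod

class HammerStrike(ABC):
    """The abstract action: 'strike hot iron'. Callers only care
    that a strike happens, not how it is materially driven."""
    @abstractmethod
    def strike(self) -> str: ...

class HandHammer(HammerStrike):
    # muscle-and-sinew implementation
    def strike(self) -> str:
        return "strike (driven by arm)"

class PowerHammer(HammerStrike):
    # pneumatic implementation: same abstract action, different material basis
    def strike(self) -> str:
        return "strike (driven by pneumatics)"

def forge(tool: HammerStrike, blows: int) -> list:
    # works identically for any concrete implementation of the abstraction
    return [tool.strike() for _ in range(blows)]
```

Code that calls `forge` lives entirely at the abstract level; only when you open up a particular implementation do the material specifics matter, which is exactly the high-level/low-level split described above.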
This I take issue with, because there are things that are not conscious that do that, and with things that we recognize as conscious we know that both senses and internal models can be massively detached from material reality. So you have unthinking things that react dynamically within the world that they exist in in a way that can be more accurate than similarly scoped living things we’d still recognize as sentient, and we have consciousnesses that are effectively existing while entirely detached from the world they materially exist in.
So consciousness is something that interacts with fallible inputs and fallible outputs, and it is not required in order for something to receive inputs and respond with actions.
I used it more to set a scope than to talk about literally replicating a high-fidelity synthetic dog. A literal dog has a whole host of things that are very important for a fuzzy social animal running around the woods trying to survive, but which aren’t specifically required for the bits where it’s learning or producing an abstract model of the world around it to guide how it interacts with things beyond that they’re informing what it needs to survive and providing it with tools to use. You wouldn’t need to replicate all the functions of life to get the same degree of social and environmental modeling.
I do feel that we’re sort of talking past each other here, because I do agree with the structural need for some kind of interplay with senses, real or simulated, and motivation systems to provide feedback. It’s just I don’t see the whole “having the capacity for abstract modeling and learning” as being predicated on this specifically being a perfect mimicry of extant life existing within a natural context. Just as the hammer strike is a hammer strike regardless of whether it is driven with muscle and sinew or pneumatic/hydraulic machinery, regardless of whether it is an essential task producing a useful tool or an artistic endeavor producing something purely decorative or anywhere in between, regardless of whether it is the result of a long tradition rooted in historical material contexts or someone trying to figure it out with no experience or guidance beyond knowing that blacksmithing is a thing, it remains something that falls into this abstract category of actions that involve hitting a bit of metal with another for the purpose of shaping it.
Consciousness/sentience seemed like something incredible and ineffable for most of history, built of wondrous parts that we couldn’t dream of replicating, and yet suddenly we have machines that can mimic senses, maintain abstract simulations we design for them, store and retrieve vast amounts of data, follow logical rote processes, and now even just sort of fuzzily produce new machines that perform novel abstract actions provided a proper training regimen is set up. Materially, we have all the component pieces we need to make something that’s at least dimly conscious if limited in scope, and it does not need to be a wholly faithful synthetic recreation of an extant conscious being to do this.
Sure, it has a long historical context serving very real material needs, during which time it was pretty much exclusively the domain of raw human labor and muscle. Now it’s largely removed from that context and exists more as a craft for a certain sort of artist, while new industrial processes have largely supplanted the old blacksmithing trade, and they use machines that reduce the strain on their bodies and speed up their work now too. We still consider it blacksmithing though, removed from its historical context and traditional methods or not.
That’s my point, that we can take a big complex abstract sort of set of actions and transfer them out of the material context they emerged in, we can change the component pieces around, and we can still end up with the same functions regardless of how they’ve been transformed in both context and form. We might lose things we valued about the original form in the process, of course, and it can be transformed in other ways, but it’s not like the action of “doing blacksmithing to make a thing from metal” is stuck specifically to the historical contexts and methods where people learned how to do this and relied on this for essential tools and becomes lost without that or that it can’t be reverse engineered from fragments of the knowledge that went into it as a trade (regardless of whether such attempts are worse than proper education in established methods).
I used to read philosophy when I was younger, including on this topic. I didn’t find it useful nor memorable, and there certainly weren’t any answers to be found in it.
In contrast, I have my answer to this matter, which I have articulated as best I can and backed up with examples of the reasoning underpinning it. Now am I correct? Who knows, everything we have to go off of is speculation; I could be wrong, but I’m holding on to my bet until we materially see how it pans out. Half my argument isn’t even about a potential method to achieve a machine with dynamic abstract modeling, just how we should categorize one that displays a specific set of properties and defining a line beyond which I would personally be confident in applying the label of “conscious” to it.
Sort of. They’re like a horribly overgrown and overspecialized version of the bit of our language processing brain goop that handles predicting incoming text or spoken language to facilitate understanding it better. They’re interesting and in some limited ways useful, and a stripped down one could probably work pretty well as part of the interface of a more holistic system, but we’re in complete agreement that they’re not intelligent nor are they a pathway to any sort of conscious machine.
Correctly inferring, I’d say. I categorically reject all the “how can we believe our lying eyes?” wanking that some philosophers have gotten very wrapped up in and been bothered by. ↩︎
We are at an impasse. Basically in every paragraph you keep smuggling back in the old dualism, and say it’s all fine.
I’m not even sure what your goal is in engaging with me. Originally I replied to another user who said:
“My problem with the tech people who are saying they’re replicating consciousness is that until we can actually explain consciousness they’re essentially expecting me to believe that they stumbled onto the solution while creating it.”
I’m not sure what you’ve read, but you clearly betray your unfamiliarity with modern philosophy post-1650: Spinoza, Hegel, Marx, Dewey, Santayana, Husserl, Heidegger, Wittgenstein, Merleau-Ponty… all would take issue with how you conceptualise this problem.
It seems to me that you are unfamiliar with Marxism, pragmatism, and phenomenology (you clearly didn’t understand my brief explication of intentionality). This is fine, no one should be forced to read philosophy. But dismissing it without understanding it is… well… supercilious.
The role of philosophy is not necessarily to give final answers, but to critically examine conceptual frameworks. With that said, Cartesian dualism was criticised and dismantled so thoroughly we can dismiss it as a “final answer”. It’s just tech people keep smuggling it back through the AI backdoor.
The point being that hominid-bodies in a certain natural environment that had metal available for mining, had the material needs for stronger tools, and the social development of distributed labour of mining, logistics and so on, and the practical imagination: these all dynamically playing off together developed blacksmithing. That is intelligence in action. Strong AI needs to achieve something like this. There was no such thing as blacksmithing, after this there is such a thing. Replicating this after the fact is a different question. That would be weak AI.
If you peel off the modern computer lingo, in every paragraph you’ve written something that was thoroughly dismantled by some silly philosopher a hundred years ago. Again, I think we are going in circles, just read some theory (even Lenin has written a book about this). Or don’t read, whatever.
Yes, even when the ontological framework required to operate modern machinery is explained to you, when modern tools are explained to you, all you can do is harrumph and cross your arms and insist some guy thought about thinking stuff a whole lot in a time before light bulbs and he does not agree with the limits of modern engineering concepts or tools that he never heard of and could never have imagined at all. Despite your insistence that you’re grounding your position in material things, it’s actually just pure idealism, ignoring the question of what modern tools materially do in favor of waxing poetic about life and experiences, repeatedly redirecting away from “well, yes, an environment and experiences are a materially important aspect of growing a construct that can abstractly model that environment and operate within it” in favor of “oh no I mean the history, the living-ness of it all, why 300 years ago so and so said…” in a way that shows you fundamentally don’t understand what we’re even talking about here.
Like this fundamentally is not a philosophical question at all, it’s an engineering one; for all that we’re arguing over the acceptable ontology or rhetoric to describe it this is fundamentally about a material object performing material tasks and how processes related to building such material objects and abstracting its parts and tasks into usable forms that are then reapplied mechanically to make it materially do things work and must be conceived of. We’re talking about a machine having the capacity to abstractly model systems that it interacts with, which dynamically acquires and recalls information that informs this. We’re not talking about building a synthetic man who must arrive at his modern state through going through the whole historical context firsthand, we’re not talking about this machine springing a priori with none of the long material and historical context that has brought us to this point, we’re talking about a funny little machine made of funny little material parts that already exist, that we already have, doing a fuzzy little abstract task that there is no reason at all to believe it cannot do.
Like even the idea of “oh the machine must dynamically do synthesis to arrive at novel conclusions” is, I hate to break it to you, already here. That is fundamentally what machine learning systems do to generate things that aren’t rote parts of their training data, and they’re doing it without intelligence or consciousness. They synthetically recombine extant bits of their “knowledge” to make new forms of it, whether they do it well in any given case or not, which is fascinating and disturbing. What is needed is a more holistic and in-depth framework for this sort of technology to operate within so that this can start being reapplied and revised and tested internally to reach a more dynamic state of abstract modeling like what living creatures do, instead of the static mimicries we have now.
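As a toy illustration of that “recombining extant bits” point (a deliberately tiny sketch, nothing like a real model): even a bigram chain trained on a handful of sentences can emit sequences that never appeared verbatim in its training data, purely by statistically stitching learned fragments together, with no understanding involved.

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Record, for each word, every word that followed it in training."""
    model = defaultdict(list)
    for sentence in corpus:
        for a, b in zip(sentence, sentence[1:]):
            model[a].append(b)
    return model

def generate(model, start, length, rng):
    """Walk the chain: each step recombines learned word-to-word links."""
    out = [start]
    while len(out) < length and model.get(out[-1]):
        out.append(rng.choice(model[out[-1]]))
    return out

corpus = [
    ["the", "dog", "runs", "fast"],
    ["the", "cat", "runs", "away"],
]
model = train_bigrams(corpus)
# "the dog runs away" is a possible output here: a novel sentence
# stitched from fragments of two different training sentences
sample = generate(model, "the", 4, random.Random(0))
```

Scale that mechanism up by many orders of magnitude and you get the static mimicry described above; what it lacks is any internal loop for testing and revising its own models.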
This is actually a perfect encapsulation of the problem here. I keep using it as a convenient touchstone to talk about how we think about disciplines and actions, especially because the actual activity involved is so viscerally material, and you keep redirecting away from that to talk about the history and the context of that history, and while the points you’re drawing on aren’t wrong they’re an unrelated deflection. We’re not talking about inventing blacksmithing a priori, we’re talking about how varied it is, how many methods and tools and contexts it can exist in now that it does exist! We’re talking about how a blacksmith’s hammer can be driven by muscle or pressurized machinery and still be the same fundamental discipline put to the same purpose. Yes the social dynamics of bronze age mines are fascinating and an important historical thing, of course they are, and yes some guy with a modern tools making beautifully crafted cooking knives in his garage as an artistic rather than purely pragmatic endeavor is still attached to that context for however far removed from it he seems, but none of this is relevant to what we’re talking about.
It’s also doubly relevant because it’s a physical trade with its own framework for understanding how the component bits work, for breaking down the properties of metals and the stages of how to shape and process them, and this conversation is also like a blacksmith talking about how to shape a bit of metal towards a desired result and being met with “um, actually, you cannot shape the metal without living as a bronze age slave miner, you cannot fathom the metal without being a long range trader carrying it between palace economies, you cannot give it its purpose and its form without dying on a medieval battlefield as a peasant beneath the sword of a knight! This is the context which has given you this art, the truths from which the trade springs, the needs for which it exists! It is not possible to swing a hammer without them!” And it’s like, ok, cool, that’s not relevant to a practical discussion of how you physically do the thing at all and not what this is about.
“Some dude centuries ago just thinking about stuff, based exclusively on his own extremely limited material experiences, based on vibes-based interpretations of what people are, definitely had a better take on the capacities of modern technology and modern engineering than someone living in the context of things and tools that an old philosopher could only vaguely imagine in the most abstract and vibes-based way if at all.”
Yeah philosophy is a deeply unserious discipline overall and this is why. Marx did real, valuable work on economics and political theory that has proven correct, that is not a validation of philosophers’ idle musings on what makes the consciousness of living things tick or what it is mechanically.
I have to come back and address this because it’s so nonsensical. You understand why technical professionals who work with extremely complex and varied machines through human-readable translation layers use those abstract higher level ways of talking about the actions the machine is doing, right? Like the material purpose served by turning an inscrutable series of low level numbers that is different across different machines into one unified framework of tasks that can be talked about, that can be spoken, and that can be generically reapplied to make all these different machines perform the same task? How a field that revolves around breaking down the extremely high level, abstract concepts that you think of as some sort of natural material truths into smaller bits that while still abstract can actually be mechanically applied instead of existing as discrete things produces a different perspective and ontology, and how the framework is manifestly correct and useful for these tasks as evidenced by the fact that it materially works and yields real, practical results?
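To give one concrete (and admittedly trivial) example of such a translation layer: in Python, the standard `dis` module lets you inspect the low-level opcodes that one particular interpreter happens to use for a high-level statement. The programmer works entirely in the abstract layer, while the same source maps to different low-level forms across machines and interpreter versions.

```python
import dis

def greet(name):
    # one abstract, human-level action: join two strings
    return "hello " + name

# the opcode sequence below is an implementation detail; it varies
# across interpreter versions while the abstract action stays the same
ops = [ins.opname for ins in dis.get_instructions(greet)]
print(ops)
```

The high-level description (“concatenate two strings”) is what gets talked about, taught, and reused; the opcode list is only interesting when you need to know how one particular machine makes it happen.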
You understand how this framework applies to actions in general the same way, how we talk about writing or speaking in terms of the high-level action being performed and not through the specific forms of the nerves and cells and personal histories of the person performing that action? How this isn’t at all in conflict with understanding that there are mechanical things happening to facilitate this, that there is a history and context and reasons for all of these things to exist?
Going “harrumph harrumph five hundred years ago we already had people saying magical souls were not seizing upon bodies and working them like puppets, but rather the soul is a manifest property of the body and springs forth from it!” isn’t “disproving” the discipline of making funny little machines do tasks just because the concept of “so instead of dealing with the messy bullshit at the lowest levels, I just tell it what to do in more generic and usable terms” is aesthetically similar to “who cares what meat does, it’s the magic soul stuff that has inhabited it!” despite being manifestly different in every way.
Like we’re not talking about a “soul/mind vs body” but rather like someone reading a book or following verbal instructions, about learning generic sets of high level actions to perform.
Now these last two replies are just reddit-tier word-salad
My patience wanes when I perceive the worst kind of tech people anti-intellectualism, like calling philosophy “vibes based” … soo fucking dumb
You want to have your cake, and eat it too: using the conceptual framework of representationalist dualism, while simultaneously trying to shield it from criticism, saying that criticising it would be non-pragmatic, unlike engineering! You use words like “idealism” you clearly don’t understand.
I was talking about strong AI since the start, and you keep oscillating between strong and weak AI. Or you always talked about weak AI, I’m not even sure, you are so incoherent. Again, I replied to someone else about strong AI.
I reiterate that any discipline, even hard science and practical sciences like engineering, uses conceptual frameworks that are worth criticizing.
Do you get it? You are trying to use philosophy at all points to argue in favor of your ideas, and whenever those philosophical ideas get scrutinized you simply flee back saying that criticizing it would be just silly magical vibes based thinking. It’s a bad faith argument, if I’ve ever seen one.
Saying dialectical materialism is irrelevant because of 21st century software engineering is the same as saying that “Marxism got disproven by bitcoin”. It’s just stupid.
Since we are on hexbear and not on a programming subreddit, do some self-crit and do the reading, because I’ve lost my patience in correcting your very obvious and intellectually dishonest “misunderstandings”: