I didn’t think I’d sit and watch this whole thing, but it is a very interesting conversation. Near the end the author says something like: “I know people in the industry who work in these labs who act like they’ve seen a ghost. They come home from work and struggle with the work they do and ask me what they should do. I tell them they should quit then, and then they stop asking me for advice.”
I do wonder at times whether we would even believe a whistleblower, should one come to light, telling us about the kinds of things they do behind closed doors. We only get to see the marketable end product: the one no one can figure out exactly how it does what it does. We don’t get to see the things on the cutting room floor.
Also, it’s true: these models are more accurately described as grown, not built. Which in a way is a strange thing to consider, because we understand what it means to build something and to grow something. You can grow something without understanding how it grows. You can’t build something without understanding how you built it.
And when you are trying to control how things grow, you sometimes get things you didn’t intend to get, even if you also got the things you did intend to get.


Hold up a second: that is something we’re inferring[1], based on our senses, and the entirety of how we can conceive of it must necessarily be an abstraction that exists in our brains. Now it’s a good inference, obviously, because it is a logically necessary thing that must exist in order for us to be abstractly modeling our understanding of what we’re doing in the first place, but we are still talking about a framework for describing this abstraction of what we can infer is happening.
I’ve realized as I’ve been writing this all out how heavily my understanding and use of object-oriented programming informs the framework with which I try to articulate things. So perhaps I should clarify: when I talk about abstractions or things being abstract, it is because I see the fuzzier, higher-level understandings of them as semantically distinct from the actual bits that make them materially work. The high-level abstract version is more generic and more useful to talk and think about, while the low-level material stuff is only relevant when you need to go in and look at why and how something functions in a particular instance.

In that sense (and to keep with the blacksmithing touchstone), something like the action of a hammer striking hot iron is an abstract action: it is a thing that accomplishes a purpose and which we can talk about and conceptualize conveniently, whereas its literal material truth could take on a myriad of different forms based on the technique, the types of tools being used, how those physically work, what the composition of the iron is, where the iron came from, what the desired end shape is, etc. So it’s both a real material action and process, and also an abstraction used to catch all possible variations thereof.
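Since I keep leaning on the programming framing, here’s a minimal toy sketch of what I mean (every name here is mine, purely illustrative, not any real system):

```python
# The abstract action vs. its material implementations: code written
# against the abstraction neither knows nor cares which form it takes.
from abc import ABC, abstractmethod

class HammerStrike(ABC):
    """The abstract action: shaping hot iron by striking it."""
    @abstractmethod
    def strike(self, iron: str) -> str: ...

class HandHammer(HammerStrike):
    # Driven by muscle and sinew.
    def strike(self, iron: str) -> str:
        return f"{iron}, shaped by hand"

class PowerHammer(HammerStrike):
    # Driven by pneumatic machinery; the same abstract action.
    def strike(self, iron: str) -> str:
        return f"{iron}, shaped by machine"

def forge(tool: HammerStrike) -> str:
    # Only the abstraction matters at this level.
    return tool.strike("hot iron")

print(forge(HandHammer()))   # hot iron, shaped by hand
print(forge(PowerHammer()))  # hot iron, shaped by machine
```

Both calls are “a hammer strike” in every sense that matters at this level; the material differences only become relevant when you drop down into a particular implementation.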
This I take issue with, because there are things that are not conscious that do that, and with things that we recognize as conscious we know that both senses and internal models can be massively detached from material reality. So you have unthinking things that react dynamically within the world they exist in, sometimes more accurately than similarly scoped living things we’d still recognize as sentient, and we have consciousnesses that effectively exist while entirely detached from the world they materially inhabit.
So consciousness is something that interacts with fallible inputs and fallible outputs, and it is not required in order for something to receive inputs and respond with actions.
I used it more to set a scope than to talk about literally replicating a high-fidelity synthetic dog. A literal dog has a whole host of things that are very important for a fuzzy social animal running around the woods trying to survive, but which aren’t specifically required for the bits where it’s learning or producing an abstract model of the world around it to guide how it interacts with things; beyond informing what it needs to survive and providing it with tools to use, they don’t bear on the modeling itself. You wouldn’t need to replicate all the functions of life to get the same degree of social and environmental modeling.
I do feel that we’re sort of talking past each other here, because I do agree with the structural need for some kind of interplay with senses, real or simulated, and motivation systems to provide feedback. It’s just that I don’t see “having the capacity for abstract modeling and learning” as being predicated on a perfect mimicry of extant life existing within a natural context. The hammer strike is a hammer strike regardless of whether it is driven by muscle and sinew or pneumatic/hydraulic machinery, regardless of whether it is an essential task producing a useful tool or an artistic endeavor producing something purely decorative or anywhere in between, and regardless of whether it is the result of a long tradition rooted in historical material contexts or of someone trying to figure it out with no experience or guidance beyond knowing that blacksmithing is a thing. It remains something that falls into this abstract category of actions that involve hitting a bit of metal with another for the purpose of shaping it.
Consciousness/sentience seemed like something incredible and ineffable for most of history, built of wondrous parts that we couldn’t dream of replicating, and yet suddenly we have machines that can mimic senses, maintain abstract simulations we design for them, store and retrieve vast amounts of data, follow logical rote processes, and now even fuzzily produce new machines that perform novel abstract actions, provided a proper training regimen is set up. Materially, we have all the component pieces we need to make something that’s at least dimly conscious, if limited in scope, and it does not need to be a wholly faithful synthetic recreation of an extant conscious being to do this.
Sure, it has a long historical context serving very real material needs, during which time it was pretty much exclusively the domain of raw human labor and muscle. Now it’s largely removed from that context and exists more as a craft for a certain sort of artist, while new industrial processes have largely supplanted the old blacksmithing trade, and they use machines that reduce the strain on their bodies and speed up their work now too. We still consider it blacksmithing though, removed from its historical context and traditional methods or not.
That’s my point: we can take a big, complex, abstract set of actions and transfer them out of the material context they emerged in, we can change the component pieces around, and we can still end up with the same functions regardless of how they’ve been transformed in both context and form. We might lose things we valued about the original form in the process, of course, and it can be transformed in other ways too. But the action of “doing blacksmithing to make a thing from metal” is not stuck to the specific historical contexts and methods in which people learned it and relied on it for essential tools, it does not become lost without them, and it can be reverse engineered from fragments of the knowledge that went into it as a trade (regardless of whether such attempts are worse than proper education in established methods).
I used to read philosophy when I was younger, including on this topic. I didn’t find it useful or memorable, and there certainly weren’t any answers to be found in it.
In contrast, I have my answer to this matter, which I have articulated as best I can and backed up with examples of the reasoning underpinning it. Now, am I correct? Who knows; everything we have to go off of is speculation. I could be wrong, but I’m holding on to my bet until we materially see how it pans out. Half my argument isn’t even about a potential method to achieve a machine with dynamic abstract modeling, just about how we should categorize one that displays a specific set of properties, and about defining a line beyond which I would personally be confident in applying the label of “conscious” to it.
Sort of. They’re like a horribly overgrown and overspecialized version of the bit of our language processing brain goop that handles predicting incoming text or spoken language to facilitate understanding it better. They’re interesting and in some limited ways useful, and a stripped down one could probably work pretty well as part of the interface of a more holistic system, but we’re in complete agreement that they’re not intelligent nor are they a pathway to any sort of conscious machine.
Correctly inferring, I’d say. I categorically reject all the “how can we believe our lying eyes?” wanking that some philosophers have gotten very wrapped up in and been bothered by. ↩︎
We are at an impasse. In basically every paragraph you keep smuggling the old dualism back in and saying it’s all fine.
I’m not even sure what your goal is in engaging with me. Originally I replied to another user who said:
“My problem with the tech people who are saying they’re replicating consciousness is that until we can actually explain consciousness they’re essentially expecting me to believe that they stumbled onto the solution while creating it.”
I’m not sure what you’ve read, but you clearly betray your unfamiliarity with modern philosophy post-1650: Spinoza, Hegel, Marx, Dewey, Santayana, Husserl, Heidegger, Wittgenstein, Merleau-Ponty… all would take issue with how you conceptualise this problem.
It seems to me that you are unfamiliar with Marxism, pragmatism, and phenomenology (you clearly didn’t understand my brief explication of intentionality). This is fine, no one should be forced to read philosophy. But dismissing it without understanding it is… well… supercilious.
The role of philosophy is not necessarily to give final answers, but to critically examine conceptual frameworks. With that said, Cartesian dualism was criticised and dismantled so thoroughly we can dismiss it as a “final answer”. It’s just tech people keep smuggling it back through the AI backdoor.
The point being that hominid bodies in a certain natural environment, one that had metal available for mining, together with the material need for stronger tools, the social development of distributed labour (mining, logistics, and so on), and the practical imagination, all dynamically playing off one another, developed blacksmithing. That is intelligence in action. Strong AI needs to achieve something like this. There was no such thing as blacksmithing; after this, there is such a thing. Replicating it after the fact is a different question. That would be weak AI.
If you peel off the modern computer lingo, in every paragraph you’ve written something that was thoroughly dismantled by some silly philosopher a hundred years ago. Again, I think we are going in circles; just read some theory (even Lenin wrote a book about this). Or don’t read it, whatever.
Yes, even when the ontological framework required to operate modern machinery is explained to you, when modern tools are explained to you, all you can do is harrumph and cross your arms and insist that some guy who thought about thinking a whole lot, in a time before light bulbs, does not agree with the limits of modern engineering concepts or tools that he never heard of and could never have imagined. Despite your insistence that you’re grounding your position in material things, it’s actually just pure idealism: ignoring the question of what modern tools materially do in favor of waxing poetic about life and experiences, repeatedly redirecting away from “well, yes, an environment and experiences are a materially important aspect of growing a construct that can abstractly model that environment and operate within it” toward “oh no, I mean the history, the living-ness of it all, why, 300 years ago so-and-so said…” in a way that shows you fundamentally don’t understand what we’re even talking about here.
This fundamentally is not a philosophical question at all; it’s an engineering one. For all that we’re arguing over the acceptable ontology or rhetoric to describe it, this is about a material object performing material tasks, and about how the processes of building such objects (abstracting their parts and tasks into usable forms that are then mechanically reapplied to make them materially do things) actually work and must be conceived of. We’re talking about a machine having the capacity to abstractly model systems that it interacts with, one which dynamically acquires and recalls information that informs this. We’re not talking about building a synthetic man who must arrive at his modern state by living through the whole historical context firsthand, and we’re not talking about this machine springing up a priori with none of the long material and historical context that has brought us to this point. We’re talking about a funny little machine made of funny little material parts that already exist, that we already have, doing a fuzzy little abstract task that there is no reason at all to believe it cannot do.
Even the idea of “oh, the machine must dynamically perform synthesis to arrive at novel conclusions” is, I hate to break it to you, already here. That is fundamentally what machine learning systems do to generate things that aren’t rote parts of their training data, and they do it without intelligence or consciousness. They synthetically recombine extant bits of their “knowledge” to make new forms of it, whether they do it well in any given case or not, which is both fascinating and disturbing. What is needed is a more holistic and in-depth framework for this sort of technology to operate within, so that this capacity can be reapplied, revised, and tested internally to reach a more dynamic state of abstract modeling like what living creatures do, instead of the static mimicries we have now.
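To be concrete about the weakest possible version of “recombination without intelligence” (a toy of my own construction, nothing like how any production model actually works): even a tiny bigram Markov chain will emit sentences that appear nowhere in its training data.

```python
# Novel output from nothing but recombined statistics: no modeling,
# no understanding, no consciousness, just counted word transitions.
import random
from collections import defaultdict

corpus = ("the smith strikes the iron . the smith shapes the blade . "
          "the iron takes the shape").split()

# Record which word follows which in the training data.
transitions = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    transitions[a].append(b)

def generate(start: str, length: int = 8) -> str:
    word, out = start, [start]
    for _ in range(length):
        if word not in transitions:
            break
        word = random.choice(transitions[word])  # pure chance, no thought
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the smith strikes the blade . the iron takes"
```

A sentence like “the smith strikes the blade” never occurs in the corpus; it’s a synthetic recombination of extant bits, produced by something that obviously isn’t thinking.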
This is actually a perfect encapsulation of the problem here. I keep using it as a convenient touchstone for how we think about disciplines and actions, especially because the actual activity involved is so viscerally material, and you keep redirecting away from that to talk about the history and the context of that history. The points you’re drawing on aren’t wrong, but they’re an unrelated deflection. We’re not talking about inventing blacksmithing a priori; we’re talking about how varied it is, how many methods and tools and contexts it can exist in now that it does exist! We’re talking about how a blacksmith’s hammer can be driven by muscle or pressurized machinery and still be the same fundamental discipline put to the same purpose. Yes, the social dynamics of bronze age mines are fascinating and historically important, of course they are, and yes, some guy with modern tools making beautifully crafted cooking knives in his garage as an artistic rather than purely pragmatic endeavor is still attached to that context however far removed from it he seems, but none of this is relevant to what we’re talking about.
It’s also doubly relevant because it’s a physical trade with its own framework for understanding how the component bits work, for breaking down the properties of metals and the stages of how to shape and process them, and this conversation is also like a blacksmith talking about how to shape a bit of metal towards a desired result and being met with “um, actually, you cannot shape the metal without living as a bronze age slave miner, you cannot fathom the metal without being a long range trader carrying it between palace economies, you cannot give it its purpose and its form without dying on a medieval battlefield as a peasant beneath the sword of a knight! This is the context which has given you this art, the truths from which the trade springs, the needs for which it exists! It is not possible to swing a hammer without them!” And it’s like, ok, cool, that’s not relevant to a practical discussion of how you physically do the thing at all and not what this is about.
“Some dude centuries ago just thinking about stuff, based exclusively on his own extremely limited material experiences, based on vibes-based interpretations of what people are, definitely had a better take on the capacities of modern technology and modern engineering than someone living in the context of things and tools that an old philosopher could only vaguely imagine in the most abstract and vibes-based way if at all.”
Yeah, philosophy is a deeply unserious discipline overall, and this is why. Marx did real, valuable work on economics and political theory that has proven correct, but that is not a validation of philosophers’ idle musings on what makes the consciousness of living things tick or what it is mechanically.
I have to come back and address this because it’s so nonsensical. You understand why technical professionals who work with extremely complex and varied machines through human-readable translation layers use those abstract, higher-level ways of talking about the actions the machine is doing, right? The material purpose served by turning an inscrutable series of low-level numbers, different across different machines, into one unified framework of tasks that can be talked about, that can be spoken, and that can be generically reapplied to make all these different machines perform the same task? The whole field revolves around breaking down the extremely high-level, abstract concepts that you treat as some sort of natural material truths into smaller bits that, while still abstract, can actually be mechanically applied instead of existing as discrete things. That produces a different perspective and ontology, and the framework is manifestly correct and useful for these tasks, as evidenced by the fact that it materially works and yields real, practical results.
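If you want the smallest possible demonstration of that translation-layer point (a toy example of my own, nothing authoritative): the same one-line action exists simultaneously as a speakable abstraction and as low-level bytes that nobody writes by hand, and the abstraction is the only useful register to discuss it in.

```python
# One abstract, portable action...
import dis

def add(a, b):
    # The high-level action we actually talk and reason about.
    return a + b

# ...and the low-level form underneath it, which differs across
# interpreter versions and machines.
dis.dis(add)                 # the human-readable translation layer
print(add.__code__.co_code)  # the raw bytes themselves
```

Nobody discusses “adding two numbers” by reciting those bytes, and that isn’t dualism; it’s just the correct level of description for the task.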
You understand how this framework applies to actions in general the same way, how we talk about writing or speaking in terms of the high-level action being performed and not through the specific forms of the nerves and cells and personal histories of the person performing that action? How this isn’t at all in conflict with understanding that there are mechanical things happening to facilitate this, that there is a history and context and reasons for all of these things to exist?
Going “harrumph harrumph five hundred years ago we already had people saying magical souls were not seizing upon bodies and working them like puppets, but rather the soul is a manifest property of the body and springs forth from it!” isn’t “disproving” the discipline of making funny little machines do tasks just because the concept of “so instead of dealing with the messy bullshit at the lowest levels, I just tell it what to do in more generic and usable terms” is aesthetically similar to “who cares what meat does, it’s the magic soul stuff that has inhabited it!” despite being manifestly different in every way.
We’re not talking about a “soul/mind vs. body” split, but rather about something like someone reading a book or following verbal instructions: about learning generic sets of high-level actions to perform.
Now these last two replies are just reddit-tier word salad.
My patience wanes when I perceive the worst kind of tech people anti-intellectualism, like calling philosophy “vibes based” … soo fucking dumb
You want to have your cake, and eat it too: using the conceptual framework of representationalist dualism, while simultaneously trying to shield it from criticism, saying that criticising it would be non-pragmatic, unlike engineering! You use words like “idealism” you clearly don’t understand.
I was talking about strong AI since the start, and you keep oscillating between strong and weak AI. Or you always talked about weak AI, I’m not even sure, you are so incoherent. Again, I replied to someone else about strong AI.
I reiterate that any discipline, even hard science and practical sciences like engineering, uses conceptual frameworks that are worth criticizing.
Do you get it? You are trying to use philosophy at all points to argue in favor of your ideas, and whenever those philosophical ideas get scrutinized you simply flee back, saying that criticizing them would be just silly, magical, vibes-based thinking. It’s a bad-faith argument if I’ve ever seen one.
Saying dialectical materialism is irrelevant because of 21st century software engineering is the same as saying that “Marxism got disproven by bitcoin”. It’s just stupid.
Since we are on hexbear and not on a programming subreddit, do some self-crit and do the reading, because I’ve lost my patience with correcting your very obvious and intellectually dishonest “misunderstandings”:
Everything you have said is reminiscent of nothing so much as reddit neolib econ undergrads trying to appeal to neoclassical economics, or POLSCI majors trying to insist that “populism” is a real thing and that the american civic cult is super serious and normal.
I have clearly articulated my position, repeatedly. I talk about how machines work and how the methods used to build them abstract them down into usable bits, and you go off on unhinged ramblings about 17th century arguments over what a “soul” is, declaring modern engineering disciplines invalid because the concept of a soul was correctly deemed not credible long ago. You keep making these random non sequitur appeals to comparatively ancient speculation about how thinking works and then acting like this refutes an entirely different discipline, one working with tools that even the relevant modern experts didn’t expect to do the sorts of weird, novel shit they’re already doing.
That framework is flawed to begin with. We already have weak AI doing weird shit that it was assumed would require strong AI: language-processing performance that, despite the systems lacking any intelligence at all and being purely static things, passes what philosophers thought would be a defining line of true intelligence. These static, unchanging blocks of data exhibit the ability to synthesize novel concepts from components they already contain, despite having no sort of internal modeling or process to revise and create these things. That’s completely fucked and not at all in line with what anyone expected from them, you see that, right? 20 years ago, 10 years ago, that was a nonsense idea; the notion that we’d have advanced chatbots that pass the Turing test while still being completely unintelligent, unreliable, and lacking any sort of conscious reasoning model would have been absurd.
And you want to take the stance of “well, centuries ago so-and-so said this about sentience and the conscious mind, and if we look at people through a materialist lens we see they’re the product of this and that” as some kind of authoritative line for what can be done with the tools at hand? You really think that’s a more relevant framework than the ones developed for modern tools, the ones that built these modern tools and that manifestly apply to how they work and what they can do? You want to talk about anti-intellectualism; have some self-awareness.
The idea, then, that there’s some sort of special dividing line that makes a “strong” AI, and which is inscrutable and ineffable, is as fundamentally wrong as the belief that the terrible static chatbots have already passed such a line. As a categorization method it’s maybe not the worst distinction to make, but it has neither been passed nor is it as weird and special as it’s been made out to be.
I’ve already made my argument for where that distinction would be, and for why I don’t think reaching it requires tools or contemplations of existence that don’t exist, but rather just a better application of what does exist: getting to something that is doing the sorts of internal modeling and dynamic learning that we associate with consciousness, rather than just being a shitty static mimicry of some of the structures of a brain.
Yes, yes, everything to do with ontology is technically philosophy. You know very well that it’s not the concept of having definitions and frameworks for things that I’m dismissing when I reject, as obviously flawed or irrelevant, endless idle musings about what consciousness is that contradict modern methodologies, or when I dismiss these random tangents about how things that exist are defined and created by the contexts they emerged from, as if that were a coherent refutation of a machine emerging from the context of someone building it.
You keep weaving in things that are true but irrelevant and misrepresenting them as refutations of unrelated things.
You know damn well that’s not what I’m saying when I dismiss this smug “oh, software engineering, harrumph, that smacks of cartesian dualism, and we settled that whole thing ages ago!” nonsense. Like we’re talking about software, about how software works, and you stick your nose up because you think the abstractions involved in operating real, material machines that do real, material things are “idealist” because they remind you of unrelated ancient philosophical doctrines you read someone dunk on once? Are you going to dismiss linguistics as idealist next? What about books? “Oh an abstract bunch of ideas operated with your meat eyes, that smacks of cartesian dualism! Is this paper your soul? No, of course not, the whole notion of abstracting material things into ‘text’ is impossible as all our actions and states are the emergent property of our forms, and as the book is not within our forms it must be some perfidious soul demon trying to puppet us, an idea we have dismissed!”
I swear you’re like a parody of philosophers getting lost in some archaic weeds.
lolmao what a meltdown
Recommending further reading is not an appeal to authority, don’t be silly; it’s just acknowledging the constraints of the medium (forum comments)… even Ilyenkov and Lefebvre needed 300 pages to explain this, and they are pretty good at explaining it.
Believe it or not, there’s also historical materialist writing on this topic. I recommend the following:
Now stop seething and do the reading, don’t be a lib, you are on a Marxist lemmy instance
Like you, commenting on a discipline you don’t know the first thing about, and then smugly trying to appeal to unrelated philosophy no matter how many times the actual subject is explained to you. I was entirely too generous with my original mockery of your position.
why are you looking at me like this:
Man I just got back from the vet with my probably-dying cat, so fine, sure, whatever, you have successfully debunked the discipline that materially facilitates this conversation by smugly appealing to the ontology of guys who didn’t know what “lightbulbs” are. You win.
Give yourself a pat on the back for it, yeah?