I didn’t think I’d sit and watch this whole thing but it is a very interesting conversation. Near the end the author says something like “I know people in the industry who work in these labs who act like they’ve seen a ghost. They come home from work and struggle with the work they do and ask me what they should do. I tell them they should quit then, and then they stop asking me for advice.”
I do wonder at times if we would even believe a whistleblower should one come to light, telling us about the kind of things they do behind closed doors. We only get to see the marketable end product. The one no one can figure out how it does what it does exactly. We don’t get to see the things on the cutting room floor.
Also, it’s true. These models are more accurately described as grown, not built. Which in a way is a strange thing to consider, because we understand what it means to build something and to grow something. You can grow something without understanding how it grows. You can’t build something without understanding how you built it.
And when you are trying to control how things grow, you sometimes get things you didn’t intend to get, even if you got the things you did intend to get.


Now these last two replies are just reddit-tier word-salad
My patience wanes when I perceive the worst kind of tech-person anti-intellectualism, like calling philosophy “vibes based”… so fucking dumb
You want to have your cake, and eat it too: using the conceptual framework of representationalist dualism, while simultaneously trying to shield it from criticism, saying that criticising it would be non-pragmatic, unlike engineering! You use words like “idealism” you clearly don’t understand.
I was talking about strong AI since the start, and you keep oscillating between strong and weak AI. Or you always talked about weak AI, I’m not even sure, you are so incoherent. Again, I replied to someone else about strong AI.
I reiterate that any discipline, even hard science and practical sciences like engineering, uses conceptual frameworks that are worth criticizing.
Do you get it? You are trying to use philosophy at all points to argue in favor of your ideas, and whenever those philosophical ideas get scrutinized you simply flee back, saying that criticizing them would be just silly magical vibes-based thinking. It’s a bad-faith argument if I’ve ever seen one.
Saying dialectical materialism is irrelevant because of 21st century software engineering is the same as saying that “Marxism got disproven by bitcoin”. It’s just stupid.
Since we are on hexbear and not on a programming subreddit, do some self-crit and do the reading, because I’ve lost my patience in correcting your very obvious and intellectually dishonest “misunderstandings”:
Everything you have said is reminiscent of nothing so much as reddit neolib econ undergrads trying to appeal to neoclassical economics, or POLSCI majors trying to insist that “populism” is a real thing and that the american civic cult is super serious and normal.
I have clearly articulated my position, repeatedly. I talk about how machines work and how the methods used to build them abstract them down into usable bits, and you go off on unhinged ramblings about 17th century arguments over what a “soul” is, declaring modern engineering disciplines invalid because the concept of a soul was correctly deemed not credible long ago. You are applying these random non sequitur appeals to comparatively ancient speculation about how thinking works and then acting like this is a refutation of an entirely different discipline working with tools that even relevant modern experts didn’t expect to do the sorts of weird, novel shit they’re doing already.
That framework is flawed to begin with. We already have weak AI doing weird shit that was assumed to require strong AI, exhibiting language-processing performance that, despite lacking any intelligence at all and being a purely static thing, passes what philosophers thought would be a defining line of true intelligence. These static, unchanging blocks of data exhibit the ability to synthesize novel concepts from components they already contain, despite having no sort of internal modeling or process to revise and create these things. Like that’s completely fucked and not at all in line with what anyone expected from them, you see that right? 20 years ago, 10 years ago, that was a nonsense idea; the notion that we’d have advanced chatbots that pass the Turing test while still being completely unintelligent and unreliable and lacking any sort of conscious reasoning model would have been absurd.
And you want to take the stance of “well centuries ago so and so said this about sentience and the conscious mind, and if we look at people through a materialist lens we see they’re the product of this and that” as some kind of authoritative line for what can be done with the tools at hand? You really think that that’s a more relevant framework than the ones developed for modern tools, that built these modern tools, that manifestly apply to how these tools work and what they can do? You want to talk about anti-intellectualism, have some self-awareness.
The idea then that there’s some sort of special dividing line that makes a “strong” AI and which is inscrutable and ineffable is as fundamentally wrong as the people who think the terrible static chatbots have passed such a line. As a categorization method it’s maybe not the worst distinction to make, but it’s neither been passed nor is it as weird and special as it’s been made out to be.
I’ve already made my argument for where that distinction would be, and why I don’t think it requires tools or contemplations of existence that don’t exist, but rather just a better application of what does exist, to reach the point of something that is doing the sorts of internal modeling and dynamic learning that we associate with consciousness rather than just being a shitty static mimicry of some of the structures of a brain.
Yes, yes, everything to do with ontology is technically philosophy, you know very well that it’s not the concept of having definitions and frameworks for things that I’m dismissing here when I reject endless idle musings about what consciousness is that contradict modern methodologies as being obviously flawed or irrelevant, or when I dismiss these random tangents about how things that exist are defined and created by the contexts they emerged from, as if that’s a coherent refutation of a machine emerging from the context of someone building it.
You keep weaving in things that are true but irrelevant and misrepresenting them as refutations of unrelated things.
You know damn well that’s not what I’m saying when I dismiss this smug “oh, software engineering, harrumph, that smacks of Cartesian dualism, and we settled that whole thing ages ago!” nonsense. Like we’re talking about software, about how software works, and you stick your nose up because you think the abstractions involved in operating real, material machines that do real, material things are “idealist” because they remind you of unrelated ancient philosophical doctrines you read someone dunk on once? Are you going to dismiss linguistics as idealist next? What about books? “Oh, an abstract bunch of ideas operated with your meat eyes, that smacks of Cartesian dualism! Is this paper your soul? No, of course not, the whole notion of abstracting material things into ‘text’ is impossible, as all our actions and states are the emergent property of our forms, and as the book is not within our forms it must be some perfidious soul demon trying to puppet us, an idea we have dismissed!”
I swear you’re like a parody of philosophers getting lost in some archaic weeds.
lolmao what a meltdown
Recommending further reading is not an appeal to authority, don’t be silly, it’s just acknowledging the constraints of the medium (forum comments)… even Ilyenkov and Lefebvre needed 300 pages to explain this, and they are pretty good at explaining it.
Believe it or not, there’s also historical materialist writing on this topic. I recommend the following:
Now stop seething and do the reading, don’t be a lib, you are on a Marxist lemmy instance
Like you, commenting on a discipline you don’t know the first thing about, and then smugly trying to appeal to unrelated philosophy no matter how many times the actual subject is explained to you. I was entirely too generous with my original mockery of your position.
why are you looking at me like this:
Man I just got back from the vet with my probably-dying cat, so fine, sure, whatever, you have successfully debunked the discipline that materially facilitates this conversation by smugly appealing to the ontology of guys who didn’t know what “lightbulbs” are. You win.
Give yourself a good time for it, yeah?
Sorry about your cat, I hope they get better, and you have a few more years together.
We were just talking, okay?
I’ve debated this often, and I’ve read about it a lot. I actually also work with this kind of stuff. Doesn’t matter. Winning is not the point. All this online talk doesn’t really matter. Be with your cat. I know I love mine.
Thanks, and I’m sorry for being snarky.