I didn’t think I’d sit and watch this whole thing, but it is a very interesting conversation. Near the end the author says something like: “I know people in the industry who work in these labs who act like they’ve seen a ghost. They come home from work, struggle with the work they do, and ask me what they should do. I tell them they should quit then, and then they stop asking me for advice.”

I do wonder at times if we would even believe a whistleblower, should one come to light, telling us about the kinds of things they do behind closed doors. We only get to see the marketable end product, the one no one can figure out how it does what it does exactly. We don’t get to see the things on the cutting room floor.

Also, it’s true: these models are more accurately described as grown, not built. Which in a way is a strange thing to consider, because we understand what it means to build something and to grow something. You can grow something without understanding how it grows. You can’t build something without understanding how you built it.

And when you are trying to control how things grow, you sometimes get things you didn’t intend to get, even if you also got the things you did intend to get.

  • EnsignRedshirt [he/him]@hexbear.net · 60 points · 2 months ago

    Hank Green, bless his heart, is being a credulous rube about AI. He’s approaching the issue with more skepticism than the AI boosters, but he’s still stuck in the mentality that the experts in the field must know something he doesn’t. That’s a good instinct to have, but he’s showing his ass by reviewing a book written by Eliezer Yudkowsky.

    Yudkowsky is not an expert. He is just some guy. He’s not an engineer, or academic, or even an AI startup founder. He’s a terminally-online weirdo who created an identity around writing pure speculation about imaginary AI technology. All of his “work” is based on extrapolating consequences from unfounded assumptions. He writes science fiction, but because the fiction is useful to people like Peter Thiel, it’s being treated as serious academic work, and as a result, people like Hank Green feel the need to treat it like it’s worthy of consideration.

    Everything Yudkowsky does should be taken as seriously as the work of Ayn Rand or L. Ron Hubbard. AI doomerism is useful to AI boosters to obfuscate the reality that superintelligence, as they define it, is not imminent or even inevitable. The current crop of AI tools is not, and will never evolve into, something that we would consider to be intelligent or sentient or conscious, and that’s hard for some people to grasp given that a trillion dollars has been invested in pretending that that’s what’s happening.

    Artificial intelligence, in a true sense, is certainly possible. We, as humans, have intelligence. Our intelligence is emergent from the properties of the universe, and so there must be some way to replicate or even surpass that intelligence. We aren’t special. That being said, just because we aren’t special doesn’t mean we have the means to surpass our own intelligence. We may never have the capability to build a machine that can surpass us. If it’s possible to do so, we don’t know how to do it, and no technology we have today is on track to do it.

    Hank Green is a liberal idealist. For all that he has a science background, he has too much faith in mainstream institutions. He, himself, says that people should “stay in their lanes” when it comes to having opinions about complex topics, but he seems to spend very little effort questioning whether the people claiming to be experts in the AI field are actually qualified to be experts. Yudkowsky is as qualified to talk about AI as RFK Jr. is to talk about vaccines, or Jordan Peterson is to talk about gender identity. If Hank had done any research at all into the authors of the book, he would have noticed that Yudkowsky has no academic credentials, nor any professional experience, and that his credibility in the field of AI comes entirely from private, for-profit concerns with a vested interest in pushing the narrative that the current state of AI is a revolutionary technology with potentially existential consequences for humanity. If he were as smart as he wants people to think he is, he would have taken one look at the authors of this dumb book and discounted it as garbage without reading it.

    AI isn’t scary. It’s not going to end the world by accident. The harm it could potentially do to humanity is already happening, with people going insane talking to AI chatbots like they’re conscious beings, having epiphanies because the sycophantic robots are telling them exactly what they want to hear. That’s the danger of AI, that it makes everyone stupid and insane, not that it’ll become Skynet and take over all the nuclear weapons.

    • LeninWeave [none/use name, any]@hexbear.net · 22 points · 2 months ago

      He writes science fiction

      Most famously, he writes neoliberal Harry Potter fanfiction.

      Everything Yudkowsky does should be taken as seriously as the work of Ayn Rand or L. Ron Hubbard.

      Extremely generous.

    • insurgentrat [she/her, it/its]@hexbear.net · 16 points · edited · 2 months ago

      Someone promote this ensign, and get him a different coloured shirt…

      These things are ordinary, their harms are ordinary. Wasted resources, polluted information streams, scams, exploitation, unregulated exposure to vulnerable people.

      Even if we make a machine that does something like thinking, there is no particular reason to believe it would be some sort of Skynet. Ants do something like thinking, we do something like thinking, orange cats do something like thinking. It seems like there’s a huge spectrum of something like thinking, often with limited domains, and with lots of room for stupidity. It’s just not particularly plausible that a machine would be able to usefully modify itself in such a way as to create a runaway feedback cycle, nor that a sort of intelligence that conveys godlike powers of charisma and foresight is possible, even ignoring practical concerns like manipulating the physical world.

    • darkmode [comrade/them]@hexbear.net · 13 points · 2 months ago

      really appreciate this. If this excellent explanation wasn’t convincing, consider this:

      computers, in the most vulgar terms, do what we tell them to. They are physical devices, operating with electrical signals. In order to manipulate them in such a manner that they model human intelligence, we would have to concretely understand every aspect of our minds and bodies. Seriously, down to the most minute detail, we would first need to be able to plainly describe in code every physical mechanism in our bodies in order for a computer to operate as an “intelligence”. I don’t know anything about neuroscience, or any of the fields that are studying the human body, but I know enough to confidently state that the current tech cannot accurately model the psychosomatic aspects of memory and decision making that are vital to the human experience. We would have to simulate physical sensations in a proper “AI”.

      • KobaCumTribute [she/her]@hexbear.net · 16 points · 2 months ago

        I don’t think that’s fundamentally correct. The fundamental dividing line isn’t replicating neurotransmitters and a perfect mimicry of evolved physical structures, but rather it’s a sort of persistence and continuity within a system that can dynamically model scenarios and store, recover, and revise data in a useful manner to facilitate this.

        Current models do not and cannot do this, instead just trying to bake it all into this predictive black box thing that gets fired off over and over by an outside script and then just spits out text or pictures or a sound file. They’re each a dead end in and of themselves, for all that they could theoretically be attached to a more holistic and complete AI to serve narrow, specific functions for it (e.g. a small LLM quantifying text into a form it can understand, and converting its actual output into something human readable). We don’t need a perfect model of how to do all this in squishy meat goop to make a machine that’s close enough to conscious to be treated as such, we just need a better approach than these shitty pre-baked black box chatbot models to do it.

        • LeninWeave [none/use name, any]@hexbear.net · 13 points · 2 months ago

          My problem with the tech people who are saying they’re replicating consciousness is that until we can actually explain consciousness they’re essentially expecting me to believe that they stumbled onto the solution while creating it.

          Tech people can’t prove it’s possible to make a conscious program, they can’t describe how consciousness works in forms it already exists in, they can’t even define consciousness. How can they expect me to believe them when they claim they’re close to replicating it?

          • Kieselguhr [none/use name]@hexbear.net · 9 points · 2 months ago

            My problem with the tech people who are saying they’re replicating consciousness is that until we can actually explain consciousness they’re essentially expecting me to believe that they stumbled onto the solution while creating it.

            Consciousness is intimately connected to life, as in we are embodied beings moving around in the world driven by biological needs. In Marxist terms consciousness emerges from the dialectical movement of a specific body and the (social+natural) environment. Unless you are replicating this dialectic, you are not replicating any consciousness whatsoever.

            This is not solely Marxist either, cf. Merleau-Ponty, or this Wittgenstein quote:

            The human body is the best picture of the human soul.

            Saying you can program consciousness with mathematical representations is pure Cartesian dualist idealism.

            Of course a good sophist now would say that I’m conflating consciousness with intelligence, and indeed I am. One is dependent on the other.

            • KobaCumTribute [she/her]@hexbear.net · 1 point · 2 months ago

              “Consciousness is intrinsically a property of mushy goop, a magic soul thing made of magic!” - very serious philosophers who should be taken seriously for thinking real hard all day centuries ago

              • Kieselguhr [none/use name]@hexbear.net · 4 points · 2 months ago

                All due respect, comrade, you should get acquainted with a body of thought called dialectical materialism.

                “Consciousness is intrinsically a property of mushy goop, a magic soul thing made of magic!”

                total misrepresentation of what I’ve said, I suggest you read it again:

                life, as in we are embodied beings moving around in the world driven by biological needs

                Consciousness starts from biology, in other words the very opposite of a supernatural quality of things. It’s the natural aspect of living matter as it moves around and does things in the material world. This is as radically materialist as it gets. Explaining consciousness from “mental representations” is actually starting from idealism.

                very serious philosophers who should be taken seriously

                The point is this:

                1. You can’t do science without some kind of conceptual framework
                2. The conceptual framework of computationalism is Cartesian mind-body dualism (something that’s a couple of centuries older than Marx), even though now it’s worded as hardware-software dualism

                Some readings for you:

                Evald Ilyenkov - Dialectical Logic

                Feliks Mikhailov - The Riddle of the Self - Dreams of the Kurshskaya Sand Bar

                Or if you insist on something non-Marxist from the 21st century:

                Mind in Life by Evan Thompson

                • KobaCumTribute [she/her]@hexbear.net · 2 points · 2 months ago

                  Consciousness starts from biology, in other words the very opposite of a supernatural quality of things. It’s the natural aspect of living matter as it moves around and does things in the material world. This is as radically materialist as it gets. Explaining consciousness from “mental representations” is actually starting from idealism.

                  The very concept of “consciousness” is inherently abstract and idealist. It’s looking at the emergent behavior of a complex system and constructing an ontology to describe what it’s doing. It is a bunch of fuzzy little made up words we use to describe a thing we see happening.

                  My stance is to fundamentally reject the mystical and essentialist takes on this and simply declare that something may as well be considered conscious when it becomes a complex system that does a set of criteria we associate with sentient life, and that it does not need to come from a complete and total mimicry of the existing systems that naturally arrived at a thing that does that. Fetishizing the specific forms that have yielded it in the past would be like holding up something like “blacksmithing” as the emergent property of muscles and sinew driving a heavy metal thing at some heat-softened other metal thing that’s on a rigid surface and rejecting the idea that a hammer could be driven by any other method (like the sorts of mechanical presses many modern blacksmiths use in lieu of or in support of a hammer and anvil) to accomplish the same goal.

                  It’s the natural aspect of living matter as it moves around and does things in the material world.

                  I want to narrow in on this, though, because I actually agree with the point of this sort of abstract internal modeling being the result of a system that’s operating within some kind of space that is either material or a mimicry of it. I just disagree that it essentially requires holistic organic life to start learning dynamically and producing abstract models to work with. We fundamentally have machines that can do the hard mechanical parts of this already (the senses, data storage, and dynamic responses); what is lacking is a methodology for squishing this all together, although we also have machines that can do that sort of fuzzy data- and process-squishing stuff already too.

                  This isn’t some kind of intractable problem, it’s one of approach and focus. Machine learning has been hyperfocused on text processing because that was comparatively easy (because there are massive, massive amounts of text around), because language is a comparatively simple system, and because there are actual immediate applications to even primitive machine learning as far as trying to pull data out of text or transform text in some way goes, so even before it became a funny speak-and-spell bot that could outsmart your average executive despite being a glorified magic-8-ball, there were material reasons to study and develop it. I think that’s a dead end, albeit one that’s yielded some potentially useful technology, but it’s not a dead end on the grounds that machines can’t ever do thinking stuff.

                  To reframe this a little, we both agree that it is implausible for a human to sit down and design a machine human (or even a template for one to emerge given training) because designing requires simulating in a sense and containing a simulation of itself within itself is silly and infeasible, right? Well, what about something simpler? Could a human design a mechanical rodent brain (or template thereof)? A mechanical dog brain? Does such a mimicry require a full simulation of neurochemistry, or a replication of the functions that these things perform? Does it require a holistic replication of every extant function or just the ones related to processing and modeling the world around it?

                  My stance is that we can do a whole lot of good-enough abstract replications of the way living things interact with their surroundings that if fed into a correctly designed fuzzy black box data squishing machine it would start doing the sort of sentient dynamism we associate with living things in a fashion we describe as something being conscious, even if it would not be a synthetic person nor necessarily be easily scaled up to that level. We could probably still cram an artificial speech center onto it so it could articulate its experiences, though, even if it wasn’t particularly smart.

          • KobaCumTribute [she/her]@hexbear.net · 4 points · 2 months ago

            until we can actually explain consciousness

            At what point does one explain consciousness? Like what level of detail do you need in an ontological model or whatever you want to call it to say you’ve explained it?

            We’re looking at an abstract action, or state of doing an action, that’s made up of many component abstractions that are each made up of more abstractions, and so on all the way down to the squishy thinking goop stuff that’s doing something mechanical. We apply this abstract concept of consciousness to basically everything with a nervous system above a very small level, and there are arguments over how low that should be. At the same time, we can only confidently, subjectively declare our own consciousness while merely trusting that someone else is being both truthful and accurate when declaring their own. That fact has driven a whole lot of fancy lads with too much education and too much free time half mad with existential horror, despite the clear answer to such crippling ontological uncertainty being that it doesn’t matter and is a very silly question in the first place, since it’s all just ontology that we define: consciousness isn’t a thing, it’s an abstract concept that we apply at our discretion.

            We can also look at other similar abstract actions like seeing, moving, flying, etc for points of comparison with how we should approach this ontology: like even before we understood what, physically speaking, an eye did we could understand sight, and the earliest cameras predate the sort of molecular biology needed to really delve into how and why eyes do the stuff they do. Similarly, flying machines predated a thorough understanding of how birds or insects could fly, and also predated a thorough understanding of aerodynamics. Wheeled ground locomotion was also achieved thousands of years before we could understand muscles or optimized strides and whatnot.

            This is a much more complicated sort of topic than those, certainly, so I’m not at all suggesting “oh you know carts were easy, we didn’t have to understand what muscles chemically were for that, so this should be as simple as sticking two round things to a stick and putting a box on it!” because that’s silly, I only want to point out how we consistently reach the point of mimicking the abstractly understood action in practical terms long before we understand how the naturally evolved form works in depth.

            So where do we actually pass the threshold for what we would consider conscious when we’re talking about a machine that’s mimicking the various abstract processes that make up what we call consciousness when we’re looking at each other or a dog or a bird or whatever and saying “that, right there, I’m pretty sure that’s conscious and has an internal existence and thinks even if it’s not all that good at planning and can’t talk in our funny complex words”?

            I’m personally drawing that line at when those combined abstract systems add up to something that maintains a level of persistent, ongoing thought with internal modeling and the rapid, dynamic acquisition and use of new information in a fashion that isn’t just some ten thousand word text buffer or clever outside scripted heuristic for flagging stuff that seems relevant and mixing them back into its prompt when appropriate, something that learns and develops its schema as it operates rather than being a static thing that you sometimes replace with a new version of itself with more baked-in “knowledge”. I don’t think that line requires a comprehensive understanding of the molecular biology and chemistry of neurons or a ground-up simulation thereof to pass. I also don’t believe it requires a holistic mimicry of all the non-thinking related bits like making a heart beat, intuiting inner ear nerve stuff to balance good, knowing when one is hungry and intuiting what one needs to eat to address a nutritional deficiency, etc.

            I do think it’s probably at least somewhat deeper in the hierarchy of abstract things that do specific stuff than the extremely shallow approach that tech bros are doing where they’re basically saying “smart things what think talk good, we will make the biggest and most talking good machine ever and it will be our god as the smartest among us!” which is complete nonsense and clearly failing.

        • darkmode [comrade/them]@hexbear.net · 4 points · 2 months ago

          You put some things in there that I didn’t exactly say, or that at least belittle what I was trying to convey.

          My whole point is that I think conceptually a super intelligence would have to be able not only to think but also to feel.

          • KobaCumTribute [she/her]@hexbear.net · 4 points · edited · 2 months ago

            Edit: sorry, I somehow mixed up which reply I was responding to. I didn’t click through and assumed this was about a sort of glib reply I’d given to someone else.

            My whole point is that I think conceptually a super intelligence would have to be able not only to think but also to feel.

            Feeling is a subcategory of thinking that’s just sort of arbitrarily distinguished from reason by rhetorical traditions that try to create this dichotomy between the thoughts that you can sort of round towards formal logic systems by phrasing them the right way and the thoughts that are more intuitive and linked to psychosomatic sensations like the visceral pain that sorrow causes. This gets dressed up and treated as some divine and profound thing, and I think that’s very silly and fundamentally flawed.

            That is to say, thoughts do not become irrational or become something other than thoughts just because they’re linked to an instinctive reward/punishment feedback system. For example, grief is neither irrational nor profound simply because it causes physical agony, and something doesn’t need a throat or chest to feel a painful pit in to appreciate or be averse to loss.

            This philosophical approach of “what would make a synthetic human? Surely it must be this collection of things we value, things we poetically relate to our experience” is, I think, fundamentally just as off base as the techbro approach of thinking their giant speak-and-spell bot just needs one more data center, one more trillion-dollar investment, before it becomes an alien superintelligent NHP who will judge us all for sins and virtues that it makes up through some inscrutable alien logic.

            In the abstract a system is what it does, and we consider conscious things that are not at all capable of valuing or comprehending all the flowery things we value. So if a system does the things we associate with conscious life, things like dynamic learning and recall within itself, like producing some kind of abstract model of the world and of things around it to be able to operate within that framework, even if that system is not very good at doing those things, even if it is not particularly smart, is it not “conscious”?

            To put it another way, is a dog conscious within your ontology? If not, would that change if we somehow crammed a speech center into its brain so it could articulate its experiences? Do you earnestly believe a machine cannot be made that is on the level with even a very silly dog? What if such a silly little robot got given a speech center so it could articulate the way it saw its world?

            I don’t draw the line of what becomes “conscious” at a level of “synthetic human with a fully formed human experience” or “inscrutable super intelligent NHP”, merely at the boundary of having functions we associate with sentient life which I think is a whole lot less mystical and inscrutable than people here are rhetorically making it out to be.

            • darkmode [comrade/them]@hexbear.net · 2 points · 2 months ago

              I’m not, personally, stating that it is mystical and inscrutable, but I am asserting that we need to be a lot better at computers in order to simulate the behavior of an organism accurately. By and large I am in agreement with you.

              I think the disconnect is where you drive the argument: in my original comment I did not mystify the human experience. I was saying that the physical elements of feeling that are tied back to the other “systems of consciousness” (just to wrap up what we both have been saying) are, in my view, just as important to mathematically model in a computer program in order for a machine to really achieve what we view as intelligence, not just in ourselves but in other organisms.

              I think I take such a hard stance not only because CS as a field isn’t really dedicated to exploring this aspect of AI, but also because part of me enjoys the spiritual and mysterious element of being “alive”.

        • Wheaties [she/her]@hexbear.net · 4 points · 2 months ago

          While persistent modeling and predicting is a very important part of what we do, that alone may not be enough to form a conscious mind. We still don’t know what makes us conscious. The popular thinking is that our minds are fundamentally no different from a computer, that the phenomenon emerges out of some specific organization at some threshold of complexity. But. It’s only popular because we are surrounded by computers. It would be convenient.

          Personally, I’m all-in on as-yet undefined quantum effects within microtubules (part of a cell’s cytoskeleton). I can say that because I’m a moron and it doesn’t matter what I think. But to me, a total fucking layman, it looks like the most interesting point of active research.

      • Le_Wokisme [they/them, undecided]@hexbear.net · 3 points · 2 months ago

        in principle we could accidentally make a non-human intelligence, because our way (or even animals’ way) of having brains made of meat and electrochemistry probably isn’t the only formulation for it, but the odds of that are incomprehensibly small and LLMs aren’t any more of a precursor to that than any other tech thing we have.

        • darkmode [comrade/them]@hexbear.net · 1 point · 2 months ago

          I couched that whole comment on the human thing because the current sales pitch is that this stuff is replacing people. Could more powerful computers and some kind of theoretical algorithm surpass LLMs? Why not? But I don’t think that animals operate based on pure statistics and linear algebra.

          • Le_Wokisme [they/them, undecided]@hexbear.net · 1 point · 2 months ago

            sorry i meant that a machine intelligence doesn’t have to be a copy of us, which means we don’t strictly need a complete understanding of our own minds to create one, not that we could make a robot animal with pure math.

    • Keld [he/him, any]@hexbear.net · 10 points · 2 months ago

      Yudkowsky is not an expert. He is just some guy. He’s not an engineer, or academic, or even an AI startup founder.

      He is not even a high school graduate. It feels important to remember that given the cult of credentialism and purported technocracy built up around him.