I didn’t think I’d sit and watch this whole thing, but it is a very interesting conversation. Near the end the author says something like “I know people in the industry who work in these labs who act like they’ve seen a ghost. They come home from work and struggle with the work they do and ask me what they should do. I tell them they should quit, and then they stop asking me for advice.”

I do wonder at times whether we would even believe a whistleblower, should one come to light, telling us about the kinds of things they do behind closed doors. We only get to see the marketable end product, the one no one can quite explain how it does what it does. We don’t get to see the things on the cutting room floor.

Also, it’s true. These models are more accurately described as grown, not built. Which in a way is a strange thing to consider, because we understand what it means to build something and what it means to grow something. You can grow something without understanding how it grows. You can’t build something without understanding how you built it.

And when you are trying to control how things grow, you sometimes get things you didn’t intend to get, even if you also got the things you did intend to get.

  • KobaCumTribute [she/her]@hexbear.net · 2 months ago
    Edit: sorry, I somehow mixed up which reply I was responding to. I didn’t click through and assumed this was about a sort of glib reply I’d given to someone else.

    My whole point is that I think conceptually a super intelligence would have to be able not only to think but also to feel.

    Feeling is a subcategory of thinking that’s just sort of arbitrarily distinguished from reason by rhetorical traditions that try to create this dichotomy between the thoughts that you can sort of round towards formal logic systems by phrasing them the right way and the thoughts that are more intuitive and linked to psychosomatic sensations like the visceral pain that sorrow causes. This gets dressed up and treated as some divine and profound thing, and I think that’s very silly and fundamentally flawed.

    That is to say, thoughts do not become irrational or become something other than thoughts just because they’re linked to an instinctive reward/punishment feedback system. For example, grief is neither irrational nor profound simply because it causes physical agony, and something doesn’t need a throat or chest to feel a painful pit in to appreciate or be averse to loss.

    This philosophical approach of “what would make a synthetic human? Surely it must be this collection of things we value, things we poetically relate to our experience” is, I think, just as fundamentally off base as the techbro approach of thinking their giant speak-and-spell bot just needs one more data center, one more trillion dollar investment, before it becomes an alien superintelligent NHP who will judge us all for sins and virtues that it makes up through some inscrutable alien logic.

    In the abstract a system is what it does, and we consider conscious plenty of things that are not at all capable of valuing or comprehending all the flowery things we value. So if a system does the things we associate with conscious life, things like dynamic learning and recall within itself, like producing some kind of abstract model of the world and of the things around it so it can operate within that framework, even if that system is not very good at doing those things, even if it is not particularly smart, is it not “conscious”?

    To put it another way, is a dog conscious within your ontology? If not, would that change if we somehow crammed a speech center into its brain so it could articulate its experiences? Do you earnestly believe a machine cannot be made that is on the level with even a very silly dog? What if such a silly little robot got given a speech center so it could articulate the way it saw its world?

    I don’t draw the line of what becomes “conscious” at the level of “synthetic human with a fully formed human experience” or “inscrutable super intelligent NHP”, merely at the boundary of having functions we associate with sentient life, which I think is a whole lot less mystical and inscrutable than people here are rhetorically making it out to be.

    • darkmode [comrade/them]@hexbear.net · 2 months ago
      I’m not personally stating that it is mystical and inscrutable, but I am asserting that we need to be a lot better at computers in order to simulate the behavior of an organism accurately. By and large I am in agreement with you.

      I think the disconnect is where you drive the argument: my original comment did not mystify the human experience. I was saying that the physical elements of feeling that tie back to the other (just to wrap up what we both have been saying) “systems of consciousness” are, in my view, just as important to mathematically model in a computer program in order for a machine to really achieve what we view as intelligence, not just in ourselves but in other organisms.

      I think I take such a hard stance not only because CS as a field isn’t really dedicated to exploring this aspect of AI, but also because part of me enjoys the spiritual and mysterious element of being “alive”.