Literally just mainlining marketing material straight into whatever’s left of their rotting brains.

  • stigsbandit34z [they/them]@hexbear.net · 22 points · 1 year ago

    I’m no philosopher, but a lot of these questions seem very epistemological and not much different from religious ones (e.g., what changes if we determine that life is a simulation?). Like, they’re definitely fun questions, but I just don’t see how they’ll be answered given how much is unknown. We’re talking “how did we get here” type stuff.

    I’m not so much concerned with that aspect as I am with the fact that it’s a powerful technology that will be used to oppress shrug-outta-hecks

    • WholeEnchilada [he/him]@hexbear.net · 11 points · 1 year ago (edited)

      Actually, yeah, you’re on it. These questions are epistemological. They’re also phenomenological. Tests of AI are as much about seeing how it responds and reacts as they are about what it is. It’s silly. When it comes to AI right now, existence is measured by reaction, to see whether it’s imitating human intelligence. I’m pretty sure “I react, therefore I am” was never coined by any great old philosopher. So, what can we learn from your observation? Nobody knows anything. Or at least, the supposed geniuses who make and test AI believe that reaction measures intelligence.

    • Nevoic@lemm.ee · 3 points · 1 year ago

      Yeah, capitalists will use unreliable tech to replace workers. Even if GPT-4 is the end-all (there’s no indication that it is), it would still displace tons of workers and just result in both worse products for everyone and a worse, more competitive labor market.

      • stigsbandit34z [they/them]@hexbear.net · 4 points · 1 year ago

        You seem to be getting some mixed replies, but I feel like I know what you’ve been trying to convey with most of your comments.

        A lot of people have been dismissing LLMs as pure marketing hype (and they very well could be), but that doesn’t change the fact that companies will eventually decide they can be integrated into other business processes once they reach an “acceptable” error rate. They really are just statistical models at the end of the day. Right now, no C-suite executive worth their salt would let something like GPT write emails, craft reports, or generate code/scripts, because there’s bound to be some nuance it can’t quite grasp. Pragmatically, I view it the same way as scrap on an assembly line, but we all know damn well that algorithms could perform a CEO’s role just as well as any other computer-based job (I haven’t really thought about how this tech will be used with robotics, but I’m sure there are implications there too).
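
        To illustrate what I mean by “statistical models”: strip away everything else and an LLM is a machine for predicting the next token from learned probabilities. Here’s a toy sketch in Python (a made-up bigram table, nothing remotely like a real transformer, just the underlying idea):

        ```python
        import random

        # Hypothetical toy counts standing in for learned statistics.
        bigram_counts = {
            "the": {"cat": 3, "dog": 1},
            "cat": {"sat": 2, "ran": 1},
        }

        def next_token(prev):
            # Sample the next token in proportion to how often it followed `prev`.
            counts = bigram_counts.get(prev, {"<eos>": 1})
            return random.choices(list(counts), weights=list(counts.values()))[0]

        print(next_token("the"))  # "cat" roughly 75% of the time, "dog" the rest
        ```

        Scale that basic move up by a few hundred billion parameters and you get something that can write a plausible email, and also something with no mechanism that guarantees it won’t get the nuance wrong.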

        This topic has been deeply fascinating to me ever since I took an intro cognitive science class on a whim in college lol, which is why I have many thoughts (some of which are probably kinda dumb, admittedly).

        This also just coincides sooooo well considering the fact that I’m just about to finish Bullshit Jobs and recently read a line where Graeber describes the internet (an LLM’s training set) as “a repository of almost all of human knowledge and cultural achievement.”