• ourob · 7 months ago

    A far more likely scenario is that they have been overstating what the software can do and how much room for progress remains with current methods.

    AI has blown up so fast, with so much hype, that I’m very skeptical. I’ve seen what it can do, and it’s impressive compared to past machine learning algorithms. But it does play on the human tendency to anthropomorphize things.

    • Unaware7013@kbin.social · 7 months ago

      I’ve not been super stoked on AI, specifically because of my track record using it. Maybe it’s my use case (primarily technical/programming/CLI questions that I haven’t been able to answer myself), or maybe my prompts aren’t suited for AI assistance, but my dozens of interactions with the various AI bots (Bard, Bing, GPT-3/3.5) have been disappointing to say the least. I’ve never gotten a correct answer, rarely been given correct syntax, and they frequently just repeat answers I’ve already said are incorrect and/or just don’t work.

      AI has been nothing more than a disappointment to me.

    • NounsAndWords@lemmy.world · 7 months ago

      From what I understand, he was fired by the non-profit board of the company, and it’s the investors and money people who want him back. It sounds like the opposite: the people making this tech are becoming concerned about what is about to start happening with it.

      Experts from different companies have been saying AGI within a decade, and that the current issues all seem solvable.

      • kirklennon@kbin.social · 7 months ago

        Experts from different companies have been saying AGI within a decade

        AGI has been five to ten years away for decades.

          • kirklennon@kbin.social · 7 months ago

            I was actually thinking the same thing when I wrote it, but I think we may finally be getting somewhat close to that. Still, I don’t think we’re even remotely close to discussing AGI outside of pure science fiction. LLMs have made us appear deceptively close; they can spit out sentences that look like stuff people write, but we haven’t moved even marginally closer to true comprehension, which would be required for actual AGI.

            • NounsAndWords@lemmy.world · 7 months ago

              I was about to respond with pretty much the top half of what you said. But I think an early sign of AGI will be how we start splitting hairs about what “counts,” and the number of things we were “supposed” to always be better at keeps changing with each new advance.

              In ten years I don’t think we will have clear, unquestionable Artificial General Intelligence, but I think there will be some people trying to explain that yes, the model can act and respond exactly as a human would in the same circumstances, but it’s not really thinking or feeling anything. I certainly don’t think the AI we’re playing with in ten years will be based primarily on text prediction, but there are still so many different routes being explored in this field that it sure doesn’t feel like a real plateau yet. Maybe I’ll change my mind when GPT-5 is only marginally more capable than GPT-4.