• ColeSloth · 10 months ago

    You’re looking at it in a flawed manner. AI has already been making up sources and names to state things as facts. If there are a hundred websites claiming the earth is flat and you ask an AI whether the earth is flat, it may tell you it is and cite those websites. It’s already happening. Now imagine more opinionated matters than hard, observable scientific facts. Imagine a government using AI to shape opinion and claim there was no form of insurrection on Jan 6th. Thousands of websites and comments could quickly be fabricated to confirm it was all made up, burying the truth in obscurity.

    • lloram239@feddit.de · 10 months ago

      You have plenty of literature that can act as ground truth. This isn’t a terribly hard problem to solve; it just requires actually focusing on it, which so far simply hasn’t been done. ChatGPT is just the first “look, this can generate text.” It was never meant to do anything useful by itself or stick to the truth; all of that still has to be developed. ChatGPT simply demonstrates that LLMs can process natural language really well. It’s the first step, not the last.

      • ColeSloth · 10 months ago

        Sounds like you’re arguing against yourself now.