• nothacking · 23 points · 1 year ago

    If a user asks ChatGPT to summarize a copyrighted book, it will do so.

    So will a human. Let’s stop extending copyright law. Also, how do you know it read the book, and not a summary of it, of which there are loads on the internet?

    • SpaceToast@mander.xyz · 18 points · 1 year ago

      This is why I am pro AI art. It’s no different than a human taking inspiration from other work.

      Nobody comes up with anything truly original. It’s all inspired by someone before them.

    • Dominic@beehaw.org · 7 points · 1 year ago

      Also, how do you know it read the book, and not a summary of it, of which there are loads on the internet?

      In the case of ChatGPT, it’s hard to tell. OpenAI won’t even reveal what their training dataset was.

      Researchers have done some tests to tease this out, and they’re pretty confident that it has read quite a few books and memorized them verbatim. See one of my favorite papers in a while, Speak, Memory: An Archaeology of Books Known to ChatGPT/GPT-4.

      • nothacking · 1 point · 1 year ago

        Reading the paper: even for the best-known books, it guessed a masked name in a passage correctly more than seventy percent of the time on only 5 of the over 500 books tested. On The Fellowship of the Ring it got barely over 50%, and that’s hardly a little-known book. These LLMs are definitely familiar with the content, but I wouldn’t call that memorizing verbatim. (Humans are also reasonably good at this after reading a book.)
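        The paper’s name-cloze setup can be sketched roughly like this — a minimal illustration, not the authors’ actual harness, and the passage and names below are made up for the example:

        ```python
        def make_name_cloze(passage: str, name: str) -> str:
            """Mask one occurrence of a character name, as in the paper's
            name-cloze task; the model must then guess the masked name."""
            return passage.replace(name, "[MASK]", 1)

        def accuracy(guesses: list[str], answers: list[str]) -> float:
            """Fraction of exact-match guesses across many passages --
            the kind of per-book score the paper reports."""
            correct = sum(g == a for g, a in zip(guesses, answers))
            return correct / len(answers)

        # Made-up example passage; a model that memorized the text
        # should recover the masked name from context alone.
        print(make_name_cloze("Then Frodo came forward.", "Frodo"))
        # -> Then [MASK] came forward.
        print(accuracy(["Frodo", "Sam"], ["Frodo", "Pippin"]))
        # -> 0.5  (the ~50% regime described above)
        ```

        A book sits in the paper’s top tier only when scores like this stay above ~0.7 across hundreds of masked passages.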

    • Fauxreigner@beehaw.org · 6 points · 1 year ago

      Beyond that, it’ll try to summarize a book, but it often can’t do so successfully, although it will act as if it has. Give it a try on something even a little obscure and it can’t really give you good information. I tried with Blindsight, which isn’t in the popular culture, but was a Hugo nominee, so it’s not completely obscure. It knew who the characters were and had a general sense of the tone, but it completely fabricated every major plot point I asked about. I tried the same with A Head Full of Ghosts, which is better known but still not something everyone has read, with the same result.

      One thing I found that’s really fun is to ask it a question and then follow up with something like “Are you sure about that?” It’ll almost always correct itself and make up something else. It’ll go one step further and incorporate details you ask about: give it a prompt like “Are you sure this character died of natural causes? I thought they were killed by Bob” and it will very frequently agree and invent a story along those lines that’s plausible within the text. It doesn’t work on really popular stuff (you can’t convince it that Optimus Prime saves Luke Skywalker in RotJ), but for anything even slightly less well known it will tell you details it’s making up whole cloth, with complete confidence.
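      That follow-up trick is easy to script. A minimal sketch, assuming you wrap whatever chat backend you use in an `ask(history)` callable — the function and names here are my own invention, not any particular API:

      ```python
      def pressure_test(ask, question, challenge="Are you sure about that?"):
          """Ask a question, then push back once; a model that is
          confabulating will often flip its answer under pressure."""
          history = [{"role": "user", "content": question}]
          first = ask(history)
          history += [{"role": "assistant", "content": first},
                      {"role": "user", "content": challenge}]
          second = ask(history)
          return first, second, first != second  # True = answer changed

      # Toy backend that always "corrects" itself, to show the shape:
      flaky = lambda history: f"answer #{len(history)}"
      first, second, flipped = pressure_test(flaky, "How does Blindsight end?")
      ```

      A model that actually knows the text should return `flipped == False` on repeated challenges; the behavior described above is the opposite.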

      • nothacking · 2 points · 1 year ago

        Another highly amusing thing to do is to ask it about nonexistent chemicals or antenna types (try “inverted tripole” or “dinitrogen azide”). It always generates plausible but incorrect answers: eloquent bullshit.