In its submission to the Australian government’s review of the regulatory framework around AI, Google said that copyright law should be altered to allow for generative AI systems to scrape the internet.

  • BlameThePeacock@lemmy.ca · 1 year ago

    Jackson Pollock didn’t create paintings; Jackson Pollock’s art was storytelling and showmanship.

    Yes, in order to learn a spoken language you have to have heard it. However, languages evolve over time. You develop regional accents and dialects. All of the UK speaks English but no two towns speak the same way.

    Just like different models have their own patterns of writing…

    You’re thinking about LLMs as if they’re equivalent to multiple people (or groups of people), but each LLM is equivalent to a single person. The training and resulting function of each one is as distinct as an individual human’s.

    I could raise one of my children to perform the exact same functions as an LLM or art creation tool. Give them exactly the same image/text sets that these models are trained on, and have them practice for a decade or two. Then I could tell them “Hey I need a picture of an orange rabbit riding a bike” and they could draw me one, or write a story about the same topic. There’s clearly no copyright infringement in that process, so why would it be different for creating a machine to do the same thing?

    • Phanatik@kbin.social · 1 year ago

      An LLM or art creation tool is barely comparable to one person. The difference between a child and an art creation tool is that you could show a child a single picture of a bunny, a bike and a carrot, then ask them to draw an orange bunny riding a bike, and they could draw something resembling that. An art bot would require hundreds to thousands of images of each object to understand what it is before it can even make a reasonable attempt. The level of training required isn’t even comparable.

      At least the child’s drawing will have some personality in it; every output from an art bot ends up looking soulless. The reason is simple: an art bot only imitates what it’s been trained on, while an artist draws on inspiration before applying the two things an art bot will never have: intent and purpose.

      • BlameThePeacock@lemmy.ca · 1 year ago

        You’re missing the training even a child has received to reach the state where they could do that. If you raised a child to age five completely alone in an empty room, they wouldn’t be able to draw anything at all, let alone something based on pictures. Drawing a variation on a bunny from a picture requires that they learn and practice fine motor skills, and that they have an understanding of animals.

        Humans get literally 150,000+ hours of training time before we even let them try to become an adult.

        • Phanatik@kbin.social · 1 year ago

          Sure, but a child’s training isn’t an algorithm computing probabilities. Children don’t express themselves based purely on environment; on one side you have nature, on the other you have nurture.

          An example:
          The FBI’s studies into serial killers found that these people, even though they were influenced by their environment to become what they are, respond to external stimuli in an abnormal way, which is what leads them down that path to begin with.

          A child learns how language and creativity are expressed before attempting to express themselves. These bots aren’t built to handle that kind of expression because, at their core, they are statistical models: they look at a sentence as a series of variables and determine what comes next. The sentence itself could be nonsensical, but the bot doesn’t know that; it’s using the probabilities it was trained on to construct the sentence.

          You might say bots have their own way of expressing themselves, but I would say that’s something we’re projecting onto the bot rather than something it’s demonstrating itself. I’m sure it’s very cute when it apologises for making a mistake, but that apology isn’t sincere; it’s been programmed to respond that way when it thinks you’re pointing out its mistakes. It’s merely imitating remorse rather than displaying actual remorse.
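
The “statistical model” point in the thread above (picking what comes next purely from learned probabilities) can be sketched as a toy next-word sampler. The bigram counts below are invented for illustration; a real LLM learns vastly richer probabilities over tokens from its training data:

```python
import random

# Invented bigram counts standing in for an LLM's learned probabilities:
# how often each word followed the previous one in some training text.
counts = {
    "the": {"cat": 3, "dog": 1},
    "cat": {"sat": 2, "ran": 2},
    "dog": {"ran": 4},
    "sat": {"down": 4},
    "ran": {"away": 4},
}

def next_word(word, rng):
    """Sample the next word in proportion to how often it followed `word`."""
    options = counts[word]
    return rng.choices(list(options), weights=list(options.values()))[0]

def generate(start, length, seed=0):
    """Build a sentence word by word; each choice is driven only by counts."""
    rng = random.Random(seed)
    out = [start]
    while len(out) <= length and out[-1] in counts:
        out.append(next_word(out[-1], rng))
    return " ".join(out)

print(generate("the", 4))  # e.g. "the cat sat down" or "the dog ran away"
```

Whether output like this counts as “expression” is exactly the disagreement above: the sampler produces fluent-looking sentences without any intent behind any individual choice.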