Hail Satan.

Kbin
Sharkey

Using Mbin as a backup to my main Kbin account due to tech issues on Kbin.social. May either switch to this one permanently or abandon it, depending on how Kbin’s development goes. All my active fedi accounts are linked.

  • 4 Posts
  • 454 Comments
Joined 4 months ago
Cake day: March 4th, 2024


  • The transcripts won’t be released unless they’re leaked. Giving out any details about the minor he was chatting with risks exposing the victim, who is possibly still a minor. Releasing the transcripts would be an incredibly damaging move, and not to Beahm; people would almost certainly doxx the kid immediately, possibly putting them at even greater risk of harm than they would have been in to begin with.

    We don’t need to see them, anyway. We’re not involved, and we have nothing to gain from reading the details. If his own admission to having inappropriate conversations with minors isn’t evidence enough to convince you one way or the other, then I really don’t see how reading a sext thread with a child would make much of a difference.



  • I don’t think that’s the basis of their argument.

    The RIAA alleges that the generators used the record labels’ songs to illegally train the models since they didn’t have the rights holders’ permission to use the recordings. But whether the companies needed that permission is unclear. AI companies have argued that the use of training data is a case of fair use, meaning they are allowed to use the recordings with impunity.

    Emphasis mine. Their concern is that the music was used for commercial purposes, not how the music came into their possession. Web scraping is already legal; that’s never been a piracy issue.




  • Piracy isn’t the issue; I’m not sure if we’re referencing different things here.

    How the developers came to possess the training material isn’t being called into question - it’s whether or not they’re allowed to train an AI with it, and whether doing so constitutes copyright infringement. And as the law is currently written, the way generative AI works does not cross those legal boundaries.

    The argument the RIAA wants to make is that using copyrighted material for the purposes of training software extends beyond the protections of fair use. I believe their argument is that - even if the music was acquired otherwise legally - acquiring it for the explicit purpose of making new music constitutes a commercial use of the material. It’s basically the difference between buying an album to listen to with your headphones and buying one to play for a packed concert hall: the commercial intent behind acquiring the music is what makes it illegal.


  • I feel that this logic follows a common misconception of generative AI. Its output isn’t made from the training data. It will take inspiration from it, but it doesn’t just mix-and-match samples from the training materials. GenAI generates from the statistical patterns it learned during training; the training data, itself, isn’t directly referenced during generation.

    The way AI generates content isn’t like when Vanilla Ice sampled Under Pressure; it would be more like if Vanilla Ice had talent and could actually write music, and had accidentally written the same bass line without ever hearing Queen. While unlikely, it’s still possible, and I’m sure we’ve all experienced a similar situation - e.g. you open a comment thread to post a joke based on the headline and see the top comment is already the exact same joke you were going to make… You didn’t copy the other user, and they didn’t copy you, but you both likely share a similar experience that triggers the same associations.

    For the same reasons that two different writers can accidentally tell the same story, or two different comedians can write the same joke, two different musicians can write the same melodies if they have shared inspirations. In all of those instances, both parties can create entirely original materials of their own accord, even if those materials aren’t meaningfully distinct from each other. The way generative AI works isn’t significantly different, which is why this is such a legally murky situation. If generative AI were more rudimentary and were actually sampling the training data, it would be an open-and-shut copyright infringement case. But, because the materials the AI produces are original creations of its own, we get into this situation where we have to argue over where to draw the line between “inspiration” and “replication”.





  • “The basic point is that [the AI companies’] model requires a vast corpus of sound recordings in order to output synthetic music files that are convincing imitations of human music,” the suits alleged. “Because of their sheer popularity and exposure, the Copyrighted Recordings had to be included within Suno’s training data for Suno’s model to be successful at creating the desired human-sounding outputs.”

    Nope, there are plenty of other ways for an AI to have arrived at similar notes. Say you have Song A, written by Steve. Steve grew up listening to a lot of John, who wrote songs B through Z. Steve spent his childhood listening to and being influenced by John, so when Steve eventually grows up to write Song A, it’s entirely possible for it to contain elements from songs B through Z. So if an AI trains off of Steve, it’s consequently going to pick up whatever habits Steve learned from John.

    Just like how you picked up some habits from your parents, which they picked up from their parents… etc. You could have a habit that started with an ancestor you’ve never met; who are you copying?