A group of authors filed a lawsuit against Meta, alleging the unlawful use of copyrighted material in developing its Llama 1 and Llama 2 large language models....
(1)to reproduce the copyrighted work in copies or phonorecords
The works are copied in their entirety and reproduced in the training database. AI businesses do not deny this is copying, but instead claim it is research and thus has a fair use exemption.
I argue it is not research, but product development - and furthermore, unlike traditional R&D, it is not some prototype that is different and separate from the commercial product. The prototype is the commercial product.
(2)to prepare derivative works based upon the copyrighted work
AI can reproduce, and has reproduced, significant portions of copyrighted work, even though the finished product allegedly does not include the work in its database (it merely read the training database).
Furthermore, even if a human genuinely and honestly believes they’re writing something original, that does not matter when they reproduce work that they have read before. What determines copyright infringement is the similarity of the two works.
If you read through the court filings against OpenAI and Stability AI, much of the argument centres on trying to make a claim under Case 1.
The position that I take is that the arguments made against OpenAI and Stability AI in court are not complete. They’re not quite good enough. However, that doesn’t mean there isn’t a valid argument that is good enough. I just hope we don’t get a ruling in favour of AI businesses simply because the people challenging them didn’t employ the right ammunition.
With regards to Case 2, I refer back to my comment about the similarity of the work. The argument isn’t that the LLM itself is an infringement of copyright, but that the LLM, as designed by the business, infringes copyright in the same way a human would.
I definitely agree it is all extremely unclear. However, I maintain that the textual definition of the law absolutely still encompasses the feeling that people’s work is being ripped off for a commercial venture. Because it is so commercial, original authors are being harmed as they will not see any benefit from the commercial profits.
I would also like to point you to my other comment, which I put a lot of time into and where I expanded on many other points (link to your instance’s version): https://lemmy.world/comment/6706240
The works are copied in their entirety and reproduced in the training database. AI businesses do not deny this is copying, but instead claim it is research and thus has a fair use exemption.
The copying of the data is not, by itself, infringement. It depends on the use and purpose of the copied data, and the defense argues that training a model against the data is fair use under text and data mining (TDM) use cases.
AI can reproduce, and has reproduced, significant portions of copyrighted work, even though the finished product allegedly does not include the work in its database (it merely read the training database).
The model does not have a ‘database’, it is a series of transform nodes weighted against unstructured data. The transformation of the copyrighted works into a weighted regression model is what is being argued is fair use.
Furthermore, even if a human genuinely and honestly believes they’re writing something original, that does not matter when they reproduce work that they have read before.
Yup, and it isn’t the act of that human reading a copyrighted work that is considered infringement; it is the creation of a work that is substantially similar. By the same analogy, it wouldn’t be the creation of the AI model that is the infringement, but each act of creation thereafter that is substantially similar to a copyrighted work. But this comes with a bunch of other problems for the plaintiffs, and would likely be a losing case without merit.
The position that I take is that the arguments made against OpenAI and Stability AI in court are not complete
The argument isn’t that the LLM itself is an infringement of copyright, but that the LLM, as designed by the business, infringes copyright in the same way a human would.
Trying really hard not to come off as rude, but there’s a good reason why this isn’t the argument being put forward in the lawsuits. If this was their argument, the LLM could be considered a commissioned agent, placing the liability on the agent commissioning the work (e.g. the human prompting the work) - not OpenAI or Stability - in much the same way a company is held responsible for the work produced by an employee.
I really do understand the anger and frustration apparent in these comments, but I would really like to encourage you to learn a bit more about the basis for these cases before spending substantial effort writing long responses.
The copying of the data is not, by itself, infringement.
Copyright is absolute. The rightsholder has complete and total right to dictate how the work is copied. Thus, any unauthorised copying is copyright infringement. However, fair use grants an exemption for certain types of copying: the copyright is still being infringed, because the rightsholder’s absolute rights are being circumvented, but no penalty is awarded because of fair use.
This is all just pedantry, though, and has no practical significance. Saying “fair use means copyright has not been infringed” doesn’t change anything.
it is a series of transform nodes weighted against unstructured data.
That’s a database. Or perhaps rather some kind of 3D array - which could just be considered an advanced form of database. But yeah, you’re right here, you win this pedantry round lol. 1-1.
it wouldn’t be the creation of the AI model that is the infringement, but each act of creation thereafter that is substantially similar to a copyrighted work. But this comes with a bunch of other problems for the plaintiffs, and would be a losing case without merit.
Yeah I don’t want to go down the avenue of suing the AI itself for infringement. However…[1][2][3]
Trying really hard not to come off as rude
You’re not coming off as rude at all with what you’ve said, in fact I welcome and appreciate your rebuttals.
I really do understand the anger and frustration apparent in these comments, but I would really like to encourage you to learn a bit more about the basis for these cases before spending substantial effort writing long responses.
You say that as if I haven’t enjoyed fleshing out the ideas and sharing them. By the way, right now I’m sharing with you lemmy’s hidden citation feature :o)
That said, I was much happier replying to you before I saw the downvotes you’ve apparently given me across the board. That’s rather poor behaviour on your part: you shouldn’t downvote just because you disagree, and you can’t justify it by saying I’m wrong when the whole question of who is right is still being heavily debated and adjudicated.
I thought we were engaging in a positive manner, but apparently you’ve been spitting in my face.
but there’s a good reason why this isn’t the argument being put forward in the lawsuits.
The LLM absolutely could be considered an agent, but the way it acts is merely prompted by the user. The actual behaviour is dictated by the organisation that built it. In any case, this is only my backup argument if you even consider the initial copying to be research - which it isn’t. ↩︎
Copyright is absolute. The rightsholder has complete and total right to dictate how it is copied.
Really and truly, this is not how this works. The exemptions granted by the office of the registrar are exemptions to copyright claims for fair uses. It isn’t a question of whether the claim can be awarded damages; the claim is exempt in its entirety. You can think of copyright as an exemption to the First Amendment right to free speech, and the exemption to copyright as describing where that ‘right’ does not apply. Copyright holders do not get to control the use of their work where fair use applies, as determined by the registrar and reconsidered every three years.
This is all just pedantry, though, and has no practical significance. Saying “fair use means copyright has not been infringed” doesn’t change anything.
True enough, but it seems like it’s important for your understanding in how copyright works.
Or perhaps rather some kind of 3D array - which could just be considered an advanced form of database. But yeah, you’re right here, you win this pedantry round lol. 1-1.
I wasn’t being pedantic; that distinction is important for how copyright is conceptualized. The AI model is the thing being considered for infringement, so it’s important to note that the works being claimed do not exist as such within the model. The ‘3D array’ does not contain copyrighted works. You can think of it as a large metadata file, describing how to construct language as analyzed through the training data. The nature and purpose of the ‘work’ is night-and-day different from the works being claimed, and ‘database’ is a clear misrepresentation (possibly even intentionally so) of what it is.
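To make the distinction concrete, here is a deliberately toy sketch (a character-level bigram counter, nothing like a real transformer, but the same principle applies): “training” reduces the text to aggregate statistics, and the resulting model stores only those weights, not the work itself.

```python
# Toy illustration of the "weights, not a database" distinction.
# A real LLM is vastly more complex, but the principle is the same:
# training reduces text to aggregate statistics, and the source text
# is not stored as such in the resulting model.
from collections import Counter

def train_bigram_model(corpus: str) -> dict:
    """Count how often each character follows another, discarding the text."""
    return dict(Counter(zip(corpus, corpus[1:])))

model = train_bigram_model("the cat sat on the mat")

# The model is just a table of (char, next_char) -> count pairs;
# the original sentence cannot be read back out of it directly.
print(model[("t", "h")])  # -> 2 ("th" occurred twice in the corpus)
```

Whether this kind of transformation of a copyrighted corpus into statistics is fair use is, of course, exactly what the lawsuits are contesting.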
Yeah I don’t want to go down the avenue of suing the AI itself for infringement.
That was exactly what you pivoted to in your comment here; I’m not sure why you’re now saying you don’t want to go down that avenue. I’m confused about what you’re arguing at this point.
That said, I was much happier replying to you before I saw the downvotes you’ve apparently given me across the board. That’s rather poor behaviour on your part: you shouldn’t downvote just because you disagree, and you can’t justify it by saying I’m wrong when the whole question of who is right is still being heavily debated and adjudicated.
I’ve down-voted your comments because they contain inaccuracies and could be misleading to others. You shouldn’t take my grading of your comments as a reflection of my attitude towards you; I’m sure you’re a fine individual. Downvotes don’t mean anything on Lemmy anyway, and I’m not sure ‘spitting in your face’ is a fair or accurate description, but I don’t want to invalidate your feelings, so I apologize for making you feel that way, as that wasn’t my intent.
I’ve down-voted your comments because they contain inaccuracies and could be misleading to others. You shouldn’t take my grading of your comments as a reflection of my attitude towards you; I’m sure you’re a fine individual. Downvotes don’t mean anything on Lemmy anyway, and I’m not sure ‘spitting in your face’ is a fair or accurate description, but I don’t want to invalidate your feelings, so I apologize for making you feel that way, as that wasn’t my intent.
No worries, you’ve been very respectable. My feelings weren’t particularly hurt, I just felt the need to call it out.
Personally, I’m against downvoting things merely because they are wrong. If someone says something that’s wrong, it may well be a commonly held misconception, and downvoting it also demotes any correction that has been given, which means other people who hold the misconception are less likely to be corrected.
And that’s beside the fact that I don’t really think I’m completely wrong here :o)
That was exactly what you pivoted to in your comment here; I’m not sure why you’re now saying you don’t want to go down that avenue. I’m confused about what you’re arguing at this point.
To be a little more specific, I don’t want to go down the route of blaming the AI itself for copyright infringement, that is, of asking whether AI is bound by laws the way humans are. I think it is only worthwhile considering whether the AI developer and/or the users are infringing copyright through their creation or use of AI. In particular, I think the legal or philosophical question of whether AI is affected by laws in the same way humans are is pointless when we’re just talking about LLMs and not a true Artificial Intelligence.
The ‘3-d array’ does not contain copyrighted works. You can think of it as a large metadata file, describing how to construct language as analyzed through the training data. The nature and purpose of the ‘work’ is night-and-day different from the works being claimed, and ‘database’ is a clear misrepresentation (possibly even intentionally so) of what it is.
Yes, absolutely: the LLM itself does not include copyrighted works. That’s not what I’m arguing. The first issue I take is with the database of information the LLM is trained on. This database does contain copyrighted works; AI developers admit that it does, but they claim it is fair use research. I disagree with this claim: to use the terminology from one of your links, their “research” is not “scholarly” — it is commercial product development.
The other issue is that the LLM can reproduce copyrighted work. While I agree with you in some sense that the user of the LLM is instructing it to infringe copyright, and thus the user is responsible, in another sense I think the developer is also responsible because they have given the tool the capability to do this. This is perhaps not a strong argument, particularly when the developers have made efforts to fix these “bugs” as they come to light.
However my most important point is that the developers have infringed copyright by building a training database full of copyrighted works, which the LLM was then trained on. The LLM itself isn’t copyright infringement, but they infringed copyright to develop it.