I’m rather curious to see how the EU’s privacy laws are going to handle this.

(Original article is from Fortune, but Yahoo Finance doesn’t have a paywall)

  • Dkarma@lemmy.world · 1 year ago

    It takes so much money to retrain models, though…like the entire cost all over again. And what if they find something else?

    Crazy how murky the legalities are here…just no case law to base anything on, really.

    For people who don’t know how machine learning works, at a very high level:

    basically, every input the AI is trained on or “sees” changes a set of weights (floating-point numbers), and once the weights are changed you can’t remove that input and change the weights back to what they were; you can only keep changing them with new input
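The irreversibility described above can be sketched with a toy one-parameter model (hypothetical numbers, plain Python, no ML framework assumed): each example nudges the weight, and the final value is a blend of all of them with no per-example record to undo.

```python
# Toy illustration: each training example nudges the weight via
# gradient descent, and the updates blend together irreversibly.

def train_step(weight, x, target, lr=0.1):
    """One gradient-descent step for a one-parameter model y = weight * x."""
    prediction = weight * x
    error = prediction - target
    gradient = error * x          # derivative of 0.5 * error**2 w.r.t. weight
    return weight - lr * gradient

weight = 0.0
examples = [(1.0, 2.0), (2.0, 3.9), (3.0, 6.1)]  # (x, target) pairs

for x, target in examples:
    weight = train_step(weight, x, target)

# The final weight reflects every example at once; removing one example's
# influence would require retraining from scratch without it.
print(round(weight, 4))  # prints 1.92
```

Real models have billions of weights instead of one, but the one-way nature of the updates is the same, which is why “unlearning” a single input is so hard.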

    • DigitalWebSlinger@lemmy.world · 1 year ago

      So we just let them break the law without penalty because it’s hard and costly to redo the work that already broke the law? Nah, they can put time and money towards safeguards to prevent themselves from breaking the law if they want to try to make money off of this stuff.

      • Dkarma@lemmy.world · 1 year ago

        No one has established that they’ve broken the law in any way, though. Authors are upset but it’s unclear if they can prove they were damaged in some way or that the companies in question are even liable for anything.

        Remember, the burden of proof is on the plaintiff, not these companies, if a suit is brought.

        • vrighter · 1 year ago

          I’m European. I have a right to be forgotten.

            • Fribbtastic@lemmy.world · 1 year ago

              I just skimmed through the “right to be forgotten” site from the EU, and there is nothing specifically mentioned about “search engines”, or at least not that I can find.

              Basically, ANY website that has users from the EU needs to comply with the GDPR, which means that you have the “right to be forgotten” when:

              • The personal data is no longer necessary for the purpose an organization originally collected or processed it.
              • An organization is relying on an individual’s consent as the lawful basis for processing the data and that individual withdraws their consent.
              • An organization is relying on legitimate interests as its justification for processing an individual’s data, the individual objects to this processing, and there is no overriding legitimate interest for the organization to continue with the processing.
              • An organization is processing personal data for direct marketing purposes and the individual objects to this processing.
              • An organization processed an individual’s personal data unlawfully.
              • An organization must erase personal data in order to comply with a legal ruling or obligation.
              • An organization has processed a child’s personal data to offer their information society services.

              However, you cannot ask for deletion if the following reasons apply:

              • The data is being used to exercise the right of freedom of expression and information.
              • The data is being used to comply with a legal ruling or obligation.
              • The data is being used to perform a task that is being carried out in the public interest or when exercising an organization’s official authority.
              • The data being processed is necessary for public health purposes and serves in the public interest.
              • The data being processed is necessary to perform preventative or occupational medicine. This only applies when the data is being processed by a health professional who is subject to a legal obligation of professional secrecy.
              • The data represents important information that serves the public interest, scientific research, historical research, or statistical purposes, and where erasure of the data would be likely to impair or halt progress towards the goal of the processing.
              • The data is being used for the establishment of a legal defense or in the exercise of other legal claims.

              The GDPR is also not particularly specific and is pretty vague from what I have read, which means it will also apply to AI and not just “Google searches”.

              https://gdpr.eu/article-17-right-to-be-forgotten/

              That means that anyone who gathered the data, with or without the consent of the user, will have to comply if they are serving the application to EU users. This also includes the right to be forgotten, so every company has to have the necessary features to delete the data.
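The deletion feature mentioned above can be sketched as a toy in-memory user store (all names hypothetical; a real system would also have to purge backups, logs, and downstream copies):

```python
# Minimal sketch of a GDPR erasure-request handler (hypothetical design).
# Personal data lives in a dict keyed by user ID; deletions are audited.

from datetime import datetime, timezone

class UserStore:
    def __init__(self):
        self.records = {}     # user_id -> personal data
        self.erasure_log = [] # audit trail: (user_id, deletion time)

    def add_user(self, user_id, data):
        self.records[user_id] = data

    def handle_erasure_request(self, user_id):
        """Delete personal data 'without undue delay' and log the action."""
        if user_id not in self.records:
            return False
        del self.records[user_id]
        self.erasure_log.append((user_id, datetime.now(timezone.utc)))
        return True

store = UserStore()
store.add_user("alice", {"email": "alice@example.com"})
assert store.handle_erasure_request("alice")
assert "alice" not in store.records
```

For a database this is a straightforward delete; the whole dispute in this thread is that a trained model has no equivalent per-user record to remove.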

              And the Regulation (it applies directly in every member state, unlike a Directive) is already a few years old now, so a company that should delete your data has no excuse for failing to do it “without undue delay”. The arguments “but we can’t” or “it takes too much time” aren’t really valid here; this should have been considered when the application was written/designed.

              However, as stated in the contra points above, someone might argue that AI like ChatGPT could operate in the interest of research or the public interest and that a deletion of that data or data set could “impair or halt progress to that achievement that was the goal”.

              That means that, from my knowledge right now, it is pretty clear: if someone has personal data about you, you can request that it be deleted, and that should be done without undue delay; the company generally has one month to comply with the request.

              But, these are just the things I could gather from the official websites.

      • frezik@midwest.social · 1 year ago

        The “safeguard” would be “no PII in training data, ever”. Which is fine by me, but that’s what it really means. Retraining a model on a large dataset every time a GDPR request comes in is completely infeasible.