Google has filed an application with the US Patent and Trademark Office for a tool that would use machine learning (ML, a subset of AI) to detect what Google deems to be “misinformation” on social media.

Google already uses elements of AI in the algorithms that automate censorship on its massive platforms, and this filing indicates one specific path the company intends to take going forward.

The patent’s general purpose is to identify information operations (IO) campaigns, with the system then expected to “predict” whether they contain “misinformation.”

Judging by the explanation Google attached to the filing, the company at first appears to blame its own business for the proliferation of “misinformation” – the text states that information operations campaigns are cheap and widely used because their messaging is easily made viral thanks to “amplification incentivized by social media platforms.”

But it seems that Google is developing the tool with other platforms in mind.

The tech giant specifically states that other platforms (the filing names X, Facebook, and LinkedIn) could use the system to train their own “different prediction models.”

Machine learning itself depends on algorithms being fed large amounts of data, and it comes in two broad types – “supervised,” where the algorithm learns from examples that humans have already labeled, and “unsupervised,” where it is handed huge unlabeled datasets (such as images, or in this case, language) and left to “learn” on its own what it is “looking” at.
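
To make the distinction concrete, here is a minimal sketch in Python, using scikit-learn purely for illustration – none of the data, labels, or model choices come from Google’s filing. The same toy posts are first classified using human-provided labels (supervised), then grouped with no labels at all (unsupervised):

# Illustrative only: supervised vs. unsupervised learning on toy text data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

posts = [
    "breaking: shocking claim spreads fast",
    "local council posts meeting schedule",
    "share this before it gets taken down",
    "weather service issues routine forecast",
]
labels = [1, 0, 1, 0]  # hypothetical human labels: 1 = flagged, 0 = benign

X = TfidfVectorizer().fit_transform(posts)

# Supervised: the model learns from the human-provided labels.
clf = LogisticRegression().fit(X, labels)
print(clf.predict(X))

# Unsupervised: no labels; the algorithm groups similar posts on its own.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print(clusters)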

(Reinforcement learning is also part of the process – in essence, the algorithm is rewarded for decisions its creators approve of and penalized for those they don’t, so it becomes increasingly efficient at detecting whatever those who build the system are looking for.)

The ultimate goal here is most likely for Google to make its “misinformation detection,” i.e., censorship, more efficient while targeting a specific type of data.

The patent indeed states that the tool uses neural-network language models (neural networks being the underlying “infrastructure” of ML).

Google’s tool will classify data as IO or benign, and further aims to label it as coming from an individual, an organization, or a country.

The model then predicts the likelihood of that content being part of a “disinformation campaign” by assigning it a score.
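
Pieced together from the filing’s description, the workflow looks something like the sketch below: one classifier flags a post as IO or benign, a second attributes a likely actor type, and the IO probability doubles as the “score.” Everything here – the data, the labels, and the use of simple scikit-learn models instead of neural-network language models – is hypothetical and only illustrates the classify-attribute-score pattern.

# A minimal, hypothetical sketch of a classify-attribute-score pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

train_posts = [
    "coordinated hashtags push identical talking points",
    "grandma shares her soup recipe",
    "network of new accounts amplifies one story",
    "company announces quarterly earnings call",
]
io_labels = ["IO", "benign", "IO", "benign"]  # made-up labels
actor_labels = ["country", "individual", "organization", "organization"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(train_posts)

io_model = LogisticRegression().fit(X, io_labels)        # IO vs. benign
actor_model = LogisticRegression().fit(X, actor_labels)  # individual / organization / country

def assess(post):
    x = vectorizer.transform([post])
    io_probs = io_model.predict_proba(x)[0]
    score = io_probs[list(io_model.classes_).index("IO")]
    return {
        "label": io_model.predict(x)[0],           # IO or benign
        "actor": actor_model.predict(x)[0],        # attributed actor type
        "campaign_score": round(float(score), 2),  # likelihood score assigned to the post
    }

print(assess("new accounts spread identical claims within minutes"))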

  • Damage@feddit.it · 11 months ago

    Well that’s an asshole move. First of all it would be stupid to allow such a system to be patented, as it’s kinda generic.

    Second, if they get it they can effectively obstruct attempts by other actors to limit misinformation on different platforms.

      • skygirl@lemmy.world · 11 months ago

      You can trust us! Only our platform is able to dynamically remove misinformation.

      Yep, seems fine to me. What could go wrong?

  • yiggy@links.hackliberty.org · 11 months ago

    This is fantastic, maybe even glorious.

    Yes, I can’t see any reason why this won’t leave the world a better and more informed place.

    After all, if we can’t trust the largest advertising company in history to be the arbiter of truth and accuracy, who can we trust?