• sin_free_for_00_days@lemmy.one · 16 points · 1 year ago

    This is good. I used to think that, here in the US, the 1st amendment was a wonderful thing. Watching the anti-science and magat growth built on straight-up lies has made me question that belief. As hawkwind says, it’s important to know who is deciding if something is fake, but so often if it gets to that point it’s like we’re missing out on fucking common fucking sense. If someone can’t tell that an election wasn’t stolen, or that a person is a criminal when they admit to crimes, well…fuck 'em.

    • lobut@lemmy.ca · 12 points · 1 year ago

      There was a good NPR podcast about the first amendment. They talked about how it was never intended to mean what we think it does, mostly about having to charge for the press. Even at the time, people were protesting the war and judges were putting them in jail despite the first amendment.

      (I think it’s this one: https://radiolab.org/podcast/what-holmes)

      The thing is that most people assume that the truth will always come through. I think we’ve seen over the years that that’s not the case. The first amendment is and should be an eternal debate about what’s true and what’s acceptable.

  • demvoter@kbin.social · 16 points · 1 year ago

    While this sounds good, it is just a proposed law. It must be enacted and then it must be enforced. The EU has laws, yet all the social media companies have done very little about the trash on their sites. Twitter is promoting mis- and disinformation to all its users. YouTube said it would stop removing false election fraud videos. We’ll see if this goes anywhere.

    • Lucien@beehaw.org · 4 points · 1 year ago

      All penalties for large organizations should be based on global turnover. Not only that, there should be a third metric, based on the calculated benefit the company gained by breaking the regulation.

      So if Meta complains it would cost $X to moderate effectively, they should be fined $X * 3 or whatever. If Amazon saves $500B by misclassifying its drivers as contractors, they should be fined $1.5T. If the company needs to file for bankruptcy because it was based on illegal practices, so be it.

      • wet_lettuce@beehaw.org · 6 points · 1 year ago

        He’s not asking how to spot it. He’s asking who gets to be the ultimate arbiter of fakeness?

        Even reputable news sources make mistakes. Sometimes their sources give bad information. Maybe they reported in good faith, but with bad information?

        What happens when they work around it by JAQ-ing off? https://rationalwiki.org/wiki/Just_asking_questions

        • Storksforlegs@beehaw.org · 13 points · 1 year ago

          True, no matter how careful they are, news outlets get things wrong, sources turn out to be mistaken, etc. But I think this law is not about punishing reputable news sources that make mistakes.

          This law is more about demonstrably false, unverified info masquerading as real news (disinformation campaigns).

          Hopefully the law is nuanced enough to distinguish honest mistakes from disinformation; I agree there could be potential for misuse if it is too vague. However, something like this is REALLY needed. Social media is a hotbed of bullshit, since that crap means more user engagement. It angries up the blood and keeps users hooked. And when this stuff is left to fester, users get radicalized and start overdosing on horse medicine and shouting about lizard adrenalin or whatever…

          I think a law like this is necessary to make social media companies do literally anything. They clearly won’t if left to their own devices.

  • Steeve@lemmy.ca · 7 points · 1 year ago

    As long as this is overseen by an unbiased third party, this is excellent news. I think this sort of regulation is exactly what social media needs to survive at this point. There’s been a standoff between governments and social media corporations, with neither one wanting to be the one to regulate this content, because it’s political/corporate suicide to have it look like you’re taking a shot at “free speech”. I hate that misinformation had to get this bad before someone finally decided to regulate it.

    I’d go a step further and also charge the creators of misinformation content when it’s done maliciously.

  • ticho@social.fossware.space · 6 points · 1 year ago

    “Multimillion-dollar fines” is just another term for “pocket change” in this context. Pump those numbers up!

    • g0nz0li0@beehaw.org · 11 points · 1 year ago

      Who? Not the same body responsible for enforcement, which is a good start. The language in the article suggests to me that they’re targeting obvious disinformation spread by bots and telling platforms they need to have processes in place to manage themselves internally, so it would punish the likes of Twitter, which has decided that anything goes (because it saves Elon money).

      But there’s not enough information at hand yet, so it’s best to remain skeptical (but not conspiratorial).