• henfredemars@infosec.pub
    ↑79 ↓6 · 1 year ago

    Slop is insulting. If I take the time to read it, I want another human to have taken the time to write it.

    • reev@sh.itjust.works
      ↑12 ↓2 · 1 year ago

      The counter there is to have an AI summarize it. No time taken to write nor to read haha

      • Empricorn@feddit.nl
        ↑4 ↓2 · 1 year ago

        That would work if it didn’t get even that wrong a huge amount of time. There are entire subreddits dedicated to AI summary fails!

        • I_Has_A_Hat@lemmy.world
          ↑2 ↓2 · 1 year ago

          Cherry picked and edited to give bad answers. Go play around with any of the big models, you’ll be bored and disappointed because 99.9% of the time it will give you exactly what you ask for.

          Except Gemini. Gemini is a drunk.

          • Empricorn@feddit.nl
            ↑1 · 1 year ago

            Yes, I’m sure the people posting funny AI summary fails to laugh at with others have an agenda and are all doctoring their screenshots…

      • explodicle@sh.itjust.works
        ↑1 · 1 year ago

        I foresee a future where we have an AI layer on top of corporate emails, translating from English to corpo-speak and back to English again.

    • lolrightythen@lemmy.world
      ↑2 ↓1 · 1 year ago

      Agree to an extent. It’s a tool that can help talentless folk like myself shitpost, and it has its place. But I agree with tags, and I disagree with inundating forums and stealing IP.

    • spujb@lemmy.cafe
      ↑13 ↓2 · 1 year ago

      interesting counterpoint. but i also imagine if ai content was correctly tagged, traffic to slop content would dramatically decrease, reducing incentive to post the content in the first place.

      i don’t know which force is stronger but i think both certainly exist.

    • NudeNewt@lemm.ee
      ↑10 ↓3 · 1 year ago

      This. Don’t let AI or AI posters know that you caught on. Just report them and be on your way.

  • Theonetheycall1845@lemmy.world
    ↑28 ↓1 · 1 year ago

    I am in complete agreement with this. While you can currently tell what’s AI, it won’t be long before we’re scratching our heads wondering which way is up and which way is down. Hell, I saw an AI-generated video of a cat cooking food. It looked real, sort of.

      • UnderpantsWeevil@lemmy.world
        ↑7 ↓3 · 1 year ago

        Get Politics Out Of My Shitposts

        I just want banal generic memes with hollow aphorisms, preferably with images of babies or puppies or something. I’m tired of waking up every morning and being confronted with the social expression of my degraded material conditions. People need to just STFU with their outcries of frustration and despair and get back to being clowns for my amusement.

        • T0RB1T@lemmy.ca
          ↑5 · 1 year ago

          Can’t tell if an unreasonable entitled comment, or a sarcastic comment.

          Maybe both? (눈_눈)

    • ByteOnBikes@slrpnk.net
      ↑4 · 1 year ago

      “Elon Musk and Donald Trump were individuals?”

      (people in 2035, who were not around during the Cronenberg-ing of Musk & Trump in 2028)

  • Spiderwort@lemmy.dbzer0.com
    ↑21 ↓1 · 1 year ago

    Maybe all digital content just shouldn’t be trusted. It’s like some kind of demon-realm or something. Navigable by the wise, but perilous for common fools like you and me. Full of illusion.

  • ramble81@lemm.ee
    ↑18 · 1 year ago

    And people want an NSFL tag, and people want a…

    You get a tag, and you get a tag, and you all get a tag!

  • gandalf_der_12te
    ↑19 ↓2 · 1 year ago

    Political posts should have a tag as well, so people can filter them out. People just use Bluesky, Pixelfed, … instead of Lemmy because of all the politics here.

  • Bob Robertson IX
    ↑22 ↓5 · 1 year ago

    Most photos taken or edited on a cell phone are enhanced with AI. Where do you draw the line?

      • Bob Robertson IX
        ↑8 ↓2 · 1 year ago

        So if I use an AI engine to do in-painting on an existing image, then that’s fine? This image is AI enhanced:

        Would it need to be tagged? (obviously should be tagged NSFL)

        • ewigkaiwelo@lemmy.world (banned)
          ↑2 · 1 year ago

          All of your points are valid; it is hard to draw a definitive line. But moderating the content will be just as hard, because some people won’t give correct tags to their posts. Even if there are specific labels for “A.I. generated”, “modified using A.I.”, etc., people will still avoid using them, intentionally or not.

  • hmmm@sh.itjust.works
    ↑15 ↓2 · 1 year ago

    It is available on R34, hentai, and porn websites.

    Truly we are just improving our tech to goon. LOL

  • cley_faye@lemmy.world
    ↑13 · 1 year ago

    Sure. Only problem is, it’s a people issue. Some people making AI-generated content may be honest and willing to abide by such a rule, but most are proud to not even read the rules and just blast shitty slop left and right. For this second category of people, when you point it out to them, a very small percentage goes “oh, sorry”. The vast majority just keep posting until blocked.

    Granted, this experience mostly stems from every media-posting site out there, so it may be a bit biased…

  • fmstrat@lemmy.nowsci.com
    ↑12 · 1 year ago

    Adobe is trying for the opposite: content authenticity with digital signatures to show something is not AI (I’ve been having conversations with them on this).

    • Tetsuo@jlai.lu
      ↑20 · 1 year ago

      Oh I’m sure Adobe has the greatest of intentions on this. Such a reputable company that has a stellar past.

      I’m sure they won’t gatekeep this digital human signature in some atrocious proprietary standard along with an expensive subscription to have the honor of using it.

      Don’t listen to Adobe on AI or even better don’t accept any “idea” or solution from Adobe.

      • ByteOnBikes@slrpnk.net
        ↑2 · 1 year ago

        Yeah, pretty much.

        I recall Flash, and how they absolutely controlled it. I loved Flash as a young programmer, too.

        But in retrospect, forcing users to go through Adobe to use something, with no alternatives? What a nightmare for an open Internet.

    • Korhaka@sopuli.xyz
      ↑7 · 1 year ago

      How would that work, then? I presume most would just ignore it, because if it only verifies that you used Adobe to make something, it’s pretty worthless as a “this isn’t AI” mark.

      • fmstrat@lemmy.nowsci.com
        ↑1 · 1 year ago

        It uses cryptographic signatures in the cameras and tools. Say you take a photo with a compatible camera: it gets a signature. Then you retouch it in Photoshop: it gets another signature. And this continues through however many layers. The signature is in the file’s EXIF data, so it can be read on the web, meaning a photo on a news site could be labeled as authentic, retouched, etc.

        Edit: Doesn’t require Adobe tools. Adobe runs the services, but the method is open. There are cameras on the market today that do this when you take a picture. I believe someone could add it to GIMP if they desired.
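        The chained-signature idea described above can be sketched roughly like this. It is a toy illustration, not the actual Content Credentials/C2PA format: real systems embed asymmetric certificates in the image metadata, while this stdlib-only sketch substitutes HMAC keys and made-up tool names.

```python
import hashlib
import hmac
import json

def _digest(obj) -> str:
    """Stable hash of any JSON-serializable object."""
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

def sign_step(image_bytes: bytes, prior_manifest: list, tool: str, key: bytes) -> list:
    """Append one signed provenance entry (camera shot, retouch, ...).
    Real systems sign with per-tool certificates; HMAC is a stand-in
    so this sketch needs only the standard library."""
    entry = {
        "tool": tool,
        "image_hash": hashlib.sha256(image_bytes).hexdigest(),
        "prior": _digest(prior_manifest),  # hash-link to the earlier steps
    }
    entry["sig"] = hmac.new(key, json.dumps(entry, sort_keys=True).encode(),
                            hashlib.sha256).hexdigest()
    return prior_manifest + [entry]

def verify_chain(image_bytes: bytes, manifest: list, keys: dict) -> bool:
    """Check every signature, the hash-links between steps, and that the
    final entry matches the image we actually have."""
    for i, entry in enumerate(manifest):
        unsigned = {k: v for k, v in entry.items() if k != "sig"}
        expected = hmac.new(keys[entry["tool"]],
                            json.dumps(unsigned, sort_keys=True).encode(),
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, entry["sig"]):
            return False  # entry was forged or altered
        if unsigned["prior"] != _digest(manifest[:i]):
            return False  # chain was reordered or truncated
    return bool(manifest) and \
        manifest[-1]["image_hash"] == hashlib.sha256(image_bytes).hexdigest()
```

        A photo signed first with a camera key and then with an editor key verifies end to end; change any byte of the image or any manifest entry and verification fails.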

    • kernelle@lemmy.world
      ↑2 · 1 year ago

      Very nice idea in theory, but proving there is no AI involved in the creation of art is not something I think is remotely possible. It’s an arms race more than anything, but I’m very interested in how Adobe will tackle it. I think people will be appreciating physical art more again, but even then we could argue about the usage of AI tools.

      Anyhow, people will have to come to terms with the fact that AI is here to stay, and will only get better too.

      • fmstrat@lemmy.nowsci.com
        ↑2 · 1 year ago

        My other reply talks about how this works with cryptographic signatures, but sure, people can lie. The key to this method is if there is a signature from a reputable artist, news org, or photographer, then that origin can’t be forged. So it’s about proving the authenticity (origin) vs the negative use of AI.

        • kernelle@lemmy.world
          link
          fedilink
          arrow-up
          1
          ·
          1 年前

          Pretty cool indeed, thank you. I like the idea of a cryptographic certificate of authenticity, would definitely add value to the digital art world.

    • JustEnoughDucks@feddit.nl
      ↑1 · 1 year ago

      And being Adobe, they will put a nice little backdoor in it for them to change the credentials, so that they can take artists’ work, use it, train their AI with it, and sell it, like they have been doing for years.

      • fmstrat@lemmy.nowsci.com
        ↑1 · 1 year ago

        You can’t change the credentials if the user owns the private key. But nothing stops AI training, that’s part of the terms of service of some of their products, which operate outside the realm of this more open initiative.

        • JustEnoughDucks@feddit.nl
          ↑1 · 1 year ago

          Spoken like a real Adobe rep, lol.

          It’s called a backdoor for a reason. Also, since Adobe software nowadays has almost full access to your machine, what is to stop Adobe from simply uploading and storing your private key on their servers and using it when they like? They run their DRM client on boot, with a ton of rights to your computer.

          WhatsApp can do exactly the same thing and read every message you write while still claiming it is “end-to-end encrypted”, for example, because key creation happens inside their proprietary software.

          • fmstrat@lemmy.nowsci.com
            ↑1 · 1 year ago

            Not sure why you’d say that; it’s just a factual statement. Also, I don’t even use Adobe products; I transitioned to GIMP and Shotcut many, many years ago. I work in privacy and data security, so I just happen to be involved with this initiative from the sidelines.

            As for your commentary, you could say the same thing about Signal. But you wouldn’t, because you like them. Just because you don’t like a company doesn’t mean they are being nefarious.

            Would I rather a privacy-focused company be doing this? Yes.

            Am I pleased with what I see from Adobe (a weekly working group full of identity and open source community members)? Yes.

            Does Adobe have a good chance of making this mainstream because of their ecosystem? Also yes.

            When you see something better, let me know and I’ll participate there too, vs complaining about those trying.

            • JustEnoughDucks@feddit.nl
              ↑1 · 1 year ago

              https://community.signalusers.org/t/overview-of-third-party-security-audits/13243

              Here is an entire list of years and years of independent audits

              https://github.com/signalapp

              Here, go look yourself to verify that the frontend isn’t sending your encryption key back to the server.

              https://www.adobe.com/trust/security.html

              Please tell me where I can find the source code of Adobe’s creative cloud DRM that has full access to the computer it is installed on and their audits to verify that they aren’t sending my private keys back.

              You are comparing an audited, open-source program with a closed-down proprietary system that says “trust me bro, we work with ‘security partners’; no, we won’t release the audits.”

              Interesting comparison. It’s like comparing a local farming co-op to the agro-industrial complex of Monsanto/Bayer and saying “you could say the same about either! Monsanto is at least innovating in the seed space; no, no, no, ignore how they use it!”

              • fmstrat@lemmy.nowsci.com
                ↑1 · 1 year ago

                You’re taking that out of context. Signal is open source, but you don’t get to see what happens between GitHub and the Play Store. Adobe’s system that I am alluding to is also open, but we don’t get to see what happens in the software itself. The problem is, that’s not even what I’m talking about. I’m talking about a standard they are developing, not their software or DRM.

                This isn’t just for Adobe; they’re just starting the process. Other systems can run it. Hardware can run it. Do you not use Linux because Canonical or Red Hat contributed? Do you steer developers away from Flutter because Google started it? Where is the line? Who do you think kicks off all the standards you use today? OAuth, OIDC, etc. If you want to avoid everything these companies contribute to, you’re going to have to stop using the internet.

  • Majorllama@lemmy.world (banned)
    ↑11 · 1 year ago

    That might work for now, while those of us who know what to look for can readily identify AI content, but there will come a time when nobody can tell anymore. How will we enforce the tagging then? Bad actors will always lie anyway, and some will accidentally post it without knowing it’s AI.

    I think they should add a tag for it anyway, so those who are knowingly posting AI stuff can tag it, but I fear that in the next few years AI images and videos will be inescapable and impossible to identify reliably, even for people who are usually good at picking out altered or fake images and videos.

      • Majorllama@lemmy.world (banned)
        ↑6 · 1 year ago

        Yeah, unfortunately bad actors ruin pretty much everything. We can do our best as a society to set things up in a way where systems can’t be abused, but the sad reality is we just need to raise people better.

        Lying, cheating (the academic or competitive-integrity kind), and many other undesirable behaviors are part of human nature, but good parenting teaches kids not to engage in them.

  • Fedizen@lemmy.world
    ↑14 ↓4 · 1 year ago

    It should be fineable, starting at like 500 dollars plus any profits and ad revenue, if it’s not labelled.