• SituationCake@aussie.zone · 3 points · 2 months ago

      Of course they did. It's universal: every workplace has people who just turn up and tick boxes, with no care for their impact on anyone else - clients, coworkers, the public etc. Zero f's and zero diligence. Sometimes their mundane box ticking is harmless, but in some positions it can be very destructive. I don't even know how a business could ban the use of AI. Even if it's blocked on company IT, someone could just do it on their phone and copy-paste. The ban would only be useful if the person was discovered, and then they'd probably just go through the warning process. But the damage is already done. So unfortunately I think we are stuck with it forever from now on. Enshittification of the world continues.

    • Pilk@aussie.zone · 2 points · edited · 2 months ago

      People need to realise how easy it is for a human to spot synthetic content - at least with the current state of AI text generation.

      I'm not saying it should never be used (edit: this context is one of many exceptions), but I do think it should be clearly labelled as synthetic.

      Reddit is a wasteland for this shit already, though. Probably too late.

      • melbaboutown@aussie.zone · 6 points · edited · 2 months ago

        The problem is not only that it makes poor recommendations affecting the legal outcome and the safety of the child.

        The LLM has also now got hold of sensitive and potentially identifiable personal information, which is subject to the company's own rules about how that information will be handled and disclosed.

        Edit: The gathering of this sensitive personal information also wasn't disclosed or consented to.

        So I don’t think it should be used for this purpose. The use here was completely inappropriate.

        I’ve also refused to allow my GP to use AI to take notes during the consultation, because I don’t think the owner of that technology should have access to my medical information or give itself permission to disclose it.

        PS: In the infancy of AI I volunteered on citizen science projects, helping train models to recognise slides with cancer cells. I also watched with interest as it was used to generate simple forms challenging parking fines (?) for people who couldn't afford legal assistance.

        So it's not like I'm screaming about technology being bad and Thomas Edison being a witch. I simply think a lot of corner cutting and misuse is happening without regulation, and it's leading to real harm.