I get some of the surface-level reasons, and those annoy me too. Cramming AI into everything is dumb and unnecessary.

However, I do feel that at a deeper level, it has a lot of useful applications that will absolutely change society and improve the efficiency and skills of those who use it. For example, if someone wants to learn to code, they could take a few different paths. There are the traditional paths: read on your own, or go to school and learn to code that way. Or you could pay for a bootcamp or an online coding education platform. Or you could just tell an AI chatbot you want to learn to code, have it become your teacher, and have it correct any errors you make in real time. Another application is in generating ideas or quick mock-ups. Say I’m playing a game of D&D with friends. I need a character avatar, so I just provide a description to the AI and it mocks one up quickly. It might take a few prompts, but it usually does a pretty good job. Or if I have a scenario I need a few enemies for, I could just provide a description of those enemies and have a quick stat block made up for them.

I realize that there are underlying issues with regard to training AI on others’ work, but as a musician myself, and a supporter of open source as often as possible, I feel it’s a bit hypocritical for people to get upset about AI “stealing” code or other stuff that people willingly put out there for free for others to consume. Any artist or coder could “steal” the work of others as inspiration for their own, the same as an AI does; an AI is just much more efficient about it. I do think that most of the corporations pushing some new AI feature, or promising the world or the end of the labor force, are full of shit, and that we are definitely in some sort of an AI bubble. But the technology itself is definitely useful in a lot of ways, and if it can be developed on a more localized and decentralized scale (community-owned AI hubs, anyone?), it could actually be a really powerful and beneficial technology for organizations and individuals looking to do more with less.

  • schnurrito
    1 day ago

    whoever employs LLM

    incumbent upon the handler to assume liability

    I agree. If you make any kind of real-world decision based on the output of AI, you should be liable for it as if you’d made that decision yourself.

    But I remember reading some news stories about cases where people (often minors) chatted with chatbots and managed to get those chatbots into states where they encouraged the users to harm themselves (in some cases even to commit suicide?). As tragic as that is, I don’t see how it’s morally right to hold the AI companies responsible for that unless it can be shown they did this on purpose. All the AI did in such cases was what it was advertised and understood to do: generate plausible-sounding text based on user input. Those are the cases I’m talking about.

    • cheese_greater@lemmy.world
      1 day ago

      It’s a difficult issue, no doubt about it. It doesn’t help that the healthcare system in many places is basically massive social murder and negligence, but it’s also so hard to really do anything to help when someone feels no reason to live and there isn’t some external locus of commitment (and safety) that can help rule that out for them.

      Personally, I got a pet, and over time added a couple more to make sure she had pals, or at least something else to keep her company when I wasn’t sufficient. But I’ve also had a lot of random help and weird circumstances that don’t necessarily scale. Anybody who wants to be okay and turn things around should have that support and a path available, like anybody else.