• 8 Posts
  • 2.95K Comments
Joined 3 years ago
cake
Cake day: July 31st, 2023

help-circle


  • There’s a few of them. Notably, the guy who didn’t care that AI art is built on the back of copyright violations getting pissy about his AI-generated art not being eligible for copyright.

    But more importantly here, I don’t think most artists in the gaming industry are in much of a position where they can stand by their artistic integrity. If every publisher pushes studios into using AI to be more “productive”, the choice becomes between slopping or starving—and most people don’t like starving.

    We as consumers are the only ones that can afford to push back against this shit. Our survival doesn’t rely on buying DLSS 5 games so we have the ability to boycott them to send a message.


  • Power and grid infrastructure is a limitation that can exceed hardware availability in some regions. Musk has a datacenter with 20-something methane gas generators running throughout the day to power his mini-me sycophantic AI, Grok.

    At the cost of a cultural deficit, solar could provide an environmental benefit there during the day.


  • If you thought Flock cameras were a bad situation, imagine not being able to query, read, write, or probably even speak about topics that they decide are “unpatriotic” or “satanic”.

    The only difference between right now and then is that right now they aren’t doing anything about it. They already have the data about people’s opinions and leanings as a side effect of the massive network of tracking built for targeted advertising.

    It will obviously be worse when we’re stuck renting computers, but what you’re describing is a today problem just as much as it’s a future problem. The only reason it hasn’t turned full 1984 is because they haven’t gone full mask off yet.


  • No, it won’t. It will cause more of the supply to be reallocated away from consumers into enterprise, and that is exactly what the big tech companies want to see happen.

    Having access to a computer and phone is as much of a necessity to survive in modern society as internet is. When personal computing is unaffordable to the point where subscription computing is a good enough “deal” for consumers to jump on, the ball will start rolling towards the inevitable price squeeze that we have no choice but to accept.






  • The researchers said it was “maddening” that such easy action to fight the climate crisis was not being taken, and said people should be angry. Stopping the leaks can even be free, given that captured gas can be sold – methane is the “natural gas” that fires power stations.

    It’s maddening but expected.

    When corporate decisions are based solely on pleasing investors, fixing a leak isn’t a priority. It might be a long-term investment that eventually pays for itself, but it comes with a front-loaded cost that diminishes the profits of the current quarter.

    The only way to get them to care about the problem is if it’s actively unprofitable or comes with personal liability for the leadership, and the only way that will happen is with regulations.

    In other words: “why about the survivability of the species when we can instead care about making our investor’s loins tingle?”







  • It’s the same for me.

    I don’t care if somebody uses Claude or Copilot if they take ownership and responsibility over the code it generates. If they ask AI to add a feature and it creates code that doesn’t fit within the project guidelines, that’s fine as long as they actually clean it up.

    I’m more concerned with the admitted OpenClaw usage. That’s a hydrogen bomb heading straight for a fireworks factory.

    This is the problem I have with it too. Using something that vulnerable to prompt injection to not only write code but commit it as well shows a complete lack of care for bare minimum security practices.


  • the experiment you are referring to was specifically designed to deceive whereas AI vulnerabilities would just be simple bugs.

    In my original comment, I was specifically referring to OpenClaw. Given that it doesn’t live in a vacuum and can be influenced with prompt injection, it’s not safe to assume that whatever bugs it creates aren’t specifically designed to deceive.

    Secondly, the security requirements of the Linux Kernel are way more important/stringent than Lutris, which has no special access & is often even further sandboxed if installed via Flatpak.

    Sure, but that’s not the point I was trying to make. You said that I don’t trust the guy to audit the code for malicious intent before committing and I gave you a reason why nobody should: if multiple people with decades of experience in a specialized domain can’t catch vulnerabilities disguised as subtle bugs, one guy who isn’t scrutinizing the changes nearly as hard definitely won’t.