Profile pic is from Jason Box, depicting a projection of Arctic warming to the year 2100 based on current trends.

  • 1 Post
  • 4.14K Comments
Joined 2 years ago
Cake day: March 3, 2024

  • Rhaedas@fedia.io to memes@lemmy.world · Smol but mighty · 14 hours ago

    They all have different personalities. Even among the ones like this that are bent on attacking everything, some do it more playfully and even with minimal clawing, while others are nightmare devils that are in kill mode. Who later are curled up in your lap all lovable.


  • You’d be so correct if I had said “artificial intelligence”. But that wasn’t what I was talking about. “AI” absolutely has been hijacked by the companies and the media to mean something that it is not. LLMs are not intelligent.

    Want to get into the nitpicking? To talk about actual intelligence we’d be using “AGI”, not “AI”. Also, ELIZA is capitalized; I was around when we got to play with it in BASIC on the first personal computers. Again, not intelligent, just a lot of IF-THEN clauses. ELIZA was more a demonstration of how easily people can be fooled, and boy, they sure can be now.
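    The IF-THEN characterization can be illustrated with a toy sketch (Python here rather than BASIC; the keywords and canned replies below are made up for illustration, not ELIZA’s actual script):

    ```python
    import re

    # A toy ELIZA-style responder: no understanding, just keyword
    # patterns mapped to canned reply templates.
    RULES = [
        (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
        (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
        (re.compile(r"\bmy (.+)", re.IGNORECASE), "Tell me more about your {0}."),
    ]
    DEFAULT = "Please go on."

    def respond(text):
        # First matching pattern wins; otherwise fall back to a stock phrase.
        for pattern, template in RULES:
            match = pattern.search(text)
            if match:
                return template.format(match.group(1))
        return DEFAULT

    print(respond("I am sad about the weather"))  # Why do you say you are sad about the weather?
    print(respond("hello"))                       # Please go on.
    ```

    The whole trick is that the echoed fragment makes the reply feel responsive; there is no model of meaning anywhere in it.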








  • Then why is 15/hr still such a hot topic of resistance when that number was a living-wage minimum decades ago? And where is the federal minimum wage still stuck? Then there’s the issue of underemployment, both in lowered expectations of what’s available for a person’s skill set and in how many hours are actually offered at whatever rate is paid (i.e., if you give an employee a job at 20/hr but only give them 15 hours a week, that’s not a living wage).

    There are a lot of problems beyond just wage growth, and I would suggest that even if wages have been increasing faster than inflation for a while now, that just means they’re “only” a little less too low. People wouldn’t be working multiple jobs per household member to make ends meet if wages were close to appropriate for the cost of living.
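    The 20/hr-at-15-hours point is quick arithmetic (the full-time comparison figure is my own illustrative addition):

    ```python
    # Part-time at a "good" rate, per the example: $20/hr, 15 hours/week.
    part_time_annual = 20 * 15 * 52
    print(part_time_annual)  # 15600

    # For comparison: full-time (40 hrs/week) at the contested $15/hr.
    full_time_annual = 15 * 40 * 52
    print(full_time_annual)  # 31200
    ```

    The higher hourly rate yields half the annual income, which is the underemployment point: the rate alone says nothing without the hours.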


  • As a side topic, that’s something that’s always fascinated me. Looking around at the various vehicles and shops that most people just see as background noise, there are so many industries that we take for granted unless we need that particular product or service. Lots of things done in the middle of the supply chain that the average consumer never sees. Commerce is an enormous machine with billions of hidden parts to it, and it’s amazing it functions like it does even in times of crisis.



  • One issue is that AI in its various forms makes it far easier than it used to be to use such a tool without understanding its limitations. Garbage in, garbage out still applies, but if the user can’t tell the difference, the garbage gets spread as quality work. This has led to the term “AI slop”, which has since morphed into a general “I don’t like this post” label.

    Another, bigger issue is the origin of the training data, which has unfortunately tainted good uses for these tools (when used within their limits, as stated before). I agree with this concern, but once LLMs and related AI became freely open to the public, that ship sailed. Even if a company could prove its AI was trained only on legitimately obtained information (which could make it more limited than the ones out there), would anyone believe them?

    A related issue is how the AI was trained (setting aside the source of the data). The very fact that LLMs were tuned to give agreeable, positive answers leads to the conclusion that this has long since moved from a research project in pursuit of AGI into a marketing ploy designed to make the best impression on an uninformed public for profit. This gets into the “AI slop” territory of results that seem good to the average user when they are not; but rather than slop, it’s deception.


  • They are not intelligent at all, so applying such a label from the study is anthropomorphizing them.

    The title suggests something the article itself doesn’t support. It’s not that the AI is presenting itself as more skilled than it actually is; rather, that impression is created or reinforced in users who don’t understand the limitations of the tool.

    To use an analogy that is a bit too extreme, if someone thinks that a chainsaw is a great tool when using it properly and tries to use it for other things, it’s not the chainsaw that’s overextending its abilities…

    Or to put it another way… it’s always prudent to eliminate the possibility of user error first, as that’s a likely source of the problem.

    Or another way… it’s always the human’s fault, until it’s not.