• Churbleyimyam@lemm.ee
    371 upvotes · 4 months ago

    I think AI has mostly been about luring investors into pumping up share prices rather than offering something of genuine value to consumers.

    Some people are gonna lose a lot of other people’s money over it.

    • themurphy@lemmy.ml
      123 upvotes · 4 months ago

      Definitely. Many companies have implemented AI without thinking with 3 brain cells.

      Great and useful implementation of AI exists, but it’s like 1/100 right now in products.

      • floofloof@lemmy.ca
        58 upvotes · 4 months ago

        If my employer is anything to go by, much of it is just unimaginative businesspeople who are afraid of missing out on what everyone else is selling.

        At work we were instructed to shove ChatGPT into our systems about a month after it became a thing. It makes no sense in our product, and many of us advised management it was irresponsible, since it gives people advice on very sensitive matters without any guarantee that the advice is any good. But no matter, we had to shove it in there, with small print to cover our asses. I bet no one even uses it, but sales can tell customers the product is “AI-driven”.

      • PerogiBoi@lemmy.ca
        44 upvotes · 4 months ago

        Before they laid me off, my old company laid off our entire HR and Comms teams in exchange for ChatGPT Enterprise.

        “We can just have an AI chatbot for HR and pay inquiries and ask Dall-e to create icons and other content”.

        A friend who still works there told me they’re hiring a bunch of “prompt engineers” to improve the quality of the AI outputs haha

        • themurphy@lemmy.ml
          21 upvotes · 4 months ago

          That’s an even worse ‘use case’ than I could imagine.

          HR should be one of the most protected fields against AI, because you actually need a human resource.

          And “prompt engineer” is so stupid. The “job” is only necessary because the AI doesn’t understand what you want well enough. The only productive person you could hire would be a programmer or someone who could actually tinker with the AI.

        • verity_kindle@sh.itjust.works
          10 upvotes · 4 months ago

          I’m sorry. Hope you find a better job on the inevitable downswing of the hype, when someone realizes that a prompt can’t replace a person in customer service. Customers will invest more time, even waiting in a purposely engineered hold-music hell, to have a real person listen to them.

    • SlopppyEngineer@lemmy.world
      85 upvotes · 4 months ago

      Yes, I’m getting some serious dot-com bubble vibes from the whole AI thing. But the dot-com boom produced Amazon, so every company is basically going all-in, hoping to be the new Amazon, even though most will end up like pets.com. It’s a risk they’re willing to take.

      • slaacaa@lemmy.world
        59 upvotes · 4 months ago

        “You might lose all your money, but that is a risk I’m willing to take”

        • visionary AI techbro talking to investors
        • SlopppyEngineer@lemmy.world
          13 upvotes · 4 months ago

          Investors pump money into a bunch of companies so that at least one of them making it big and paying back all the failed investments is almost guaranteed. That’s what taking risks is all about.

          • verity_kindle@sh.itjust.works
            9 upvotes · 4 months ago

            Sure, but it SEEMS that some investors are relying on buzzwords and hype, without research, ignoring the fundamentals of investing: besides the ever-evolving claims of the CEO, is the company well managed? What is their cash flow, and where will it be a year from now? Do the upper-level managers have coke habits?

            • slaacaa@lemmy.world
              15 upvotes · 4 months ago

              You’re right, but those fundamentals don’t really matter anymore; investors are buying hype and hoping to sell a bigger hype for more money later.

              • Aceticon@lemmy.world
                10 upvotes · 4 months ago

                Seeing the whole thing as Knowingly Trading in Hype is actually a really good insight.

                Certainly it neatly explains a lot.

                • rottingleaf@lemmy.world
                  16 upvotes · 4 months ago

                  Also called a Ponzi scheme: every participant knows it’s a scam but hopes to find a few more fools before it crashes and to leave with a positive balance.

          • Churbleyimyam@lemm.ee
            6 upvotes · 4 months ago

            If the whole sector turns out to be garbage it won’t matter which particular set of companies within it you invest in; you will get burned if you cash out after everyone else.

      • barsoap@lemm.ee
        7 upvotes · 2 downvotes · 4 months ago

        OpenAI will fail. StabilityAI will fail. CivitAI will prevail, mark my words.

    • peto (he/him)@lemm.ee
      36 upvotes · 4 months ago

      A lot of it is follow-the-leader-type bullshit. Companies in areas where AI is actually beneficial have been implementing it for years, quietly, because it isn’t something new or exceptional. It’s just the tool you use for solving certain problems.

      Investors going to bubble though.

    • spiderman@ani.social
      20 upvotes · 4 months ago

      Yeah, AI can make some products better, but most of the products that use it these days don’t actually need it. It’s annoying to use products that actively shovel AI in when they don’t even need it.

      • Lost_My_Mind@lemmy.world
        15 upvotes · 4 downvotes · 4 months ago

        Ya know what product MIGHT be better with AI?

        Toasters. They have ONE JOB, and everybody agrees their toaster is crap. But you’re not going to buy another toaster, because that too will be crap.

        How about a toaster that accurately and evenly toasts your bread, and then DOESN’T give you a heart attack at 5am when you’re still half asleep???

        IS THAT TOO MUCH TO ASK???

    • SLVRDRGN@lemmy.world
      14 upvotes · 4 months ago

      I tried to find the advert, but I see this on YouTube a lot - an Adobe AI ad which depicts, without shame, AI writing out a newsletter/promo for a business owner’s new product (cookies or ice cream or something), showing the owner putting no effort into their own product and a customer happily consuming because they were attracted by the thoughtless promo.

      How are producers/consumers okay with everything being so mediocre??

      • Churbleyimyam@lemm.ee
        6 upvotes · 4 months ago

        How are producers/consumers okay with everything being so mediocre??

        I’m not. My particular beef is with plastics and toxic materials and chemicals being ubiquitous in everything I buy. It’s a systemic problem that I can do almost nothing about, apart from making things myself out of raw materials.

      • MajorHavoc@programming.dev
        2 upvotes · 1 month ago

        How are producers/consumers okay with everything being so mediocre??

        “You’re always trying to make everything just a little bit worse so that you can feel good about having a lot more of it. I love it. It’s so human!” - The Good Place

    • Riskable@programming.dev
      2 upvotes · 4 months ago

      My doorbell camera manufacturer now advertises their products as using, “Local AI” meaning, they’re not relying on a cloud service to look at your video in order to detect humans/faces/etc. Honestly, it seems like a good (marketing) move.

  • Lvxferre@mander.xyz
    174 upvotes · 4 months ago

    As I mentioned in another post, about the same topic:

    Slapping the words “artificial intelligence” onto your product makes you look like one of those shady used-car salesmen: at best it’s misleading, at worst it’s actually true but poorly done.

  • oyo@lemm.ee
    160 upvotes · 1 downvote · 4 months ago

    LLMs: using statistics to generate reasonable-sounding wrong answers from bad data.

      • Blackmist@feddit.uk
        54 upvotes · 4 months ago

        And the system doesn’t know either.

        For me this is the major issue. A human is capable of saying “I don’t know”. LLMs don’t seem able to.

        • xantoxis@lemmy.world
          35 upvotes · 4 months ago

          Accurate.

          No matter what question you ask them, they have an answer. Even when you point out their answer was wrong, they just have a different answer. There’s no concept of not knowing the answer, because they don’t know anything in the first place.

          • Blackmist@feddit.uk
            18 upvotes · 4 months ago

            The worst for me was a fairly simple programming question. The class it used didn’t exist.

            “You are correct, that class was removed in OLD version. Try this updated code instead.”

            Gave another made up class name.

            Repeated with a newer version number.

            It knows what answers smell like, and the same with excuses. Unfortunately there’s no way of knowing whether it’s actually bullshit until you take a whiff of it yourself.

            • nilloc
              5 upvotes · 4 months ago

              So instead of Prompt Engineer, the more accurate term should be AI Taste Tester?

              From what I’ve seen you’ll need an iron stomach.

      • treadful@lemmy.zip
        13 upvotes · 4 months ago

        They really aren’t. Go ask about something in your area of expertise. At first glance, everything will look correct and in order, but the more you read the more it turns out to be complete bullshit. It’s good at getting broad strokes but the details are very often wrong.

        Now imagine someone that doesn’t have your expertise reading that answer. They won’t recognize those details are wrong until it’s too late.

        • Quereller@lemmy.one
          6 upvotes · 4 months ago

          That is about my experience too. I asked it for factual information in the field I work in. It didn’t give correct answers, or it gave protocols that were strange and would not have been successful.

      • GBU_28@lemm.ee
        6 upvotes · 2 downvotes · edited · 4 months ago

        With a proper framework, decent assertions are possible.

        1. It must cite the source and provide the quote, not just a summary.
        2. An adversarial review must be conducted.

        If that is done, the burden on the human is very low.

        That said, it’s STILL imperfect, but it’s leagues better than one-shot question and answer.

        • Aceticon@lemmy.world
          4 upvotes · edited · 4 months ago

          Except LLMs don’t store sources.

          They don’t even store sentences.

          It’s all a stack of massive N-dimensional probability spaces roughly encoding the probabilities of certain tokens (which are mostly but not always words) appearing after groups of tokens in a certain order.

          And all of that to just figure out “what’s the most likely next token”, an output which is then added to the input and fed into it again to get the next word and so on, producing sentences one word at a time.

          Now, if you feed it as input a long, very precise sentence taken from a unique piece, maybe you’re in luck and it will output the correct next word, but if you already have all that, you don’t really need an LLM to give you the rest.

          Maybe the “framework” you seek - which is quite akin to an indexer with a natural language interface - can be made with AI, but it’s not something you can do with LLMs alone, because their structure is entirely unsuited for it.
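That next-token loop can be sketched in a few lines. This is a toy illustration only: a made-up lookup table stands in for the model (a real LLM encodes its probabilities in billions of learned weights), and greedy selection stands in for sampling.

```python
# Toy stand-in for an LLM: context (tuple of tokens) -> probabilities
# of the next token. A real model encodes this in billions of learned
# weights over a high-dimensional space, not a lookup table.
TOY_MODEL = {
    ("the",): {"cat": 0.6, "dog": 0.4},
    ("the", "cat"): {"sat": 0.9, "ran": 0.1},
    ("the", "cat", "sat"): {"<end>": 1.0},
}

def next_token(context):
    """Pick the most likely next token (greedy decoding)."""
    probs = TOY_MODEL.get(tuple(context))
    if probs is None:  # a context the "model" has never seen
        return "<end>"
    return max(probs, key=probs.get)

def generate(prompt):
    """Autoregressive loop: each output token is appended to the input
    and the whole thing is fed back in to get the next one."""
    tokens = list(prompt)
    while tokens[-1] != "<end>":
        tokens.append(next_token(tokens))
    return tokens

print(generate(["the", "cat"]))  # ['the', 'cat', 'sat', '<end>']
```

Note there is no "source" stored anywhere in this structure, which is the point being made above: the output is just one probable token after another.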

          • GBU_28@lemm.ee
            2 upvotes · edited · 4 months ago

            A proper framework does, with a data store, indexing, and access functions.

            The cutting-edge work is absolutely using LLMs in post-RAG pipelines.

            Consumer-grade chat interfaces def do not do this.

            Edit: if you worry about topics like context windows, sentence splitting, or source extraction, you aren’t using a best-in-class framework anymore.
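A minimal sketch of the retrieval side of such a setup, with a naive word-count index standing in for a real vector store and no LLM at all (the document texts and ids here are invented for illustration). The point is that citations come from the data store, not from the model's weights:

```python
# Toy retrieval pipeline: the framework stores and indexes source
# documents so every answer can carry an exact quote and citation,
# rather than relying on an LLM's weights to reproduce sources.
SOURCES = {
    "doc1": "The class FooBar was removed in version 2.0 of the library.",
    "doc2": "Use BazQux instead of FooBar from version 2.0 onwards.",
}

def build_index(sources):
    """Naive inverted index: word -> set of doc ids containing it."""
    index = {}
    for doc_id, text in sources.items():
        for word in text.lower().split():
            index.setdefault(word.strip(".,?"), set()).add(doc_id)
    return index

def retrieve(index, query):
    """Rank doc ids by how many query words they contain."""
    scores = {}
    for word in query.lower().split():
        for doc_id in index.get(word.strip(".,?"), ()):
            scores[doc_id] = scores.get(doc_id, 0) + 1
    return sorted(scores, key=scores.get, reverse=True)

def answer(query):
    """Answer with the verbatim quote and a citation, so a human (or an
    adversarial second pass) can check the claim against the source."""
    hits = retrieve(INDEX, query)
    if not hits:
        return "I don't know."  # unlike a bare LLM, it can say so
    doc_id = hits[0]
    return f"{SOURCES[doc_id]} [source: {doc_id}]"

INDEX = build_index(SOURCES)
print(answer("When was FooBar removed?"))
```

In a real post-RAG pipeline an LLM would rewrite the retrieved quote into a fluent answer; the retrieval and citation layer is what keeps it checkable.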

  • Meron35@lemmy.world
    144 upvotes · 1 downvote · 4 months ago

    Market shows that investors are actively turned on by products that use AI

          • Riskable@programming.dev
            3 upvotes · 4 months ago

            When one of two things happens:

            • A new hype starts to replace it (can happen fast though!)
            • The hype starts to specialize into subcategories of the hype (e.g. AI images, AI videos, AI text generation)

            When “AI” hype dies down we are likely to see “AI” removed from various topics because enough people know and understand the hyped parent topic. It’ll just be “image generation”, “video generation”, “generated text”, etc.

    • rottingleaf@lemmy.world
      7 upvotes · 4 months ago

      Customers worry about what they can do with it, while investors and spectators and vendors worry about buzzwords. Customers determine demand.

      Sadly, what some of those customers want is to somehow improve their own business without thinking, and then they too care about buzzwords; that’s how the hype spreads.

    • Lucidlethargy@sh.itjust.works
      6 upvotes · 4 months ago

      There are different types of people in the market. The informed ones hate AI, and the uninformed love it. The informed ones tend to be the cornerstones of businesses, and the uninformed ones tend to be in charge.

      So we have… All this. All this nonsense. All because of stupid managers.

      • MajorHavoc@programming.dev
        1 upvote · 1 month ago

        But what if it actually is magic this time? Just this once!? And we miss the hype train?! (This is a sarcastic impression of real conversations I have had.)

  • snekerpimp@lemmy.world
    131 upvotes · 1 downvote · 4 months ago

    No shit, because we all see that AI is just technospeak for “harvest all your info”.

      • blarth@thelemmy.club
        26 upvotes · 4 months ago

        I refuse to use Facebook anymore, but my wife and others do. Apparently the search box is now a Meta AI box, and it pisses them off every time. They want the original search back.

        • nossaquesapao@lemmy.eco.br
          21 upvotes · edited · 4 months ago

          That’s another thing companies don’t seem to understand. A lot of them aren’t creating new products and services that use AI; they’re removing existing ones that people use daily and enjoy, and forcing some AI alternative on them. Of course people are going to be pissed off!

          • Krauerking@lemy.lol
            4 upvotes · 4 months ago

            We aren’t allowed new things. That might change their perfectly balanced money making machine.

            And making search worse so it can pretend to be an ex is not what I or anyone is looking for in the search box.

    • barsquid@lemmy.world
      25 upvotes · 4 months ago

      Yes the cost is sending all of your data to the harvest, but what price can you put on having a virtual dumbass that is frequently wrong?

    • DudeDudenson@lemmings.world
      13 upvotes · 4 months ago

      I doubt the general consumer thinks that. I’m sure most of them are turned away by the unreliability and by how ham-fisted most implementations are.

    • Capricorn_Geriatric@lemmy.world
      1 upvote · 4 months ago

      More like “instead of making something that gets the job done, expect our unfinished product to complain and not do whatever it’s supposed to”. Or just plain false advertising.

      Either way, not a good look and I’m glad it’s not just us lemmings who care.

  • BradleyUffner@lemmy.world
    115 upvotes · 1 downvote · 4 months ago

    LLM-based AI was a fun toy when it first broke. Everyone was curious and wanted to play with it, which made it seem super popular. Now that the novelty has worn off, most people are bored and unimpressed with it. The problem is that the tech bros invested so much money in it that they are unwilling to take the loss. They are trying to force it so that they can say they didn’t waste their money.

    • 2pt_perversion@lemmy.world
      55 upvotes · 3 downvotes · 4 months ago

      Honestly, they’re still impressive and useful; it’s just the hype-train overload, and the attempts to implement them in areas where they either don’t fit or don’t work well enough yet.

      • GratefullyGodless@lemmy.world
        19 upvotes · 1 downvote · 4 months ago

        AI does a good job of generating character portraits for my TTRPG games. But, really, beyond that I haven’t found a good use for it.

        • abracaDavid@lemmy.today
          9 upvotes · 4 months ago

          So far that’s been the best use of AI for me too. I’ve also used it to help flesh out character backgrounds, and then I just go through and edit it.

          • 2pt_perversion@lemmy.world
            11 upvotes · 4 months ago

            Yeah, exactly. As a tool that doesn’t need to be perfect to give you a starting point, it’s excellent. But companies sort of forgot the “as a tool” part and are just implementing AI outright in places it’s not ready for yet, like drive-thru windows or voice-only interface devices… it’s not ready for that shit currently (if it ever truly will be).

            • abracaDavid@lemmy.today
              2 upvotes · 4 months ago

              They are all completely half-baked products being rolled out before they’re ready because none of these billion dollar tech companies will allow a product to not immediately generate revenue.

              I’m really enjoying seeing the backlash of everyone unanimously being sick of having this unfinished tech shoved down our throats.

        • Mikina@programming.dev
          5 upvotes · 4 months ago

          One place where I found AI usefull is in generating search queries in JIRA. Not having to deal with their query language every time I have to change a search filter, but being able to just use the built in AI to query in natural language has already saved me like two or three minutes in total in the last two months.

        • netvor@lemmy.world
          2 upvotes · 3 downvotes · 4 months ago

          …also TTRPH, TTRPI, TTRPJ, TTRPK, TTRPL, TTRPM, TTRPN, TTRPO, TTRPP, TTRPQ, TTRPR, TTRPS, TTRPT, TTRPU, TTRPV, TTRPW, TTRPX, TTRPY and TTRPZ games.

          But beyond that, no good use, no siree.

          PS: spoiler

          that was WAY harder to type than I expected.

      • netvor@lemmy.world
        13 upvotes · 4 months ago

        Even in areas where they would fit it’s really annoying how some companies are trying to push it down our throats.

        It’s always some obnoxious UI element screaming its 3 example questions at me, and I always sigh and think: I have to assume you can only answer these 3 particular questions; why would I ask those questions; and when I ask UI questions I expect precise answers, so why would I want to use AI for that?

        I have no doubt that LLM’s have more uses than I can think of, but come on…

        I’m happy for studies like this. People who are trying to smear their AI all over our faces need to calm, the f…k, down.

    • Flying Squid@lemmy.world
      20 upvotes · 1 downvote · 4 months ago

      Many of us who are old enough saw it as an advanced version of ELIZA and used it with the same level of amusement until that amusement faded (pretty quick) because it got old.

      If anything, they are less impressive because tricking people into thinking a computer is actually having a conversation with them has been around for a long time.

      • 𝓔𝓶𝓶𝓲𝓮@lemm.ee
        2 upvotes · edited · 4 months ago

        So you want to tell me they all spent billions and built huge data centres that suck more power than a small country so we can all play with it, generate some cringy smut, and then toss it away?

        This is kinda insane if that’s how it will play out

        • Flying Squid@lemmy.world
          3 upvotes · 4 months ago

          Not the first time this has happened. Even recently. See NFTs. Venture capitalists hear “tech buzzword” and throw money at it because if they’re lucky, it’s the next Google. Or at least it gets an IPO and they can cash out.

          • 𝓔𝓶𝓶𝓲𝓮@lemm.ee
            4 upvotes · edited · 4 months ago

            Yeah, but the scale is bigger this time, and we could be doing something worthwhile with all these finite resources. It makes me a bit dizzy.

            • Flying Squid@lemmy.world
              3 upvotes · 4 months ago

              We could, but they don’t care about making the world a better place. They care about getting rich. And then if everything collapses, they can go to their private island or their doomsday vault or whatever and enjoy the apocalypse.

    • reddthat_209@reddthat.com
      10 upvotes · 4 months ago

      I agree with this; my sentiments exactly. We’re getting AI pushed at us from every direction when we never asked for it. I like to use it for certain things, going to it when needed. I don’t want it in everything, at least personally.

  • esc27@lemmy.world
    109 upvotes · edited · 4 months ago

    They’ve overhyped the hell out of it and slapped those letters on everything, including a lot of half-baked ideas. Of course people are tired of it and beginning to associate AI with bad marketing.

    This whole situation really does feel dot-commish. I suspect we will soon see an AI crash, and then a decade or so later it will be ubiquitous but far less hyped.

    • Vent@lemm.ee
      48 upvotes · 1 downvote · 4 months ago

      Thing is, it already was ubiquitous before the AI “boom”. That’s why everything got an AI label added so quickly, because everything was already using machine learning! LLMs are new, but they’re just one form of AI and tbh they don’t do 90% of the stuff they’re marketed as and most things would be better off without them.

    • rottingleaf@lemmy.world
      10 upvotes · 4 months ago

      What did they even expect, calling something “AI” when it’s no more “AI” than a Perl script determining whether a picture contains more red color than green or vice versa.

      Anything making some kind of determination via technical means, including MCs and control systems, has been called AI.

      When people start using the abbreviation as if it were “the” AI, naturally first there’ll be a hype of clueless people, and then everybody will understand that this is no different from what was before. Just lots of data and computing power to make a show.

    • xantoxis@lemmy.world
      25 upvotes · edited · 4 months ago

      They don’t care. At the moment AI is cheap for them (because some other investor is paying for it). As long as they believe AI reduces their operating costs*, and as long as they’re convinced every other company will follow suit, it doesn’t matter if consumers like it less. Modern history is a long string of companies making things worse and selling them to us anyway because there’s no alternatives. Because every competitor is doing it, too, except the ones that are prohibitively expensive.

      [*] Lol, it doesn’t do that either

      • simpleslipeagle@lemmynsfw.com
        17 upvotes · 1 downvote · 4 months ago

        Assuming MBAs can do math might be a mistake. I’ve worked on an MBA pet project that squandered millions in worker time and opportunity cost to save 30k mrc…

        • MataVatnik@lemmy.world
          6 upvotes · 1 downvote · 4 months ago

          I read an article saying that, of the top 10 Harvard MBA grads, 8 of them had gone on to tank the companies they were CEOs at. Or something ridiculous.

  • JCreazy@midwest.social
    77 upvotes · 4 months ago

    There are even companies slapping AI labels onto old tech with timers to trick people into buying it.

  • qx128@lemmy.world
    69 upvotes · 4 months ago

    I can attest this is true for me. I was shopping for a new clothes washer, and was strongly considering an LG until I saw it had “AI wash”. I can see relevance for AI in some places, but washing clothes is NOT one of them. It gave me the feeling LG clothes washer division is full of shit.

    Bought a SpeedQueen instead and been super happy with it. No AI bullshit anywhere in their product info.

  • Grandwolf319@sh.itjust.works
    64 upvotes · 1 downvote · 4 months ago

    I mean, pretty obvious if they advertise the technology instead of the capabilities it could provide.

    Still waiting for that first good use case for LLMs.

    • psivchaz@reddthat.com
      24 upvotes · 4 downvotes · 4 months ago

      It is legitimately useful for getting started with using a new programming library or tool. Documentation is not always easy to understand or easy to search, so having an LLM generate a baseline (even if it’s got mistakes) or answer a few questions can save a lot of time.

      • Grandwolf319@sh.itjust.works
        34 upvotes · 4 months ago

        So I used to think that, but I gave it a try as I’m a software dev. I personally didn’t find it that useful, as in I wouldn’t pay for it.

        Usually when I want to get started, I just look up a basic guide and just copy their entire example to get started. You could do that with chatGPT too but what if it gave you wrong answers?

        I also asked it more specific questions about how to do X in tool Y. Something I couldn’t quickly google. Well it didn’t give me a correct answer. Mostly because that question was rather niche.

        So my conclusion was that, it may help people that don’t know how to google or are learning a very well know tool/language with lots of good docs, but for those who already know how to use the industry tools, it basically was an expensive hint machine.

        In all fairness, I’ll probably use it here and there, but I wouldn’t pay for it. Also, note my example was chatGPT specific. I’ve heard some companies might use it to make their docs more searchable which imo might be the first good use case (once it happens lol).

        • BassTurd@lemmy.world
          5 upvotes · 1 downvote · 4 months ago

          I just recently got Copilot in VS Code through work. I typed a comment that said, “create a new model in sqlalchemy named assets with the columns a, b, c, d”. It couldn’t know the proper data types to use, but it output everything perfectly, including using my custom-defined annotations, except it used the same annotation for every column, which I then had to update. As a test, that was great, and Copilot also picked up a SQL query I had written in a comment to reference as I was making my models, and it generated that entire model for me as well.

          It didn’t do anything that I didn’t know how to do, but it saved on some typing effort. I use it mostly for its auto complete functionality and letting it suggest comments for me.
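          For anyone curious, the kind of output described above might look roughly like this (the model name, columns, and types here are invented placeholders, not Copilot’s actual output):

```python
# A sketch of the kind of model Copilot might generate from a comment like:
#   "create a new model in sqlalchemy named assets with the columns a, b, c, d"
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Asset(Base):
    __tablename__ = "assets"

    id = Column(Integer, primary_key=True)
    # Copilot can't know the right types, so it guesses one and repeats it;
    # the developer then fixes each column's type by hand afterwards.
    a = Column(String)
    b = Column(String)
    c = Column(String)
    d = Column(String)
```

          The point isn’t that the guess is right, it’s that the boilerplate is typed for you and only the details need correcting.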

          • Grandwolf319@sh.itjust.works
            link
            fedilink
            English
            arrow-up
            5
            ·
            4 months ago

            That’s awesome, and I would probably find those tools useful.

            Code generators have existed for a long time, but they are usually free. These tools actually cost a lot of money, and it costs way more to generate code this way than the traditional way.

            So idk if it would be worth it once the venture capitalist money dries up.

            • BassTurd@lemmy.world
              link
              fedilink
              English
              arrow-up
              3
              ·
              4 months ago

              That’s fair. I don’t know if I will ever pay my own money for it, but if my company will, I’ll use it where it fits.

            • bamboo@lemm.ee
              link
              fedilink
              English
              arrow-up
              1
              ·
              4 months ago

              What are these code generators that have existed for a long time?

                • bamboo@lemm.ee
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  ·
                  4 months ago

                  Neither of those seem similar to GitHub copilot other than that they can reduce keystrokes for some common tasks. The actual applicability of them seems narrow. Frequently I use GitHub copilot for “implement this function based on this doc comment I wrote” or “write docs for this class/function”. It’s the natural language component that makes the LLM approach useful.

        • Dran@lemmy.world
          link
          fedilink
          English
          arrow-up
          4
          arrow-down
          1
          ·
          4 months ago

          I’m actually working on a vector DB RAG system for my own documentation. Even in its rudimentary stages, it’s been very helpful for finding functions in my own code when I don’t remember exactly which project I implemented them in, but have a vague idea of what they did.

          E.g

          Have I ever written a bash function that orders non-semver GitHub branches?

          Yes! In your ‘webwork automation’ project, starting on line 234, you wrote a function that sorts Git branches based on WebWork’s versioning conventions.
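          The retrieval half of a system like that can be sketched in a few lines. A real setup would use a proper embedding model and a vector DB; here a crude bag-of-words vector stands in so the idea is self-contained, and the snippet names are invented examples:

```python
# Toy retrieval sketch: index short descriptions of your own code, then
# return the one closest to a natural-language question.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: a bag-of-words token count.
    return Counter(text.lower().replace("_", " ").split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical snippets standing in for functions scraped from old projects.
snippets = {
    "webwork_automation:sort_branches": "sort git branches by webwork version convention",
    "backup_scripts:rotate_logs": "delete log files older than thirty days",
}

def search(question: str) -> str:
    q = embed(question)
    return max(snippets, key=lambda k: cosine(q, embed(snippets[k])))

print(search("have I ever sorted github branches by version?"))
# -> webwork_automation:sort_branches
```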

        • markon@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          1
          ·
          4 months ago

          Huge time saver. I’ve had GPT doing a lot of work for me and it makes stuff like managing my Arch install smooth and easy. I don’t use OpenAI stuff much though. Gemini has gotten way better, Claude 3.5 Sonnet is beastly at code stuff. I guess if you’re writing extremely complex production stuff it’s not going to be able to do that, but try asking most people even what an unsigned integer is. Most people will be like “what?”

          • Grandwolf319@sh.itjust.works
            link
            fedilink
            English
            arrow-up
            3
            ·
            4 months ago

            but try asking most people even what an unsigned integer is. Most people will be like “what?”

            Why is that relevant? Are you saying that AI makes coding more accessible? I mean that’s great, but it’s like a calculator. Sure it helps people who need simple calculations in the short term, but it might actually discourage software literacy.

            I wish AI could just be a niche tool, instead it’s like a simple calculator being sold as a smartphone.

    • beveradb@lemm.ee
      link
      fedilink
      English
      arrow-up
      9
      ·
      4 months ago

      I’ve built a couple of useful products which leverage LLMs at one stage or another, but I don’t shout about it cos I don’t see LLMs as something particularly exciting or relevant to consumers. To me they’re just another tool in my toolbox, whose efficacy I weigh when trying to solve a particular problem. I think they are a new tool which is genuinely valuable when dealing with natural language problems.

      For example, in my most recent product, which includes the capability to automatically create karaoke music videos, the problem that for a long time prevented me from bringing that product to market was transcription quality: the ability to consistently get correct and complete lyrics for any song. Now, by using state-of-the-art transcription (which returns ~90% accurate results) plus an open-weight LLM with a fine-tuned prompt to correct the mistakes in that transcription, I’ve finally been able to create a product which produces high-quality results pretty consistently. Before LLMs that would’ve been much harder!
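      The shape of that two-stage pipeline (imperfect speech-to-text, then an LLM cleanup pass) can be sketched in a few lines. Here `llm()` is a stub standing in for a real call to an open-weight model; the lyric and the fix are invented for illustration:

```python
# Two-stage lyrics pipeline: a ~90%-accurate transcript goes in, and an LLM
# prompted to make minimal corrections cleans it up.
def llm(prompt: str) -> str:
    # Stub: a real call would go to an open-weight model with a fine-tuned
    # correction prompt. Here we fake one known correction.
    transcript = prompt.rsplit("\n", 1)[-1]
    return transcript.replace("sweet child of mine", "sweet child o' mine")

def correct_lyrics(raw_transcript: str) -> str:
    prompt = (
        "Fix transcription mistakes in these lyrics, changing as little as possible:\n"
        + raw_transcript
    )
    return llm(prompt)

print(correct_lyrics("sweet child of mine"))  # sweet child o' mine
```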

    • Empricorn@feddit.nl
      link
      fedilink
      English
      arrow-up
      8
      ·
      4 months ago

      Haven’t you been watching the Olympics and seen Google’s ad for Gemini?

      Premise: your daughter wants to write a letter to an athlete she admires. Instead of helping her as a parent, Gemini can magic-up a draft for her!

      • psivchaz@reddthat.com
        link
        fedilink
        English
        arrow-up
        10
        ·
        4 months ago

        On the plus side for them, they can probably use Gemini to write their apology blog about how they missed the mark with that ad.

    • NABDad@lemmy.world
      link
      fedilink
      English
      arrow-up
      5
      ·
      4 months ago

      I think the LLM could be decent at the task of being a fairly dumb personal assistant. An LLM interface to a robot that could go get the mail or get you a cup of coffee would be nice in an “unnecessary luxury” sort of way. Of course, that would eliminate the “unpaid intern to add experience to a resume” jobs. I’m not sure if that’s good or bad. I’m also not sure why anyone would want it, since unpaid interns are cheaper and probably more satisfying to abuse.

      I can imagine an LLM being useful to simulate social interaction for people who would otherwise be completely alone. For example: elderly, childless people who have already had all their friends die or assholes that no human can stand being around.

      • Grandwolf319@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        4
        arrow-down
        1
        ·
        4 months ago

        Is that really an LLM? Because using ML as a part of future AGI is not new; it was actually very promising and the cutting edge before ChatGPT.

        So like using ML for vision recognition, to know that a video of a dog contains a dog. Or just speech-to-text. I don’t think that’s what people mean these days when they say LLM. LLMs are more for storing data and giving you data back in the form of plausible guesses when prompted.

        ML has a huge future, regardless of LLMs.

          • nic2555@lemmy.world
            link
            fedilink
            English
            arrow-up
            7
            arrow-down
            1
            ·
            4 months ago

            Yes. But not all machine learning (ML) is LLM. Machine learning refers to the general use of neural networks, while large language models (LLMs) refer more to the ability of an application, or a bot, to understand natural language, deduce context from it, and act accordingly.

            ML in general has many more uses than just powering LLMs.

        • markon@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          2
          ·
          edit-2
          4 months ago

          Just look at AlphaProof. Lol, we’re all about to be outclassed. I’m sure everyone will still deride the bots. They could be actual ASI and, especially here in the US, we’d say “I don’t see any intelligence.” I wish our society, and all of us as individuals, would reflect on our limitations and our tiny, tiny insignificance on the grand scale. Our egos may kill us.

          P.S. I give us a 10% chance to make it to 2100 with any numbers or quality of life we’d consider remotely acceptable today. Pretty grim, but I think that’s the weight of the challenges we’re facing. Without AI I’d probably just say it was fucking hopeless, because we’ve had all the time we needed and all the tech we needed and we hardly ever fix anything. Always running a day late and a dollar short. This species has dreams too big for our collective britches. It’s always been a foolish endeavor, full of suffering and horrors. We’re here though, so I hope we at least give it a good go. It would be super lame to go out in a sputter and take most life on Earth with us.

          So now the question is whether we can use all these models to actually do something about our problems. Even LLMs seem quite good at pointing out how bad we are at using the tools we already have and know exactly how to use, because we’re always too busy arguing while the ship sinks!

          • markon@lemmy.world
            link
            fedilink
            English
            arrow-up
            2
            arrow-down
            1
            ·
            4 months ago

            COVID tried and a lot of people paid the price for being low information and not so bright. Sadly plenty of people who did the right things still got fucked by stupidity of others!

      • markon@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        arrow-down
        1
        ·
        4 months ago

        I feel like everyone who isn’t heavily interacting with or developing these things doesn’t realize how much better they are than human assistants. Shit, for one, it doesn’t cost me $20 an hour, doesn’t have to take a shit or get sick, and doesn’t talk back and refuse to do its fucking job. I do fucking think we need to keep saying a lot of shit like this, though, so we’ll know it ain’t an LLM, because I don’t know of an LLM that I can make output like this. I just wish most people were a little less stuck in their western opulence. It would really help us not get blindsided.

        • markon@lemmy.world
          link
          fedilink
          English
          arrow-up
          3
          arrow-down
          1
          ·
          edit-2
          4 months ago

          Mostly true before, now 99.99%. The charades are so silly because obviously as a worker all I care about is how much I get paid. That’s it.

          All the company organization will care about is that work gets done to their standards or above, and at the absolute lowest price possible.

          So my interests are diametrically opposed to their interests because my interest is to work as little as possible for as much money as possible. Their goal is to get as much work out of me as possible for as little money as possible. We could just be honest about it and stop the stupid games. I don’t give a shit about my employer anymore than they give a shit about me. If I care about the work that just means I’m that much more pissed they’re relying on my good will towards people who use their products and or services.

      • Flying Squid@lemmy.world
        link
        fedilink
        English
        arrow-up
        4
        ·
        4 months ago

        That’s because businesses are using AI to weed out resumes.

        Basically you beat the system by using the system. That’s my plan too next time I look for work.

    • EvilBit@lemmy.world
      link
      fedilink
      English
      arrow-up
      4
      arrow-down
      1
      ·
      4 months ago

      I actually think the idea of interpreting intent and connecting to actual actions is where this whole LLM thing will turn a small corner, at least. Apple has something like the right idea: “What was the restaurant Paul recommended last week?” “Make an album of all the photos I shot in Belize.” Etc.

      But 98% of GenAI hype is bullshit so far.

      • Grandwolf319@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        4
        ·
        edit-2
        4 months ago

        How would it do that? Would LLMs not just take input as voice or text and then guess an output as text?

        Wouldn’t the text output, which is supposed to be commands for action, need to be correct and not a guess?

        It’s the whole guessing part that makes LLMs not useful, so imo they should only be used to improve stuff we already need to guess.

        • EvilBit@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          ·
          4 months ago

          One of the ways to mitigate the core issue of an LLM, which is confabulation/inaccuracy, is to have a layer of either confirmation or simply forgiveness intrinsic to the task. Use the favor test. If you asked a friend to do you a favor and perform these actions, they’d give you results that you can either/both look over yourself to confirm they’re correct enough, or you’re willing to simply live with minor errors. If that works for you, go for it. But if you’re doing something that absolutely 100% must be correct, you are entirely dependent on independently reviewing the results.

          But one thing Apple is doing is training LLMs with action semantics, so you don’t have to think of its output as strictly textual. When you’re dealing with computers, the term “language” is much looser than you or I tend to understand it. You can have a “grammar” that is inclusive of the entirety of the English language but also includes commands and parameters, for example. So it will kinda speak English, but augmented with the ability to access data and perform actions within iOS as well.
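          That “grammar with actions” idea can be sketched like this: instead of treating model output as free text, parse it as a constrained command and dispatch it. The action names and the canned model reply below are invented for illustration; real systems constrain decoding to a schema along these lines:

```python
# Sketch of action-semantics dispatch: the model's reply is a structured
# command rather than prose, so the app can validate and execute it.
import json

ACTIONS = {
    "find_photos": lambda args: f"album of photos taken in {args['place']}",
    "find_message": lambda args: f"message from {args['sender']}",
}

def dispatch(model_output: str) -> str:
    cmd = json.loads(model_output)
    action = ACTIONS.get(cmd.get("action"))
    if action is None:
        # Anything outside the allowed grammar is rejected, not guessed at.
        raise ValueError(f"unknown action: {cmd.get('action')}")
    return action(cmd.get("args", {}))

# Pretend the LLM answered "Make an album of all the photos I shot in
# Belize" with a structured action instead of free text:
reply = '{"action": "find_photos", "args": {"place": "Belize"}}'
print(dispatch(reply))  # album of photos taken in Belize
```

          The guessing still happens, but it is confined to filling slots in a command the app already knows how to execute, which is much easier to confirm or reject than free text.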

    • pumpkinseedoil@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      4
      arrow-down
      2
      ·
      4 months ago

      LLM have greatly increased my coding speed: instead of writing everything myself I let AI write it and then only have to fix all the bugs

      • Grandwolf319@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        5
        ·
        4 months ago

        I’m glad. Depends on the dev. I love writing code but debugging is annoying, so I would prefer to take longer writing if it means fewer bugs.

        Please note I’m also pro code generators (like emmet).

  • answersplease77@lemmy.world
    link
    fedilink
    English
    arrow-up
    63
    ·
    4 months ago

    I literally uninstalled and disabled every AI process and app in that latest galaxy AI update, which was the whole update btw. my reasons are:

    1- privacy and data sharing.

    2- the battery, CPU, and RAM consumed by AI bloatware running in the background 24/7.

    3- it was changing and doing things which I didn’t want, especially in the gallery photo albums and camera AI modes.

    • squidspinachfootball@lemm.ee
      link
      fedilink
      English
      arrow-up
      15
      ·
      4 months ago

      I was considering a new Samsung phone - is that baked into it? (Assuming you’re talking Samsung anyway, based on the galaxy name)

      • CileTheSane@lemmy.ca
        link
        fedilink
        English
        arrow-up
        23
        arrow-down
        1
        ·
        4 months ago

        Samsung is a nightmare, don’t purchase their products.

        For example: I used to have a Samsung phone. If I plugged it into the USB port on my computer Windows Explorer would not be able to see it to transfer files. My phone would tell me I need to download Samsung’s drivers to transfer files. I could only get them by downloading Samsung’s software. Once I installed the software Windows Explorer was able to see the device and transfer files. Once I uninstalled the software Windows Explorer couldn’t see the device again.

        Anything Samsung can do in your region to insert themselves between you and what you are trying to do they will do.

        • squidspinachfootball@lemm.ee
          link
          fedilink
          English
          arrow-up
          4
          ·
          4 months ago

          The software bloat is not dissimilar to what I’ve heard in the past, but I’d forgotten since I haven’t researched in depth yet. Which phones do we prefer today? Loosely, off the top of my head: less bloat/intrusiveness, a nice camera, battery life enough for a day, and maybe on the smaller side to fit one hand are probably what I’ll be looking into.

          • CileTheSane@lemmy.ca
            link
            fedilink
            English
            arrow-up
            3
            ·
            4 months ago

            Apparently Pixel is the easiest to install an alternative OS on, going to start looking into that soon.

            • squidspinachfootball@lemm.ee
              link
              fedilink
              English
              arrow-up
              1
              ·
              4 months ago

              I’ve heard good things about Graphene OS, but also deviating from the “stock” experience might make it more difficult to do certain things… like biometrics for banking or something? Not sure myself. Will look into it too, good idea.

            • squidspinachfootball@lemm.ee
              link
              fedilink
              English
              arrow-up
              1
              ·
              4 months ago

              Ooo I haven’t heard of Ulefone before, I see some of their phones have a built in thermal camera? That sounds cool. How’s the Android/software experience? I’m not familiar with the Chinese phone lines, do they have their own bloat like Samsung?

              • Ilovethebomb@lemm.ee
                link
                fedilink
                English
                arrow-up
                2
                ·
                4 months ago

                No bloatware, although mine has a “feature” called Duraspeed I need to uninstall that restricts background applications, including fitness tracking ones I actually want running, and notifies me multiple times per day about this.

                Them and Doogee I really like, especially since the phones don’t need to be in a case.

      • Wintex@lemm.ee
        link
        fedilink
        English
        arrow-up
        9
        arrow-down
        2
        ·
        4 months ago

        To give you a second opinion from the other guy, I’ve had quite a few Samsungs in a row at this point. From Galaxy S2 to S23Ultra skipping years between every purchase.

        They are effectively the premium vendor of Android, at least for western audiences. The midrange has some good ones, but other companies do well there too. At the high end, Samsung might lose out a bit to Google on images of people, but the phones Samsung sells are well built, have a long support life, and have lots of features that usually end up being imported into AOSP and/or Google’s own version of Android. The last few generations are the Apple of Android.

        The AI features they’ve added can be run on-device if you want, and idk what the other guy is talking about, but the AI features aren’t that obnoxiously pushed on my device, the S23 Ultra. I have some things on, most things off. Then again, I’ve used HTC for a few years and an iPhone for two weeks, so except for helping my dad with his Pixel 6a while that device lasted, I’ve not really tried other brands. The added customization on Samsung is kind of a problem for me, because I don’t feel like changing brands after being able to customize so much out of the box.

        And I’ve never had issues connecting to a simple Windows computer, given that the phone has always been able to use the normal Plug-and-play driver that is there already. If you have a macbook like I do, it’s a bit cringe, but that’s a macbook issue moreso.

        • CileTheSane@lemmy.ca
          link
          fedilink
          English
          arrow-up
          4
          ·
          4 months ago

          the Apple of Android

          And here I thought I was being critical of them.

          You are right of course, Samsung is very much like Apple. And if you don’t care about a company trying to lock you into their software, inserting themselves in between everything you’re trying to do, and denying you control over your own device, then I’m sure it works just fine.

          • Wintex@lemm.ee
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            1
            ·
            4 months ago

            You are framing the issue to read the way you want it to be read. Of the customization and software options I am currently using, I was able to make 90% work with a rooted phone and a combination of many open source tools. Now I get 100%, without theming breaking randomly, without Bluetooth instability, without having to reset the phone every time I update to a new version, and without the random issues I had with banking apps and others. I have control over my device; stop dooming lmaooo. People use devices that fit their needs.

            • CileTheSane@lemmy.ca
              link
              fedilink
              English
              arrow-up
              1
              ·
              4 months ago

              The customization and software options I am currently using, I have been able to make 90% of it work with a rooted phone and a combination of many open source tools and more.

              When I was using Windows I was able to get it to work 90% the way I wanted it to with a combination of open source tools, and help online disabling the bullshit. The point is I shouldn’t have to put that much effort fighting my OS to get it working the way I want it to.

              With a Samsung phone maybe I can avoid their bullshit by rooting the phone and finding open source software, but I’d rather just go with a different company and not have the hassle.

              stop dooming lmaooo.

              “This company has shit business practices, you should use someone else” is not ‘dooming’.

              People use devices that fit their needs.

              Yes, and I’m pointing out why Samsung might not fit their needs in case they are unaware.

        • FatCrab@lemmy.one
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          1
          ·
          4 months ago

          I’ll second this experience. Pricing aside (and even then, because of their new recycling policy, I was able to replace an old galaxy nearly the size of a tablet with a new flip-- that has VERY surprisingly become my favorite phone I’ve ever owned-- for like a hundred bucks), I’ve never had complaints about my Samsung phone and wearables that weren’t general to all smartphones. And the easy integrations between my watch, phone, and earbuds, all Samsung, is really great.

      • answersplease77@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        ·
        edit-2
        4 months ago

        Yes. No root required, nor recommended for Samsung devices. In short: enable developer mode in the phone settings, then connect with adb (Android platform tools) to uninstall and disable any system app. You can also change colors, phone behaviors, properties, and the overall look, and install or uninstall apps which you could not before… and so many other things.
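        The core of that workflow looks something like this (the package name is a placeholder, so list your own packages first; the guard makes this a no-op on machines without adb installed):

```shell
# Debloat sketch: remove or disable a system app for the current user via adb.
# com.example.bloatware is a placeholder; find real names with "pm list packages".
if command -v adb >/dev/null 2>&1; then
    adb devices                               # confirm the phone is connected and authorized
    adb shell pm list packages | sort | head  # browse installed package names
    # Remove for user 0 only (the app comes back after a factory reset):
    adb shell pm uninstall --user 0 com.example.bloatware || true
    # Or just disable it, which is easier to undo later:
    adb shell pm disable-user --user 0 com.example.bloatware || true
else
    echo "adb not found; install Android platform-tools first"
fi
```

        Because `--user 0` only removes the app for the current user rather than from the system partition, nothing here needs root, and a factory reset undoes it all.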

    • time_fo_that@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      ·
      4 months ago

      Did it help with battery life? My S24U has not been getting the greatest battery life lately and I wonder if this is why.

      • answersplease77@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        4 months ago

        I don’t know about the AI stuff specifically; check your battery usage to see which process is doing that. But yes, debloating in general makes your phone’s battery last longer and, with the help of a few more tricks, also makes it faster. There are thousands of no-root-required debloating tutorials online.

  • Sarmyth@lemmy.world
    link
    fedilink
    English
    arrow-up
    58
    ·
    4 months ago

    I’ve learned to hate companies that replaced their support staff with AI. I don’t mind if it supplements easy stuff, that should take like 15 seconds, but when I have to jump through a bunch of hoops to get to the one lone bastard stuck running the support desk on their own, I start to wonder why I give them any money at all.

  • Captain Aggravated@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    57
    arrow-down
    1
    ·
    4 months ago

    “AI” is certainly a turn-off for me, I would ask a salesman “do you have one that doesn’t have that?” and I will now enumerate why:

    1. LLMs are wrongness machines. They do have an almost miraculous ability to string words together to form coherent sentences but when they have no basis at all in truth it’s nothing but an extremely elaborate and expensive party trick. I don’t want actual services like web searches replaced with elaborate party tricks.

    2. In a lot of cases it’s being used as a buzzword to mean basically anything computer-controlled or networked. Last time around, they were using the word “smart” to mean that. A clothes dryer that can sense the humidity of the exhaust air to know when the clothes are dry isn’t any more “AI” than my 90’s microwave that can sense the puff of steam from a bag of popcorn. This is the kind of outright dishonest marketing I’d like to see fail so spectacularly that people in the advertising business go missing over it.

    3. I already avoided “smart” appliances and will avoid “AI” appliances for the same reasons: The “smart” functionality doesn’t actually run locally, it has to connect to a server out on the internet to work, which means that while that server is still up and offering support to my device, I have a hole in my firewall. And then they’ll stop support ten minutes after the warranty expires and the device will no longer work. For many of these devices there’s no reason the “smart” functionality couldn’t run locally on some embedded ARM chip or talk to some application running on a PC that I own inside my firewall, other than “then we don’t get your data.”

    4. AI is apparently consuming more electricity than air conditioning. In fact, I’m not convinced that power consumption isn’t the selling point they’re pushing at board meetings. “It’ll keep our friends in the pollution industry in business.”