• Fisch · ↑ 86 · 7 months ago

    That’s probably a real answer from someone on Quora then

    • bstix@feddit.dk · ↑ 66 · 7 months ago

      What’s the point of having an AI run the search and present the answer it found, when you just ran the search yourself and get the AI’s finding presented back to you?

      At this point, AI helpers are just a layer that hides the details of the original search. It’s useless for this. AI is wonderful for lots of stuff, but this just isn’t it. I used to laugh when people used the Google search box to find Google so they could search in Google, but that is exactly what AI is doing for us now.

      • nikita@sh.itjust.works · ↑ 19 · 7 months ago

        Plus the insane power consumption for such a marginally useful feature. Especially given that it’s on by default for everyone using Google (as I understand it).

        It’s almost like the feature is not ready but they need to show off to their investors anyway. At the cost of user experience and the environment.

        At least with ChatGPT you have to consciously go to their website and use it, rather than it being the first result of a fucking internet search.

        • bstix@feddit.dk · ↑ 1 · 7 months ago

          Yes, the search engine AI is a very expensive and shitty filter.

          Unfortunately, with SEO going nuts these days, it might be necessary to have some kind of spam filter for searching the web, just to avoid some of the enshittification created by AI in the first place. Like, the AI goes to Quora and Reddit for answers instead of the marketing links, so at least the answers are less commercial.

          Obviously these “human” sources are already being enshittified too, or will be eventually, with half the posts there also being marketing in one way or another.

          I dislike it, but flooding the web with useless crap may be the key to making some better alternative…

      • RecallMadness@lemmy.nz · ↑ 4 · 7 months ago

        More eyes on your website means fewer on other websites, making your adverts more valuable.

        And when it doesn’t work, it doesn’t matter, because you run the advertising on the other websites too. Bonus: you can penalise rankings for websites that don’t use your advertising network.

      • AFK BRB Chocolate@lemmy.world · ↑ 2 · 7 months ago

        Was having a related conversation with an employee this morning (I manage a software engineering organization). He asked an LLM how to separate the parts of a date in Excel, and got a pretty good explanation of how to do it with the text-to-columns wizard, and also how to use a formula to get each part. He was happy because he felt it would have taken him much longer to figure it out himself.

        I was saying I thought that was a good use of an LLM - it’s going to give a tailored answer - but my worry is that people will do less scrubbing of an answer coming from an AI than one they saw on a forum. I said we should think of it like a tailored Google search.

        For comparison, I googled “Excel formula separate parts of a date” and one of the top results was a forum discussion that had the exact solutions the LLM gave, using the same examples. On the one hand, to get it from the forum you had to wade through all the wrong answers and discussions. On the other hand, that discussion puts the given answer in the context of a bunch of others that are off the mark, which I think makes people less likely to assume it’s correct.
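        For anyone curious, the formula side of that answer is usually Excel’s YEAR, MONTH and DAY functions on a date cell (e.g. =YEAR(A1)). The same idea in Python, as a rough sketch with a made-up input date:

```python
from datetime import datetime

# Made-up input date, just for illustration. Excel's
# YEAR()/MONTH()/DAY() do the same split on a date cell.
d = datetime.strptime("2024-03-17", "%Y-%m-%d")
year, month, day = d.year, d.month, d.day
print(year, month, day)  # 2024 3 17
```

Once the text is parsed into an actual date value, the parts fall out for free, which is the point of doing a real conversion rather than string slicing.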

        In any case, it’s still just synthesizing from or regurgitating training data.

        • bstix@feddit.dk · ↑ 2 · 7 months ago

          I think LLMs are better for more fluffy stuff, like writing speeches etc.

          Excel solutions are often very specific. A vague question like separating a date can be solved in many ways: with a variety of formulas, the text-to-columns wizard, VBA, import queries, or even just formatting, all depending on what you really need, what the input is, what locale is used, and other things.

          The text-to-columns method is great because it transforms whatever the input is into a date type, making it possible to treat it as an actual date and do calculations with it. It’s not always the right solution though, for instance if the input is ambiguous.
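          To illustrate the ambiguity (a hypothetical example, not from the thread): a string like 01/02/2024 is a valid date under both day-first and month-first conventions, which is exactly where a one-size-fits-all answer falls over. In Python terms:

```python
from datetime import datetime

# The same text parses to two different valid dates
# depending on the locale convention assumed.
s = "01/02/2024"
day_first = datetime.strptime(s, "%d/%m/%Y")    # 1 February 2024
month_first = datetime.strptime(s, "%m/%d/%Y")  # January 2, 2024
print(day_first.date() == month_first.date())  # False
```

No tool can resolve that from the text alone; you have to know where the data came from.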

          It’s fine that he learned to use this method, but I wonder what he’d ask the LLM in a case where it isn’t the right solution, and what it’ll come up with then. He didn’t actually learn to separate a date from the input. He learned to use the text import wizard.

          In my experience it’s preferable to learn these things at a more basic level, if only to be able to search more specifically for the right answer, because there is a specific answer. Having a language model run through a bunch of solutions and present the most popular one might just be a waste of time and lead you on a wild goose chase.

          • AFK BRB Chocolate@lemmy.world · ↑ 2 · 7 months ago

            You might have missed where I said it explained both the text to columns wizard and a formula. He used the formula, which is what he was looking for. He’s a top notch software developer, he just doesn’t use Excel much.

            But I agree with your broader point. I keep having to remind people that the “LM” part is for “language model.” It’s not figuring anything out, it’s distilling what an answer should look like. A great example is to ask one for a mathematical proof that isn’t commonly found online - maybe something novel. In all likelihood, it’s going to give you one, and it will probably look like the right kind of stuff, but it will also probably be wrong. It doesn’t know math (it doesn’t know anything), it just has a model of what a response should look like.

            That being said, they’re pretty good for a number of things. One great example is lesson plans. From what I understand, most teachers now give an LLM the coursework and ask it to generate a lesson plan. Apparently they do an excellent job and save many hours of work. Anything that involves summarizing information is good, especially since the source material you give it constrains the output.