• WhiteOakBayou@lemmy.world · 1 year ago

    At some point in the last two years I completely stopped using Google search in the browser and just use Google Maps to find businesses or DDG for searches. Actual Google search just has too many sponsored or promotional links.

    • WeeSheep@lemmy.world · 1 year ago

      I just searched for local restaurants near me and tried to sort by distance; the first option was 800 miles away, the second was 600 miles away. It’s not just Google search getting worse.

        • Daft_ish@lemmy.world · 11 months ago

          It was over for me when I opted out of some of their data-tracking shit and they started captcha’ing me every time I browsed there. Like wtf Google, what are you anymore? Sounds dumb, but them changing the banner every week was the start of the end.

          • Gormadt@lemmy.blahaj.zone · 11 months ago

            Changing the banner for like holidays and anniversaries of things isn’t an issue for me IMO

            But yeah all their tracking shit that you can’t opt out of is a big problem and a big part of why I’m pulling away from Google as much as I can

      • TheGrandNagus@lemmy.world · 11 months ago

        Here’s the thing: Google has changed. Over time, they’ve restructured themselves, initially on purpose, and now they’re facing the consequences of that.

        Google genuinely does still have amazing programmers and engineers.

        The trouble is that their expertise is in crafting systems that harvest personal information; expertise in other areas has been left to rot because there’s no point in improving them.

        Gmail is already entrenched, as are Search, YouTube, Maps, Android, etc.

        They aren’t going to attract significantly more customers, so their main avenue for continued growth has been to become better at harvesting and processing data.

        For a while that was fine, but now that this expertise has been lost, Google can’t make good products. They don’t have the ability to do so. It’s not that they don’t want people to use their software and think “wow, this is actually pretty great”; it’s that they genuinely can’t do it anymore. Not unless the product you want is a telemetry system, in which case I doubt you’ll find anybody who can do a more stellar job.

        It’s part of why Google starts and then kills so many projects. They want to expand to collect more data, but they no longer have the ability to create good services, so what comes out is an advanced data collector with a sub-par app/website on top of it. The company just isn’t structured to make things any other way.

      • umbrella@lemmy.ml · 11 months ago

        They are probably putting sponsored results above the legit stuff.

        Didn’t someone get caught doing that recently?

    • ColeSloth · 11 months ago

      Pretty soon, within a few years, the internet will be almost completely ruined. AI bots will have spammed everything. Searches and web pages will be entirely faked BS. Reddit and Lemmy will have enough AI bots commenting and pushing agendas/products that no one will have a clue who’s a real person. Information that’s true will be almost impossible to verify online.

      In short, if you think the web has gotten bad now, you ain’t seen nothin yet.

      • ParanoiaComplex@lemmy.world · 11 months ago

        I agree with the sentiment, but lack of AI has not stopped SEO hacking in the past. Sure, it will help them go further, but there are already tons of garbage websites hacking their way into the top 1-5 results of any search.

        • rottingleaf@lemmy.zip · 11 months ago

          In the past, I remember it made using search engines less rewarding than using web directories, web rings, asking people on forums, etc. That was slower, but it gave you results (and acquaintances), while using search meant looking through dozens of pages of results that were mainly SEO.

        • ColeSloth · 11 months ago

          The top results pages, sure. I believe it’s going to take over the top 500, along with flooding places like Lemmy and Reddit.

      • lloram239@feddit.de · 11 months ago

        I am more optimistic on that one. AI provides a pretty clear way out of this, since it allows you to automatically detect the bullshit. Meaning either the bullshit has to rise so much in quality that it is indistinguishable from good content, in which case it would not be bullshit anymore, or it will get filtered. AI can also transform bad websites into good ones, like a super-powered ReaderMode, AdBlock and more all rolled into one, so a lot of the “let’s plaster everything with ads” strategy will lose effectiveness.
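
        A minimal sketch of what that filtering layer could look like, assuming some LLM endpoint behind a placeholder ask_llm() function; the prompt wording and the truncation limit are made up for illustration:

        ```python
        # Fetch a page, ask a model whether it is substantive, and either drop it
        # or keep only a condensed version (the "super-powered ReaderMode" idea).
        # ask_llm is a stand-in for whatever LLM API you actually use.
        import requests

        def ask_llm(prompt: str) -> str:
            """Placeholder for a real LLM call (hosted API, local model, etc.)."""
            raise NotImplementedError("wire up your model of choice here")

        def filter_page(url: str) -> str | None:
            html = requests.get(url, timeout=10).text
            verdict = ask_llm(
                "Is the following page substantive content or SEO filler / ad bait? "
                "Answer SUBSTANTIVE or FILLER.\n\n" + html[:20000]
            )
            if "FILLER" in verdict.upper():
                return None  # discard it, like a spam filter would
            # Keep only the useful part of the page.
            return ask_llm("Summarize only the factual content of this page:\n\n" + html[:20000])
        ```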

        The problem over the last decade was that Google completely lost interest in being a search engine; they are just an ad company, and as long as search leads you to more ads, they are quite happy. So the user experience went down the toilet.

        The real problem with AI is that it will remove the incentive for the authors. Content producers want to get paid; with AI you can just extract the information from an article without ever viewing the article or the ads around it.

        • EmergMemeHologram@startrek.website · 11 months ago

          I think it’s just a new world for spam.

          At some point, probably soon, AI will generate so much content that it becomes untenable to store all the scraped data.

          We’ll also reach a point where it becomes much more costly to parse the data for AI spam + trustworthiness + topics. If you need LLMs just to filter spam, that is a large step up in costs and infrastructure vs. current methods (rough numbers sketched below).

          When that happens, what happens to search? Either the quality will have to degrade or the margins will drop off sharply.
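
          To put rough numbers on that step up in cost, here is a back-of-envelope sketch; every figure in it is an assumption invented for illustration, not a real price or crawl volume:

          ```python
          # Rough cost comparison: classic heuristic spam filtering vs. running every
          # crawled page through an LLM. All numbers are illustrative assumptions.
          PAGES_PER_DAY = 1_000_000_000       # assumed crawl volume
          TOKENS_PER_PAGE = 1_500             # assumed average page length in tokens
          LLM_COST_PER_MTOKEN = 0.50          # assumed $ per million input tokens
          CLASSIC_COST_PER_MPAGE = 5.00       # assumed $ per million pages, heuristics

          llm_cost = PAGES_PER_DAY * TOKENS_PER_PAGE / 1_000_000 * LLM_COST_PER_MTOKEN
          classic_cost = PAGES_PER_DAY / 1_000_000 * CLASSIC_COST_PER_MPAGE

          print(f"LLM filtering:     ${llm_cost:,.0f} per day")      # ~$750,000
          print(f"Classic filtering: ${classic_cost:,.0f} per day")  # ~$5,000
          # With these made-up figures the LLM pass is two orders of magnitude
          # more expensive, which is the kind of step up described above.
          ```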

        • ColeSloth · 11 months ago

          They have already been trying to use AI to combat and identify AI in college and high school papers. So far it’s been severely ineffective. AI has gotten pretty good at writing out a sentence or two that looks like it’s real. If AI improves enough, I doubt there’ll be much of a way to identify it at all.

          • lloram239@feddit.de · 11 months ago

            It’s not about identifying AI or even spam, but about extracting useful information. Are the claims made in a source backed by other sources? Do they contradict information from trusted sources? That’s all stuff an AI can reason about and then use to discard the source as junk or condense it down to the useful information in it.

            Basically, you completely skip browsing the web yourself and just use the AI to find what you want. Think of it like some IMDb or Wikipedia, but covering everything and written and curated by AI. When the AI doesn’t already know some fact, it goes crawling the web to find it for you, expanding its knowledge base in the process.

            Or look at the ship computer from Star Trek: you don’t see people there browsing the web; you see them getting data in exactly the format they need, and they can reformat and filter it as needed.

            At the moment there are still some technical hurdles; the AI systems we have are all still a little too stupid for this. But that seems to be the direction we are heading: things like summarizer bots already do a pretty good job, and ChatGPT is reasonably good at answering basic questions and reformatting the output the way you need it. It’s only a matter of time until it gets good enough that you couldn’t do a better job yourself.
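
            A toy sketch of that loop, just to make the idea concrete; web_search, fetch and ask_llm are hypothetical stand-ins for whatever search API and model you would actually use:

            ```python
            # Check a local knowledge base first; otherwise crawl, ask the model to
            # cross-check the sources against each other, and cache the condensed answer.
            knowledge_base: dict[str, str] = {}

            def ask_llm(prompt: str) -> str: ...           # stand-in for an LLM call
            def web_search(query: str) -> list[str]: ...   # stand-in for a search API
            def fetch(url: str) -> str: ...                # stand-in for an HTTP fetch

            def answer(question: str) -> str:
                if question in knowledge_base:             # already known: no browsing at all
                    return knowledge_base[question]

                pages = [fetch(url) for url in web_search(question)[:5]]
                prompt = (
                    "Answer the question using only claims that at least two of these "
                    "sources agree on, and flag anything they contradict each other on.\n\n"
                    f"Question: {question}\n\nSources:\n" + "\n---\n".join(pages)
                )
                result = ask_llm(prompt)
                knowledge_base[question] = result          # expand the knowledge base
                return result
            ```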

            • ColeSloth · 11 months ago

              You’re looking at it in a flawed manner. AI has already been making up sources and names to state things as facts. If there are a hundred websites claiming the earth is flat and you ask an AI whether the earth is flat, it may tell you it is flat and cite those websites. It’s already been happening. Then imagine more opinionated things than hard, observable scientific facts. Imagine a government using AI to shape opinion and claim there was no form of insurrection on Jan 6th. Thousands of websites and comments could quickly be fabricated to confirm that it was all made up, burying the truth in obscurity.

              • lloram239@feddit.de · 11 months ago

                You have plenty of literature that can act as ground truth. This is not a terribly hard problem to solve; it just requires actually focusing on it, which so far simply hasn’t been done. ChatGPT is just the first “look, this can generate text”. It was never meant to do anything useful by itself or stick to the truth. That all still has to be developed. ChatGPT simply demonstrates that LLMs can process natural language really well. It’s the first step in this, not the last.

                • ColeSloth · 11 months ago

                  Sounds like you’re arguing against yourself, now.

    • clearleaf@lemmy.world · 11 months ago

      I think this is part of why Google holds on to YouTube despite it not making them money. Without it, Google would just be the maps and email company. They would completely lose the appearance of “owning” any part of the internet.