None of what I write in this newsletter is about sowing doubt or “hating,” but a sober evaluation of where we are today and where we may end up on the current path. I believe that the artificial intelligence boom — which would be better described as a generative AI boom — is (as I’ve said before) unsustainable, and will ultimately collapse. I also fear that said collapse could be ruinous to big tech, deeply damaging to the startup ecosystem, and will further sour public support for the tech industry.

Can’t blame Zitron for being pretty downbeat in this - given the AI bubble’s size and side-effects, it’s easy to see how its bursting could have some cataclysmic effects.

(Shameless self-promo: I ended up writing a bit about the potential aftermath as well)

  • s3p5r@lemm.ee · +22 · 2 months ago

    I don’t toil in the mines of the big FAANG, but this tracks with what I’ve been seeing in my mine. I also predict it will end with layoffs and companies collapsing.

    Zitron thinks a lot about the biggest companies and how it will ultimately hurt them, which is reasonable. But I think that focus ironically downplays the scale of the bubble and, in turn, the impact of its bursting.

    The expeditions into OpenAI’s financials have been very educational. If I were an investigative reporter, my next move would be to look at the networks created by venture capitalists and what is happening inside the companies who share the same patrons as OpenAI. I don’t say that as someone who works in finance, just as someone who carefully watches organizational politics.

  • Architeuthis@awful.systems · +21 · 2 months ago (edited)

    On each step, one part of the model applies reinforcement learning, with the other one (the model outputting stuff) “rewarded” or “punished” based on the perceived correctness of its progress (the steps in its “reasoning”), and altering its strategies when punished. This is different to how other Large Language Models work in the sense that the model is generating outputs, then looking back at them, then ignoring or approving “good” steps to get to an answer, rather than just generating one and saying “here ya go.”
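
    A minimal sketch of what that description seems to cash out to, i.e. some kind of process-reward loop. The step generator and the verifier below are hypothetical stand-ins; OpenAI hasn’t published o1’s actual mechanism:

        import random

        def propose_steps(chain: list[str], n: int = 4) -> list[str]:
            # stand-in for the model sampling n candidate next reasoning steps
            return [f"step {len(chain)}, candidate {i}" for i in range(n)]

        def step_reward(chain: list[str], step: str) -> float:
            # stand-in for a learned verifier scoring a step's perceived
            # correctness (the "rewarded"/"punished" part of the quote)
            return random.random()

        def reason(question: str, max_steps: int = 5) -> list[str]:
            chain = [question]
            for _ in range(max_steps):
                candidates = propose_steps(chain)
                score, best = max((step_reward(chain, s), s) for s in candidates)
                if score < 0.2:       # "punished": discard the step and retry
                    continue
                chain.append(best)    # "approved": keep the step and extend
            return chain

        print(reason("What is 17 * 24?"))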

    Every time I’ve read how chain-of-thought works in o1 it’s been completely different, and I’m still not sure I understand what’s supposed to be going on. Apparently you get a strike notice if you try too hard to find out how the chain-of-thinking process goes, so one might be tempted to assume it’s something that’s readily replicable by the competition (and they need to prevent that as long as they can) instead of any sort of notably important breakthrough.

    From the detailed o1 system card pdf linked in the article:

    According to these evaluations, o1-preview hallucinates less frequently than GPT-4o, and o1-mini hallucinates less frequently than GPT-4o-mini. However, we have received anecdotal feedback that o1-preview and o1-mini tend to hallucinate more than GPT-4o and GPT-4o-mini. More work is needed to understand hallucinations holistically, particularly in domains not covered by our evaluations (e.g., chemistry). Additionally, red teamers have noted that o1-preview is more convincing in certain domains than GPT-4o given that it generates more detailed answers. This potentially increases the risk of people trusting and relying more on hallucinated generation.

    Ballsy to just admit your hallucination benchmarks might be worthless.

    The newsletter also mentions that the price for output tokens has quadrupled compared to the previous newest model, but the awesome part is: remember all that behind-the-scenes self-prompting that goes on while it arrives at an answer? Even though you’re not allowed to see those tokens, according to Ed Zitron you sure as hell are paying for them (i.e. they’re billed as output tokens), which is hilarious if true.
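
    Back-of-envelope on what that means for your bill, with made-up numbers; only the mechanism (hidden reasoning tokens billed at the output rate) comes from the newsletter:

        PRICE_PER_M_OUTPUT = 60.00   # assumed $/million output tokens
        visible_answer = 500         # tokens you actually get to read
        hidden_reasoning = 4_000     # assumed self-prompting you never see

        billed = (visible_answer + hidden_reasoning) / 1e6 * PRICE_PER_M_OUTPUT
        seen = visible_answer / 1e6 * PRICE_PER_M_OUTPUT
        print(f"billed: ${billed:.4f}, of which ${seen:.4f} covers visible text")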

    • V0ldek@awful.systems · +11 · 2 months ago

      one might be tempted to assume it’s something that’s readily replicable by the competition (and they need to prevent that as long as they can) instead of any sort of notably important breakthrough.

      ah yes, open AI

  • mirrorwitch@awful.systems · +18 · 2 months ago

    I also fear that said collapse could be ruinous to big tech, deeply damaging to the startup ecosystem, and will further sour public support for the tech industry.

    Yes… ha ha ha… YES!

  • FredFig@awful.systems · +19 / −1 · 2 months ago (edited)

    I’m terrified for the future, and not even on hater shit. The public numbers are bad, and barring some extremely surprising reports locked behind a wall of NDAs, the private numbers don’t seem much better - even Saltman, perpetual cheerleader that he is, doesn’t have much to offer except desperation to keep the party going, barely a week after their big model drop.

    [Image: Sam Altman responds to a user asking for the promised voice features with extreme pettiness: “how about a few weeks of gratitude for magic intelligence in the sky, and then you can have more toys soon?”]

    So if all the big tech players know that this is garbage, the continual doubling down on this points to either: 1. scrambling for the pie while it’s there because they need it to stay afloat, or 2. everything else they have to offer being even worse somehow? And in either case, the aura of being a tech company instead of a company is lost, and I don’t know what happens in the fallout. The best-case scenario is probably that only tech workers like myself have to eat the blowback, but I suspect things won’t play out so cleanly.

    • istewart@awful.systems · +14 · 2 months ago

      User requests something that accommodates their actual use-case. Altman responds by dismissing it as “toys,” in that same cultivated faux-casual lowercase smarm that constitutes the bulk of his public identity. This man is not fit to be an executive.

    • V0ldek@awful.systems · +8 · 2 months ago

      And in either case, the aura of being a tech company instead of a company is lost

      I don’t understand this and am kinda afraid of what hides behind this

      • anton@lemmy.blahaj.zone · +13 · 2 months ago (edited)

        It’s about hype and economics.
        Tech companies can theoretically scale well and are valued on the expectation of growth, while normal companies are mainly valued on what they currently do. An app can be copied to millions of users basically for free once it has been coded, and servers don’t cost that much. A traditional company, say a car company, that wants to increase profits has to build a new factory or something. The problems arise when a company’s perception shifts from startup/tech company to normal company (toy numbers sketched below).

        Example: WeWork was a startup that rented office space long-term and let its customers rent short-term from them. Once people realized that it was a real estate company and not a tech company, its value plummeted, it couldn’t raise more capital, and it went bankrupt.

        Edit: spelling
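
        The scaling argument above in toy numbers (all invented): near-zero marginal cost per extra user for the app, a real per-unit cost for the car company.

            def profit(users, fixed_cost, marginal_cost, price):
                # same fixed cost either way; only the marginal cost differs
                return users * (price - marginal_cost) - fixed_cost

            for users in (10_000, 10_000_000):
                app = profit(users, fixed_cost=5e6, marginal_cost=0.05, price=10.0)
                cars = profit(users, fixed_cost=5e6, marginal_cost=8.00, price=10.0)
                print(f"{users:>10,} customers: app {app:>14,.0f}, factory {cars:>14,.0f}")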

      • FredFig@awful.systems · +11 · 2 months ago

        So I should be clear: I don’t think there’s anything special about tech companies that should let them be treated differently. But for whatever reason, it is a fact that places like We or Tesla or Theranos or fucking Groupon get stupid valuations just because they’re “tech” adjacent.

        If the market ever catches on that there’s no secret ingredient (and as Zitron’s shown, there are pretty visible public numbers pointing at this), we’re looking at a correction at the trillion-dollar scale. Or maybe we never ask Google to put up or shut up, and just keep the fairy powder in our eyes forever.

  • corbin@awful.systems · +15 · 2 months ago

    Hallucinations — which occur when a model authoritatively states something that isn’t true (or, in the case of an image or a video, makes something that looks…wrong) — are impossible to resolve without new branches of mathematics…

    Finally, honesty. I appreciate that the author understands this, even if they might not have the exact knowledge required to substantiate it. For what it’s worth, the situation is more dire than this; we can’t even describe the new directions required. My fictional-universe theory (FU theory) shows that a knowledge base cannot know whether its facts are describing the real world or a fictional world which has lots in common with the real world. (Humans don’t want to think about this, because of the implication.)

  • ShakingMyHead@awful.systems · +14 · 2 months ago

    Microsoft is making laptops with dedicated Copilot buttons.

    I think they’d sooner burn their company to the ground, all the while telling their customers that they just need to wait a little while longer, than admit that they got it wrong.

  • zbyte64@awful.systems · +12 · 2 months ago

    I worry this is going to leave a crater in the software industry that will take a decade to fill.

    • stoly@lemmy.world · +5 · 2 months ago

      Software dev has been an oversaturated market for ages now, even before the recent AI craze.

    • Gladaed@feddit.org · +1 / −16 · 2 months ago

      Summarizing emails is a valid purpose. If you want to be pedantic about what AI means, go gatekeeper somewhere else.

      • khalid_salad@awful.systems · +16 · 2 months ago

        go gatekeeper somewhere else

        Me, showing up to a chemistry discussion group I wasn’t invited to:

        Alchemy has valid use cases. If you want to be pedantic about what alchemy means, go gatekeep somewhere else.

      • gerikson@awful.systems · +13 · 2 months ago

        JFC how many novel-length emails do you get in a week?

        I think a more constructive way to handle this problem is to train people to write better emails.

        • skillissuer · +13 · 2 months ago

          a sort of problem that only lw forum users have

          • gerikson@awful.systems · +11 · 2 months ago

            “AI, please summarize this LW”

            “Certainly. Here is the summary: give all your money to Yud or burn in virtual hell forevermore”

      • self@awful.systems · +13 · 2 months ago

        go gatekeeper somewhere else

        who the fuck are you again? go post somewhere else

      • froztbyte@awful.systems · +9 · 2 months ago

        ah yes, thanks for that extremely valuable driveby input that you gave after someone most definitely was speaking directly to you

        oh, wait. hang on. sorry, I just checked my notes here. it’s the other thing, the opposite one.

  • TommySoda@lemmy.world · +6 / −11 · 2 months ago

    I mean, it’s gotten to the point where I can’t even keep track of all the different AIs being pushed by companies. My prediction is that some company will make a super efficient and helpful AI and everyone will start using that as a baseline. Like how every company wanted a website before they all migrated the majority of their information to social media like Facebook and Twitter. And let’s be honest, most of the big companies making AI are not going to be the ones to do it. And even though they are improving, they are more interested in making money than in making better AI. We haven’t seen a major breakthrough in months and the majority of progress is minimal. Every time they come out with a new model it’s usually just the same thing with more bells and whistles.