The big AI models are running out of training data (and it turns out most of the training data was produced by fools and the intentionally obtuse), so this might mark the end of rapid model advancement

  • JoeByeThen [he/him, they/them]@hexbear.net · 40 points · 22 days ago

    No, it’s not. Maybe strictly for LLMs, but they were never the endpoint. They’re more like a frontal-lobe emulator; the rest of the “brain” still needs to be built. Conceptually, Intelligence is largely about interactions between Context and Data. We have plenty of written Data. In order to create Intelligence from that Data, we’ll need to expand the Context for that Data into other sensory systems, which we are beginning to see in the combined LLM/video/audio models. Companies like Boston Dynamics are already working with and collecting audio/video/kinesthetic Data in the spatial Context.

    Eventually researchers are going to realize (if they haven’t already) that there are massive amounts of untapped Data going unrecorded in virtual experiences, though I’m sure some of the delivery and remote-driving companies are already contemplating how to record their telepresence Data to refine their models. If capitalism doesn’t implode on itself before we reach that point, the future of gig work will probably be Virtual Turks: via VR, you’ll step into the body of a robot when it’s faced with a difficult task, complete the task, and that recorded experience will be used to train future models.

    It’s sad, because under socialism there’s incredible potential for building a society where AI/robots and humanity live in symbiosis, akin to something like The Culture, but instead it’s just going to be another cyber-dystopia panopticon.

  • lurkerlady [she/her]@hexbear.net · 33 points · 22 days ago · edited

    This is accurate, though I am actually going to explain why. These big model companies (Google, ClosedAI, etc.) parasitize the open-weights/open-source community that actually makes the good LoRAs, fine-tunes, and research papers. Consumer hardware simply hasn’t gotten good and cheap enough for very good fine-tune training, and that’s why this is all slowly petering out. In a couple of generations of consumer GPUs, which is when we’ll get cards geared towards AI (i.e. super-high VRAM counts of 70 GB+ at an affordable sub-$700 price), we might see another leap forward in this tech. Though I will say that this mostly pertains to LLMs; generative models like Stable Diffusion still have a lot of tricks up their sleeves to explore. Most recent research and tweaking has been about building a structure for the AI to build on, guiding it rather than letting it take random stabs at things, in order to improve outputs. Some people have been doing things like hard-coding color theory and how to frame a photograph, and interpreting human language to trigger that hard code.
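
    That last idea, in a toy sketch (every trigger phrase and guidance value here is hypothetical, purely to show the shape of keyword-triggered hard-coded guidance, not any real model’s API):

    ```python
    # Toy sketch: scan a prompt for phrases that trigger hard-coded
    # composition/color rules, which would then bias the generator.
    # The phrase list and parameter values are invented for illustration.

    HARD_CODED_RULES = {
        "rule of thirds": {"subject_offset": (1 / 3, 1 / 3)},   # composition
        "golden hour":    {"color_temperature_kelvin": 3500},   # warm light
        "complementary":  {"palette": "complementary"},         # color theory
    }

    def guidance_from_prompt(prompt: str) -> dict:
        """Collect every hard-coded rule whose trigger phrase appears."""
        guidance = {}
        for phrase, params in HARD_CODED_RULES.items():
            if phrase in prompt.lower():
                guidance.update(params)
        return guidance

    print(guidance_from_prompt("portrait at golden hour, rule of thirds"))
    # {'subject_offset': (0.333..., 0.333...), 'color_temperature_kelvin': 3500}
    ```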

    We’ve had statistical models like these since the ’50s. Consumer hardware has always been the big materialist bottleneck; this is all powered by small research teams and hobbyist nerds. You can throw a ton of money at it and run a giant research team, but the performance you squeeze out of adding 400B more parameters to your 13B model, or out of a gigantic locked-down datacenter, is going to be diminishing.
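
    The empirical scaling laws say the same thing. A sketch of the Chinchilla-style loss curve (Hoffmann et al., 2022), with the fitted constants left symbolic since their exact values depend on the setup:

    ```latex
    % N = parameter count, D = training tokens; E, A, B, \alpha, \beta are
    % empirically fitted constants, with \alpha, \beta roughly in (0, 1).
    L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
    % With the dataset D held fixed, piling on parameters only shrinks the
    % A / N^{\alpha} term, so the loss floors out at:
    \lim_{N \to \infty} L(N, D) = E + \frac{B}{D^{\beta}}
    ```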

    Also, synthetic data can be useful. People are hating on it in this thread, but it’s a great way to reinforce good habits in the AI and to clean up garbled code and speech that would otherwise confuse it. I sometimes feel like people just see something that says ‘AI bad’, upvote it, and don’t try to understand where the tech is useful and where it is not.

      • lurkerlady [she/her]@hexbear.net · 9 points · 22 days ago · edited

        Synthetic data is basically a fancy way of saying ‘I’m properly formatting data and reinforcing the AI’s good outputs’. Rearranging words, fixing or adding tags, that sort of thing. It’s generated with various tools that usually have an LLM or VLM plugged in, though some are as simple as a regex script.
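
        On the simple end of that spectrum, a minimal sketch of a regex-style tag cleaner (the comma-separated tag format and the cleanup rules are assumptions for illustration, not any specific tool):

        ```python
        import re

        # Toy synthetic-data cleanup: normalize a comma-separated tag string
        # of the kind used in image-model training captions.
        def clean_tags(raw: str) -> str:
            tags = [re.sub(r"\s+", " ", t).strip().lower() for t in raw.split(",")]
            tags = [t for t in tags if t]  # drop empty entries
            seen, deduped = set(), []
            for t in tags:
                if t not in seen:          # drop duplicates, keep first-seen order
                    seen.add(t)
                    deduped.append(t)
            return ", ".join(deduped)

        print(clean_tags("Portrait,  portrait , golden  hour,,RULE of thirds"))
        # -> portrait, golden hour, rule of thirds
        ```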

    • MacN'Cheezus@lemmy.today · 3 points · 22 days ago

      Better hardware isn’t going to change anything except scale if the underlying approach stays the same. LLMs are not intelligent, they’re just guessing a bunch of words that are statistically most likely to satisfy the user’s request based on their training data. They don’t actually understand what they’re saying.
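
      That ‘statistically most likely next word’ loop, reduced to a toy bigram model (the corpus and code are invented just to show the mechanism; real LLMs use learned neural networks over tokens, not count tables):

      ```python
      import random
      from collections import Counter, defaultdict

      # Count which word follows which in a tiny corpus, then repeatedly
      # emit a likely successor. Illustration only.
      corpus = "the model guesses the next word and the next word again".split()

      followers = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          followers[prev][nxt] += 1

      word, output = "the", ["the"]
      for _ in range(6):
          choices = followers[word]
          if not choices:
              break
          # sample proportionally to observed frequency
          word = random.choices(list(choices), weights=list(choices.values()))[0]
          output.append(word)

      print(" ".join(output))  # e.g. "the next word and the next word"
      ```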

  • davel [he/him]@hexbear.net · 29 points · 22 days ago · edited

    Spicy autocomplete can produce much more content much faster than we can, and it is consuming its own content now. What could go wrong?

    clown-to-clown-conversation
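
    The failure mode has a name, model collapse: train on your own outputs and the distribution narrows each generation. A stdlib-only toy, with a Gaussian standing in for a real model (purely illustrative):

    ```python
    import random
    import statistics

    # Fit a Gaussian to data, sample the next generation's training set
    # from the fit, refit, repeat. The fitted spread tends to drift
    # toward zero: tails vanish, outputs homogenize.
    random.seed(42)
    data = [random.gauss(0.0, 1.0) for _ in range(20)]

    for generation in range(31):
        mu = statistics.mean(data)
        sigma = statistics.stdev(data)
        if generation % 10 == 0:
            print(f"gen {generation:2d}: sigma = {sigma:.3f}")
        # each generation trains only on the previous model's samples
        data = [random.gauss(mu, sigma) for _ in range(20)]
    ```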

  • ssj2marx@lemmy.ml · 27 points · 22 days ago · edited

    I know what they’re trying to say, but I really wish these writers would use accurate terms. “AI” isn’t intelligent in any meaningful sense; these are just pattern generators, and they were never getting “smarter”, the patterns they were capable of outputting were just getting more complex.

    • technocrit@lemmy.dbzer0.com · 15 points · 22 days ago · edited

      Yeah, 100%. It’s like adopting the language of your oppressor. The hucksters have been selling their “learning”, “intelligence”, “minds”, etc. for so long that many people have internalized it. Let’s please return to reality and use scientific terms like data, function, average, statistics, etc.

  • DragonBallZinn [he/him]@hexbear.net · 26 points · 22 days ago · edited

    Based. Fuck AI.

    I’m always suspicious when it’s one of the few technologies boomers got super hyped up about and wanted to shove into everything.

  • Owl [he/him]@hexbear.net · 21 points · 22 days ago

    This entire boom was predicated on being able to throw 10x the compute budget at a problem and get 2x the quality of results, so hitting this wall was inevitable. It’s not like big tech is suddenly funding long-term R&D teams again; they stopped doing that before most of these companies were even founded.

  • Assian_Candor [comrade/them]@hexbear.net · 21 points · 22 days ago

    It would be funny if we hadn’t incinerated the planet for this shit. The peddlers will get rich with zero consequences, except of course for the jobs that were snuffed out in infancy.

  • aaro [they/them]@hexbear.net · 14 points · 22 days ago

    reposting my hot AI take

    Just because capital can’t imagine more than 5 minutes into the future, and just because capital can only speak profit and can’t fathom progress for the sake of progress, doesn’t mean that AI isn’t real and scary. The technological hurdles are similar to ones past technologies have overcome, the incentive to replace workers with machines is as enticing as it’s ever been, and if we’ve seen this much noise and fervor with this little of the total reward reaped, expect the noise and fervor to continue until the last drop of blood has been squeezed out.

  • D61 [any]@hexbear.net · 13 points · 22 days ago

    The more social media style posts/comments I read about this “AI” stuff, the more I realize I’ve been doing the same thing since I was in middle school.

    I was reading way above my grade level and would use words (often incorrectly) that I wasn’t expected to know, with such confidence that adults thought I was smart.