• frustratedphagocytosis@kbin.social

    Back in the day, I’d have been thrilled to read something like this, but now all I hear is ‘look at how many new ways the Google overlord can fuck humans up with protein mutations to eliminate fragile meat-based enemies.’

    • thefartographer@lemm.ee

      That’s ridiculous sci-fi fantasy.
      - cough -
      Anyone else have a sudden urge to be more open with their location sharing?

  • PreviouslyAmused@lemmy.ml

    I want to believe this, but given how wonky AI bots have proven to be lately, I can’t help but think you could cut this number down by several million.

    • repungnant_canary@lemmy.world

      In my field, where Google also “throws” their huge DL models at problems, the papers they publish tend to give very limited explanation of how and why the model works, and they don’t really provide comprehensive validation of it. So I find it difficult to trust their findings here, judging not only by their LLMs but also by their “scientific” models.