• blakestacey@awful.systems · edit-2 · 2 days ago

    The big claim is that R1 was trained on far less computing power than OpenAI’s models at a fraction of the cost.

    And people believe this … why? I mean, shouldn’t the default assumption about anything anyone in AI says be that it’s a lie?

    • istewart@awful.systems · 2 days ago

      A) Putting on my conspiracy theory hat… OpenAI has been bleeding for most of a year now, with execs hitting the door running and taking staff with them. It’s not at all implausible that somebody lower on the totem pole was convinced to leak some reinforcement training weights to help DeepSeek along.

      B) Putting on my best LessWronger hat (random brown stains, full of holes)… I estimate no less than a 25% chance that by the end of this week, Sammy-boy will be demanding an Oval Office meeting, banging the table and screaming about “theft!” and “hacking!!”