I know people here are very skeptical of AI in general, and there is definitely a lot of hype, but I think the progress in the last decade has been incredible.

Here are some quotes:

“In my field of quantum physics, it gives significantly more detailed and coherent responses” than did the company’s last model, GPT-4o, says Mario Krenn, leader of the Artificial Scientist Lab at the Max Planck Institute for the Science of Light in Erlangen, Germany.

Strikingly, o1 has become the first large language model to beat PhD-level scholars on the hardest series of questions — the ‘diamond’ set — in a test called the Graduate-Level Google-Proof Q&A Benchmark (GPQA). OpenAI says that those scholars scored just under 70% on GPQA Diamond, and o1 scored 78% overall, with a particularly high score of 93% in physics.

OpenAI also tested o1 on a qualifying exam for the International Mathematics Olympiad. Its previous best model, GPT-4o, correctly solved only 13% of the problems, whereas o1 scored 83%.

Kyle Kabasares, a data scientist at the Bay Area Environmental Research Institute in Moffett Field, California, used o1 to replicate some coding from his PhD project that calculated the mass of black holes. “I was just in awe,” he says, noting that it took o1 about an hour to accomplish what took him many months.

Catherine Brownstein, a geneticist at Boston Children’s Hospital in Massachusetts, says the hospital is currently testing several AI systems, including o1-preview, for applications such as connecting the dots between patient characteristics and genes for rare diseases. She says o1 “is more accurate and gives options I didn’t think were possible from a chatbot”.

  • UlyssesT [he/him]@hexbear.net · 3 months ago
    The Rube Goldbergian machine that burns forests and dries up lakes needs just a few more Rube Goldbergian layers to do… what we already had, more or less, but quicker and sloppier with more errors and more burned forests and dried up lakes.

    I truly do believe that most of the loudest “AI” proselytizers are trying to convince everyone else, and perhaps themselves, that there’s more to this than what’s being presented. And just like in the cyberpunkerino treats, any criticism, doubt, or even concern about the harm this technology has already done and will do on a larger scale gets framed in a tiresome, lazy, thought-terminating “you are just Luddites afraid of the future” way. soypoint-1 k-pain soypoint-2

    • batsforpeace [any, any]@hexbear.net · 3 months ago

      Despite skepticism over whether nuclear fusion—which doesn’t emit greenhouse gases or carbon dioxide—will actually come to fruition in the next few years or decades, Gates said he remains optimistic. “Although their timeframes are further out, I think the role of fusion over time will be very, very critical,” he told The Verge.

      gangster-spongebob don’t worry climate folks, we will throw some dollars at nuclear fusion startups and they will make us beautiful clean energy for AI datacenters in just a few years, only a few more years of big fossil fuel use while we wait, promise

      Oracle currently has 162 data centers in operation and under construction globally, Ellison told analysts during a recent earnings call, adding that he expects the company to eventually have 1,000 to 2,000 of these facilities. The company’s largest data center is 800 megawatts and will contain “acres” of Nvidia (NVDA)’s graphics processing units (GPUs) to train A.I. models, he said.
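      For a sense of scale, here is a quick back-of-envelope sketch of how many GPUs an 800 MW facility could actually feed. Only the 800 MW figure comes from the quote above; the per-GPU power draw, server overhead, and PUE numbers below are assumptions, not figures from the article.

      ```python
      # Napkin math: roughly how many GPUs an 800 MW data center could power.
      # Only facility_mw comes from the quoted earnings call; the rest is assumed.
      facility_mw = 800        # quoted size of Oracle's largest data center
      gpu_board_kw = 0.7       # assumed draw of one H100-class GPU
      server_overhead = 1.5    # assumed extra for CPUs, networking, storage per GPU
      pue = 1.3                # assumed cooling / power-delivery overhead

      kw_per_gpu = gpu_board_kw * server_overhead * pue
      gpu_count = facility_mw * 1000 / kw_per_gpu
      print(f"~{gpu_count:,.0f} GPUs")   # on the order of half a million
      ```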

      porky-happy I want football fields of gpus

      Ellison described a dinner with Elon Musk and Jensen Huang, the CEO of Nvidia, where the Oracle head and Musk were “begging” Jensen for more A.I. chips. “Please take our money. No, take more of it. You’re not taking enough, we need you to take more of it,” recalled Ellison, who said the strategy worked.

      NOOOOO give us more chips brooo