• Chozo@fedia.io · 4 months ago

    Using only the power of a small star, we were able to crash Windows in record time.

  • ABCDE@lemmy.world · 4 months ago

    Only -231 degrees required, nice.

    Is clock speed a thing anymore rather than cores?

    • imaqtpie@sh.itjust.works · 4 months ago

      As far as I know, clock speed is still pretty nice to have, but chip development has shifted towards adding multiple cores because it basically became technologically impossible to continue increasing clock speeds.

      • themoonisacheese@sh.itjust.works · 4 months ago

        A nice fun fact: if you consider how fast electrical signals travel in silicon, it turns out that for a clock that pulses billions of times per second (which is what gigahertz means), it is physically impossible for each pulse to get all the way across a 2 cm die before the next pulse starts. This is exacerbated by the fact that a processor's paths meander throughout the die rather than running in a straight line.

        So at any given moment, there are several clock pulses traveling through a modern processor at the same time, and the designers have to come up with a solution that makes that work. Never mind that most digital logic design tools aren't built to account for this, so they end up falling back on analog design tools (analog as in audio chips, not as in pen-and-paper design).
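A back-of-envelope sketch of the claim above. The propagation speed and clock rates are my own illustrative assumptions: roughly half the speed of light is an optimistic ceiling for a well-engineered on-chip wire, and real RC-loaded wires are considerably slower still.

```python
# Rough sketch: how far a signal can travel in one clock period.
# 0.5c is an assumed, optimistic on-chip propagation speed.
C = 299_792_458          # speed of light in vacuum, m/s
speed = 0.5 * C          # assumed on-chip signal speed, m/s

for freq_ghz in (1, 3, 6):
    period_s = 1 / (freq_ghz * 1e9)      # one clock period, seconds
    dist_cm = speed * period_s * 100     # distance covered per cycle, cm
    print(f"{freq_ghz} GHz: {dist_cm:.1f} cm per cycle")
```

Even under this optimistic assumption, at 6 GHz a pulse covers only about 2.5 cm per cycle, i.e. barely a die-width; with realistic wire delays the usable distance per cycle is far shorter.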

        • Buddahriffic@lemmy.world · 4 months ago

          Signals don’t have to make it across the whole die each clock pulse. They just have to make it to the next register in their pipeline/data path, and timing tools for that absolutely exist. They treat it as analog because the signals themselves are analog and chips must account for things like the time it takes for a signal to go from a 0 to a 1 (or vice versa), as well as the time it takes to “charge” a flip flop so that it registers the signal change and holds it stable for the next stage of the pipeline.
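The register-to-register budget described above can be sketched as a toy static-timing check. All the delay numbers below are made-up illustrative values, not figures from any real chip:

```python
# Toy static-timing check: a signal only has to travel
# register-to-register within one clock period:
#   t_clk >= t_clk_to_q + t_comb + t_setup
t_clk_to_q = 25e-12    # register output delay, seconds (assumed)
t_comb     = 110e-12   # combinational logic + wire delay, seconds (assumed)
t_setup    = 20e-12    # flip-flop setup time, seconds (assumed)

min_period = t_clk_to_q + t_comb + t_setup
max_freq_ghz = 1 / min_period / 1e9
print(f"max clock for this path: {max_freq_ghz:.2f} GHz")
```

The slowest such path in the whole design (the "critical path") sets the maximum clock; shortening pipeline stages is exactly how designers buy higher frequencies.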

    • A_Random_Idiot@lemmy.world · 4 months ago

      Depends on what you are doing.

      For gaming, you want speed.

      For rendering, you want cores.

      That's the typical rule of thumb, since games will always be limited by the number of threads they use, while rendering/compiling/etc. typically uses every core it can get.
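That rule of thumb can be illustrated with Amdahl's law. The parallel fractions below are invented for illustration, not measurements of any particular game or renderer:

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n)
# where p is the parallelizable fraction and n is the core count.
def speedup(parallel_fraction, cores):
    return 1 / ((1 - parallel_fraction) + parallel_fraction / cores)

game   = 0.60   # assumed: a large chunk of a game loop stays serial
render = 0.99   # assumed: rendering is almost fully parallel

for n in (4, 16):
    print(f"{n:2d} cores  game: {speedup(game, n):.2f}x  "
          f"render: {speedup(render, n):.2f}x")
```

With a mostly serial workload, extra cores flatten out quickly, so per-core clock speed dominates; a near-fully-parallel workload keeps scaling with core count.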

      • Quetzalcutlass@lemmy.world · 4 months ago

        And CPUs with higher core counts tend to have lower clock speeds per core, leading to games sometimes running much better on mid-range hardware than on the latest and greatest.

        • A_Random_Idiot@lemmy.world · edited · 3 months ago

          Yeah, I forgot to mention that. Thanks for picking up my slack.

          More cores > more heat > less speed per core to manage the heat.

          Fewer cores > less overall heat > more speed per core.

          Which is why, generally, a 5600 is better for gaming than a 5950X.

          The 3D V-Cache chips throw a minor wrench into things, though: the extra, faster cache can compensate for lower clocks, which makes the 5800X3D generally a better gaming chip than the 5600 despite its lower speeds.

      • frezik@midwest.social · 3 months ago

        Games are optimized for multiple cores to a much higher degree than they used to. Single core games are uncommon, even on the indie scene.

        They were held back for a long time by console hardware, but that’s not a problem anymore.

    • MonkderZweite@feddit.ch · 4 months ago

      It would be interesting to see what happens in a superconducting (-271°C?) CPU. At least leakage due to tunneling effects should be reduced at -231°C.
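For what it's worth, gate tunneling itself is only weakly temperature-dependent; the leakage component that drops steeply with cooling is subthreshold conduction. A rough sketch with a simplified model, where the barrier value and slope factor are illustrative assumptions, not fitted device parameters:

```python
import math

# Simplified subthreshold-leakage model: I_sub ∝ T^2 * exp(-q*Vth / (n*k*T))
k_B   = 8.617e-5   # Boltzmann constant, eV/K
q_vth = 0.3        # assumed effective threshold barrier, eV
n     = 1.5        # assumed subthreshold slope factor

def leak(T):
    """Relative subthreshold leakage at absolute temperature T (K)."""
    return T**2 * math.exp(-q_vth / (n * k_B * T))

ratio = leak(300) / leak(42)   # room temperature vs -231 °C
print(f"subthreshold leakage, 300 K vs 42 K: ~{ratio:.0e}x")
```

Under this model, subthreshold leakage is essentially frozen out at cryogenic temperatures, which is part of why extreme-cooling records tolerate such high voltages.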

    • MonkderZweite@feddit.ch · 4 months ago

      There are multithreading mods for almost every moddable game, be it Satisfactory, Rimworld, or Oxygen Not Included. But not for Dwarf Fortress?

      • fakeman_pretendname@feddit.uk · 3 months ago

        You can mod almost everything in Dwarf Fortress, down to the shear strength of a single beard hair - but you can’t mod the threading :)

      • sus@programming.dev · edited · 3 months ago

        Nowadays Dwarf Fortress has built-in multithreading (and the combination of other optimizations and the progress in CPU power has made it a fairly well-performing game overall).

  • drawerair@lemmy.world · edited · 4 months ago

    This may excite some, but I value sustained real-world performance more. For example, the fast processors tested by Gamers Nexus.

      • nilloc · 4 months ago

        The index of efficiency at Le Mans is where the two meet for me.

        I also dig the hypermilers, but I don't commute far, so I haven't bothered with my own cars.

    • frezik@midwest.social · 3 months ago

      As it happens, overclocking is one of the few reasons to bother with the 14900KS.

      “Hey, guys, you know how our top desktop cpu runs hot, is really expensive, and loses in most gaming benchmarks to an AMD cpu that costs $200 less? Let’s fix that by releasing one that’s 2% faster, gets even hotter, and is even more expensive.” - a daily conversation at Intel.

  • mariusafa@lemmy.sdf.org · 4 months ago

    Holy shit, what is the average frequency for this CPU? They probably had to increase the voltage by a LOT. I mean, what technology is this, 10nm? The capacitance of those devices plays a big part in latency.

    • Dnn@lemmy.world · 4 months ago

      It’s 6.2 GHz and they set the voltage to 1.85 V. Both are stated in the article. You must have missed it.
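For a sense of what 1.85 V means thermally, here is a rough dynamic-power scaling sketch using P ≈ C·V²·f. Only the 6.2 GHz / 1.85 V figures come from the article; the stock comparison point of 5.7 GHz at 1.30 V is an assumption for illustration, and switched capacitance C cancels out of the ratio:

```python
# Relative dynamic power under the P ≈ C * V^2 * f model.
# C is the same chip in both cases, so it cancels in the ratio.
def rel_power(v, f, v0, f0):
    return (v / v0) ** 2 * (f / f0)

factor = rel_power(1.85, 6.2, 1.30, 5.7)   # OC point vs assumed stock point
print(f"~{factor:.1f}x the stock dynamic power")
```

Voltage enters squared, so pushing from ~1.3 V to 1.85 V roughly doubles dynamic power on its own, which is why this kind of run needs extreme cooling.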