The extra cache can be a pretty significant speedup in certain workloads, as much as or more than the first X3D CCD offered over non-X3D parts. The caveat is that those workloads show up in less than one percent of real-world consumer situations. Sure, these chips are exactly what some data scientists and people handling data sources like colliders or SEM arrays want: you can now hold an entire genome deconvolution module in cache and operate purely in memory. Every other average jackoff buying these chips for a few extra frames in Waifu Simulator will mostly be disappointed; the loss in boost clocks and TDP allocated to the compute tiles will outweigh the extra couple of shaders they can compute in real time.
This is blatantly obvious to anyone who grew up adding their own cache to PCs back in the day: you really do hit a point where the cache is big enough for your cores to saturate it, and you're quickly back to waiting on the pipeline. Even now, the only people who can squeeze more out of that are running massively parallelized computations that are themselves cheap, but on data types that don't align with what a GPU can work with.
Fintech shops, genome researchers, and people studying field theory will benefit, to be sure, and the extra X3D cache would be absolutely worth the premium over otherwise having to use Epyc/Threadripper and trading off clocks and memory speeds. Honestly, the X3D2 chip would make a far cheaper and faster workstation than a Threadripper running similar workloads, as long as you stayed under 128GB of memory.
I mean I want one, not sure I would ever need it, but…