- cross-posted to:
- technology@beehaw.org
An interesting development, though it seems to be focused exclusively on parallel compute (enterprise dGPU use cases):
> The Austin, Texas-based AI chip startup says it’s developing an optical processing unit (OPU) that in theory is capable of delivering 470 petaFLOPS of FP4 / INT4 compute — about 10x that of Nvidia’s newly unveiled Rubin GPUs — while using roughly the same amount of power.
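For a sanity check on the "about 10x" figure, the quote's numbers imply a per-Rubin-GPU FP4 throughput of roughly 47 petaFLOPS (the exact Rubin baseline is my inference from the quote, not a stated spec):

```python
# Rough arithmetic behind the "about 10x" claim in the quote above.
# Only the 470 PFLOPS and 10x figures come from the article; the
# implied Rubin baseline is derived, not an official Nvidia number.
opu_pflops = 470                 # claimed OPU FP4/INT4 throughput
claimed_speedup = 10             # claimed advantage over Rubin
implied_rubin_pflops = opu_pflops / claimed_speedup
print(implied_rubin_pflops)      # → 47.0
```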
From my limited understanding, for CPUs (which are arguably far more complex and less “predictable”), Moore’s Law is definitely dead.
If you look at single-thread CPU performance, the gains from ~2013 (Ivy Bridge/Haswell) to a modern ~2025 top-end CPU (9800X3D) are relatively modest. Now compare a late 486, say the i486DX4 from 1994, to a Pentium III (Tualatin) from ~2001: there is no comparison at all.
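The difference between those two eras is easier to see as an annualized growth rate. The ratios below are illustrative placeholders, not measured benchmarks; only the year spans come from the comparison above:

```python
# Compound annual growth implied by an overall single-thread speedup.
# The 2x and 20x ratios are hypothetical round numbers chosen to
# illustrate the two eras, not benchmark results.
def cagr(speedup: float, years: int) -> float:
    """Annualized growth rate for a total speedup over `years` years."""
    return speedup ** (1 / years) - 1

# Hypothetical 2x gain over 2013–2025 (12 years) vs. a
# hypothetical 20x gain over 1994–2001 (7 years):
print(f"{cagr(2, 12):.1%}")    # → 5.9%
print(f"{cagr(20, 7):.1%}")    # → 53.4%
```

Even with generous numbers for the modern era, the yearly improvement rate differs by an order of magnitude.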
