Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post; there's no quota for posting and the bar really isn't that high.
The post-Xitter web has spawned so many "esoteric" right wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.
(Semi-obligatory thanks to @dgerard for starting this)
Is this a possible thing: all the AI assistant stuff being forced onto us in next-gen hardware is gonna need significant computing power bumps to support it. Could that create a surplus of computing power in all devices, one that times very well with an excessive skeuomorphic UI design response to the decade of bland flatness we've endured, and that's gonna cook the CPUs on the devices of everyone else?
Haha, no. Flat UI was done for reasons of fashion, not efficiency. UI will always expand to consume the available memory and compute, regardless of how boring it looks. Exhibit A: Electron!
Yeah, but I didn't say that flat UI was created for efficiency. Any efficiency of a flat UI is cancelled out by the excesses of client-side JS. I know it is fashion, I was there. But I also know that the designers who design with it have a sense that it is efficient.
The ongoing trend of "flat UI" is largely not due to processing power though. Even inexpensive computers have CPUs and GPUs that could push very fancy graphics without problems; see what the same machines can do in game graphics (and I don't mean high-end gaming, I mean the kind of simple gaming that can run on a low-end laptop these days). Some of the early GUIs in the 1980s had "flat design" due to performance limitations, but that went away in the 1990s. Today it could still be a reason in some embedded-system scenarios with simple microcontrollers, but not in a desktop or laptop computer, and also not in smartphones or tablets.
The reason we have the bland flat design is the same reason we still have things like "all surfaces are ugly glossy black plastic" (luckily this one is on its way out) or the "war on physical buttons" aka "touchscreens everywhere"… it's simply a design trend.
@nightsky "touchscreens everywhere" isn't an aesthetic choice, it's a cost-of-goods choice: which adds more to the cost of a physical product, a bunch of bespoke embossed buttons/keys for specific tasks, or a single mass-produced touchscreen?
It's the same reason modern electronics uses embedded microcontrollers rather than actual properly designed task-specific gate arrays.
I hear you, but I didn't say flat UI is due to processing power. My line of thought is that a sudden bump in available processing power might prompt designers to feel that elaborate UIs are fine now, because despite flat UI not being an efficiency thing, it is definitely perceived as one by the average designer, who doesn't know how much of the CSS used to render it is generated client-side via JS.
Just chiming in to say to hell with skeuomorphism, I still want Apple Platinum back. Bonus points if it comes with an option for Dark Platinum that was only present in the early releases of OS X Server.
On the computing side, and with the proviso that in my own estimation of my skills I am at best slightly less than "dangerously clueless": unfortunately not as much as may be desired, because the kind of chips being added is fairly specialised silicon.
It's not impossible that people may find other uses for it over time, but to the best of my knowledge, as it stands right now, much of this shit is dead weight the moment this bubble pops.
(I don't think it will all go entirely away; there are some ML uses that are not complete trash. But that's a long and different arc.)
I'm not sure I follow the skeu side of your comment?
That's exactly the catch I was hoping wouldn't be the case. When the AI shit is abandoned, is the hardware useful for regular stuff…
So, from what you're saying: Generative AI is fucking up in the past, present, and future.
Broad brush strokes: yes, largely that.
There are some extremely fucking interesting details in the weeds, but that's beyond the scope of a mere comment (and also I don't feel equipped to make a goodpost about it as yet).
My baseline understanding is that "NPUs", as such, are vector accelerators with perhaps lower precision and definitely lower peak TDP. I say this because much of the incremental ML research I've skimmed seems to be about getting away with lower precision, dropping from FP16 down to FP8 or even FP4 when the model can tolerate it (rough toy sketch at the end of this comment).
I'm still confused as to why and how this is an acceptable tradeoff compared to firing up an iGPU with precise power/TDP stepping. Perhaps it's one of those situations where the power budget and latency to fire up the whole GPU block, or burst it to max power, end up costing as much as the actual calculation. I think for purposes of this discussion we also need a source that sheds light on the architectural differences between NPUs and GPU shader/execution units.
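For what it's worth, here's the low-precision bet as a toy numpy sketch (purely my own illustration, not any real NPU datapath or an actual FP8/FP4 format): quantize the operands of a dot product to fewer and fewer bits and watch the relative error grow.

```python
# Toy sketch only: uniform symmetric quantization as a crude stand-in for the
# low-bit formats accelerators lean on. Shows how much accuracy a dot product
# loses as the operands are squeezed into fewer bits.
import numpy as np

def fake_quantize(x, bits):
    """Snap values to 2**(bits-1)-1 levels per sign, then map back to float."""
    levels = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(x)) / levels
    return np.round(x / scale) * scale

rng = np.random.default_rng(0)
weights = rng.standard_normal(4096).astype(np.float32)      # stand-in "weights"
activations = rng.standard_normal(4096).astype(np.float32)  # stand-in "activations"

reference = np.dot(weights, activations)                     # full FP32 result
for bits in (16, 8, 4):
    approx = np.dot(fake_quantize(weights, bits), fake_quantize(activations, bits))
    rel_err = abs(approx - reference) / abs(reference)
    print(f"{bits:>2}-bit-ish: {approx:+.3f} (rel. error {rel_err:.2%})")
```

If the error at 4-ish bits is still tolerable for the workload, that's the whole wager behind shipping a small, low-TDP vector unit instead of waking the full GPU block.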