US big mad

  • aaaaaaadjsf [he/him, comrade/them]@hexbear.net
    1 year ago

    I think you mean hyperthreading. Multithreading has been a thing on ARM mobile chips forever. Being based on a server architecture has advantages like that, but also disadvantages in thermals and efficiency at higher clock speeds (it only clocks up to 2.6GHz, and energy efficiency and thermals really suffer at that clock speed). It has the largest vapour chamber I’ve seen on a phone, the size of the entire screen. Single-core performance landing right between the 865 and 888 is still highly impressive. It’s less efficient than the 865, which was made on the older TSMC process, but more efficient than the 888 and 8 Gen 1, which were on Samsung’s process. The 8+ Gen 1 and 8 Gen 2 blow it out of the water on efficiency and raw power with the new TSMC process, but that’s to be expected.
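
    To make the hyperthreading point concrete, here’s a minimal Linux sketch (not from the original comment; it assumes the standard sysfs CPU topology files) that checks whether logical CPU 0 shares its physical core with another hardware thread, which is what hyperthreading/SMT actually means. On typical ARM phone SoCs each core exposes a single hardware thread, so the siblings list is just “0”.

    ```c
    /* Sketch: detect SMT ("hyperthreading") on Linux by reading the
     * sysfs topology entry for CPU 0. A list with more than one CPU
     * (e.g. "0-1" or "0,4") means two hardware threads share one core. */
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        FILE *f = fopen("/sys/devices/system/cpu/cpu0/topology/thread_siblings_list", "r");
        if (!f) { perror("sysfs"); return 1; }

        char buf[64] = {0};
        if (!fgets(buf, sizeof buf, f)) { fclose(f); return 1; }
        fclose(f);
        buf[strcspn(buf, "\n")] = '\0';

        /* A range or comma-separated list means sibling hardware threads exist. */
        int smt = (strchr(buf, ',') != NULL) || (strchr(buf, '-') != NULL);
        printf("cpu0 siblings: %s -> SMT %s\n", buf, smt ? "enabled" : "not present");
        return 0;
    }
    ```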

    ARM GPUs are a whole different ballgame compared to desktop GPUs. Raw numbers and benchmarks aren’t always the best way to compare performance, because driver support for certain features varies. There’s the Mali approach, which treats the GPU and CPU as an APU and has poor driver implementation of features such as dual-source blending. Then you get Adreno, which goes for a more traditional approach, keeping the GPU and CPU more separate, and has much better implementation on the driver side of things. This is why it’s always recommended to go for an Adreno GPU for GPU-bound tasks on Android, like high-performance game emulation (think PS2 games at 1080p or higher resolution) or maxing out Genshin Impact. I think Apple’s ARM GPUs are still based on what PowerVR did back in the day, and PowerVR made very good mobile GPUs, such as the ones in the early iPads and the international Samsung Galaxy S4.
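
    On the driver-feature point: here’s a minimal sketch (not from the comment; it assumes Vulkan headers and a working loader are installed) that asks the installed driver whether it advertises dual-source blending via the standard dualSrcBlend physical-device feature. Running something like this on a Mali device versus an Adreno device is one way to see the kind of driver gap described above.

    ```c
    /* Sketch: query each Vulkan device for the dualSrcBlend feature bit. */
    #include <stdio.h>
    #include <vulkan/vulkan.h>

    int main(void) {
        VkApplicationInfo app = {
            .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
            .apiVersion = VK_API_VERSION_1_0,
        };
        VkInstanceCreateInfo ici = {
            .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
            .pApplicationInfo = &app,
        };
        VkInstance instance;
        if (vkCreateInstance(&ici, NULL, &instance) != VK_SUCCESS) {
            fprintf(stderr, "no working Vulkan driver\n");
            return 1;
        }

        uint32_t count = 0;
        vkEnumeratePhysicalDevices(instance, &count, NULL);
        VkPhysicalDevice devs[8];
        if (count > 8) count = 8;
        vkEnumeratePhysicalDevices(instance, &count, devs);

        for (uint32_t i = 0; i < count; i++) {
            VkPhysicalDeviceProperties props;
            VkPhysicalDeviceFeatures feats;
            vkGetPhysicalDeviceProperties(devs[i], &props);
            vkGetPhysicalDeviceFeatures(devs[i], &feats);
            printf("%s: dualSrcBlend %s\n", props.deviceName,
                   feats.dualSrcBlend ? "supported" : "NOT supported");
        }

        vkDestroyInstance(instance, NULL);
        return 0;
    }
    ```

    Note this only tells you whether the feature bit is exposed at all; how well a driver actually implements it in real games is a separate question, which is the broader point here.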

    It’s positive that Huawei have gone for their own GPU implementation; Mali keeps dropping the ball, and Adreno needs competition. As for how it holds up in intensive real-world tasks, only time will tell.

    The less said about Tensor, the better. It’s not even a Google design; it’s a years-old Samsung design that even Samsung ditched. It’s purely a stopgap measure until Google can actually make its own chips.