I take things from point A to point B
There are two main aspects to coreboot in my opinion that differentiate it from other firmware ecosystems:
The first is a strong push towards having a single code base for lots of boards (and, these days, architectures). Historically, most firmware is built in a model I like to call “copy&adapt”: The producer of a device picks the closest reference code (probably a board support package), adapts it to work with their device, builds the binary and puts it on the device, then moves on to the next device.
Maintenance is hard in such a setup: If you find a bug in common code you’ll have to backport the fix to all these copies of the source code, hope it doesn’t break anything else, and build all these different trees. Building a 5 year old coreboot tree on a modern OS is quite the exercise, but many firmware projects are nearly impossible to build under such circumstances.
With coreboot, we encourage developers to push their changes to the common tree. We maintain it there, but we also expect the device owner (either the original developer or some interested user) to help with that, at least with testing but ideally with code contributions to keep it up to the current standards of the surrounding code. A reasonably maintained board can typically be brought up to the latest standards in less than a day when a new build is required, which means anybody can produce a new build when necessary.
The second aspect is our separation of responsibilities: where BIOS mandates the OS-facing APIs and not much else (with lots of deviation in how that standard is implemented), UEFI (and other projects like u-boot) goes to the other extreme: with UEFI you buy into everything from the build system to boot drivers, OS APIs, and the user interface. If you need something that only provides 10% of UEFI, you’ll have a hard time.
With coreboot we split responsibilities between two parts: coreboot does the hardware initialization (and comes with its own build system and drivers, but barely any OS APIs and no user interface). The payload is responsible for providing interfaces to the OS and the user (and we can use Tianocore to provide a UEFI experience on top of coreboot’s initialization, or seabios, grub2, u-boot, Linux, or any program you build for the purpose of running as a payload).
The interface between coreboot and the payload is pretty minimal: the payload’s entry point is well-defined, and there’s a data table in memory that describes certain system properties. In particular, the interface defines no code to call into (including: no drivers), a design choice we made because we found that calling back into firmware code complicates things and paints the firmware architecture into a corner.
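To give a feel for how minimal that interface is, here is a sketch (in Python for illustration; real payloads are written in C) of parsing a coreboot-table-style header out of a byte buffer. The field layout is modeled on coreboot’s `lb_header` (a `"LBIO"` signature followed by size, checksum, and entry-count fields), but treat the exact offsets as an assumption to verify against `coreboot_tables.h` in the coreboot source.

```python
import struct

# Layout modeled on coreboot's lb_header: a 4-byte "LBIO" signature,
# then five little-endian uint32 fields (header_bytes, header_checksum,
# table_bytes, table_checksum, table_entries). The offsets here are an
# assumption; check coreboot_tables.h in the coreboot tree for the
# authoritative layout.
LB_HEADER = struct.Struct("<4sIIIII")

def parse_lb_header(buf: bytes):
    """Return a dict of header fields, or None if the signature is absent."""
    sig, hdr_bytes, hdr_csum, tbl_bytes, tbl_csum, entries = \
        LB_HEADER.unpack_from(buf, 0)
    if sig != b"LBIO":
        return None
    return {
        "header_bytes": hdr_bytes,
        "table_bytes": tbl_bytes,
        "table_entries": entries,
    }

# Fabricated example buffer: a header claiming 2 entries and 48 table bytes.
demo = LB_HEADER.pack(b"LBIO", 24, 0, 48, 0, 2)
print(parse_lb_header(demo))
```

The point of the sketch is the shape of the contract: the payload gets an entry point and a table of properties to walk, not an API surface of firmware calls.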
To help payload developers, coreboot also provides libpayload, a set of minimal libraries implementing libc, ncurses and various other things we found useful, plus standard drivers. It’s up to each coreboot user/vendor whether they want to use that or go with something else entirely.
credit: [deleted] user on Reddit.
As a side note about BIOS
Framework’s official stance on Coreboot:
“As this keeps popping up even after multiple responses, let this be the “official” response so we can put this to bed, at least for now.
It is not that Framework “does not care” about Coreboot, it is that we have a very long list of priorities for a very small team (we are less than 50 globally and have existed for less than 3 years) and while being able to support Coreboot would be fantastic, it is just not a priority for Framework right now given the sheer number of initiatives that we have to launch now and in the immediate future. We pivot from one NPI (New Product Introduction) to the next, back to back, and have since our first product launch. Our firmware/BIOS team is small and is supplemented by an outside 3rd Party partner. The consistent, “well, just hire more people then” is unfortunate as those in the know understand that’s not how it works, especially for a small, private company trying to exist in a very mature market segment. While tech in general is shrinking, layoffs are in the news constantly, and global economies are getting hit hard, we’re still here, releasing new products, and working hard to support everything we’ve already launched.
If and when we decide to add Coreboot to the docket of active projects, we’ll let the Community know, but if you want Framework to continue to exist, and you believe in our mission, we’ll have to continue to ask for your patience. If not having Coreboot is a blocker for you, personally, to join the Framework Family, we do hope that we can earn your business in the future.”
https://community.frame.work/t/responded-coreboot-on-the-framework-laptop/791/239
The 7640u and 7840u are both rated for a default TDP of 28 W, although the laptop manufacturer can configure it as low as 15 W.
That reference seems to be using the default for the 7840u, whereas they’re using the configurable minimum for the 7640u, which is misleading.
The 7840u and 7640u are actually the exact same chip; the 7640u just has 2 CPU cores and 4 GPU cores disabled.
Ryzen is pretty good at putting cores to sleep when they aren’t needed, so when at idle or running a load that can’t take advantage of those cores the 7840u should behave pretty much the same as a 7640u and have similar power consumption.
Then, under heavy loads, both CPUs will likely hit whatever maximum power the cooler can handle. However, having more cores each running at lower power (e.g. the 7840u) generally performs better than fewer cores each running at higher power (e.g. the 7640u).
So under heavy loads the 7840u should actually deliver better performance at similar power consumption, and that better performance lets it finish the task sooner and get back to low-power idle, improving overall battery life.
So theoretically the 7840u should have similar or slightly better overall battery life than the 7640u, assuming all software is implemented properly (I was an early adopter of Ryzen 3000 desktop CPUs, and it took several driver/BIOS updates before they would reliably put unneeded cores to sleep and significantly reduce idle/low-load power consumption).
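The race-to-idle argument above can be put into rough numbers. This is a toy model with invented figures (not measured Framework data): both chips cap at the same package power under load, the 8-core part finishes the job about 25% sooner, and both idle at the same low power.

```python
# Toy race-to-idle model with invented numbers (not measurements):
# both chips are capped at the same package power under load, but the
# 8-core part finishes the job sooner and idles for the remainder.
LOAD_W, IDLE_W = 28.0, 2.0
WINDOW_S = 100.0  # total time window we account energy over

def energy_wh(busy_s: float) -> float:
    """Energy over the window: busy seconds at LOAD_W, the rest at IDLE_W."""
    idle_s = WINDOW_S - busy_s
    return (LOAD_W * busy_s + IDLE_W * idle_s) / 3600.0

e_6core = energy_wh(busy_s=100.0)  # 6-core: busy the whole window
e_8core = energy_wh(busy_s=75.0)   # 8-core: ~25% faster, then idles
print(f"6-core: {e_6core:.3f} Wh, 8-core: {e_8core:.3f} Wh")
```

With these made-up numbers the faster chip uses less total energy over the window despite drawing the same power while busy, which is the whole race-to-idle effect in miniature.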
++
credit: u/RiftBladeMC on Reddit and @RiftBlade@lemmy.world on Lemmy.
original link: https://old.reddit.com/r/framework/comments/13dz5nb/comment/jjnv1nq/?utm_source=share&utm_medium=web2x&context=3
With the workloads you listed, the only place where you may notice a difference is in gaming. But if the games you play are not very intensive, then you will only see a negligible improvement.
For that use case, the Ryzen 5 seems perfectly suitable. It’s what I pre-ordered myself, with a similar expected workload.
This is data on a previous generation Ryzen 5: https://pc-builds.com/fps-calculator/result/1fB1dg/4T/dragon-age-inquisition/ This might be helpful too: https://www.youtube.com/watch?v=ykRYYl6xSpo
++
credit: u/runed_golem on Reddit
original link: https://old.reddit.com/r/framework/comments/13dz5nb/comment/jjnow91/?utm_source=share&utm_medium=web2x&context=3
Thanks for the interest – according to the instructions on Lemmy, “the person has to post a comment in the community, before there will be an option to appoint as mod…” please go ahead and post something on the community anytime and we can go from there :)
thanks
If you’re already on a Linux-based operating system, and you gotta run a real instance of Windows for some reason, your safest bet from both a security and privacy standpoint is to run it in a virtual machine (I like VirtualBox, personally, but VMWare, or whatever else will do the job fine also) and firewall the hell out of it. In a virtual machine, you can lock it down as much or as little as you need for the task at hand, and there ain’t a damned thing Windows itself can really do about it; as an added bonus, it saves you from the reboots that dual-booting requires. It’s confined to a “safe space” (until you start enabling network stuff and opening ports to it). You’re in control.
edit: or QEMU/KVM (with virt-manager)
Really you’d have to fire up Wireshark and see what telemetry Windows was blabbing away behind your back. Analysing those logs can be a tedious business, especially as you’d need a large dataset.
The thing with just about any tech-related question is that some geek has likely done the heavy lifting for you already. Here is a nice start:
https://www.zdnet.com/article/windows-10-and-telemetry-time-for-a-simple-network-analysis/
Here is another one:
https://www.comparitech.com/blog/information-security/windows-10-data/
Those are the logs required to be collected; it doesn’t say whether or not the data is sent back to Microsoft. Best to assume yes.
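Once you’ve pulled the contacted endpoints out of a capture (e.g. from Wireshark’s endpoint statistics), the tedious part is the aggregation. A minimal sketch of that step, assuming you already have a list of contacted hostnames; the hostnames below are invented placeholders, not real telemetry endpoints:

```python
from collections import Counter

# Hypothetical hostnames exported from a packet capture. The point is
# the aggregation step, not these specific (invented) endpoints.
contacted = [
    "telemetry.example.net",
    "update.example.com",
    "telemetry.example.net",
    "ads.example.org",
    "telemetry.example.net",
]

# Count contacts per host, chattiest first.
for host, hits in Counter(contacted).most_common():
    print(f"{hits:4d}  {host}")
```

The same counting works on any column you export, and a large dataset (many capture sessions) is what makes the per-host totals meaningful.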
Of course, all that proprietary software will have a voluminous licence agreement that nobody reads. They’ll collect as much data as they can to “maximise user experience” or whatever rubbish.
Pro is a little bit better because of features like BitLocker. A lot better would be the Education/Enterprise variants; you’d need special licensing to run Enterprise, I think. There are also registry hacks that would give you some protection against telemetry (I personally haven’t done this).
Privacy-wise, though, any Windows is going to fare worse than Linux, I’d say. Wait for others in the sub for more insights.
https://ssd.borecraft.com/SSD_Buying_Guide.pdf