Interesting, that’s my experience with anesthesia.
I thought it was “you’re not the boss of me now”.
https://m.youtube.com/watch?v=HFLW1853Abg&pp=ygUVdGlvbiBlbGVjdHJpYyBjb21wYW55 The show made songs for kids and taught spelling.
There was also a PBS Kids show with the same name: https://www.pbslearningmedia.org/collection/the-electric-company/t/tec-full-episodes/
It also had a “-tion” song, though I can’t find the episode right now.
I offered help and disagreed with the part of what you said that was wrong. Your response is unrelated and misinterprets my reply.
I disagree, Cargo is very simple and easy to use for developers.
I agree, binaries are easier for end users.
I’m surprised cargo run --release didn’t work for you. What was the project and OS?
Because the kernel doesn’t like you spawning 100k threads.
Why do you say this?
Your RAM doesn’t, either
Not if your stacks per thread are small.
Even setting all the stacks aside, the kernel needs to record everything in data structures, which are now bigger and take longer to traverse.
These data structures must exist either in userland or in the kernel; moving them between the two won’t help anything. Also, many of these data structures scale as log(n). Splitting half the elements off to userland and keeping the other half in the kernel gives you two structures with log(n/2) each, so 2log(n/2) = log(n^2/4). Clearly that’s worse.
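Spelled out, the comparison is

$2\log\tfrac{n}{2} \;=\; \log\!\left(\tfrac{n}{2}\right)^{2} \;=\; \log\tfrac{n^{2}}{4} \;\ge\; \log n \quad \text{for } n \ge 4,$

so two half-sized log-time structures do more total work than a single one.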
Each thread is a process which could, e.g., be sent a signal, requiring keeping stuff around that Rust definitely doesn’t keep around (async functions get compiled to tight state machines).
If signals were the reason async worked better, then the correct solution would be to offer threads that opt out of signals. Anything that slows down threads and isn’t present in an async design should be opt-out-able. The state machines that async compiles to do not appear inherently superior to multiple less-stateful threads managed by a fast scheduler.
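For reference, here’s roughly what such a state machine looks like when hand-rolled. This is a sketch only: the Countdown task and its names are made up, it is not the compiler’s actual lowering, and driving it assumes the futures crate.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// Hand-rolled equivalent of a tiny async fn: all state lives in this
// enum rather than on a thread's stack — the "tight state machine".
enum Countdown {
    Running(u32),
    Done,
}

impl Future for Countdown {
    type Output = ();

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        let this = self.get_mut(); // fine because Countdown is Unpin
        match *this {
            Countdown::Running(0) | Countdown::Done => {
                *this = Countdown::Done;
                Poll::Ready(())
            }
            Countdown::Running(n) => {
                // "Suspend": record the next state and ask to be polled again.
                *this = Countdown::Running(n - 1);
                cx.waker().wake_by_ref();
                Poll::Pending
            }
        }
    }
}

fn main() {
    // Any executor can drive it; block_on is from the `futures` crate.
    futures::executor::block_on(Countdown::Running(3));
}
```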
Specifically with io_uring: You can fire off quite a number of requests, not incurring a context switch …
As described here, you would still need to switch to kernel mode and back for the syscalls. The extra work that comes from assuming processes are hostile to each other should be easy to avoid among threads of a common process, as they are obviously not hostile to each other and already share a memory space. The synchronization required to handle multiple tasks should be the same regardless of whether they run on one thread under a userland scheduler or on multiple threads under the OS scheduler.
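For concreteness, here is what the batching looks like from Rust — a sketch using the io-uring crate (assumes Linux; the file path and user_data tag are arbitrary). Several requests can be pushed into the queue before a single submit syscall pays the one kernel transition:

```rust
use io_uring::{opcode, types, IoUring};
use std::os::unix::io::AsRawFd;

fn main() -> std::io::Result<()> {
    // One ring with room for 8 queued submissions.
    let mut ring = IoUring::new(8)?;

    let file = std::fs::File::open("/etc/hostname")?;
    let mut buf = vec![0u8; 1024];

    // Build a read request; more entries could be pushed here before
    // submitting, so N requests cost a single syscall.
    let read_e = opcode::Read::new(types::Fd(file.as_raw_fd()), buf.as_mut_ptr(), buf.len() as _)
        .build()
        .user_data(0x42);

    // Safety: the fd and buffer must stay valid until completion.
    unsafe {
        ring.submission().push(&read_e).expect("submission queue is full");
    }

    // The only kernel transition for this batch.
    ring.submit_and_wait(1)?;

    let cqe = ring.completion().next().expect("completion queue is empty");
    assert!(cqe.result() >= 0, "read error: {}", cqe.result());
    println!("read {} bytes", cqe.result());
    Ok(())
}
```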
Anyhow, your mode of inquiry is fundamentally wrong in the first place: …
I’m not interested in saying async is the best just because it appears to work well currently. That’s not the right way to decide how we should do things in the future; it’s just a statement of how things are. I agree that if your only goal is to get the fastest thing now, with no critical thought, then async does appear to be faster. I am unconvinced it must fundamentally be the case.
Threads are callbacks.
You can’t say async is the fundamentally better model because threading is purposely crippled in the browser.
The conversation at hand is not “how to do io in the browser”. It’s “async is not inherently better than threads”.
I assume by performance you mean CPU usage per io request. Each io call should require a switch to the kernel and back. When you do blocking io, the switch back is delayed (the OS runs other threads while you wait), but it is not more taxing. How could it be possible for there to be a difference?
The only way I have heard that threads are expensive, in the context of handling many io requests, is stack usage. You can tell the OS to give the thread less memory (a statically determined stack size) when it’s spawned, so this is not a fundamental issue with threads.
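For example, with std’s thread builder (the 64 KiB figure is an arbitrary illustration; you’d pick it from your measured maximum stack usage):

```rust
use std::thread;

fn main() {
    // Ask the OS for a 64 KiB stack instead of the multi-MiB default.
    let handle = thread::Builder::new()
        .stack_size(64 * 1024)
        .spawn(|| {
            // Blocking io or other work happens here on a small stack.
            println!("hello from a small-stack thread");
        })
        .expect("failed to spawn thread");

    handle.join().unwrap();
}
```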
Go ahead and spin up a web worker and transfer a bunch of data to it and tell us how long you had to wait.
The time to transfer data to one thread is a function of io speed. Why would this have anything to do with the concurrency model?
The “do something while waiting for something else” is not a reason to use async. That’s why blocking system calls and threads exist.
Threads don’t need to be expensive. Max stack usage can be determined statically before choosing the size when spawning a thread.
Any other reasons?
Probably not, for DDoS/security reasons. You would need to use something like nohash-hasher to get no-ops.
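If it helps, a hand-rolled sketch of what such a no-op integer hasher looks like (the nohash-hasher crate does essentially this; IdentityHasher and FastMap are names I made up):

```rust
use std::collections::HashMap;
use std::hash::{BuildHasherDefault, Hasher};

// "Hasher" that just passes the integer key through, with no mixing.
// This gives up the hash-flooding resistance of the default SipHash.
#[derive(Default)]
struct IdentityHasher(u64);

impl Hasher for IdentityHasher {
    fn finish(&self) -> u64 {
        self.0
    }
    fn write(&mut self, _bytes: &[u8]) {
        unimplemented!("only integer keys are supported")
    }
    fn write_u64(&mut self, n: u64) {
        self.0 = n;
    }
}

type FastMap<V> = HashMap<u64, V, BuildHasherDefault<IdentityHasher>>;

fn main() {
    let mut m: FastMap<&str> = FastMap::default();
    m.insert(7, "seven");
    assert_eq!(m.get(&7), Some(&"seven"));
}
```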
I’ve had no problem for years.
The biggest issue I’ve had was forgetting I committed something on one device before committing on another. Then I had two branches, where one had “conflict” in the name. I just deleted all the conflict files and everything continued as normal. As long as your repo is never corrupted before syncing, worst case you should be able to find and delete all the conflict files.
Syncthing conflict files include the source of the conflict in their name, so you could just delete all files whose conflict came from one device and keep everything from the other.
If you’re worried, you could just ignore your ‘.git’ folder in Syncthing, since you’re purposely not committing during this. Then sync through git when you finally commit your changes on a device.
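A sketch of that, assuming the repo sits at the root of the synced folder (Syncthing reads patterns from a .stignore file in that root):

```
// .stignore — keep repo metadata out of the sync set
.git
```

A repo nested deeper inside the folder would need a pattern like **/.git instead.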
If only there was some syncing thing that would let you move arbitrary files between devices.
GrapheneOS has had better compatibility with sandboxed Google services for a while now. microG is worse.
Did you measure that empirically? GSam indicates it accounts for only around 1% of battery drain.
Feel free to elaborate.