• Björn Tantau@swg-empire.de · 305↑ · 4 months ago

    Fake news.

    Both Windows and Linux have their respective SIGTERM and SIGKILL equivalents. And both usually try SIGTERM before resorting to SIGKILL. That’s what systemd’s dreaded “a stop job is running” is. It waits a minute or so for the SIGTERM to be honoured before SIGKILLing the offending process.
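
    The escalation described here can be sketched in a few lines of C (a toy demo, not how systemd is implemented; the 1-second grace period stands in for systemd's ~90-second default):

    ```c
    #include <signal.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int ready[2];
        pipe(ready);

        pid_t pid = fork();
        if (pid == 0) {
            /* Child: ignore SIGTERM, simulating a process that refuses to stop. */
            signal(SIGTERM, SIG_IGN);
            write(ready[1], "x", 1);      /* tell the parent the handler is installed */
            for (;;) pause();
        }

        char c;
        read(ready[0], &c, 1);            /* wait until the child is ready */

        kill(pid, SIGTERM);               /* the polite request */
        sleep(1);                         /* grace period (systemd waits ~90s) */

        int status;
        if (waitpid(pid, &status, WNOHANG) == 0) {
            kill(pid, SIGKILL);           /* the not-so-polite follow-up */
            waitpid(pid, &status, 0);
        }

        if (WIFSIGNALED(status) && WTERMSIG(status) == SIGKILL)
            puts("SIGTERM ignored; child had to be SIGKILLed");
        return 0;
    }
    ```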

    • 𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍@midwest.social · 50↑ 5↓ · 4 months ago

      Also fake because zombie processes.

      I once spent several angry hours researching zombie processes in a quest to kill them by any means necessary. Ended up rebooting, which was a sort of baby-with-the-bathwater solution.

      Zombie processes still infuriate me. While I’m not a Rust developer, nor do I particularly care about the language, I’m eagerly watching Redox OS, as it looks like the microkernel OS with the best chance of reaching useful desktop status. A good microkernel would address so many of the worst aspects of Linux.

      • CameronDev@programming.dev · 83↑ 1↓ · edited · 4 months ago

        Zombie processes are already dead. They aren’t executing, the kernel is just keeping a reference to them so their parent process can check their return code (waitpid).

        All processes become zombies briefly after they exit; it’s just that their parents usually wait on them correctly. If the parent exits without waiting on the child, the child gets reparented to init, which will wait on it. If the parent stays alive but doesn’t wait on the child, the child remains a zombie until the parent exits and triggers the reparenting.

        It’s not really Linux’s fault if processes don’t clean up their children correctly, and I’m 99% sure you can zombie a child on Redox, given it’s a POSIX OS.

        Edit: https://gist.github.com/cameroncros/8ae3def101efc08be2cd69846d9dcc81 - Rust program to generate orphans.
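
        That lifecycle is easy to see in a few lines of C (a minimal sketch, not the gist’s Rust program; Linux-specific because it peeks at /proc):

        ```c
        #include <stdio.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main(void) {
            pid_t pid = fork();
            if (pid == 0)
                _exit(42);             /* child dies immediately... */

            sleep(1);                  /* ...and sits as a zombie, since we haven't waited yet */

            /* Third field of /proc/<pid>/stat is the process state: Z = zombie. */
            char path[64], state = '?';
            snprintf(path, sizeof path, "/proc/%d/stat", (int)pid);
            FILE *f = fopen(path, "r");
            if (!f) return 1;
            fscanf(f, "%*d %*s %c", &state);
            fclose(f);
            printf("state before wait: %c\n", state);

            int status;
            waitpid(pid, &status, 0);  /* reap it; the kernel drops the table entry */
            printf("child exit code: %d\n", WEXITSTATUS(status));
            return 0;
        }
        ```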

        • senkora@lemmy.zip · 3↑ · 4 months ago

          I haven’t tried this, but if you just need the parent to call waitpid on the child’s pid then you should be able to do that by attaching to the process via gdb, breaking, and then manually invoking waitpid and continuing.
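
          If you want to try it, the attach-and-reap trick looks roughly like this (a sketch only - the PIDs are placeholders you’d read off ps, and you’ll likely need root to attach):

          ```shell
          # Find zombies (state Z) and their parent PIDs:
          ps -eo pid,ppid,stat,comm | awk '$3 ~ /^Z/'

          # Attach to the parent and have it reap the zombie on the spot:
          gdb -p <PARENT_PID> --batch -ex 'call (int) waitpid(<ZOMBIE_PID>, 0, 0)'
          ```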

          • CameronDev@programming.dev · 8↑ · edited · 4 months ago

            I think that should do it. I’ll try later today and report back.

            Of course, this risks getting into an even worse state, because if the parent later tries to correctly wait for its child, the call will hang.

            Edit: Will clean up the orphan/defunct process.

            If the parent ever tried to wait, it would either get ECHILD if there are no children, or block until a child exited.

            Will likely cause follow-on issues - reaping someone else’s children is generally frowned upon :D.
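
            The ECHILD case is trivial to demonstrate (a tiny C sketch; wait(2) fails immediately when the caller has no children):

            ```c
            #include <assert.h>
            #include <errno.h>
            #include <stdio.h>
            #include <sys/wait.h>

            int main(void) {
                /* This process never forked, so there is nothing to wait for. */
                pid_t r = wait(NULL);
                assert(r == -1 && errno == ECHILD);
                puts("wait() failed with ECHILD: no children to reap");
                return 0;
            }
            ```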

      • MNByChoice@midwest.social · 26↑ 1↓ · 4 months ago

        Zombie processes are hilarious. They are the unkillable package delivery person of the Linux system. They have some data that must be delivered before they can die. Before they are allowed to die.

      Sometimes just listening to them is all they want. (strace them, or redirect their output anywhere.)

        Sometimes, the whole village has to burn. (Reboot)

      • aubeynarf@lemmynsfw.com · 15↑ · edited · 4 months ago

        Performance is the major flaw of microkernels, and it is what has prevented the half-dozen or more serious attempts at this from succeeding.

        Incurring context switching for low-level operations is just too slow.

        An alternative might be a safe/provable language for kernel and drivers where the compiler can guarantee properties of kernel modules instead of requiring hardware guarantees, and it ends up in one address space/protection boundary. But then the compiler (and its output) becomes a trusted component.

        • nickwitha_k (he/him)@lemmy.sdf.org · 4↑ 1↓ · 4 months ago

          Thank you. Came here to say this. Microkernels are great for limited scope devices like microcontrollers but really suffer in general computing.

          • uis@lemm.ee · 1↑ · 4 months ago

            Quite the opposite. Most firmware that microcontrollers run is one giant kernel. Some microcontrollers don’t even have context switching at all. And I’m not even starting to talk about MMU.

            • nickwitha_k (he/him)@lemmy.sdf.org · 4↑ · 4 months ago

              I was not meaning to say that all microcontrollers (or microcontroller firmwares) run a microkernel. Rather, microcontrollers are an environment where one can work well: the limited scope of what the device is expected to do, and of the peripherals it must support, is much smaller, which shrinks the potential impact of context switches.

              For some good examples of microkernels for such purposes, take a look at FreeRTOS, ChibiOS, or Zephyr pre-1.6 (at which point architecture changed to a monolith because it is creeping towards general computing functionality).

              Some microcontrollers don’t even have context switching at all.

              As long as there’s some processing RAM and sufficient ROM, I’m sure that it can be crammed in there via firmware (in a sub-optimal way that makes people who have to maintain the code, including your future self, hate you and wish a more appropriate part were used).

              And I’m not even starting to talk about MMU.

              Some madlads forked Linux to get it to work without an MMU, even getting it merged into the mainline kernel: https://en.m.wikipedia.org/wiki/ΜClinux

              So, doable. Advisable? Probably not in most cases, but that’s subjective.

              • uis@lemm.ee · 2↑ · 4 months ago

                take a look at FreeRTOS

                AFAIK FreeRTOS always ran drivers in kernel.

                As long as there’s some processing RAM and sufficient ROM, I’m sure that it can be crammed in there via firmware

                You can’t even emulate MPU without MPU. The only way is running bytecode, which is still not context switching.

                Some madlads forked Linux to get it to work without an MMU, even getting it merged into the mainline kernel: https://en.m.wikipedia.org/wiki/ΜClinux

                You are correct here. Should have said MPU instead.

                • nickwitha_k (he/him)@lemmy.sdf.org · 2↑ · 4 months ago

                  AFAIK FreeRTOS always ran drivers in kernel.

                  At least in the docs, I see it described as a microkernel but, with a kernel that small, the differences are probably academic (and I’ll leave that to people with more formal background in CS than myself).

                  You can’t even emulate MPU without MPU. The only way is running bytecode, which is still not context switching.

                  You are correct here. Should have said MPU instead.

                  Oh yes! That makes a lot more sense. I’ve been on-and-off looking at implementing multithreading and multiprocessing in CircuitPython. Memory protection is a big problem with making it work reliably.

        • uis@lemm.ee · 2↑ · 4 months ago

          where the compiler can guarantee properties of kernel modules instead of requiring hardware guarantees

          Then you would need to move the compiler into the kernel. Well, there is one: BPF (and derivatives). It’s Turing-incomplete by design.

      • Diabolo96@lemmy.dbzer0.com · 9↑ · 4 months ago

        RedoxOS will likely never become feature-complete enough to be a stable, useful and daily-drivable OS. It’s currently a hobbyist OS that is mainly used as a testbed for OS programming in Rust.

        If the RedoxOS devs could port the Cosmic DE, it would become one of the best toy OSes and might get used on some serious projects. That could bring in enough funding for it to become a viable OS used by megacorps on infrastructure where security is critical, which may lead it to develop into a truly daily-drivable OS.

      • uis@lemm.ee · 2↑ · 4 months ago

        Ok, how would a change of kernel fix a userspace program not reading a return value? And if you just want to use a microkernel, then use either HURD or whatever DragonflyBSD uses.

        But generally microkernels are not the solution to the problems most people claim they would solve, especially in the post-meltdown era.

        • This particular issue could be solved in most cases in a monolithic kernel. That it isn’t, is by design. But it’s a terrible design decision, because it can lead to situations where (for example) a zombie process locks a mount point and prevents unmounting, because the kernel insists it’s still in use by the zombie process - which the kernel provides no mechanism for terminating.

          This is provable by experiment on Linux using FUSE filesystems. Create a program that is guaranteed to become a zombie. Run it within a filesystem mounted by an in-kernel module, like a remote nfs mount. You now have a permanently mounted NFS mount point. Now mount something using FUSE, say a remote WebDAV share. Run the same zombie process there. Again, the mount point is unmountable. Now kill the FUSE process itself. The mount point will be unmounted and disappear.

          This is exactly how microkernels work. Every module is killable, crashable, upgradable - all without forcing a reboot or affecting any processes not using the module. And in a well-designed microkernel, even processes using the module can in many cases continue functioning as if the restarted kernel module never changed.

          FUSE is really close to the capabilities of microkernels, except it’s only filesystems. In a microkernel, nearly everything is like FUSE. A Linux kernel compiled such that everything is a loadable module, and not hard-linked into the kernel, is close to a microkernel, except without the benefits of actually being a microkernel.

          Microkernels are better. Popularity does not prove superiority, except in the metric of popularity.

          • uis@lemm.ee · 2↑ · edited · 4 months ago

            This particular issue could be solved in most cases in a monolithic kernel. That it isn’t, is by design.

            It was (see CLONE_DETACHED here) and is (source).

            Create a program that is guaranteed to become a zombie. Run it within a filesystem mounted by an in-kernel module, like a remote nfs mount. You now have a permanently mounted NFS mount point.

            Ok, this is not a really good implementation. I’m not sure the standard requires zombie processes to keep mount points (unless the executable is located in that fs) until the return value is read. Unless there is a call to get the CWD of another process. Oh, wait - can’t ptrace issue a syscall on behalf of a zombie process, or something like that? Or use the VFS of that process? If so, then it makes sense to keep the mount point.

            Every module is killable, crashable, upgradable - all without forcing a reboot or affecting any processes not using the module.

            except without the benefits of actually being a microkernel.

            Except Linux does it too. If the graphics module crashes, I can still SSH into the system. And when I developed a driver for the RK3328 TRNG, it crashed a lot. I replaced it without rebooting.

            Microkernels are better. Popularity does not prove superiority, except in the metric of popularity.

            As I said, we live in post-meltdown world. Microkernels are MUCH slower.

            • As I said, we live in post-meltdown world. Microkernels are MUCH slower.

              I’ve heard this from several people, but you’re the lucky number: I’d now heard it enough times that I bothered to gather some references to refute it.

              First, this is an argument that derived from first generation microkernels, and in particular, MINIX, which - as a teaching aid OS, never tried to play the benchmark game. It’s been repeated, like dogma, through several iterations of microkernels which have, in the interim, largely erased most of those performance leads of monolithic kernels. One paper notes that, once the working code exceeds the L2 cache size, there is marginal advantage to the monolithic structure. A second paper running benchmarks on L4Linux vs Linux concluded that the microkernel penalty was only about 5%-10% slower for applications than the Linux monolithic kernel.

              This is not MUCH slower, and - indeed - unless you’re doing HPC applications, is close enough to be unnoticeable.

              Edit: I was originally going to omit this, as it’s propaganda from a vested interest, and includes no concrete numbers, but this blog entry from a product manager at QNX specifically mentions using microkernels in HPC problem spaces, which I thought was interesting, so I’m post-facto including it.

              • uis@lemm.ee · 1↑ · edited · 4 months ago

                First, this is an argument that derived from first generation microkernels, and in particular, MINIX, which - as a teaching aid OS, never tried to play the benchmark game.

                Indeed, first-generation microkernels were so bad that Jochen Liedtke, in a rage, created L3 “to show how it’s done”. While it was faster than existing microkernels, it was still slow.

                One paper notes that, once the working code exceeds the L2 cache size, there is marginal advantage to the monolithic structure.

                1. The paper was written in the pre-meltdown era.
                2. The paper is about hybrid kernels, and a gutted Mach (XNU) is used as the example.
                3. Nowadays (after meltdown) all cache levels are usually invalidated during a context switch. Processors try to add mechanisms to avoid this, but they create new vulnerabilities.

                A second paper running benchmarks on L4Linux vs Linux concluded that the microkernel penalty was only about 5%-10% slower for applications than the Linux monolithic kernel.

                  1. Waaaaay before meltdown era.

                I’ll mark quotes from the paper as double quotes.

                a Linux version that executes on top of a first-generation Mach-derived µ-kernel.

                1. So, hybrid kernel. Not as bad as microkernel.

                The corresponding penalty is 5 times higher for a co-located in-kernel version of MkLinux, and 7 times higher for a user-level version of MkLinux.

                Wait, what? Co-located in-kernel? So, loadable module?

                In particular, we show (1) how performance can be improved by implementing some Unix services and variants of them directly above the L4 µ-kernel

                1. No surprise here. Hybrids are faster than microkernels. Kinda proves my point that moving closer to monolithic improves performance.

                Right now I stopped at the end of second page of this paper. Maybe will continue later.

                this blog entry

                Will read.

        • areyouevenreal@lemm.ee · 1↑ · 4 months ago

          But generally microkernels are not the solution to the problems most people claim they would solve, especially in the post-meltdown era.

          Can you elaborate? I am not an OS design expert, and I thought microkernels had some advantages.

          • uis@lemm.ee · 2↑ · 4 months ago

            Can you elaborate? I am not an OS design expert, and I thought microkernels had some advantages.

            Many people think that microkernels are the only way to run one program on multiple machines without modifying it. A counterexample to that claim is Plan 9, which had this capability with a monolithic kernel.

            • areyouevenreal@lemm.ee · 2↑ · 4 months ago

              That’s not something I ever associated with microkernels to be honest. That’s just clustering.

              I was more interested in having minimal kernels with a bunch of processes handling low level stuff like file systems that could be restarted if they died. The other cool thing was virtualized kernels.

              • uis@lemm.ee · 1↑ · 4 months ago

                Well, even monolithic Linux can restart fs driver if it dies. I think.

      • Vilian@lemmy.ca · 1↑ 1↓ · 4 months ago

        nah, you can have microkernel features on Linux, but you can’t have monolithic kernel features on a microkernel. there are zero arguments in favor of a microkernel, except being a novel project

        • ORLY.

          Do explain how you can have micro kernel features on Linux. Explain, please, how I can kill the filesystem module and restart it when it bugs out, and how I can prevent hard kernel crashes when a bug in a kernel module causes a lock-up. I’m really interested in hearing how I can upgrade a kernel module with a patch without forcing a reboot; that’d really help on Arch, where minor, patch-level kernel updates force reboots multiple times a week (without locking me into an -lts kernel that isn’t getting security patches).

          I’d love to hear how monolithic kernels have solved these.

          • frezik@midwest.social · 3↑ · 4 months ago

            I’ve been hoping that we can sneak more and more things into userspace on Linux. Then, one day, Linus will wake up and discover he’s accidentally made a microkernel.

          • areyouevenreal@lemm.ee · 2↑ · 4 months ago

            I thought the point of lts kernels is they still get patches despite being old.

            Other than that, though, you’re right on the money. I don’t think they know what the characteristics of a microkernel are. I think they mean that a microkernel can’t have all the features of a monolithic kernel; what they fail to realise is that might actually be a good thing.

            • I thought the point of lts kernels is they still get patches despite being old.

              Well, yeah, you’re right. My shameful admission is that I’m not using LTS because I wanted to play with bcachefs and it’s not in LTS. Maybe there’s a package for LTS now that’d let me at it, but, still. It’s a bad excuse, but there you go.

              I think a lot of people also don’t realize that most of the performance issues have been worked around, and if RedoxOS is paying attention to advances in the microkernel field and is not trying to solve every problem in isolation, they could end up with close to monolithic-kernel performance. Certainly close to Windows performance, and that seems good enough for industry.

              I don’t think microkernels will ever compete in the HPC field, but I highly doubt anyone complaining about the performance penalty of microkernel architecture would actually notice a difference.

              • areyouevenreal@lemm.ee · 2↑ · 4 months ago

                Windows is a hybrid kernel, and has some interesting layers of abstraction, all of which make it slower. It’s also full of junkware these days. So beating it shouldn’t be that hard.

                Yeah to be fair in HPC it’s probably easier to just setup a watchdog and reboot that node in case of issues. No need for the extra resilience.

                • That’s my point. If you’re l33t gaming, what matters is your GPU anyway. If HPC, sure, use whatever architecture gets you the most bang for your buck, which is probably going to be a monolithic kernel (but, maybe not - nanokernels allow processes basically direct access to hardware, with minimal abstraction, like X11 DRI, and might allow even faster solutions to be programmed). For most people, the slight improvement in performance of a monolithic kernel over a modern, optimized microkernel design will probably not be noticeable.

                  I keep getting people telling me monolithic kernels are way faster, dude, but most are just parroting the state of things from decades ago and ignoring the many advancements microkernels like L4 have made in the intervening years. But I need to go find links and put together references before I counter-claim, and right now I have other things I’d rather be doing.

          • Vilian@lemmy.ca · 1↑ · 4 months ago

            you don’t need a microkernel to install modules, nor to keep a crash in one module from bringing the whole kernel down - you program it isolated. they don’t do that now because it’s unnecessary, but android does, and there’s work being done in that direction: https://www.phoronix.com/news/Ubuntu-Rust-Scheduler-Micro

            the thing is that it’s harder to do that - that’s why no one does - but it’s not impossible; you also need to give the kernel the foundation to support it

              • Vilian@lemmy.ca · 1↑ · 4 months ago

                bro thinking a chromecast OS is gonna run on google servers 💀. microkernels have their utility in embedded systems, we know; saying they are a replacement for monolithic kernels is dumb. also, can’t companies do different/hacky projects anymore?

    • Mojave@lemmy.world · 35↑ · 4 months ago

      Clicking “End task” in the Windows task manager has definitely let a hanging task live on in its non-responsive state for multiple hours.

      • Björn Tantau@swg-empire.de · 16↑ 1↓ · 4 months ago

        Been a while since I’ve been on Windows but I distinctly remember some button to kill a task without waiting. Maybe they removed it to make Windows soooo much more user friendly.

        • Rev3rze@feddit.nl · 19↑ · 4 months ago

          Off the top of my head: right-click the task and hit “End process”. That has literally never failed me. Back in Windows XP it might sometimes not actually kill the process, but then there was always the “Kill process tree” button to fall back on.

          • Zoot@reddthat.com · 7↑ · 4 months ago

            Yep, typically “Kill Process Tree” was like the nuke from orbit. You’ll likely lose any unsaved data, but it works nicely when Steam has 12 processes running at once.

            • Aux@lemmy.world · 3↑ · 4 months ago

              It’s not really a nuke, as some processes might be protected. The real nuke is to use debugger privileges. Far Manager can kill processes using debugger privileges; that will literally nuke anything, and in an instant: the app won’t even receive any signals.

        • blind3rdeye@lemm.ee · 7↑ · 4 months ago

          The normal Windows task manager’s ‘end task’ button just politely asks the app to close - but then later will tell the user if the app is unresponsive, and offer to brutally murder it instead.

          There is also the Sysinternals Process Explorer, which is basically an “expert” version of the task manager. Process Explorer does allow you to just kill a task outright.

      • Aux@lemmy.world · 9↑ · 4 months ago

        “End task” doesn’t terminate the app; it only sends a message to the window asking it to close itself. The app then decides on its own what to do. For example, if the app has multiple windows open, it might close the active one but continue running with the other windows open. Or it might ignore the message completely.

    • DefederateLemmyMl@feddit.nl · 11↑ · 4 months ago

      That’s what systemd’s dreaded “a stop job is running” is

      The worst part of that is that you can’t quickly log in to check what it is (so maybe you can prevent it in the future?), or kill it anyway, because it’s likely to be something stupid and unimportant. And if it actually was important, well… it’s gonna be shot in the head in a minute anyway, and there’s nothing you can do to prevent it, so what’s the point of delaying?

      • Björn Tantau@swg-empire.de · 21↑ · 4 months ago

        so what’s the point of delaying?

        In the best case the offending process actually does shut down cleanly before the time is up. For example, some databases like Redis keep written data in memory for fast access before actually writing it to disk. If you were to kill such a process before all the data is written, you’d lose it.

        So, admins of servers like these might even opt to increase the timeout, depending on their configuration and disk speed.

        • DefederateLemmyMl@feddit.nl · 12↑ · edited · 4 months ago

          I know what it theoretically is for; I still think it’s a bad implementation.

          1. It often doesn’t tell you clearly what it is waiting for.
          2. It doesn’t allow you to check out what’s going on with the process that isn’t responding, because logins are already disabled.
          3. It doesn’t allow you to cancel the wait and terminate the process anyway. Nine times out of ten when I get it, it’s been because of something stupid like a stale NFS mount or a bug in a unit file.
          4. If it is actually something important, like your Redis example, it doesn’t allow you to cancel the shutdown or give it more time. Who’s to say your Redis instance will be able to persist its state to disk within 90 seconds, or any other arbitrary time?

          Finally, I think that well-written applications should be resilient to being terminated unexpectedly. If, like in your Redis example, you put data in memory without it being backed by persistent storage, you should expect to lose it. After all, power outages and crashes happen as well.
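
          That kind of resilience usually starts with a SIGTERM handler that just sets a flag and lets the main loop persist state and exit (a minimal sketch; the self-kill stands in for systemd sending the signal):

          ```c
          #include <signal.h>
          #include <stdio.h>
          #include <string.h>
          #include <unistd.h>

          static volatile sig_atomic_t stopping = 0;

          static void on_term(int sig) {
              (void)sig;
              stopping = 1;    /* only set a flag; do the real work outside the handler */
          }

          int main(void) {
              struct sigaction sa;
              memset(&sa, 0, sizeof sa);
              sa.sa_handler = on_term;
              sigaction(SIGTERM, &sa, NULL);

              /* Pretend systemd just sent us SIGTERM (we signal ourselves for the demo). */
              kill(getpid(), SIGTERM);

              while (!stopping)
                  pause();

              /* This is where you'd fsync your state to disk before exiting. */
              puts("caught SIGTERM, state flushed, exiting cleanly");
              return 0;
          }
          ```

          SIGKILL can’t be caught at all, of course, which is exactly why the on-disk state also has to be crash-safe.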

    • Avid Amoeba@lemmy.ca · 8↑ · 4 months ago

      Stop jobs are a systemd-ism, and they’re nice. I think the desktop environment kills its children on its own during reboot, and that might not be as nice: graphical browsers often complain about having been killed after a reboot in GNOME.

      • Perry@lemy.lol · 3↑ · edited · 4 months ago

        AFAIK running Firefox in a terminal and pressing ^C (SIGINT) has kind of the same effect as logging out or powering off in GNOME (SIGTERM, if you’re using systemd). This gives the browser (or any other process with crash recovery) enough time to save its data and exit gracefully, so the crash recovery works the next time it is run.

        Please correct me if I’m wrong

        • uis@lemm.ee · 5↑ · 4 months ago

          SIGTERM, if you’re using systemd

          It has been SIGTERM since the original init

    • Constant Pain@lemmy.world · 3↑ · 4 months ago

      Windows gives you the option to kill on shutdown if the app is trying to delay the process. I think it’s ideal.

    • dorumon@lemmy.world · 2↑ · 4 months ago

      BTW you can control how quickly systemd resorts to SIGKILL after sending SIGTERM. I don’t know why people complain so much about it. It’s really just there so that things on your computer end properly, without data corruption or anything bad going on after a reboot or the next time you turn on your computer.
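
      For reference, the per-unit knob is TimeoutStopSec, and DefaultTimeoutStopSec in /etc/systemd/system.conf sets the global default; myapp.service below is just an example name:

      ```ini
      # /etc/systemd/system/myapp.service.d/override.conf
      # (create it with: systemctl edit myapp.service)
      [Service]
      # Send SIGKILL 10 seconds after SIGTERM, instead of the 90-second default:
      TimeoutStopSec=10s
      ```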

  • MonkderDritte@feddit.de · 176↑ · 4 months ago

    SIGTERM is the graceful way tho? It nicely asks programs to please close and clean up. Unlike SIGKILL, which bombs the shop and creates orphans.

    • Thann@lemmy.ml · 38↑ · 4 months ago

      And we give Steam a few milliseconds to comply, so IDK what they’re complaining about…

      • MonkderDritte@feddit.de · 11↑ 2↓ · edited · 4 months ago

        ?

        You’re supposed to close Steam via the menu or the systray. If you run it in a CLI, you’ll see that it then cleans up a whole bunch of stuff for a few seconds.

      • TechAnon@lemm.ee · 7↑ 1↓ · 4 months ago

        Steam is clunky… Exit → “Oh, you want to exit? Let me launch a new window letting you know I’m shutting down, and take about 20 seconds, even though I was sitting here idle before you asked me to shut down.”

        See you tomorrow where I’ll validate your games again. Just in case!

    • SpaceCowboy@lemmy.ca · 20↑ 2↓ · 4 months ago

      Yup. And you can kill processes in Windows too, in the task manager. Or probably with a PowerShell command, but nobody’s gonna learn PowerShell LOL.

      There are nearly always equivalent functions in both Linux and Windows; it’s just that in Windows you’ve got to click around in more bullshit forms to find stuff. Or learn PowerShell, but again, LOL. They are both OSes, after all, and they do similar things. Just one might do them better than the other.

        • stetech@lemmy.world · 7↑ · 4 months ago

          It might be nice and all that (I wouldn’t know), but it’s not a sub- nor superset of glorious POSIX

          • pantyhosewimp@lemmynsfw.com
            link
            fedilink
            arrow-up
            7
            ·
            4 months ago

            Boy oh boy would you hate AppleScript. This is what I have to type to throw files in the trash instead of deleting them.

            tell application "Finder" to delete POSIX file "/full/fucking/path/to/file"
            
            • PlexSheep@infosec.pub
              link
              fedilink
              arrow-up
              2
              arrow-down
              1
              ·
              4 months ago

              Why do you need to “tell” some “application”? Why do you need a “finder” if you know the absolute path already? Does this imply that “finder” always runs, ready to be told something?

              • pantyhosewimp@lemmynsfw.com
                link
                fedilink
                arrow-up
                2
                ·
                4 months ago

                Finder is the macOS equivalent of Windows Explorer (maybe, it’s been a while). I assume Linux desktop suites have various similar processes. In other words, a second, optional layer (with more features) to access the runtime libc file manipulation API.

          • capital@lemmy.world
            link
            fedilink
            arrow-up
            2
            ·
            4 months ago

            I really appreciate the consistency. People also dog it for being verbose to write but it makes it so much more legible.

            /shrug

            • MrPommeroy@lemmy.world
              link
              fedilink
              arrow-up
              4
              ·
              4 months ago

              I usually write verbose code and use self-documenting function names, but to have such a limited set of verbs available can be frustrating. They could at least have used a proper dictionary and included all verbs. Then have a map of synonyms that are preferred, like instead of ‘create’ they prefer ‘new’ (which isn’t even a verb).

          • lud@lemm.ee
            link
            fedilink
            arrow-up
            1
            ·
            4 months ago

            You don’t have to follow best practices though. You can name shit pretty much whatever you want.

        • SpaceCowboy@lemmy.ca
          link
          fedilink
          arrow-up
          1
          ·
          4 months ago

          It’s one of those things where I’m sure it’s fine if you learn it. But it’s not DOS CMD, but also not bash.

          So instead of improving CMD to have more features or just going all the way and offering an official bash implementation, they want me to learn a third thing. Just don’t have time for it.

          • capital@lemmy.world
            link
            fedilink
            arrow-up
            1
            ·
            4 months ago

            It’s second to none if you have to get things done in a Windows environment, especially if dealing with Active Directory.

            But if not, I don’t blame you for not picking it up. Right tool for the job and all that.

            • SpaceCowboy@lemmy.ca
              link
              fedilink
              arrow-up
              1
              ·
              4 months ago

              I do use it occasionally, but I gotta google for the command every time. So not exactly learning it.

      • lud@lemm.ee
        link
        fedilink
        arrow-up
        3
        arrow-down
        1
        ·
        4 months ago

        I use powershell quite a bit at work and I really like it.

        If anything it’s much easier to read than the abomination called bash.

        • JackbyDev@programming.dev
          link
          fedilink
          English
          arrow-up
          1
          ·
          edit-2
          4 months ago

          I wanna learn PowerShell but I only really learn extra stuff like that if I have to. My work computer is a Mac now and has been since 2019. At home I don’t do enough on Windows to really warrant it. I used to know how to do “sudo” in PowerShell, which was useful. Beats the hell out of restarting as admin.

          The “object” approach instead of everything as text seems desirable.

  • Ironfacebuster@lemmy.world
    link
    fedilink
    arrow-up
    75
    ·
    4 months ago

    Almost every time I restart my Windows PC from an update, it sits on the “closing apps screen” or “restarting” screen then gives up completely and I have to force it to shut down/restart

    And, just about every other time I restart with an update, it closes apps and then just fully shuts down after the update!

    It’s super graceful! 😭

    • MonkeMischief@lemmy.today
      link
      fedilink
      arrow-up
      27
      ·
      edit-2
      4 months ago

      EVERY TIME!!

      “A program is preventing Windows from shutting down”

      The program: a generic, nondescript white box icon with no title.

      Clicking shutdown/restart anyway becomes standard procedure at this point.

    • 𝕸𝖔𝖘𝖘@infosec.pub
      link
      fedilink
      English
      arrow-up
      10
      arrow-down
      1
      ·
      edit-2
      4 months ago

      “Restarting” for 15 minutes. Then it crashes. Now I have to reinstall the updates and go through it all over again. I hate how crappy the Windows update process has become.

      Except for one of the immutable versions I have, Linux almost never needs to reboot after an update. Upgrades, yes, but not standard updates. And even after upgrades, it just works.

      I usually close all programs before shutting down / rebooting anyway (a habit I picked up in the Win95 days, when it would crash if programs prevented it from shutting down), so I don’t really feel this SIGKILL issue.

      • Shareni@programming.dev
        link
        fedilink
        arrow-up
        3
        ·
        4 months ago

        Linux almost never needs to reboot after an update

        Doesn’t it often need a reboot to apply some updates?

        I remember reading something along those lines when I was researching why Fedora installs some updates after a reboot.

        • 𝕸𝖔𝖘𝖘@infosec.pub
          link
          fedilink
          English
          arrow-up
          4
          ·
          4 months ago

          Fedora is the immutable one I was referring to that does need to reboot. Linux Mint and OpenSuse only need to reboot after an upgrade. I’ve never had to reboot them after updates. Mileage may vary, of course, as different people have different software, tools, and libraries installed.

          • Shareni@programming.dev
            link
            fedilink
            arrow-up
            5
            ·
            4 months ago

            I was talking about regular fedora. It’s not that you have to reboot, but you don’t get to use those updates until you do. The most obvious example is updating the kernel and its modules.

            • 𝕸𝖔𝖘𝖘@infosec.pub
              link
              fedilink
              English
              arrow-up
              4
              ·
              4 months ago

              You’re correct. A kernel update would fall under the umbrella of a system upgrade, where the system needs to shut down to allow underlying components to be reloaded.

          • Vilian@lemmy.ca
            link
            fedilink
            arrow-up
            2
            ·
            4 months ago

            To be fair, Fedora downloads and applies the update before the reboot; Windows downloads, applies, and then reboots. That’s why it takes so much time.

            • 𝕸𝖔𝖘𝖘@infosec.pub
              link
              fedilink
              English
              arrow-up
              1
              ·
              4 months ago

              Right, but Fedora failures still allow me to boot. Windows failures force an uninstallation of the update, killing even more time. There are good and bad things about each approach.

      • uis@lemm.ee
        link
        fedilink
        arrow-up
        2
        ·
        4 months ago

        I was doing my project while the system updated itself from source. Шindows should take notes here.

        And I’m not even talking about CRIU, where you can save the entire program state to disk, reboot, and restore it back to the state before the reboot.

      • Ironfacebuster@lemmy.world
        link
        fedilink
        arrow-up
        2
        ·
        4 months ago

        As Microsoft adds ads in more and more places, I consider moving over to Linux, but I just have too many files and weird Windows-only programs that I use, so I can’t

        I also haven’t really found a desktop environment I really like yet, so I’m open to suggestions for dual booting!

        • 𝕸𝖔𝖘𝖘@infosec.pub
          link
          fedilink
          English
          arrow-up
          1
          ·
          4 months ago

          I pretty much always recommend Linux Mint Cinnamon for anyone entering Linux for the first time, or anyone who wants something to just work 98% of the time. I use Mint Debian Edition (testing it out; so far, so good, and it’s quickly entering first place in terms of recommendations, as it seems just as stable and uses Debian packages instead of Ubuntu’s), OpenSuse with KDE (less for beginners and more for those who want “eye candy” and some nostalgia), and Fedora Silverblue (currently have an update issue with its certificates, so can’t really recommend it yet). I’ve found very few Windows programs that don’t work within WINE (it’s generally the more complex, system-file-dependent programs that fail), so you may find that all of your Windows-only programs work perfectly fine under WINE.

          With Mint (and others, I’m sure), you can install multiple DEs and test them out, then remove those you don’t like. Or keep them all and play DE roulette I guess lol

          • Ironfacebuster@lemmy.world
            link
            fedilink
            arrow-up
            2
            ·
            4 months ago

            I’ll definitely check some of those out, thanks! I have a little experience with Linux since every self hosted server PC I’ve built has always had Ubuntu Server, but even then I was tempted to try and dual boot Mint

  • NeatNit
    link
    fedilink
    arrow-up
    66
    arrow-down
    1
    ·
    4 months ago

    Windows’ might be complex, but it is NOT graceful. If you have Notepad open with unsaved text, then shutdown will never complete - but nothing on the screen will make this obvious to a non-technical person.

  • Laser@feddit.org
    link
    fedilink
    arrow-up
    53
    ·
    edit-2
    4 months ago

    SIGTERM is a graceful request to the application to terminate itself, and despite their names, kill and killall default to SIGTERM (they’re also useful for sending other signals to processes, like STOP, CONT and HUP).

    kill -9 though…
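    A rough demo of the difference, using sleep as the victim (assumes a POSIX shell with ps available; the expected stop state is ‘T’ and the expected exit status is 128 + 15):

    ```shell
    # kill's default signal is SIGTERM; named signals cover pause/resume too.
    sleep 30 &
    pid=$!
    kill -STOP "$pid"               # freeze the process (not a kill at all)
    sleep 1                         # give the state change a moment to land
    state=$(ps -o stat= -p "$pid")  # 'T' means stopped
    kill -CONT "$pid"               # thaw it
    kill "$pid"                     # no flag at all: plain SIGTERM
    status=0
    wait "$pid" || status=$?        # 143 = 128 + 15, i.e. died from SIGTERM
    echo "state=$state status=$status"
    ```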

  • Phoenixz@lemmy.ca
    link
    fedilink
    arrow-up
    36
    ·
    4 months ago

    Linux actually also has a graceful shutdown process. It tells apps it’s shutting down by sending SIGTERM, and it’s up to each process to flush data asap, do whatever they gotta do, and then shut down.

    If they don’t listen then linux will indeed pull out the baseball bat chainsaw katana and make processes die whether they want to or not.
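    The escalation can be sketched in a few lines of shell (a toy version of what systemd does, with a 2-second grace period instead of the default ~90 s; the `trap "" TERM` victim just simulates a program that won’t listen):

    ```shell
    #!/bin/sh
    # Stubborn process: ignores SIGTERM (the ignore survives the exec).
    sh -c 'trap "" TERM; exec sleep 60' &
    pid=$!
    sleep 1                              # let it set up

    kill -TERM "$pid"                    # step 1: ask nicely
    grace=2
    while [ "$grace" -gt 0 ] && kill -0 "$pid" 2>/dev/null; do
        sleep 1
        grace=$((grace - 1))
    done

    if kill -0 "$pid" 2>/dev/null; then
        kill -KILL "$pid"                # step 2: the baseball bat
        echo "had to SIGKILL"
    fi
    wait "$pid" 2>/dev/null || true      # reap it; exit status will be 137
    ```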

  • BmeBenji@lemm.ee
    link
    fedilink
    arrow-up
    29
    ·
    edit-2
    4 months ago

    The number of times I have had the Windows shutdown process tell me “please close <some windows process that I never opened> before shutting down” is fucking annoying. Wipe your own ass, Windows.

    • SkaveRat
      link
      fedilink
      arrow-up
      15
      ·
      4 months ago

      Bonus points if that exact dialogue is the cause. Had that happen more than once. No idea how

    • JackbyDev@programming.dev
      link
      fedilink
      English
      arrow-up
      7
      ·
      4 months ago

      The funniest one is when it tells you this but by the time you get back to close the ones that didn’t close they’ve already closed on their own. Confused Travolta.gif

  • jet@hackertalks.com
    link
    fedilink
    English
    arrow-up
    23
    arrow-down
    3
    ·
    4 months ago

    Every computer should have a hard cutoff power switch, when thrown it cuts all physical electricity.

    Off means off.

    The current trend of soft power buttons, parasitic loads to service IPMI or Management Engine, WoL, etc., is just bad practice and removes agency from the user.

    Who hasn’t wanted to turn off a laptop to put it in a bag, only for the shutdown to trigger an update that takes 10 minutes while you’re running late, so the laptop overheats. Or worse, the laptop turns on while in the bag!

    The fact that Windows has a poor ability to apply updates live or in an A/B fashion is no excuse for soft power-off buttons. Sure, it’s nice to flush filesystem write caches, but I’ve been burned by fake power-off far more than by incomplete file writes.

    • Toes♀@ani.social
      link
      fedilink
      arrow-up
      8
      ·
      4 months ago

      This is why I’m only interested in laptops with removable batteries but it’s become rarer and rarer.

    • CrazyLikeGollum@lemmy.world
      link
      fedilink
      English
      arrow-up
      6
      ·
      4 months ago

      At least for desktop computers, you have the power switch on the back of the PSU. Assuming your PSU is actually ATX compliant and not some proprietary or otherwise non-standard bullshit.

      That switch is inline with the AC input and will kill power to the device completely.

    • SpikesOtherDog@ani.social
      link
      fedilink
      arrow-up
      4
      ·
      4 months ago

      This is what frustrates me about HP laptops. The biggest issues users see with them could be resolved with a hard reset to clear chip states, but you have to perform a hard reset by powering off, unplugging, and holding power for 30 seconds. A shut down or a restart doesn’t fully reset all chips and network/audio issues seem to persist.

    • QuarterSwede@lemmy.world
      link
      fedilink
      arrow-up
      4
      arrow-down
      1
      ·
      4 months ago

      This is one of the greatest reasons to get a MacBook. It just sleeps instantly and seemingly sleeps forever (loses about 2% overnight). No need to deal with Windows’ BS hibernation mode that takes longer to wake than just powering it off and then on.

      Now just to get work to let me get a MacBook as my next hardware instead of another Thinkpad (most of my work is cloud based or in the Office suite).

    • Aux@lemmy.world
      link
      fedilink
      arrow-up
      5
      arrow-down
      2
      ·
      4 months ago

      Just hold the power button on your laptop for a few seconds and you’ll get hard power off.

  • nucleative@lemmy.world
    link
    fedilink
    English
    arrow-up
    19
    ·
    4 months ago

    EarlyOOM is your friend. Tweak it to save the most important stuff and kill irrelevant stuff first when low on memory.
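    For example, via the daemon’s --prefer/--avoid regexes (Debian-style /etc/default/earlyoom; the process names here are only illustrations):

    ```shell
    # earlyoom config sketch: start acting at 5% free RAM, sacrifice browsers
    # first, and try hard to spare the display server and SSH.
    EARLYOOM_ARGS="-m 5 --prefer '(^|/)(chromium|firefox)$' --avoid '(^|/)(Xorg|sshd|systemd)$'"
    ```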

    • interdimensionalmeme@lemmy.ml
      link
      fedilink
      arrow-up
      5
      ·
      4 months ago

      Why can’t browsers discard tabs to disk, instead of this ridiculous assumption that the server will still exist to redownload the tab content from?

      • Phoenixz@lemmy.ca
        link
        fedilink
        arrow-up
        2
        ·
        4 months ago

        Well… though there are reasons to save pages to disk, the server being still up is a fair assumption, really.

        • biddy@feddit.nl
          link
          fedilink
          arrow-up
          1
          ·
          4 months ago

          It isn’t on a mobile device, where you might go out of wifi or cellular coverage. But it’s probably a good thing, as I don’t want my tab habit wearing out my disk.

  • OR3X@lemm.ee
    link
    fedilink
    arrow-up
    17
    ·
    4 months ago

    Meanwhile Windows regularly gets hung up for several minutes on the “shutting down…” screen for no fucking reason. Only happens when I’m in a hurry too.

    • Tryptaminev@lemm.ee
      link
      fedilink
      arrow-up
      11
      arrow-down
      1
      ·
      4 months ago

      i love it when the “this program is keeping the computer from shutting down” program is the shutting down program

      • lud@lemm.ee
        link
        fedilink
        arrow-up
        2
        arrow-down
        2
        ·
        4 months ago

        In theory it is a good thing because it’s usually programs with unsaved stuff.