• bamboo@lemm.ee
      link
      fedilink
      arrow-up
      24
      ·
      5 months ago

      As a fellow RISC-V supporter, I think the rise of ARM is going to help RISC-V software support and eventually adoption. They’re not compatible, but right now developers everywhere are working to ensure their applications are portable and not tied to x86. I also imagine that when it comes to emulation, ARM is going to be a lot easier to emulate than x86, possibly even amenable to static recompilation.

      • deathmetal27@lemmy.world
        link
        fedilink
        arrow-up
        5
        arrow-down
        1
        ·
        5 months ago

        They’re not compatible

        This is what concerns me. ARM could dominate the market because almost everyone would develop apps supporting it and leave RISC-V behind. It could become like Itanium vs AMD64 all over again.

        • zygo_histo_morpheus@programming.dev
          link
          fedilink
          arrow-up
          9
          ·
          edit-2
          5 months ago

           Well, right now most people develop apps supporting x86 and leave everything else behind. If they’re already supporting x86 + ARM, maybe adding RISC-V as a third option would be a smaller step than adding a second architecture was.
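
           To make that concrete, here is a minimal sketch (nothing project-specific; the cross compilers named in the comments are the usual Debian/Ubuntu packages, adjust for your distro). The same portable C file builds for all three architectures with nothing but a different compiler invocation:

           ```c
           /* portable.c - builds unchanged for x86-64, arm64 and riscv64, e.g.:
            *   gcc                    portable.c -o portable-x86_64
            *   aarch64-linux-gnu-gcc  portable.c -o portable-arm64
            *   riscv64-linux-gnu-gcc  portable.c -o portable-riscv64
            */
           #include <stdio.h>

           int main(void) {
               /* Compiler-defined macros report which target we were built for. */
           #if defined(__x86_64__)
               puts("built for x86-64");
           #elif defined(__aarch64__)
               puts("built for arm64");
           #elif defined(__riscv)
               puts("built for risc-v");
           #else
               puts("built for some other architecture");
           #endif
               return 0;
           }
           ```

           For code that stays portable like this, the third target is mostly another entry in the build matrix.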

          • deathmetal27@lemmy.world
            link
            fedilink
            arrow-up
            5
            ·
            5 months ago

            It greatly depends on the applications.

            Porting Windows exclusive games to Linux is a small step as well, but most developers don’t do it because they cannot justify the additional QA and debugging time required to port them over. Especially since Linux’s market share is small.

            The reason Itanium failed was because the architecture was too different from x86 and porting x86 applications over required significant effort and was error prone.

             For RISC-V to get any serious attention from developers, I think it needs roughly 40-50% market share with OEMs alongside ARM. Otherwise, RISC-V will be seen as a niche architecture and developers will avoid porting their applications to it.

            • LeFantome@programming.dev
              link
              fedilink
              arrow-up
              1
              ·
              edit-2
              5 months ago

              We agree.

               My point is that “porting” is not such a big deal if it is just a recompile. If you already target Linux with a portable code base ( to support both ARM and amd64, for example ), then the burden of adding RISC-V is pretty low. Most of the support will be the same between RISC-V and ARM if they target the same Linux distros.

              The Linux distros themselves are just a recompile as well and so the entire Open Source ecosystem will be available to RISC-V right away.

              It is a very different world from x86 vs Itanium with amd64 added to the mix.

              Look at Apple Silicon. Fedora already has a full distribution targeting Apple Silicon Macs. The biggest challenges have been drivers, not the ISA. The more complete the Linux ecosystem is on ARM, the easier it will be to create distros for RISC-V as well.

              Porting Windows games to Linux is not a small step. It is massive and introduces a huge support burden. That is much different than just recompiling your already portable and already Linux hosted applications to a new arch.

              With games, I actually hope the Win32 API becomes the standard on Linux as well because it is more stable and reduces the support burden on game studios. It may even be ok if they stay x86-64. Games leverage the GPU more than the CPU and so are not as greatly impacted running the CPU under emulation.

        • LeFantome@programming.dev
          link
          fedilink
          arrow-up
          2
          ·
          5 months ago

          That is a risk on the Windows side for sure. Also, once an ISA becomes popular ( like Apple Silicon ) it will be hard to displace.

           Repurposing Linux software for RISC-V should be easy though, and I would expect even proprietary software that targets Linux to support it ( if they support anything beyond x86-64 ).

           Itanium was a weird architecture and you either bet on it or you did not. RISC-V and ARM are not so different.

          The other factor is that there is a lot less assembly language being used and, if you port away from x64, you are probably going to get rid of any that remains as part of that ( making the app more portable ).
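
           To illustrate that last point with a made-up sketch (not from any real code base): the usual cleanup is to fence off the x86-only fast path behind the preprocessor and keep a portable C fallback that ARM, RISC-V and everything else picks up automatically.

           ```c
           #include <stdint.h>
           #include <stdio.h>

           /* Illustrative example: count set bits in a 64-bit word. */
           #if defined(__x86_64__) && defined(__POPCNT__)
           #include <immintrin.h>
           static uint32_t popcount64(uint64_t x) {
               /* x86-specific intrinsic; only compiled on x86-64 with POPCNT enabled. */
               return (uint32_t)_mm_popcnt_u64(x);
           }
           #else
           static uint32_t popcount64(uint64_t x) {
               /* Portable fallback: same result on ARM, RISC-V and anything else. */
               uint32_t n = 0;
               while (x) {
                   x &= x - 1;  /* clear the lowest set bit */
                   n++;
               }
               return n;
           }
           #endif

           int main(void) {
               printf("%u\n", popcount64(0xF0F0F0F0F0F0F0F0ull));  /* 32 on every architecture */
               return 0;
           }
           ```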

            • LeFantome@programming.dev
              link
              fedilink
              arrow-up
              1
              ·
              5 months ago

              Once a chip architecture gets popular on Windows, it will be hard to displace. ARM has already become popular on macOS ( via Apple Silicon ) so we know that is not going anywhere. If ARM becomes popular on Windows ( perhaps via X Elite ), it will be hard to displace as the more popular option. That makes RISC-V on Windows a more difficult proposition.

              I do not think that RISC-V on Linux has the same obstacles other than that most hardware will be manufactured for Windows or Mac and will use the chips popular with those operating systems.

              • Norah - She/They@lemmy.blahaj.zone
                link
                fedilink
                English
                arrow-up
                1
                ·
                5 months ago

                I think you missed the forest for the trees, my friend. I was simply commenting on the fact that you made it sound like Apple Silicon is its own ISA.

  • colourlesspony@pawb.social
    link
    fedilink
    arrow-up
    55
    arrow-down
    2
    ·
    5 months ago

    I feel like Linux users benefit the most from ARM, since we can build our software natively for ARM with access to the source code.

      • RedWeasel@lemmy.world
        link
        fedilink
        English
        arrow-up
        22
        ·
        5 months ago

         Until RISC-V is at least as performant as top-of-the-line hardware from two years ago, it isn’t going to be of interest to most end users. Right now it is mostly hobbyist hardware.

         I also think a lot of trust is being put into it that is going to be misplaced. Just because the ISA is open doesn’t mean anything about the hardware built on it.

      • 737@lemmy.blahaj.zone
        link
        fedilink
        arrow-up
        8
        ·
        5 months ago

         RISC-V is already being used in MCUs such as the popular ESP32 line, so I’d say it’s looking pretty good for RISC-V. Instruction sets don’t really matter in the end though; it’s just licensing for the producer to deal with. It’s not like you’ll be able to make a CPU, or even something on the level of old 8-bit MCUs, at home any time soon, and RISC-V IC designs are typically proprietary too.

      • uis@lemm.ee
        link
        fedilink
        arrow-up
        2
        ·
        5 months ago

        Same goes for RV, OpenRISC, MIPS and other architectures.

    • benzmacx16v
      link
      fedilink
      arrow-up
      26
      arrow-down
      2
      ·
      5 months ago

       It doesn’t usually work that well in practice. I have been running an M1 MBA for the last couple of years (Asahi Arch and now the Asahi Fedora spin). More complex pieces of software typically have build systems and dependencies that are not compatible, or that just make hunting everything down a hassle.

      That said there is a ton of software that is available for arm64 on Linux so it’s really not that bad of an experience. And there are usually alternatives available for software that cannot be found.

      • arthurpizza@lemmy.world
        link
        fedilink
        English
        arrow-up
        9
        ·
        5 months ago

         Long-time Raspberry Pi user here; the only software I can’t load natively is Steam. What software are you having problems with on the M1?

        • Daeraxa@lemmy.ml
          link
          fedilink
          arrow-up
          5
          ·
          5 months ago

           Electron apps using older Electron versions that don’t support the 16K page size are probably the biggest offenders.
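
           For context, a minimal sketch (the 16 KiB figure is specific to Asahi’s kernels): page size is a runtime property you can query, rather than the 4096 bytes those older builds hard-code.

           ```c
           #include <stdio.h>
           #include <unistd.h>

           int main(void) {
               /* Ask the kernel instead of assuming 4096. */
               long page = sysconf(_SC_PAGESIZE);
               printf("page size: %ld bytes\n", page);  /* 16384 on Asahi's 16K kernels */
               return 0;
           }
           ```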

            • Daeraxa@lemmy.ml
              link
              fedilink
              arrow-up
              2
              ·
              5 months ago

              I can’t say I’m one who shares that sentiment seeing as the only two projects I’m involved with happen to be Electron based (by chance rather than intention). Hell, one of them is Pulsar which is a continuation of Atom which literally invented Electron.

      • 𝕨𝕒𝕤𝕒𝕓𝕚
        link
        fedilink
        arrow-up
        24
        ·
        edit-2
        5 months ago

         We can. The point is that Windows users can’t compile for ARM. They depend on the dev to do it. That will take some time, and some devs won’t do it at all.

        • sabreW4K3@lazysoci.alOP
          link
          fedilink
          arrow-up
          2
          ·
          5 months ago

           Aha. I see so many Docker projects with examples of how to build for ARM that I just assumed it was always that easy.

          • qaz@lemmy.world
            link
            fedilink
            arrow-up
            2
            ·
            5 months ago

             It’s easy to compile something for a certain architecture if you can compile it yourself and don’t have to beg another party to do so.

        • Daeraxa@lemmy.ml
          link
          fedilink
          arrow-up
          1
          ·
          5 months ago

          Is that a developer licence thing? I know GitHub recently announced Windows Arm runners that would be available to non-teams/enterprise tiers later this year.

          • RedWeasel@lemmy.world
            link
            fedilink
            English
            arrow-up
            5
            ·
            5 months ago

             It isn’t as simple as just compiling. Large programs like games then need to be tested to make sure the code doesn’t have bugs on ARM. Developers often use assembly to optimize performance, so those portions would need to be rewritten as well. And Apple has been the only large installed base of performant ARM consumer hardware in anything laptop or desktop, so there hasn’t been a strong install base to encourage many developers to port their stuff to Windows on ARM.

            • Daeraxa@lemmy.ml
              link
              fedilink
              arrow-up
              1
              ·
              edit-2
              5 months ago

              Yeah this has been our (well, my) statement on requests to put out ARM binaries for Pulsar. Typically we only put binaries out for systems we actually have within the team so we can test on real hardware and replicate issues. I would be hesitant to put out Windows ARM builds when, as far as I know, we don’t have such a device. If there was a sudden clamouring for it then we could maybe purchase a device out of the funds pot.

               The reason I was asking whether it was to do with developer licences is that we have already dealt with differences between x86 and ARM macOS builds: the former seems to happily run unsigned apps after a few clicks, whereas the latter makes you run commands in the terminal - not a great user experience.

              That is why I was wondering if the ARM builds for Windows required signing else they would just refuse to install on consumer ARM systems at all. The reason we don’t sign at the moment is just because of the exorbitant cost of the certificates - something we would have to re-evaluate if signing became a requirement.

  • GustavoM@lemmy.world
    link
    fedilink
    English
    arrow-up
    21
    arrow-down
    5
    ·
    5 months ago

     For me, ARM has already “won” this debate – convenience > performance all day errday.

    • deadbeef79000@lemmy.nz
      link
      fedilink
      arrow-up
      31
      arrow-down
      1
      ·
      5 months ago

      ARM won the mobile/tablet form factor right from the start. Apple popularised ARM on the desktop. Amazon popularised ARM in the cloud.

      Intel’s been busy shitting out crap like the 13900K/14900K and pretending that ARM and RISC-V aren’t going to eat their lunch.

      The only beef I have with ARM systems is the typical SoC formula, I still want to build systems from off the shelf components.

      I can’t wait.

      • uis@lemm.ee
        link
        fedilink
        arrow-up
        5
        ·
        5 months ago

        The only beef I have with ARM systems is the typical SoC formula, I still want to build systems from off the shelf components.

         I’m here with you. ARM and RISC-V could really use some standardization there.

        • deadbeef79000@lemmy.nz
          link
          fedilink
          arrow-up
          3
          ·
          5 months ago

           Thinking about it, the SoC idea could stop at the southern boundary of the chipset in x86 systems.

           Include the DDR memory controller, PCI controller, USB controllers, iGPUs, etc. Most of those have migrated into x86 CPUs now anyway (I remember having north and south bridge chipsets!).

           Leave the rest of the system (NICs, dGPUs, etc.) on the relevant buses.

      • bamboo@lemm.ee
        link
        fedilink
        arrow-up
        1
        ·
        5 months ago

         I’m both surprised and not surprised that ever since the M1, Intel seems to be doing basically nothing in the consumer space. Certainly losing their contract with Apple was a blow to their sales, and with AMD doing pretty well these days, ARM slowly taking over the server space where backwards compatibility isn’t as significant, and now Qualcomm coming to eat the Windows market, Intel just seems like a dying beast. Unless they do something magical, who will want an Intel processor in 5 years?

        • deadbeef79000@lemmy.nz
          link
          fedilink
          arrow-up
          2
          ·
          5 months ago

          I haven’t wanted an Intel processor for years. Their “innovation” is driven by marketing rather than technical prowess.

           The power envelope and microcode bullshit with the latest batch of 13900Ks, and again with the 14900Ks, was the final “last” straw.

           They were more interested in something they could brand as a competitor to Ryzen, then left everyone who bought one (and I bought three at work) holding the bag.

          We’ve not made the same mistake again.

          Intel dying and its corpse being consumed by its competitors is a fairy tale ending.

          • bamboo@lemm.ee
            link
            fedilink
            arrow-up
            2
            ·
            5 months ago

             I also haven’t wanted an Intel processor in a while. They used to be best in class for laptops prior to the M1, but they’re basically last now, behind Apple, AMD and Qualcomm. They might win in a few specific benchmarks that matter very little to people, and they are still the default option in most gaming laptops. For desktop use, the Ryzen family is much more compelling. For servers they still seem to have an advantage, but that is also an industry that requires longer-term contracts, which Intel has the infrastructure for more so than its competitors; even there, ARM is gaining ground with exceptional performance per watt.

  • librejoe@lemmy.world
    link
    fedilink
    arrow-up
    10
    arrow-down
    8
    ·
    5 months ago

     ARM is not any better than x86 when it comes to instructions. There’s a reason we stuck with x86 for a very long time. ARM is great because of its power efficiency.

    • skilltheamps@feddit.de
      link
      fedilink
      arrow-up
      10
      arrow-down
      1
      ·
      5 months ago

       That power efficiency is a direct result of the instructions: namely, smaller chips due to the reduced instruction set, in contrast to x86’s (legacy-bearing) complex instruction set.

      • 737@lemmy.blahaj.zone
        link
        fedilink
        arrow-up
        5
        arrow-down
        1
        ·
        5 months ago

         It’s really not. x86 (CISC) CPUs could be just as efficient as ARM (RISC) CPUs, since instruction sets (despite popular consensus) don’t really influence performance or efficiency. It’s just that the x86 CPU oligopoly had little interest in producing power-efficient CPUs, while ARM chip manufacturers were mostly making chips for phones and embedded devices, which made them focus on power efficiency instead of relentlessly maximizing performance. I expect the next few generations of Intel and AMD x86-based laptop CPUs to approach the power efficiency Apple and Qualcomm have to offer.

        • bamboo@lemm.ee
          link
          fedilink
          arrow-up
          1
          ·
          5 months ago

          All else being equal, a complex decoding pipeline does reduce the efficiency of a processor. It’s likely not the most important aspect, but eventually there will be a point where it does become an issue once larger efficiency problems are addressed.

          • 737@lemmy.blahaj.zone
            link
            fedilink
            arrow-up
            1
            ·
            edit-2
            5 months ago

             Yeah, but you could improve the less-than-ideal encoding with a relatively simple update - no need to throw out all the tools, great compatibility, and working binaries that Intel and AMD already have.

             It’s also not the ISA’s fault.

            • bamboo@lemm.ee
              link
              fedilink
              arrow-up
              1
              ·
              5 months ago

               Well, not exactly. You have to remove instructions at some point. That’s what Intel’s x86-S proposal is supposed to do. You lose some backwards compatibility, but the removed instructions are chosen to have the least impact on most users.

              • 737@lemmy.blahaj.zone
                link
                fedilink
                arrow-up
                1
                ·
                5 months ago

                Would this actually improve efficiency though or just reduce the manufacturing and development cost?

                • bamboo@lemm.ee
                  link
                  fedilink
                  arrow-up
                  1
                  ·
                  5 months ago

                  Instruction decoding takes space and power. If there are fewer, smaller transistors dedicated to the task it will take less space and power.

      • librejoe@lemmy.world
        link
        fedilink
        arrow-up
        1
        arrow-down
        1
        ·
        5 months ago

         Yes, I understand that and agree, but the reason x86 dominated is those QoL instructions that x86 has. On ARM you need to write more code to do the same thing x86 does. OTOH, if you don’t need to write a complex application, that isn’t a bad thing.

        • ProgrammingSocks@pawb.social
          link
          fedilink
          arrow-up
          1
          ·
          5 months ago

          You don’t need to write more code. It’s just that code compiles to more explicit/numerous machine instructions. A difference in architecture is only really relevant if you’re writing assembly or something like it.
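
           Rough illustration (typical codegen noted in the comments; exact output varies by compiler and flags): the same one-line C statement, which x86-64 can usually express as a single read-modify-write instruction, becomes a load/add/store sequence on a load-store ISA like ARM or RISC-V.

           ```c
           /* One C statement; the compiler emits whatever the target ISA needs. */
           void bump(long *counter) {
               /* x86-64, roughly:   addq $1, (%rdi)    - one read-modify-write instruction
                * AArch64, roughly:  ldr x1, [x0]
                *                    add x1, x1, #1
                *                    str x1, [x0]       - a load-store ISA needs three
                */
               *counter += 1;
           }
           ```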

          • librejoe@lemmy.world
            link
            fedilink
            arrow-up
            2
            ·
            5 months ago

             Sorry, I should have been more specific. I was talking about assembly code. I will state again that I am pro-ARM, and wish I were posting this from an ARM laptop running a distro.

    • frezik@midwest.social
      link
      fedilink
      arrow-up
      9
      ·
      edit-2
      5 months ago

      Arm is better because there are more than three companies who can design and manufacture one.

       Edit: And only one of the three x86 manufacturers is worth a damn, and it ain’t Intel.

       Edit 2: On further checking, VIA sold its CPU design division (Centaur) to Intel in 2021. VIA now makes things like SBCs, some with Intel, some ARM. So there are only two x86 manufacturers around anymore.

    • bamboo@lemm.ee
      link
      fedilink
      arrow-up
      3
      ·
      5 months ago

       We stuck with x86 forever because of backwards compatibility and because nobody had anything better. Now manufacturers do have something better, and it’s fast enough that emulation is good enough for backwards compatibility.

  • Sinfaen@beehaw.org
    link
    fedilink
    arrow-up
    2
    ·
    5 months ago

     Recently got Asahi running on an M1 MacBook Pro. Loving the battery life I get out of it.

  • Tekkip20@lemmy.world
    link
    fedilink
    arrow-up
    1
    ·
    5 months ago

     Does this possibly mean the end of x86, or will it be a coexistence scenario?

     I still believe that, as much as some people bark on about it, x86 will not die for a long time; it will keep kicking for quite a while yet.