Sometimes I’ll run into a baffling issue with a tech product — headphones, Google apps like Maps or Search, Apple products, Spotify, other apps, and so on — and when I look for solutions online I sometimes discover this has been an issue for years. Sometimes for many, many years.

These tech companies are sometimes ENORMOUS. How is it that these issues persist? Why do some things end up being so inefficient, unintuitive, or clunky? Why do I catch myself saying “oh my dear fucking lord” under my breath so often when I use tech?

Are there no employees who check forums? Does the architecture become so huge and messy that something seemingly simple is actually super hard to fix? Do these companies not have teams that test this stuff?

Why is it so pervasive? And why does some of it seem to be ignored for literal years? Sometimes even a decade!

Is it all due to enshittification? Do they trap us in as users and then stop giving a shit? Or is there more to it than that?

  • stealth_cookies@lemmy.ca · 3 months ago
    This is a topic that could fill a novel for how much there is to consider, but in the end it comes down to resources and companies trying to choose what is best for the company overall. For a company to do anything, it is giving up many other things it could be doing instead. Whether it is limited budgets, limited personnel, or company priorities, every decision is a tradeoff that means you aren’t doing something else.

    Most companies prioritize releasing new products so they can start getting revenue from them as soon as possible. A new product has the largest potential market, and thus makes shareholders happy to see revenue coming in. The sales of a new product are the easiest sales in most products’ lifecycles. Additionally, releasing new products helps keep you ahead of competitors. So ongoing maintenance work is de-prioritized in favor of working on new things.

    The goal of testing is to simulate potential use cases of a product and ensure that it will work as expected once the customer has it in their hands. It is impossible to fully test a product in a finite amount of time, so tests are created to expose flaws within a reasonable search space of the expected uses. If an issue is found, it then needs to be evaluated to decide whether it is worth fixing and when. Many factors affect this, for example:

    • How much would it cost to fix?
    • How much time would it take to fix?
    • Does it need to be fixed for launch or can it be a running change?
    • How many customers are actually going to see the issue? Is it just a small annoyance for them or will it cause returns/RMAs?
    • Is it within the expected use case of the product?
    • Can we mitigate it in software/firmware instead of changing hardware?
    • Is it a compliance/regulatory issue?
    • Would this bring in new customers for the product?
    • Was this done a specific way for a reason?

    Unfortunately, after considering all this, the result is often that it isn’t worth the effort to fix something, but it is considered.
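
    To make that tradeoff concrete, here is a rough sketch of that kind of triage as a scoring function. Everything in it is hypothetical: the field names, weights, and threshold are made up for illustration, not how any particular company actually scores bugs.

        # Hypothetical bug-triage sketch; names, weights, and thresholds are invented.
        from dataclasses import dataclass

        @dataclass
        class Issue:
            fix_cost_hours: float      # engineering time to fix
            affected_users_pct: float  # share of customers likely to hit it
            causes_returns: bool       # small annoyance vs. RMA-level problem
            in_expected_use: bool      # within the product's intended use case
            compliance_risk: bool      # regulatory issues get fixed regardless
            software_mitigable: bool   # patchable without hardware changes

        def worth_fixing(issue: Issue) -> bool:
            """Return True if the (made-up) benefit outweighs the (made-up) cost."""
            if issue.compliance_risk:
                return True                    # regulatory problems aren't optional
            benefit = issue.affected_users_pct
            if issue.causes_returns:
                benefit *= 5                   # returns/RMAs cost real money
            if not issue.in_expected_use:
                benefit *= 0.2                 # out-of-scope edge cases rank low
            cost = issue.fix_cost_hours * (0.5 if issue.software_mitigable else 1.0)
            return benefit > cost

        # A niche annoyance that 2% of users hit, fixable in software:
        niche_bug = Issue(fix_cost_hours=40, affected_users_pct=2,
                          causes_returns=False, in_expected_use=True,
                          compliance_risk=False, software_mitigable=True)
        print(worth_fixing(niche_bug))  # False: it sits in the backlog for years

    Run it and the niche bug comes back “not worth it”, which is exactly how a minor-but-real annoyance ends up ignored release after release.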