• Maoo [none/use name]@hexbear.net
    11 months ago

    As the other commenter mentioned, this is a feature. A higher rate of denied claims is good for profits, especially when the denials can be blamed on an opaque model.

    This isn’t new, either. Earlier waves of automation and “computational analysis” had the same character. For example, if you fit a simple Bayesian model on demographic data to decide who to insure, you’d almost certainly end up with a racist model, because it directly reflects injustices already present in society. It then helps exacerbate them by denying coverage or setting rates along racial lines.

    This still happens, actually. Insurers just trim out the explicitly racial variables and assemble proxies from ostensibly neutral ones. Nobody even has to try to be racist for this to happen: the model simply holds a mirror up to societal injustice and makes a buck by pricing it in.

    And they get away with it using the same excuse: “Oh, it’s just the math, don’t point fingers at me!”

    Almost every applied sociological analysis used for business ends up doing something like this.