No, see the “AI” makes all the decisions we want to make but aren’t allowed to. Nobody can get mad at us, because computers are math and math is always right.
As the other commenter mentioned, this is a feature. A higher rate of denied claims that can be scapegoated is good for profits.
This isn’t new, either. Earlier waves of automation and “computational analysis” had the same character. For example, if you build a simple Bayesian model on demographic data to decide who to insure and who not to, purely to make more money, you’d almost certainly end up with a racist model, because it directly reflects the injustices already present in society. It then helps exacerbate them by denying coverage or setting rates along racial lines.
This still happens, actually. Insurers just trim out the explicitly racial features and assemble proxies from allegedly neutral ones. They don’t even have to try to be racist for this to happen: the models simply hold a mirror up to societal injustice and try to make a buck by accounting for it.
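To make the proxy point concrete, here’s a toy sketch in Python (entirely synthetic data and made-up feature names like `zip_cluster`, not anyone’s real underwriting model): the protected attribute is dropped before training a scikit-learn naive Bayes classifier, yet the model’s predicted denial rates still split by group, because a correlated “neutral” feature carries the information right back in.

```python
# Toy sketch of proxy discrimination (synthetic data only, no real insurer's model).
# The protected attribute is never given to the classifier, but a correlated
# "neutral" feature (a made-up zip-code cluster) lets the model reproduce the
# biased historical outcomes anyway.
import numpy as np
from sklearn.naive_bayes import BernoulliNB

rng = np.random.default_rng(0)
n = 50_000

# Protected attribute (never shown to the model).
group = rng.integers(0, 2, n)

# "Neutral" proxy: zip-code cluster, strongly correlated with group
# (segregated housing is the real-world analogue).
zip_cluster = np.where(rng.random(n) < 0.85, group, 1 - group)

# A second feature with no group correlation, for contrast.
prior_claims = rng.integers(0, 2, n)

# Historical labels encode the injustice: group 1 was denied more often.
p_deny = 0.15 + 0.25 * group + 0.10 * prior_claims
denied = (rng.random(n) < p_deny).astype(int)

# Train only on the "race-blind" features.
X = np.column_stack([zip_cluster, prior_claims])
model = BernoulliNB().fit(X, denied)

# The model never saw `group`, yet its predicted denial probabilities diverge
# by group because zip_cluster carries that information.
pred = model.predict_proba(X)[:, 1]
for g in (0, 1):
    print(f"group {g}: mean predicted denial prob = {pred[group == g].mean():.3f}")
```

Run it and the two printed means come out far apart even though the race column was “removed”; that’s the whole trick behind the “it’s just the math” excuse.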
And they get away with it using the same excuses. “Oh it’s just the math, don’t point fingers at me!”
Almost every applied sociological analysis used for business ends up doing something like this.
Handing off such important shit to AI seems like a really poor decision. Whoever made that decision needs to be strung up by their toes.