cross-posted from: https://hilariouschaos.com/post/13325
Uhhh, I guess I’ve seen a lot of people concerned about AI destroying jobs recently, but I remember some years back… weren’t we kind of rooting for AI to destroy some jobs?
Doctors, lawyers, and other such professionals have held legal monopolies on their professions, creating artificial barriers to competition… I was very much looking forward to their “professions” being leveled for having defended such destructive compulsory environments, from schools to workplaces.
As for the critics who say, “would you really want your doctor to have a degree from ‘youtube university’?” (answer: yes, in some cases, indeed), I can now look forward to AI replacing such doctors, with the normies not complaining about it.
Of course, the various “professionals” voluntarily agreeing to allow competition in their professions (letting competing schools open and dropping license requirements for certain professions) would be an acceptable alternative. But since these “professionals” refuse to give up their (stolen?) privileges, I guess I (we?) were rooting for AI to destroy their (taken?) privileges in this way.
Anyone else still have this attitude? (I suppose some of the other disrupted “unregulated” or less regulated industries are more collateral damage in this “attack”)
Why would a legal monopoly on a profession go away just because AI can do the job? The point of a legal monopoly is that others can do the job and presumably successfully get hired, and the government wants to prevent those people from getting hired.
It may not go away, but if a person could, for example, just look up answers to their legal questions, that would seem to make lawyers’ jobs redundant in a lot of cases, hence creating AI/technologically driven unemployment.
Assuming it doesn’t fail miserably at law like it does now.
It does, but all law is written down and all court cases are documented. It shouldn’t be hard for someone to build an LLM system on top of the laws and court cases.
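A toy sketch of what the retrieval half of such a “look it up in the laws” tool might look like: index a handful of statute snippets and return the closest match to a question by bag-of-words cosine similarity. The statute names and text here are invented purely for illustration, and a real system would feed the retrieved text to an LLM (and use a full corpus plus better embeddings) rather than stop at keyword matching:

```python
# Minimal retrieval sketch over a hypothetical statute corpus.
# All snippets and section names below are made up for illustration.
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase and split into simple word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical statute snippets (not real laws).
CORPUS = {
    "OHS s.25": "employers must take every precaution reasonable "
                "to ensure a safe work environment",
    "Tenancy Act s.12": "a landlord must give the tenant written "
                        "notice before entering the unit",
    "Traffic Act s.3": "a driver must stop at a red signal and "
                       "yield to pedestrians",
}

def lookup(question: str) -> str:
    """Return the corpus entry most similar to the question."""
    q = Counter(tokenize(question))
    return max(CORPUS, key=lambda k: cosine(q, Counter(tokenize(CORPUS[k]))))
```

The hard part, as the replies below point out, isn’t this matching step; it’s that many passages *look* relevant without actually controlling the question.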
Tough to say. Most LLMs are shite at coding too, and that’s much more mechanical
Yes, exactly. Coding is much more technical. Law is just normal text. If you can give it a news article and ask it to pull out the most important stuff for you, why wouldn’t it be able to do the same with laws?
There are a number of reasons.
First, law is technical in a different way than coding. The classic joke about Bill Clinton asking what the definition of “is” is does have a basis in reality: legal arguments can turn on incredibly technical definitions.
Second, stare decisis is a thing, which means you don’t just need to know what’s in all the laws and all the cases, you need to know which precedents supersede or supplement each other. While it’s rare, there are times when some case from 1600s common law ends up applying in court. It’s like finding an old opcode in an 8086 manual – can you still use it? What can you surmise about it?
On a technical level, the biggest strength of LLMs becomes a weakness here, because there are so many things that look like they could be the right answer but aren’t.
I know one question I asked ChatGPT (which we’ve established sucks at this, but hear me out) was about hazard gas monitoring in industrial environments. I asked what particular laws applied to it. It created good-looking fake citations out of whole cloth and even wrote out entire laws that don’t exist. But even if it could stop itself from doing that, there’s a logical leap you need to take to answer the question: in reality it’s likely covered by the lines in occupational health and safety acts that say something like “employers must do everything possible to ensure a safe work environment.”