I think there are specific industrial problems for which AI is indeed transformative.
Just one example I’m aware of is the AI-accelerated Nazca Lines survey, which revealed many geoglyphs we were not previously aware of.
However, this type of use case just isn’t relevant to most people, whose reliance on LLMs is “write an email to a client saying xyz” or “summarise this email that someone sent to me”.
One of my favorite examples is “smart paste”. Got separate address fields (city, state, zip, etc.)? Have the user copy the full address, click “Smart paste”, and feed the clipboard to an LLM with a prompt to transform it into the data your form needs. Absolutely game-changing imho.
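For what it’s worth, the plumbing for that is tiny. A minimal sketch, assuming an OpenAI-compatible chat API; the model name and field list here are placeholders, not necessarily what the poster actually uses:

```python
# Rough sketch of "smart paste": hand the clipboard text to an LLM and ask it
# to return the form's fields as JSON. Model name, field list, and prompt are
# placeholders; any chat-completion-style API would work the same way.
import json

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

FIELDS = ["street", "city", "state", "zip"]

def smart_paste(clipboard_text: str) -> dict:
    prompt = (
        "Extract the following fields from the text below and reply with JSON "
        f"only, using exactly these keys: {', '.join(FIELDS)}. "
        "Use an empty string for any field you cannot find.\n\n"
        f"Text:\n{clipboard_text}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(resp.choices[0].message.content)

# e.g. smart_paste("Jane Doe, 123 Main St, Springfield, IL 62704")
# -> {"street": "123 Main St", "city": "Springfield", "state": "IL", "zip": "62704"}
```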
Or data ingestion from email - many of my customers get emails from their customers with instructions in them that someone at the company has to convert into form fields in the app. Instead, we provide an email address (some-company-inbound@myapp.domain), feed the incoming emails into an LLM, ask it to extract any details it can (number of copies, post-processing, page numbers, etc.), and have that auto-fill the fields for the customer to review before approving the incoming details.
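A rough sketch of what that pipeline might look like, again assuming an OpenAI-compatible API; the field names, model, and review step are simplified placeholders rather than the poster’s actual implementation:

```python
# Rough sketch of the email-ingestion idea: take the plain-text body of an
# inbound message, ask an LLM to pull out whatever job details it can find,
# and stage the result for a human to review before anything is committed.
# Field names and model are illustrative; a real system would also handle
# multipart messages, attachments, and validation.
import json
from email import message_from_string

from openai import OpenAI

client = OpenAI()

JOB_FIELDS = ["copies", "post_process", "page_numbers", "due_date"]

def extract_job_details(raw_email: str) -> dict:
    msg = message_from_string(raw_email)
    body = msg.get_payload()  # assumes a simple single-part plain-text message
    prompt = (
        "A customer sent the email below about a print job. Extract any of "
        f"these fields you can find ({', '.join(JOB_FIELDS)}) and reply with "
        "JSON only. Omit keys you are not sure about.\n\n" + body
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    # The extracted values only pre-fill the form; a person approves them
    # before the job details are saved.
    return json.loads(resp.choices[0].message.content)
```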
There are so many incredibly powerful use cases, and yet folks are doing wasteful and pointless things with these tools.
If I’m brutally honest, I don’t find these use cases very compelling.
The separate-address-fields problem could easily be solved without an LLM. The only reason there isn’t already a common solution is that it just isn’t that much of a problem.
Data ingestion from email will never be as efficient and accurate as simply having a customer fill out a form directly.
These things might make someone mildly more efficient at their job, but given the resources LLMs require, is it really worth it?
Well, the address one was just an example. Smart paste is useful for more than addresses - think non-standard data formats where a customer provided janky data that needs wrangling. It happens often enough, and with unique enough data, that an LLM is going to be better than a bespoke algorithm.
The email one though? We absolutely have dedicated forms, but that doesn’t stop end users from sending emails to our customer anyway - the email ingestion via LLM is so our customer’s front desk folks can just forward the email in and have it make a best guess, to save some time. When the customer is a huge shop handling thousands of incoming jobs per day, the small value-adds here and there add up to quite a savings for them (and thus, value we offer).
Given we run the LLMs on low-power machines in-house … yeah, they’re worth it.
Yeah, still not convinced.
I work in a field which is not dissimilar. Teaching customers to email you their requirements so your LLM can have a go at filling out the form just seems ludicrous to me.
Additionally, the models you’re using took stupid amounts of power to produce, even if you can now run them on low-power machines.
Anyhow, neither of us is going to change our minds without actual data, which neither of us has. Who knows, a decade from now I might be forwarding client emails to an LLM so it can fill out a form for me, at which point I’ll know I was wrong.
That’s really neat, thanks for sharing that example.
In my field (biochemistry), there are also quite a few truly awesome use cases for LLMs and other machine learning tools, but I have been dismayed by how the AI hype train has been working. Mainly, I worry that the overhyped nonsense will drown out the legitimately useful stuff, and that the useful stuff may struggle to get coverage/funding once the hype has burnt everyone out.
I suspect that this is “grumpy old man” type thinking, but my concern is the loss of fundamental skills.
As an example, like many other people, I’ve spent the last few decades developing written communication skills, emailing clients regarding complex topics. Communication requires not only an understanding of the subject, but also an understanding of the recipient’s circumstances and the likelihood of the thoughts and actions that may arise as a result.
Over the last year or so I’ve noticed my assistants using LLMs to draft emails, with deleterious results. In many cases this use reduces a thinking, feeling, experienced, and trained assistant to an automaton regurgitating words from publicly available references. The usual response to this concern is that my assistants are using the tool incorrectly, which is certainly the case, but my argument is that the use of the tool precludes the expenditure of the requisite time and effort to really learn.
Perhaps this is a kind of circular argument, like asking why kids need to learn handwriting when nothing needs to be handwritten.
It does seem as though we’re on a trajectory towards stupider professional services though, where my bot emails your bot who replies and after n iterations maybe they’ve figured it out.
Oh yeah, I’m pretty worried about that from what I’ve seen in biochemistry undergraduates. I was already concerned about how little structured writing support science students receive, and I’m seeing a lot of over-reliance on ChatGPT.
With emails and the like, I find that I struggle with the pressure of a blank page/screen, so rewriting a mediocre draft is immensely helpful - but that strategy is only viable if you’re prepared to go in and do some heavy editing. If it were a case of people honing their editing skills, that might not be so bad, but I have been seeing lots of output with the unmistakable ChatGPT tone.
In short, I think it is definitely “grumpy old man” thinking, but that doesn’t mean it’s not valid (I say this as someone who is probably too young to be a grumpy old crone yet).