“The real benchmark is: the world growing at 10 percent,” he added. “Suddenly productivity goes up and the economy is growing at a faster rate. When that happens, we’ll be fine as an industry.”
Needless to say, we haven’t seen anything like that yet. OpenAI’s top AI agent — the tech that people like OpenAI CEO Sam Altman say is poised to upend the economy — still moves at a snail’s pace and requires constant supervision.
Correction: LLMs being used to automate shit don’t generate any value. The underlying AI technology is generating tons of value.
AlphaFold 2 has advanced biochemistry research in protein folding by multiple decades in just a couple of years, taking us from about 150,000 known protein structures to 200 million in a single year.
Well sure, but you’re forgetting that the federal government has pulled the rug out from under health research and thereby made it so there is no economic value in biochemistry.
Yeah, tbh, AI has been an insanely helpful tool in my analysis and writing. Never would I have been able to investigate appropriate statistical tests so thoroughly on my own. After following the sources and double-checking, of course, but still, super helpful.
Thanks. So the underlying architecture that powers LLMs has applications beyond language generation, like protein folding and DNA sequencing.
Image recognition models are also useful for astronomy. The largest known black hole jet was discovered recently, in part by using an AI model to sift through vast amounts of data.
https://www.youtube.com/watch?v=wC1lssgsEGY
This thing is so big it spans the voids between filaments of galactic superclusters and hits the next filament over.
AlphaFold is not an LLM, so no, not really
You are correct that AlphaFold is not an LLM, but both are possible because of the same breakthrough in deep learning, the transformer, and so they share similar architectural components.
And all that would not have been possible without linear algebra and calculus, and so on and so forth… Come on, the work on transformers is clearly separable from deep learning.
That’s like saying the work on rockets is clearly separable from thermodynamics.
A Large Language Model is basically a translator; all it really did was bridge the gap between us speaking normally and a computer understanding what we are saying.
The actual decisions all these “AI” programs make come from machine-learning algorithms, and those algorithms have not fundamentally changed since we created them and started tweaking them in the 90s.
“AI” is basically a marketing term that companies jumped on to generate hype because they made it so the ML programs could talk to you. But they’re not actually intelligent in the same sense people are, at least by the definitions set by computer scientists.
What algorithm are you referring to?
The fundamental ideas, using matrix multiplication plus a nonlinear function, deep learning (i.e., backpropagating derivatives), and gradient descent in general, may not have changed, but the actual algorithms sure have.
For example, the transformer architecture (used by most modern models) built on multi-headed self-attention, optimizers like AdamW, and the whole idea of diffusion for image generation are, I would say, quite disruptive. (There’s a minimal sketch of self-attention below.)
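For anyone curious what “multi-headed self-attention” actually boils down to, here is a minimal single-head sketch in NumPy. It’s a toy, not any production implementation: no masking, no multi-head splitting, and the weights are random, purely to show the core computation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # subtract max for numerical stability
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_head)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv           # project each token to query/key/value vectors
    scores = Q @ K.T / np.sqrt(K.shape[-1])    # how strongly each token attends to every other
    return softmax(scores) @ V                 # attention-weighted mix of value vectors

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                    # 4 tokens, 8-dim embeddings (toy numbers)
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)     # (4, 8): one output vector per token
```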
Another point: generative AI was always belittled in the research community, until around 2015 (a subjective impression; it would take a meta-study to confirm). The focus was mostly on classification, something not much talked about today in comparison.
Wow, I didn’t expect this to upset people.
When I say it hasn’t fundamentally changed from an AI perspective, I mean there is no intelligence in artificial intelligence.
There is no true understanding of self, just what we expect to hear. There is no real problem solving; the step-by-step answers the newer bots put out are still just ripped from internet search results. There is no autonomous behavior.
AI does not meet the definitions of AI, and no amount of long-winded explanations of what is fundamentally the same approach will change that, and neither will spam downvotes.
Btw, I didn’t downvote you.
Your reply raises the question of which definition of AI you are using.
I would argue that six of those eight definitions apply to modern deep learning systems. Only the category titled “Thinking Humanly” would agree with you, but I personally think those definitions are self-defeating, i.e., they define AI in a way so dependent on humans that a machine could never have AI, which would make the word meaningless.
I’m just sick of marketing teams calling everything AI and ruining a clear goal by getting people to move the bar and compromise on what used to be rigid definitions.
I studied AI in school and am interested in it as a hobby, but these machines aren’t at the point of intelligence, despite us making them feel real.
I base my personal evaluations on comparing them to an autonomous being with all the attributes I described above.
ChatGPT and other chatbots “know” what they are because they search the web for themselves; in fact, ChatGPT was programmed to repeat canned responses about itself when asked, because before that it was saying crazy shit it found on the internet.
Sam Altman and many other big names in tech have admitted that we have pretty much reached the limits of what current ML models can achieve, and that we basically have to invent a new, more efficient method of ML to keep going.
If we were to go off Alan Turing’s last definition, then many would argue even ChatGPT meets it, but even he revised and refined his definition of AI over the years before he died.
Personally, I don’t think we’re there yet, and by the definitions I was taught, back before “AI” could be whatever people called it, we aren’t there either. I’m trying to find who specifically made the checklist for intelligence I remember; if I do, I will post it here.
I am almost completely tuned out of the tech hype train around AI, so I can’t comment on the minutiae of what the various tech CEOs are claiming.
But most of what people in the consumer tech world are seeing is based entirely on applying transformers to one kind of data: written text or images. That is only a tiny slice of the kinds of things these transformer networks can be used for.
Even just generating a Python script is incredibly impressive. If you had tried to write a program that could generate arbitrary Python in 2010, it would have taken a massive engineering effort and tens of thousands of hours of work by incredibly well-educated humans. But early generations of LLMs were able to do this as an emergent behavior, simply by being shown enough examples. People often fail to realize exactly how much LLMs do “for free” that previously required a concerted effort from engineers and mathematicians.
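To make that concrete, here’s a hedged sketch of what “generating a Python script” looks like in practice via the Hugging Face transformers library. The checkpoint name is my assumption; any code-capable causal LM behaves the same way, and the key point is that the model is doing nothing but next-token prediction.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: "bigcode/starcoder2-3b" stands in for any code-capable model.
model_name = "bigcode/starcoder2-3b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "# Python script that reads a CSV file and prints the mean of each numeric column\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)   # plain next-token prediction
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```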
There have been many attempts at writing programs that could predict how strings of amino acids fold into proteins. AlphaFold applied transformers to the problem and was able to predict the structure of essentially every protein we’ve been able to observe. Even more, diffusion techniques (as in “AI image generation”) can be applied to generate strings of amino acids that fold into novel proteins with arbitrary properties. We can write these sequences into DNA (via tools like CRISPR) and mass-produce the custom-designed proteins.
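To give a feel for the diffusion idea mentioned above, here’s a toy NumPy sketch: start from pure noise and repeatedly “denoise” toward the data distribution. Real systems (image generators, or protein-design models like RFdiffusion) learn the denoiser with a neural network; the closed-form nudge below is a stand-in purely to show the generate-by-denoising loop.

```python
import numpy as np

rng = np.random.default_rng(0)
data_mean = np.array([2.0, -1.0])      # pretend "real data" clusters around this point

def denoise_step(x, t, steps):
    # Stand-in denoiser: nudge the sample toward the data, adding less
    # noise as the noise level t decreases (a real model would be learned).
    x = x + (2.0 / steps) * (data_mean - x)
    return x + np.sqrt(t / steps) * 0.05 * rng.normal(size=x.shape)

steps = 100
x = rng.normal(size=2)                 # start from pure noise
for t in range(steps, 0, -1):          # iteratively denoise
    x = denoise_step(x, t, steps)
print(x)                               # ends up close to data_mean
```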
This is such an incredible leap in biotech that it is hard to overstate the impact it will have. We’re already seeing things like HIV cures, optimized flu vaccines, and immunotherapy drugs custom-designed for an individual’s phenotype. We’re years away from seeing the products of these technologies (clinical trials take time), but transformers (“AI”) are driving revolutionary changes in many fields.
It’s always important to double-check the work of AI, but yeah, it excels at solving problems we’ve been using brute force on.
“AI” is just what we call automation until marketing figures out a new way to sell the tech. LLMs are generative AI: hardly useful or valuable, but new and shiny, with a party trick that tickles the human brain in a way that makes people hand their money to others. Machine learning and other forms of AI have been around longer, and most have value-generating applications, but they aren’t as fun to demonstrate, so they never got the traction LLMs have gathered.
I’m afraid you’re going to have to learn about AI models besides LLMs