Notice the distinction in my comments between an LLM and other algorithms; that’s a key point you’re ignoring. The idea other commenters have is that, for some reason, no input could produce the output of human thought other than the magical fairy dust that exists within our souls. I don’t believe this. I think a sufficiently advanced input could arrive at the holistic output of human thought. It doesn’t have to be LLMs.
Who said that?
You’re missing the forest for the trees. Replace “magical fairy dust” with [insert whatever you think makes organic, carbon-based processing capable of sentience but inorganic silicon-based processing incapable of sentience].
No one here has taken that position. The position being taken is that LLMs are not that, and their trajectory isn’t really heading there, no matter how much hype you’ve bought into out of a Reddit New Atheist contrarian knee-jerk desire to stick it to people you assume believe in “the magical fairy dust that exists within our souls.”
I haven’t seen anyone here (or basically anyone at all, for that matter) suggest that there’s literally no way to create mentality like ours other than being exactly like us. The argument is just that LLMs are not even on the right track to do something like that. The technology is impressive in a lot of ways, but it is in no way comparable to even a rudimentary mind in the sense that people have minds, and no amount of tweaking or refining the basic approach will move it in that direction. “Genuine” (in the sense of human-like) AI made from non-human stuff is certainly possible in principle, but LLMs are not even on that trajectory.
Even setting that aside, I think framing this as an I/O problem elides some genuinely deep and tricky conceptual questions, and suggests a fundamental misunderstanding of how complex this problem is. What on Earth does “the output of human thought” mean here? Clearly you don’t really mean human thought, because you obviously think whatever “output” you’re looking for can be instantiated in non-human systems. It must mean human-like thought, but human-like in what sense? Which features are important to preserve, and which are incidental or parochial to the way humans happen to do it? How you answer that question greatly influences how you evaluate putative cases of “genuine” AI, and it’s possible to build in a great deal of hidden bias if you don’t think carefully and deliberately about it. From what I’ve seen, virtually none of the AI hypers are doing that.
The top-level comment this chain is under dismisses GPT by saying it’s “just an algorithm”, not “just an LLM”, which implicitly claims that no algorithm could match or exceed human capabilities, because they’re “just algorithms”.
You can even see this person explicitly defending that position in other comments, so the mentality you say you haven’t seen is literally the basis for this entire thread.
The smol bean LLM is unfairly misunderstood sometimes, even as it tightens the grip of the surveillance state, denies people medical coverage, and puts artists out of work. I’m sure the billionaires bankrolling it will wipe away those statistically produced tears with wads of cash, so all will be well.