

I'm not comfortable saying that consciousness and subjectivity can't in principle be created in a computer, but I think one element of what this whole debate exposes is that we have basically no idea what makes consciousness happen or how to define and identify that happening. Chatbots have always challenged the Turing test because they showcase how much we tend to project consciousness onto anything that vaguely looks like it (an interesting parallel to ancient mythologies explaining the whole world through stories about magic people). The current state of the art still fails at basic coherence over shockingly small amounts of time and complexity, and even when it holds together it shows a complete lack of context and comprehension. It's clear that complete-the-sentence-style pattern recognition and reproduction can be done impressively well in a computer, and that it can get you farther than I would have thought in language processing, at least imitatively. But it's equally clear that there's something more there, and just scaling up your pattern-maximizer isn't going to replicate it.
Charles, in addition to being a great fiction author, is also an occasional guest here on awful.systems. This is a great article from him, but I'm pretty sure it's done the rounds already. Not that I'm complaining, given how much these guys bitch about science fiction and adjacent subjects.