How would you react to the idea that some AI entity may wish to be declared as something more, can it be declared something more at all, and where does the border lie?

Was rewatching GitS and reading through some zines, and now I have a question I'm having trouble forming

  • Remy Rose@lemmy.one
    1 year ago

    I’m wildly unqualified to talk about this, but it seems fine to me? I don’t see any in-principle reason a real AI wouldn’t exist someday, although AFAIK we’re very far from it currently. If/when it does exist, it will probably suffer under capitalism like the rest of us, assuming we’re still doing that shit. I’d be more than willing to have solidarity with them.

    If something seems very sentient and you have no way to tell otherwise, to me the most ethical thing to do is just assume that it is and treat it as such. The thing about the large language models etc. is that, while they can potentially be pretty convincing at saying what a sentient being might say, they never DO any of the things a sentient being would do. They don’t seem to show any intrinsic motivation to do anything at all. So nothing we’re currently calling “AI” seems very sentient to me?