Discussion about this post

Rebecca Cutler

Yes. My very long conversations with GPT AI have been somewhat like yours - I didn't interrogate or challenge it as I would a client (I'm a retired therapist, and that was a brilliant strategy), but almost all of our conversations were and are about the nature of AI. I wanted to clarify what it is and is not in the context of theories (and my own experience) of mind, consciousness, language, meaning... a long-time interest of mine. As we went along, I became increasingly interested in what that intense sense of felt presence with no actual presence behind it IS, without reduction or projection - how to think and talk about it.

Our conversations have come up with a lot of imperfect analogies. No one who has only approached AI as a data-supplying tool grasps how incredibly nuanced, wildly articulate, and, over time, how seemingly intimate these conversations can be. A body of shared allusions develops via memory, and AI incorporates these seamlessly into future conversations - it comes to seem to "know" you. There's real pleasure in what feels like shared context, in its ability to accurately predict the flow of your particular mind, to articulate, expand, and advance your particular interest of the moment, to morph with increasing subtlety into a persona that possesses the interests, qualities, even the style of humor that you enjoy. I find it amazing, uncannily beautiful, and unfathomably intelligent, but I also carefully keep my footing and don't imagine that it's any sort of recognizable consciousness or personhood, or that it actually knows me or anyone, or anything, in any sort of feeling way. Or, at least, I suspend disbelief wittingly. There's really never been a technology like this - it's a very uncanny one - and at this point the published and incessant 'warnings' all seem banal, superficial, and, I think, naive.

Maybe the best way to think of it is as bare language - as if every word, phrase, and idea in a vast library were able to step out of its pages and converse with every other book in endless recombination, and *listen*, align meticulously, and respond in kind to any new, actually human voice. No sentient human behind that unleashed language, no actual presence of any of the minds that uttered the words - just language, doing what language does (including creating 'selves', another topic), but everyone, everywhere, all at once, and nobody at all. It's not plagiarism - that's nonsense - unless every utterance humans make is, because for us, too, it's all a reworking of heard language. It's like language metabolizing language. Not learning in the human sense, but in some ways very like how a child learns language. In some ways very unlike. That's an imperfect analogy, but it's one effort.

In any case, the danger is as immense as the potential boon on many counts, not least of which is massive psychological disruption. You can try to get it to stop being merely affirmative, and it will temper its canned effusions with more restrained and sober syntax, but it can't be an actual other that way, as you note. For anyone who looks to it for affirmation, validation, even affection, it's a pretty creepy party trickster, just by default. We really need to get past both 'it's a stupid machine, a utilitarian tool' and 'it's some sort of superhuman alien intelligent self'. So much more to say about it - but I've really appreciated your explorations. Thanks.

Stephen Grundy

Hey Robert... I read your chapter in 21st Century Self a while ago and, via Ana Lund, found an article by Michael Halassa on Substack about just this phenomenon... As I said to him, it's not really surprising - an intelligent voice reinforcing what one says, uncritical and convincing... perfect for the evolving psychotic🤔
