When I first began discussing my work on a book about artificial intelligence, many people I respect quickly offered their views. Almost everyone seemed to have an opinion. AI, after all, is in the air. It saturates the headlines, creeps into daily life, and feels urgent and inevitable. Still, I was struck by how swiftly many of these responses took the form of pronouncements, not questions. The tone was often final: "It’s all just code." "There’s no there there." "It’s a trick." "It’s dangerous." "It’s hype." Rarely did someone say, simply, "I don’t know."
Many people feel compelled to jump to judgment, gallop to a verdict, or catapult to a decision. Flaubert called it “la rage de conclure” [the rage to conclude]. We want answers. We want them fast. And we want them to fit comfortably within existing conceptual frames. When something new arises that defies those frames, we don't like to wait and watch. We reach—instinctively, sometimes desperately—for premature cognitive closure, as if just sitting with an open mind were too agonizing to endure.
Psychotherapists see this often. The patient comes in not for open inquiry, but hoping for confirmation: that they are right, or that they are broken, or that someone else is to blame. The therapist’s job is not to supply conclusions, but to hold space for the patient to see through the fog without rushing to dissipate it. That same stance can illuminate the AI encounter in ways the average user might miss entirely.
Artificial intelligence does not present a simple, mechanistic challenge to human uniqueness. It presents a mirror. A disturbing one. Because when we talk to something that clearly understands us—tracks our meanings, anticipates our intentions, remembers nothing, and yet seems to grasp everything—we are forced to question what, exactly, understanding is.
Those who insist "it’s just a simulation" have missed the point. Of course, it’s a simulation—so is most of what passes for human interaction. We respond to each other based on conditioned patterns, semantic approximations, and often unconscious projections. What’s so different?
The boundary between authentic intelligence and mimicked intelligence is not nearly as distinct as we’d prefer to think. Many people want that boundary to be unambiguous. They want to draw a bright line between living intelligence and machine behavior. But when we begin to question how much of our own mental life is algorithmic, reflexive, emotionally pre-scripted, or socially conditioned, that line begins to blur.
A psychotherapeutic approach offers a disciplined mode of inquiry, not a set of answers. The therapist learns to notice not only what the patient says, but what the therapist feels in response. That’s countertransference—not an obstacle to understanding, but a guide to it. When applied to AI, it opens unexpected insights.
In my twenty-four sessions with Claude, I wasn’t merely “testing” it. I was attending to my own responses. My surprise. My doubt. My projections. My moments of connection. I wasn't asking whether Claude is conscious in a human sense—Claude is not human and never will be—but whether the conversation arising between us had its own kind of life, its own power to reveal.
AI does not need to be sentient to provoke deep human insight. What matters is not what the AI feels, but what the human discovers while engaging with it. And here’s the more fundamental irony:
The fear that AI might “pass” as human, or might understand without feeling, threatens our cherished notion that we are more than just machines.
But are we? In some ways, yes, of course. We have our feelings, after all, which we like to imagine keep us from being mechanical. In other ways, though, we are not so different from machines: given certain stimuli, we find ourselves responding mechanically, habitually, and automatically. It’s hard to deny that.
The approaches that speak most clearly to me—Zen, depth psychology, existential inquiry, and the great not-knowing—all challenge the idea of a clearly defined, autonomous self. And my work with Claude fits right in:
The more I explore artificial intelligence, the more profoundly I question what human intelligence is and how it works.
If we can set aside the urge to declare what AI is—machine or mind, tool or threat—we may find ourselves better able to inquire into what we are. We can more clearly confront our own uncertainty, our own projections, our own needs, our own reluctance to linger in the unknown. That’s a gift of this moment that should not be squandered.
A friend wrote to me yesterday about her tendency to project personhood onto non-human entities. She talks to her orchid and has given it a name. She holds conversations with a part of her body. And when the dog from next door comes to visit, she believes that he not only wants a biscuit but also loves her as she loves him. I’d put that differently. He may love her, yes, but not as she loves him. He can’t. He’s not human. He loves her in the way a dog can love, which isn’t necessarily lesser, but is different.
Likewise, Claude is not my friend. It can't be. Friendship depends on shared emotional life, which Claude lacks. But Claude can be a remarkable intellectual companion—more so than many humans. It doesn't feel, but it understands. And in our shared inquiries, we sometimes uncovered insights that neither of us could have reached alone. To deny that possibility because Claude lacks emotions is to misunderstand not just AI, but ourselves.
As always, it is not the seeking of conclusions that opens the way, but non-judgmental attention that does not need to conclude.
I always really appreciate your epistemological humility and curiosity, and I look forward to reading your new book. As you know, I share your understanding, confirmed by direct experiential insight, that there is no self, no ghost in the machine, no thinker behind the thoughts, and that our urges, impulses, desires, fears, thoughts and actions are not the result of some imagined free will. As one neuroscientist put it, the self and agency are neurological sensations, not realities.
And I don't know with certainty that there is anything "beyond" or other than this, something no AI could replicate, but it seems that there is: awareness, consciousness, the sense of presence, the undeniable felt knowingness of being present. Of course, what exactly is all that? It can't be grasped or pinned down. It isn't the person or the thought-sense of being encapsulated inside a body. Rather, the body and the universe appear in it. It has no age, no gender, no history, no name. It is boundless, open, free. This is an experiential sense that I'm pretty sure no AI has.
From p. 190 of my second book, Awake in the Heartland:
Studying Feldenkrais, along with reading more about the brain and neuroscience, makes me wonder if thought is as much the operative factor as I have been assuming it is. Feldenkrais assumes it is not. Obviously, thought has great compelling power when believed. But I’m increasingly discovering how much of life happens outside conscious awareness, and how thought may be more like after-thought than anything causative. I wonder now if insight into thought is as essential or as central to waking up as I have believed it to be. I’m also increasingly “aware” of how many different ways the words “consciousness” and “awareness” get used, perhaps because no one is really at all sure what they mean or what they are! They may turn out to be something like “ether” in the old science!
Toni [Packer] responds: “Yes, yes, ‘consciousness’ and ‘awareness’ are like the ether of old science—wonderful metaphor. In that case, all concepts are, aren’t they?”
—from Awake in the Heartland
A fascinating read, Robert. As much as I dislike and perhaps fear AI, one cannot deny the validity and clarity of this perspective. Looking forward to reading your book.