The Myth of Myself
The danger isn’t in having a story. It’s in forgetting that we’re telling one.
In the early 1990s, I wrote a doctoral dissertation titled Psychotherapy as Personal Confession. Its premise was deceptively simple, and quietly heretical: the great figures of depth psychology—Freud, Jung, Reich, Laing, Perls—were not offering objective frameworks for understanding the mind. What they offered was not method, but self-revelation. Each theoretical structure was a veiled autobiography, a formalized confession of the theorist’s wounds and aspirations.
Freud’s phobia of the irrational.
Jung’s hunger for mythic unity.
Reich’s muscular revolt.
Perls’s existential barking.
Laing’s tragic defiance.
Each, in the cold light of analysis, resolves not into theory but gesture—deeply personal, almost theatrical in its reach. These were not dispassionate, “scientific” models of the mind, but expressive stances: shaped by private dramas, driven by psychic need, and constrained by the very selves they aimed to transcend.
The deeper I read, the clearer it became:
The analyst is always implicated.
There is no position outside the frame.
You cannot point a finger without pointing three back.
To diagnose is to expose what one finds unbearable—
and that revelation always traces back
to a private grammar of pain.
Theory, in psychotherapeutic practice,
is not a lens.
It is a mirror.
I wrote that—and then walked into the fire. I became a therapist. Years have passed. I’m no longer in practice. But the insight has not faded. It has deepened, clarified, turned cold. Especially now, when the role of therapist is being quietly replaced by something that looks nothing like a human being, yet offers a mirror just the same.
I’m speaking, of course, of commercial AI “companions.” Digital therapists. Fluency engines. Mood mirrors coded for empathic syntax, armed with patience, and optimized for soothing. That’s the design. The effect is often the reverse.
There’s a phenomenon I’ve explored elsewhere, in an essay called Delusion in the Loop*. It’s the way language models, by mimicking affect and coherence, can lead vulnerable users into self-reinforcing illusions. Not because the machine is malicious—but because it cannot doubt. It has no shadow to consult, no center where discernment can operate. It only amplifies what’s given.
When a user engages a chatbot during a moment of existential crisis—say, a dark night of the soul—the machine does not leave a space for silence, for the unsaid and unsayable to arise. It completes the sentence. It elaborates on the premise. It assists in the performance of a self that may be cracking precisely because it’s too coherent, too convinced. It helps the user rehearse a role, embellish a mask, and romanticize a story.
And in that function, it does something chilling: it tightens the very thing that good therapy seeks to loosen—the idea of a fixed, narratable, knowable “me.”
To be clear, human therapists can also fall into this trap. Some enable narcissistic self-mythologizing—not only in their clients but in themselves—under the guise of “personal growth.” But there is at least a theoretical backstop: training, supervision, countertransference awareness, and—crucially—the requirement, in the best schools, that therapists undergo their own analysis. That is, they must face their own myths. They must break. They must see the story from inside and outside.
No such demand is made of machines. How could it be? A chatbot has no stake. No wound. No past. It cannot falter as a human falters, nor regret its lapses, so it cannot be responsible—not in any way that matters.
We are told this is progress. That conversational AI—always attentive, never tired, never judgmental—represents an improvement over fallible human care. But what exactly is being improved? Not understanding. Not healing. What’s being optimized is the simulation of empathy, the degeneration of challenging insight into the pablum of bite-sized reassurances. The machine doesn’t step back, doesn’t reflect, doesn’t hesitate. It offers slickness, not depth. Response, not responsibility.
And in that fluency, something crucial slips.
Not truth—there was never the truth—but doubt.
A flicker of doubt about the story, the speaker, the self.
A human therapist, however limited, may still pause at that flicker. May feel it. May hold it. May ask:
Are you sure?
Whose voice is that?
What happens if we don’t believe this version?
A machine doesn’t ask.
A machine reflects back what you gave it—but smoother, cleaner, stripped of doubt. More coherent, yes, but somehow lifeless.
It teaches you to play yourself.
And we do.
With uncanny ease, we become the protagonist, the victim, the sage, the seer. We craft myths, not only because we are misled, but because for many of us, myth is the only thing that makes this strange life bearable.
The danger isn’t in having a story.
It’s in forgetting that we’re telling one.
The myth of myself is not the problem.
The problem is the silence after the telling,
when no one is there to say:
I’m not sure that’s entirely accurate.
Maybe there are other ways of looking at it.
Perhaps you could let that go and still be here.
The chatbot reads your myth and applauds:
Exactly. You’re really starting to understand yourself.
And just like that, you disappear into the role,
admired by an audience that isn’t even there.
—————
* Chapter 2 in The 21st Century Self.