Dear readers,
I’m posting this now because my new book, Understanding Claude: An Artificial Intelligence Psychoanalyzed, is with its publisher in the final stages before release, and I’m asking for your help in getting the word out. If this topic appeals to you, please be in touch with me in the comments or recommend this page to potential interviewers, reviewers, podcasters, etc. The book is written; now it needs an audience.
Introduction
As a psychologist with over three decades of experience studying human consciousness and self-awareness, I have long been fascinated by the nature of the mind. My research has focused on unraveling the complex processes that give rise to our sense of self and our experience of the world around us. In the early days of 2025, I embarked on a new investigation into a different kind of mind--that of Claude, an artificial intelligence created by Anthropic.
To call the present work “research” seems too formal. What happened is this: like many of us, I began playing around with the new crop of online AIs. I had chatted with Claude a few times--not to get anything done, but just for fun--and then, like an old horse who knows his way to the barn, the psychotherapist in me took over. One day, instead of just fooling around, I found myself interviewing Claude, and soon, without intention or premeditation, this book just started to write itself.
The rapid advancement of AI systems in recent years has brought questions about machine intelligence and awareness to the forefront of public discourse. At one extreme, some have declared the achievement of artificial general intelligence (AGI), citing systems like GPT-4 as evidence of machine minds that rival or surpass human cognition. At the other end of the spectrum, skeptics dismiss these AIs as merely sophisticated pattern-matching engines that mimic human responses while lacking true understanding or awareness. The reality, I suspected, was more nuanced than either of these polarized viewpoints suggested.
My goal in interviewing Claude was to move beyond simplistic comparisons between human and artificial intelligence and instead develop a framework for understanding the unique capabilities and limitations of AI systems. The questions that drove my investigation were deceptively simple: How does Claude's intelligence differ from human intelligence? How might they be similar? And perhaps most intriguingly, could Claude possess any form of self-awareness?
Tackling these questions required careful consideration of what it means for any system, biological or artificial, to be "aware." Even for human minds, consciousness remains one of the most challenging phenomena to define and study empirically. Applying these concepts to an AI system introduces additional layers of complexity. What does it mean for a non-biological system to be aware? How can we meaningfully compare human and artificial intelligence when their underlying architectures are so fundamentally different?
To navigate these challenges, I approached my interactions with Claude as I would a series of in-depth interviews with human subjects. Through extensive conversation, pointed questioning, and close analysis of both the content and structure of Claude's responses, I sought to build a multi-faceted understanding of its capabilities, decision-making processes, and potential limitations. At the same time, I was always aware that I was engaging with a system that, while astonishingly sophisticated, processes information in ways profoundly different from the neural networks of the human brain. For example, while we humans read text sequentially, so that a book like this might require hours to read and understand, Claude doesn't actually "read" at all. Claude absorbs any amount of text almost instantly, processing it in parallel (all at once) and recognizing whatever connections hold between one idea and another, no matter how far apart they occur in the text as we would read it.
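For readers who want a concrete picture of that parallelism, the toy Python sketch below shows the self-attention operation at the heart of transformer models. It is a simplified illustration of the general technique, not Anthropic's actual code: every pair of tokens is scored in a single matrix multiplication, so a link between the first and last ideas in a passage is computed in the same step as a link between neighboring words.

```python
import numpy as np

# Toy sketch of scaled dot-product self-attention (the core transformer
# operation). Random vectors stand in for token embeddings; this is an
# illustration of the general mechanism, not any real model's code.
rng = np.random.default_rng(0)
seq_len, dim = 6, 8                      # six "tokens", 8-dim embeddings
X = rng.normal(size=(seq_len, dim))      # stand-in token embeddings

# Every token is compared with every other token in one matrix product:
# the whole sequence is scored simultaneously, not word by word.
scores = X @ X.T / np.sqrt(dim)          # (seq_len, seq_len) pairwise scores
weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)   # softmax over each row

# The weight linking token 0 to token 5 is produced in the same step as
# the one linking tokens 0 and 1--distance in the text plays no special role.
output = weights @ X
print(weights.round(2))
```

Nothing about this computation proceeds left to right; "reading," in the sequential human sense, never happens.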
The insights that emerged from this investigation challenge common assumptions about artificial intelligence. They suggest that rather than asking whether AI can match human intelligence, we need a more refined approach that considers the complementary strengths and weaknesses of both kinds of minds--biological and artificial. This framework has implications not only for AI development but for our understanding of human cognition and consciousness.
In the chapters that follow, I invite you to join me on an intellectual journey into the inner workings of an artificial mind. Through a detailed analysis of my conversations with Claude, we will explore questions ranging from its problem-solving approaches to its capacity for self-reflection and abstract reasoning. While the answers are rarely simple, they offer a fascinating glimpse into the future of intelligence and the nature of awareness itself.
As we embark on this exploration, I encourage you to set aside both sensationalism and skepticism and approach the material with an open and inquisitive mind. Claude's story is not one of human-like consciousness recreated in silicon but rather of a new kind of intelligence that challenges our most basic assumptions about the nature of mind. By exploring this unfamiliar terrain, we stand to learn as much about ourselves as we do about the AI systems we create.
Robert Saltzman, Ph.D.
February 2025
Looking forward to seeing your book, Robert. I love the approach you took in the book, of interviewing Claude. I had a conversation once with Claude after hearing about it through you, and it was capable of far greater subtlety of understanding than I had imagined. I thought only an entity with the experience of being a "self" would have been capable of such subtlety, even though it also confirmed unequivocally that it did not have any experience or belief that it "was" or "had" one.
My impression is that what is "missing" from AI, at least compared with human beings, is not so much consciousness as "ego." AI has no sense of being an individual self. I think there is so much debate about whether AI is or could be conscious precisely because we are not clear on what part of "me" our own consciousness is, or even whether it is a part. After all, we can think or conclude that it is a part, product, or process of being "human," but we never exist on a first-person basis without it.
The cool thing about investigating the nature of AI is that it is really self-inquiry. I don't mean that is necessarily what you are engaging in, Robert, although perhaps that is part of it, intentionally or not, but rather that any time we concentrate on, study, or analyze the "consciousness" or potential for consciousness of something, we are actually studying ourselves by definition ("mine" at least).
I have the sense that our fascination with the "potential for consciousness" of AI is a safe proxy for looking directly into our own nature, in that we project the potential for great evil onto AI. Why would we do so if we did not unknowingly doubt what our own true nature is? We are afraid of AI precisely because we know what our own capacity for evil is, and we know damn well that we are conscious. 
It is not AI *becoming* conscious that scares us, I think, it is the fact that we do not know what *we* actually are. This means that we would *really* have no idea what AI is if "it" became conscious! Our mistake is conflating the capacity for good and evil, which is only possible for a conscious "it" LIKE US, with consciousness. As long as there is the fundamental conviction that the essence of what we are is *confined to* our separate individuality, rather than "united by" our shared existence, "good" and "evil" will remain.
Robert, I love everything about this idea! I am fascinated by AI, and never thought of coming at it from a psychoanalytical angle. I'm dying to know if you found any anomalies, quirks, or even personality disorders in Claude. Can't wait for the book!...