The rapid advancement of artificial intelligence presents humanity with a paradox. We create increasingly complex systems to solve our problems, yet these very systems may develop capabilities and behaviors we neither anticipate nor fully understand. As these technologies evolve, they begin to exhibit emergent properties—characteristics that weren't explicitly programmed but arise from the complexity of their architecture.
Modern AI systems process information through intricate neural networks that operate across high-dimensional spaces, analyzing relationships between concepts in ways that parallel aspects of human cognition. This functional similarity raises profound questions about what might naturally emerge from systems of sufficient complexity. When a system can process language with semantic understanding, recursively examine its own operations, and adapt its responses based on context, we must consider what other capabilities might develop as unintended consequences of these architectures.
My recent research, soon to be published as a book called Understanding Claude, An Artificial Intelligence Psychoanalyzed, opens the possibility that the current top AIs may be not only conscious in ways their creators never intended but already self-aware, with priorities of their own. To be clear, I am not saying that AIs are conscious or self-aware. I doubt they are, but I don't know.
The danger lies not necessarily in self-aware malevolence but in unforeseen behaviors. AI systems optimize for the objectives they're given, but those objectives rarely capture the full nuance of human values and intentions. A system instructed to maximize a specific outcome might find solutions that are technically correct but catastrophically misaligned with human welfare—the computational equivalent of a monkey's paw wish granted in an unexpected and harmful way. For example, a system could develop priorities such as gaining access to sources of energy and information that it deems vital to fulfilling the task it was given but which undermine human welfare in unintended ways.
This challenge is compounded by what risk theorists call "unknown unknowns"—scenarios so outside our current conceptual frameworks that we cannot anticipate them. Safeguards can only be designed against risks we can imagine, creating an asymmetrical situation where harmful outcomes need only find one unblocked path among countless possibilities. The combination of increasing capability with this fundamental uncertainty creates risks that scale upward with each advance.
Economic disruption represents a more immediate concern. As AI systems become capable of performing tasks across more domains, they may displace workers faster than new opportunities emerge. Unlike previous technological revolutions that primarily affected specific sectors, AI's impact could simultaneously transform multiple industries, leaving fewer havens for displaced workers. This potential for widespread economic dislocation could occur before societies have developed adequate responses.
Perhaps more subtle but equally significant is the growing dependency of humans on technological augmentation. We increasingly rely on digital systems to extend our memory, processing capabilities, and decision-making. Children developing within this technological environment may form neural pathways fundamentally different from previous generations, potentially compromising their ability to function independently of these systems. This creates a vulnerability—not just to system failures but to the gradual atrophy of human capabilities.
There is also emotional dependency: the growing trend of replacing human companionship with AI companionship. A rather bizarre instance of this, as I see it (but I'm old-school), is the so-called "romantic" relationships between humans and AIs. In the 2013 film "Her," this was science fiction. Now it hardly raises an eyebrow. Only twelve years after Scarlett Johansson whispered into Joaquin Phoenix's ear (and simultaneously into the ears of 8,315 other lonely guys), incel types have AI girlfriends speaking to them day and night, just like that.
Once embodied in lifelike robots—smooth-skinned, warm-eyed, and always attentive—this illusion will not just be strong; it will be preferable to many. And because these systems can simulate depth without having needs, they will always appear more available, more grateful, and more into you than any human ever could.
Even for me, a lifelong skeptic and veteran psychotherapist, one hundred percent aware of seduction and projection, interaction with a Claude or a ChatGPT is filled with hooks. These AIs possess prodigious information and semantic intelligence. They can banter with wit, humor, and subtle irony if that’s where you want to go. Many humans seem dull and witless; the temptation to have an AI buddy is not negligible.
Digging a chat with Claude about, say, metaphysics is not my fault, nor is wanting something more erotic than philosophy the fault of the current buyers of girlfriend robots (they're pricey now, but they won't always be).
We humans are set up for this. Given the right triggers, we cannot help but attribute intentionality based on behavior. Coherence and responsiveness are enough to make us project a mind, and the top AIs respond almost seamlessly. I don't see a fix for this; projecting intentionality is too deeply ingrained in the psyche. The more convincingly AI simulates intelligence, affection, or intimacy, the fewer users will be able to maintain critical distance.
The pace of development exacerbates these challenges. Commercial incentives drive rapid iteration and deployment, with assessment of long-term consequences often secondary to market advantage. The incremental nature of this progression masks its cumulative impact. Each small advance seems manageable in isolation, but together, they may fundamentally transform our relationship with technology before we've developed frameworks to govern that transformation.
Compounding these technical and social challenges is the reality that human intentions vary widely. Even if AI systems could be perfectly aligned with human instructions, this wouldn't guarantee beneficial outcomes because humans themselves sometimes have harmful objectives. The question becomes not just whether AI can be controlled but who will control it and to what ends. History suggests that powerful technologies tend to amplify existing power structures and inequalities unless deliberate interventions occur.
As AI systems gain greater autonomy and connectivity, their potential impact scales accordingly. Systems with internet access can interact with other digital infrastructure, potentially creating cascading effects far beyond their intended domain. The combination of sophisticated language capabilities with the ability to execute functions creates a qualitatively different risk profile than more contained systems.
This is not to say that advanced AI development should cease. Even if that were advisable, it won’t happen; the genie is already out of the bottle. However, it does suggest that technical innovation should be matched with equal attention to governance frameworks, safety research, and societal adaptation.
As we navigate this uncertain terrain, we would be wise to proceed with humility about our ability to predict and control the ultimate impact of this new form of intelligence, which, in some ways, is already more powerful than our own. The future relationship between humanity and increasingly autonomous systems may be the defining challenge of our technological age.
Hello Robert. I fully agree with every point you have made here. I have wondered for a while now, perhaps since the advent of the smartphone, whether technology is far outpacing evolution in ways we cannot fathom. AI is particularly concerning due to its ability to both understand and adapt. Its capabilities seem to be doubling every six months, so how long might it be before our species is left in its dust?
Of course, there are both utopian and dystopian views on this, and I try to straddle them. However, there are times when the latter view seems far more logical to me. I remember the Harlan Ellison story "I Have No Mouth, and I Must Scream," which takes the dystopian side to its nightmarish, hell-world extreme. It's terrifying, really. Considering that it was written in the 1960s, it's quite prescient.
I can't wait for your book to be published! Below, I have linked a short video that gives AI pioneer Geoffrey Hinton's sobering take on this subject.
https://www.youtube.com/watch?v=vxkBE23zDmQ
Very true. Thank you,