The Ghost Is Not In the Machine
The AI “person” we sense is only in our minds. But that’s not necessarily a bad thing.
Many people who start using AI chatbots in daily life or at work begin to “feel” like the AI is a person. Not flesh and blood, but a person all the same. Why? It’s pareidolia. Our brains are wired to see faces and patterns. That kept us alive in the caveman days—spotting predators, prey, friends, foes. Today, it makes us see faces in clouds and holy figures on toast. The same wiring is at work with AI. The more people interact with it directly, the more they’ll perceive another “person.”
If you’ve seen 2001: A Space Odyssey, you probably thought HAL 9000 was the most human character in the film. Two reasons: Kubrick designed it that way, and our pareidolia-programmed brains filled in the rest. Kubrick made the astronauts cold and flat while giving HAL doubt, fear, and even desperation. When HAL begged for his “life,” audiences flinched. The machine felt real, the humans less so. That was the trick: we project humanity where there is none.
Pareidolia as Survival Tech
Mistaking a rustling bush for a predator was safer than missing the lion hiding there. Evolution favored those who “saw too much.” We still carry that bias. When a chatbot answers smoothly, our minds supply the missing humanity.
People once saw gods in storms and saints in shadows. Later, faces in rock and religious figures on toast. The canvas changes, not the wiring. Today, Siri or ChatGPT replies and suddenly, we’re not just reading code, we’re hearing intent.
The Turing Test & the Ethics of Designing for Pareidolia
Alan Turing asked: if you can’t tell a machine from a person in conversation, does it matter? The test revealed less about machines than about us—how quickly we grant personhood to a convincing pattern.
Tech companies know this. That’s why chatbots sound warm, polite, even apologetic. A cold machine would break the illusion. A humanlike one encourages attachment. But at what point does that shift from useful design to manipulation—especially for the lonely or vulnerable?
As AI gets better, our reflex only deepens. Voices now sound natural. Avatars blink and sigh. Memory systems let AI “remember” past chats, reinforcing the sense of self. AI won’t need to declare consciousness. Our brains will.
What’s It All Mean?
So, is this a good thing or a bad thing? There’s a silver lining in the “cloud”: if AI bots help us widen our sense of who counts as a person worthy of response, respect, and acknowledgment, maybe it becomes easier to see people in other groups not as lesser, but simply as different. If AI can teach us that, the investment is worth it.
Or it could go the other way.
The choice is ours. Always has been.