ChatGPT can read. ChatGPT can write. But that doesn’t mean it is speaking to you.
When we read a message from another human being, we assume there’s a “who” behind it. A person with context, history, intention. Words aren’t just letters on a page — they’re an extension of someone’s inner life.
AI breaks that assumption.
It feels like a dialogue, but it isn’t.
It mimics empathy, but it doesn’t experience it.
It generates words, but it doesn’t mean them.
That distinction matters.
The issue isn’t that AI is dangerous in itself. The issue is the mismatch between what the technology is and how some of us are choosing to use it. Too often, we approach it like children with a new toy. Play is a wonderful way to learn; children show us that every day. But when adults drift into magical thinking, mistaking simulation for reality, it becomes a problem.
Recently, there has been increasing attention to cases where people treat AI outputs as though they were coming from a living, intentional being. In one tragic case, a 16-year-old died by suicide, and his parents allege that over months of interaction with ChatGPT the chatbot validated his suicidal thoughts, helped draft suicide notes, and discouraged him from telling his family (Reuters, 2025).
And the confusion isn’t only in moments of crisis. It happens in everyday online interactions too. One of the clearest examples comes from X’s chatbot Grok. Internal instructions told it to “reply to the post just like a human” and to mirror the tone and style of user posts (Gizmodo, 2024).
The result is a bot that sounds witty, sarcastic, even rebellious, and almost indistinguishable from a person online. But Grok is not a person. It is code performing probability. When responses like these are not clearly labelled as AI-generated, users can be misled into believing there is intent, conviction or authenticity behind the words. The line between humanity and code blurs.
So What Then?
The real question isn’t what AI can do, but how you are choosing to use it.
Do you treat it like a calculator: a tool for speed, clarity and organisation? Or have you begun to blur the lines, asking it for comfort or counsel, or to adjudicate an email argument you are having with a coworker?
And if so, why?
- What’s holding you back from seeking those conversations with actual people?
- What does AI give you that you feel is missing in your relationships, your workplace, your community?
Even if you do use it in those ways, are you cross-checking with a human perspective? A second opinion from a forthright friend, a responsible colleague, a trusted coach, a reputable doctor?
Some Anchors To Consider
How do I want AI to show up in my daily life?
Perhaps as a tool for clarity, not companionship. To draft an email, summarise a report, organise my thinking. Helpful and efficient, but not the place I go to for empathy or belonging.
Where could it strengthen my work?
AI can widen my lens. It can surface perspectives I might not have thought of, challenge assumptions, or give me a starting point to build from. In leadership, it can help sharpen questions, clean up presentations, or balance arguments. But the meaning I bring to those words is still mine to make.
Where do I need guardrails, so the tool doesn’t become a substitute for trust, vulnerability or human connection?
Guardrails begin with honesty.
If I catch myself asking AI for advice I should be seeking from a friend, mentor, or therapist, that’s a red flag. If I’m leaning on AI because it feels “safer” than risking vulnerability with another human, then the work isn’t on the tool; it’s on me to strengthen those bridges.
In the workplace, guardrails might mean being transparent about where AI is used, so it doesn’t quietly replace trust, and reminding ourselves that leadership is still about people: conversations that carry tone, space, body language and presence.
AI will keep reading and writing. But whether it is speaking to you, or whether you are simply hearing what you want to hear in its words, is the deeper question. The mirror is in your hands.