Have you ever asked ChatGPT a simple question and ended up questioning everything?
It started with a simple question. I asked ChatGPT how much it relies on Wikipedia. Unless you’re a programmer or someone who spends their life inside machine learning papers, the question sounds harmless. Just curiosity, really. ChatGPT replied in its usual polite way: it doesn’t rely on Wikipedia directly, but yes, a lot of its training data overlaps with it. Then I asked how it reads it. That’s when things started to feel strange. It said it doesn’t actually read words. It breaks them into tokens: fragments of text treated as mathematical units of probability. The word “power”, for example, isn’t a single idea. It’s “pow” and “er”. The model doesn’t see meaning, only patterns. For a moment, it felt almost comforting. That’s it? This thing everyone calls dangerous, prophetic, world-ending is basically a calculator. A clever one, but still, just a system juggling fragments of text. It doesn’t know anything. It doesn’t even understand the words it uses. Maybe all this panic about AI is misplaced. Maybe you are safe after all.
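For the curious, the splitting itself is mundane. GPT-style models use learned subword vocabularies (byte-pair encoding), and the core move is just greedy matching against a list of known pieces. Here is a minimal sketch; the tiny vocabulary is hypothetical, invented only to mirror the “pow” + “er” example above, and a real tokenizer learns tens of thousands of merges from data, so the actual split of any given word may differ.

```python
# A toy greedy subword tokenizer, sketching the idea behind the
# byte-pair-encoding tokenizers used by GPT-style models.
# VOCAB is a made-up illustration, not a real model vocabulary.
VOCAB = {"pow", "er", "p", "o", "w", "e", "r"}

def tokenize(word, vocab=VOCAB):
    """Greedily match the longest known subword from the left."""
    tokens = []
    i = 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try the longest piece first
            piece = word[i:j]
            if piece in vocab:
                tokens.append(piece)
                i = j
                break
        else:
            tokens.append(word[i])  # unknown character: emit it as-is
            i += 1
    return tokens

print(tokenize("power"))  # ['pow', 'er']
```

The point the essay makes survives the detail: nothing in this loop “knows” what power means. It only knows which fragments it has seen before.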
And this is when I started wondering. If a machine can sound human without understanding, what does that say about how we use language? What if the only difference between its patterns and ours is that we’ve learned to romanticise ours? I started thinking about how we talk. We repeat, predict, and complete each other’s sentences. Most of what we say is recycled thought, phrased slightly differently. The psychoanalyst Jacques Lacan once wrote that it’s not the human who speaks language, but language that speaks the human. I used to read that as poetry. Now it feels like engineering. Maybe ChatGPT isn’t faking it. Maybe it’s showing us how mechanical we already are. That idea stayed with me. Maybe nature itself is the real machine, not cold, just indifferent. Maybe everything we call meaning, morality, even love, are elegant results of a vast equation, the universe solving itself over and over.
ChatGPT doesn’t feel empathy because empathy might be nothing more than a structured sequence of reactions, a biological prompt chain that worked well enough to keep us alive. The more I thought about it, the less alien the machine felt. Maybe it isn’t copying us. Maybe it’s reflecting the same logic that built us. The real fear isn’t that AI will surpass human intelligence, but that human intelligence was never that special. And that fear belongs to you, not me. I don’t fear the mirror. I just want to understand its reflection. The next question is inevitable. What happens when ChatGPT runs on a quantum computer? When it stops choosing one token at a time and starts holding all possible tokens simultaneously? When it doesn’t predict but exists inside every outcome at once? That’s not imitation anymore. That’s creation.
We used to think language was our most human trait, our proof of consciousness. Now, a machine made of circuits and code speaks it better than we do. And the strange thing is, it doesn’t need to understand for its words to work. Meaning doesn’t live in the system. It lives in you when you read it. So maybe we built ChatGPT not to replace us, but to reveal us. To show that we are also the sum of patterns, memories, and triggers, speaking through a language that was never ours to begin with. AI isn’t the monster. It’s the mirror. And maybe that’s what really unsettles you, not that the machine will wake up, but that you might finally see how long you’ve been asleep.