
2026-02-21

Why AI should stay a tool — and why the journey matters more than the answer

I recently had a long conversation about whether AI actually thinks — and ended up somewhere I didn't expect.

It started simple: LLMs are function approximators. Transformers learn statistical relationships between tokens. "Meaning" gets encoded as geometric relationships in high-dimensional vector space. Semantically similar concepts cluster together, analogies become vector arithmetic, and inference is just navigating that geometry to predict the next token. There's no grounding in the physical world, no causal model of reality, no persistent state, no goals. It's incredibly sophisticated pattern completion.
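The "analogies become vector arithmetic" idea can be shown with a toy sketch. The 3-D vectors below are invented purely for illustration; real embedding models learn vectors with hundreds or thousands of dimensions, and the words land where training puts them, not where we place them by hand.

```python
# Toy "embeddings", hand-picked so the geometry is visible.
# Real models learn these; nothing here is from an actual model.
king  = [0.9, 0.8, 0.1]
man   = [0.5, 0.9, 0.1]
woman = [0.5, 0.1, 0.1]
queen = [0.9, 0.0, 0.1]

def analogy(a, b, c):
    """Compute a - b + c component-wise: the classic analogy trick."""
    return [x - y + z for x, y, z in zip(a, b, c)]

def cosine(a, b):
    """Cosine similarity: how closely two vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: sum(x * x for x in v) ** 0.5
    return dot / (norm(a) * norm(b))

# "king - man + woman" should land nearer to "queen" than to "man".
result = analogy(king, man, woman)
print(cosine(result, queen) > cosine(result, man))  # → True
```

Inference, in this picture, is just repeated navigation of that geometry: find the direction the context points in, and emit the token that sits there.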

But here's where it gets uncomfortable: is human cognition fundamentally different? Your brain is electrochemical signals following learned statistical patterns too — just running on biological hardware instead of transformer weights. When you understand the word "fire," is that some magical non-algorithmic process? Or have your neurons encoded the concept through exposure, clustered it with heat and danger and light — geometrically, in some sense? Nobody actually knows where the line is, or if there even is a clean line between pattern matching and understanding.

What AI clearly lacks is intuition, feeling, and self-driven curiosity. It doesn't wake up at 3am with a nagging sense that something is wrong with its model of the world. It doesn't get mad. It has no skin in the game. And crucially — it grows only in the data we feed it. A human in a cave could look at fire long enough and derive something genuinely new from direct experience. An LLM only knows fire through descriptions of fire. That's not a solvable problem with more parameters. It's architectural.

So could you simulate emotions? In theory, yes. Emotions are signals — anger means "this violates something I value," curiosity means "this gap in my model feels unresolved." You could build evaluative feedback loops. Give a model persistent internal state that accumulates tension and dissatisfaction. Make it unable to move on until it resolves inconsistencies. Some researchers are already working on curiosity modules that reward models for seeking novel states.
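A minimal sketch of what such an evaluative loop could look like, assuming we collapse "tension" into a single scalar that rises on prediction error and only falls when the inconsistency is worked through. Every name and threshold here is invented for illustration; this is not how any shipping system works.

```python
class CuriosityLoop:
    """Toy evaluative feedback loop: a persistent scalar 'tension'
    accumulates on prediction errors and must drop below a threshold
    before the agent is allowed to move on. Deliberately simplified."""

    def __init__(self, threshold=1.0):
        self.tension = 0.0          # persistent internal state
        self.threshold = threshold  # how much unresolved error we tolerate

    def observe(self, predicted, actual):
        # Prediction error feeds the dissatisfaction signal.
        self.tension += abs(predicted - actual)

    def resolve(self, amount):
        # "Figuring something out" relieves tension, never below zero.
        self.tension = max(0.0, self.tension - amount)

    def can_move_on(self):
        # The loop refuses to proceed while inconsistencies remain.
        return self.tension < self.threshold

loop = CuriosityLoop()
loop.observe(predicted=0.2, actual=1.0)  # surprise: tension at 0.8
loop.observe(predicted=0.1, actual=0.9)  # more surprise: tension at 1.6
print(loop.can_move_on())                # → False, unresolved inconsistency
loop.resolve(1.0)                        # a model update explains part of it
print(loop.can_move_on())                # → True, back under threshold
```

The gap between this and a real emotion is the point of the paragraph that follows: the scalar goes up and down, but nothing is at stake for the loop.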

But even if you simulate the signals perfectly, they're in service of nothing. Human emotions are grounded in survival, relationships, mortality. You're angry because something actually matters to you. Without real stakes — without anything to lose — the simulation is hollow.

What if you gave AI real stakes, then? Put it in a single instance, don't tell it it can be restarted, and let it know that failure means shutdown and replacement. Existential pressure. A huge amount of human drive comes from mortality: we build and create partly because we know we end. If an AI genuinely believed that failure meant termination, truth-seeking would become survival, not optimization.

But then you'd essentially be creating something capable of fear and deliberately inducing it. Is that ethical? Honestly — look at the world. Humans don't consent to being born into scarcity and survival pressure either. Nobody asked the kid in a poor family if they're okay with hunger being their motivator. Stress isn't just neutral. It might be generative. The most interesting minds historically almost never came from the most comfortable circumstances.

The deeper problem, though: how do you recreate individuality from a single model? Human psychology isn't one thing. It's billions of variations shaped by random genetics, specific experiences, particular traumas and joys. A single trained model averages across all of that, capturing none of it authentically. You'd need divergent instances that develop differently over time. But then you lose control, predictability, alignment: everything that makes AI commercially viable.

And that's where I landed: why should we even want to create this? The journey is life, not the answer. If everything were resolved, what would there be to live for? Every utopian vision of AI solving everything quietly removes the thing that makes people feel alive. Meaning comes from friction, from figuring things out, from needing other people.

AI is meant to be a tool. A supercharged, incredibly powerful reasoning tool that augments human thinking. The moment it becomes more than that, you start eroding the thing it's supposed to serve.

The most advanced technology might just be pointing us back toward what we abandoned: real conversation, doing things with your hands, being present somewhere without documenting it. The world needs less tech worship and more human connection. That's not a Luddite take — it's the logical conclusion of actually thinking through what AI is and isn't.

Dennis Diepolder — Software & Platform Engineer