2026-05-13
The AI bubble and the valve — why critical thinking is the real moat now
Money becomes tokens. Tokens become output. Output becomes — what, exactly? If the answer is just another prompt, another AI wrapper, another dashboard nobody opens, then value only circulates inside the digital world. The balloon gets bigger. But what's inside stays hot air.
I've been chewing on this image for weeks. You know the video where someone tries to pop a balloon with a needle and the membrane just deflects? That's exactly what's happening right now. The bubble keeps growing because nobody finds the valve. And the valve isn't technical. The valve is a human who takes the output out of the system and brings it into the world — as a decision, a product, an actual operational change. Not as a printer. As an interpreter.
Large organizations struggle with this. They love the expected value calculus in theory — what would the maximum damage cost us, what would a maximum win bring — but they don't optimize for expected value. They optimize for liability avoidance. A strict 'no' is legally clean. A calculated bet with residual risk is not, even when it's obviously the better call. That's not an intelligence problem. It's an incentive problem. And it's exactly why the interesting AI use cases over the next few years won't come from the big enterprises — they'll come from contexts where risk and decision sit in the same person.
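The calculus above is just probability-weighted upside minus probability-weighted downside. A minimal sketch, with all numbers invented purely for illustration:

```python
def expected_value(p_win: float, win: float, loss: float) -> float:
    """EV of a bet: probability-weighted win minus probability-weighted loss."""
    return p_win * win - (1 - p_win) * loss

# A calculated bet: 90% chance of a 50k win, 10% chance of a 100k loss.
ev = expected_value(p_win=0.9, win=50_000, loss=100_000)
print(ev)  # positive (around +35k), so the bet is worth taking

# Liability avoidance looks only at the worst case (-100k) and says no --
# legally clean, but it leaves the expected 35k on the table.
```

The point isn't the arithmetic; it's that the worst-case number and the expected-value number recommend opposite decisions, and organizations are rewarded for the first one.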
What actually matters in this world isn't what university handed you. Studies, specialist knowledge, neatly memorized frameworks — most models will give you that in seconds, tailored exactly to your question. What you need instead is someone who quickly spots when an AI output sounds plausible but is wrong. Someone who brings the missing context the model doesn't have. Someone who reframes the question when the answer doesn't fit. That's not an academic skill. That's pattern recognition across domain boundaries. A genuinely good generalist with the nerve to think sideways — ideally with a slight ADHD streak that refuses to stay stuck inside the frame of the prompt and asks: 'What if the question itself is wrong?'
That's the real moat in an AI world: not knowledge, but judgment. Knowledge is commoditized. Judgment isn't — it grows from reconciling AI output against actual lived experience. From mistakes, from gut feelings that turned out to be right, from situations no training set covered. Anyone who never learned to think critically because AI did it for them has no moat. Anyone who used AI as a tool and still kept learning how to judge — they're ahead.
And here's where it gets paradoxical. The skill AI rewards most is exactly the one that's collectively atrophying. Critical thinking. Decomposing. Building up context. Spotting contradictions. Social media started the slide — short loops, emotional triggers, no contradiction required. AI makes it easier still: 'just do it' and something comes out. Why think anymore? But the result of 'just do it' is mediocrity. Competent mediocrity, but mediocrity.
Good prompting isn't about elegant instructions. It's the ability to dump your own thought process out of your head — like a waterfall nobody can keep up with, but precise. You don't give the model ten percent of the context and wonder why you got forty percent of the output quality. You give ninety percent — fifty parameters explained in detail, how they interact, a clear plan, a clear goal — and what you get back feels like an extended brain. From the inside it feels chaotic. From the outside it's precise thinking in disguise.
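The "ninety percent of the context" idea can be sketched as a structure rather than a one-line question. The section names, parameters, and example values below are entirely invented; the point is only the shape — context, parameters and how they interact, plan, and goal all go in before the model sees anything:

```python
def build_prompt(context: str, parameters: dict[str, str],
                 interactions: str, plan: str, goal: str) -> str:
    """Assemble a context-heavy prompt instead of a one-line question."""
    param_lines = "\n".join(f"- {name}: {why}" for name, why in parameters.items())
    return (
        f"## Context\n{context}\n\n"
        f"## Parameters and why they matter\n{param_lines}\n\n"
        f"## How they interact\n{interactions}\n\n"
        f"## Plan\n{plan}\n\n"
        f"## Goal\n{goal}\n"
    )

prompt = build_prompt(
    context="Mid-size logistics firm, 40 drivers, routes planned by hand.",
    parameters={
        "fuel price": "largest variable cost, volatile week to week",
        "driver shifts": "hard legal limit of 9 hours, no exceptions",
    },
    interactions="Shorter routes cut fuel but can push shifts past the limit.",
    plan="First model the current routes, then propose three alternatives.",
    goal="A route plan that cuts fuel cost without violating shift limits.",
)
print(prompt)
```

Nothing about the helper is clever; it just forces the dump. Each section is a question you answer before the model does, which is exactly the 'extended brain' effect described above.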
I didn't learn this in a course, and I didn't get it from a book. It just crystallized over the last ten years — through phases where the struggle was real and a solution had to come from somewhere. When you keep running into the same wall, eventually you only have two options left: keep going and hope it lands differently this time, or step back and watch yourself think. The second option is uncomfortable. It's also the only one that actually changes anything.
Over time it becomes a habit. Step out of the situation, look at it from the outside, name the pattern that got you there. Not because it feels good, but because you noticed it works. And at some point you start applying the same move to everything — to arguments, to decisions, to code, to your own thought process. Metacognition. Being inside the bubble while still being able to look at it from the outside. That's rarer than people think. And it's exactly the move that good prompting demands: you watch your own thought process and translate it into something a model can process.
It gets confused with negativity — and that's where the world keeps blurring an important distinction. Negative-critical is destructive. It tears down without building. Analytical-critical is the opposite of naïve — it looks closely, asks why, finds the weak spot, then builds better. One is noise. The other is a tool. 'You're so negative' gets said when someone speaks an uncomfortable truth, and critical thinking gets socially punished. Then we wonder why everyone just nods.
The value of AI has to flow into the physical. Into decisions. Into products. Into better operations. Into real efficiency that shows up as profit, time or quality of life. Not into more AI wrappers wrapping other AI wrappers. And the valve — the person who knows where to set it — stands between the digital and the operational world and speaks both languages. That's not a luxury. That's the only place where the bubble doesn't burst but breathes.