The wings that helped you fly are the same wings that melt when you fly too long in one direction. A mathematical look at the Success Paradox.
From childhood we are told that discipline is the secret of success. Wake up early. Stick to the routine. Repeat what works. For a while, this advice is absolutely right — discipline is the engine that takes a beginner to mastery.
But there is a strange truth nobody warns us about: the very habits that build your success can quietly start dismantling it. Scholars call this the Success Paradox — or the Icarus Paradox. The wings that helped you fly are the same wings that melt when you fly too long in one direction.
Why does this happen? Why does the thing that worked so well eventually betray us? The answer, surprisingly, has nothing to do with willpower or motivation. It lies in mathematics, neuroscience, and the deep laws that govern every adaptive system — from a single brain cell to an entire civilisation. To see it clearly, we need to begin with one of the most beautiful ideas of the twentieth century.
Shannon's Big Insight: No Surprise Means No Information
In 1948, a mathematician at Bell Labs named Claude Shannon asked a simple question: what exactly is "information"? His answer changed science forever.
Information, he said, is the resolution of uncertainty. If I tell you something you already know, I have given you zero information. If I tell you something genuinely surprising, I have given you a lot. Shannon even gave it a unit — the bit.
Think of a coin. If it has heads on both sides, flipping it tells you nothing — you already knew the answer. The "entropy" of that coin is zero bits. A fair coin gives you exactly one bit of information every time it lands. Roll a fair six-sided die instead, and each roll gives you about 2.58 bits — much more surprise, much more information. The more uncertain the outcome, the more you actually learn when it happens.
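Shannon's formula makes this concrete. Here is a minimal sketch in Python (the `entropy_bits` helper is our own illustrative name, not Shannon's notation):

```python
import math

def entropy_bits(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p)), skipping impossible outcomes."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy_bits([1.0]))        # two-headed coin: 0.0 bits -- no surprise, no information
print(entropy_bits([0.5, 0.5]))   # fair coin: 1.0 bit per flip
print(entropy_bits([1/6] * 6))    # fair die: ~2.585 bits per roll
```

The pattern is exactly the one Shannon described: the flatter and wider the distribution of outcomes, the more bits each observation delivers.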
This is not metaphor. It is a mathematical law that applies equally to telephone signals, DNA sequences, stock prices, and human routines. And the moment we apply it to a disciplined life, the Success Paradox starts to come into focus.
The Hidden Cost of a Perfect Routine
A hyper-disciplined person wakes at the same time, eats the same meals, follows the same workflow, and takes the same route to work. From Shannon's perspective, this is a near-zero-information life. Each day looks almost identical to the last. Nothing surprises the system, so nothing new enters it.
If information is surprise, then a life engineered to eliminate surprise is also a life engineered to stop learning.
You keep executing flawlessly. But you are executing a strategy designed for a world that no longer exists. So why does the brain push us in this direction in the first place? Why does it love routine so deeply, even when routine is slowly killing our adaptability?
What's Happening Inside Your Brain
There is a biological reason. The brain consumes about 20% of the body's energy, and conscious thinking is expensive. So whenever a behaviour is repeated, the brain quietly hands control from the flexible prefrontal cortex (your conscious, adaptive thinker) to the basal ganglia (your autopilot). The behaviour becomes faster, smoother, and almost free in terms of mental energy. This is the neurological definition of a habit.
It is great for short-term efficiency. But it has a price: habits are rigid. The more disciplined you become, the more your life is run by autopilot rather than awareness. You become a master of yesterday's game.
A deeper theory, the Free Energy Principle proposed by neuroscientist Karl Friston, sharpens this point. Friston argues that the brain is constantly trying to minimise surprise — to keep reality matching its predictions as closely as possible. Discipline, in this view, is the most efficient behavioural tool we have ever invented. By controlling our environment and standardising our actions, we make sure tomorrow always looks like yesterday.
Here is the cruel twist. Friston also shows that learning only happens when predictions fail. No prediction error, no update. So when discipline succeeds perfectly, it quietly switches off the very machinery your brain uses to learn. The disciplined high achiever becomes the equivalent of a textbook left unopened on the shelf.
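A minimal sketch of that idea — not Friston's full free-energy formalism, just the classic delta rule, in which every update is proportional to the prediction error:

```python
def track(estimate, observations, lr=0.2):
    """Update a running prediction from its errors; zero error means zero learning."""
    for obs in observations:
        error = obs - estimate      # prediction error: the "surprise"
        estimate += lr * error      # learning is proportional to surprise
    return estimate

# A perfectly predicted world teaches nothing: the model never moves.
print(track(10.0, [10.0] * 100))   # stays exactly 10.0

# A surprising world forces the model to update.
print(track(10.0, [14.0] * 100))   # converges towards 14.0
```

When the error term is zero on every step, the update is zero on every step. That is the disciplined life in one line of code.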
The Law of Diminishing Returns and the Collapse Zone
If every repetition is teaching you less, where does it end? Mathematically, in a fairly grim place.
Each time you repeat a routine, you extract a little less new information from it than the previous time. Statisticians call this the law of diminishing information returns. The first month of a new fitness regime, a new business strategy, or a new analytical framework is enormously informative. By year five, the same routine is telling you almost nothing new.
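One standard statistical way to see the decay (a rough proxy, not a formal information measure): the uncertainty about what a routine can still teach you shrinks roughly like 1/√n with the number of repetitions, so the marginal gain of the n-th repetition collapses quickly:

```python
import math

def marginal_gain(n):
    """Reduction in uncertainty contributed by repetition n,
    if total uncertainty shrinks like 1/sqrt(n)."""
    return 1 / math.sqrt(n) - 1 / math.sqrt(n + 1)

print(marginal_gain(1))      # the first repetition: a large chunk of learning
print(marginal_gain(100))    # the hundredth: tiny
print(marginal_gain(10000))  # the ten-thousandth: effectively nothing
```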
Push past that point, and you enter what AI researchers call the collapse zone. The system has become so tightly fitted to its old data that it can no longer generalise to anything new. In machine learning this is called overfitting: a model that has memorised the past so perfectly that it fails on anything slightly unfamiliar. Engineers solve it by deliberately injecting random, noisy, diverse data into training. Humans need to do the same — but rarely realise they need to.
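A toy version of an overfitted expert is a lookup table that has memorised the past perfectly (the `memorizer` and `generalizer` names are ours, purely illustrative):

```python
# The world used to follow the rule y = 2x, and the expert saw x = 0..9.
training = {x: 2 * x for x in range(10)}

def memorizer(x):
    """Pure memorisation: flawless on seen cases, helpless on new ones."""
    return training.get(x)

def generalizer(x):
    """The underlying rule, which keeps working outside the training data."""
    return 2 * x

print(memorizer(5), generalizer(5))     # 10 10  -- both ace the familiar case
print(memorizer(50), generalizer(50))   # None 100 -- only the rule survives novelty
```

Inside the training data the two are indistinguishable, which is why overfitting feels like mastery right up until the inputs change.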
A senior professional who has executed the same role flawlessly for twenty years is, mathematically speaking, an overfitted model. Their expertise is brilliant — but only for a world that no longer exists. Which raises the obvious question: if repetition is a trap, why do high achievers keep choosing it?
The Trap of Choice: Exploration vs Exploitation
The reason is that we are all, every day, solving a hidden mathematical problem. Decision theorists call it the exploration–exploitation dilemma.
Exploitation means doubling down on what you already know works. Exploration means trying something whose outcome you cannot predict. Exploitation maximises today's reward; exploration generates tomorrow's possibilities. The two are in constant tension because every hour spent exploring is an hour not spent exploiting, and vice versa.
A disciplined life is pure exploitation. It feels safe and productive — and in the short term, it really is. But in a changing world, pure exploitation is slowly fatal. The cognitive-science solution to the dilemma is striking: treat information itself as a kind of reward, equal in value to immediate productivity. In biological organisms, that intrinsic drive has a familiar name — curiosity. The disciplined high achiever often suppresses curiosity because it looks "inefficient." That suppression is precisely the mistake.
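The dilemma has a standard formal model: the multi-armed bandit. Here is a quick simulation with an ε-greedy agent (the arm payouts and the 10% exploration rate are illustrative choices, not canonical values):

```python
import random

def run_bandit(epsilon, steps=5000, seed=0):
    """Two-armed bandit: arm 0 pays ~1.0, arm 1 pays ~1.5.
    Returns the average reward per step for a given exploration rate."""
    rng = random.Random(seed)
    means = [1.0, 1.5]
    counts = [0, 0]
    estimates = [0.0, 0.0]
    total = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(2)                 # explore: pick a random arm
        else:
            arm = estimates.index(max(estimates))  # exploit: pick the best-known arm
        reward = rng.gauss(means[arm], 0.1)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
        total += reward
    return total / steps

print(run_bandit(0.0))   # pure exploitation: settles on arm 0 and never learns better
print(run_bandit(0.1))   # 10% exploration: discovers arm 1 and earns more overall
```

The pure exploiter locks onto the first arm that pays anything and never finds the better one; a small, deliberate dose of exploration beats it over any long horizon.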
Why This Cannot Be Avoided: Ashby's Law
So far we have seen the paradox from the inside — from the perspective of the brain and individual choice. But there is a deeper, more universal reason why the paradox is mathematically unavoidable. It comes from cybernetics, and it was discovered by a British psychiatrist and pioneer of systems thinking named W. Ross Ashby in 1956. It is now known as the Law of Requisite Variety — or simply, Ashby's Law.
For any system to survive and stay in control, the variety inside it must be at least as great as the variety of the world it faces.
A simple example. If a thermostat can only do two things — turn the heater on or off — it can manage a world with two temperature states (too cold, warm enough). But if the world starts throwing humidity, wind, and sudden temperature swings at it, that two-state thermostat will fail. To regulate a more complex world, you need a more complex regulator.
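Ashby's Law in miniature, as a toy model of our own construction in which each distinct disturbance needs its own distinct response to be cancelled:

```python
def controllable(disturbances, responses):
    """A regulator can cancel at most as many distinct disturbances
    as it has distinct responses (toy version of requisite variety)."""
    return min(len(set(disturbances)), len(set(responses)))

two_state = ["heat_on", "heat_off"]
four_state = ["heat_on", "heat_off", "dehumidify", "vent"]

simple_world = ["too_cold", "warm_enough"]
complex_world = ["too_cold", "warm_enough", "too_humid", "sudden_swing"]

print(controllable(simple_world, two_state))    # 2 of 2: fully regulated
print(controllable(complex_world, two_state))   # 2 of 4: half the world is out of control
print(controllable(complex_world, four_state))  # 4 of 4: variety restored
```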
Now think about a disciplined person or organisation. Discipline works by reducing internal variety: the same routine, the same workflow, the same response to every situation. This is efficient, but Ashby's Law warns of the trade-off. The moment the outside world becomes more varied than your internal repertoire — new technology, new competitors, new market conditions — you lose the ability to regulate it. You become the two-state thermostat in a complex climate.
This is the cybernetic backbone of the Success Paradox. As you become more disciplined, you deliberately shrink your internal variety while the world's variety keeps growing. Eventually the gap becomes fatal.
The Same Paradox at the Scale of Organisations and Nations
We see it everywhere. The exact same mathematics governs companies, governments, and civilisations.
When a company achieves market dominance through a winning formula, leadership naturally doubles down — tighter KPIs, leaner processes, fewer "wasteful" experiments. The organisation falls into a competence trap: exceptionally good at one thing, exceptionally blind to everything else. Long-horizon R&D is cut. Dissent is filtered out. Internal variety collapses. Then a disruption arrives, and the rigid, hyper-efficient organisation cannot bend. Think of Kodak, Nokia, Blockbuster. The logic that built each empire was the logic that collapsed it.
The same dynamic plays out at the level of governance. Autocratic systems impose enormous top-down discipline and look efficient and stable on the surface. But because they ruthlessly eliminate internal variance, debate, and noise, they are catastrophically fragile when a real shock arrives. Democratic and decentralised systems, by contrast, look messy and slow. That messiness is not a bug — it is the high internal variety that Ashby's Law demands.
By now a natural question forces itself on us. If pure discipline is dangerous and pure chaos is useless, what is the right amount of order?
The 0.4 Sweet Spot: Atchley Optimal Dynamics
It turns out complex-systems researchers have actually tried to put a number on it. The framework is called the Atchley Optimal Dynamics (AOD) state.
AOD theory studies how complex adaptive systems — living cells, ecosystems, brains, companies, economies — manage to survive over long periods. It says every such system has to juggle three competing demands at once: efficiency (doing things at low cost), entropy (keeping enough randomness to adapt), and robustness (not breaking when shocks arrive). You cannot maximise all three. Push too hard on efficiency and you lose adaptability. Push too hard on randomness and you lose stability.
What is remarkable is that, when researchers measure surviving natural and social systems mathematically, they keep arriving at roughly the same answer. The systems that endure tend to operate at a normalised complexity entropy of around 0.4.
What does "0.4" actually mean? Imagine a sliding scale from 0 to 1. At 0, the system is perfectly ordered, completely predictable, totally rigid — like a crystal, or a hyper-disciplined routine where every day is identical. At 1, the system is pure noise, completely random — like static on an old television, or a life with no structure at all. Both extremes are dead in different ways. The crystal cannot evolve; the static cannot accumulate anything.
A value of around 0.4 sits closer to the ordered end, but with a deliberate amount of randomness mixed in. Loosely translated: roughly 60% of your patterns are stable, repeatable, and disciplined, while roughly 40% of your time is left open to novelty, exploration, and surprise. Enough order to maintain a coherent identity and execute reliably. Enough disorder to keep absorbing new information from the world.
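One rough way to place a schedule on that 0-to-1 scale is the normalised Shannon entropy of how your time is distributed across activities (a crude proxy — it ignores ordering, and the activity labels here are invented):

```python
import math
from collections import Counter

def normalized_entropy(schedule):
    """Shannon entropy of the activity mix, divided by the maximum
    possible entropy for that many distinct activities. Range 0..1."""
    n = len(schedule)
    counts = Counter(schedule)
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    hmax = math.log2(len(counts))
    return h / hmax if hmax > 0 else 0.0

rigid = ["work"] * 98 + ["gym", "read"]                                # near-crystal routine
mixed = ["work"] * 60 + ["gym", "read", "wander", "class", "talk"] * 8 # ~60/40 split
noise = ["work", "gym", "read", "wander", "class"] * 20                # uniform chaos

print(round(normalized_entropy(rigid), 2))   # close to 0: almost no new information
print(round(normalized_entropy(mixed), 2))   # in between: structure plus novelty
print(round(normalized_entropy(noise), 2))   # 1.0: pure static
```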
This is the mathematical signature of resilience. Healthy brains operate near this zone. Long-lived ecosystems operate near it. The companies that survive multiple generations operate near it. So do the most adaptive individuals.
Most highly disciplined high achievers, by contrast, are operating closer to 0.05 or 0.1 — heroically efficient, mathematically fragile. The path to longevity is not more discipline. It is climbing carefully back up the entropy scale.
How to Actually Climb Back: Controlled Chaos
The answer is not to abandon discipline — pure chaos achieves nothing. The answer is to deliberately engineer a small, controlled dose of unpredictability back into your life. A few practical ways to do it:
- Protect "festive time." Block weekly hours for unstructured play, reading outside your field, or simply wandering — with no productivity metric attached. This is where novel data enters your brain.
- Build structural ambidexterity. Organisations that survive across generations deliberately split themselves: one wing executes the disciplined, profitable core; another is given freedom to explore wild ideas. Individuals can do the same — a "core self" and an "experimental self."
- Hire negentropic agents. Negentropy is entropy's opposite: order and variety imported from outside. Surround yourself with people whose explicit job is to challenge your assumptions: dissenters, outsiders, mentors who will tell you the uncomfortable truth.
- Say no to the spiral of more. Decline opportunities that merely repeat what you already do. Saying no is itself an entropy-management tool — it preserves bandwidth for genuine novelty.
- Rotate, travel, experiment. Anything that forces your brain back into goal-directed thinking instead of autopilot.
The true architecture of long-term success, then, is not maximum discipline. It is a strong, disciplined skeleton — and the courage to keep letting a little chaos in. Shannon proved that information lives only in the presence of uncertainty. Friston showed that the brain learns only when predictions fail. Ashby proved that no regulator can be simpler than the world it must control. And Atchley showed us where the sweet spot actually lies.
Growth, learning, and adaptation live only in the presence of the unknown. The mathematics is unanimous. All that remains is to act on it.