Two Laws That Silently Shape Your Digital Life
Moore’s Law and Amdahl’s Law, explained without a single equation, and why one of them matters far more for humanity’s future.
Every time you unlock your phone in under a second, stream a movie in 4K, or ask an AI to write you a poem, you are standing on the shoulders of two invisible forces. One made your devices incredibly powerful. The other explains why they still sometimes feel frustratingly slow. Together, they tell the story of modern computing, and where society should bet its future.
These two forces are known as Moore’s Law and Amdahl’s Law. Don’t worry: despite the name, neither requires a maths degree to understand. Let’s break them down with everyday language, pictures, and a kitchen analogy or two.
Part I: Moore’s Law – The Engine of Progress
What It Says (In Plain English)
In 1965, Gordon Moore, co-founder of Intel, made a bold observation: the number of tiny switches (transistors) we can fit on a computer chip doubles roughly every two years. More transistors generally means more processing power. So in essence, computers get about twice as powerful every couple of years, at roughly the same cost.
This wasn’t a law of physics. It was an observation β a trend β that turned out to hold remarkably true for over five decades.
Imagine you own a farm. Every two years, you magically get twice as many workers, without paying any extra wages. In 1970, you had 2 workers. By 1980, you had 64. By 2000, you had around 65,000. By 2020, you had over 67 million. That’s essentially what happened to computer chips. The “workers” are transistors, and the “farm” is a tiny silicon chip smaller than your fingernail.
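For readers who like to check the arithmetic, the analogy's doubling is a three-line Python sketch (the starting figure of 2 workers in 1970 is simply the analogy's assumption, not real chip data):

```python
# One doubling per two-year period, starting from the analogy's 2 workers in 1970.
def doubled(start_year: int, start_count: int, year: int) -> int:
    """Count after doubling every two years since start_year."""
    doublings = (year - start_year) // 2
    return start_count * 2 ** doublings

for year in (1970, 1980, 2000, 2020):
    print(year, doubled(1970, 2, year))
# 1970 -> 2, 1980 -> 64, 2000 -> 65,536, 2020 -> ~67 million
```

Ten years is five doublings (a factor of 32), which is why the counts explode so quickly.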
The Exponential March of Transistors
Number of transistors on a single chip, 1971–2024 (logarithmic scale)
Why Should You Care?
Moore’s Law is the reason your smartphone today is millions of times more powerful than the computer that guided Apollo 11 to the moon. It’s why a ₹15,000 phone in 2025 can do things a ₹50 lakh supercomputer couldn’t do in 1995. This relentless doubling has given us the internet, smartphones, AI assistants, medical imaging, GPS navigation, and virtually every piece of modern technology you interact with daily.
Is Moore’s Law Dying?
Here’s the twist: we are approaching physical limits. Transistors are now just a few atoms wide. You can’t split an atom in half and expect it to still work as a switch. Since around 2015, the pace of doubling has slowed. Chips still get better, but the easy exponential gains are winding down. This is why the tech world is scrambling for alternatives: quantum computing, neuromorphic chips, and new materials like graphene.
The Speed Ceiling: Clock Speed Has Plateaued
Processor clock speed (GHz) over time; notice the flattening after ~2005
Moore’s Law gave us 50+ years of exponential computing growth. But raw processing power alone is no longer enough. The easy gains are ending, and that’s where Amdahl’s Law enters the picture.
Part II: Amdahl’s Law – The Bottleneck Nobody Talks About
What It Says (In Plain English)
In 1967, computer scientist Gene Amdahl made an observation that is, in some ways, the opposite of Moore’s optimism. He said: no matter how much you speed up one part of a system, the overall speed is limited by the slowest part that you can’t speed up.
While Amdahl originally framed this around parallel processing, it has become a powerful metaphor for a much bigger modern problem: the memory bottleneck. Processors have gotten astronomically fast (thanks to Moore), but the speed at which data can travel between the processor and memory has not kept up. This gap, sometimes called the “memory wall”, is now the single biggest constraint in computing.
Imagine a brilliant chef (the processor) who can cook any dish in 10 seconds. Incredible, right? But the ingredients are locked in a warehouse across town (the memory), and the delivery truck (the memory bus / bandwidth) takes 10 minutes per trip. It doesn’t matter that your chef can cook in 10 seconds; they spend about 98% of their time waiting for ingredients. Making the chef faster won’t help. You need a faster truck, or better yet, move the pantry into the kitchen.
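The waiting fraction falls straight out of the analogy’s own numbers, as this tiny sketch shows:

```python
# Chef analogy: what fraction of time the "processor" actually computes.
cook_time_s = 10        # the chef cooks any dish in 10 seconds
fetch_time_s = 10 * 60  # one ingredient delivery takes 10 minutes

busy = cook_time_s / (cook_time_s + fetch_time_s)
print(f"chef busy {busy:.1%}, waiting {1 - busy:.1%}")
# chef busy 1.6%, waiting 98.4%
```

Halving the cooking time barely moves these numbers; halving the delivery time nearly doubles the chef’s useful output. That asymmetry is the whole point.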
The Memory Wall: The Growing Gap
This diagram tells the whole story. Processor speed has improved roughly 10,000× since 1980. Memory bandwidth? Perhaps 100×. That’s a 100-fold gap. Your processor is a Formula 1 car stuck in a traffic jam created by the memory system.
Amdahl’s Law in Action: A Concrete Example
The Diminishing Returns of Adding More Power
Maximum speedup possible vs. how much of the task can be parallelised
This chart is the heart of Amdahl’s Law. Even if you throw 1,000 processors at a problem, the speedup you get is capped by the portion of the task that can’t be parallelised. If just 5% of a program must run sequentially (one step at a time), your maximum speedup is 20×, no matter how many processors you add. With 50% sequential? You’ll never go beyond 2×.
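The article promised no equations, but for curious readers these caps are easy to reproduce in a few lines of Python using the standard Amdahl speedup formula:

```python
def amdahl_speedup(parallel_fraction: float, workers: int) -> float:
    """Overall speedup when only parallel_fraction of the work can be
    split across workers; the remainder runs one step at a time."""
    sequential = 1.0 - parallel_fraction
    return 1.0 / (sequential + parallel_fraction / workers)

# 5% sequential: even 1,000 processors get you only ~19.6x; the ceiling is 20x.
print(round(amdahl_speedup(0.95, 1_000), 1))
# 50% sequential: the ceiling is 2x, no matter how many workers you add.
print(round(amdahl_speedup(0.50, 1_000), 2))
```

Notice that pushing `workers` to a million barely changes either number; only shrinking the sequential part does.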
In the modern context, that “sequential bottleneck” is increasingly the memory system. The data simply can’t arrive fast enough for the processor to chew through it.
Think of it like building a 10-lane highway that feeds into a single-lane bridge. You can keep adding lanes to the highway (more processing power), but the bridge (memory bandwidth) is the chokepoint. Everyone still has to slow down and queue up at the bridge. The only real fix is to widen the bridge.
Part III: Moore vs. Amdahl – A Side-by-Side Look
| Aspect | Moore’s Law | Amdahl’s Law |
|---|---|---|
| Core Idea | Computing power doubles every ~2 years | Speedup is capped by the part you can’t speed up |
| Nature | Optimistic observation (a trend) | Cautionary principle (a hard limit) |
| Focus | Making processors faster & denser | System balance, especially memory bandwidth |
| Analogy | Hiring more & more workers | Workers are idle because materials arrive slowly |
| Status (2025) | Slowing down; nearing physical limits | Becoming more relevant as the gap widens |
| Real-World Impact | Gave us smartphones, AI, the internet | Explains why your apps still hang & AI training is costly |
The Computing Balance: Where We Are vs. Where We Need to Be
The Verdict: Where Should Society Focus?
For 50 years, we placed all our bets on Moore’s Law: make the processor faster, shrink the transistors, pack more power into silicon. And it worked magnificently. But that era is ending.
The bottleneck of the 2020s and beyond is not processing power. It is memory bandwidth: the ability to move data fast enough to keep our powerful processors fed. This is Amdahl’s territory, and it demands our attention now.
Consider the biggest technological challenge of our era: training large AI models. Companies like Google, OpenAI, and Anthropic spend hundreds of millions of dollars on AI training, and a disproportionate amount of that cost is because GPUs (graphics processors) sit idle, waiting for data to arrive from memory. The processors are fast enough. The memory bus is the chokepoint.
Here’s what a society-level focus on Amdahl’s Law looks like:
1. Invest in memory architecture innovation. Technologies like HBM (High Bandwidth Memory), processing-in-memory (PIM), and new interconnect designs directly attack the bottleneck. These deserve the same enthusiasm and funding that processor R&D has received for decades.
2. Redesign software to be “data-aware.” Instead of writing programs that assume unlimited memory speed, we need algorithms and systems that minimise unnecessary data movement β a discipline sometimes called “data-centric computing.”
3. Rethink education and research priorities. University curricula and government research funding still overwhelmingly favour raw compute. We need to elevate memory systems, interconnect design, and systems architecture to the same prestige as processor design.
4. Pursue energy efficiency. The memory wall isn’t just a speed problem; it’s an energy problem. Moving data consumes far more energy than computing on it. Solving the bandwidth problem is also a climate problem.
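Point 2 above, minimising data movement, can be made concrete with a back-of-the-envelope model. The sketch below estimates bytes moved for an n×n matrix multiply, naive versus cache-blocked (tiled). The traffic formulas are a deliberately crude illustration, not measurements; real numbers depend on cache sizes and hardware details:

```python
# Crude memory-traffic model for multiplying two n x n float32 matrices.
# Assumption: the naive loop re-streams one input matrix from memory about
# n times, while blocking reuses each tile from cache, cutting that factor.
def naive_traffic(n: int, bytes_per_elem: int = 4) -> int:
    # ~n passes over one input matrix, plus one pass over the other two.
    return bytes_per_elem * (n ** 3 + 2 * n ** 2)

def tiled_traffic(n: int, tile: int, bytes_per_elem: int = 4) -> int:
    # Blocking reduces the re-stream factor from n to roughly n / tile.
    return bytes_per_elem * (n ** 2 * (n // tile) + 2 * n ** 2)

n = 4096
ratio = naive_traffic(n) / tiled_traffic(n, tile=64)
print(f"tiling moves ~{ratio:.0f}x less data")
```

Same arithmetic, same answer, a fraction of the data movement: that is what “data-aware” software buys you.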
The Hidden Cost: Energy Spent on Data Movement vs. Computation
Approximate energy breakdown in modern AI workloads
As the chart above illustrates, in modern AI chips, the energy spent moving data between memory and processor vastly exceeds the energy spent on actual computation. This is the Amdahl bottleneck made visible in watts and carbon emissions.
The Bottom Line
Moore’s Law was the story of the past: an extraordinary 50-year run of exponential progress that transformed human civilisation. But the road ahead requires us to shift focus. Amdahl’s Law points us toward the bottleneck that actually constrains progress today: the speed and efficiency with which we can feed data to our increasingly powerful processors.
A society that pours its resources only into faster engines while ignoring clogged roads will eventually grind to a halt. The smartest bet for the next 50 years is to balance the system: to give memory and data movement the same attention, investment, and ingenuity that we have long lavished on processors alone.
The future belongs not to the fastest chip, but to the best-balanced system.
References & Further Reading
- Moore, G.E. (1965). “Cramming More Components onto Integrated Circuits.” Electronics, 38(8). – Original paper (Intel archive)
- Amdahl, G.M. (1967). “Validity of the Single Processor Approach to Achieving Large Scale Computing Capabilities.” AFIPS Conference Proceedings. – ACM Digital Library
- Hennessy, J.L. & Patterson, D.A. (2019). Computer Architecture: A Quantitative Approach, 6th Ed. Morgan Kaufmann. – Publisher’s page
- Wulf, W.A. & McKee, S.A. (1995). “Hitting the Memory Wall: Implications of the Obvious.” ACM SIGARCH Computer Architecture News. – ACM Digital Library
- “Moore’s Law – Transistor Count 1970–2020.” Our World in Data. – ourworldindata.org
- Horowitz, M. (2014). “Computing’s Energy Problem (and what we can do about it).” IEEE ISSCC. – IEEE Xplore
- Sze, V. et al. (2017). “Efficient Processing of Deep Neural Networks: A Tutorial and Survey.” Proceedings of the IEEE. – IEEE Xplore
- “International Roadmap for Devices and Systems (IRDS).” IEEE. – irds.ieee.org
- Mutlu, O. (2023). “Memory-Centric Computing.” Lectures, ETH Zürich. – ETH SAFARI Group
