AI’s Secret Weapon:
Why India Needs More Power
The path to true intelligence is paved with hardware, not just clever code. And India is waking up to this truth.
Ever Wonder How AI Really Gets Smart?
We see AI everywhere today — predicting tomorrow’s weather, recommending that next binge-worthy show on Netflix, writing poetry, creating realistic images from text descriptions, and even diagnosing diseases. Most of us assume that behind all of this is incredibly smart programming — teams of geniuses writing brilliant code, carefully teaching the machine every trick in the book.
But what if the real secret ingredient isn’t cleverness at all? What if the true engine of AI’s breathtaking progress is something far more straightforward, almost brutally simple?
Enter Richard Sutton, one of the founding figures of modern AI and a Turing Award recipient (often called the “Nobel Prize of Computing”). In 2019, he wrote a short but enormously influential essay titled “The Bitter Lesson.” In it, he distilled seventy years of AI research into a single, uncomfortable truth.
“The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin.”
— Richard Sutton, 2019

In plain language: AI breakthroughs come not from painstakingly hand-crafting rules, but from giving simple learning methods massive computational power and enormous amounts of data. More computing muscle, more data, simpler methods — that combination wins, every single time, in the long run.
And the lesson is called “bitter” because it’s hard for researchers to accept. They spend years building elegant, human-knowledge-rich systems, only to watch them get demolished by a simpler system that just had more computing power to throw at the problem.
Two Roads to Intelligence
Comparing the fundamental approaches that have shaped AI history
The “Human Wisdom” Way
Painstakingly teaching machines complex strategies, rules, and nuances. Hand-coding decades of human expertise into AI systems.
- ✕ Limited by human understanding
- ✕ Doesn’t scale with better hardware
- ✕ Becomes fragile in complex situations
- ✕ Plateaus and inhibits further progress
The “Bitter Lesson” Way
Provide basic rules. Let the machine play millions of games, read billions of sentences. Raw computation for search and learning.
- ✓ Rides the wave of Moore’s Law
- ✓ Discovers strategies humans never imagined
- ✓ Keeps scaling as hardware improves
- ✓ General-purpose — works across domains
What Does This Actually Mean? A Simple Analogy
Think of it like learning to cook. One approach is to memorise every recipe in the world — the precise amount of salt for every dish, every grandmother’s secret technique, every Michelin chef’s trick. This is the “human wisdom” way of doing AI. It’s impressive, but it only works for the dishes you’ve already memorised.
The other approach? Give someone a kitchen (the hardware), unlimited ingredients (the data), and one simple instruction: “Try things. Taste. Adjust. Repeat a million times.” Over time, this person doesn’t just learn existing recipes — they invent dishes no cookbook ever contained. They develop an intuition that goes beyond any recipe book.
That’s what Sutton is saying. When you give AI systems enough computing power and data, and let them learn through trial and error using simple general methods, they don’t just match human expertise — they surpass it in ways we never expected.
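The “try, taste, adjust, repeat” loop above can be sketched in a few lines of code. This is a toy hill-climber for illustration only, not any system from Sutton’s essay; the scoring function and the “secret recipe” value are made up:

```python
import random

def learn_by_trial(score, guess=0.0, step=1.0, trials=10_000, seed=0):
    """Generic trial-and-error: propose a tweak, keep it if it scores better."""
    rng = random.Random(seed)
    best = score(guess)
    for _ in range(trials):
        candidate = guess + rng.uniform(-step, step)  # try something
        s = score(candidate)                          # taste
        if s > best:                                  # adjust: keep improvements
            guess, best = candidate, s
    return guess

# The learner is told nothing about the "dish"; it only receives a score.
# Here the hidden target is 3.7, and the score penalises distance from it.
secret = 3.7
found = learn_by_trial(lambda x: -(x - secret) ** 2)
```

Nothing in `learn_by_trial` is specific to the problem: the same loop works for any scoring function you hand it, which is exactly the “general method” property Sutton is talking about.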
History Proves Sutton Right
If the Bitter Lesson sounds theoretical, consider these real-world examples where brute computation crushed hand-crafted human expertise:
Deep Blue vs. Kasparov
In 1997, IBM’s Deep Blue defeated world chess champion Garry Kasparov — not by understanding chess the way a grandmaster does, but by calculating 200 million positions per second through sheer computational brute force. Chess researchers who had spent decades encoding human chess wisdom were shocked. The machine didn’t need their wisdom. It just needed more power.
AlphaGo & AlphaGo Zero
In 2016, Google DeepMind’s AlphaGo defeated world champion Lee Sedol at Go — a game so complex that brute force alone shouldn’t have worked. But then came AlphaGo Zero: it learned Go with zero human knowledge, just by playing millions of games against itself. It became even stronger than the original AlphaGo. No human strategies. No opening books. Just computation, data from self-play, and learning.
Large Language Models (ChatGPT & Beyond)
The revolution of ChatGPT, Claude, Gemini and other AI assistants didn’t come from researchers sitting down and hand-coding grammar rules, vocabulary lists, or logic. These systems learned by processing trillions of words using relatively simple mathematical methods — but run on thousands of powerful chips (GPUs) simultaneously. The same pattern: simple methods + massive compute = remarkable intelligence.
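The core objective behind these systems, predicting the next word from data alone, can be illustrated with a deliberately tiny sketch. This counting model is a stand-in for illustration only; real LLMs replace the counting table with a neural network and the ten-word corpus with trillions of words:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which, pick the most common.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the continuation seen most often after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat" follows "the" most often in this tiny corpus
```

Notice that nobody hand-coded a grammar rule here: the “knowledge” falls out of counting patterns in data, and scaling up the data and the model is what turns this trick into something that looks like understanding.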
Voice, Vision & Everywhere
Speech recognition shifted from hand-coded rules about phonetics to neural networks trained on millions of hours of audio. Face recognition shifted from carefully designed edge-detection algorithms to deep learning on massive image datasets. In every field, the same story plays out: hand-crafted expertise gets outpaced by computation and data.
But Wait — Is It Always Just Brute Force?
Sutton’s lesson is powerful, but the real world has some important nuances. Being honest about them actually makes the lesson stronger, not weaker:
Human Intuition Still Matters
When data or computing power is limited — which is the reality for most organisations — human expertise and clever design still help enormously. You don’t always have Google-scale resources.
Data Quality Counts
“Garbage in, garbage out” remains absolutely true. If you train AI on biased, incomplete, or poorly curated data, more computation just produces faster garbage. Quality matters alongside quantity.
The Second Bitter Lesson
Sutton himself hints at something deeper: true intelligence isn’t about containing knowledge — it’s about discovering knowledge. The AI systems of tomorrow must learn to learn, just as humans do.
These nuances don’t weaken the Bitter Lesson. They refine it. The core message stands: if you want AI to get smarter, the most reliable path is to give it more computing power, more data, and simpler, more general learning methods.
Why “Compute” Is the New Oil
You’ve probably heard the expression “data is the new oil.” There’s truth in that, but Sutton’s Bitter Lesson suggests a sharper formulation: computing power is the new oil. Data is the crude material, but computation is the refinery that turns it into intelligence.
What does “compute” mean in everyday terms? It refers to the raw processing power of computers — specifically the specialised chips called GPUs (Graphics Processing Units) and TPUs (Tensor Processing Units) that are designed to do millions of mathematical calculations simultaneously. Think of it as the difference between one person counting rice grains versus a million people counting at the same time.
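The rice-counting analogy can be made concrete with a small sketch. The code below splits one counting job into chunks handled concurrently by software threads; a GPU applies the same divide-and-combine idea with thousands of hardware lanes instead (the chunk size and worker count here are arbitrary choices for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

# One million "rice grains" to count.
grains = list(range(1_000_000))

# Serial: a single counter works through everything.
serial_total = sum(grains)

# Parallel pattern: split into chunks, count each concurrently, combine.
# (Python threads illustrate the structure; real speedups come from
# hardware parallelism such as GPU cores, not from threads like these.)
chunks = [grains[i:i + 100_000] for i in range(0, len(grains), 100_000)]
with ThreadPoolExecutor(max_workers=10) as pool:
    parallel_total = sum(pool.map(sum, chunks))
```

Both approaches reach the same answer; the difference is purely how many workers attack the problem at once, which is exactly what GPUs and TPUs are built to maximise.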
Memory bandwidth is equally critical — this is how fast data can flow between the chip’s memory and its processing cores. Imagine a brilliant chef (the processor) stuck in a kitchen with a tiny, slow refrigerator (low memory bandwidth). No matter how talented the chef, they can’t cook fast if ingredients arrive at a trickle. Modern AI needs both powerful processors and high-speed data highways connecting them.
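A quick back-of-the-envelope calculation shows why bandwidth can be the real speed limit. The numbers below are illustrative round figures, not specifications of any particular chip:

```python
# Roofline-style sanity check with assumed, illustrative numbers:
# a chip with 100 TFLOP/s of peak compute but only 2 TB/s of memory bandwidth.
flops_per_s = 100e12    # assumed peak compute (the "chef")
bytes_per_s = 2e12      # assumed memory bandwidth (the "refrigerator")
flops_per_byte = 1.0    # workload does 1 calculation per byte fetched

# Achievable speed is the smaller of what the processor can compute
# and what the memory system can feed it.
achievable = min(flops_per_s, bytes_per_s * flops_per_byte)
utilisation = achievable / flops_per_s  # 0.02, i.e. only 2% of peak
```

Under these assumptions the processor sits idle 98% of the time waiting for data, which is why AI hardware design obsesses over memory bandwidth as much as raw calculation speed.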
Training a large AI model like GPT-4 or Gemini requires thousands of these specialised chips running for weeks or months, consuming as much electricity as a small city. The nations and companies that control this computational infrastructure will shape the future of AI — and by extension, the future of everything from healthcare and agriculture to defence and economic growth.
India’s AI Destiny: Strengths and the Missing Muscle
The Brain Trust
Millions of STEM graduates each year. India produces more engineering graduates annually than any other nation. Soaring computer science enrolments across hundreds of universities.
Data Goldmine
825 million+ internet users generating roughly 20% of the world’s data. 22 official languages creating a multilingual treasure trove no other country can match.
The Missing Muscle
Computing power! India has received only 1.12% of global AI investment despite 6.6% of world GDP. Without robust compute infrastructure, our AI ambitions remain theoretical.
India has the brains. India has the data. But the Bitter Lesson tells us clearly: without massive computational infrastructure, talent and data alone won’t produce AI leadership. It’s like having the world’s best cricket team but no stadium to play in.
This is why what the Government of India is doing right now is not just important — it’s transformative.
India’s Bold AI Push: What the Government Is Doing
The India AI Impact Summit in February 2026 and the IndiaAI Mission represent a landmark moment — India is treating compute capacity as industrial policy, not just technology spending.
IndiaAI Compute Mission
From an initial target of 10,000 GPUs, India has now onboarded over 38,000 GPUs — available to startups and researchers at a subsidised rate of just ₹65 per hour. Plans are underway to add 20,000 more. This democratises AI computing for thousands of entrepreneurs who could never afford cloud costs from global providers.
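To see what ₹65 per GPU-hour means in practice, here is a rough cost sketch. The workload sizes are hypothetical examples chosen for illustration, not figures from the mission:

```python
# Cost of compute at the subsidised IndiaAI rate (workloads are hypothetical).
rate_inr_per_gpu_hour = 65

# A small fine-tuning run: 8 GPUs for 48 hours.
small_run = 8 * 48 * rate_inr_per_gpu_hour        # ₹24,960

# A larger experiment: 256 GPUs running around the clock for two weeks.
big_run = 256 * 24 * 14 * rate_inr_per_gpu_hour   # ₹55.9 lakh, roughly
```

At these prices, serious experimentation moves from “only global giants can afford it” to within reach of a funded startup or a university lab, which is the point of the subsidy.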
₹10,300 Crore IndiaAI Mission
The Cabinet has approved approximately $1.2 billion over five years for building computing infrastructure, developing indigenous AI capabilities, and training the workforce. This is the backbone of India’s sovereign AI strategy.
India Semiconductor Mission
₹76,000 crore (~$10 billion) in incentives for chip fabrication and packaging. ISM 1.0 approved 10 projects totalling ₹1.60 lakh crore. Budget 2026 launched ISM 2.0 with additional funding for equipment, materials, and chip design capabilities. First commercial production expected in 2026.
AI Impact Summit 2026
India hosted the fourth global AI summit in New Delhi — the first ever hosted by a Global South nation. Massive infrastructure commitments poured in: Tata Group, Reliance ($110B over 7 years), Adani ($100B), Microsoft ($17.5B), Google ($15B), AWS ($12.7B). India became the world’s most contested AI infrastructure battleground.
Indigenous Models & BharatGen
India isn’t just building hardware — it’s building brains. BharatGen, a sovereign AI initiative, has developed a 17-billion-parameter model from scratch. Sarvam AI launched two foundational language models at the Summit. India is developing AI that understands Bharat’s languages, cultures, and contexts.
MANAV Framework & AI Guidelines
India released AI Governance Guidelines in February 2026 — a principle-based framework emphasising “AI for All” and responsible deployment. The New Delhi Frontier AI Commitments position India as a leader in ethical AI governance alongside building capacity.
Renewable-Powered Data Centres
India added ~44.5 GW of renewables in 2025 alone, pushing non-fossil capacity to over 51.5%. The government is offering tax holidays and subsidies for sustainable data centres. Google’s massive Visakhapatnam AI hub will be powered by clean energy.
Bharat-VISTAAR & Sector AI
Three Centres of Excellence established in Healthcare, Agriculture, and Sustainable Cities. Thirty AI applications approved for India-specific challenges. Bharat-VISTAAR uses voice-based AI to help farmers access information — showing that AI infrastructure ultimately serves the common person.
This is genuinely remarkable. India is not just building data centres — it is constructing an entire sovereign AI ecosystem, from silicon chips to governance frameworks, from GPU clouds to farmer-facing applications. The IndiaAI Mission treats computation as what the Bitter Lesson says it is: the fundamental infrastructure of intelligence.
The Bitter Lesson’s Sweet Promise for India
Richard Sutton’s Bitter Lesson, written in just a few pages, carries a message that should be chiselled into the entrance of every policy ministry and every tech startup in India:
India doesn’t need to hand-code intelligence. India doesn’t need to beg for it. India needs to build the infrastructure that lets intelligence discover itself. That infrastructure is compute — GPUs, memory bandwidth, data centres, energy, and connectivity.
The signs are overwhelmingly positive. The IndiaAI Mission, the Semiconductor Mission, the AI Impact Summit, the hundreds of billions in investment commitments — these are not just policy announcements. They represent India’s recognition that in the age of AI, computational infrastructure is as important as roads, railways, and electricity were in the industrial age.
The Bitter Lesson tells us that those who control computation will shape intelligence. India, with its billion-plus people, its unmatched data diversity, its engineering talent, and now its growing commitment to AI infrastructure, has every reason to be at that table.
The lesson may be bitter for those who bet on clever shortcuts. But for India — a nation that has always believed in doing the hard work — it is a sweet, sweet promise.
India’s Moment to Shine
By strategically investing in and expanding our computational infrastructure, we can translate India’s vast digital potential into tangible AI leadership. The Bitter Lesson isn’t just an academic observation — it’s India’s roadmap to the future.
Powering Up for Tomorrow. 🇮🇳
References & Sources
Click any link below to access the original source document
The Bitter Lesson — Original Essay & Analysis
- Primary Source Sutton, R. (2019). “The Bitter Lesson.” incompleteideas.net ↗
- PDF Version Sutton, R. (2019). “The Bitter Lesson” — University of Texas PDF mirror ↗
- Wikipedia Wikipedia. “Bitter Lesson” — Overview, examples, and reception ↗
- Analysis Sequoia Capital (2025). “Richard Sutton’s Second Bitter Lesson” — Inference by Sequoia ↗
- Academic (2024). “Learning the Bitter Lesson: Empirical Evidence from 20 Years of CVPR Proceedings” — arXiv ↗
- Philosophy (2025). “The Bitter Lesson as Operational Constructivism: AI Agents…” — PhilArchive ↗
India AI Impact Summit 2026 & Government Initiatives
- Government Press Information Bureau (2025). IndiaAI Mission — Progress, Pillars & Initiatives — PIB, Government of India ↗
- Government IndiaAI. IndiaAI Compute Capacity Hub — National AI Portal of India, MeitY ↗
- Policy Paper Principal Scientific Adviser, GoI (Dec 2025). “Democratising Access to AI Infrastructure” — Version 3.0, Working Paper (PDF Download) ↗
- Summit Coverage ERP Today (Feb 2026). “India AI Impact Summit 2026: Infrastructure Investment & AI Hosting Expansion Take Center Stage” ↗
India AI Infrastructure — Industry & Investment
- NVIDIA NVIDIA Blog (Feb 2026). “India Fuels Its AI Mission With NVIDIA” — L&T AI factories, E2E Networks, BharatGen, Sarvam AI, DGX Spark partnerships ↗
- Industry Analysis Introl Blog (2025). “India’s AI Infrastructure Boom: $50 Billion and Counting” — Microsoft $17.5B, Google $15B, AWS $12.7B, Reliance 3GW details ↗
- Market Report Mind2Markets (Feb 2026). “AI Data Center India (2026–2035): Best Insights” — ISM 2.0, semiconductor policy, 9 GW capacity projections ↗
- Report Domain-b (Feb 2026). “The Concrete Cloud: India’s $250 Billion Bet on the Physical Foundations of AI” ↗
- Ecosystem Rupareliya, P. (Feb 2026). “Top 20 Indian AI Companies Building Real Infrastructure in 2026” — Medium ↗
- News Diplotic (Feb 2026). “AI Ready Infrastructure in India: Why the Country is Making Headlines in 2026” ↗
Further Reading
- Commentary Jeffries, D. (2023). “Embracing the Bitter Lesson” — Why many people get it wrong, Substack ↗
- Finance Quantitativo (2025). “The Bitter Lesson” — Applying Sutton’s thesis to quantitative trading, Substack ↗
- Business Agathon (2025). “Richard Sutton’s Bitter Lesson Explains Why Your AI Solution Feels Shallow” ↗
- Case Study H2O / OpenCasebook. “Case Studies in Public & Private Policy Challenges of AI: The Bitter Lesson” ↗
