AI’s Secret Weapon: Why India needs more Compute Power

Feb 24, 2026

Understanding The Bitter Lesson

The path to true intelligence is paved with hardware, not just clever code. And India is waking up to this truth.

Ever Wonder How AI Really Gets Smart?

We see AI everywhere today — predicting tomorrow’s weather, recommending that next binge-worthy show on Netflix, writing poetry, creating realistic images from text descriptions, and even diagnosing diseases. Most of us assume that behind all of this is incredibly smart programming — teams of geniuses writing brilliant code, carefully teaching the machine every trick in the book.

But what if the real secret ingredient isn’t cleverness at all? What if the true engine of AI’s breathtaking progress is something far more straightforward, almost brutally simple?

Enter Richard Sutton, one of the founding figures of modern AI and a Turing Award recipient (often called the “Nobel Prize of Computing”). In 2019, he wrote a short but enormously influential essay titled “The Bitter Lesson.” In it, he distilled seventy years of AI research into a single, uncomfortable truth.

“The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin.”

— Richard Sutton, 2019

In plain language: AI breakthroughs come not from painstakingly hand-crafting rules, but from giving simple learning methods massive computational power and enormous amounts of data. More computing muscle, more data, simpler methods — that combination wins in the long run.

And the lesson is called “bitter” because it’s hard for researchers to accept. They spend years building elegant, human-knowledge-rich systems, only to watch them get demolished by a simpler system that just had more computing power to throw at the problem.

Two Roads to Intelligence

Comparing the fundamental approaches that have shaped AI history

The “Human Wisdom” Way

Painstakingly teaching machines complex strategies, rules, and nuances. Hand-coding decades of human expertise into AI systems.

  • Limited by human understanding
  • Doesn’t scale with better hardware
  • Becomes fragile in complex situations
  • Plateaus and inhibits further progress

The “Bitter Lesson” Way

Provide basic rules. Let the machine play millions of games, read billions of sentences. Raw computation for search and learning.

  • Rides the wave of Moore’s Law
  • Discovers strategies humans never imagined
  • Keeps improving as computing power grows
  • General-purpose — works across domains

What Does This Actually Mean? A Simple Analogy

Think of it like learning to cook. One approach is to memorise every recipe in the world — the precise amount of salt for every dish, every grandmother’s secret technique, every Michelin chef’s trick. This is the “human wisdom” way of doing AI. It’s impressive, but it only works for the dishes you’ve already memorised.

The other approach? Give someone a kitchen (the hardware), unlimited ingredients (the data), and one simple instruction: “Try things. Taste. Adjust. Repeat a million times.” Over time, this person doesn’t just learn existing recipes — they invent dishes no cookbook ever contained. They develop an intuition that goes beyond any recipe book.

That’s what Sutton is saying. When you give AI systems enough computing power and data, and let them learn through trial and error using simple general methods, they don’t just match human expertise — they surpass it in ways we never expected.
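The “try, taste, adjust, repeat” loop in the cooking analogy can be sketched in a few lines of code. This is a deliberately toy illustration — a simple hill-climbing search for a hidden “ideal” amount of salt, where the learner only ever sees a taste score, never the answer itself. The numbers here are invented for the example.

```python
import random

# A toy "taste" function: the hidden ideal amount of salt is 7 grams.
# The learner never sees this number -- it only gets feedback (a score).
def taste(salt_g, ideal=7.0):
    return -abs(salt_g - ideal)  # higher is better, 0 is perfect

# Trial-and-error loop: try something, taste it, keep what works.
def learn_by_trial(trials=10_000, step=0.5, seed=0):
    rng = random.Random(seed)
    best = rng.uniform(0, 20)                        # start with a random guess
    for _ in range(trials):
        candidate = best + rng.uniform(-step, step)  # small random tweak
        if taste(candidate) > taste(best):           # keep it if it tastes better
            best = candidate
    return best

print(round(learn_by_trial(), 1))  # lands near 7.0
```

No recipe was ever written down, yet the loop converges on the right answer — and with more trials (more compute), it converges more precisely. That, in miniature, is the trade Sutton describes.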

History Proves Sutton Right

If the Bitter Lesson sounds theoretical, consider these real-world examples where brute computation crushed hand-crafted human expertise:

1997

Deep Blue vs. Kasparov

IBM’s Deep Blue defeated world chess champion Garry Kasparov — not by understanding chess the way a grandmaster does, but by calculating 200 million positions per second through sheer computational brute force. Chess researchers who had spent decades encoding human chess wisdom were shocked. The machine didn’t need their wisdom. It just needed more power.
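The kind of exhaustive search Deep Blue ran can be shown on a game small enough to solve completely. The sketch below plays Nim (players alternately remove 1 to 3 stones; whoever takes the last stone wins) by checking every continuation — no strategy is hand-coded. It is an illustration of brute-force game-tree search at a vastly smaller scale, not Deep Blue’s actual algorithm.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(stones):
    """True if the player to move can force a win from this position."""
    if stones == 0:
        return False  # the previous player took the last stone and won
    # Try every legal move; we win if any move leaves the opponent in a losing spot.
    return any(not wins(stones - take) for take in (1, 2, 3) if take <= stones)

# Game theory says multiples of 4 are losing positions for the mover --
# the search rediscovers this without being told.
print([n for n in range(1, 13) if not wins(n)])  # [4, 8, 12]
```

The program “knows” nothing about Nim strategy, yet perfect play falls out of raw search — the same principle, scaled up 200-million-fold per second, that beat Kasparov.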

2016

AlphaGo & AlphaGo Zero

Google DeepMind’s AlphaGo defeated the world Go champion — a game so complex that brute force alone shouldn’t have worked. But then came AlphaGo Zero: it learned Go with zero human knowledge, just by playing millions of games against itself. It became even stronger than the original AlphaGo. No human strategies. No opening books. Just computation, data from self-play, and learning.

2020s

Large Language Models (ChatGPT & Beyond)

The revolution of ChatGPT, Claude, Gemini and other AI assistants didn’t come from researchers sitting down and hand-coding grammar rules, vocabulary lists, or logic. These systems learned by processing trillions of words using relatively simple mathematical methods — but run on thousands of powerful chips (GPUs) simultaneously. The same pattern: simple methods + massive compute = remarkable intelligence.
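To see how statistics over text can stand in for hand-coded grammar, consider this drastically simplified sketch: next-word prediction from raw bigram counts. Real language models are enormously more sophisticated, but the spirit is the same — the “model” is nothing but patterns extracted from whatever text it is fed. The tiny corpus here is invented for the example.

```python
from collections import Counter, defaultdict

# A miniature training corpus -- no grammar rules, just text.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat ate the fish ."
).split()

# Count which word follows which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(word):
    """Return the word most often seen after `word` in the corpus."""
    return bigrams[word].most_common(1)[0][0]

print(predict("the"))   # 'cat' -- the most frequent follower of "the"
print(predict("sat"))   # 'on'
```

Scale the corpus from 20 words to trillions, and the counting from a dictionary to matrix arithmetic on thousands of GPUs, and you have the recipe the paragraph describes: simple methods plus massive compute.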

Now

Voice, Vision & Everywhere

Speech recognition shifted from hand-coded rules about phonetics to neural networks trained on millions of hours of audio. Face recognition shifted from carefully designed edge-detection algorithms to deep learning on massive image datasets. In every field, the same story plays out: hand-crafted expertise gets outpaced by computation and data.

But Wait — Is It Always Just Brute Force?

Sutton’s lesson is powerful, but the real world has some important nuances. Being honest about them actually makes the lesson stronger, not weaker:

Human Intuition Still Matters

When data or computing power is limited — which is the reality for most organisations — human expertise and clever design still help enormously. You don’t always have Google-scale resources.

Data Quality Counts

“Garbage in, garbage out” remains absolutely true. If you train AI on biased, incomplete, or poorly curated data, more computation just produces faster garbage. Quality matters alongside quantity.

The Second Bitter Lesson

Sutton himself hints at something deeper: true intelligence isn’t about containing knowledge — it’s about discovering knowledge. The AI systems of tomorrow must learn to learn, just as humans do.

These nuances don’t weaken the Bitter Lesson. They refine it. The core message stands: if you want AI to get smarter, the most reliable path is to give it more computing power, more data, and simpler, more general learning methods.

Why “Compute” Is the New Oil

You’ve probably heard the expression “data is the new oil.” There’s truth in that, but Sutton’s Bitter Lesson suggests a sharper formulation: computing power is the new oil. Data is the crude material, but computation is the refinery that turns it into intelligence.

What does “compute” mean in everyday terms? It refers to the raw processing power of computers — specifically the specialised chips called GPUs (Graphics Processing Units) and TPUs (Tensor Processing Units) that are designed to do millions of mathematical calculations simultaneously. Think of it as the difference between one person counting rice grains versus a million people counting at the same time.

Memory bandwidth is equally critical — this is how fast data can flow between the chip’s memory and its processing cores. Imagine a brilliant chef (the processor) stuck in a kitchen with a tiny, slow refrigerator (low memory bandwidth). No matter how talented the chef, they can’t cook fast if ingredients arrive at a trickle. Modern AI needs both powerful processors and high-speed data highways connecting them.
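A quick back-of-envelope calculation shows why bandwidth, not just processing speed, sets the ceiling. To generate a single word, a language model must stream essentially all of its weights from memory to the processor. The figures below are illustrative assumptions, not the specs of any particular chip.

```python
# Back-of-envelope: memory bandwidth caps how fast a model can respond.
params = 17e9            # a 17-billion-parameter model (BharatGen-scale)
bytes_per_param = 2      # 16-bit precision
weight_bytes = params * bytes_per_param          # ~34 GB of weights

bandwidth = 2e12         # assume ~2 TB/s of high-bandwidth memory (HBM)

# Every generated token requires one full pass over the weights.
seconds_per_token = weight_bytes / bandwidth
print(f"{seconds_per_token * 1000:.0f} ms per token")          # 17 ms
print(f"~{1 / seconds_per_token:.0f} tokens/second at best")   # ~59
```

However fast the processor, this model cannot answer faster than its memory can feed it — which is why the chef-and-refrigerator analogy holds, and why HBM supply is as strategic as the chips themselves.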

Training a large AI model like GPT-4 or Gemini requires thousands of these specialised chips running for weeks or months, consuming as much electricity as a small city. The nations and companies that control this computational infrastructure will shape the future of AI — and by extension, the future of everything from healthcare and agriculture to defence and economic growth.
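The scale of training can also be estimated with a widely used rule of thumb: training a transformer takes roughly 6 × N × D floating-point operations, where N is the parameter count and D the number of training tokens. Every figure below is an illustrative assumption, not a published specification of any model.

```python
# Back-of-envelope training cost using the ~6*N*D FLOPs rule of thumb.
N = 17e9                 # parameters (a BharatGen-scale model)
D = 1e12                 # one trillion training tokens (assumed)
flops = 6 * N * D        # ~1.0e23 floating-point operations

gpu_flops = 300e12       # assume one GPU sustains ~300 TFLOP/s
n_gpus = 1000            # a 1,000-GPU cluster

seconds = flops / (gpu_flops * n_gpus)
print(f"~{seconds / 86400:.0f} days on {n_gpus} GPUs")  # ~4 days, at perfect utilisation
```

Real training runs achieve far less than perfect utilisation and so take several times longer, and frontier models like GPT-4 are believed to be much larger than 17 billion parameters — which is how “weeks or months on thousands of chips” arises, and why GPU count is the currency of AI leadership.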

India’s AI Destiny: Strengths and the Missing Muscle

The Brain Trust

Millions of STEM graduates each year. India produces one of the world’s largest annual cohorts of engineering graduates. Soaring computer science enrolments across hundreds of universities.

Data Goldmine

825 million+ internet users generating roughly 20% of the world’s data. 22 official languages creating a multilingual treasure trove no other country can match.

The Missing Muscle

Computing power! India has received only 1.12% of global AI investment despite 6.6% of world GDP. Without robust compute infrastructure, our AI ambitions remain theoretical.

India has the brains. India has the data. But the Bitter Lesson tells us clearly: without massive computational infrastructure, talent and data alone won’t produce AI leadership. It’s like having the world’s best cricket team but no stadium to play in.

This is why what the Government of India is doing right now is not just important — it’s transformative.

India’s Bold AI Push: What the Government Is Doing

The India AI Impact Summit in February 2026 and the IndiaAI Mission represent a landmark moment — India is treating compute capacity as industrial policy, not just technology spending.

Compute

IndiaAI Compute Mission

From an initial target of 10,000 GPUs, India has now onboarded over 38,000 GPUs — available to startups and researchers at a subsidised rate of just ₹65 per hour. Plans are underway to add 20,000 more. This democratises AI computing for thousands of entrepreneurs who could never afford cloud costs from global providers.

Investment

₹10,300 Crore IndiaAI Mission

The Cabinet has approved approximately $1.2 billion over five years for building computing infrastructure, developing indigenous AI capabilities, and training the workforce. This is the backbone of India’s sovereign AI strategy.

Semiconductor

India Semiconductor Mission

₹76,000 crore (~$10 billion) in incentives for chip fabrication and packaging. ISM 1.0 approved 10 projects totalling ₹1.60 lakh crore. Budget 2026 launched ISM 2.0 with additional funding for equipment, materials, and chip design capabilities. First commercial production expected in 2026.

Ecosystem

AI Impact Summit 2026

India hosted the fourth global AI summit in New Delhi — the first ever hosted by a Global South nation. Massive infrastructure commitments poured in: Tata Group, Reliance ($110B over 7 years), Adani ($100B), Microsoft ($17.5B), Google ($15B), AWS ($12.7B). India became the world’s most contested AI infrastructure battleground.

Sovereign AI

Indigenous Models & BharatGen

India isn’t just building hardware — it’s building brains. BharatGen, a sovereign AI initiative, has developed a 17-billion-parameter model from scratch. Sarvam AI launched two foundational language models at the Summit. India is developing AI that understands Bharat’s languages, cultures, and contexts.

Governance

MANAV Framework & AI Guidelines

India released AI Governance Guidelines in February 2026 — a principle-based framework emphasising “AI for All” and responsible deployment. The New Delhi Frontier AI Commitments position India as a leader in ethical AI governance alongside building capacity.

Green Energy

Renewable-Powered Data Centres

India added ~44.5 GW of renewables in 2025 alone, pushing non-fossil capacity to over 51.5%. The government is offering tax holidays and subsidies for sustainable data centres. Google’s massive Visakhapatnam AI hub will be powered by clean energy.

Applications

Bharat-VISTAAR & Sector AI

Three Centres of Excellence established in Healthcare, Agriculture, and Sustainable Cities. Thirty AI applications approved for India-specific challenges. Bharat-VISTAAR uses voice-based AI to help farmers access information — showing that AI infrastructure ultimately serves the common person.

This is genuinely remarkable. India is not just building data centres — it is constructing an entire sovereign AI ecosystem, from silicon chips to governance frameworks, from GPU clouds to farmer-facing applications. The IndiaAI Mission treats computation as what the Bitter Lesson says it is: the fundamental infrastructure of intelligence.

What Comes Next? The Road Ahead

The foundation is being laid. But the Bitter Lesson demands that India think even bigger.

01

Scale to 100,000+ GPUs

38,000 GPUs is a strong start, but global leaders operate millions. India must accelerate to at least 100,000 GPUs in the national pool by 2028, with long-term targets measured in the millions.

02

Domestic Chip Fabrication

ISM 2.0 must fast-track actual production. Buying GPUs from NVIDIA is necessary today, but designing and manufacturing our own AI chips is the path to true sovereignty.

03

Memory Bandwidth Revolution

Compute is only half the story. India needs investment in high-bandwidth memory (HBM) manufacturing and networking infrastructure. The fastest chip is useless if data can’t reach it fast enough.

04

Green AI Infrastructure

AI data centres consume enormous energy. India’s renewable push is excellent, but we need dedicated green energy corridors for AI facilities. Sustainable AI is not optional — it’s essential.

05

Tier-2 & Tier-3 City Access

Current infrastructure is concentrated in Mumbai, Chennai, and Hyderabad. The next phase must bring edge computing and AI access to smaller cities and rural India — true democratisation of intelligence.

06

AI Literacy at Scale

Hardware without human capacity is wasted capacity. India needs AI education not just in IITs, but in every district college. A farmer, a nurse, a shopkeeper — everyone should understand and use AI.

The Bitter Lesson’s Sweet Promise for India

Richard Sutton’s Bitter Lesson, written in just a few pages, carries a message that should be chiselled into the entrance of every policy ministry and every tech startup in India:

India doesn’t need to hand-code intelligence. India doesn’t need to beg for it. India needs to build the infrastructure that lets intelligence discover itself. That infrastructure is compute — GPUs, memory bandwidth, data centres, energy, and connectivity.

The signs are overwhelmingly positive. The IndiaAI Mission, the Semiconductor Mission, the AI Impact Summit, the hundreds of billions in investment commitments — these are not just policy announcements. They represent India’s recognition that in the age of AI, computational infrastructure is as important as roads, railways, and electricity were in the industrial age.

The Bitter Lesson tells us that those who control computation will shape intelligence. India, with its billion-plus people, its unmatched data diversity, its engineering talent, and now its growing commitment to AI infrastructure, has every reason to be at that table.

The lesson may be bitter for those who bet on clever shortcuts. But for India — a nation that has always believed in doing the hard work — it is a sweet, sweet promise.

India’s Moment to Shine

By strategically investing in and expanding our computational infrastructure, we can translate India’s vast digital potential into tangible AI leadership. The Bitter Lesson isn’t just an academic observation — it’s India’s roadmap to the future.

Powering Up for Tomorrow. 🇮🇳
