We don’t view India as just one thing or the other; it’s a massive opportunity on multiple fronts, says Databricks CEO Ali Ghodsi

This story belongs to the Fortune India Magazine March 2026 issue.

The CEO of the world’s seventh most-valued private company, worth $134 billion, shares his views on the middle ground in the AI era and why India’s potential, both as a talent pool and a market, remains unmatched.

Ali Ghodsi: “Don’t try to automate the CEO right out of the gate. Focus on simple automations first, nail them with high reliability, then climb to more complex ones.” Credits: Sanjay Rawat

There’s increasing debate on the perils of artificial intelligence (AI) and its impact on society. Yet, billions are being poured into AI models by Big Tech, with limited visibility on the return on investment. Which side of the debate are you on?

The way I see it, there are three paradigms playing out in the AI world.

Paradigm 1 — The Scaling Laws Approach: This is the dominant view in Silicon Valley, especially among the top labs: Anthropic, OpenAI, Google DeepMind, xAI (Elon Musk’s company), and Meta with its Meta Superintelligence Labs.

The core idea is that scaling laws hold, that is, whoever has the most compute (GPUs), data, and capital can train the smartest model and reach superintelligence fastest. Superintelligence is often left vague, but it’s essentially god-like intelligence: an AI that can recursively self-improve. Once that happens, you get extreme scenarios: models talking to each other at unimaginable speeds, iterating so quickly that humans can’t keep up, or (in the optimistic version) solving cancer, disease, and every major problem overnight.

These labs pursue this by training ever-larger models and measuring progress against the toughest benchmarks: Math Olympiad-level problems, Physics Olympiad, programming contests, and things like Humanity’s Last Exam (a frontier benchmark designed to test expert-level reasoning across subjects). But the catch is that it’s capital- and energy-intensive. Questions about return on investment (RoI), sustainability, and diminishing returns are mounting(1).

Paradigm 2 — The Research Sceptics: This camp includes foundational researchers such as Yann LeCun (a pioneer in neural networks and Meta’s former chief AI scientist), Richard Sutton (reinforcement learning expert and 2024 Turing Award winner), and others who helped build the field. They argue that current scaling techniques won’t get us to superintelligence. Why? Because human (and biological) intelligence works nothing like today’s large language models (LLMs). Humans learn efficiently from childhood — picking up language, grasping new concepts, and generalising with minimal data — without massive pre-training followed by frozen inference. We learn continuously and adaptively.

These sceptics say scaling alone hits fundamental limits; true breakthroughs will require new paradigms. They often estimate superintelligence is 20+ years away — or that we simply don’t know yet(2). This is the “let us do fundamental research” camp. I personally lean towards this view.

Paradigm 3 — AGI is here: This is the perspective Databricks (and many practical builders) focuses on: we’ve already achieved artificial general intelligence (AGI), even if we don’t yet have superintelligence — and that’s more than enough. We don’t need to wait for god-like AI.

In 2009, I was at UC Berkeley’s AMPLab. Back then, AGI loosely meant an AI that could reason, converse, spot patterns in data, handle language, and perform general tasks, goals that models such as ChatGPT have surpassed.

I double-checked with colleagues who were there 16 years ago, and they all agreed: “by those original definitions, we’ve hit AGI”(3). The goalposts, however, keep moving. Now people want it to automate 80% of the economy, never make mistakes, or match superhuman experts everywhere. Humans get things wrong constantly; why hold AI to an impossible standard? In reality, current LLMs demonstrate general intelligence: they handle a wide range of tasks flexibly. We have what we need to build extremely useful systems. We just haven’t fully deployed them yet.

So, in a way, Camp 1 is pouring massive capital into infrastructure for superintelligence; I’m sceptical we need it, and the risks and costs seem high. Camp 2 is pursuing long-term research that’s likely decades away; I’ll be excited to use those advances in 20 years. Camp 3 is where the real economic value lies today: practical teams inside companies (including Databricks) making AI useful for everyday work.