The CEO of the world’s seventh most-valued private company, worth $134 billion, shares his views on the middle ground in the AI era and on why India’s potential, both as a talent pool and as a market, remains unmatched.

This story belongs to the Fortune India Magazine March 2026 issue, “India’s Biggest Unicorns”.
There’s increasing debate on the perils of artificial intelligence (AI) and its impact on society. Yet, billions are being poured into AI models by Big Tech, with limited visibility on the return on investment. Which side of the debate are you on?
The way I see it, there are three paradigms playing out in the AI world.
Paradigm 1 — The Scaling Laws Approach: This is the dominant view in Silicon Valley, especially among the top labs: Anthropic, OpenAI, Google DeepMind, xAI (Elon Musk’s company), and Meta with its Meta Superintelligence Labs.
The core idea is that scaling laws hold, that is, whoever has the most compute (GPUs), data, and capital can train the smartest model and reach superintelligence fastest. Superintelligence is often left vague, but it’s essentially god-like intelligence: an AI that can recursively self-improve. Once that happens, you get extreme scenarios: models talking to each other at unimaginable speeds, iterating so quickly that humans can’t keep up, or (in the optimistic version) solving cancer, disease, and every major problem overnight.
These labs pursue this by training ever-larger models and measuring progress against the toughest benchmarks: Math Olympiad-level problems, Physics Olympiad, programming contests, and things like Humanity’s Last Exam (a frontier benchmark designed to test expert-level reasoning across subjects). But the catch is that it’s capital- and energy-intensive. Questions about return on investment (RoI), sustainability, and diminishing returns are mounting(1).
Paradigm 2 — The Research Sceptics: This camp includes foundational researchers such as Yann LeCun (a pioneer in neural networks and former Meta chief scientist), Richard Sutton (Reinforcement Learning expert and 2024 Turing Award winner), and others who helped build the field. They argue that current scaling techniques won’t get us to superintelligence. Why? Because human (and biological) intelligence works nothing like today’s large language models (LLMs). Humans learn efficiently from childhood — picking up language, grasping new concepts, and generalising with minimal data — without massive pre-training followed by frozen inference. We learn continuously and adaptively.
These sceptics say scaling alone hits fundamental limits; true breakthroughs will require new paradigms. They often estimate superintelligence is 20+ years away — or that we simply don’t know yet(2). This is the “let us do fundamental research” camp. I personally lean towards this view.
Paradigm 3 — AGI is here: This is the perspective Databricks (and many practical builders) focus on: we’ve already achieved artificial general intelligence (AGI), even if we don’t yet have superintelligence — and that’s more than enough. We don’t need to wait for god-like AI.
In 2009, I was at UC Berkeley’s AMPLab. Back then, AGI loosely meant an AI that could reason, converse, spot patterns in data, handle language, and perform general tasks: goals that models such as ChatGPT have surpassed.
I double-checked with colleagues who were there with me in 2009, 16 years ago, and they all agreed: “by those original definitions, we’ve hit AGI”(3). The goalposts, however, keep moving. Now people want it to automate 80% of the economy, never make mistakes, or match superhuman experts everywhere. Humans get things wrong constantly; why hold AI to an impossible standard? In reality, current LLMs demonstrate general intelligence: they handle a wide range of tasks flexibly. We have what we need to build extremely useful systems. We just haven’t fully deployed them yet.
So, in a way, Camp 1 is pouring massive capital into infrastructure for superintelligence. I’m sceptical we need it, and the risks and costs seem high. Camp 2 is pursuing long-term research that’s likely decades away. I’ll be excited to use those advances in 20 years. Camp 3 is where the real economic value lies today: practical teams inside companies (including Databricks) making AI useful for everyday work.
What’s the possibility of AI self-improvement? Could computers start talking to each other, and evolve in ways we don’t even understand?
That scenario of AI systems communicating autonomously, developing proprietary languages, and self-evolving beyond human comprehension is exactly what Camp 1 envisions if their scaling approach succeeds. But as someone in Camp 2, I believe it’s unlikely with current techniques.
For example, today’s AI can’t learn new things independently. Every significant advancement requires training a fresh model from scratch, which is an increasingly expensive, human-intensive process. It’s taking longer, costing more, and demanding more resources with each generation. So, models aren’t “training each other” to acquire new languages or capabilities; that would require a level of autonomy we don’t have.
Right now, creating the next model is a manual ordeal: hundreds of PhDs and engineers labour for six months or more, designing, training, and refining. The result is a “baked” model: static and fixed. It operates within what it was trained on; it can’t spontaneously invent a new language and start using it with other AIs.
That’s the current reality. If we ever reach superintelligence, where models self-create and self-improve, getting faster, cheaper, and more efficient with each iteration, then yes, that explosive scenario becomes possible. And it might happen someday.
But to achieve that, the question to ask is whether the cost and time to create the next AI will start shrinking dramatically. Imagine if building something such as GPT-6 took just 5 minutes and cost 20 cents, GPT-7 took 10 seconds and 5 cents, and so on. That compounding efficiency would signal we’re heading towards true self-improvement. But today, everything trends in the opposite direction: costs are ballooning into billions, even hundreds of billions or trillions(4). We have to rely on top researchers who require years and billions in funding to innovate. Until those barriers flip, self-improving AI remains more hype than imminent reality.
Cutting through the noise, what’s the real Holy Grail of enterprise AI? Is it agentic AI, or is agentic AI just the lowest-hanging fruit with a convincing sales pitch? Or are we missing something bigger about enterprise AI?
Think of it like this: When the first companies were formed way back, someone might have said, “Hey, the most important thing going forward is humans working inside organisations — they’re going to be the key.” And your question is essentially, “Is that the Holy Grail?” Well, it depends on which humans, how you configure them, and what they can create together as a group.
Yes, agents are the future of enterprise AI, but the trillion-dollar question right now is: Can you make these agents reliably perform everyday tasks so that organisations can actually leverage them consistently? That’s the real Holy Grail.
And the answer is yes. At Databricks, we’re starting with the lowest-hanging fruit: the most mundane, repetitive tasks. Don’t try to automate the CEO right out of the gate. Focus on simple automations first, nail them with extremely high reliability, and then gradually climb to more complex ones. It’s like building the first airplane — you just want something that flies, not a supersonic jet on day one. That’s our focus.
Are enterprises in control of the AI narrative, treating it not as some fancy redesign of how organisations work but as something more practical? Are enterprises clear on what AI will deliver for them?
It’s a yes and no situation.
Enterprise tasks aren’t about acing Math Olympiads; they’re about reliably reading documents, updating systems, instructing teams, and handling multi-step workflows without errors(5). Models that crush Olympiad problems often fail at “simple” human tasks.
That’s our focus at Databricks — and with tools such as Agent Bricks, we’re building production-grade agents optimised for these real-world needs. We’re not alone; many others in the industry are doing this, and we are seeing strong results.
Inside most organisations today, there are data science teams that have been around for years. We’ve been training data scientists and PhDs for the last 10, 15, even 20 years, and you’ll find these smart people leading data and AI efforts in companies everywhere, including in India.
India, in fact, has been a leader in AI for a very long time — 40 or 50 years. Those folks absolutely understand what I’m saying. They’re in charge of data and AI infrastructure, and if they heard this interview, they’d get it completely.
That said, these teams are a small subset in a big corporation with, say, 100,000 employees. There’s the CEO and layers of others, and things have shifted dramatically. Five years ago, nobody outside those data and AI groups cared about this stuff at all. It was just a niche: a small team doing “advanced statistics” that the CEO barely understood. We at Databricks would talk to them — they were great — but they lacked big budgets because the broader organisation didn’t see the value.
Now, post-ChatGPT (which launched in November 2022), everybody’s woken up. Everyone cares about AI, has an opinion, has read something online, and suddenly they’re experts. So yes, there’s confusion across organisations: different companies handle it differently. In the best ones, those established data and AI teams have huge influence and steer things right. But in others, hype takes over from people who don’t know how it works.
The silver lining? Those data and AI teams we partner with are getting way more budget than before 2022. ChatGPT sparked the awakening, and funding flowed to these experts.
What’s especially cool about India is the economy’s been thriving during this period. While the West and other regions dealt with bumps such as inflation and uncertainty, India has been on a steady rise.
Overall, I’m super optimistic. People [here] do understand what’s happening, even amid the noise and confusion. In any organisation of 10,000, 50,000, or 100,000 people, there’s always a core group of 100 or so who are incredibly smart — many with PhDs from places such as the IITs(6). They produce top-tier, knowledgeable talent that drives real progress.
So, is it advantage India?
One trend we’ve seen in the U.S. is that far more IIT graduates and other top talent are now staying in India. They don’t want to move abroad anymore. Previously, if you were a top graduate from a top institution, you’d of course head to the U.S. or maybe Europe.
Today, that’s no longer the case. And in fact, there’s a reverse migration happening: many great people who came to the U.S. from India, built careers here, are now deciding to go back. Like my friend Naveen Tewari, CEO of InMobi — he was here but then decided to return. There’s bigger opportunity there; you can build companies, hire smarter people. The opportunity is huge, the economy’s doing well, everything is optimistic. So, I think this is a good time for India to invest in AI. Yes, there’s hype, but we’re going to reap some real benefits.
So how are you milking the AI opportunity in India? Is it more from using the raw talent that India is producing, or are you seeing it as a market?
It’s both. We don’t view India as just one thing or the other; it’s a massive opportunity on multiple fronts.
First and foremost, talent is huge for us. We have a very substantial R&D presence in India, with a strong focus on the IITs and top institutions. The people we’re hiring every year are increasingly phenomenal — stronger in math, statistics, linear algebra, and core AI skills than ever before.
There’s a global talent war for experts who truly understand these fundamentals, and India produces some of the best on the planet. With geopolitical tensions and restrictions around China, India stands out as phenomenal and accessible. Our researchers and engineers in India stay in India. We don’t push relocation unless they want it. That’s why we’ve committed to major growth here.
To back this up, we’ve announced a strategic investment of over $250 million in India(7) over the next few years (announced in April 2025). This includes opening a new state-of-the-art R&D office in Bengaluru — 105,000 sq. ft at Bagmane Capital Park — designed for innovation and collaboration. We are close to a thousand people in India, and we will continue growing that; we want to grow more aggressively here than we are growing headcount elsewhere.
India is a booming market for us. The economy is thriving, companies are increasingly becoming high-tech, and there’s strong demand for data and AI solutions. We love partnering with leading Indian enterprises — they’re innovative and scaling fast.
For instance, we work closely with InMobi — they’re doing exciting things in mobile advertising and beyond, leveraging our platform for data intelligence and AI. Another great example is Swiggy. They’re using AI agents built with Agent Bricks (our production-grade agent platform) to power customer support. When users interact with Swiggy’s support — asking about orders or refunds — the responses come from these agents.
Is that the most commoditised part of it? If just about everyone is talking about conversational bots becoming agentic AI, is that something you’d really focus on? Or maybe that’s a product more suited for markets like India?
We focus on whatever that particular company says is the biggest opportunity for them — the thing that’s truly valuable and differentiated in their business. It’s different for every organisation.
Take HDFC Bank. They’re using agents for underwriting and risk assessment — not customer support. The agents help evaluate: Should we issue this loan? Will the borrower repay it? Is the risk pricing accurate? This involves analysing credit data, external sources, fraud signals, and more to make smarter, faster decisions on loans and credit processes. It’s a high-stakes, core banking use case beyond basic chat.
Then there’s Zepto, a leader in quick commerce. Their agents handle things such as analysing refunds, spillage, expiry issues: more machine learning-driven operational intelligence than pure support. They also use us for customer support, but the real value comes from optimising supply chain and logistics in a hyper-fast environment.
We’re making strong inroads because India is important to us... I’ve always been a fan of investing ahead of the RoI curve. In companies such as ours, analysts run calculations: What’s the best return: India, Korea, Japan? I’ve always said, “Let’s go bigger in India, even if current demand isn’t fully there yet,” because I’m bullish on the long-term potential.