In a wide-ranging conversation on People by WTF, Zerodha co-founder Nikhil Kamath and Dario Amodei, CEO of Anthropic, moved beyond discussions of model performance and capital flows to examine two issues fast emerging as boardroom priorities: what makes humans economically relevant in an AI-accelerated world, and how far personalisation should go before it begins to erode privacy and trust.
The conversation comes at a time when artificial intelligence is no longer a futuristic add-on but a foundational layer across industries—from financial services and healthcare to enterprise software and education. As models grow more capable of reasoning, coding, and synthesising complex information, the competitive landscape is shifting. According to Amodei, the defining human advantage may no longer lie in narrow, specialised expertise alone.
“Critical thinking may be the most important skill,” he said, suggesting that as AI systems absorb and replicate technical tasks, the premium will move toward those who can interrogate outputs, contextualise insights, and integrate machine intelligence into larger systems of decision-making.
For businesses, this marks a structural transition. The value of static expertise—knowledge accumulated in silos—could compress as AI tools make advanced capabilities widely accessible. What may expand instead is the value of judgement, adaptability, and interdisciplinary thinking. In effect, the workforce equation is evolving from “what you know” to “how you think.”
This shift has direct implications for corporate strategy. Organisations investing aggressively in AI will need to rethink talent development frameworks. Training is unlikely to be limited to technical fluency; it will increasingly revolve around experimentation, systems thinking, and ethical reasoning. Amodei compared working with AI systems to learning a musical instrument—“You mostly learn by doing”—underscoring that fluency emerges through iterative engagement rather than formal instruction alone.
For India, where enterprises are rapidly integrating AI across customer service, fintech, healthtech, and logistics, the recalibration of human capital priorities could shape long-term competitiveness. As AI reduces the marginal cost of cognitive tasks, leadership depth and strategic thinking may become even more decisive differentiators.
If human relevance formed the philosophical arc of the discussion, privacy and personalisation formed its commercial counterpoint.
Amodei observed that modern AI systems can infer strikingly detailed behavioural patterns from relatively small datasets. “The model knows you super well from a relatively small amount of information,” he said, highlighting both the power and the unease embedded in AI-driven personalisation.
For businesses, this capability represents a transformative opportunity. AI systems can tailor financial advice, refine healthcare recommendations, adapt educational content, and optimise enterprise workflows with unprecedented precision. Hyper-personalisation promises higher engagement, stronger retention, and new revenue models.
Yet the same capability sharpens regulatory and ethical scrutiny. Unlike earlier digital platforms that relied on vast static data troves, contemporary AI architectures increasingly leverage dynamic feedback loops, synthetic data, and reinforcement environments. As a result, minimal behavioural signals can now yield deep inferences, and the boundary between assistance and intrusion is becoming thinner.
This duality creates what executives may increasingly view as a trust calculus. The more intimately a system understands a user, the more valuable it becomes—but also the more sensitive its governance must be. Incentive structures, monetisation strategies, and data-handling norms will likely come under closer examination as AI-native products scale.
The geopolitical layer adds further complexity. As governments tighten data localisation requirements and refine AI oversight frameworks, companies will have to balance global model development with region-specific compliance. Infrastructure, deployment models, and data governance strategies may fragment along regulatory lines, reshaping how AI services are delivered across markets.
At its core, the Kamath-Amodei exchange points to a broader inflection point in the AI economy. The first phase of competition revolved around compute, data accumulation, and model size. The next phase may centre on alignment—between machines and humans, between personalisation and privacy, and between innovation and institutional trust.
As AI systems grow more capable and more embedded in daily decision-making, the question for business leaders is no longer whether to adopt AI, but how to do so without eroding the human judgement and trust that underpin long-term value creation.
In that sense, the future of AI may hinge less on raw intelligence and more on the systems of responsibility built around it.