In a conversation on People by WTF, Anthropic CEO Dario Amodei spoke with Nikhil Kamath about the speed of AI development, the jobs most exposed to automation, and the concentration of power inside a handful of frontier labs.
The discussion moved from economic disruption to education, from startup strategy to governance. What emerged was a picture of AI as a force already reshaping incentives, institutions, and work.
Amodei described the current moment in blunt terms. “It’s as if this tsunami is coming at us, and yet people are coming up with explanations that it’s not actually a tsunami,” he said.
He stressed that the effects will not be limited to the tech sector. “The economic implications are going to be enormous. The geopolitical implications are going to be enormous.”
At the same time, he rejected the idea that warning about risks amounts to rejecting the technology. “My view isn’t that AI is bad. My view is that you need to steer AI in the right direction.”
The framing was clear: capability is advancing quickly; the real question is how institutions respond.
On employment, Amodei did not hedge. “Coding is going away first,” he said, pointing to how quickly AI systems are improving at generating and debugging software.
He added a caveat. Automation may hit repetitive programming tasks before it reaches higher-level engineering work that requires architecture decisions, product trade-offs and coordination.
He also flagged the cognitive side effects of overreliance. “Depending on how you use the model, we can see de-skilling,” he said.
In that context, he emphasised fundamentals. “Critical thinking skills are going to be really important.”
And he was direct about misuse. “If we deploy AI in the wrong way, if we deploy it carelessly, then yes, people could become stupider.”
When the conversation turned to startups, Amodei warned against surface-level AI businesses.
“Don’t build thin wrappers around models. Anyone can copy that.”
The point was about durability. Businesses that rely solely on access to a foundation model are exposed if the underlying model improves or becomes commoditised. Long-term defensibility, he suggested, comes from deeper integration — industry knowledge, regulatory context and embedded workflows.
Amodei acknowledged discomfort with how quickly frontier capabilities have consolidated within a small group of companies.
“I’ve said openly that I’m at least somewhat uncomfortable with the amount of concentration of power that’s happening here,” he said.
He argued that responsibility cannot be left to market forces alone. “We advocate for AI regulation even though it hurts us commercially,” he said, adding that guardrails should not depend purely on competitive dynamics. He also described Anthropic’s governance structure as one designed to balance commercial incentives with safety considerations.