Cognizant's agentic AI roadmap


Naveen Sharma, Global Head of AI and Analytics at Cognizant, decodes the company's plans as it aspires to be an AI builder.

Naveen Sharma, Global Head of AI and Analytics, Cognizant

Teaneck, New Jersey-headquartered Cognizant recently announced the adoption of Anthropic's large language model (LLM) family, Claude, aligning its software engineering and platform offerings with Anthropic capabilities, including Claude for Enterprise, Claude Code, the Model Context Protocol (MCP), and the Agent SDK. The company also plans to provide Claude to all its employees across key corporate functions, engineering, and delivery teams, and for coding, testing, documentation, and DevOps workflows.


Fortune India: Can you elaborate on the factors that are shaping your approach to developing and deploying AI agents in your newer deals? 

Naveen Sharma: New client engagements today are designed with an "AI first" mindset. Unless we bake autonomous AI agents into solutions from the very outset, we end up building short-lived IP. Invariably, these deployments with AI agents have demonstrated clear ROI, and clients now expect these AI-driven efficiencies. We also have a more mature approach, which combines tooling (e.g. the Cognizant Agent Foundry and the Neuro Multi Agent accelerator) and partnerships that make deploying AI agents at scale safer and faster. We are also retrofitting AI into existing long-term deals: wherever a multi-year client contract can benefit from automation, we introduce AI agents via contract addenda or renewals.

Fortune India: Could you elaborate on the partnerships and the nature of work currently being undertaken by Cognizant on agentic AI platform development? 

Naveen Sharma: We have made significant investments in an agentic AI framework for solutions across industries. Earlier this year, we launched the Cognizant Agent Foundry, a framework and toolset for building and deploying AI agents at enterprise scale. It has standardised components that can rapidly customise agents for different use cases, such as a customer service bot for retail or an automated claims processor for insurance. We are also partnering with major AI players to strengthen this foundation. We are working with Google Cloud on their Agent Space platform to jointly develop cross-industry agent solutions. This is not limited to one hyperscaler; we work with all of them. We have also made it a point to build capabilities on platforms such as ServiceNow, Salesforce, SAP and others. The idea is to avoid reinventing the wheel for each project. Eventually, this will evolve into an "Agent-as-a-Service" model. Soon, a client should be able to subscribe to a library of pre-built cognitive agents (developed by Cognizant) that perform common business functions.

Fortune India: What is the long-term roadmap for internal use of AI agents, and what are the focus areas for the implementation of agents? 

Naveen Sharma: Internally, we are deploying AI agents across a range of functions to boost efficiency and quality. For instance, in IT operations, our SmartOps system includes AI agents for proactive monitoring and auto-resolution of common incidents, resulting in up to 40% faster response times. We have deployed many such agents across various functions in talent management operations, recruitment, marketing, and bid management, and these implementations have delivered tangible gains. Our long-term roadmap is to scale such agents to every suitable internal process and increasingly have them collaborate.


Fortune India: Do you find any difference in the quality or output of GenAI tools when deployed in projects for clients with whom you have a long-standing relationship? 

Naveen Sharma: Absolutely. A long-term relationship with a client often means we've accumulated years' worth of domain-specific knowledge and data from that client's operations, and that is a goldmine for GenAI. When we build AI models for a client, being able to train or fine-tune them on the client's historical data makes a huge difference in output quality. The AI's answers and predictions come out far more accurate, context-aware, and aligned with the client's business tone. We've seen this firsthand: for one client, we fine-tuned a generative model on five years of their customer service transcripts, and the chatbot's resolution rate jumped significantly compared to a generic model. Using proprietary data helps the AI move from generic responses to bespoke insights that competitors simply can't replicate. So yes, it's a big competitive edge for us when building AI solutions: our long-standing clients trust us with their data, and we can demonstrate better results. 
