SOME YEARS AGO, after a board meeting of the Reliance Innovation Council, which drives innovative and transformational strategies at India’s largest and most profitable energy-to-retail conglomerate, Mukesh Ambani engaged in a thought-provoking chat with Raghunath Mashelkar, the former director-general of the CSIR and then a member of the council. Ambani proposed, almost offhandedly, “Doc, we must leapfrog to something.” Reflecting on this conversation with Fortune India, Mashelkar, now 82, who serves as chairman of the Reliance New Energy Council and chancellor of Jio Institute, says, “My response was to question the premise of his metaphor.” He advised Ambani, “Frogs leap as a natural defence mechanism against predators, a response rooted in fear. Do we really wish to frame our ambitions around fear of our competitors?” Instead, Mashelkar offered a more fitting analogy for ambition. “Reliance should not just leap but pole vault, where the pole’s length symbolises the magnitude of our aspirations.”

Whether the exchange had a profound impact on Ambani is not known, but it did foreshadow Jio’s disruptive entry into telecom in late 2016. The move not only dismantled competition but also laid fertile ground for the meteoric rise of India’s digital economy, which pole vaulted from 156th to first in global mobile data usage even as Jio emerged as the numero uno with 470 million subscribers. “The rapidity, speed, scale, and sustainability of it was incredible,” says Mashelkar, encapsulating the strategy as a form of corporate pole vaulting.

That strategic thinking defines RIL’s approach to innovation and growth. Rather than doing well first and doing good later, it chose to make doing good the business itself. “Not only has Jio democratised access to telecom but is also making money out of it,” says Mashelkar, whose son, Amey, heads RIL’s start-up accelerator.

Pole Vault 2.0

Even as Jio has firmly entrenched itself at the heart of India’s digital ecosystem, Ambani has now set sights on yet another vault. This time the ambition is to ride the artificial intelligence (AI) wave that is sweeping businesses across the globe. Articulating that vision at the 2023 AGM, Ambani told shareholders, “There is a fifth — and most exciting — frontier of growth for Jio. A global AI revolution is reshaping the world around us, and sooner than we think, intelligent applications will redefine and revolutionise industries, economies, and even our daily life. To stay globally competitive, India must harness AI for innovation, growth, and national prosperity.”

It’s not surprising to see RIL partner with Nvidia, the U.S. chipmaker that controls 95% of the AI chip market, to develop a foundational large language model (LLM) trained on an array of Indian languages, 11 to begin with. While Reliance has not made its intent public, Nvidia CEO Jensen Huang reveals that the LLM will serve as a critical component for generative AI models similar to ChatGPT and will ultimately be owned by Reliance. According to Huang, this model will enable RIL to develop AI-driven services and applications. “Reliance can build its own large language models that power generative AI applications made in India, for the people of India,” he adds. Jio Platforms, an RIL subsidiary and the parent of Reliance Jio, has already rolled out Jio Brain, an AI platform designed to incorporate machine learning capabilities into telecom and enterprise networks without necessitating network or IT changes. It also offers LLM as a service for enterprise and mobile applications. “Reliance wants to revolutionise the enterprise space with the use of AI… there is a centre of excellence with 100 experts working on AI solutions. Mukesh believes it is going to be transformative,” says Mashelkar.

While Reliance’s tie-up with Nvidia is for a foundational LLM, it is also collaborating with nine Indian Institutes of Technology (IITs) to launch an advanced AI model on the open-source Bharat GPT programme. Named Hanooman, the model stands out for its scale and scope, supporting 22 Indian languages and offering multimodal capabilities, including generating text, speech, and video across multiple languages. This makes it applicable in healthcare, governance, financial services, and education. Notably large, with up to 40 billion parameters, the model is designed to deliver nuanced and contextually accurate responses in Indian languages. A larger number of parameters generally allows a model to capture more complex patterns and subtleties in data, potentially leading to better performance and more nuanced responses, though much also depends on factors such as the richness of the data, model architecture, and fine-tuning.

Foundational AI models provide a versatile and extensive base of pre-trained knowledge, which generative AI systems can use to produce new content. Fine-tuned with additional data or with adjusted parameters, foundational models can be specialised to perform generative tasks more effectively, building on the broad understanding and capabilities they have already developed.
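A toy sketch can make the pre-train-then-fine-tune idea concrete. Everything below is invented for illustration (it is not any company’s actual pipeline): a tiny linear model is “pre-trained” on general data, then adapted to a slightly different domain with a few more gradient steps, rather than being retrained from scratch.

```python
# Toy illustration: fine-tuning means starting from pre-trained
# weights and taking further gradient steps on domain-specific data.

def mse(w, b, data):
    """Mean squared error of the linear model y = w*x + b."""
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

def sgd_steps(w, b, data, lr=0.01, steps=200):
    """Plain gradient descent on the MSE loss."""
    for _ in range(steps):
        gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
        gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
        w, b = w - lr * gw, b - lr * gb
    return w, b

# "Pre-training": broad, general-purpose data (roughly y = 2x).
general = [(x, 2.0 * x) for x in range(-5, 6)]
w, b = sgd_steps(0.0, 0.0, general)

# "Fine-tuning": a small domain dataset with a shifted relationship
# (roughly y = 2x + 3). We adapt the pre-trained weights instead of
# learning everything again from zero.
domain = [(x, 2.0 * x + 3.0) for x in range(-3, 4)]
before = mse(w, b, domain)
w_ft, b_ft = sgd_steps(w, b, domain, steps=500)
after = mse(w_ft, b_ft, domain)

print(f"domain loss before fine-tuning: {before:.3f}")
print(f"domain loss after fine-tuning:  {after:.3f}")
```

The same principle, at vastly greater scale, is what lets a general foundational model be specialised for, say, Indian-language tasks with comparatively little extra data.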

Similar to the journey Reliance has undertaken, the country’s oldest conglomerate, Tata Group, too, has partnered with the American chipmaker. Huang says the partnership with Tata Consultancy Services, Tata Motors and Tata Communications is aimed at building AI infrastructure that “is over an order of magnitude more powerful than the fastest supercomputer in India today.” Keeping the goliaths company are the NexGen companies. According to AIM Research, AI start-ups from India raised $560 million across 25 funding rounds in 2023, with late-stage funding accounting for 24% of the rounds and 17% of the value raised. The highest single raise ($250 million, in a Series D round) went to a composable software platform provider. Expectedly, the proliferation of start-ups is concentrated in the enterprise AI space.

Acting as the proverbial ringmaster, the government, too, sees a bigger role for itself in the AI ecosystem. Much before ChatGPT’s launch in November 2022, a task force on AI constituted by the Centre identified 10 important domains (including healthcare, education and public utility services) with the stated objective of viewing AI as a socio-economic problem solver at scale rather than a mere economic growth booster.

While the government recognises the potential to democratise AI for the greater good, similar to what it achieved with the Unified Payments Interface (UPI) in payments, leveraging AI presents a completely different set of challenges.

Where Is The Oil?

Cliché as it sounds, if data is the new oil, access to data remains critical to harness the power of India-centric AI, not to mention the need for computing power, as graphics processing units (GPUs) don’t come cheap. GPUs typically have higher memory bandwidth than central processing units (CPUs) and are generally faster and more efficient for tasks such as AI model training.

India’s strategic advancements in AI hinge on critical pillars of open data, robust computing infrastructure, and accessible AI models. Key initiatives such as the INDIAai programme and the C-DAC collaborations are central to this strategy, facilitating the development of LLMs essential for competing on a global scale.

Launched in 2021, the INDIAai programme aims to make government-anonymised data available to Indian academics and start-ups, fostering innovation. This initiative is supported by the creation of a formidable AI computing infrastructure by C-DAC, in partnership with the private sector, tailored to meet the high computational demands of LLMs.

Additionally, the INDIAai Mission, introduced in 2024, focuses on creating a non-personal data collection platform specifically for Indian start-ups and companies. This platform is set to harness vast datasets from government repositories — ranging from Aadhaar metadata to ISRO’s earth observation data and environmental data submitted by citizens and central agencies — which are instrumental for various applications, including governance. There is also the India Stack, a comprehensive digital framework initiated with the launch of Aadhaar in 2009. Built upon this foundational identity layer is a payments layer featuring the UPI payment system and a data layer for citizens to store government documents online. However, access to this wealth of data is conditional; only companies that can be categorised as trusted sources with proven track records will be eligible, though specific criteria for this trustworthiness remain undefined.

To address the issue, Sandip Patel, MD, IBM India & South Asia, sees a collaborative approach as the way forward for the development and deployment of AI solutions. “The truth is that there’s no AI without IA (information architecture). Hence, partnerships between start-ups and entities that possess significant data resources can be crucial. This approach allows start-ups to innovate and create applications that leverage existing data sets,” says Patel.

Elaborating further, Patel refers to IBM’s “watsonx” initiative in AI that offers foundation models at scale. The platform is intentionally designed to be hybrid and supports multiple models, so it is not limited to just IBM’s own. “Recently, we released smaller models (SLMs), enhancing performance and latency for users. Over time, these innovations will continue to evolve. The platform’s multi-modal capability enables the creation of diverse use-cases utilising various models,” says Patel. SLMs are scaled-down versions of LLMs suited to narrow, domain-specific purposes, such as particular tasks in natural language processing (NLP).

However, the quality and scope of non-personal data that can be utilised remain broad and ambiguous, encompassing everything from essential citizen information required for services such as passports to datasets that support agricultural advancements and weather prediction. The introduction of the Digital Personal Data Protection (DPDP) Act 2023 complicates the landscape, as it permits the scraping of publicly available data without consent. This poses privacy concerns, especially as AI technologies that can exploit such data continue to evolve.

Despite these privacy concerns, there is also a focus on using data responsibly to foster cultural and linguistic inclusiveness. For instance, efforts are being made to develop 16 new datasets in Indian languages, managed by the Linguistic Data Consortium for Indian Languages. The initiative aims to enhance technologies such as automatic speech recognition and live voice translation, further democratising technology use across India’s varied linguistic landscape. Though datasets in Indian languages are limited, Prashanth Kaddi, partner, Deloitte India, believes there is a way out. “Though there is a scarcity of data sets in Indian languages, there is an opportunity to advance multilingual NLP using creative techniques such as zero-shot and few-shot learning. Making smaller trained models will also help in advancing multilingual NLP, though the pace will be much slower. There has been some work on the IndicNLP Suite (a collection of foundational resources for language processing covering 12 major Indian languages).”

NLP involves comprehension of spoken or written language, which includes grasping syntax (sentence structure), semantics (meaning), and context. Zero-shot and few-shot learning are techniques in machine learning that aim to handle situations where annotated data is scarce or unavailable. For example, in image-recognition tasks, annotated data might include images of objects outlined and labelled (e.g., “dog”, “car”, “tree”). In text analysis, annotations could involve tagging words or phrases with their corresponding parts of speech or marking sentiment (e.g., positive, negative, neutral).
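The annotation idea above can be sketched in a few lines of Python. The snippet below is my own toy illustration, not any system described in this article: a handful of sentences annotated with sentiment labels serve as the “few shots”, and an unseen sentence is labelled by simple word overlap with those examples (zero-shot learning would go one step further and use no labelled examples at all).

```python
# Annotated data: each example is text tagged with a sentiment label.
annotated = [
    ("the film was wonderful and moving", "positive"),
    ("a delightful heartwarming story", "positive"),
    ("dull plot and terrible acting", "negative"),
    ("a boring forgettable mess", "negative"),
]

def overlap(a, b):
    """Count words shared between two sentences (bag-of-words view)."""
    return len(set(a.split()) & set(b.split()))

def few_shot_classify(text):
    """Label a new sentence with the tag of its closest annotated example."""
    best = max(annotated, key=lambda ex: overlap(text, ex[0]))
    return best[1]

print(few_shot_classify("what a wonderful story"))    # positive
print(few_shot_classify("terrible and boring plot"))  # negative
```

Real few-shot systems use far richer representations than word overlap, but the shape of the problem is the same: generalise from very few labelled examples, which is exactly the constraint low-resource Indian languages impose.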

The push for innovation isn’t just happening in traditional computing environments. As the landscape of technology continues to evolve with innovations like blockchain, its applications extend beyond mere transactional integrity to foundational changes in how AI systems, especially language models, are developed.

Building Blocks

Nikhil Varma, technical lead (India) at Algorand Foundation, underscores the role of blockchain technology in the decentralised development of LLMs for Indian languages. The Singapore-based foundation advocates blockchain technology by leveraging the Algorand cryptocurrency protocol and open-source software. “Blockchain’s decentralised nature has the potential to revolutionise LLM development and management by creating more precise models for Indian languages. Multiple participants can contribute their data and computing power through a blockchain network for more efficient training. This allows capturing the nuances of various ‘bhashas’ better with inputs from language experts,” says Varma.

The advancements in AI technology not only improve computational efficiencies but also open avenues for data management strategies. One such innovative approach is data tokenisation, which offers a novel way for entities to monetise and share data securely. While India still has no single central repository of data for creating foundational LLMs or SLMs, blockchain can make Indian AI models more cost-effective by introducing efficiencies and enabling data tokenisation, especially for resource-hungry start-ups. This, in turn, could create opportunities for data owners to monetise assets and AI developers to access more affordable datasets. For instance, a college with specialised knowledge, such as unique research data, can tokenise this information; each dataset or piece of research could be converted into a digital token. Similarly, a regional music company in India with a vast library of unique and culturally rich recordings can tokenise its music data, with each song or album represented as a digital token on the blockchain. “In the context of AI models, data tokenisation can enable businesses and individuals to tokenise valuable datasets, ensuring their authenticity and providing a decentralised marketplace for AI developers to access and utilise these datasets at a lower cost,” says Varma.
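The tokenisation idea can be sketched in miniature. The code below is a simplified illustration of the concept only (the function names and token fields are invented, not Algorand’s API): a dataset is fingerprinted with a cryptographic hash and wrapped with ownership metadata, so a buyer can later verify that the data delivered matches the token being sold.

```python
import hashlib
import json

def tokenise_dataset(owner, name, data):
    """Create a token: a content fingerprint plus ownership metadata."""
    digest = hashlib.sha256(data).hexdigest()
    return {"owner": owner, "name": name, "sha256": digest}

def verify(token, data):
    """A buyer checks the delivered bytes against the token's hash."""
    return hashlib.sha256(data).hexdigest() == token["sha256"]

# A regional music label tokenises one recording's raw bytes
# (placeholder bytes stand in for the actual audio).
recording = b"raw audio bytes of a folk recording"
token = tokenise_dataset("Regional Music Co.", "folk-song-001", recording)
print(json.dumps(token, indent=2))

print(verify(token, recording))          # True: data matches the token
print(verify(token, b"tampered bytes"))  # False: mismatch detected
```

On an actual chain, the token (not the data itself) would be recorded on a ledger, giving the marketplace a tamper-evident record of who owns which dataset.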

However, for now, Bhashini, India’s AI-led language translation and database platform, has taken the lead by enabling features such as text-to-text translation in 22 languages, automated speech recognition, text-to-speech, and voice-based payments. In fact, Bhashini’s real-time translation works in 12 languages, and the live translation lag has improved from 6 seconds to 1-2 seconds.

Different Strokes

Even as the buzz around AI for Bharat seems omnipresent, for-profit start-ups are finding ways to stay ahead in the race. This entrepreneurial spirit is illustrated by Sarvam AI. Led by founders Vivek Raghavan and Pratyush Kumar, the start-up secured $41 million in a Series A funding round last December. This funding, led by Lightspeed, Peak XV Partners, and Khosla Ventures, represents the largest raise at this stage for an Indian AI start-up. Kumar, also a founder of the research initiative AI4Bharat, has a proven track record in developing state-of-the-art AI models for Indian languages, which have been applied in various public-good and commercial projects.

Harshjit Sethi, MD, Peak XV Partners, explains in a podcast, Intelligence Unscripted, that the founders’ focus on Indic language LLMs made a lot of sense. “Their approach to building a superior model isn’t about surpassing OpenAI or Anthropic by scale. Instead, it hinges on their collection of proprietary data, which better facilitates the training of models on Indian languages. They’re also keenly aware of the cost implications of AI applications on a per user per task basis. High inference costs wouldn’t be viable in India. Thus, they’re exploring how to enable the use of AI at the lowest possible cost. This consideration was key in our decision-making process,” Sethi tells host Bala Parthasarathy.

The OpenHathi initiative at Sarvam AI aims to enhance the ecosystem by providing open models and datasets, especially in Hindi, English and Hinglish. Given that Sarvam’s model would be three to four times cheaper for users compared with global models such as OpenAI’s GPT, it’s not surprising that bootstrapped AI start-ups such as KissanAI made the most of the initiative.

The agri AI start-up has introduced Dhenu 1.0, a 7-billion parameter LLM tailored for agriculture. Co-founder Pratik Desai acknowledged that to make his solution scalable and accessible to farmers, it needed to be affordable despite the high costs associated with running on GPUs. “Our aim was to develop a compact model using over 355,000 agricultural conversations we conducted with more than 100,000 farmers, alongside voice or text datasets gathered on our platform,” Desai tells Fortune India. Following the release of Sarvam AI’s Hindi LLM, OpenHathi, KissanAI began training and fine-tuning its datasets on OpenHathi. “The collaboration quadrupled our cost-efficiency and improved latency,” Desai says, adding that the model’s uniqueness lies in its bilingual nature, adeptly processing 300,000 instruction sets in both English and Hindi.

Interestingly, while the open-source platform has helped Desai in building out Dhenu, he is not too keen on opening up the backend of his AI on the public network. For instance, the Centre for Development of Advanced Computing (C-DAC) has implemented the AI Research Analytics and Knowledge Dissemination Platform (AIRAWAT), with 200 AI petaflops of capacity. While the 650 GPU-powered supercomputer has catapulted India to the 65th rank on the global AI supercomputing list, Desai is sceptical about a partnership. “If I have to open up my whole back end, it’s a risk as tomorrow what is the assurance that my proprietary data is safe?” says Desai.

With a 12-member team based out of India, Desai is working with Microsoft, which is helping the start-up through its Microsoft for Start-ups Founders Hub that provides free access to leading AI models through Azure, including OpenAI GPT-4, up to $150,000 in Azure credits, and one-on-one guidance from Microsoft experts. “Since AI needs iterations intermittently, you may end up running out of money very quickly as GPUs are very expensive too,” says Desai.

While the start-up is running its service free in India, it is closely working on pilot projects with two Fortune 500 companies in the agrochem space, including Corteva Agriscience. “We are also running some pilot projects and if that translates into customer engagements, that will be a big boost,” says Desai, whose start-up is currently clocking $600,000 in revenue largely through the pilot programmes. The global agri market, a $9-trillion opportunity, is much bigger; with about 10% of that going towards the customer market, it translates into a $900-billion opportunity. “We have enquiries from 40 companies, big and small. In the U.S., customer support is a huge cost and if we can get even 1% of that opportunity in the first year that’s a huge fillip to our revenues. Conversational commerce is a big opportunity that we are looking to capitalise on,” reveals Desai.

Back home, Ola founder Bhavish Aggarwal announced that Krutrim AI has raised $50 million at a valuation of $1 billion from investors led by venture capital fund Matrix Partners India, becoming India’s first AI unicorn. “We are fully committed towards building the country’s first complete AI computing stack,” Aggarwal said in a statement. The company did not respond to Fortune India’s request for an interaction. The Krutrim Pro model, which includes capabilities for vision, speech, and task execution, is expected to launch later this year. It will support generative tasks in 10 Indian languages and accept inputs in 22 languages. According to Aggarwal, it has been trained on over two trillion data tokens for Indian languages.

Even as start-ups look to create their own legacy with the help of accelerators, Sethi feels that because AI is now a mainstream belief — it is the next platform of the future — all companies are investing in it. Established corporations such as Tata and Reliance have significant financial resources, data access, and technical infrastructure that allow them to invest heavily in AI research and development, potentially outpacing start-ups in innovation and speed to market. With their extensive customer bases (e.g., Reliance’s 470 million customers) and established distribution networks, these conglomerates have an inherent advantage in deploying and scaling new technologies across India. “They’re (companies) competing just as strongly with start-ups, and, therefore, I think the opportunity is for companies that are building something new,” Sethi says in the podcast.

While these are still early days in the AI race, Deloitte’s Kaddi believes one of the major factors which will decide AI model pricing is the subsection in which the model operates. “While pricing will be competitive in basic usage such as crop price prediction in agriculture or content suggestion in education, pricing could be more aggressive in niche areas such as drone-enabled farming. We may also look at some AI models for agriculture and education as part of the digital infrastructure of the country which is made available as a public good for the country,” says Kaddi.

That being the case, in this high-stakes AI race, it will be thrilling to watch if the underdogs leapfrog ahead, or whether the titans pole vault to victory.
