
Every digital interaction we make today – whether a payment, a medical record, a service request, or a social media post – leaves behind a digital footprint. As Artificial Intelligence (AI) systems increasingly create ‘synthetic data’ by processing and correlating real-world data to predict and model behaviour, and to reshape citizens' perspectives and expectations, the nature of these footprints is also evolving.
It is estimated that 80% of the data used by AI systems will be synthetic, driven by concerns around privacy, the scarcity of high-quality real data, and regulatory compliance. Regulated sectors such as healthcare, finance, and insurance increasingly rely on synthetic data to work within privacy constraints and to capture edge scenarios (e.g., rare diseases, fraud patterns).
These use cases illustrate a shift from reliance solely on collected digital footprints toward model-generated data that preserves utility without compromising confidentiality. In India, this transformation is occurring at an unprecedented population scale, making the management of digital footprints not merely a technical concern but a matter of public trust and democratic accountability.
India’s AI journey is inseparable from its digital public infrastructure. Over the past decade, the country has built foundational digital rails – Aadhaar, mobile connectivity, cloud platforms, and interoperable APIs – that have enabled AI systems to operate across sectors and geographies. With nearly a billion internet users and some of the world’s largest digital platforms in identity, payments, and service delivery, India generates vast volumes of digital footprints daily.
Platforms such as DigiLocker, which enables secure access to authentic digital documents, and the Unified Payments Interface (UPI), which processes billions of transactions each month, demonstrate how digital footprints can be harnessed to deliver convenience, inclusion, and efficiency. AI systems layered on top of these platforms help detect threats, improve service responsiveness and personalise user experiences. Increasingly, AI is also a cyber security ally: models scan large volumes of logs and transactions to spot anomalies, flag suspicious patterns and detect emerging attack signatures.
At the same time, these footprints contain highly sensitive information: financial behaviour, health records, location data, and personal identifiers. The aggregation and analysis of such data through AI systems amplifies both value and risk. As national advisories have noted, AI design and deployment are vulnerable to threats such as data poisoning, adversarial inputs, model theft, prompt injection, and exploitation of hallucinated content; cyber security is therefore essential to ensure that AI systems remain trustworthy.
The challenge before policymakers is to ensure that digital footprints empower citizens by pairing innovation with robust cyber security and responsible use practices.
Across sectors, AI is already shaping how digital footprints are used in India. For instance, in healthcare, AI‑assisted diagnostics and decision‑support tools analyse patient records and imaging data to support early detection and improved outcomes.
Turning to urban governance, AI‑enabled systems for traffic management, flood forecasting, pollution monitoring and grievance redressal rely on continuous streams of digital footprints to enable real‑time decision‑making. Beyond providing operational support, AI can further improve the cyber security posture of such systems by detecting unusual behaviour or attempts to tamper with critical data.
AI applications are demonstrably improving quality of life, critical infrastructure security, and administrative efficiency. However, their responsible development and deployment are key to building trust in these applications and systems – a matter that necessitates guardrails and oversight.
With AI systems increasingly mediating the relationship between the state and the citizen, governance is emerging as a defining factor in determining outcomes. Digital footprints are not neutral artefacts; they reflect human behaviour and, if misused, can lead to profiling, discrimination or loss of autonomy. Recognising this, India has put in place a legal and institutional framework to govern the use of personal data.
The Digital Personal Data Protection Act, 2023 establishes clear principles around consent, purpose limitation, data minimisation, accountability and grievance redressal. These principles are particularly relevant in an AI‑driven ecosystem, where data collected for one purpose can easily be repurposed through algorithmic analysis.
They also complement emerging best practices for generative AI use, such as avoiding the sharing of sensitive information with public AI services and not over‑relying on AI outputs without verification.
First, transparency and explainability are essential. In public decision‑making, AI‑assisted outcomes must be contestable and defensible. Algorithmic systems cannot become opaque substitutes for administrative discretion; they must remain tools that augment, not replace, accountable governance.
Second, federated governance models matter. India’s approach, where national frameworks set standards for security, privacy and interoperability, while states and local governments retain flexibility in implementation, allows AI solutions to address local needs without compromising national safeguards.
Third, trust must be treated as infrastructure. Legal frameworks, institutional capacity, cyber security by design and citizen awareness are as critical as compute power or data volumes. Without trust, AI systems may achieve technical success but fail in societal acceptance.
Internationally, the growing recognition of Digital Public Infrastructure as a development accelerator, highlighted during India’s G20 presidency and by multilateral institutions, underscores the relevance of this model for Global South countries in Africa, Asia and Latin America.
As these countries scale up their own digital public platforms and AI use cases, questions of cyber resilience, responsible data use, and trust‑centred governance will be central to sustainable adoption.
As AI becomes more deeply embedded in daily life, digital footprints will continue to expand in scale and significance; the task before governments is to ensure that innovation advances alongside rights, cyber security alongside openness and efficiency alongside accountability.
These issues – protecting digital footprints at population scale, leveraging AI for cyber‑resilient systems and building trust‑centric digital public infrastructure for the Global South – will be at the heart of the AI Impact Summit 2026, to be held in New Delhi from 16–20 February, where policymakers, industry and civil society will come together to shape the next phase of this journey.
(The writer is Group Coordinator (Cyber Security) and Scientist 'G' in the Ministry of Electronics and Information Technology. Views expressed are personal.)