ADVERTISEMENT

The rising use of AI brings various risks and complexities to the digital landscape, as it becomes an attack surface and an attack vector. Adversaries now target models and data pipelines and also misuse AI to orchestrate more sophisticated attacks. According to the “2024 Deloitte-NASCIO Cybersecurity Study”, 71% of state government CISOs said that AI-enabled threats (including GenAI threats) were “very high” or “somewhat high” among the other cyberthreats.
At the same time, AI is also emerging as a powerful tool for cybersecurity, transforming the way organisations detect, defend, and respond to threats. According to the fourth edition of the Global Future of Cyber survey by Deloitte, 39% of the respondents, on average, reported using AI capabilities in their cybersecurity programmes to a large extent.
This dual dynamic nature of AI is increasingly shaping the future of digital trust, becoming both the critical shield for defence and a new attack surface that must now be secured.
AI can transform the operating model of the cyber landscape from a reactive posture to a proactive enabler of emerging technologies, keeping organisations secure. AI helps build intelligence into the architecture for adaptive learning and continuous improvement.
November 2025
The annual Fortune India special issue of India’s Best CEOs celebrates leaders who have transformed their businesses while navigating an uncertain environment, leading from the front.
AI can be used for cyberthreat detection/response capabilities to detect multi-dimensional threats. AI can spot patterns, understand user and system behaviour, and predict and respond to cyberattacks. Endpoint detection capabilities, through malware detection, are enhanced further via AI-powered endpoint detection and response (EDR) solutions. Anomalies and malicious emails are analysed quickly, helping organisations detect phishing campaigns and resolve issues quickly. Machine learning also allows AI to analyse user profiles and behaviours, helping to prevent spear phishing. AI adds intelligence to incident response workflows and enables automation of responses for swift and prompt reactions. It helps organisations manage network security across large, complex environments by enabling smarter decision-making and providing deeper behavioural insights into users and systems.
The advantages of using AI in cybersecurity revolve around its ability to learn from new data, handle a vast volume of data, self-improvement of the model through constructive feedback and cost reduction.
The AI landscape is constantly learning, adapting, and interacting with users, data, and systems. Complex emerging threats are affecting the AI ecosystem, such as input injection, in which inputs override controls or alter model behaviour.
Another type of cyberattack is training data poisoning, where tampered data in AI models introduces vulnerabilities, biases, and ethical issues. Other threats include model poisoning, model stealing and model inversion. These involve tampering with model behaviour, unauthorised copying and extracting sensitive training data from outputs. Supply chain vulnerabilities due to compromised third-party components further disrupt the functioning of the AI system.
The implementation of these evolving AI systems is critically important, as AI adoption is transforming organisational workflows.
Organisations can implement cybersecurity measures with strong governance, clear policies and compliance frameworks (ISO 42001 and NIST AI RMF). Transparent AI systems, regular risk and compliance assessments and a clear inventory of AI assets will strengthen the digital infrastructure.
Security-by-design principles, threat modelling and privacy safeguards are equally crucial. Real-time monitoring, red teaming, continuous anomaly detection and embedding legal and compliance checks into workflows, along with training and awareness programmes, are also essential to promote responsible AI use.
The role of AI in cybersecurity is changing fast, with AI now serving both as a strong defence tool and a new tactic used by attackers.
AI systems require a strong governance model that embeds security and responsibility into design, implementation and operations. Appropriate guidelines, continuous oversight and resilient architectures enable organisations to ensure the right use of their AI ecosystem, driving innovation and the delivery of services.
(The author is partner, Deloitte India. Views are personal.)