India is caught in the crosshairs of a mounting deepfake crisis, sparking widespread concern across sectors. From political arenas to entertainment, the surge in doctored media is blurring the line between reality and fiction, setting off alarm bells across the country's digital landscape.

This is just the beginning, and the problem is likely to unfold on a larger scale in 2024. According to the World Economic Forum, the amount of deepfake content online increased by 900% between 2019 and 2020. Forecasts suggest that this trend will continue in the years to come, with some researchers predicting that “as much as 90% of online content may be synthetically generated by 2026.”

In a series of alarming instances, notable figures in Indian politics and entertainment have fallen prey to the deceptive reach of deepfakes, underscoring just how vulnerable ordinary citizens are. Videos and images have been tampered with to distort speeches and actions, leading to misinformation and public uproar.

Remember, back in 2020, in the first-ever use of AI-generated deepfakes in an Indian political campaign, a series of videos of Bharatiya Janata Party (BJP) leader Manoj Tiwari went viral ahead of the Delhi elections, showing him hurling allegations against his political opponent Arvind Kejriwal in English and Haryanvi. In a similar incident, a doctored video of Madhya Pradesh Congress chief Kamal Nath recently went viral, creating confusion over the future of the state government’s Ladli Behna Scheme.

It's not just politics taking a hit; deepfakes have infiltrated the entertainment industry as well. Recently, a fake video of Telugu actor Rashmika Mandanna went viral, prompting IT Minister Ashwini Vaishnaw to say that the government will frame regulations to control the spread of deepfakes on social media platforms, terming them a “new threat to democracy”. Prime Minister Narendra Modi, too, has referred to them as the “new age sankat” (a new-age crisis).

According to the IT minister, the government will collaborate with key stakeholders to devise actionable plans against AI-generated deepfakes and misinformation, which could include financial penalties on creators as well as on social media platforms that enable the proliferation of such malicious content.

Deepfakes are not a new concept and have been around for more than a decade, explains Harshil Doshi, country manager (India & SAARC) at Securonix, a cyber-security firm. They are created by altering all kinds of media, including images, video, and audio, using technologies such as AI and machine learning, and often involve impersonating someone else to extract financial information.

He adds that it is very unlikely that social media companies do not already possess the technology to flag fabricated content; rather, they tend to let some of it slip through if it is benign and could generate viral views.

“The use of social media is ensuring that deepfakes can spread significantly more rapidly without any checks, and they are getting viral within a few minutes of their uploading. That’s why we need to take very urgent steps to strengthen trust in the society to protect our democracy,” Vaishnaw said.

On the other hand, Rohit Kumar, founding partner at The Quantum Hub (TQH), a public policy firm, believes that, based on interactions with industry stakeholders, including social media companies, there is cognisance of the problem and a desire to address it. “Unfortunately, it is a difficult problem to address,” he says.

Addressing the challenge of deepfakes while safeguarding freedom of expression requires a nuanced approach.

“Coupled with the problems and inaccuracies in detection and given the volume of content being uploaded to social media on a daily basis, placing the burden solely on social media platforms to detect and take corrective actions may not work either,” opines Kumar.

Social media behemoths such as Google, YouTube, and Meta have implemented stringent policies specifically designed to tackle the proliferation of deepfakes. For instance, in the coming months, YouTube will require creators to disclose realistic altered or synthetic content, including content made using AI tools, and viewers will be informed about such content through labels in the description panel and on the video player.

Moreover, Google says it is developing AI in a way that maximises the benefits to society while addressing the challenges. “We’re also building tools and guardrails to help prevent the misuse of technology, while enabling people to better evaluate online information. We have long-standing, robust policies, technology, and systems to identify and remove harmful content across our products and platforms. We are applying this same ethos and approach as we launch new products powered by Generative AI technology,” a Google spokesperson tells Fortune India.

Meta also has policies in place to remove misleading manipulated media if it meets certain criteria. However, its policy “does not extend to content that is parody or satire, or video that has been edited solely to omit or change the order of words.”

Social media companies have a crucial responsibility to wield technology wisely, as its misuse can lead to pitfalls such as deepfakes, fraud, and impersonation. The repercussions of deepfakes call for a multifaceted approach that integrates technological advancements, legal frameworks, public education, industry collaboration, and ethical considerations. Only through a concerted effort across these domains can India mitigate the risks and protect itself from the harmful effects of manipulated media.

“As AI continues to advance, it becomes imperative to manage its dual role by harnessing its capabilities to combat deepfake threats while mitigating the risks associated with potential misuse. Despite the numerous potential drawbacks, AI is anticipated to advance cybersecurity and aid organizations in establishing more robust security measures,” says Vishal Salvi, chief executive officer, Quick Heal Technologies Ltd.