Artificial Intelligence may be among the most transformative and beneficial inventions of recent times, but its misuse can cause real harm and tarnish reputations. The Rashmika Mandanna deepfake controversy brought to light the risks and consequences of modern technology that allows the creation of highly realistic but fraudulent content by manipulating images, videos, and audio recordings. Armed with a new regulation, India will now crack down on AI-generated deepfakes and misinformation.

The deepfake regulation may impose financial penalties on both the creators of such malicious content and the social media platforms that enable its proliferation. After chairing a meeting with social media platforms, artificial intelligence (AI) companies and industry bodies, Union Information Technology Minister Ashwini Vaishnaw said the government will come up with a “clear, actionable plan” to tackle deepfakes and misinformation in the next ten days.



Deepfakes refer to synthetic or doctored media that are digitally manipulated and altered, using a form of artificial intelligence (AI), to convincingly misrepresent or impersonate someone, typically to spread false information. Terming deepfakes a “new threat to democracy,” the minister said the government and other stakeholders will draw up actionable items on ways to detect deepfakes, prevent their uploading and viral sharing, and strengthen the reporting mechanism for such content.

This would give citizens recourse against AI-generated harmful content on the internet. “Deepfakes weaken trust in the society and its institutions,” the minister said. He added that the regulation could also include financial penalties: “When we do the regulation, we have to be looking at the penalty, both on the person who has uploaded or created as well as the platform.”

Vaishnaw met with representatives from the technology industry, including Meta, Google and Amazon, on Thursday, November 23, 2023, for their inputs on handling deepfake content. “The use of social media is ensuring that deepfakes can spread significantly more rapidly without any checks, and they are going viral within a few minutes of their uploading. That’s why we need to take very urgent steps to strengthen trust in the society to protect our democracy,” he said.


The New Regulation

Vaishnaw insisted that social media platforms need to be more proactive, given that the damage caused by deepfake content can be immediate. “We may regulate this space through a new standalone law, or amendments to existing rules, or a new set of rules under existing laws. The next meeting is set for the first week of December, which is when we will discuss a draft regulation of deepfakes, following which it will be opened for public consultation,” Vaishnaw said.

The minister added that the ‘safe harbour immunity’ platforms enjoy under the Information Technology (IT) Act will not apply unless they move swiftly to take firm action. Other aspects discussed during Thursday’s meeting included the issue of AI bias and discrimination, and how existing reporting mechanisms can be improved.

The Immediate Need For Legal Measures

The government had issued notices last week to social media platforms following reports of deepfake content. Concerns around deepfake videos have escalated after multiple high-profile public figures, including Prime Minister Narendra Modi and actor Katrina Kaif, were targeted. The Prime Minister raised this issue of deepfakes in his address to the Leaders of G20 at the virtual summit on Wednesday, November 22, 2023.

A Google spokesperson who was a part of the consultation said the company was “building tools and guardrails to help prevent the misuse of technology while enabling people to better evaluate online information.” The search giant said in a statement, “We have long-standing, robust policies, technology, and systems to identify and remove harmful content across our products and platforms. We are applying this same ethos and approach as we launch new products powered by generative AI.”

Identifying Harmful Deepfakes

A senior industry official familiar with the developments said most companies have taken a “pro-regulation stance.” Ashish Aggarwal, vice-president of public policy at software industry body Nasscom, said that while India already has laws to penalize perpetrators of impersonation, the key will be to strengthen the regulations on identifying those who create deepfakes.

“The technology today can help identify synthetic content. However, the challenge is to separate harmful synthetic content from harmless content, and to remove the former quickly. One tool being widely considered is watermarks or labels embedded in all content that is digitally altered or created, to warn users about synthetic content and its associated risks, alongside stronger tools that empower users to report such content quickly.”
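As a rough illustration of the labelling approach described above, the sketch below attaches a tamper-evident “synthetic content” label to a media record using an HMAC signature, so that stripping or altering the label is detectable. All names here (the signing key, the record fields, the function names) are hypothetical illustrations, not drawn from any platform’s actual system or from any standard such as C2PA.

```python
import hashlib
import hmac
import json

# Hypothetical signing key held by the platform (illustrative only).
SECRET_KEY = b"platform-signing-key"


def label_synthetic(media_id: str, tool: str) -> dict:
    """Attach a tamper-evident 'synthetic content' label to a media record."""
    label = {"media_id": media_id, "synthetic": True, "tool": tool}
    payload = json.dumps(label, sort_keys=True).encode()
    label["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return label


def verify_label(label: dict) -> bool:
    """Check that the label has not been stripped or altered since signing."""
    claimed = label.get("signature", "")
    payload = json.dumps(
        {k: v for k, v in label.items() if k != "signature"}, sort_keys=True
    ).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)
```

A real deployment would embed such provenance data inside the media file itself (for example, in metadata or an imperceptible watermark) rather than in a detached record, but the signing-and-verification idea is the same.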



Global firms with larger budgets and English-heavy content may find compliance easier. The harder question is whether platforms hosting larger volumes of non-English content can meet the challenge of filtering deepfakes and misinformation; this will also be crucial to how such platforms handle electoral information. Rohit Kumar, founding partner at policy think tank The Quantum Hub, added that regulation of deepfake content “should be cognizant of the costs of compliance.”

“If the volume of complaints is high, reviewing takedown requests in a short period of time can be very expensive. Therefore, even while prescribing obligations, an attempt should be made to undertake a graded approach to minimise compliance burden on platforms… ‘virality’ thresholds could be defined, and platforms could be asked to prioritise review and takedown of content that starts going viral,” Kumar said. He added that safe harbour protection should not be diluted entirely, as “the liability for harm resulting from a deepfake should lie with the person who creates the video and posts it, and not the platform.”
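The graded, virality-threshold approach Kumar describes could be sketched as a simple priority queue, where complaints about content crossing a shares-per-hour threshold jump ahead of routine complaints in the review queue. The threshold value, field names, and function names below are illustrative assumptions, not taken from any actual regulation or platform.

```python
import heapq
from dataclasses import dataclass, field

# Hypothetical cutoff: content spreading faster than this is treated as viral.
VIRALITY_THRESHOLD = 10_000  # shares per hour


@dataclass(order=True)
class Complaint:
    priority: int  # 0 = viral (review first), 1 = routine
    content_id: str = field(compare=False)
    shares_per_hour: int = field(compare=False)


def enqueue(queue: list, content_id: str, shares_per_hour: int) -> None:
    """Add a complaint, prioritising content that has crossed the threshold."""
    priority = 0 if shares_per_hour >= VIRALITY_THRESHOLD else 1
    heapq.heappush(queue, Complaint(priority, content_id, shares_per_hour))


queue: list = []
enqueue(queue, "post-a", 500)      # routine complaint
enqueue(queue, "post-b", 25_000)   # viral: should be reviewed first
enqueue(queue, "post-c", 1_200)    # routine complaint
first = heapq.heappop(queue)       # the viral post surfaces ahead of the rest
```

This kind of triage is what makes a graded approach cheaper than reviewing every complaint within a uniform deadline: scarce reviewer time goes to the content doing the most immediate damage.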

These legal measures are certainly the need of the hour. We must ensure that the online space is safe for everyone and that the harmful misinformation spread through deepfakes is curbed as quickly as possible.