Just 3 months, dear Government?

By Dhruv Suri and Aastha Mathur on May 27, 2021

On February 25, 2021, the Ministry of Electronics and Information Technology (“MEITY”) notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021[1] (“Rules”) to provide a mechanism for the redressal and speedy resolution of complaints by social media users, as well as to lay down a self-regulatory framework for digital media platforms. While much has been written about these Rules, they were back in the news recently because the 3-month timeline[2] given to “significant social media intermediaries” (“SSMIs”)[3] to comply with them lapsed on May 25, 2021, and there were rumors that companies like Facebook, Twitter, etc. may get “banned”.[4]

At this stage it is important to note that there is no ban. Non-compliance with the 3-month timeline implies that, pursuant to Rule 7, the SSMIs may no longer be deemed “intermediaries” and may lose the benefit of the safe harbor provision under section 79(1) of the Information Technology Act, 2000 (“IT Act”). In other words, content uploaded on their platforms could be deemed to be uploaded by them, i.e. they could be treated as publishers of that content. Consequently, officials of these SSMIs could be prosecuted and punished in accordance with the IT Act read with the Indian Penal Code, 1860.

3-Month Targets

Moving on to the subject at hand. There are several obligations with which SSMIs were expected to comply by May 25, 2021. Some of the key ones are listed below:

(i)        Rule 4(a): SSMIs have to appoint a “Chief Compliance Officer” (“CCO”) who shall be responsible for compliance with the IT Act and these Rules. The rule further states that the CCO shall be “liable” if the intermediary fails to observe due diligence while discharging its duties under the IT Act and its rules. This makes the role of a CCO a very important one and perhaps not a position that an SSMI can be expected to fill in 3 months. In fact, such senior officials typically have a 3-month notice period to serve, so expecting such a person to take charge of so critical a role within the timeline is quite unreasonable. This is quite apart from the fact that the CCO position is so risky that SSMIs may have to compensate an individual significantly for taking on this responsibility.

(ii)       Rule 4(b): SSMIs also have to appoint a nodal contact person “for 24 X 7 coordination with law enforcement agencies and officers to ensure compliance to their orders or requisitions”. This nodal officer cannot be the CCO and has to be an Indian resident. Once again, this is a very important position to be undertaken by one individual, who may be called by law enforcement officials in the middle of the night for support. In other words, this individual has to be available round the clock. Like the CCO, the nodal contact person also has to be appointed after thorough background checks and diligence. Therefore, it is unreasonable to expect such a role to be filled within 3 months. Also, why not have multiple nodal officers working in shifts instead of requiring one person to be available 24 X 7?

(iii)     Rule 4(c) read with Rule 3(2) and Rule 4(6): Rule 3(2) requires intermediaries to appoint a Grievance Officer (“GO”). For SSMIs, however, the GO also has to be an Indian resident (“RGO”). The RGO shall be responsible for addressing complaints from a “user” or a “victim”. This distinction is important because a “user” could be anybody and may have no nexus with the content in question. Once a complaint is received, the RGO has to acknowledge it within 24 hours and dispose of it within 15 days. Any SSMI will vouch that these timelines are extremely aggressive; considering the number of “users” who raise complaints about social media posts/tweets every day, it would not be humanly or technologically possible for SSMIs to dispose of such complaints within 15 days. The capital and human resource investment required for this is mammoth.

Further, if a complaint is received from an aggrieved individual, or from any person on behalf of such individual, pertaining to content which is prima facie in the nature of any “material which exposes the private area of such individual, shows such individual in full or partial nudity or shows or depicts such individual in any sexual act or conduct, or is in the nature of impersonation in an electronic form, including artificially morphed images of such individual”, the RGO shall ensure that such content is removed or disabled within 24 hours. A big problem here is that just the assessment of whether the complaint prima facie triggers this rule is itself a contentious subject. For instance, does the individual’s face have to be part of the image/post for a prima facie case to be made out? What is partial nudity? What if the individual is a model who consented to a photoshoot, and the photographer uploaded the picture? Further, processing such complaints is likely to require human intervention and, therefore, the verification process to assess whether the complaint is “prima facie” correct would, more often than not, take more than 24 hours. Additionally, Rule 4(6) requires SSMIs to enable the complainant (who could be any user of the platform) to track the complaint by providing a unique ticket number. Even setting these aggressive timelines aside, 3 months is certainly not enough to hire a qualified RGO as well as set up the technical infrastructure and processes to ensure compliance with the relevant rules.

(iv)      Rule 4(2): This is by far the most contentious obligation imposed on SSMIs that primarily provide messaging services (for example, WhatsApp). It requires such SSMIs to “enable the identification of the first originator of the information on its computer resource…”. While a request for details of the first originator can be made only under limited circumstances, compliance with this rule requires a significant technological overhaul. For starters, much like an Amazon Alexa has to hear every word spoken before it to know when it is called, SSMIs like WhatsApp would have to link a unique identifier with every message that originates on their platform in order to trace the creator. Imagine setting up the technology to do this for the billions of messages that WhatsApp handles every day; 3 months is certainly not enough. Additionally, all WhatsApp messages, which have historically been end-to-end encrypted, would suddenly be identifiable and traceable to their creators. This raises the question of proportionality, i.e., for the fraction of messages that may have to be made available to the government, should all messages have to be tagged at the risk of compromising privacy and anonymity?
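To make the scale of the overhaul concrete, one commonly discussed approach to traceability is to attach a deterministic fingerprint to every message at the point of origin. The sketch below is purely illustrative (the function and field names are hypothetical, not WhatsApp’s or any proposed design): it shows that even in the simplest form, every single message sent on the platform must be processed and its tag stored, which is precisely why the obligation touches all users rather than only the few messages the government may later seek.

```python
import hashlib

def originator_tag(sender_id: str, message: str) -> str:
    """Illustrative only: a deterministic fingerprint tying a message
    to its first sender. Every message on the platform would need one."""
    return hashlib.sha256(f"{sender_id}:{message}".encode("utf-8")).hexdigest()

# The same message forwarded by other users keeps the original tag,
# so the platform can later answer "who sent this first?" --
# but only by computing and retaining a tag for *every* message.
tag = originator_tag("user-42", "hello world")
print(tag[:16])  # first 16 hex chars of the fingerprint
```

The proportionality concern in the paragraph above follows directly: the tag must exist for billions of messages a day so that a handful can ever be traced.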

(v)       Rule 4(4): This is another contentious rule, under which SSMIs have to “endeavor to deploy technology-based measures, including automated tools or other mechanisms to proactively identify information that depicts any act or simulation in any form depicting rape, child sexual abuse or conduct, whether explicit or implicit, or any information which is exactly identical in content to information that has previously been removed or access to which has been disabled…..”. Simply put, SSMIs ought to endeavor to develop and deploy AI tools that can automatically identify and delete/disable certain types of content. While this rule also states that the measures taken to deploy such tools should be proportionate, having regard to freedom of speech and expression, privacy, etc., it fails to realistically account for how AI operates and how much time companies need to deploy such tools. AI tools can be inherently biased depending on the data on which they were developed. Further, there could be many false positives and false negatives that could, arguably, expose SSMIs to liability for non-compliance with the Rules. For example, a tool may treat a picture of a naked child in the lap of a parent as child sexual abuse and incorrectly flag it. The short point is that since the development of such tools takes time (and certainly more than 3 months), non-compliance with such rules should not result in taking away the safe harbor benefit for intermediaries.
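It is worth separating the two limbs of Rule 4(4). The “exactly identical in content” limb is the technically easy part: a platform can keep fingerprints of removed content and block byte-for-byte re-uploads. The minimal sketch below (all names hypothetical, not any SSMI’s actual system) shows why that mechanism is cheap and also why it is insufficient, since even a trivially edited copy evades it; catching near-duplicates and novel content is what requires the AI tools, with the false positives and negatives discussed above.

```python
import hashlib

# Hypothetical store of fingerprints of previously removed content.
removed_fingerprints: set = set()

def fingerprint(content: bytes) -> str:
    """Exact-match fingerprint: identical bytes give an identical hash,
    but any edit, resize, or re-encode produces a different one."""
    return hashlib.sha256(content).hexdigest()

def take_down(content: bytes) -> None:
    """Record removed content so exact re-uploads can be blocked."""
    removed_fingerprints.add(fingerprint(content))

def is_reupload(content: bytes) -> bool:
    """True only for a byte-for-byte copy of removed content."""
    return fingerprint(content) in removed_fingerprints

take_down(b"offending image bytes")
print(is_reupload(b"offending image bytes"))   # exact copy is caught: True
print(is_reupload(b"slightly edited bytes"))   # near-duplicate is missed: False
```

The gap between this exact matching and the rule’s much broader “depicts any act or simulation… whether explicit or implicit” language is exactly where the timeline problem bites.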


There is little doubt that expecting SSMIs to comply with the aforementioned obligations within 3 months was extremely unreasonable and, arguably, a way for the government to arm-twist them into censoring content on the pretext of granting immunity for non-compliance. There are multiple other legal infirmities and red flags in the Rules which we have not covered in this post. For instance, all intermediaries have to provide information in their control or possession to law enforcement officers within 72 hours of the receipt of an order.[5] This gives them little or no time to exercise their legal remedies if such a government order compromises privacy or unreasonably interferes with the right to freedom of speech and expression.

In fact, the very origin of these Rules has been challenged, since they were made under sections 87(1)[6], 87(2)(z) and 87(2)(zg) of the IT Act. As per section 87(2)(z), the government can frame rules to prescribe the “procedure and safeguards for blocking public access” under section 69A(2)[7] of the IT Act. Similarly, section 87(2)(zg) allows the government to frame “guidelines to be observed by the intermediaries” under section 79(2)[8] of the IT Act. However, these Rules aim to regulate digital media which, based on the definition in Rule 2(i), includes a “publisher of news and current affairs content” (like any digital news company) or a publisher of online curated content (like OTT platforms). Moreover, a recent amendment[9] to the Allocation of Business Rules, 1961 has placed “digital media” under the Ministry of Information and Broadcasting (“MIB”). The MIB should formulate a law in relation to digital media and then frame rules accordingly; MEITY should not indirectly attempt to regulate digital media by merging it with intermediary obligations under the IT Act.

Rightfully, multiple petitions have been filed by various entities challenging the constitutionality of the Rules altogether as well as certain specific rules (like Rule 4(2)).[10] The Delhi High Court[11] has sought the Centre’s response on a plea by the Foundation for Independent Journalism, which argues that the new Rules are palpably illegal in seeking to control and regulate digital news media when the parent statute, the IT Act, nowhere provides for such a remit. The Kerala High Court[12] is also looking into this matter and has prevented coercive action against the petitioner.

Notwithstanding the above, what is clear is that the next 3-4 months will be full of action in this space.

[1] Available at http://www.egazette.nic.in/WriteReadData/2021/225497.pdf

[2] Rule 4 of the Rules

[3] Any intermediary with more than 5,000,000 (50 lakh) registered users in India

[4] See: http://tehelka.com/will-india-ban-facebook-twitter-and-instagram-from-tomorrow/ and https://www.thequint.com/cyber/policy/twitter-facebook-whatsapp-instagram-getting-banned-in-india-all-you-need-to-know [accessed on May 27, 2021]


[6] As per this section, the central government can make rules to carry out the provisions of the IT Act

[7] This section pertains to the manner in which any information can be blocked for public access

[8] This section lays down the conditions under which an intermediary will be granted a safe harbor for any third-party information, data, or communication link made available or hosted by it

[9] Available at https://egazette.nic.in/WriteReadData/2020/223032.pdf [accessed on May 27, 2021]

[10] Chaitanya Rohilla v. Union of India and Ors., W.P.(C) 677/2021

[11] Foundation for Independent Journalism and Ors. v. Union of India and Anr., WP(C) 3125 of 2021

[12] Live Law Media and Ors. v. Union of India and Anr., WP(C) 6272 of 2021





The Bar Council of India restricts advocates from maintaining a website as a source of advertising. This site contains general information for informative purposes only. The reader should not consider / construe information on this site to be an invitation for any attorney-client relationship.