As per WhatsApp India’s ‘User Safety Monthly Report’, published under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, WhatsApp messenger banned 1.4 million Indian accounts in February 2022. The accounts were banned on the basis of complaints received from users through its grievance redressal mechanism and the company’s own systems for preventing and detecting violations of the law.
The company said it received 194 ban appeals through its grievance redressal mechanism, of which 19 resulted in ‘accounts actioned’, meaning the company took remedial action based on the report. This denotes either an account being banned or a previously banned account being restored as a result of the complaint. In addition to responding to and acting on user complaints through the grievance channel, WhatsApp also deploys tools and resources to prevent harmful behaviour on the platform.
Earlier, in January 2022, WhatsApp had banned over 1.8 million Indian accounts. The company asserted that it has invested in artificial intelligence (AI) and other state-of-the-art technology, in data scientists and experts, and in processes, in order to keep users safe on the platform.
Further, in compliance with the IT Rules, 2021, Google India removed 93,067 pieces of bad content based on user complaints in February 2022. This figure marks a decrease from the 104,285 pieces of bad content removed in January 2022.
Separately, as per the monthly report released by Google, 30,065 complaints were received from users in India in February 2022 (down from 33,995 in January 2022). These complaints related to third-party content on various Google platforms that was believed to violate local laws or personal rights. The company mentioned that some requests alleged infringement of intellectual property rights, while others claimed violation of local laws prohibiting certain types of content on grounds such as defamation.
The company also mentioned that the 93,067 pieces of content were removed under various categories, including copyright, trademark, court order, graphic sexual content, and circumvention. Google said that it additionally removed 338,938 pieces of content through automated detection in February 2022, as against 401,374 pieces removed through automated detection in January 2022.
In addition to publishing these reports, the company asserts that it invests heavily in fighting harmful content online and uses technology to detect and remove such content from its platforms. Under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (IT Rules), all significant social media intermediaries are mandated to publish monthly transparency reports detailing the complaints received from users in India and the actions taken, as well as removal actions taken as a result of automated detection.