India recently adjusted its approach to regulating artificial intelligence (AI), stepping back from a proposal that would have required tech firms to obtain government approval before launching or deploying AI models. The decision follows sharp criticism of the initial advisory, issued on March 1, which drew opposition from both local and international entrepreneurs and investors. In response to the backlash, the Ministry of Electronics and IT revised the advisory: rather than requiring prior government approval, it now recommends that firms label under-tested or unreliable AI models to alert users to potential inaccuracies.
The initial advisory marked a notable departure from India's relatively hands-off stance on AI regulation, articulated less than a year earlier, when the country had opted against regulating AI growth in recognition of the sector's importance to its strategic interests. The March 1 advisory signaled a shift toward more stringent oversight, although it was not legally binding. It stressed that AI models must not facilitate the sharing of unlawful content, bias, discrimination, or interference with electoral processes. It also advised the use of "consent popups" to warn users about the potential unreliability of AI-generated output, and emphasized that deepfakes and misinformation should be easily identifiable.
This regulatory adjustment reflects a broader global trend of countries racing to establish rules for the rapidly evolving AI field. India's initial move to tighten regulation, particularly of social media companies, aligned with its efforts to address the challenges posed by generative AI and its potential societal impacts, including the integrity of electoral processes ahead of general elections.
The revised advisory strikes a balance between fostering innovation in the AI sector and addressing the risks of deploying under-tested or unreliable AI technologies. By recommending rather than mandating the labeling of such models, the Indian government appears to be seeking a middle ground: promoting responsible AI development while maintaining a regulatory environment conducive to technological progress.