The Government of India is committed to promoting responsible AI development and secure digital platforms while safeguarding children and vulnerable users from emerging online risks. While recognizing the benefits of digital technologies for education and access to information, the Government also addresses concerns such as harmful content, cyberbullying, screen addiction, and digital dependency.
Under the Information Technology Act, 2000, and related rules, including the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (as amended), and the Digital Personal Data Protection Act, 2023, strict safeguards are in place for online safety, data privacy, and intermediary accountability. Specific provisions target cybercrime, identity theft, privacy violations, child sexual abuse material, and other unlawful content, with mechanisms for rapid action and content removal.
Recent amendments (February 2026) strengthen the obligations of social media platforms to manage AI-generated content, requiring labelling, traceability, and safeguards against obscene, misleading, or otherwise harmful synthetic media, including deepfakes. Intermediaries must act swiftly, removing unlawful content within three hours and providing efficient grievance redressal mechanisms.
In education, digital safety is reinforced through the PRAGYATA Guidelines, cyber safety modules in the NCERT curriculum, and capacity-building initiatives for teachers. AI is being integrated across K–12 curricula under NEP 2020 and the National Curriculum Framework 2023, with age-appropriate learning, teacher training, and Centres of Excellence established in collaboration with academic and industry partners.
These measures reflect India’s holistic approach to fostering AI innovation and digital growth while protecting children and other vulnerable users within a safe and accountable online ecosystem.
This information was shared by Minister of State for Women and Child Development, Savitri Thakur, in the Rajya Sabha on 25 March 2026.