The Telecom Regulatory Authority of India (TRAI) on July 20 recommended a structure for regulating artificial intelligence (AI) through the lens of a risk-based framework. The regulatory body also proposed that the Ministry of Electronics and Information Technology (MeitY) be designated as the administrative ministry for AI.
This comes amid a global call for regulating AI and generative AI, and as MeitY is expected to bring in similar norms in the upcoming Digital India Bill, which will supersede the over 20-year-old Information Technology Act.
TRAI’s recommendations on establishing an AI framework are part of its larger recommendations on ‘Leveraging Artificial Intelligence and Big Data in Telecommunication Sector’.
The proposed regulatory structure
The regulatory framework suggests setting up an independent statutory authority called the Artificial Intelligence and Data Authority of India (AIDAI).
The AIDAI will be responsible for framing regulations on various aspects of AI, including its responsible use. This involves defining principles of responsible AI and their applicability to AI use cases based on risk assessment, and evolving a framework drawing on that assessment, global best practices and public consultation, TRAI said.
Secondly, TRAI recommended the establishment of a multi-stakeholder body (MSB) that will act as an advisory body to the AIDAI. The MSB should have members from different ministries/departments, industry, legal experts, cyber experts, academia and research institutes, TRAI said.
Thirdly, TRAI’s proposed framework recommends the categorisation of AI use cases based on their risks and regulating them according to principles of responsible AI.
“The regulatory framework should ensure that specific AI use cases are regulated on a risk-based framework where high-risk use cases that directly impact humans are regulated through legally binding obligations,” said TRAI.
Difference between TRAI and MeitY’s plans
Earlier, Minister of State for Electronics and Information Technology Rajeev Chandrasekhar had said that the government will bring in regulations on AI through the prism of user harm.
“Our approach towards AI or any regulation is that we will regulate it through the prism of user harm. This is a new philosophy, which started since 2014 that we will protect digital nagriks. We will not allow platforms harming digital nagriks. If they operate here, then they will mitigate user harm,” Chandrasekhar said.
This differs from what TRAI is suggesting: regulation through the prism of use cases.
In its 141-page recommendation document, TRAI said, “It is essential to identify different use cases based on potential risks such as AI systems related to law enforcement, education… Such AI systems can be categorised under the category of high risk.
“If it is a high-risk category AI system, we need to make sure it complies with mandatory compliance requirements before its deployment. Such systems cannot be allowed to be deployed without being fully sure that they are safe and ethical,” it said.

Moneycontrol