The legislative process of the EU AI Act is moving forward steadily. The Act, when passed, will have a massive effect on all of us and how we interact with technology. In this post, we briefly explain what the law seeks to achieve and its effect on India.
How could this affect India?
The “Brussels Effect” refers to the European Union’s unilateral power to regulate global markets: multinational companies and other relevant actors around the world adapt to standards laid down by the EU in order to access the extremely important EU market. To reduce compliance burdens, companies then adopt these standards globally, which essentially makes EU standards the global standard. Examples of the Brussels Effect include EU competition and antitrust law, EU regulations for consumer health and safety in the chemical industry, and the 2003 Restriction of Hazardous Substances Directive. The Brussels Effect can be seen quite clearly in how the General Data Protection Regulation (GDPR) has become a global standard for data protection requirements. Similarly, the EU AI Act could become the global standard that multinational companies comply with to gain access to the EU market.
Presently, surveillance systems in India are being rolled out in a legal vacuum. Our Project Panoptic is currently tracking 126 facial recognition systems across the country that have been deployed by government agencies at the Union and State levels. However, there are minimal regulatory checks in place to ensure that these systems do not result in intrusive and discriminatory surveillance practices. While India has always been independent with regard to its legislative endeavours, it has a history of seeking inspiration from international regulations while formulating domestic laws. Certain principles of the GDPR have inspired provisions in the latest draft of India’s data protection legislation.
What is the EU AI Act?
The European Union Artificial Intelligence Act (EU AI Act) was first introduced in 2021, following a White Paper on Artificial Intelligence (AI) that proposed to set up a regulatory framework for trustworthy AI. The Act proposed to classify AI into four categories according to the risk it poses to the health and safety or fundamental rights of a person, with the applicable regulatory burden determined by the category:
A. Unacceptable risk AI: Uses such as social scoring by the government (think China), as well as systems that deploy subliminal or purposefully manipulative techniques or exploit people’s vulnerabilities, have been identified as being against EU values and will be banned due to the unacceptable risk they entail.
B. High risk AI: Uses identified as high risk include:
- biometric identification and categorisation of natural persons (including AI systems intended to be used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons);
- management and operation of critical infrastructure (AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity);
- employment, workers management and access to self-employment (AI systems intended to be used for recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, and evaluating candidates in the course of interviews or tests); and
- law enforcement (AI systems intended to be used by law enforcement authorities for making individual risk assessments of natural persons in order to assess the risk of a person offending or reoffending, or the risk for potential victims of criminal offences).

See the full list here at page 5. These high risk systems can have severe adverse effects on individuals. In order to ensure trust and a consistently high level of protection of safety and fundamental rights, a range of mandatory requirements (including a conformity assessment) would apply to all high-risk systems. These requirements include:
- having a risk management system,
- training, validation and testing data sets being subject to appropriate data governance and management practices,
- technical documentation,
- transparency and provision of information to users,
- human oversight,
- accuracy, robustness and cybersecurity,
- quality management system, and
- conformity assessment.
C. Limited risk AI: Uses such as chatbots will have specific transparency obligations to ensure that users can effectively manage their interactions with the AI systems.
D. Minimal risk AI: Uses such as AI-enabled video games or spam filters can be utilised freely.
What happened last week?
A new draft of the EU AI Act was approved by the Internal Market Committee and the Civil Liberties Committee, composed of Members of the European Parliament (MEPs). The most significant amendment, which has been overwhelmingly welcomed by civil society, is an expansion of the list of “intrusive” and “discriminatory” AI uses that will be prohibited. The list has been expanded to include uses such as:
- “Real-time” remote biometric identification systems in publicly accessible spaces;
- “Post” remote biometric identification systems, with the only exception of law enforcement for the prosecution of serious crimes and only after judicial authorization;
- Biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation);
- Predictive policing systems (based on profiling, location or past criminal behaviour);
- Emotion recognition systems in law enforcement, border management, workplace, and educational institutions; and
- Indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases (violating human rights and right to privacy).
Further, the draft also aims to regulate generative AI models like ChatGPT, which would be required to comply with higher transparency requirements such as disclosing that content has been generated by AI, publishing summaries of copyrighted data used for training purposes, and assessing the environmental impact of training these energy-intensive systems.
The path forward
However, the draft still has a long way to go before it becomes law. The next step after approval by the EU parliamentary committees is a plenary vote of all MEPs to finalise the European Parliament’s position on the draft. The draft will then be discussed in trilogues, an “informal inter-institutional negotiation bringing together representatives of the European Parliament, the Council of the European Union and the European Commission”. The draft could tentatively be formalised into law within the next year, after which a grace period of two years will be afforded to the affected parties to comply with the regulations.
As India also seeks to regulate AI through its #AIforAll initiative, it will certainly be looking to learn from its European counterparts. Interestingly, India and the EU recently held the first ministerial meeting of the Trade and Technology Council, which aims to deepen their strategic partnership on trade and technology. In this meeting, both India and the EU specifically committed to cooperating on trustworthy Artificial Intelligence. It remains to be seen whether the EU AI Act will influence the agencies deploying these systems in India to reevaluate their use and put in place standards that follow a more rights-respecting approach akin to the EU’s.