Elena Vassileva: Proposed EU framework on artificial intelligence – everything you need to know

Elena Vassileva, senior associate at Ronan Daly Jermyn, examines new EU proposals for the regulation of artificial intelligence.

Our world is increasingly technology-centric, offering unlimited opportunities with just the click of a button. As a result, artificial intelligence (AI), which aims to create technology with human-like problem-solving and decision-making capabilities, is becoming part of our daily lives, be it through voice-controlled personal assistants, smart cars, or automated investing platforms. The use of AI gives rise to an array of societal, economic, and legal issues, prompting the European Commission to include AI in its digital strategy and to propose the first legal framework on AI, in an attempt to encourage the uptake of AI while addressing the risks associated with its uses.

In April 2021 the EU Commission published its draft proposal for a Regulation laying down harmonised rules on Artificial Intelligence. The objectives of the Proposal are to:

  1. facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation;
  2. guarantee legal certainty to facilitate investment and innovation in AI;
  3. enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems; and
  4. safeguard and respect existing law on fundamental rights and European values.

The Proposal defines AI systems as software that is developed with machine learning, logic- and knowledge-based approaches, and/or statistical approaches and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with. The Proposal puts forward rules for the development, placement on the market and use of AI systems, following a human-centric and risk-based approach. Risk is measured in relation to the purpose of the AI system, in a manner similar to product safety legislation.

Scope

The scope of application of the Proposal is broad and will have extraterritorial effect. The Proposal is intended to apply to (i) providers placing on the market or putting into service AI systems in the EU, irrespective of whether they are established within the EU or in a third country; (ii) users of AI systems located within the EU; and (iii) providers and users of AI systems located outside of the EU, where the output produced by the system is used in the EU.

AI systems used exclusively for military purposes are expressly excluded from the scope of the Proposal together with those used by public authorities or international organisations in a third country to the extent such use is in the framework of international agreements for law enforcement and judicial cooperation with the EU or individual Member States. 

Prohibited artificial intelligence practices

The Proposal lists several AI system types which are considered to create an unacceptable level of risk and are prohibited. These include:

  1. AI systems that deploy subliminal techniques beyond a person’s consciousness to materially distort a person’s behaviour in a manner that causes or is likely to cause physical or psychological harm;

  2. AI systems that exploit any of the vulnerabilities of a specific group of persons in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause physical or psychological harm;

  3. AI systems placed on the market or put into service by public authorities or on their behalf for the evaluation or classification of the trustworthiness of natural persons over a certain period of time based on their social behaviour or known or predicted personal or personality characteristics, with the social score leading to detrimental or unfavourable treatment of persons either in social contexts which are unrelated to the contexts in which the data was originally generated or collected and/or disproportionate to their social behaviour or its gravity;

  4. the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless and insofar as such use is strictly necessary for reasons such as the targeted search for specific potential victims of crime, the prevention of a specific, substantial and imminent threat of a terrorist attack, or the detection, identification and prosecution of a perpetrator or suspect of a criminal offence in relation to whom a European arrest warrant is in place. 

High-risk systems

AI systems identified as high-risk are only permitted subject to compliance with mandatory requirements and a conformity assessment. High-risk AI systems include:

  1. AI systems intended to be used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons;

  2. AI systems intended to be used for the purpose of determining access or assigning natural persons to educational and vocational training institutions;

  3. AI systems intended to be used for recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, evaluating candidates in the course of interviews or tests;

  4. AI systems intended to be used by public authorities or on behalf of public authorities to evaluate the eligibility of natural persons for public assistance benefits and services, as well as to grant, reduce, revoke, or reclaim such benefits and services;

  5. AI systems intended to be used by law enforcement authorities as polygraphs and similar tools or to detect the emotional state of a natural person;

  6. AI systems intended to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts.

High-risk AI systems will give rise to obligations relating to data quality and governance, documentation and record keeping, conformity assessment, transparency and provision of information to users, human oversight, robustness, accuracy and security.

The Proposal envisages an EU-wide database for stand-alone high-risk AI systems with fundamental rights implications, to be operated by the European Commission and populated with data by the providers of those AI systems prior to placing them on the market or otherwise putting them into service. 

Providers of high-risk AI systems will be required to establish and document a post-market monitoring system, to collect, document and analyse data on the performance of the AI system, and to report serious incidents and malfunctioning to the market surveillance authorities.

Low-risk and minimal-risk systems

AI systems which do not qualify as prohibited or high-risk are not subject to AI-specific requirements, other than an obligation to disclose the use of AI to users. However, providers of such systems may choose to adopt a voluntary code of conduct with a view to increasing their AI systems’ trustworthiness.

Enforcement and sanctions

The Proposal provides for a two-tier AI governance system – at European Union and national level. At Union level, it is envisaged that an Artificial Intelligence Board composed of representatives of the Member States and the Commission will be created to facilitate harmonised implementation and cooperation. At national level, each Member State will be expected to designate one or more national competent authorities and a national supervisory authority. 

It is envisaged that non-compliance of an AI system with requirements or obligations provided for in the Proposal will result in, among other sanctions, administrative fines. The Proposal sets out three tiers of maximum thresholds for administrative fines that may be imposed for relevant infringements, ranging from the higher of €30m or six per cent of the total worldwide annual turnover of the offender down to the higher of €10m or two per cent of the total worldwide annual turnover of the offender.

Conclusion

The Proposal is still at the early stages of the European legislative process. Its human-centric, risk-based approach and obligations relating to record keeping, transparency and reporting as well as the introduction of significant administrative fines evoke parallels with the European General Data Protection Regulation. The definition of an AI system as well as the approach to determining the category of risk of an AI system have already been criticised as requiring extensive analysis and having the potential to add significant compliance costs for users and providers. It remains to be seen whether the Proposal will meet with the support of the European Parliament and of the Council and whether it will be successful in setting a new global standard for AI.
