On 8 April 2019 the European Commission published the Ethics Guidelines for Trustworthy AI, drafted by the High-Level Expert Group on Artificial Intelligence (HLEG AI), an independent body whose main task is to prepare these guidelines as well as recommendations on AI investment policy (work still underway).
Pillars of the European approach to AI: The ethical dimension
The main aim of the guidelines is to promote “trustworthy” artificial intelligence. In the view of HLEG AI, this means that AI must be (1) lawful, complying with all applicable laws and regulations, (2) ethical, ensuring adherence to ethical principles and values, and (3) robust, from both a technical and a social perspective.
The drafters stated that the guidelines do not explicitly address the first component, but focus on the other two. The EU is consistent in not taking a direct position on the legal environment for AI. This approach is understandable, as elaboration of a clear opinion in this respect will require thorough examination and consultation and will also have serious socioeconomic implications. Nonetheless, the guidelines do offer information on potential directions for the growth of AI in the EU.
Defining trustworthy AI
Chapter I of the guidelines identifies the ethical principles and their correlated values that must be respected in the development, deployment and use of AI systems:
- AI systems should be developed in a way that adheres to the ethical principles of: respect for human autonomy, prevention of harm, fairness and explicability.
- Particular attention should be paid to situations involving more vulnerable groups such as children, persons with disabilities and others that have historically been disadvantaged or are at risk of exclusion, and to situations which are characterised by asymmetries of power or information, such as between employers and workers, or between businesses and consumers.
- It should be acknowledged that, while bringing substantial benefits to individuals and society, AI systems also pose certain risks and may have a negative impact, including impacts which may be difficult to anticipate, identify or measure. Consequently, adequate measures should be adopted to mitigate these risks when appropriate, proportionately to the magnitude of the risk.
The rest of the document identifies specific requirements that should be met by AI systems.
The guidelines also state that the document is not meant to take the place of “hard” regulations on AI. Rather, it is conceived as a living document that will be revised as AI develops.
HLEG AI is continuing its work, focusing on recommendations for the EU’s investment policy on artificial intelligence. That document is sure to exert a great influence on measures involving AI in the EU in the near term. Nonetheless, the conclusions adopted in the guidelines should not be overlooked, as they express the approach adopted in the EU, which differs from the position of other major global centres: AI should be people-oriented, remain subordinate to people, and function in a manner respecting fundamental human rights.