
Another look at AI and GDPR

In February 2018 the EU’s Article 29 Data Protection Working Party published its Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679. The guidelines explore Art. 21–22 of the General Data Protection Regulation and, although the title may not indicate it, provide another element in the legal framework for the development and use of artificial intelligence. They also show that this framework may prove truly restrictive.

Automated decision-making and profiling are among the key notions brought to the fore by the proliferation of AI algorithms.

Profiling is defined in Art. 4(4) GDPR as any form of automated processing using personal data to evaluate certain “personal aspects” relating to a natural person, such as the person’s economic situation, health, preferences, interests, and so on. Thus the method of processing the data (automated) and the aim of the processing (evaluation of personal aspects) are both relevant. Profiling therefore covers, among other things, techniques for evaluating internet users on the basis of their history of online activity.

Automated decision-making, as explained in the guidelines, means “the ability to make decisions by technological means without human involvement.” It covers situations where not only the analytical process or the generation of recommendations is automated, but also the actual determination of the matter. Thus automated decision-making is not involved where services are performed with the assistance of automated analysis (e.g. profiling) but the decision remains with a human, and does occur where services are performed without any human involvement.

It appears from an analysis of the guidelines that in both respects, the potential offered by AI runs up against limitations imposed by the GDPR. While the use of AI for automation of profiling or decision-making is permissible, an entity employing AI cannot expect to be permitted to exploit all of the fruits generated by AI.

First, the guidelines leave no doubt that Art. 16 GDPR, granting data subjects the right to rectification of inaccurate data without undue delay, also applies to automated processing of data and automated decision-making. The examples given in the guidelines indicate that rectification can apply not only to the input data, but also to data generated as a result of the analysis, e.g. assignment of the data subject to a certain category of customers.

Second, the guidelines point out that individuals will also be entitled to object to automated processing of their data. Objection is more far-reaching than rectification: it requires not modification of the data (or results, as the case may be) disputed by the data subject, but cessation of processing of the data altogether. It is not clear, however, whether an objection must always be treated as requiring cessation of processing of all of the data subject’s data, or only cessation of processing within the scope indicated in the objection.

Thus undertakings wishing to exploit AI to build consumer profiles and take decisions should recognise that they will have to ignore some of the results at the request of the data subjects. The correctness of the technical process and the reliability of the results will be irrelevant. The objection will have to be honoured whether it seems rational or not.

The only route for opposing a demand for rectification or an objection will be to demonstrate compelling legitimate grounds for the processing which override the interests of the data subject making the objection or demanding the rectification. But even this option will be unavailable to entities using automated data processing for direct marketing purposes. In that case, compliance with the data subject’s demand will be unconditional.

It cannot be ruled out that technical limitations will restrict how far rectification and objection can be exercised in practice. In the case of deep-learning algorithms, information such as how customers were grouped, or whether the algorithm performed any grouping at all, may be inaccessible. Deep learning does not allow for step-by-step reconstruction of the analytical process. It is a black box: we know the input data and the output data, but the entirety of the operations conducted by the AI cannot be determined after the fact. Consequently, rights vested in data subjects by the GDPR, and confirmed in the WP29 guidelines, may be hard to enforce when confronted with unreadable black boxes.
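
A minimal Python sketch can illustrate the practical problem (purely illustrative, using hypothetical data and an off-the-shelf scikit-learn classifier; nothing of the kind appears in the guidelines): the input and the resulting customer “segment” are plainly visible, while the computation in between is spread across more than a thousand learned parameters that offer no step-by-step rationale.

```python
# Purely illustrative sketch: a small neural-network classifier assigns
# customers to segments. The input and the output are visible, but the
# learned weights give no human-readable rationale for the grouping.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))            # hypothetical customer features
y = (X[:, 0] + X[:, 3] > 0).astype(int)   # hypothetical segment labels

model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000, random_state=0)
model.fit(X, y)

customer = rng.normal(size=(1, 10))
print("input data:", customer)                        # known input
print("assigned segment:", model.predict(customer))   # known output (the profile)

# The computation in between is spread across all of the learned parameters;
# inspecting them does not explain *why* this customer was grouped this way.
n_params = sum(w.size for w in model.coefs_) + sum(b.size for b in model.intercepts_)
print("learned parameters:", n_params)
```

Even where explanation tools are applied to such a model, they approximate its behaviour rather than reconstruct the actual sequence of operations that produced a given result.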

This does not change the general scepticism toward AI enshrined in the regulations. Under the GDPR, undertakings using AI algorithms in their business, even with the greatest care and professionalism, can rely on those algorithms only until data subjects decide that they disagree with the results. This leads to the observation that, for now at least, the European Union recognises the right to use AI, but allows its use only where regulations protecting other values (e.g. privacy) leave room for it. Hopefully this harsh approach will change once regulations aimed directly at AI are adopted, rather than treating AI as merely a side issue in the protection of other rights.

Bartosz Troczyński