As anonymisation of data appears to be the main method for escaping the restrictive regime of the General Data Protection Regulation, it is worthwhile for data processors to be aware of the risks they may be exposed to if anonymisation is not done properly or the data can be traced back to specific people. Should firms applying artificial intelligence to anonymised data expect to be held liable when it turns out that the data they are using have not been permanently anonymised but merely pseudonymised, a reversible operation?
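The distinction matters in practice. A minimal sketch in Python (purely illustrative, with invented record fields and helper names) shows why pseudonymisation is reversible: the direct identifiers are replaced with tokens, but as long as the mapping back to the original names is retained, the person can be re-identified and the data remain personal data under the GDPR.

```python
import secrets

# Hypothetical customer records used only for illustration.
records = [
    {"name": "Anna Kowalska", "city": "Berlin", "purchases": 12},
    {"name": "Jan Nowak", "city": "Warsaw", "purchases": 3},
]

token_map = {}  # the reversal key kept by the processor


def pseudonymise(record):
    """Replace the direct identifier with a random token, keeping the mapping."""
    token = secrets.token_hex(8)
    token_map[token] = record["name"]  # the link back to the person is preserved
    return {**record, "name": token}


def re_identify(record):
    """Because the mapping still exists, the operation can be reversed."""
    return {**record, "name": token_map[record["name"]]}


pseudonymised = [pseudonymise(r) for r in records]
restored = [re_identify(r) for r in pseudonymised]
assert restored[0]["name"] == "Anna Kowalska"
```

Genuine anonymisation would require destroying the mapping and ensuring that the remaining attributes (here, city and purchase count) cannot be combined with other sources to single a person out again.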
In February 2018 the EU’s Article 29 Data Protection Working Party published its Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679. The guidelines explore Art. 21–22 of the General Data Protection Regulation and, although the title may not indicate it, provide another element in the legal framework for the development and use of artificial intelligence. They also show that this framework may be truly restrictive.
The General Data Protection Regulation, which takes effect on 25 May 2018, is not the only privacy revolution in store for the EU. The proposed ePrivacy Regulation is also generating growing controversy and may change the shape of the internet as we know it.
Many startups offer their clients big data analysis services based on machine-learning algorithms. The results of such analyses can be of interest to any company profiling its products or marketing campaigns. But for the analysis to be reliable, it takes data, and the more the better. Algorithms using machine learning must have something to learn from. The accuracy of the forecasts subsequently developed for business aims will depend on the scope of the training data fed to them. If the algorithm is limited from the start to an abridged sample of observations, the risk increases that it will incorrectly group data, overlooking important correlations or causal connections, or seeing them where they don’t exist. Only training the algorithm on large datasets can minimise the risk of shortcomings in diagnosis and prognosis.
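A toy illustration (not any startup’s actual pipeline; the variables and sample sizes are invented) shows how easily a small sample produces spurious patterns: two genuinely unrelated variables can appear strongly correlated purely by chance, while a large sample reveals that no relationship exists.

```python
import numpy as np

rng = np.random.default_rng(42)


def max_spurious_correlation(n_observations, n_trials=1000):
    """Strongest apparent correlation found between independent variables."""
    best = 0.0
    for _ in range(n_trials):
        x = rng.normal(size=n_observations)
        y = rng.normal(size=n_observations)  # independent of x by construction
        best = max(best, abs(np.corrcoef(x, y)[0, 1]))
    return best


print(f"strongest 'pattern' in 10-row samples:     {max_spurious_correlation(10):.2f}")
print(f"strongest 'pattern' in 10,000-row samples: {max_spurious_correlation(10_000):.2f}")
```

On tiny samples the search routinely turns up apparent correlations close to 1, which disappear once the sample is large enough, which is exactly the kind of shortcoming in diagnosis and prognosis that large training datasets help avoid.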
In 2012 a 15-year-old girl was killed when she was hit by a train in the Berlin metro. Not knowing whether the death of her daughter was suicide or an accident, her mother decided to log on to her daughter’s Facebook account and read her messages in the hope that this would resolve the matter.
After attempting unsuccessfully to guess the password, the mother asked Facebook to provide her with her daughter’s details and allow her to read private conversations. Facebook refused to grant access to the account, which had been changed to a “memorialised” account. In effect the account was frozen, and the timeline was being used as a place for friends to share memories of the deceased girl.
The new recommendation on the processing of personal data in the context of employment is designed to meet the challenges posed by increasing digitisation.
On 1 April 2015 the Council of Europe adopted Recommendation CM/Rec(2015)5 of the Committee of Ministers to member States on the processing of personal data in the context of employment. The previous recommendation was issued before the growth of the internet and new technologies, and did not reflect contemporary realities. Aware of the increased use of new technologies and electronic communications in dealings between employers and employees, the Council of Europe decided to modify the recommendation to ensure adequate protection of personal data in employment.