Author: Bartosz Troczyński

He keeps an eye on the e-revolution, cheers for the growth of Industry 4.0, and closely follows advances in artificial intelligence, analysing what should and should not be permissible for AI and how such rules can be communicated to AI systems. He encourages digital progress by writing about the collection, transfer and processing of Big Data.

Will Facebook think twice before it removes content?

Last week, the Ministry of Digital Affairs announced that it had concluded an agreement with Facebook introducing a mechanism allowing Polish users to challenge a decision to delete content or a profile.

First instance, Facebook; second instance, contact point on NASK platform

Users around the world complain of arbitrary and unreasonable decisions to remove their content or Facebook profiles.

Until now, Facebook has allowed users to appeal against such decisions by filling in a form on its website. Facebook dealt with the complaints, but that did not always translate into a change of the original decision.

Thanks to the new agreement, after an unsuccessful appeal a user will be able to appeal again, this time via a specially created platform on the website of the Research and Academic Computer Network (NASK), the so-called contact point.


Ways of excluding applicability of the GDPR

At a meeting summarising public consultations on a bill implementing the General Data Protection Regulation (GDPR) in Poland, the Ministry of Digital Affairs confirmed that during legislative work a change was approved providing for major exceptions to the GDPR. This change was proposed in October 2017 by the Ministry of Development. The proposed exception is an interesting example of how hard it can be to draft legislation properly aligned with the needs of a digital economy.


A few smartphone pushes instead of endless scrolling through terms and conditions

Two new documents were issued in December 2017 by the EU’s Article 29 Data Protection Working Party explaining how to interpret and apply the provisions of the General Data Protection Regulation on the consent that must be obtained from data subjects and the information that must be provided to them about the processing of their data. The Guidelines on Consent under Regulation 2016/679 and the Guidelines on Transparency under Regulation 2016/679 demonstrate that the era of lengthy, fine-print terms and conditions is over. Data controllers will achieve better compliance with the GDPR by using brief, easily understood FAQs and notices.


Data not entirely anonymous

As anonymisation of data appears to be the main method for escaping the restrictive regime of the General Data Protection Regulation, it’s worthwhile for data processors to be aware of the risks they may be exposed to if anonymisation is not done properly or the data can be traced back to specific people. Should firms applying artificial intelligence to anonymised data expect to be held liable when it turns out that the data they are using have not been permanently anonymised but merely pseudonymised, a reversible operation?
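The distinction matters in practice. A minimal sketch, using an entirely hypothetical record and key, shows why a keyed token is pseudonymisation rather than anonymisation: whoever holds the key can re-identify the person simply by recomputing tokens for candidate names. True anonymisation would, for example, drop the identifier and aggregate the rest, leaving nothing to reverse.

```python
import hmac
import hashlib

# Hypothetical secret key held by the data controller.
# Anyone holding it can re-identify pseudonymised records.
SECRET_KEY = b"hypothetical-key"

def pseudonymise(name: str) -> str:
    # Deterministic keyed token in place of the name.
    return hmac.new(SECRET_KEY, name.encode(), hashlib.sha256).hexdigest()[:12]

record = {"name": "Jan Kowalski", "city": "Warsaw", "spend": 120}
pseudonymised = {**record, "name": pseudonymise(record["name"])}

# Re-identification: the key holder recomputes tokens for known names
# and matches them against the "anonymised" dataset.
known_people = ["Anna Nowak", "Jan Kowalski"]
matches = [n for n in known_people if pseudonymise(n) == pseudonymised["name"]]
# matches == ["Jan Kowalski"] — the data were never truly anonymous
```

Because the operation is deterministic and the key survives, the token is a pseudonym in the GDPR sense, not anonymised data.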


Another look at AI and GDPR

In February 2018 the EU’s Article 29 Data Protection Working Party published its Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679. The guidelines explore Art. 21–22 of the General Data Protection Regulation, and although the title may not indicate it, provide another element in the legal framework for development and use of artificial intelligence. They also show that this framework may be truly restrictive.


Small firms, big data

Many startups offer their clients big data analysis services based on machine-learning algorithms. The results of such analyses can be of interest to any companies profiling their products or marketing campaigns. But for the analysis to be reliable, it takes data—the more the better. Algorithms using machine learning must have something to learn from. The accuracy of the forecasts subsequently developed for business aims will depend on the scope of the training data fed to them. If the algorithm is limited from the start to analysis of an abridged sample of observations, the risk increases that it will incorrectly group data, overlooking important correlations or causal connections—or see them where they don’t exist. Only training the algorithm on large datasets can minimise the risk of shortcomings in diagnosis and prognosis.
