Category: artificial intelligence

European vision of the data-based economy

Key strategic documents from the European Commission on data and AI, the "European data strategy" and "Excellence and trust in artificial intelligence", were recently released for public consultation. They present a European vision for a new model of the economy.

According to these documents, the new model of the economy is to be founded on principles vital to European values, particularly human dignity and trust. This aspect should be stressed, as the European Union is clearly becoming the global leader in thinking about new technologies in light of humanistic values. This is a unique approach, but it also entails several dilemmas. In adopting it, the EU risks eroding its competitive advantages, at least in the short term. AI technologies will most likely develop faster in places where their growth is not restrained by ethical doubts. The Commission is thus proposing an ambitious but also risky approach.

AI must not predict how judges in France will rule

Much has been written about artificial intelligence in the legal profession, and we have discussed various types of solutions in this area on our blog. One of them is predictive analytics, i.e. the use of algorithms to anticipate how a given judge will rule on a given set of facts. Such tools rely mainly on an analysis of past rulings and draw various conclusions from them, for example about the chances of prevailing in a dispute.
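To illustrate what such predictive tools do at their core, here is a minimal, purely hypothetical sketch in Python using scikit-learn: a logistic regression trained on a handful of invented case features (claim value, case duration, whether the claimant is a consumer, and a judge identifier) to estimate the chance of prevailing. The data and feature names are made up solely for illustration; real products work on far richer inputs, such as the full text of past rulings.

```python
# Hypothetical sketch of predictive analytics over past rulings.
# All features and figures below are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Each row is one past case: [claim_value_eur, duration_months,
# claimant_is_consumer (0/1), judge_id]; label 1 = claimant prevailed.
past_rulings = np.array([
    [12_000,   8, 1, 3],
    [250_000, 20, 0, 3],
    [5_000,    4, 1, 7],
    [80_000,  14, 0, 7],
    [30_000,   9, 1, 3],
    [150_000, 18, 0, 7],
])
claimant_prevailed = np.array([1, 0, 1, 0, 1, 0])

# Scale the features, then fit a simple logistic regression.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(past_rulings, claimant_prevailed)

# Estimate the chance of prevailing in a new dispute before judge no. 3.
new_case = np.array([[45_000, 10, 1, 3]])
print(f"Estimated probability of prevailing: {model.predict_proba(new_case)[0, 1]:.0%}")
```

Note that including the judge identifier as a feature is precisely the kind of judge-level profiling targeted by the French ban described below.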

As part of a recent reform of the justice system in France, a ban was introduced against using data concerning the identity of judges to evaluate, analyse, compare or predict their actual or supposed professional practices. Violation of this ban can carry a penalty of up to five years in prison.

Ethics Guidelines for Trustworthy AI: Key principles

On 8 April 2019 the European Commission published the Ethics Guidelines for Trustworthy AI, drafted by the High-Level Expert Group on Artificial Intelligence (HLEG AI), an independent body whose main task is to prepare these guidelines as well as recommendations on AI investment policy (work on the latter is still underway).

Are licences the way to achieve responsible AI?

A problem faced by programmers, politicians, and ordinary users alike is how to ensure that artificial intelligence algorithms are not used contrary to their original purpose. This issue has been raised numerous times in national reports prepared by individual EU member states, including Poland.

Legal personality and artificial intelligence

In October 2017 the humanoid robot known as Sophia, endowed with artificial intelligence, was granted Saudi Arabian citizenship. In May 2018 Google showcased the capabilities of its product Google Duplex, whose AI system can book an appointment at the hairdresser's or reserve a table at a restaurant, avoiding misunderstandings on the phone and imitating the gap-filling hems and haws of human conversation. Observing the capabilities of these systems, a lawyer naturally turns to the question of the potential legal personality of AI.

A limited liability company as a means of attributing legal personality to algorithms?

In a recent article I discussed possible solutions to the question of algorithms' liability for copyright infringement. One solution, put forward some time ago by the European Parliament's Committee on Legal Affairs, is to create the status of "electronic persons". This would mean that an algorithm itself, and not the people responsible for it, would be directly liable for breaking the law.

An alternative, originally proposed in the US and subsequently analysed under Swiss, English, and German law, is to use equivalents of the Polish spółka z ograniczoną odpowiedzialnością (a limited liability company in the US, a GmbH in Germany) as a legal vehicle for attributing legal personality to an algorithm. Such a company would be a "memberless entity".
