Much has been written about artificial intelligence in the legal profession, and we have discussed various types of solutions in this area on our blog. One is predictive analytics, i.e. using algorithms to anticipate how a given judge will rule on a given set of facts. Such tools rely mainly on analysis of rulings issued in the past, drawing various conclusions from them, e.g. about the chances of prevailing in a dispute.
As part of a recent reform of the justice system in France, a ban was introduced on using data concerning the identity of judges to evaluate, analyse, compare or predict their actual or supposed professional practices. Violation of this ban carries a penalty of up to five years in prison.
Continue reading “AI must not predict how judges in France will rule”
On 8 April 2019 the European Commission published the Ethics Guidelines for Trustworthy AI, drafted by the High-Level Expert Group on Artificial Intelligence (HLEG AI), an independent body whose main task is to prepare these guidelines as well as recommendations on AI investment policy (work still underway).
Continue reading “Ethics Guidelines for Trustworthy AI: Key principles”
A problem faced by programmers, politicians, and ordinary users alike is ensuring that artificial intelligence algorithms are not used in ways inconsistent with their original aim. This issue has been raised numerous times in national reports prepared by individual EU member states, including Poland.
Continue reading “Are licences the way to achieve responsible AI?”
In October 2017 the humanoid robot known as Sophia, endowed with artificial intelligence, was granted Saudi Arabian citizenship. In May 2018 Google showcased the capabilities of its product Google Duplex, whose AI system can arrange an appointment at the hairdresser’s or reserve a table at a restaurant, avoiding misunderstandings on the phone and imitating the gap-filling hems and haws of human conversation. Observing the capabilities of these robots, a lawyer’s mind naturally turns to the question of the potential legal personality of AI.
Continue reading “Legal personality and artificial intelligence”
In a recent article I discussed possible solutions to the question of liability of algorithms for copyright infringement. One solution, put forward some time ago by the European Parliament Committee on Legal Affairs, is to create the status of “electronic persons”. This would mean that the algorithm itself, and not the people responsible for it, would be directly liable for breaking the law.
An alternative, originally proposed in the US and subsequently analysed under Swiss, English, and German law, is the use of equivalents of the Polish spółka z ograniczoną odpowiedzialnością (a limited liability company in the US, a GmbH in Germany) as a legal vehicle for attributing legal personality to an algorithm. This would be a ‘memberless entity’.
Continue reading “A limited liability company as a means of attributing legal personality to algorithms?”
In my last post I examined whether artificial intelligence could be regarded as an “author” for purposes of copyright law. The topic is intriguing, but we must remember that AI can not only create works that, at least theoretically, can be covered by copyright protection; in creating its own works, it can also infringe copyrights held by others. There are already algorithms that can mimic a certain style of painting or a specific author. In the face of technology enabling anyone with access to it to produce their own “masterpiece by a famous painter,” it is worth considering whether AI can be held liable for copyright infringement, and if not, who can be?
Continue reading “Liability for copyright infringement by AI”