
Are licences the way to achieve responsible AI?

A problem faced by programmers, politicians, and ordinary users alike is ensuring that artificial intelligence algorithms are not used in ways inconsistent with their original purpose. This issue has been raised numerous times in national reports prepared by individual EU member states, including Poland.

In simple terms, the problem is that, for example, code written to analyse the data of cancer patients might also be used in systems for surveillance of the population. Once an algorithm is released (e.g. for further development or as open source), its creators no longer have control over its subsequent use, including use for aims inconsistent with their original assumptions and intentions. This could lead to self-censorship by creators of algorithms, and could also discourage publication of the results of studies involving AI. So far there have been no effective tools for addressing this problem, and initiatives have essentially been limited to drawing up codes of best practice.

Responsible AI Licences—how to limit the use of an algorithm?

A solution to this problem could be Responsible AI Licences (RAIL), which enable creators of algorithms to restrict certain uses of their code. The developers of this solution provide for two types of licences: an end-user licence and a source-code licence. The first restricts how software containing an algorithm may be used, while the second, as the creators of RAIL declare, is intended to reduce the risks of releasing code to the broader public. Developers could then combat use of an algorithm in a manner they regard as undesirable.

The founders of the RAIL project have drawn up a list of unacceptable applications of algorithms, and code released under a RAIL licence may not be used for those purposes. The list is to be expanded by the developers of RAIL with the help of industry members, but it cannot be expanded by the authors of algorithms who license their code under RAIL.
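To illustrate how this might look in practice, the sketch below shows how an author could attach a RAIL-style notice to a Python source file. The notice wording, the LICENSE-RAIL file name, and the function are hypothetical placeholders, not the actual RAIL licence text:

# Hypothetical example of a RAIL-style source-code licence notice.
# The wording, the "LICENSE-RAIL" file name, and the restricted uses cited
# here are illustrative placeholders, not the official RAIL licence text.
#
# Released under a Responsible AI source-code licence: see LICENSE-RAIL in
# the repository root. Use of this code for applications on the licence's
# restricted-use list (for example, surveillance of individuals) is not
# permitted.

def analyse_patient_scans(scans):
    """Toy stand-in for the cancer-research code discussed above."""
    return [scan for scan in scans if scan.get("flagged")]

Any legal force would come from the licence text itself; a header like this merely puts downstream users on notice of the restrictions.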

RAIL: start of a discussion on responsible AI

Undoubtedly RAIL in itself will not suffice to ensure appropriate use of AI, and it will not entirely prevent AI from being put to uses that harm people. Nonetheless, it is one of the first solutions that can realistically be applied immediately to restrict harmful application of AI algorithms. It clearly represents progress in the ongoing debate on codes of conduct and procedures for responsible use of AI, which don’t always offer solutions that can be applied straight away. RAIL will certainly contribute to a deeper discussion on the need to provide ready-made solutions for creators of algorithms, and also on the need to revise documentation, including contracts and terms and conditions for released code, to adapt it to the challenges arising with the growth of artificial intelligence.

Katarzyna Szczudlik