
AI must not predict how judges in France will rule

Much has been written about artificial intelligence in the legal profession, and we have discussed various types of solutions in this area on our blog. One is predictive analytics, i.e. using algorithms to anticipate how a given judge will rule on a given set of facts. Such tools rely mainly on an analysis of rulings issued in the past and draw various conclusions from them, e.g. about the chances of prevailing in a dispute.

A recent reform of the justice system in France introduced a ban on using data concerning the identity of judges to evaluate, analyse, compare or predict their actual or supposed professional practices. Violation of this ban can lead to up to five years in prison.

This appears to be the world’s first ban on predictive analytics in the legal profession imposed by a national legislature. Moreover, it cannot be ruled out that a similar approach will be adopted in other countries in the Continental legal tradition. In common-law jurisdictions, by contrast, judges seem resigned to the knowledge that AI algorithms used for predictive analytics will employ judges’ names to determine how a given judge will probably rule in a similar case in the future. As argued during the debate over introduction of this provision in France, however, judges in Continental systems are concerned that algorithms might expose inconsistencies between their rulings and applicable provisions of law, or will have a negative impact on protection of judges’ personal data.

A key question is how this ban will affect legal tech providers. Will they abandon the use of judges’ names in their algorithms, or will they continue to use them regardless? Considering the harsh penalty under the French regulations, the second option seems unlikely. Presumably, these businesses operate on the assumption that even excluding judges’ names will not deprive their analyses of significant added value for users. While the possibility of predicting the judgment a specific judge will issue is tempting, it is not essential to the operation of these tools. Predictive analytics algorithms can determine other elements essential to the proceeding, and the user can then draw on publicly available sources to supplement the results with information about the possible behaviour of individual judges (although it is not certain whether such activity might also violate the newly introduced ban).

Are regulations of this type a good solution? The arguments concerning protection of personal data seem particularly wide of the mark, considering that there is no universal requirement in France to delete the names of judges from published judgments. The motivations for introducing these regulations also seem to conflict with values such as transparency and public understanding of the operation of the justice system. Such prohibitions will probably not have a major negative impact on the legal tech industry, but in France at least they will limit the functionality of the available tools.

Katarzyna Szczudlik