EU regulations banning certain AI practices go into effect on 2 February 2025. Some institutions may assume that the bans apply only to extreme practices they would never engage in. But the ban on using AI systems to assess the risk that someone has committed, or will commit, a crime shows that this assumption is mistaken. A closer analysis reveals that some market practices now considered standard, especially in financial services, may prove questionable once the bans enter into force. This is particularly true of monitoring of money-laundering risk and, more broadly, fraud risk.
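To see why routine compliance tooling could be caught by the ban, consider a minimal sketch of profile-based risk scoring. The feature names, weights, and function are entirely hypothetical, invented for illustration; the point is that a score predicting a person's propensity to commit fraud derived purely from profiling characteristics, rather than from objective facts about a specific transaction, resembles the kind of assessment the prohibition addresses.

```python
# Hypothetical profiling-based risk model: every feature and weight below
# is invented for illustration, not drawn from any real AML system.
WEIGHTS = {
    "cash_intensive_business": 0.4,
    "high_risk_jurisdiction": 0.35,
    "new_customer": 0.25,
}

def fraud_risk_score(profile):
    """Sum the weights of the risk factors present in a customer profile.

    A score built this way predicts risk from who the customer is,
    not from what they have actually done.
    """
    return sum(w for factor, w in WEIGHTS.items() if profile.get(factor))

customer = {"cash_intensive_business": True, "new_customer": True}
score = fraud_risk_score(customer)  # a relatively high score on this toy profile
```

Whether a given monitoring system crosses the line will depend on whether its outputs rest on such profiling alone or on verifiable, conduct-related facts.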
Monitoring fraud under the Artificial Intelligence Act
Why did I sign an appeal to halt AI development?
Regardless of whether we see benefits or an existential threat in the latest AI technologies, the gravity of the challenges these technologies pose is undeniable. Over the past few decades, technological advances have far outpaced reflection on their possible consequences. This need not and should not be the case. It is becoming apparent today that technologies are not solely a source of good, as we begin to perceive the destructive impact certain digital technologies have on our democracies, security, and mental health. In the face of recent technological advances such as artificial intelligence, we have an opportunity to avoid past mistakes and at least try to redirect the development of these technologies toward authentic benefits, while at the same time mitigating the risks. In this context, I decided to sign an appeal to temporarily halt work on AI systems, and I encourage others to do so. Below I present the main rationale that guided me.
Artificial Intelligence Act: Will the EU set a global standard for regulating AI systems?
The world pins high hopes on the development of artificial intelligence systems. AI is expected to generate huge economic and social benefits across many aspects of life and sectors of the economy, including the environment, agriculture, healthcare, finance, taxation, mobility, and public administration.
The continuing development of AI systems is forcing the creation of an appropriate legal framework, which on the one hand should facilitate further growth of AI technologies, and on the other should ensure adequate protection of persons using such systems and raise public confidence in the operation of AI systems.
Tech versus virus: Remote diagnostics
This time we address solutions from the front lines: devices for remote diagnostics, which can improve the effectiveness of coronavirus detection and also unburden the health service in other areas. These solutions can also serve as a proving ground for the regulatory approach to oversight of algorithms.
The immediate inspiration for writing this text was a solution from the company StethoMe presented at the DemoDay organised by the MIT Enterprise Forum CEE. It is a wireless stethoscope combined with an application allowing respiratory examination at a distance. The system also enables analysis of the collected data using an artificial intelligence algorithm. StethoMe is currently testing the possibility of using a remote stethoscope to examine symptoms caused by the coronavirus. Remote diagnostics could greatly improve the effectiveness and safety of our fight against the virus.
It is a safe bet that one effect of the pandemic will be increased interest in remote diagnostics solutions in the near future. Thus we should point out some of the special regulatory challenges these solutions will necessarily entail.
European vision of the data-based economy
Key strategic documents from the European Commission on data and AI—the European data strategy and Excellence and trust in artificial intelligence—were recently released for public consultation. They present a European vision for a new model of the economy.
According to these documents, the new model of the economy is to be founded on principles vital to European values, particularly human dignity and trust. This aspect should be stressed, as the European Union is clearly becoming the global leader in thinking about new technologies in light of humanistic values. This approach is unique, but it also entails several dilemmas. In adopting it, the EU risks eroding its competitive advantages, at least in the short term. Most likely, AI technologies will develop faster in places where their growth is not restrained by ethical doubts. The Commission thus proposes an ambitious but risky approach.
AI must not predict how judges in France will rule
Much has been written about artificial intelligence in the legal profession, and we have discussed various types of solutions in this area on our blog. One is predictive analytics, i.e. using algorithms to anticipate the judgments a given judge will issue on a given set of facts. Such tools rely mainly on analysis of rulings issued in the past, drawing various types of conclusions from them, e.g. with respect to the chances of prevailing in a dispute.
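In their simplest form, such predictive tools reduce to statistics over past rulings. The sketch below (hypothetical data and function names; real products are far more sophisticated) shows a judge-level frequency model, the kind of analysis keyed to a judge's identity that the French ban discussed below targets.

```python
from collections import defaultdict

def win_rates(past_rulings):
    """Estimate each judge's historical rate of ruling for the claimant.

    past_rulings: list of (judge, outcome) pairs, where outcome is
    1 if the claimant prevailed and 0 otherwise.
    """
    totals = defaultdict(int)  # rulings seen per judge
    wins = defaultdict(int)    # claimant wins per judge
    for judge, outcome in past_rulings:
        totals[judge] += 1
        wins[judge] += outcome
    return {judge: wins[judge] / totals[judge] for judge in totals}

rulings = [("Judge A", 1), ("Judge A", 1), ("Judge A", 0), ("Judge B", 0)]
rates = win_rates(rulings)  # rates["Judge A"] == 2/3
```

Because the judge's identity is the key input variable here, even such a simple frequency table would appear to fall within the scope of the ban.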
As part of a recent reform of the justice system in France, a ban was introduced on using data concerning the identity of judges to evaluate, analyse, compare or predict their actual or supposed professional practices. Violation of this ban carries a penalty of up to five years in prison.