Tech versus virus: Remote diagnostics

This time we address solutions from the front lines: devices for remote diagnostics, which can improve detection of the coronavirus and also unburden the health service in other areas. These solutions can also serve as a proving ground for the regulatory approach to oversight of algorithms.

The immediate inspiration for writing this text was a solution from the company StethoMe presented at the DemoDay organised by the MIT Enterprise Forum CEE. It is a wireless stethoscope combined with an application allowing respiratory examination at a distance. The system also enables analysis of the collected data using an artificial intelligence algorithm. StethoMe is currently testing the possibility of using a remote stethoscope to examine symptoms caused by the coronavirus. Remote diagnostics could greatly improve the effectiveness and safety of our fight against the virus.

It is a safe bet that one effect of the pandemic will be increased interest in remote diagnostics solutions in the near future. It is therefore worth pointing out some of the special regulatory challenges these solutions will necessarily entail.

Medical devices as a proving ground for oversight of AI

Remote diagnostics systems will in the great majority of cases be regarded as medical devices. This is a category of products subject to special regulations for introduction onto the market. The aim is to ensure an adequate level of quality and safety for products on which human health or even life often depend.

In legal terms, one of the most interesting elements of remote diagnostics systems is their AI modules, which will no doubt be an increasingly common feature of these systems. Software, including AI algorithms forming part of remote diagnostics systems, will also be treated as a medical device. This follows expressly from the definition of a medical device under Polish and EU law. The case law of the Court of Justice of the European Union also confirms that software can be treated as a medical device if it meets certain conditions (e.g. C-329/16, Snitem).

Consequently, medical devices will be one of the first fields where true oversight of AI algorithms occurs. It will serve as a proving ground, and the experience gathered there will be invaluable for designing a model for oversight of algorithms in other fields. Designing the right model for supervising algorithms is widely recognised as one of the key challenges facing the data economy (as noted, for example, in European Commission documents on AI).

Can the MDR keep up with algorithms?

Are the regulations governing medical devices prepared to tackle this ambitious task? The Medical Device Regulation ((EU) 2017/745) will play a key role. The MDR was to apply from 26 May 2020, but due to the COVID-19 epidemic its application is being postponed by one year. A regulation from the European Parliament to this effect is expected within days.

In the context of AI algorithms forming part of medical devices, there are two key challenges: establishing an approach to the “explainability” of algorithms, and addressing the adaptable nature of some algorithms.

  • Explainability

The “explainability” of algorithms is one of the hottest topics in the AI debate. The process by which some algorithms operate is not easy to understand. Sometimes it cannot be determined which conditions led the algorithm to generate a certain result. In the absence of developed standards for methods of explaining the operation of algorithms, we increasingly face the dilemma of whether to release algorithms for use when the rules behind their functioning are unclear to us. Obviously, this dilemma is particularly acute in fields where the results of the operation of algorithms directly affect the situation of individuals, including their life and health.
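
To make the notion more concrete, below is a minimal sketch (our illustration, not any regulatory standard) of one popular post-hoc explanation technique, permutation feature importance, applied to a deliberately opaque model. The synthetic dataset and choice of model are assumptions made purely for the example.

```python
# A minimal sketch of one post-hoc "explainability" technique:
# permutation feature importance. Assumes scikit-learn; the data
# and model are purely illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic "diagnostic" data: 1000 cases, 10 measured features.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A random forest is a typical "black box": accurate, but no single
# human-readable rule explains an individual prediction.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance approximates a global explanation: how much
# does accuracy drop when each feature is randomly shuffled?
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```

Even so, such global importance scores do not explain any individual diagnosis, which is precisely the gap regulators will have to grapple with.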

With respect to the explainability of algorithms, the MDR does not seem to dictate any specific solution. True, it does set specific conditions concerning, for example, software verification and validation (e.g. point 6.1(b) of Annex II), but it does not expressly require that the algorithms used be “explainable.” Thus, based on the literal wording of the MDR, it would be hard to assume that only explainable algorithms may be used in medical devices. Going forward, much will depend on specific oversight practices.

  • Adaptability

The adaptable nature of some algorithms may pose an even bigger challenge. Adaptable algorithms have an inbuilt ability to change and learn. They undergo endless evolution, which at least in theory will lead to their continual improvement. This process can be largely automated and occur without human intervention.
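
For readers less familiar with the technology, the sketch below (purely illustrative; the simulated data stream and model are assumptions for the example) shows why change is built into such systems: an online-learning classifier updates its parameters with every new batch of data it sees in operation.

```python
# A minimal sketch of an "adaptable" algorithm: a linear classifier
# trained incrementally, so the deployed model changes continuously.
# Data and model are purely illustrative.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(random_state=0)
classes = np.array([0, 1])  # must be declared on the first partial_fit call

# Simulate a stream of diagnostic readings arriving after deployment.
for batch in range(5):
    X_new = rng.normal(size=(100, 4))               # 100 new cases, 4 features
    y_new = (X_new[:, 0] + X_new[:, 1] > 0).astype(int)
    model.partial_fit(X_new, y_new, classes=classes)
    # After each update, the model's parameters, and thus the behaviour
    # of the "device", are no longer what was originally assessed.
    print(f"batch {batch}: coefficients {model.coef_.round(2)}")
```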

The MDR does not expressly address adaptable algorithms. It may thus be concluded that their use in medical devices is not prohibited. But there are at least a few provisions of the MDR whose application may in practice raise many problems when such algorithms are used. For example, Art. 10(9) MDR requires changes in device design or characteristics to be adequately taken into account in a timely manner. Point 2.4 of Annex IX requires notification of substantial changes to the quality management system. And Part C, point 6.5.2, of Annex VI indicates the need to assign a new UDI-DI, for example when an algorithm’s interpretation of data changes.

The problem in applying these provisions lies primarily in determining the legal significance of a change caused to a medical device by adaptation of the algorithm. As noted, in the case of adaptable algorithms, change is in a sense a constant process. It would be highly impractical if every change resulting from the algorithm’s self-learning triggered the need to reauthorise the device. On the other hand, it is obvious that certain adaptations of an algorithm should trigger at least a partial reauthorisation. The key is setting the boundary conditions that give rise to a need to take specific regulatory actions. Unfortunately, the MDR does not contain any guidelines in this respect. Thus here too, supervisory practice will play a vital role.

How other jurisdictions do it

Obviously, these challenges also arise on other markets. The US Food and Drug Administration takes an interesting approach in its Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD). With respect to such algorithms, the FDA calls for a total product lifecycle (TPLC) regulatory approach, which involves introducing mechanisms enabling continual, effective monitoring of evolving algorithms. The framework assumes the possibility of agreeing on an “algorithm change protocol,” which would set in advance the boundaries for adaptation of a given algorithm in the future. Adaptation of the algorithm within this framework, so long as appropriate monitoring is ensured, would not require reauthorisation of the algorithm. This is one idea for ensuring adequate flexibility of the system in the face of the new phenomena inherent in the development of algorithms.
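
To illustrate the logic of such a protocol, here is a hypothetical sketch; the metrics, thresholds and names are our assumptions, not anything specified by the FDA. The idea is that an update is deployable without reauthorisation only while its validated performance stays within a pre-agreed envelope.

```python
# A hypothetical sketch of the idea behind an "algorithm change protocol":
# performance bounds are agreed in advance, and an updated algorithm may
# be deployed without reauthorisation only while it stays inside them.
# The bounds, metrics and names are our illustration, not the FDA's.
from dataclasses import dataclass

@dataclass
class ChangeProtocol:
    min_sensitivity: float  # pre-agreed lower bound
    min_specificity: float  # pre-agreed lower bound

    def permits(self, sensitivity: float, specificity: float) -> bool:
        """True if the updated algorithm stays within the agreed envelope."""
        return (sensitivity >= self.min_sensitivity
                and specificity >= self.min_specificity)

protocol = ChangeProtocol(min_sensitivity=0.90, min_specificity=0.85)

# Validation results for a retrained model version (illustrative numbers).
new_version = {"sensitivity": 0.93, "specificity": 0.88}

if protocol.permits(**new_version):
    print("Within the agreed envelope: deploy under continued monitoring.")
else:
    print("Outside the envelope: escalate for (partial) reauthorisation.")
```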

The growth of remote diagnostics systems containing AI components will be possible only if we meet these challenges for oversight of algorithms. Although the EU’s Medical Device Regulation is a new law, it does not contain provisions directly addressing the reality created by algorithms. Thus national regulators will have to play a major role in overcoming these challenges. Hopefully they will have sufficient openness and courage to ensure the safe development of these promising technologies.

Krzysztof Wojdyło, Dr Ewa Butkiewicz, Joanna Krakowiak
