Tech versus virus: Contact tracing
The battle with the coronavirus is entering a new phase. After the initial shock, we are realising that technology may have a crucial impact on how quickly we return to a somewhat more normal life. This doesn't mean just biotech. Solutions keeping the virus under relative control until effective vaccines reach the market may prove just as important.
With this article we are launching a series of publications on the legal aspects of solutions supporting the battle with the coronavirus. These solutions are extremely interesting conceptually and technologically, but they also raise numerous legal issues.
From a lawyer's perspective, the first topic that comes to mind is fairly obvious: applications based on the concept of contact tracing, i.e. monitoring interpersonal interactions to identify individuals infected with the virus. We place great hopes in these solutions for overcoming the pandemic. But they are based on monitoring human behaviour, which presents fundamental legal challenges.
What's up with contact tracing?
Contact tracing apps encompass many potential solutions with essentially the same aim: to effectively identify persons who may be infected with the coronavirus. Identifying these people is one of the key factors for slowing the spread of the virus until society gains herd immunity.
This aim can be achieved through monitoring of human interactions. After determining that a person is carrying the virus, his or her social interactions can be quickly reconstructed to reach people whose contact with the carrier places them in a risk group. These persons might then, for example, be given priority testing for the virus.
This can be done in many different ways. A range of ideas are being pursued, the situation is dynamic, and it is hard to say at the moment which solutions will prevail. In practice, different solutions may be applied in different regions of the world.
A common feature of many solutions is the technology used to monitor interactions. Most of the apps are based on the Bluetooth function in smartphones. This technology enables automatic sharing of data between devices within a certain distance of each other, allowing smartphones to gather data on the interactions of their users.
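To make this concrete, here is a minimal sketch (in Python) of the kind of record such an app might store for each encounter. The field names and values are illustrative assumptions, not the data model of any real application.

```python
# Illustrative only: a hypothetical schema for a single Bluetooth encounter.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ContactRecord:
    peer_identifier: bytes        # identifier broadcast by the other smartphone
    first_seen: datetime          # when the encounter began
    duration_seconds: int         # how long the devices stayed within range
    estimated_distance_m: float   # rough distance, typically inferred from
                                  # Bluetooth signal strength (RSSI)

# Example: a ten-minute encounter at roughly 1.5 metres
record = ContactRecord(
    peer_identifier=bytes.fromhex("a3f1c2d4e5b60718"),
    first_seen=datetime(2020, 5, 4, 9, 30),
    duration_seconds=600,
    estimated_distance_m=1.5,
)
```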
A major
difference between contact tracing projects is the solutions they adopt for
collecting and processing data, and the architecture of the applications, particularly
the degree of centralisation or decentralisation of data exchange.
The most centralised models enable collection of data which can unequivocally identify individuals and transmission of their data to administrative authorities (such as the sanitary inspectorate, but also the police). At the opposite pole are models providing only for processing of anonymous identifiers and sharing of data on a peer-to-peer basis, without involving the public administration.
The differences between these models can be best illustrated by tracing the life cycle of data in the application. Let's assume that citizens use their smartphones on a mass scale with the Bluetooth function switched on (raising the first serious dilemma: to what degree should use of the application be voluntary or compulsory?). The smartphones record data on their interactions with other smartphones. In practice, certain identifiers generated by the smartphones are recorded (presenting another dilemma: how such identifiers are constructed), along with, for example, data on the distance and duration of the interactions. These data are stored on the smartphones. Depending on the model selected, they may also be passed on, e.g. to a central data repository monitored by administrative authorities. The models can also differ in how long the collected data are stored.

A key moment in the functioning of the app is when it learns that an individual is carrying the virus. Models differ in how this information reaches the system. Some assume that the citizen will voluntarily submit this information, while others assume top-down entry of this information by the administration. (In the second variant, the system would clearly have to allow processing of data enabling the administration to compare data on infected persons with data obtained from smartphones.)

After information on carriers is entered in the system, the carriers' interactions with other people can be reconstructed, either within a central repository or locally on users' devices. To this end, the carrier's identifiers would be compared to the identifiers recorded on a given smartphone. If they match, the app would use the data on the distance and duration of the interaction to assess the probability that the smartphone's user is infected. At this moment, another dilemma arises: the information on the risk of infection could be accessible only to the smartphone user, or could be automatically transmitted to the authorities.
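This local matching step can be sketched in a few lines. Continuing the hypothetical ContactRecord example above, the function below compares published carrier identifiers against locally stored encounters and assigns a crude risk label; the distance and duration thresholds are illustrative assumptions only, not taken from any real app.

```python
# A hypothetical sketch of the matching step, reusing the ContactRecord
# class from the earlier example. Thresholds are illustrative assumptions.

def assess_exposure(local_records, carrier_identifiers):
    """Return locally stored encounters matching a known carrier,
    each paired with a crude risk label based on distance and duration."""
    matches = []
    for rec in local_records:
        if rec.peer_identifier in carrier_identifiers:
            close = rec.estimated_distance_m <= 2.0       # within ~2 metres
            prolonged = rec.duration_seconds >= 15 * 60   # at least 15 minutes
            risk = "high" if (close and prolonged) else "low"
            matches.append((rec, risk))
    return matches

# Run entirely on the user's device against a published list of carrier
# identifiers, so the result need never leave the smartphone.
exposures = assess_exposure([record], {record.peer_identifier})
print(exposures)  # [(ContactRecord(...), 'low')] -- only 10 minutes of contact
```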
It should be evident that the number of possible configurations and variants for contact tracing applications is large. When the coronavirus reached Europe and the United States, concepts for such apps were examined in light of the approaches to privacy and the permissible bounds of state intrusion on individual privacy prevailing in those regions. This review has led to several interesting initiatives attempting to develop standards for contact tracing applications consistent with regional approaches to privacy.
One example is the PEPP-PT initiative (Pan-European Privacy-Preserving Proximity Tracing). Its creators are seeking to set certain standards for the operation of proximity-tracing applications. In simple terms, these standards are based on publication of only anonymous identifiers of smartphone users. This model can achieve the aim of notifying an exposed individual that he or she is at risk of infection, without sharing data identifying the carrier. In this context, the concept of temporary contact numbers has also been developed, i.e. randomly generated identifiers that are replaced at regular intervals, increasing the anonymity of the solution.
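The idea of temporary contact numbers is simple enough to show in a few lines of code. Below is a minimal sketch, assuming a hypothetical 15-minute rotation interval and 16-byte identifiers (both illustrative choices, not values mandated by PEPP-PT):

```python
# Illustrative sketch of temporary contact numbers: each identifier is
# freshly random, so successive numbers cannot be linked to one device.
import secrets

ROTATION_MINUTES = 15  # hypothetical rotation interval, for illustration

def generate_tcn_schedule(count):
    """Generate `count` independent temporary contact numbers."""
    return [secrets.token_bytes(16) for _ in range(count)]

# One day's worth of identifiers at a 15-minute rotation
day_of_tcns = generate_tcn_schedule(24 * 60 // ROTATION_MINUTES)
assert len(set(day_of_tcns)) == len(day_of_tcns)  # all distinct (overwhelmingly likely)
```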
Apple and Google have also joined forces to create a common standard for such applications. It includes many elements called for by PEPP-PT, and there are many indications that the Google/Apple standard may become dominant.
Legal conundrums
Personal data issues obviously pose a fundamental legal challenge for contact tracing. There is no need to repeat here the many public comments raised on this issue. Rather, I would like to point out a few less obvious aspects.
As stated, there are a great many potential models for contact tracing applications. This diversity also translates into many possible configurations in the area of personal data. First, in some models it is justified to ask whether the apps would actually be processing personal data at all. Some models are based only on forwarding of anonymous identifiers randomly generated at certain intervals. Some models call for these identifiers to be shared directly only between end users' devices, using Bluetooth connectivity. Only keys would be transmitted to the central server, and only after these are downloaded to end users' devices could it be determined whether a specific identifier is stored on a given device.
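A simplified sketch may help here. It is loosely inspired by designs such as DP3T but does not reproduce any actual protocol: each device derives its rotating identifiers from a secret daily key using a one-way function; on diagnosis only the key is uploaded, and every other device re-derives the identifiers and checks for matches locally. The key size, hash construction, and 15-minute windows are all assumptions for illustration.

```python
# Simplified, hypothetical key-based model: only the daily key ever reaches
# the central server; all matching happens on end users' devices.
import hashlib
import secrets

IDENTIFIERS_PER_DAY = 96  # one per 15-minute window (illustrative)

def derive_identifiers(daily_key):
    """Derive a day's rotating identifiers from one secret key
    using a one-way function (here SHA-256, truncated to 16 bytes)."""
    return [
        hashlib.sha256(daily_key + i.to_bytes(2, "big")).digest()[:16]
        for i in range(IDENTIFIERS_PER_DAY)
    ]

# Device A broadcasts identifiers derived from its secret daily key
key_a = secrets.token_bytes(32)
broadcast_by_a = derive_identifiers(key_a)

# Device B happens to record two of them during encounters with A
observed_by_b = {broadcast_by_a[10], broadcast_by_a[42]}

# A is diagnosed and uploads only key_a; B downloads it, re-derives the
# identifiers, and checks for matches entirely locally
rederived = set(derive_identifiers(key_a))
print("exposure detected:", bool(observed_by_b & rederived))  # True
```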
Thus we have a whole palette of options. The matter is obvious in the case of apps transmitting data unequivocally identifying an individual to a central server. That will involve processing of personal data. At the opposite end of the spectrum are apps creating several levels of data anonymisation. First, the app is built on anonymised identifiers randomly generated at set intervals. Second, only encryption keys, not the identifiers themselves, would reach the central repository. In the case of such applications, even if we recognise that anonymous identifiers can in certain circumstances enable identification of individuals (in this context I recommend an analysis of the DP3T system), they are processed only on the devices of the apps' end users. In turn, treating the keys themselves as personal data seems at first glance more problematic, particularly if the architecture of the system ensures that only end users can use such keys to reconstruct identifiers.
The second and perhaps even more interesting issue is determining whether certain models involve a data controller at all. This applies particularly to decentralised models, which involve at most transferring a very narrow bundle of anonymised data to a central server. The potential operator of the central repository would process only anonymised identifiers. As a rule, the operator could not associate these identifiers with specific persons. Such association would potentially be possible only at the level of end users. Moreover, some developers are considering placing anonymous identifiers (or alternatively keys) on a public blockchain, which would make it even harder to identify any data controller. Consequently, it may turn out that even if the apps process information which, under a cautious approach to the definition, could be regarded as personal data, in practice there would be no entities in the system obliged to comply with certain duties under the General Data Protection Regulation (such as informational duties with respect to data subjects). The end users in particular would not seem to qualify as data controllers, because as a rule they would fall within the exclusion from application of the GDPR for purely personal or household activities (Art. 2(2)(c) GDPR).
Another interesting aspect in the context of personal data is the potential profiling conducted by these applications. Under the GDPR, "profiling" means any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in this case to analyse or predict aspects concerning the person's health. This seems to be precisely what contact tracing apps would be used for. Art. 22 GDPR contains a crucial provision attempting to create legal protection for individuals whose situation depends on automated processing of their personal data, including profiling. Leaving aside the many nuances that would have to be analysed to determine whether Art. 22 would apply at all to processing of data in contact tracing apps, I would point to the peculiarity of apps that conduct profiling only at the level of end users' devices. In that case, even if the grounds for applying Art. 22 GDPR were fulfilled, its application would be practically excluded.
In the context of the Polish Telecommunications Law, there are at least two essential elements to consider in connection with contact tracing applications. First, the operation of the app should be analysed in terms of whether the data processed using the app are covered by telecommunications secrecy. If such data are identified, then apart from personal data issues, the operating model must be reconciled with the restrictions on processing of telecommunications secrets. Second, the Telecommunications Law also imposes rules for access to data stored on an end user's device and installation of software on the end user's device. Apps providing such possibilities must be designed to comply with Art. 173 of the Telecommunications Law.
As a precaution, contact tracing apps should also be examined in terms of the regulations on medical devices. For some time, along with increasing digitalisation, there has been a growing problem with the legal classification of apps used for monitoring human health. They often fall into a grey area, where it can be very hard to determine whether or not an app constitutes a medical device. This results from the broad definition of medical devices, which extends for example to software intended for use in people for diagnosis, prevention, monitoring, treatment or alleviation of disease. The judgment of the Court of Justice in C-329/16, Snitem, is relevant in this context, along with the European Commission's guidance document on qualification of standalone software as a medical device.
Admittedly, the foregoing comments are highly abstract. In practice, due to the great differences between particular applications, it is essential to conduct a legal analysis of each model based on the technical details of the specific solution.
The EU position
EU authorities
have also recently issued official positions on contact tracing apps. Particularly
notable in this context are the Commission Guidance
on Apps supporting the fight against COVID 19 pandemic in relation to data
protection and
the Common EU Toolbox
for Member States on mobile applications to support contact tracing in the EU’s
fight against COVID-19.
These documents present a preliminary outline of the architecture of contact tracing apps which, taking into account the peculiarities of the European approach to privacy protection, would be desirable from the perspective of EU bodies. Among other things, the guidance calls for appointing public bodies as data controllers, primarily to avoid doubts over which entity is required to carry out the duties under the GDPR with respect to data subjects. The suggested bases for processing of personal data are Art. 6(1)(c) and Art. 9(2)(i) GDPR. The point is to ensure transparency in the grounds for processing of data. Data should be processed on end users' devices and transmitted to a central repository only in the case of a positive diagnosis. In line with the principle of data minimisation, the Commission recommends avoiding the processing of location data as unnecessary for the aims of the apps. The guidance also supports decentralised app architecture as promoting the principle of data minimisation. There are also specific calls for solutions enabling interoperability between the various apps used in different member states.
Setting new boundaries
Notwithstanding the legal nuances, it is important to recognise the broader context of the debate over contact tracing apps. In the coming weeks, a battle will play out not only over our health, but also over setting new boundaries for protection of privacy and permissible state encroachment on it. Dilemmas that could previously be resolved in processes spread over years must now be decided in a matter of weeks. Under these circumstances, the desire to bring the pandemic under control may dull our sensitivity to the need to protect our fundamental rights. The risk is that once a barrier has been breached, it may be impossible to rebuild it. In this context, initiatives like the international Data Rights for Exposure Notification or, in Poland, the seven pillars of trust, are vital. They create a chance to ensure that our choices are adequately thought through, even if they ultimately lead to a redefinition of our notions of privacy.
Krzysztof Wojdyło