
Contact Tracing Apps: Privacy vs. Security?

Photo by Fusion Medical Animation on Unsplash

Last Friday, there was an unusual joint announcement from Apple and Google providing details of a new phone API for Covid-19 contact tracing via Bluetooth. The protocol allows mobile phones to continually transmit Bluetooth advertisements to one another. Each advertisement includes a proximity identifier derived from randomly generated keys that are held secretly on the device. If a phone user is later diagnosed with Covid-19, they can upload the daily tracing keys for those days when they might have been infectious.
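
To make the mechanism concrete, here is a minimal sketch of this style of key derivation in Python. The HKDF/HMAC-SHA256 construction, the "CT-DTK" and "CT-RPI" labels and the 16-byte truncation follow our reading of the draft cryptography specification, but treat the details as illustrative assumptions rather than a reference implementation:

```python
import hashlib
import hmac
import os
import struct

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def generate_tracing_key() -> bytes:
    # One-time 32-byte secret that is generated on, and never leaves, the device.
    return os.urandom(32)

def daily_tracing_key(tracing_key: bytes, day_number: int) -> bytes:
    # A 16-byte key for one day; only these are uploaded (and only for
    # the potentially infectious days) after a positive diagnosis.
    info = b"CT-DTK" + struct.pack("<I", day_number)
    hkdf = HKDF(algorithm=hashes.SHA256(), length=16, salt=None, info=info)
    return hkdf.derive(tracing_key)

def proximity_identifier(dtk: bytes, interval_number: int) -> bytes:
    # The 16-byte rolling proximity identifier broadcast over Bluetooth,
    # rotated many times per day (e.g. every 10 minutes).
    mac = hmac.new(dtk, b"CT-RPI" + struct.pack("<B", interval_number),
                   hashlib.sha256).digest()
    return mac[:16]
```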

Apps using this tracing mechanism will download the daily tracing keys for new infections and derive the proximity codes that would have been transmitted by the infected user’s phone. A match means that the user has been in close proximity to that person in the past, and is at a higher risk of infection. Since only daily tracing keys need to be sent, rather than all of the individual proximity codes, the data remains compact enough that every app can download all of the new diagnosis information every day. Thus the backend server cannot determine, from the accesses made to it, whether a given user has been in contact with a potentially infectious person. Users of the app remain completely anonymous and cannot be tracked, and their contacts remain private. A more detailed analysis of the cryptography is provided in “Contact Tracing: The Most Amazing And Scariest Technology of The 21st Century”.
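
The matching side can be sketched just as simply, reusing the hypothetical proximity_identifier helper above and assuming 10-minute rotation intervals (144 per day). The crucial property is that this loop runs entirely on the device, so the server never learns which users matched:

```python
def match_exposures(published_dtks, observed_rpis, intervals_per_day=144):
    # published_dtks: list of (day_number, daily_tracing_key) pairs
    # downloaded from the diagnosis server.
    # observed_rpis: set of 16-byte identifiers this phone has actually
    # received over Bluetooth in the relevant period.
    matches = []
    for day_number, dtk in published_dtks:
        for interval in range(intervals_per_day):
            if proximity_identifier(dtk, interval) in observed_rpis:
                matches.append((day_number, interval))
    return matches
```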

The basic approach outlined is remarkably similar to the one we published a couple of weeks ago, and to others published at similar times. So we of course welcome the fundamentally privacy preserving aspects of the proposal. There are many groups working on similar apps, including the TCN coalition.

A key practical benefit of the Apple/Google proposal is that it will allow iOS devices to transmit their IDs even when the app is in the background and the phone is locked. This is not currently possible, which hinders the usability of contact tracing apps on iOS and forces the use of connection-based approaches rather than Bluetooth advertising alone. The proposal also ensures full interoperability between iOS and Android devices, and effectively establishes a de facto international standard for the derivation and matching of the proximity IDs.
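
The advertisement itself is just a standard BLE payload. As a rough illustration, here is how such a payload might be assembled; the 0xFD6F service UUID and the exact layout are assumptions from our reading of the draft Bluetooth specification, not a confirmed wire format:

```python
def build_advertisement(rpi: bytes) -> bytes:
    # Sketch of a BLE advertising payload carrying a 16-byte rolling
    # proximity identifier as service data. The 0xFD6F service UUID and
    # exact layout are assumptions, not taken from a final specification.
    assert len(rpi) == 16
    flags = bytes([0x02, 0x01, 0x1A])                    # flags AD structure
    uuids = bytes([0x03, 0x03, 0x6F, 0xFD])              # 16-bit service UUID list
    svc = bytes([len(rpi) + 3, 0x16, 0x6F, 0xFD]) + rpi  # service data AD structure
    return flags + uuids + svc                           # 27 bytes, within the legacy limit
```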

Apple and Google are not initially planning to build the contact tracing apps themselves, or to build the features directly into the OS, although it seems that both might be part of their long term plans. OS integration will take longer to achieve, and since Google doesn’t have such a tight grip on its OS update ecosystem, this might be more problematic for them; they have already decided that they need to push out the basic protocol through the Play Store. So in the meantime it is up to the numerous country-specific and international efforts to build the apps, and the backend systems to support them.

What has been gained in privacy by the underlying protocol can easily be lost in the implementation of the app. Any given app might include trackers that fundamentally undermine the privacy of its users, as research from the Defensive Lab Agency shows. Some of this tracking capability might be unintentional, simply a side effect of the SDKs and approaches normally used to build consumer-facing apps. But we must hold these apps to higher ethical standards: what might be deemed acceptable in some consumer app just isn’t right for an app that we might need to install to return to normal life. Moreover, the larger the code base of the app, the higher the chance that some unintended personal information leakage is exposed and undermines confidence in the whole exercise. The developers of most contact tracing apps have at least agreed to make them available as open source, but continued static analysis is crucial to ensure that this corresponds to what is actually shipped in the app stores. Further, there can be no assurance of what is actually running on the backend, so any transmission of potentially personally identifiable information has to be taken very seriously.

Widespread use of the app brings some new security risks, such as a new form of trolling: fake diagnosis disclosures. In the UK, limited access to testing is driving an approach whereby users simply need to answer a questionnaire to broadcast their self diagnosis to other app users. Strong API security between the contact tracing app and the backend endpoint used to disclose a new infection is therefore critical. Successful scripted attacks on this backend could wreak havoc on the system’s ability to operate. Since the user authentication bar for making a diagnosis disclosure is necessarily low, the system cannot differentiate real and faked information, and transmitting faked diagnosis codes at scale could effectively bring down the entire system. As we said in our recent blog Covid-19 App User Anonymity Mandates App Authentication, we need to at least ensure that only official versions of the app can even make these requests to limit this possibility.
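
As a sketch of what this could look like server side, the disclosure endpoint might require a short-lived signed attestation token before accepting any upload. The endpoint path, header name and helper below are hypothetical, and the JWT check simply stands in for whatever app authentication scheme is actually deployed:

```python
import jwt  # PyJWT
from flask import Flask, abort, jsonify, request

app = Flask(__name__)
ATTESTATION_SECRET = b"replace-with-real-secret"  # shared only with the attestation service

def store_daily_tracing_keys(payload):
    # Hypothetical helper: persist the uploaded keys for later publication.
    pass

@app.route("/v1/diagnosis-keys", methods=["POST"])
def upload_diagnosis_keys():
    # Reject any request that cannot prove it came from an official,
    # untampered app instance; a short-lived signed token raises the
    # bar considerably against scripted fake-diagnosis attacks.
    token = request.headers.get("Approov-Token", "")
    try:
        jwt.decode(token, ATTESTATION_SECRET, algorithms=["HS256"])
    except jwt.InvalidTokenError:
        abort(401)
    store_daily_tracing_keys(request.get_json())
    return jsonify(status="accepted")
```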

Beyond this though there is an even more disconcerting attack scenario: spoofing. Imagine if large numbers of users downloaded a variant of the app that, instead of transmitting a unique set of codes for each app instance, transmitted the same proximity codes. This could be achieved by simply rebroadcasting the codes from an authentic version of the app, or by using an app that is always synchronised to the same code sequence. The Apple/Google implementation doesn’t appear to allow the app to explicitly set the codes, presumably for this very reason, but there doesn’t appear to be anything to stop an Android device emulating the beacon format used in the protocol with arbitrary codes. For maximum coverage, the Bluetooth transmission power could be maximised too. To any official version of the app, these advertising beacons will look exactly like just another user. If those codes are subsequently disclosed as being from a user with a fake positive diagnosis, then everyone who has been exposed to this attack will be notified of potential contact, causing significant disruption and undermining confidence in the system. In our paper we referred to this as a Scaled Replay attack. We suggest that the quantity, and perhaps the locations, of matches against any individual diagnosis disclosure should be measured as an additional step in the API protocol, so that disclosures producing suspiciously high counts can be revoked. But of course, this additional security step adds significant complexity in maintaining privacy for individual diagnoses. There is naturally a tradeoff between privacy and security which needs to be carefully balanced.
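
A hedged sketch of what such a check might look like on the server, with all names and the threshold purely illustrative:

```python
from collections import Counter

# Hypothetical check: client apps anonymously report when a published
# diagnosis produced a local match, and disclosures with implausibly
# broad reach are flagged for revocation.
MATCH_THRESHOLD = 500  # tuning this is itself a privacy/security trade-off

match_counts = Counter()

def report_match(disclosure_id):
    # Aggregate count of devices reporting a match against the daily
    # tracing keys of this disclosure.
    match_counts[disclosure_id] += 1

def suspicious_disclosures():
    # A genuine patient plausibly exposes tens of contacts; a scaled
    # replay attack can generate matches across thousands of devices.
    return [d for d, n in match_counts.items() if n > MATCH_THRESHOLD]
```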

There are some other higher level app design considerations too. We are told that use of the app will be entirely voluntary, justified as a public health intervention for everyone’s benefit. But even if a state doesn’t impose mandatory usage, and only seeks to behaviourally nudge its populace into compliance, that doesn’t prevent others from doing so. In the post-lockdown world it is easy to imagine a scenario in which would-be patrons are asked to show their app status before they are allowed to enter clubs, bars or restaurants: a potentially fearful future in which only those who acquiesce have true freedoms. It is crucial that the design of the app anticipates this possibility. It shouldn’t be possible to open the app and see the current status; notifications should be delivered in a way that allows them to be deleted, so that only the user, not the app, retains the status.

It is clear that in the last few weeks we have made huge strides in designing approaches that can preserve privacy for contact tracing apps. However, the task is not complete until the full system is scrutinised and shown to provide the appropriate trade-offs for a privacy preserving, but secure, post-Covid-19 lockdown world.

 

Richard Taylor

- CTO and Co-Founder at Approov Ltd
Chief Technical Officer with more than 30 years of industry experience. Background in compiler optimization and processor architecture, working more recently in application security and cloud computing technologies. Richard co-founded and is CTO of Approov Mobile Security (previously Critical Blue Ltd) and has led a number of innovative product developments in the areas of EDA, software optimization and remote software attestation.