Preventing Faked Proximity

We’ve been thinking a lot about contact tracing apps in recent weeks. There are ongoing debates about whether a centralized or decentralized model is superior, and how the ensuing discussions around privacy will impact their uptake.

Beyond these architectural and philosophical concerns there are some other practical matters to attend to. If these apps are to be successfully rolled out worldwide then they need to be robust against malicious actors. There is a definite tradeoff here between the desire to make these apps private and the desire to make them secure, as we explored previously. Both centralized and decentralized approaches require a secure backend infrastructure which forms the conduit of information between the apps. Mechanisms need to be put in place to prevent this being undermined by malicious actors.

There remains another potential attack vector which is much harder to mitigate: the possibility of faked proximity. This is the risk that two devices are recorded as being close to one another for an extended period of time when they actually weren’t. If one of the device owners goes on to test positive for Covid and notifies the app, then the victim of the faked proximity will receive an alert that might force them to self-isolate, even though they were never at any actual risk. If attacks like this were carried out at any scale they could severely undermine public trust in the whole system. It’s not clear why anyone would try to subvert the system in this way, but we should certainly be doing all we can to prevent it from ever happening.

How could this happen though? The continually randomized Bluetooth identifiers that are exchanged between the devices need to have been received by the apps at more or less the time they were being broadcast. In the centralized approach the app uploads all of the Bluetooth identifiers it has seen, and the server is able to notify the owners of those identifiers. In the decentralized approach the infected user uploads the seeds that generated their previously transmitted Bluetooth identifiers, and these are distributed to all the apps, which can then check whether they have been exposed. In both approaches it would seem that physical proximity is essential to receiving the Bluetooth identifiers.
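To make the decentralized flow concrete, here is a minimal sketch of its general shape. This is a simplification for illustration only: the names, sizes and derivation below are our assumptions, not the actual Google/Apple or DP-3T algorithms.

```python
import hashlib
import hmac

# Simplified sketch: a daily seed is expanded into a sequence of short-lived
# Bluetooth identifiers. Anyone who later learns the seed can regenerate the
# identifiers and check them against what they observed over Bluetooth.

def rolling_identifier(daily_seed: bytes, interval: int) -> bytes:
    """Derive the 16-byte identifier broadcast during one ~15-minute window."""
    msg = b"rolling-id" + interval.to_bytes(4, "big")  # label is illustrative
    return hmac.new(daily_seed, msg, hashlib.sha256).digest()[:16]

def exposed(observed: set, infected_seed: bytes, intervals: int = 96) -> bool:
    """Decentralized check: regenerate a day of identifiers (96 x 15 minutes)
    from a published seed and look for overlap with observed identifiers."""
    return any(rolling_identifier(infected_seed, i) in observed
               for i in range(intervals))
```

The key property is that the identifiers are meaningless to an observer until the seed is published. Note, however, that nothing about them proves they were received over the air rather than over the Internet.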

Unfortunately, that’s not quite true. Both centralized and decentralized systems can be subverted by a relay attack. You might have heard about this sort of thing in the somewhat different guise of keyless car theft. Keyless entry systems allow your car to automatically unlock as you walk up to it, and then drive off without having to use your key. If the key is in your pocket it is communicating by radio with your car, which can tell it is in close proximity and automatically opens. Neat. Except that most people keep their keys hanging up just inside their front doors. So all it takes is two attackers with some high tech kit that relays the radio signal from just outside your front door to the car. It magically opens, and off they (and your car) go. For devices that communicate over Bluetooth, GATT Attack provides a whole package to automate this, as demonstrated here.

A similar style of attack could be perpetrated against contact tracing apps. Imagine if somebody were to develop an app that used exactly the same Bluetooth protocol as the official contact tracing apps. Many of these apps are open source, so the code is widely available. But even if it weren’t, the protocol could easily be reverse engineered. Real contact tracing apps would communicate with this fake app in good faith, and any Bluetooth identifiers it broadcast would be recorded as real proximity contacts. Now imagine that the fake apps were actually communicating with one another over the Internet via a central server. One fake app could record all of the Bluetooth identifiers it was seeing nearby and then send them over the Internet to another fake app, which would broadcast them again just seconds later. Suddenly, as far as the official apps were concerned, it would look like they were sitting right next to somebody who was actually very far away. Furthermore, a single fake app could transmit many different identifiers from many different locations. That single fake app could look like a whole crowd of different people. The real apps wouldn’t be able to tell the difference.
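The core of such a relay is trivially small, which is what makes it worrying. The sketch below is purely conceptual: the queue stands in for the attackers’ Internet channel, and the two functions stand in for real Bluetooth scanning and advertising.

```python
import queue

# Conceptual sketch of the relay: identifiers are opaque bytes, so nothing
# stops them being forwarded over the Internet and replayed elsewhere.

internet_channel = queue.Queue()  # stands in for the attackers' server

def capture_site(heard_identifier: bytes) -> None:
    # Fake app at location A: forward every identifier heard nearby.
    internet_channel.put(heard_identifier)

def rebroadcast_site() -> None:
    # Fake app at location B: replay identifiers seconds after capture, so
    # genuine apps nearby record proximity contacts that never happened.
    while not internet_channel.empty():
        identifier = internet_channel.get()
        print(f"advertising {identifier.hex()} at location B")

capture_site(bytes.fromhex("00112233445566778899aabbccddeeff"))
rebroadcast_site()
```

Note that the relay needs no cryptography at all: it never has to understand the identifiers, only move them.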

So how do we defend against this? Well, actually it’s pretty hard, but there are some things we can do.

Firstly, we need to recognize that this only becomes a significant threat if it were to happen at some scale. The only way that is likely is if a fake app created to do this were to get through either the Google or Apple app stores. It seems unlikely this would happen if this was the app’s advertised purpose, and in any case it would then require sufficient willing volunteers to install it and undermine the contact tracing system, for somewhat unclear motivations. The more likely scenario is that it is sneaked in as malware inside some otherwise innocuous app. So it’s vitally important that the app security analysis and screening processes are updated to account for this risk, perhaps scanning for Bluetooth code that might be associated with contact tracing app spoofing. We know that it will always be possible to build hardware devices (such as Raspberry Pis) that spoof the protocol, but the reach of any attack involving the need to buy and disperse real hardware is going to be limited, and more detectable.

Secondly, we might want to make contact tracing apps collect some statistics about how many individuals are impacted by a particular exposure event. If there are implausibly many then it can be concluded that the exposure has been falsely amplified. Ideally we’d like to collect geographic locations too, to verify that contacts are not geographically dispersed, but of course collecting and distributing location information is exactly what we want to avoid in a privacy preserving approach.
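As a rough illustration of the statistics idea, assuming apps reported anonymous match counts back to the server (the threshold and names below are ours, not part of any published protocol):

```python
# Heuristic sketch: flag uploaded keys whose reported contact counts suggest
# relay-style amplification rather than genuine proximity.

MAX_PLAUSIBLE_CONTACTS = 500  # illustrative: tune to realistic crowd sizes

def amplified_keys(matches_per_key: dict) -> set:
    """Return the keys matching implausibly many distinct devices."""
    return {key for key, count in matches_per_key.items()
            if count > MAX_PLAUSIBLE_CONTACTS}

# One key matched 12 devices (plausible); another matched 40,000 (suspect).
suspect = amplified_keys({b"key-a": 12, b"key-b": 40000})
print(suspect)  # {b'key-b'}
```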

Thirdly, we can make any such attempt to spoof the protocol much harder by binding the Bluetooth advertising or GATT Characteristic accesses to the MAC addresses being used. The MAC address is a random 48-bit value that changes every 15 minutes or so to prevent individual devices being tracked. The same motivation exists for changing the contact tracing Bluetooth identifiers at a similar rate (and it is why we think it extremely unwise that the UK’s NHSX app does not do the same). If the contact tracing protocol includes the MAC address (or a subset or hash of it), the real contact tracing apps could ignore any contacts where the address in the contact tracing protocol doesn’t match the underlying MAC address that the packet was advertised upon. This adds additional protection, because it is not possible for apps to influence the MAC address that was used. Of course, we would also need to encrypt the MAC address portion of the protocol using the keys that are only made available if the user becomes infected, or else the fake apps could simply exchange that part for their own MAC identity. So we might not be able to tell if a given proximity event was faked until later, when we know the keys. But at least we would know, and could avoid acting upon them.
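A minimal sketch of how such a binding could work, using a keyed hash derived from the daily seed in place of encryption (all names and sizes here are our assumptions, not a published specification):

```python
import hashlib
import hmac

# Sketch of MAC binding: the advertised payload carries a short keyed tag over
# the random MAC address the packet is sent from. A relaying device cannot
# recompute the tag for its own MAC without the seed, and receivers can verify
# the binding retrospectively once an infected user's seed is published.

def mac_binding_tag(daily_seed: bytes, mac: bytes) -> bytes:
    """Tag to embed in the advertisement alongside the rolling identifier."""
    key = hmac.new(daily_seed, b"mac-binding", hashlib.sha256).digest()
    return hmac.new(key, mac, hashlib.sha256).digest()[:4]  # truncated tag

def binding_valid(daily_seed: bytes, observed_mac: bytes, tag: bytes) -> bool:
    """Run once the seed is known: does the tag match the MAC address the
    receiver actually saw the packet advertised from?"""
    return hmac.compare_digest(mac_binding_tag(daily_seed, observed_mac), tag)
```

For this to work the receiving app would have to record the MAC address each packet actually arrived on at contact time, so that the check can be run when the seeds are later released.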

A further complication is that it is actually quite difficult for an app to even discover its own Bluetooth MAC address. Since Android O the LOCAL_MAC_ADDRESS permission is needed, and iOS has always hidden the actual MAC address, abstracting it through a UUID value instead. So this sort of protection is quite challenging to implement for the many contact tracing apps building their own Bluetooth communication stacks. However, Google and Apple could do this with their Exposure Notification approach, since their implementation has much more direct access to these facilities. Indeed, their Bluetooth protocol even includes an Associated Encrypted Metadata section which could hold a subset of the MAC address.

If it is possible for other apps to spoof advertising packets that would trick the exposure notification system into accepting them as valid, then there is a risk of a relay attack. Binding the MAC address would be a good countermeasure against any scaled attack using malicious apps. If you can’t make a valid-looking Bluetooth packet, then you can’t fake the proximity event.

Richard Taylor

- CTO and Co-Founder at Approov Ltd
Chief Technical Officer with more than 30 years of industry experience. Background in compiler optimization and processor architecture, working more recently in application security and cloud computing technologies. Richard co-founded and is CTO of Approov Mobile Security (previously Critical Blue Ltd) and has led a number of innovative product developments in the areas of EDA, software optimization and remote software attestation.