We are all very aware of the issues around phishing of user credentials. But it is not only users that can be phished: apps can be too. In previous blogs we’ve shown how to mount a MITM attack against an app. In this blog we’ll demonstrate that a MITM attack against an app is analogous to a phishing attack against a human, and that the protections required share some similar characteristics.
If you receive a link in an email or an SMS (so-called smishing) then you might be taken in by it and follow the link. The URL will likely be cleverly crafted to look like the real thing. If you’re directed to a page that seems believable and closely resembles the brand identity of whatever it is purporting to be, you might be fooled into entering your username and password. Or worse, your credit card details. We’ve all probably been at least close to making that mistake.
User credentials extracted in this way can then be used by an attacker to take over the real account. If you are responsible for the security of such a service there isn’t much you can do about the original phishing attempt - that’s completely outside of your control as it doesn’t depend on any of your services. But you can block the subsequent login attempt by employing a second factor. This typically takes the form of a one-time code sent by SMS or email that is required to gain access to the account. An attacker can’t simply phish this information too; at the very least, a far more elaborate scam, or malware, is required. The second factor is a very effective and now somewhat ubiquitous defense.
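To make the second factor concrete: many services simply generate a random code server-side and text it to the user, while authenticator apps use a standardized algorithmic variant. The sketch below implements HOTP-style code generation in the spirit of RFC 4226, using only the Python standard library; the secret and digit count are illustrative.

```python
import hmac
import hashlib
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA1 over the big-endian 8-byte counter, per RFC 4226.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation: low nibble picks the offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The server and the user's device derive the same code from a shared
# secret; an attacker who phished only the password cannot compute it.
```

The key property is that the code is derived from a secret that never crosses the phishing page, which is exactly why a stolen password alone is not enough.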
However, it’s not only us fallible humans that can be tricked into leaking secrets - apps can be too. That’s really what a Man-in-the-Middle (MITM) attack is - a phishing expedition targeted at revealing your app’s secrets. In our recent blog How To MITM Attack the API of an Android App we show how easy it is to MITM traffic going from your app to its backend APIs and harvest any information, such as API keys, that is transmitted. With this information in hand an attacker can then spoof the identity of your app.
A MITM attack works by convincing your app that the self-signed certificate from the MITM proxy is as trusted as your real backend API. The proxy can then impersonate your API, in much the same way that a phishing site impersonates a service by copying its logo, layout and other branding. Humans put misplaced trust in visuals. Apps put misplaced trust in root certificates on devices.
Pinning Doesn’t Prevent API Key Extraction
As we show in the next blog in our series, How to Bypass Certificate Pinning with Frida on an Android App, you can improve your app’s security posture by using pinning. Pinning forces the app to trust only the specific certificates that it knows are associated with the API backend, so it is not so easily fooled by rogue MITM certificates installed on the device. Unfortunately, though, the code that checks this is built into your app and is subject to manipulation by an advanced code injection tool like Frida, which can change your app’s code on the fly. This tricks your app into still trusting the MITM proxy.
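At its core, pinning is just a comparison between a hash of the presented certificate and a hash baked into the app. The sketch below shows that comparison in the pin format popularized by HPKP and OkHttp; it is a simplification - real implementations hash the certificate’s SubjectPublicKeyInfo (SPKI) rather than arbitrary bytes, and on Android you would normally use OkHttp’s CertificatePinner or the network security config rather than hand-rolling this.

```python
import base64
import hashlib

def cert_pin(spki_der: bytes) -> str:
    # Pin format: "sha256/" + base64 of the SHA-256 digest. Extracting the
    # SPKI bytes from a full certificate is left out of this sketch.
    digest = hashlib.sha256(spki_der).digest()
    return "sha256/" + base64.b64encode(digest).decode()

def connection_allowed(spki_der: bytes, pinned: set[str]) -> bool:
    # Proceed with the TLS connection only if the presented key material
    # matches one of the pins shipped inside the app.
    return cert_pin(spki_der) in pinned
```

Because this check lives inside the app binary, a tool like Frida can simply hook `connection_allowed` (or its platform equivalent) and force it to return true - which is exactly the bypass the next blog in the series demonstrates.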
You can think of pinning as a bit like the warnings you might get from your email client if there is something a bit “phishy” about an email. The warning causes some friction, but somehow when the best-crafted email meets the most gullible user, click-through still occurs.
Hardcoding Isn’t the Key Issue
There is much discussion about the issue of hardcoding API keys in apps. A recent exposé on the massively popular Roblox app concluded that it contains hardcoded API keys, although it’s unclear what API doors those particular keys unlock. The recently launched Bevigil app scanning and search tool allows easy secret extraction from any Android app, and indeed searching across its whole decompiled code base. It’s obviously good practice to use an obfuscation or hardening tool to put such keys out of plain sight. But they can still be discovered.
Hardcoding an API key, or some other crucial secret, in your app is akin to writing down a password on a post-it note and sticking it on your monitor. It’s an embarrassingly public lapse of security that you should avoid. But whether our monitors are adorned with all our passwords or not, we are still just as susceptible to a phishing attack. Banning sticky notes, or removing hardcoded keys, doesn’t avoid the need for those fixed credentials to be transmitted, where they might be intercepted.
The Second Factor for Apps
The bottom line is that API keys can be extracted from your app, and you probably won’t be able to stop that. Username and password credentials can be extracted from unfortunate users too. Broadly, the solution is the same - add another factor that uses an independent channel of communication.
An API key provides the identity of the client app making an API call, but to truly trust it you also need a second factor for confirmation. This is what Approov does: it is a user-invisible second factor for your apps. When your app requests an Approov token, this causes a communication with our servers via the Approov SDK embedded in your app. It initiates a complex and heavily defended integrity measurement process whereby your app needs to prove to the Approov service that it is indeed an official untampered version, and that it is running in an uncompromised environment. If it can’t prove that then it won’t get a valid Approov token, just as phished credentials alone won’t get an attacker the OTP code sent over SMS. The Approov server acts as the second factor, providing a short-lived signed token only to passing apps.
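On the backend side, checking such a token is straightforward: verify the signature with a secret shared between the attestation service and your API, and reject anything expired. As an illustrative sketch only (not Approov’s actual implementation - in production you would use a maintained JWT library), here is minimal HS256-style verification using just the standard library; the signing helper exists only to make the sketch self-contained, since in practice only the attestation server signs.

```python
import base64
import hashlib
import hmac
import json
import time

def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def _b64url_decode(part: str) -> bytes:
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

def make_token(claims: dict, secret: bytes) -> str:
    # Normally done only by the attestation server; included for the demo.
    signing_input = (_b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
                     + "." + _b64url(json.dumps(claims).encode()))
    sig = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + _b64url(sig)

def verify_token(token: str, secret: bytes) -> bool:
    # The API backend checks the signature and the short expiry before
    # serving the request; no valid token, no service.
    try:
        header_b64, payload_b64, sig_b64 = token.split(".")
    except ValueError:
        return False
    expected = hmac.new(secret, f"{header_b64}.{payload_b64}".encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, _b64url_decode(sig_b64)):
        return False
    claims = json.loads(_b64url_decode(payload_b64))
    return claims.get("exp", 0) > time.time()  # reject expired tokens
```

The short expiry matters: even if one token leaks, it is useless moments later, so an attacker cannot harvest and replay it the way a static API key can be.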
In the case of Approov, the token from the server is passed back to your API over the same communication channel. To prevent any possibility of interception at that point, apps employ pinning along with runtime detection of any compromise to that pinning. Compromised apps are never provided with a valid token in the first place.
The bottom line is that fixed credentials, either those of a user or the API keys embedded in an app, can be intercepted. In both cases, and in somewhat analogous ways, you need to employ a second factor to ensure that this doesn’t undermine security.