First of all, this blog was written by a human being! Now that that's out of the way, let's get on to today's main topic: taking a look at ChatGPT and using it to understand some key aspects of mobile security.
For those of you who have been living on another planet, ChatGPT has in the space of a few weeks become a global phenomenon and has also caused no end of hand-wringing: it has been used for essay writing, to churn out marketing content, and even to write songs. The results are often impressive, sometimes laughable, but they have generated so much interest that OpenAI stopped all free signups to the service.
There is also no doubt in my mind that ChatGPT will have attracted the attention of bad actors as well as genuine interested users.
Let me break down why that is by looking at the Motive, the Means and the Method of hacking ChatGPT.
Maybe it's not that obvious, but in fact there are a number of reasons: free access to a paid service, the chance to steal data, and the opportunity to disrupt the service for everyone else.
Unfortunately, we only need one acronym to describe the means: API.
Like everyone else, OpenAI publishes the APIs to the ChatGPT service and uses API keys to authenticate access.
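To make this concrete, here is a minimal sketch of how a client typically authenticates to the OpenAI API: the key travels in an HTTP bearer header on every request. The key below is a fake placeholder, and the helper function is illustrative rather than part of any official SDK.

```python
# Sketch: how a client typically authenticates to the OpenAI API.
# The key below is a fake placeholder - real keys must never be hard-coded.
API_KEY = "sk-FAKE-EXAMPLE-KEY"

def build_request_headers(api_key: str) -> dict:
    """Build the HTTP headers the OpenAI API expects: a bearer token."""
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

headers = build_request_headers(API_KEY)
print(headers["Authorization"])  # Bearer sk-FAKE-EXAMPLE-KEY
```

Note that anyone who sees this header value owns the account's quota — which is exactly why it matters where the key lives.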
But supposing I can get my hands on a ChatGPT API key, what could I do with it? Well here's a quote from the ChatGPT documentation itself: “A compromised API key allows a person to gain access to your account quota, without your consent. This can result in data loss, unexpected charges, a depletion of your monthly quota, and interruption in your API access.”
In other words, if I could steal an API key I could create scripts and bots that look genuine, access the API, steal data, derail the service, and use up quota. This is not a good scenario!
To be fair, OpenAI do provide some advice about keeping your API keys secret. But one thing they don't cover is the case of a mobile app accessing the API, which is strange because that has to be one of the biggest use cases.
Their authentication advice does suggest that API keys should be kept out of client code, and “requests must be routed through your own backend server where your API key can be securely loaded”. Using a proxy like this is good advice but really only moves, rather than solves, the problem. How do you stop bad actors accessing the proxy and then using your ChatGPT API anyway?
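The proxy pattern itself is easy to sketch. In this simplified simulation (the function names and request shapes are assumptions for illustration, not a real framework API), the mobile client's request contains no secret at all, and only the backend attaches the key before forwarding upstream:

```python
import os

# Sketch of the recommended proxy pattern: the mobile app calls YOUR backend,
# and only the backend ever holds the real OpenAI key.

# Loaded server-side only; the fallback is a fake placeholder for this demo.
OPENAI_KEY = os.environ.get("OPENAI_API_KEY", "sk-FAKE-KEY")

def mobile_client_request(prompt: str) -> dict:
    # What the app sends over the wire: note there is NO API key in here.
    return {"path": "/v1/proxy/chat", "body": {"prompt": prompt}}

def backend_proxy(request: dict) -> dict:
    # The backend attaches the secret before forwarding to api.openai.com.
    return {
        "url": "https://api.openai.com/v1/chat/completions",
        "headers": {"Authorization": f"Bearer {OPENAI_KEY}"},
        "body": request["body"],
    }  # in a real server this would be sent onward with an HTTP client

req = mobile_client_request("Hello")
upstream = backend_proxy(req)
```

This shows why the proxy only moves the problem: the key is safe, but anything that can reach `/v1/proxy/chat` still gets ChatGPT answers on your dime.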
If I am creating a great new mobile app which uses the ChatGPT API, how do I keep access to that precious ChatGPT account safe? After all, we know that there are many ways to reverse engineer secrets out of mobile apps or steal them at runtime.
So how could I get my hands on the API Key for the ChatGPT APIs? That brings us to the method.
There are two main ways to steal secrets from a mobile app:
The first is static analysis: inspecting the source code and other components of the mobile app package for exposed secrets. Obfuscation, or code hardening, provides some protection against this kind of static reverse engineering and may deter hobbyists and less serious attackers using simple tools to lift keys out of the app. However, more sophisticated techniques can render obfuscation ineffective.
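Static extraction can be as crude as grepping the app package for anything shaped like a key. Here is a toy sketch of that idea; the regex is an assumption based on the familiar "sk-" key prefix, and real scanners use many such fingerprints plus decompilation:

```python
import re

# Sketch: how an attacker (or a pre-release audit) might scan an app package
# for embedded OpenAI-style keys. The "sk-..." pattern is an assumption.
KEY_PATTERN = re.compile(rb"sk-[A-Za-z0-9]{20,}")

def scan_for_keys(package_bytes: bytes) -> list:
    """Return any byte strings in the binary that look like API keys."""
    return KEY_PATTERN.findall(package_bytes)

# Simulated app binary with a fake key buried inside it.
fake_package = b"\x00\x01config=prod;key=sk-ABCDEF0123456789abcdef;\x7f"
print(scan_for_keys(fake_package))
```

The same one-liner mindset applies to resource files, config plists and shared preferences — which is why "hide it somewhere in the package" is not a defense.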
The second method is to steal secrets at runtime. The unfortunate truth is that both the mobile app code and the environment it runs in are exposed and can be manipulated. So at runtime, API keys can be stolen during execution by instrumenting the application, modifying the environment, or intercepting messages between the app and the backend via Man-in-the-Middle (MitM) attacks.
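To see how little effort the interception route takes: once a MitM proxy has stripped TLS, the bearer token sits in plain text in the request headers. This sketch simulates pulling it out of a captured request; the traffic below is fabricated for illustration:

```python
import re

# Sketch: extracting a bearer token from an intercepted (TLS-stripped)
# HTTP request, as a MitM proxy script might. The capture is fabricated.
captured_request = (
    "POST /v1/chat/completions HTTP/1.1\r\n"
    "Host: api.openai.com\r\n"
    "Authorization: Bearer sk-FAKE-INTERCEPTED-KEY\r\n"
    "Content-Type: application/json\r\n\r\n"
)

def extract_bearer_token(raw_http: str):
    match = re.search(r"Authorization:\s*Bearer\s+(\S+)", raw_http)
    return match.group(1) if match else None

print(extract_bearer_token(captured_request))  # sk-FAKE-INTERCEPTED-KEY
```

Off-the-shelf interception proxies make this point-and-click, which is why channel protection matters as much as key storage.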
Once the hacker has their hands on API keys they can easily build them into scripts and bots to access the APIs. And if the ChatGPT APIs (or proxies to them) are being used in a mobile app, keys will probably be exposed unless steps are taken to prevent this.
What steps, you ask?
To prevent ChatGPT or any other APIs being abused via the mobile channel you should obviously never store keys in your mobile app code, but that is only the start. At runtime you must also verify that API requests come from a genuine, untampered instance of your app, check that the device environment has not been compromised by rooting, debuggers or instrumentation frameworks, and protect the channel to your backend against Man-in-the-Middle interception, for example with certificate pinning.
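One common way to enforce checks like these is to gate the backend proxy with a short-lived signed token that is only issued to an app instance that has passed verification. The following is a deliberately simplified HMAC sketch of that idea, not a full attestation scheme; the secret, lifetime and function names are all assumptions for illustration:

```python
import hmac
import hashlib
import time

# Simplified sketch: the backend only serves requests carrying a short-lived
# token signed with a secret the mobile app never holds. A real deployment
# would issue this token only after verifying the app and its environment.
SIGNING_SECRET = b"server-side-secret"   # placeholder, never shipped in the app
TOKEN_LIFETIME = 300                     # seconds; an assumed value

def issue_token(now: int) -> str:
    """Issued server-side after the app instance passes verification."""
    expiry = str(now + TOKEN_LIFETIME)
    sig = hmac.new(SIGNING_SECRET, expiry.encode(), hashlib.sha256).hexdigest()
    return f"{expiry}.{sig}"

def verify_token(token: str, now: int) -> bool:
    """Checked by the proxy on every request before touching the real API."""
    expiry, _, sig = token.partition(".")
    if not expiry.isdigit() or int(expiry) < now:
        return False  # expired or malformed
    expected = hmac.new(SIGNING_SECRET, expiry.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

now = int(time.time())
token = issue_token(now)
print(verify_token(token, now))        # True
print(verify_token(token + "x", now))  # False - tampered signature
```

Because the token expires quickly and the signing secret stays server-side, a stolen token is worth far less than a stolen API key.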
If you do all these things, your mobile apps AND their APIs will be far better protected. Let's hope our friends at OpenAI take our advice and enhance their security guidelines.