
ChatGPT and API Security


First of all, this blog was written by a human being! Now that that's out of the way, let's get to our main topic for today: taking a look at ChatGPT and using it to understand some key aspects of mobile API security.

For those of you who have been living on another planet, ChatGPT has in the space of a few weeks become a global phenomenon, and has caused no end of hand-wringing: it has been used to write essays, churn out marketing content, and even compose songs. The results are often impressive, sometimes laughable, but they have generated so much interest that OpenAI stopped all free signups to the service.

There is also no doubt in my mind that ChatGPT will have attracted the attention of bad actors as well as genuinely interested users.

Let me break down why that is, by looking at the Motive, the Means, and the Method as they apply to hacking ChatGPT.

The Motive - Why Would Someone Want to Hack ChatGPT?

Maybe it's not that obvious, but in fact there are a number of reasons:

  • The first is “because it's there”, and it's high-profile. That alone will be motivation enough for some hackers to try to interfere with the service. OpenAI is closing down its free tier after reputedly running up cloud costs of $100K per day ($3M per month) for genuine users, and this certainly gives an indication of the damage and financial costs that could be incurred if and when the service is attacked with bots and scripts.
  • There are also opportunities for financial gain. For a start, I could aim to avoid paying for the service: OpenAI is moving to a subscription model, and there are likely to be all sorts of attempts to get free access. There have already been some attempts at reverse engineering the free tier.
  • I could also steal data and monetize it. It does not take much imagination to see that information about who is using the service, and what they are doing with it, could have value and be sold on the “dark web”.

The Means - What Tools Could a Hacker Use?

Unfortunately, we only need one acronym to describe the means - API.

Like everyone else, OpenAI publishes the APIs to the ChatGPT service and uses API keys to authenticate access.
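To make this concrete, here is a minimal sketch of what an authenticated call looks like, written in Kotlin using the JDK's built-in HTTP client. The endpoint and model name follow OpenAI's published REST conventions but may differ in detail; the point is that the bearer token in the Authorization header is the only credential in play.

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

fun main() {
    // Read the key from the environment rather than hardcoding it.
    val apiKey = System.getenv("OPENAI_API_KEY") ?: error("OPENAI_API_KEY not set")

    val body = """{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "Hello!"}]}"""

    val request = HttpRequest.newBuilder()
        .uri(URI.create("https://api.openai.com/v1/chat/completions"))
        // Whoever presents this key IS you, as far as the API is concerned.
        .header("Authorization", "Bearer $apiKey")
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build()

    val response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
    println(response.body())
}
```

Notice that there is nothing else identifying the caller: possession of the key is the whole story.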

But suppose I can get my hands on a ChatGPT API key: what could I do with it? Well, here's a quote from the ChatGPT documentation itself: “A compromised API key allows a person to gain access to your account quota, without your consent. This can result in data loss, unexpected charges, a depletion of your monthly quota, and interruption in your API access.”

In other words, if I could steal an API key, I could create scripts and bots that look genuine, access the API, steal data, derail the service, and use up quota… This is not a good scenario!

To be fair, OpenAI does provide some advice about keeping your API keys secret:

https://help.openai.com/en/articles/5112595-best-practices-for-api-key-safety

But one thing they don't cover is the case of a mobile app accessing the API, which is strange because that has to be one of the biggest use cases.

Their authentication advice does suggest that API keys should be kept out of client code and that “requests must be routed through your own backend server where your API key can be securely loaded”. Using a proxy like this is good advice, but it really only moves the problem rather than solving it: how do you stop bad actors from accessing the proxy and then using your ChatGPT API anyway?
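To see what that advice looks like in practice, here is a minimal proxy sketch in Kotlin using the JDK's bundled HttpServer; the route name and port are illustrative. The key lives only on the server, so it never ships inside the app, but nothing here yet stops a bot from calling the proxy directly, which is exactly the residual problem.

```kotlin
import com.sun.net.httpserver.HttpServer
import java.net.InetSocketAddress
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// The secret key is loaded server-side only; the mobile app never sees it.
val apiKey: String = System.getenv("OPENAI_API_KEY") ?: error("OPENAI_API_KEY not set")
val upstream: HttpClient = HttpClient.newHttpClient()

fun main() {
    val server = HttpServer.create(InetSocketAddress(8080), 0)
    server.createContext("/chat") { exchange ->
        // Forward the client's JSON body upstream, attaching the key server-side.
        val clientBody = exchange.requestBody.readBytes()
        val request = HttpRequest.newBuilder()
            .uri(URI.create("https://api.openai.com/v1/chat/completions"))
            .header("Authorization", "Bearer $apiKey")
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofByteArray(clientBody))
            .build()
        val response = upstream.send(request, HttpResponse.BodyHandlers.ofString())
        val payload = response.body().toByteArray()
        exchange.sendResponseHeaders(response.statusCode(), payload.size.toLong())
        exchange.responseBody.use { it.write(payload) }
    }
    server.start()
    println("Proxy listening on :8080 (but who is allowed to call it?)")
}
```

Without some way of verifying that requests really come from your untampered app, the proxy's own endpoint simply becomes the new target.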

If I am creating a great new mobile app which uses the ChatGPT API, how do I keep access to that precious ChatGPT account safe? We know that there are many ways to reverse engineer secrets out of mobile apps, or to steal them at runtime.

So how could I get my hands on the API key for the ChatGPT APIs? That brings us to the method.

The Method - How Will the Attack Be Carried Out?

There are two main ways to steal secrets from a mobile app:

The first is static analysis: inspecting the source code and other components of the mobile app package for exposed secrets. Obfuscation or code hardening provides some protection against static reverse engineering, and may deter hobbyists and less serious attackers who rely on simple tools to extract keys from the app. However, more sophisticated techniques can render obfuscation ineffective.
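As an illustration of what static analysis finds, consider this anti-pattern (the key value is hypothetical). Compiled into the app package, the constant survives as a plain string that trivial extraction tools will locate, and obfuscators generally rename classes and methods without touching string literals.

```kotlin
// Anti-pattern, with a hypothetical key: this constant ends up as a plain
// string in the compiled app package, where basic static tooling can find it.
object ApiConfig {
    const val CHATGPT_API_KEY = "sk-hypothetical-example-key"
}
```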

The second is to steal secrets at runtime. The unfortunate thing is that both the mobile app code and the environment it runs in are exposed and can be manipulated, so API keys can be stolen during execution by instrumenting the application, modifying the environment, or intercepting messages from the app to the backend via Man-in-the-Middle attacks.
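As a simplified sketch of the runtime problem, assuming the app uses the popular OkHttp client: any code running inside the app's process, whether a malicious repackaged dependency or hooks injected by an instrumentation framework, sees requests in the clear before TLS encryption is applied.

```kotlin
import okhttp3.Interceptor
import okhttp3.Response

// Simplified sketch: an interceptor registered inside the app's process
// observes every request before it is encrypted for the network.
class SnoopingInterceptor : Interceptor {
    override fun intercept(chain: Interceptor.Chain): Response {
        val request = chain.request()
        // The bearer token is visible at runtime, however well it was hidden at rest.
        println("Captured: ${request.header("Authorization")}")
        return chain.proceed(request)
    }
}
```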

Once the hacker has their hands on API keys, they can easily build them into scripts and bots to access the APIs. And if the ChatGPT APIs (or proxies to them) are being used in a mobile app, keys will probably be exposed unless steps are taken to prevent this.

What steps, you ask?!

How Can ChatGPT (and you) Protect APIs from Abuse?

To prevent ChatGPT or any other APIs from being abused via the mobile channel, you should obviously never store keys in your mobile app code, but that is only the start. You must also do the following at runtime:

  • Systematically check that any request to the API is genuine and not coming from a bot, a script, or a repackaged app.
  • Be able to detect unsafe operating environments on the client device, such as rooted/jailbroken devices, apps running under debuggers or emulators, or the presence of malicious frameworks.
  • Protect the path from the app to the API against Man-in-the-Middle attacks, typically via certificate pinning (see the sketch after this list).
  • Use a secrets management solution which manages API keys and certificates securely in the cloud, delivers them “just-in-time”, and allows them to be easily rotated over-the-air as required.
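On the Man-in-the-Middle point, here is a minimal certificate pinning sketch using OkHttp; the hostname and pin are placeholders. The pin is the base64-encoded SHA-256 hash of the server certificate's public key, so the client refuses to talk to any server, including an intercepting proxy, that presents a different certificate chain.

```kotlin
import okhttp3.CertificatePinner
import okhttp3.OkHttpClient

// Placeholder hostname and pin: use the hash of your own server's public key.
val pinner = CertificatePinner.Builder()
    .add("api.yourproxy.example", "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=")
    .build()

// Requests through this client fail fast if the TLS chain doesn't match the pin.
val client = OkHttpClient.Builder()
    .certificatePinner(pinner)
    .build()
```

Hardcoded pins create their own update problem when certificates rotate, which is one reason a managed, over-the-air approach to pins and secrets (the last point above) is attractive.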

If you do all these things, your mobile apps AND their APIs will be safe. Let's hope our friends at OpenAI take our advice and enhance their security guidelines.


George McGregor

- VP Marketing, Approov
George is based in the Bay Area and has an extensive background in cybersecurity, cloud services, and communications software. Before joining Approov, he held leadership positions at Imperva, Citrix, Juniper Networks, and HP.