
If You Can't Make It, Fake It

[Image: group of people represented by wooden figures]

As social media platforms continue to experience incredible growth in popularity, the supporting apps and the APIs that service them remain top targets for bad actors. The ability to communicate quickly and indirectly with the platforms’ vast user bases makes them ideal for spreading malware, phishing attacks, or fake news. Networks of automated accounts that gain artificial levels of popularity and influence are often used to instigate attacks, and Facebook’s recent admission that Kremlin-linked propaganda may have been seen by as many as 126 million users gives some idea of the scale of the threat and the ambition of the attackers.

While the motivation and strategy behind these attacks vary, all can have a significant impact on the platform, including damage to brand reputation, revenue loss, and increased running costs due to illegitimate traffic.

Tools that automate account activity rely on direct access to backend servers to perform their actions. They imitate legitimate behaviour by making the same API calls that genuine apps make, in the same way.
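
To make this concrete, here is a minimal sketch, in Python, of the kind of script such tools run. The endpoint, headers, and payloads are entirely hypothetical and invented for this post; the point is that the bot replays the same login and follow calls a genuine app would make, so from the backend's point of view the traffic looks legitimate.

```python
# Illustrative sketch only: a hypothetical social platform API, not any real
# platform's endpoints. The script simply replays the HTTPS calls a genuine
# mobile app makes, so the backend alone cannot tell the two apart.
import requests

API_BASE = "https://api.example-social.com/v1"   # hypothetical backend

session = requests.Session()
session.headers.update({
    # Copied verbatim from the genuine app's traffic (e.g. via an intercepting proxy)
    "User-Agent": "ExampleSocial/4.2.1 (Android 13)",
    "Accept": "application/json",
})

# Log in exactly as the real app does...
resp = session.post(f"{API_BASE}/login",
                    json={"username": "fake_account_001", "password": "not-a-real-password"})
token = resp.json()["access_token"]

# ...then drive engagement automatically: follow a list of target accounts.
for user_id in ["1001", "1002", "1003"]:
    session.post(f"{API_BASE}/users/{user_id}/follow",
                 headers={"Authorization": f"Bearer {token}"})
```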

While it remains important to defend against more traditional API attacks, such as data mining, account takeover, DDoS, and malware distribution, it is becoming increasingly important for social media platforms to defend against the more subtle account automation attacks described above.

Here we briefly discuss two case studies that illustrate automated systems in the social media sector: first, the rising problem of fake followers on Instagram, and second, the fake account onboarding faced by Nimses. In both cases, unchecked automated activity has the potential to damage both platform growth and brand reputation, and therefore to result in revenue loss.

Why Are You Following Me?

With over 700 million active users as of August 2017, Instagram continues to grow at a phenomenal rate, and remains one of the preferred platforms for social media influencers.

With advertiser spend on Instagram influencers estimated to reach $2 billion per year by 2019, there is a growing incentive for users to buy fake followers and engagement to artificially boost their influence, enabling them to attract bigger brands and achieve higher rates of sponsorship.

Instagram’s brand reputation and revenue are at risk if advertisers continue to lose out by investing in falsely inflated influencers. Additionally, genuine users may miss out on sponsorship deals if their influence is undermined and their credibility is in doubt.

Instagram will be keen to avoid further bad publicity, having suffered a security breach earlier this year. Hackers exploited a bug in the platform’s API to access data containing the email addresses and phone numbers for some accounts. Instagram initially thought the attack only targeted celebrity accounts but later found that private data from up to 6 million Instagram accounts had been compromised.

One approach Instagram have taken is to introduce very restrictive terms of service. Since any sort of automation activity violates those terms, there is a risk that genuine users may find their accounts flagged or even suspended.

Regulating bots is challenging. Existing anti-bot solutions use behavioural analysis to detect automated systems; however, bots are becoming increasingly sophisticated and adapt their behaviour to appear human where necessary. In turn, behavioural approaches must become increasingly sensitive, resulting in a higher false positive rate.
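
As a rough illustration of why this is hard, the sketch below shows a deliberately naive behavioural heuristic. The features and thresholds are invented for this example and are not taken from any real product; the point is that tightening them to catch human-like bots also starts to flag unusually active genuine users.

```python
# A deliberately naive sketch of behavioural bot detection: score an account
# by how rapid and how regular its actions are. Feature choices and thresholds
# are invented for illustration only.
from statistics import pstdev

def looks_automated(action_timestamps, max_actions_per_min=30, min_jitter_s=0.5):
    """Return True if the account's activity pattern looks scripted."""
    if len(action_timestamps) < 2:
        return False
    gaps = [b - a for a, b in zip(action_timestamps, action_timestamps[1:])]
    avg_gap = sum(gaps) / len(gaps)
    rate = 60 / avg_gap if avg_gap else float("inf")   # average actions per minute
    jitter = pstdev(gaps)                               # humans are irregular, scripts are not
    # Lowering max_actions_per_min or raising min_jitter_s catches more bots,
    # but also misclassifies more highly active, genuine power users.
    return rate > max_actions_per_min or jitter < min_jitter_s
```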

In what appeared to be a recent crackdown by Instagram on bots, many users complained of being victims of shadowbanning. This transpired to be a change to Instagram’s algorithm to identify quality content and reduce the automated systems offering paid follower growth. Although the change was aimed at providing users with a more authentic experience, many legitimate influencers with genuine audiences felt they were being penalised as their engagement rates dropped.

Protect Your Mobile Revenue

Approov, developed by CriticalBlue, is an effective anti-bot solution for mobile APIs. Because it is not based on behavioural analysis, it does not suffer from false positives. Approov adopts a positive authentication model, allowing enterprises to identify the software being used to communicate with backend servers before granting access, so scripts and bots can be blocked. Approov does not require a static secret to be stored in the app; its dynamic integrity check uses a patented, low-level approach built on many years of software analysis experience. It is simple to integrate, easy to deploy, and has no impact on user experience.
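
As a rough sketch of how a positive authentication model looks from the backend's side, the example below rejects any request that does not carry a valid, short-lived attestation token. The header name, token format, secret handling, and web framework are illustrative assumptions for this post, not a description of Approov's actual integration.

```python
# Minimal sketch of the server side of a positive authentication model: only
# requests carrying a valid, short-lived attestation token reach the API.
# Header name, token format and framework are assumptions made for this example.
import os
import jwt                     # PyJWT
from flask import Flask, request, abort

app = Flask(__name__)
ATTESTATION_SECRET = os.environ["ATTESTATION_SECRET"]   # shared with the attestation service

@app.before_request
def require_attestation_token():
    token = request.headers.get("X-Attestation-Token", "")
    try:
        # Signature and expiry check: a script that cannot pass attestation
        # cannot obtain a valid token, so its traffic is rejected here.
        jwt.decode(token, ATTESTATION_SECRET, algorithms=["HS256"])
    except jwt.InvalidTokenError:
        abort(401)

@app.route("/v1/follow", methods=["POST"])
def follow():
    # Only reached by traffic that presented a valid attestation token.
    return {"status": "ok"}
```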

Short of Time? Steal It

Nimses experienced rapid growth when they launched their social media platform; however, they quickly became a target of automated attacks against their API. In Nimses, users can monetize their time once they install and launch the app: once registered, a user gains one Nim for every minute of their life. Nims are a form of cryptocurrency that can be used to buy goods and services, and it is this monetization that made Nimses a prime target for hackers.

Within weeks of their launch, articles on how to scam the system by creating fake accounts to get free Nims were readily available online. With user experience suffering and the environment becoming polluted, Nimses needed a quick and effective solution to stop the bots. Using Approov to authenticate legitimate traffic, they were able to identify genuine users and stop the fake accounts abusing their API. Nimses went from initial contact with us to a fully deployed solution in just over one week.

Approov delivers bot mitigation by ensuring that only legitimate traffic can access the API:

  • stop automated systems without penalising genuine users
  • prevent illegitimate traffic from clogging the API backend, keeping costs proportional to legitimate use
  • prevent scraping of sensitive data
  • manage access to your API from trusted 3rd party developers

Approov prevents API abuse by both automated software agents and unauthorized 3rd party apps, allowing enterprises to take back control of not just who but also what is accessing their API.


Shona Hossell