Since its release last week, ChatGPT has quickly captured the internet’s attention for its uncanny ability to generate human-like responses to a wide array of complex questions. While it’s far from perfect, ChatGPT is the first large-scale deployment of GPT-3.5, the latest AI advancement from OpenAI. For those familiar with OpenAI’s previous model, GPT-3, the improvement is startling. GPT-3 (short for “Generative Pre-trained Transformer 3”) was already an impressive feat, but it feels like a toy compared to the surprisingly lucid experience of interacting with GPT-3.5. In many ways, ChatGPT feels like a watershed moment for artificial intelligence.
And while we’re only able to speculate on many of the impacts that AI tech advancements may have, one thing is already clear – developers are eager to incorporate the APIs (Application Programming Interfaces) behind ChatGPT into existing and new applications in order to improve current workflows and introduce new, previously unimaginable user experiences. In just 5 days, ChatGPT amassed over 1 million users. Many of these early adopters are developers who are tinkering with the technology and exploring how it could be incorporated into various B2C and B2B use cases.
The developer excitement behind these new capabilities is no surprise. In the 2010s, companies like Stripe, Twilio, and Plaid introduced new superpowers to developers through APIs that simplified embedded payments, telecommunications, and third-party data access into applications, and these building blocks unleashed a wave of application innovation. Similarly, in the 2020s, APIs providing access to underlying artificial intelligence models like GPT-3, ChatGPT and (soon) GPT-4 will unleash unprecedented innovation.
While the opportunities are limitless, as more applications integrate these AI APIs, developers will face new application security concerns. In particular, applications built on these APIs should expect significant and sophisticated bot traffic directed at their sites. It’s a matter of simple game theory, and it’s what the last two decades of commercial APIs have shown us: anytime you expose a resource on the internet (e.g., compute) that offers significant monetization potential to an attacker, fraudsters will search for ways to exploit it. In the case of Stripe and Plaid, the companies provide API endpoints for validating credit card numbers and bank account credentials, which attackers exploit in account validation attacks aimed at taking over valuable financial accounts. In the case of Twilio, attackers commit SMS toll fraud, pumping expensive SMS traffic through partner mobile network operators (MNOs) and splitting the profits with those operators.
AI APIs could prove just as alluring to fraudsters as Stripe, Twilio, and Plaid did when they emerged. Not only are these APIs expensive to use, but their responses are also relatively fungible and public-facing, making them particularly ripe for abuse. Cloud services and other tools that, by design, expose valuable compute resources (e.g., companies like Replit, GitHub, and Heroku) offer a glimpse into the attack vector that applications exposing AI APIs will need to anticipate. In attacks against open compute resources, fraudsters commit first-party fraud, creating fake accounts for the sole purpose of running commands that can be monetized; with cloud services, cryptomining has been the most reliable way for attackers to make money through this vector.
For applications leveraging expensive AI APIs, we can expect a similar attack vector, with fraudsters creating numerous fake accounts to initiate valuable queries while purposefully evading rate limits. A couple of the most likely monetization paths behind this attack include:

- Reselling access to the underlying model, pocketing the difference between stolen free-tier usage and what buyers will pay for it
- Quietly powering their own products and services with queries run through fake accounts they never intend to pay for
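To see why per-account controls alone fall short here, consider a minimal sketch (in TypeScript, with hypothetical names and limits) of the quota check most applications start with, keyed on the authenticated account:

```typescript
// Naive per-account quota for an AI proxy endpoint (illustrative sketch only).
// Usage is keyed on the authenticated account ID, so an attacker who can
// script signup stays under the limit simply by rotating fresh accounts.

const DAILY_QUERY_LIMIT = 50; // hypothetical free-tier allowance
const usageByAccount = new Map<string, number>();

function allowQuery(accountId: string): boolean {
  const used = usageByAccount.get(accountId) ?? 0;
  if (used >= DAILY_QUERY_LIMIT) {
    return false; // this account is out of quota
  }
  usageByAccount.set(accountId, used + 1);
  return true;
}

// The evasion is trivial: 1,000 scripted signups buy 50,000 "free" queries,
// because every fake account gets its own counter.
```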
As a result of these threats, AI companies have unique bot detection needs: they must ensure that the APIs they expose are accessed only by legitimate users, not by bots or other malicious actors.
Some of the bot detection measures that AI companies might implement include:

- Blocking automated account creation at signup, for example with a CAPTCHA challenge that real users barely notice
- Fingerprinting devices so that a single attacker hiding behind many fake accounts can still be identified
- Enforcing rate limits per device or per fingerprint rather than per account, so that rotating accounts no longer resets the quota (a rough sketch of this appears below)
- Monitoring for the evasion techniques attackers adopt once a detection mechanism is in place
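As a rough illustration of the device-based approach, the sketch below keys the same daily quota on a device identifier instead of the account ID. The /api/generate route, the x-device-token header, and the getDeviceFingerprint helper are hypothetical placeholders rather than any particular vendor’s API; a real integration would resolve and verify the fingerprint server-side through a fingerprinting service.

```typescript
import express from "express";

const app = express();
const DAILY_QUERY_LIMIT = 50; // hypothetical per-device allowance
const usageByDevice = new Map<string, number>();

// Hypothetical helper: resolve a stable device fingerprint for this request.
// In practice this would call out to a fingerprinting service and verify the
// result server-side; here it just reads an illustrative header.
async function getDeviceFingerprint(req: express.Request): Promise<string | null> {
  const token = req.header("x-device-token");
  return token ?? null;
}

// Gate the expensive AI endpoint on the device, not the account, so that
// rotating fake accounts on one machine no longer resets the quota.
app.post("/api/generate", async (req, res) => {
  const device = await getDeviceFingerprint(req);
  if (!device) {
    res.status(403).json({ error: "unrecognized device" });
    return;
  }
  const used = usageByDevice.get(device) ?? 0;
  if (used >= DAILY_QUERY_LIMIT) {
    res.status(429).json({ error: "device query limit reached" });
    return;
  }
  usageByDevice.set(device, used + 1);
  // ...forward the request to the upstream AI API here...
  res.json({ ok: true });
});

app.listen(3000);
```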
Overall, the valuable APIs that AI companies expose give them unusual bot detection requirements. Protecting those APIs means pairing effective detection and blocking with ongoing monitoring for the evasion techniques that attackers inevitably develop in response.
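One concrete way to monitor for that kind of evasion is to track how many distinct accounts appear behind a single device fingerprint and to step up to a CAPTCHA or a block when the count looks abnormal. The sketch below reuses the same hypothetical fingerprint value as above, and the threshold is purely illustrative:

```typescript
// Illustrative evasion monitoring: many accounts sharing one device
// fingerprint is a strong signal of scripted, first-party abuse.

const accountsByDevice = new Map<string, Set<string>>();
const SUSPICIOUS_ACCOUNT_COUNT = 3; // illustrative threshold, tune per product

function recordLogin(deviceFingerprint: string, accountId: string): "allow" | "challenge" {
  const accounts = accountsByDevice.get(deviceFingerprint) ?? new Set<string>();
  accounts.add(accountId);
  accountsByDevice.set(deviceFingerprint, accounts);

  // Too many accounts behind one device: step up to a CAPTCHA or block outright.
  return accounts.size > SUSPICIOUS_ACCOUNT_COUNT ? "challenge" : "allow";
}
```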
At Stytch, we help companies tackle fraud and risk prevention with products like Device Fingerprinting and Strong CAPTCHA, both of which make it easier for developers to build in bot protection without adding friction for users. By combining these tools with our user-friendly, ironclad authentication products, companies can rest easy knowing their data and resources are protected from malicious bot traffic. If you want to learn more about how to protect your product from bot traffic, talk to an auth expert today!