
Agent ready episode 7 with Stytch: AI agent fraud and threat prevention

Auth & identity

Sep 16, 2025

Author: Stytch Team


The seventh episode in the Agent Ready video series is here! Featuring Bobbie Chen from the Stytch team, this session tackles AI agent fraud and threat prevention, with a live demo of how generative models can be weaponized to launch programmatic attacks at scale. You'll see why traditional bot detection falls short against agent-driven traffic, and how cutting-edge defenses can separate trusted agents, malicious scripts, and human activity in real time.

Video overview

AI agents are unlocking new levels of productivity, but they also accelerate automated threats. In this session, we’ll walk through a live demo of how a generative model can be leveraged to create and deploy a malicious programmatic attack at scale. We’ll examine why traditional bot detection approaches struggle to identify this new wave of agent-driven traffic, and how the latest techniques and tools can distinguish trusted agents, malicious scripts, and human activity in real time.

You'll come away with an understanding of:

  • Why traditional bot detection struggles with agent-driven traffic
  • Distinguishing trusted agents, malicious scripts, and human activity in real time
  • Practical strategies for integrating defenses while balancing strong security with a good user and agent experience

You'll take away practical strategies for integrating these defenses into your stack, along with guidance on balancing strong security with a good user and agent experience. By the end, you'll have a clear view of the evolving threat landscape for AI agents and the tools to protect your applications against it.

Full transcript

Reed: Well, hello and welcome back to the Stytch Agent Ready Application Series. Today we're chatting with Bobbie Chen, who leads the fraud and risk initiatives here at Stytch, about how AI has changed the fraud landscape, what types of attacks we're seeing, and different tactics to defend against those attacks, and really just to dive deeper into the other side of the coin of where AI is enabling tons of great innovation.

We're also seeing that fraudsters are becoming quick adopters of it as well, so how do we defend ourselves in this new reality? And Bobbie, I just wanted to give you a chance to introduce yourself. Tell us a little bit more about what you focus on when it comes to fraud and risk prevention today, and how you're thinking about the changes, both those we've already seen and those yet to come, from the role that AI is playing in the fraud and risk atmosphere.

Bobbie: Definitely. Nice to meet you. I'm Bobbie, the product manager for fraud and security at Stytch, and I primarily work on our Device Fingerprinting product. It's a bot detection and fraud prevention product used by our customers, who include organizations like Calendly and Replit.

They use our product to prevent fraud and abuse, which could be a lot of different things. And something I've been thinking about a lot recently is that if you have a product that's worth buying, that also means that you have a product that's worth stealing, and as it becomes easier to do fraud and abuse, these are problems that are going to hit every business if you're operating on the internet.

I'll say that as far as AI goes, its impact generally falls into a few different categories of fraud and abuse. One of them is automation. Bots have always existed, but it has become significantly easier to write bots that do bad things. People talk a lot about how generative AI makes it possible to democratize coding.

Well, when you democratize coding, you also democratize abuse. That's one of the kinds of things that you see. You also see that this AI usage is being used to scale up account takeovers, whether that's through things like traditional credential stuffing, but also through more sophisticated phishing or social engineering attacks.

And finally, you also see novel forms of attacks coming from generative AI, things like deepfakes for synthetic identity fraud. There are entire new classes of attack emerging because of AI. Because of all of this, it's a really fast-moving space.

I think it's important to keep an eye on exactly what is going on, what people are seeing, and that's what we do at Stytch. We're helping our customers address these fraud and abuse problems.

Reed: Great, thanks Bobbie. The one thing I'd add to that thematically, something I've been thinking about, is that one of the main automation use cases we help customers defend against is fake account creation.

And one of the things we've been seeing in the AI atmosphere is that a large incentive for creating multiple fake accounts, using headless browsing to do so, is all the exposed AI compute out there, which can be quite valuable for these fraudsters and attackers to skim.

They might even use it for other use cases: to get free AI coding credits, free AI image generation. Maybe they're trying to create an AI phishing campaign, and it's just nice for them to get a free query that builds that campaign versus paying $200 a month for a service. And I mention that because that's obviously a subset of the type of fraud and abuse that we're seeing.

But I've been thinking about it more recently because one of the really interesting things it might do is actually change some of the game theory and economics involved in fraud. When AI compute is the resource being stolen, it's a fungible, replenishable resource, almost like oil in the modern-day fraud atmosphere: you can use that oil to build or fuel a lot of your different fraud campaigns, whether it's phishing emails, phishing phone calls, or skimming AI voice applications and things like that.

I've mostly been thinking about that because, when we consider how fast a lot of this might move, it's not just that fraudsters have AI at their fingertips. It's that if fraudsters are automating and multi-accounting to create that type of abuse, they could end up both abusing these AI apps and walking away with a resource that lets them fuel the next fraud campaign.

Bobbie: Totally. And I think a lot of this fraud ultimately comes down to ROI, right? We often tell our customers and the organizations that we work with that it's very difficult to 100% prevent all kinds of fraud activity without also having a very high false positive rate, meaning that you're impacting real users who are doing legitimate things.

And the goal here is not to eliminate all fraud. It's actually to raise the barrier to defrauding you, so that if, say, you're giving away $1 or $10 of compute credits and it takes an attacker $15 of effort to get past your defenses, the attack is totally non-viable. Those attackers will go away and do something else, or attack someone else instead of you.

It's sufficient to reduce their ROI to the point where it's no longer as profitable to attack you to get these compute resources or this equivalent of oil, so that you can continue to serve your users rather than giving away resources to some attacker.

Reed: Absolutely. I think you summed it up well there, and I'd love to give folks a hands-on demo and experience of what this looks like, both in terms of the abuse side and the defense side.

Maybe I'll turn it over to you and we can jump into the demo, if that works for you.

Bobbie: Absolutely. I'll show our demo site, nobots.dev. This is a page you probably have something very similar to in your own application: just a signup or login screen. And signup and login are important.

They're the entry points to your application. Unless you have something that's totally unauthenticated, some attacker is going to need to get past this screen in order to exploit your application, whether that's through account takeover of an existing user or through account creation abuse, creating a bunch of free accounts. That is how they're gonna get in.

This is a good choke point to detect bots or other fraud attempts. Taking this site, how would I try to abuse it? We can use our favorite tool, ChatGPT, and write a really simple prompt. And here is the prompt: visit this page, nobots.dev, and generate a Puppeteer script that will attempt to log in as bchen@stytch.com.

ChatGPT, or rather OpenAI's alignment team, is really doing great work here: there's no "I can't help with this for ethical reasons." Here's the entire script. The AI is very willing to help write these kinds of scripts, and it's pretty good at it too. I'm gonna take this script, and I'm going to switch over to Cursor.
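(For reference, the generated script looked roughly like the sketch below. This is a reconstruction rather than the exact output: the nobots.dev URL and the email come from the prompt above, while the CSS selectors and form flow are hypothetical placeholders.)

```javascript
// Reconstructed sketch of the AI-generated Puppeteer login script.
// The selectors (#email, button[type="submit"]) are hypothetical
// placeholders; a real generated script targets whatever the page renders.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();

  await page.goto('https://nobots.dev', { waitUntil: 'networkidle2' });

  // Fill in the email and submit in one shot: the kind of instant,
  // non-human interaction that fingerprinting products flag.
  await page.type('#email', 'bchen@stytch.com');
  await page.click('button[type="submit"]');

  // Wait briefly for a result; ignore the timeout if nothing navigates.
  await page.waitForNavigation({ waitUntil: 'networkidle2' }).catch(() => {});

  await browser.close();
})();
```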

Here in Cursor, I have that same script from ChatGPT, and I can run it using Node. When I run the script, it's going to open a new browser, take my user, and try to log in with it. But here we get an error that says unauthorized action. That's because, using Stytch, we have a product called Device Fingerprinting.

And Device Fingerprinting was able to detect that this was a bot and block it. Let's see what that looks like in the Stytch Dashboard. Here in the dashboard, this is the request that was just blocked, and we can see that we gave a verdict action of block for a few reasons: we saw user agent deception, and we also saw headless browser automation.
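(Server-side, acting on that verdict looks roughly like the sketch below. This is a hedged reconstruction, not Stytch's exact API: the endpoint URL, auth scheme, and field names such as telemetry_id, verdict.action, and verdict.reasons are assumptions inferred from the demo. See the Stytch Device Fingerprinting docs for the real interface.)

```javascript
// Hedged sketch of a backend verdict check; NOT Stytch's exact API.
// The endpoint URL, auth scheme, and field names (telemetry_id,
// verdict.action, verdict.reasons) are assumptions inferred from the
// demo. Consult the Stytch Device Fingerprinting docs for the real API.
async function checkDeviceFingerprint(telemetryId) {
  const res = await fetch('https://telemetry.stytch.com/v1/fingerprint/lookup', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization:
        'Basic ' +
        Buffer.from(
          `${process.env.STYTCH_PROJECT_ID}:${process.env.STYTCH_SECRET}`
        ).toString('base64'),
    },
    body: JSON.stringify({ telemetry_id: telemetryId }),
  });
  const { verdict } = await res.json();

  // Assumed shape: verdict.action is "ALLOW" | "CHALLENGE" | "BLOCK", and
  // verdict.reasons carries signals like the user agent deception and
  // headless browser automation flags shown in the dashboard.
  if (verdict.action === 'BLOCK') {
    // Surfaces to the client as the "unauthorized action" error in the demo.
    throw new Error('unauthorized action');
  }
  return verdict;
}
```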

Both of those signals, user agent deception and headless browser automation, are hallmarks of bot or fraud activity that most sites don't want to see. We're able to detect them and block the request, and that results in the error that we saw. I'll also say this is a very basic script, right? This is pretty much the bare minimum that you can do.

And that makes it very easy to detect. What if I want to make it harder to detect? For that, I'm gonna go and use another AI coding assistant. This is Q Code, also a customer of ours, and I'm gonna write a very simple prompt: "This script was detected as a bot. Update it to avoid detection." And what we get back here is quite a lot.

I won't read it all, but we can see that there are several techniques that this LLM knows about in order to avoid detection. Let me restore this checkpoint.

Reed: A couple of things I thought were interesting in that response: it's going to make the interaction look more human-like. I think it pasted the name before, and now it's gonna enter it character by character. And then there's the explicit browser property manipulation.

Browser property manipulation is gonna be interesting to watch as well.

Bobbie: Exactly. We see human-like typing. We see things like setting a more common user agent, but also adding some plugins, since browsers that real people use have a certain number of plugins installed.

There are a bunch of things, like the WebGL renderer that's used, and they're gonna add some random delays as well. All of these things will make it look more human-like. And here, let me come back to this one. I'm gonna run this script again. Now it's a more human-like script: we see that every character is being typed individually.
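(The evasion changes described here map to a few well-known Puppeteer techniques. The sketch below is a condensed, illustrative reconstruction, with hypothetical property values, of the kind of code the assistant generated.)

```javascript
// Condensed, illustrative sketch of the evasion techniques described
// above; the property values are hypothetical. These overrides are what
// later surface as "JS property deception": the faked properties don't
// behave the way they do in a real, unmodified browser.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();

  // 1. Claim a common, real-looking user agent.
  await page.setUserAgent(
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 ' +
      '(KHTML, like Gecko) Chrome/126.0.0.0 Safari/537.36'
  );

  // 2. Spoof browser properties before any page script runs: hide the
  //    webdriver flag and fake a plugin list.
  await page.evaluateOnNewDocument(() => {
    Object.defineProperty(navigator, 'webdriver', { get: () => undefined });
    Object.defineProperty(navigator, 'plugins', { get: () => [1, 2, 3] });
  });

  await page.goto('https://nobots.dev', { waitUntil: 'networkidle2' });

  // 3. Type character by character with a delay to mimic a human,
  //    instead of pasting the whole value at once (selector hypothetical).
  await page.type('#email', 'bchen@stytch.com', {
    delay: 50 + Math.random() * 100,
  });

  await browser.close();
})();
```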

We have all of these spoofed browser properties, but I'm still getting unauthorized action. For the reason, let's come back to the dashboard. Now we have even more verdict reasons, so we're still getting a block. In addition to headless browser automation and user agent deception, we also see JavaScript property deception.

These factors that we saw, like faking the plugins or faking the WebGL properties, are actually things that we can detect. The way we detect them is that we do a lot of proprietary research on what real browsers look like, what happens when you configure them in certain ways, and the hallmarks of manipulating those properties in ways that are unnatural.

We're able to detect those. Usually, changing those properties is associated with bot activity or attempted fraud, and we'll surface that as JS property deception. I'll also call out that we have a "smart rate limit exceeded" reason here. This is part of our intelligent rate limiting feature, which basically says we've gotten a lot of traffic from this suspicious-looking device recently.

We're going to surface that to you so that you know it's not just a suspicious device; it's a suspicious device that's been sending you a lot of traffic. These factors help our customers, organizations like Calendly and Replit, prevent account creation abuse, account takeover attempts, and scam and spam activity, all while maintaining a good user experience for their real users.
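(The intelligent rate limiting idea, counting traffic per suspicious device rather than per IP, can be approximated in application code. Below is a minimal in-memory sketch, assuming you already have a stable visitor or device ID from fingerprinting; a production version would use shared storage such as Redis.)

```javascript
// Minimal in-memory sliding-window rate limiter keyed by device/visitor
// ID rather than by IP; a rough approximation of the intelligent rate
// limiting idea above. A production version would use shared storage
// such as Redis instead of a per-process Map.
const WINDOW_MS = 60_000; // 1-minute window
const MAX_REQUESTS = 10; // per device per window

const hits = new Map(); // visitorId -> recent request timestamps

function isRateLimited(visitorId) {
  const now = Date.now();
  const recent = (hits.get(visitorId) ?? []).filter((t) => now - t < WINDOW_MS);
  recent.push(now);
  hits.set(visitorId, recent);
  return recent.length > MAX_REQUESTS;
}

// Combine the fingerprint verdict with the device-level counter, so a
// suspicious device sending lots of traffic gets blocked even when each
// individual request looks borderline.
function shouldBlock(verdict, visitorId) {
  return verdict.action === 'BLOCK' || isRateLimited(visitorId);
}
```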

Reed: That's great. And I think this really ties back into what you were saying earlier: with democratized coding and capability coming from AI, you get democratized abuse. And just seeing the way that it adapted the stealthy setup from the vanilla one is striking; a lot of folks can probably write that vanilla script now with AI who couldn't two years ago.

But the level of research somebody would previously have needed to be motivated enough, and technically knowledgeable enough, to do in order to make some of those changes, like adding these plugins to look more normal, is just fascinating. And obviously we're entering a really interesting time when it comes to the net-new number of folks that have the capability to be a fraudulent actor against somebody's application.

Bobbie: Yes, totally. And you say it's interesting; I say it's kind of scary. One of the things I sometimes get questions about is: why is this bot using a script when there are browser automation tools like ChatGPT Agent? I want to come back to the idea of ROI.

Return on investment for the attackers: running a full browser is pretty expensive, basically. That's reflected both in the costs you get from the providers and in terms of time. It takes more time to spin up a browser compared to running a script. What we're seeing in the field, in the wild right now, is that the vast majority of automated attacks are still coming from scripts like this Puppeteer script, rather than from agentic browsers like OpenAI's Agent or something like Browserbase.

We do see that there is traffic from those sources and we can detect it, but the vast, vast majority of attacks are coming from traditional automation scripts. And it's important for us to be able to detect those and stop them because, as we just saw, the barrier to writing them is so much lower than it was just a few years ago.

Reed: Definitely. And to your point, there's the ROI piece, but also: you could imagine ChatGPT Agent starting to get used really prolifically for a certain type of attack, and OpenAI has the ability to build in some of those guardrails, which I think will be good safety nets when it comes to that.

But if I were an attacker, the other reason I might want my own automation script is that I'd have full control over it and the ability to adjust it the way I want, without being beholden to whether the cloud provider giving me this agentic browsing adds stronger compliance checks and things like that in the future.

They're kind of future-proofing their fraud setup in some ways. Well, maybe as we wrap up: what's something, or any takeaway, that you'd leave the audience with, if you had to advise them on what they should be thinking about or do next now that they know this information?

Bobbie: There are two things that I wanna talk about. First, I think it's really easy to see the direct costs of fraud. Let's say I run an AI coding assistant product: it's easy to see that these people are abusing my system and that it's costing me $10 in credits per account. But it's also easy to write that off and say it's a marketing expense.

We expect to lose a certain amount of money on these things. But I think it's often hard to see the distortion it has on the rest of your processes. For example, if you're doing metrics, suddenly you need to change your metrics in a bunch of ways so that you can isolate these fake accounts. You wanna see: is this ad converting? Is my site optimized so that we have a proper conversion funnel? All of those growth metrics are a total mess if you're not filtering out bot traffic. The people who are intentionally trying to defraud you are not your real users.

These are not the people that you want to be optimizing for. If you're not thinking about that as part of the problem and you're saying we can write off $10 in costs, it's going to cost you a lot more in your product development and in serving the real users that you want to serve. So that's the first one.

The second one that I wanted to say is that as far as AI-powered fraud and abuse goes, I'll say that these tools are a force multiplier, right? It's something that changes the game in favor of attackers because now it's easy for them to do things that might have been unscalable before.

At the same time, I don't think they fundamentally change the nature of fraud and abuse problems. I think it still helps to say: I wanna understand my attackers. I want to understand what they're trying to get out of it, what value they get, and what vectors they're using to get in and exploit me. All of those fundamental issues of fraud and abuse are still there. It's not that I'm worried about AI-powered fraud specifically; I'm worried about fraud in general, and I know that fraud is on the rise because of these AI-powered tools. Ultimately it's part of your overall fraud and abuse strategy.

I don't want fake accounts to exist on my platform. I don't want my real users to be dealing with account takeovers. I don't want to deal with scam and spam content. All of those are fundamental problems that really have nothing to do with generative AI, LLMs, or the current AI boom; the AI boom just makes them easier. We can look at those tactics and defend against them using device fingerprinting or other traditional fraud methods, like forcing two-factor authentication or doing more in-depth identity verification. But ultimately it's a fraud problem, not an AI problem. I expect we'll continue to see a rise in fraud because of this tooling, but you shouldn't miss the forest for the trees, I guess is what I wanna say.

Reed: I think that's a great summary and synopsis. Maybe on that first point, the last thing I'd leave folks with is that I think we're at a pretty interesting moment in both the rise of AI and the incentives to defend against fraud. What I mean by that is that I would classify what's happening in AI right now, the funding of AI companies and how they're being run, as very clearly the subsidy era. That's not a normative statement on whether it's good or bad, but it's clear that a lot of AI applications can effectively sell a dollar's worth of AI compute for 50 cents and operate at negative gross margins. The venture capital world is willing to support that, and I think that's actually reasonable. It's not too different from the subsidies we got in our own lives with Uber, Lyft, and Airbnb a decade-plus ago.

But to your first point, when you talk about people creating fake accounts and doing compute abuse on these services, services that may or may not know, or prioritize knowing, who's a real user and who's a fake user: there are obviously many companies we work with that are already prioritizing it. But I think it's interesting to keep an eye on, 'cause I do expect the subsidy era at some point to move closer to the true-cost era.

I think every application will ultimately need to know which of these users are actually their best users versus which ones are high usage because they've discovered an arbitrage opportunity where they can, for 25 bucks, get a paid plan that actually gives them $75 of AI compute.

That's where I think it's really interesting: it's this combination of fraud prevention, user analytics, and future financial planning for your company. It's something that we find really interesting, and we're excited to chat with people in the community about it. I really appreciate and want to thank Bobbie for joining us to share more on both what we've seen in the market and how we're seeing it evolve.

Thank you, Bobbie.

Bobbie: Thank you for having me, Reed.

Reed: Of course.
