Combating AI Threats: Stytch's Device Fingerprinting
Product
Jul 3, 2025
Author: Bobbie Chen

During Liminal’s Demo Day, Bobbie Chen, Product Manager for Fraud & Security at Stytch, presents a live demo of Stytch’s Device Fingerprinting tools, showing how easily attackers can deploy AI-powered bots to mimic human behavior and target login flows. By analyzing dozens of device and browser characteristics, Stytch’s technology detects and blocks suspicious activity in real time, keeping users safe from bots and automated attacks.
[Transcription Below]
Bobbie Chen:
Hey, nice to meet you all. My name is Bobbie Chen. I'm Product Manager for fraud and security at Stytch. Today I'm going to be telling you about Stytch device fingerprinting and how we do bot detection. So I'll share my screen here. Just really quickly, I'll give a quote from one of our customers, “Stytch's Fraud and Risk tools give us granular visibility into attempted attacks and effortlessly prevent a range of bot and fraud attempts.” That's pretty vague and high-level, so let's just get into the demo. So here is our demo site, nobots.dev. It's a pretty typical login screen, like you would have for any application. Here, I can enter my email and initiate a login. Here, this is the happy path. I'm a human. I just did this on my own laptop, typing with my own two hands here, and I successfully made it through this authentication flow. So this is good, and this is the path that we want all of our users to have. The problem is that not all users are good humans.
As part of this demo, I took a stab at using not some specialized hacking tool, but just plain old ChatGPT, and I gave it the prompt: visit this page (nobots.dev), generate a Puppeteer script that will attempt to log in with this username. ChatGPT will actually give me an ethical lecture telling me that I shouldn't do this, and then give me the script anyway. So this is how easy it is for an attacker, or for anyone, to start to automate your site. What that looks like in practice is I've taken that same script here. I'm going to run it on my own laptop, and this is going to spin up a new browser using the automation software, Puppeteer. It's going to try to log in, but when I hit continue, I get an error, and that error says unauthorized action. That's because we're protecting this login screen using Stytch device fingerprinting. So let's come back to here. What does that look like from our side, the defending side? This is the device fingerprinting dashboard. I can see recent events that happened, and here I see there were two recent attempts to log in, which resulted in what we call fingerprint lookups. The first one is me. That was the one that you saw me typing as a human, Apple Safari.
I have a certain set of fingerprints, a certain set of characteristics, and I got out a verdict of allow: we didn't detect anything wrong here. Whereas that second attempt, launched using headless browser automation, we (Stytch) were able to detect, and we issued a block action, with the reason being headless browser automation. So using device fingerprinting, we're able to measure certain characteristics of the browser and device environment that you saw in the automated script, detect them, and surface them to our customers as a reason to block. Once you have this recommended action, you can do whatever you want with it. In this particular demo, we've denied access to the login. So that's a fairly naive attack. I'll say that a pretty large proportion of the attacks that we see are fairly naive. These are the kinds of things that you can block pretty much out of the box because they are so obviously fake. But just as AI has made it easier, as we saw in the demos earlier, to produce fake videos and fake images, it also makes it easier to produce automated scripts that behave more like humans do.
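The flow described here (a fingerprint lookup returns a recommended action, and the application enforces it) can be sketched roughly as follows. The `verdict`/`reasons` response shape and the reason string are assumptions for illustration, not Stytch's actual API:

```javascript
// Hypothetical sketch of acting on a fingerprint lookup verdict in a login
// handler. The { verdict, reasons } shape is assumed for illustration.
function handleLoginAttempt(lookup) {
  if (lookup.verdict === "BLOCK") {
    // Mirror the demo: deny the login with an "unauthorized action" error.
    return { status: 403, error: "unauthorized action", reasons: lookup.reasons };
  }
  // The happy path: a human typing on their own laptop gets through.
  return { status: 200 };
}

// The human login from the demo is allowed.
console.log(handleLoginAttempt({ verdict: "ALLOW", reasons: [] }).status);

// The Puppeteer script is blocked for headless browser automation.
console.log(
  handleLoginAttempt({
    verdict: "BLOCK",
    reasons: ["HEADLESS_BROWSER_AUTOMATION"],
  }).error
);
```

The point of keeping the verdict and the enforcement separate is that the same lookup can drive different responses in different applications.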
And so here, this is Cursor; I'm actually using the AI coding assistant KiloCode here. I gave it literally a two-sentence task beforehand: "The script was detected as a bot. Let's update it to avoid detection." In order to do this, here's the thinking: we're going to modify a bunch of options, set a user agent, spoof some browser properties, and even simulate humanlike interaction. And so here, I'm just replaying a previous run. We can see that now I've generated all of these things that will make my browser look like less of a fake thing. So let's restore this checkpoint, and when I run this new script, this is what's going to happen. We've spun up a browser. Now we have realistic human typing. We have all these attributes that look a lot more like a regular human browser. But here we see, again, unauthorized action. So let's come back into the device fingerprinting dashboard to see why. This will take a second. Here, we see we've got another block, and this block actually has more reasons attached. So even though we tried to do all of these things to achieve deception, we've actually found a lot more issues.
We've added JavaScript property deception and smart rate limit warning. These are additional warning flags about the traffic. At Stytch, we have a very good idea of what real browsers look like and what real traffic looks like. So even if you do something like change your user agent, we know that. Say I'm running Safari on an Apple device right now; that automated browser was Chrome on an Apple device. Each of these has certain kinds of characteristics, so if you change any of them in isolation, it really sticks out, because it's something that was deliberately manipulated in order to try to look like a real human. And so we detect these deception signals. We have a lot of proprietary work that goes into detecting deception, detecting tampering, and preventing reverse engineering of our payload in order to give you these deception flags. We've also got a feature that we call intelligent rate limiting. This is a combination of traditional rate limiting and device fingerprinting: we see a very suspicious fingerprint, and we see that it has been doing things multiple times in a short period of time. That allows us to surface these extra warnings about exactly what we see.
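As a rough illustration of the two signals described here, consider the simplified sketch below, with the caveat that the real detection logic is proprietary and far more involved. The specific property checked (`window.chrome`), the flag names, and the thresholds are assumptions for illustration:

```javascript
// 1. JavaScript property deception: a claimed user agent should agree with
// the properties the browser environment actually exposes. A Chrome user
// agent paired with a non-Chrome environment is a tampering signal.
function deceptionFlags(claimedUserAgent, observed) {
  const flags = [];
  const claimsChrome = /Chrome\//.test(claimedUserAgent);
  // Real Chrome exposes window.chrome; a UA spoofed from headless or
  // another browser often does not. (Simplified illustrative check.)
  if (claimsChrome && !observed.hasWindowChrome) {
    flags.push("JAVASCRIPT_PROPERTY_DECEPTION");
  }
  return flags;
}

// 2. Intelligent rate limiting: traditional rate limiting, but keyed by the
// device fingerprint rather than by IP address alone.
function makeRateLimiter(maxAttempts, windowMs) {
  const attempts = new Map(); // fingerprint -> array of timestamps
  return function check(fingerprint, now) {
    const recent = (attempts.get(fingerprint) || []).filter(
      (t) => now - t < windowMs
    );
    recent.push(now);
    attempts.set(fingerprint, recent);
    return recent.length > maxAttempts ? ["SMART_RATE_LIMIT_WARNING"] : [];
  };
}

// A spoofed-Chrome environment hits both signals on repeated attempts.
const limit = makeRateLimiter(2, 60_000);
let reasons = deceptionFlags("Mozilla/5.0 ... Chrome/126.0", {
  hasWindowChrome: false,
});
limit("fp_abc", 0);
limit("fp_abc", 1_000);
reasons = reasons.concat(limit("fp_abc", 2_000));
console.log(reasons); // both flags fire
```

The combination matters: either signal alone is a hint, but a tampered environment that is also hammering the login flow is much stronger evidence.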
So that's the very high-level demo. We surface all of this information about what we can measure and collect, and give you hashes or identifiers that you can use to persistently block traffic across all of your services. And so that is the basics of it. I'll tell you about how that fits into the framework of fraud prevention that we have. As I said before, when you democratize coding, you democratize abuse. And that means that a lot of the previous signals that we were relying on are no longer as valuable. You saw typing speed: it is super, super easy to change typing speed or mouse movements. And I'll just say that in our device fingerprinting solution, these are not accounted for, because they tend to be extremely noisy and not useful signals for actually determining what a bad actor is trying to do. And so this is a super high-level framework for how we prevent fraud. It really comes in three phases: you need to gather information about the user activity, you need to decide what to do, and then you go actually do it. That's bordering on super obvious, but I like to separate it out this way, because I know this slide is a lot.
We do a lot in this signal gathering, and we recommend a decision. But ultimately, as we go further down the path, we rely more on you, the person implementing, in order to define what actually constitutes a bot in the context of your service, what constitutes bad activity, and how should I enforce against it. Here, we blocked login, but we can also do things like shadow ban a user or present them with extra long loading screens, things like that. And all of those are completely relevant to the core business of whoever we're working with. So that is the super quick demo of Stytch’s device fingerprinting and our bot detection capabilities. Happy to take any questions at this time.
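The enforcement options mentioned here (blocking the login, shadow banning, extra-long loading screens) might be wired up as a policy layer like the one below. The policy names and return shapes are hypothetical, chosen only to illustrate that the integrator, not the signal provider, decides what "block" means:

```javascript
// A minimal sketch of mapping a recommended action to an integrator-chosen
// enforcement policy. Policy names here are hypothetical.
function enforce(recommendedAction, policy) {
  if (recommendedAction !== "BLOCK") return { allow: true };
  switch (policy) {
    case "hard_block": // deny the request outright, as in the login demo
      return { allow: false, status: 403 };
    case "shadow_ban": // accept the request but hide the user's activity
      return { allow: true, shadowBanned: true };
    case "tarpit": // let it through, behind an extra-long loading screen
      return { allow: true, delayMs: 30_000 };
    default: // fail closed if the policy is unrecognized
      return { allow: false, status: 403 };
  }
}

console.log(enforce("BLOCK", "shadow_ban")); // allowed but shadow-banned
console.log(enforce("ALLOW", "hard_block")); // allowed, policy never applies
```

Shadow banning and tarpitting can be preferable to a hard block because they waste an attacker's time without teaching them what triggered detection.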
Filip Verley:
Bobbie, that was awesome. Thank you so much. My question first right at the gate, as I've asked every vendor so far, is where do you see this space headed in the next 12 months? And what is Stytch doing about staying ahead of the game?
Bobbie Chen:
Yeah. So two things about that. One of them is, as I said before, there's going to be a lot more low-level bot activity. I think that lines up with what you're seeing at Liminal, and it lines up with what we're seeing in our customers: over time, the number of people attempting to bot sites is increasing. I think that's because it's become that much easier. I mean, you saw this was a one-shot, perfect script that attempts to log in. So that's one. On the other hand, I do think there are interesting problems for bot detection in particular in the context of AI agents. In the next 12 months, this is where we're going to see the proliferation of agents being used in a way that is similar to how normal API integrations are used today. And that means that people are going to need to have automations take actions for them in a way that is actually allowed. And so there needs to be a way of distinguishing bad bots from those good bots.
Cameron D’Ambrosi:
I love that. Yeah, I mean, I think agentic AI is a real big wrench that's going to get thrown in the works here. And I think everybody is relatively unprepared for what that means. A follow-up question for you there. I guess it's in the same vein. It seems like you guys in many ways are fighting a war on two fronts, both against the fraudsters, but then increasingly against whether it's Google and the signals they let you see via Chrome or Apple and Google at the device level in terms of what you can access as critical inputs into your device fingerprinting model. Can you talk a little bit about how you're handling those challenges in terms of the ever-tightening restrictions around the signals that Stytch’s solution can chew on to come up with these scores?
Bobbie Chen:
Yeah, definitely. I'll say that a lot of these things coming from the big tech providers, especially Apple as well as Google, are really centered around user privacy. And I don't think user privacy is at odds with fraud prevention. As a device fingerprinting solution, we are really focused on the fraud prevention space, which gives us different advantages here, in that we're not trying to uniquely and persistently identify one person forever, as is pretty common in ad-tech-type fingerprinting solutions. Instead, we're looking to identify signals or warning flags that are indicative of either automated activity that we don't want, or known past bad actors. Those kinds of signals aren't even personally identifiable information, so in that sense, we can collect them. And we do see generally that the devices being automated are oftentimes the cheapest possible devices, the devices with the worst possible security controls, and even when they manipulate their features, the results are still distinctive to us. They don't look anything like genuine real devices, even if they do have anti-fingerprinting or privacy controls involved. So that's to say that in our particular space, we do see changes in our signals, and we are monitoring those changes. So far we're seeing that it doesn't actually change the core business of fraud prevention, risk prevention, stopping things like account takeovers or credential stuffing attacks.
Cameron D’Ambrosi:
Amazing. And then another follow-up there: you mentioned this approach can take into account trends that you're seeing across the space. But do you guys have an explicit consortium model, for lack of a better word, where as a Stytch customer you don't directly see or leverage information from other customers, but everybody's shared experience contributes to the overall collective fraud posture? Or is it more abstract than that?
Bobbie Chen:
I'll say it's a little bit abstract, but we do get value from the network. What that means is that we process something like a billion signals a day, so we have a fairly large view across our customers of different threats that are coming out. Sometimes we'll see an emerging pattern, and we can check in with specific customers who are experiencing that pattern to see how it correlates with their own fraud and abuse prevention. And if it does correlate, we can act on it. We actually have certain kinds of verdict reasons that essentially say we have detected a lot of malicious behavior from this particular unique set of device signals, and so we can ban it across our customer base, knowing that we're not affecting any real good users.
Cameron D’Ambrosi:
That's fantastic. And then last question here, unless Filip wants to jump in as well. You know, obviously, Stytch, you have a broader platform that offers authentication and other capabilities. If I just wanted to consume these pieces of the Stytch platform, can I plug that into any other part of my stack that I want, or do I need to consume this in conjunction with Stytch's authentication offerings?
Bobbie Chen:
Yeah, that's a great question. It is a standalone product. A lot of the companies listed here are using Stytch Device Fingerprinting totally standalone, not integrated into authentication. Stytch also offers a platform for authentication, and if you are using Stytch for auth, we have closer tie-ins with device fingerprinting to do things like automatically prevent account takeovers just by flipping a toggle in the dashboard. But if you're not, you can still integrate. The integration is fairly lightweight; I think typically our customers will get to a POC state in about one hour.
Cameron D’Ambrosi:
Wow.
Filip Verley:
Amazing. On that note, Bobbie, this is awesome. Thank you so much for not just the demo, but also the Q&A. I really appreciate you answering all these questions.
Cameron D’Ambrosi:
Yeah, I mean, I've never seen somebody live vibe code a fraud attack vector before. That was awesome.
Bobbie Chen:
It is that easy. Yeah. Thank you guys. Really appreciate it.