Sam Altman was wrong: AI didn’t defeat auth. Single factors did.

Auth & identity

Aug 13, 2025

Author: Reed McGinley-Stempel

At the Federal Reserve, Sam Altman recently warned bankers that AI is about to trigger a “significant impending fraud crisis,” claiming AI has “fully defeated most of the ways that people authenticate, other than passwords.” It’s a good soundbite, but it points in the wrong direction. Passwords aren’t the glowing exception. They’re actually the problem.

We agree on one thing. Voiceprints shouldn’t move money. In 2025, cloned voices and faces are drag-and-drop. Any bank still trusting a lone biometric is asking for trouble.

Where we part ways: AI hasn’t “defeated” modern authentication. It’s defeated single-factor authentication. The way out isn’t to cling to the weakest factor we have; it’s to combine phishing-resistant credentials with context and continuous risk signals.

The post-AI baseline for auth

Developers don’t need moonshots here. You can ship a meaningfully safer stack this quarter:

  1. Make passkeys the default. WebAuthn/FIDO2 replaces “something you know” with a device-bound key pair unlocked locally (Face ID/Touch ID/PIN). Phishing a private key that never leaves the device isn’t a thing. Hide “use password instead” behind a click.
  2. Bind biometrics to hardware, not the cloud. A year-old selfie doesn’t prove much. The cryptographic tether does. Biometrics should unlock a key in a secure enclave on a trusted device.
  3. Score the session before it starts. Fingerprint the client and the network: browser build fidelity, OS quirks, integrity signals, IP reputation, emulator traces, navigation entropy, history of device-to-account relationships. Most fraud is remote; device and network signals are where AI “impersonators” struggle. Step up auth when risk spikes.
  4. Kill visible friction for good users. With strong device-bound auth and pre-session risk scoring, most users should one-tap in. Save CAPTCHAs for the handful of truly weird sessions. Your conversion rate (and support team) will thank you.
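Steps 3 and 4 above can be sketched as a pre-session risk score that gates the login flow. This is a minimal illustration, not a production model: the signal names, weights, and thresholds here are assumptions for the sketch; a real deployment would tune them against observed fraud.

```typescript
// Illustrative pre-session signals, gathered before any credential prompt.
type SessionSignals = {
  ipReputation: number;        // 0 (clean) .. 1 (known-bad)
  emulatorSuspected: boolean;  // integrity checks / emulator traces
  browserIntegrity: number;    // 0 (inconsistent build) .. 1 (consistent)
  knownDevice: boolean;        // device previously tied to this account
};

type Decision = "allow" | "step_up" | "block";

function scoreSession(s: SessionSignals): { score: number; decision: Decision } {
  let score = 0;
  score += s.ipReputation * 0.4;
  if (s.emulatorSuspected) score += 0.3;
  score += (1 - s.browserIntegrity) * 0.2;
  if (!s.knownDevice) score += 0.1;

  // Placeholder thresholds: most good users one-tap in, risky sessions
  // get a step-up challenge, obvious abuse is blocked outright.
  const decision: Decision =
    score >= 0.7 ? "block" : score >= 0.3 ? "step_up" : "allow";
  return { score, decision };
}

// A returning user on a clean network sees no extra friction:
const clean = scoreSession({
  ipReputation: 0.02,
  emulatorSuspected: false,
  browserIntegrity: 1,
  knownDevice: true,
});
console.log(clean.decision); // → "allow"
```

The point is the shape, not the numbers: risk is computed before the session starts, and friction (step-up, CAPTCHA, block) is reserved for the tail of the distribution.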

What this means for banks (and anyone else with strong security requirements)

Banks aren’t doomed to an AI-driven crime wave if they stop trusting brittle signals in isolation. The right posture is, as always, layered:

  • Device-bound passkeys as the primary credential (phishing-resistant, hardware-rooted).
  • Pre-session risk from device, network, and behavior; step-up only when warranted.
  • Continuous liveness & attestation tied to the trusted device, not just a one-time selfie.
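The first bullet has a concrete server-side shape. The sketch below builds WebAuthn registration options that force a device-bound, biometric-gated passkey; field names follow the WebAuthn `PublicKeyCredentialCreationOptions` dictionary, while the `rp` and `user` values are examples, and the challenge and user id are shown as strings for brevity (the real browser API takes binary `BufferSource` values).

```typescript
// Server-side sketch: registration options that require a platform
// authenticator (secure-enclave key), a discoverable credential (passkey),
// and user verification (Face ID / Touch ID / PIN) on every use.
function passkeyRegistrationOptions(userId: string, userName: string, challenge: string) {
  return {
    challenge,                                   // server-generated, single-use
    rp: { id: "bank.example", name: "Example Bank" },
    user: { id: userId, name: userName, displayName: userName },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }], // -7 = ES256
    authenticatorSelection: {
      authenticatorAttachment: "platform",       // key bound to this device's hardware
      residentKey: "required",                   // discoverable credential, i.e. a passkey
      userVerification: "required",              // local biometric/PIN gates each assertion
    },
    attestation: "direct",                       // server can verify the authenticator model
  };
}
```

On the client, these options are handed (with binary fields decoded) to `navigator.credentials.create({ publicKey: ... })`; the private key never leaves the device, which is what makes the credential phishing-resistant.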

This stack isn’t theoretical. It works today, at scale, without turning every login into a forensics exam. And crucially, it flips the AI table stakes: attackers may clone a face or a voice, but it’s far more difficult to fake a series of interrelated factors.

Extending the model to AI agents

What complicates the fraud problem further is AI agents that act on behalf of real users. These agents, typically interacting via Model Context Protocol (MCP), need the same fundamentals: identity, least privilege, and proof-of-possession. That way fraud and malicious programmatic traffic are stopped, while legitimate agents can still securely interact with tools and data under scoped, auditable permissions.

The model is straightforward: authenticate the human owners of delegated agents with device-bound keys or passkey-backed tokens, scope what each can do, and apply the same risk scoring you use for human sessions. By treating AI clients as first-class “users” in your auth stack, you stop guessing who’s on the other side of the call.
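A minimal sketch of that model, under stated assumptions: the grant shape below is illustrative, and a real deployment would carry these claims in a signed token (e.g. a JWT with a proof-of-possession confirmation claim) rather than a plain object.

```typescript
// Illustrative delegation grant: an agent acts for a human owner, holds
// scoped permissions, and is bound to a key it must prove it possesses.
type AgentGrant = {
  ownerId: string;            // the authenticated human the agent acts for
  agentId: string;
  scopes: string[];           // least privilege, e.g. "payments:read"
  boundKeyThumbprint: string; // proof-of-possession key binding
};

function authorizeToolCall(
  grant: AgentGrant,
  requestedScope: string,
  presentedKeyThumbprint: string,
): boolean {
  // First, proof-of-possession: the caller must hold the bound key.
  if (presentedKeyThumbprint !== grant.boundKeyThumbprint) return false;
  // Then least privilege: the call must fall inside the granted scopes.
  return grant.scopes.includes(requestedScope);
}
```

Each decision is cheap to log, which gives you the audit trail: which agent, acting for which owner, invoked which scope, with which key.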

The uncomfortable truth

Yes, fraud volumes will spike as AI democratizes abuse; that’s how every new tool lands. But defenders get the same acceleration, if we adopt it. The real risk isn’t that “AI defeated authentication.” It’s that we keep deploying single factors and call that security.

If you’re a developer, you can help end this debate in code. Ship passkeys. Bind them to hardware. Use device fingerprinting to spot and block suspicious devices before a session even starts. Extend that same trust layer to AI agents and integrations, giving them scoped permissions. Save the drama for your postmortems.
