NYC Security Happy Hour 2026: What a $5 Phishing Stack and an AI Trust Framework Have in Common

Published:
Dashlane's NYC Security Happy Hour covered the new shape of trust: Progressive trust for AI agents and steps to prevent low-cost OAuth phishing attacks.

Last month, Dashlane gathered a group of security leaders, engineers, and practitioners at our New York office for our second Security Happy Hour in the U.S. The event featured two focused talks, each tackling a pressing challenge in the field, followed by open conversation in a more intimate setting than you usually find at large conferences.

This event is part of a series we've been building across cities. Last fall, our Paris Security Happy Hour brought together two excellent speakers to dig into OAuth token risks and privacy-preserving identity, respectively. NYC continued that thread with two sessions that, taken together, cover both ends of the same problem: How trust gets built into systems, and how attackers exploit the gaps when it isn't.

A warm thank you to Umesh Shankar and Rajan Kapoor for sharing their knowledge, and to everyone who joined us in person and on Zoom. To keep that knowledge circulating, I've summarized their talks below, along with key takeaways and recommendations.

You can also watch the full event here.

Talk 1: What would make you trust an AI agent?

Speaker: Umesh Shankar, Corporate VP of Engineering, Microsoft AI

Umesh opened with a simple question: What would make you trust AI? Then he reframed it: What would make you trust a person to act on your behalf? That shift is the entire point. As AI agents take on more autonomous, consequential actions, the mental model for software security has to change.

His trust framework spans security, value alignment, competence, and judgment. The first two are familiar. The latter two are where the conversation really got interesting.

Competence isn't just task completion. It includes output quality, cost, and critically, avoiding negative side effects. An agent that finishes the job but also does something harmful or unwanted hasn't earned trust.

And judgment is subtler still. The best agent, like the best assistant, should save you from yourself. If your instructions technically allow spending $100 on a single pair of socks, a trustworthy agent should flag that rather than comply. We apply this standard to people instinctively. We haven't built it into our AI systems yet.

That gap points to the core principle: Authority should be proportional to competence. We don't hand a new hire the keys to everything on day one. Agents should earn expanded permissions through demonstrated, measurable performance, not receive them wholesale at deployment.

The architectural corollary Umesh proposed was that we should use the model to generate structured signals and enforce policy deterministically in code. The model reasons; code decides. That separation keeps the system auditable.
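One way to picture that separation is a minimal sketch in Python. The schema, action names, and spending cap below are illustrative assumptions, not details from the talk: the model emits a structured proposal, and plain, auditable code makes the final call.

```python
import json

# Illustrative policy limits -- these names and values are assumptions
# for this sketch, not part of the talk.
SPEND_CAP_USD = 50.00
ALLOWED_ACTIONS = {"search", "add_to_cart", "purchase"}

def enforce(proposal_json: str) -> str:
    """Deterministically approve, reject, or escalate a structured
    signal emitted by the model. The model reasons; this code decides."""
    proposal = json.loads(proposal_json)
    action = proposal.get("action")
    cost = float(proposal.get("cost_usd", 0))

    if action not in ALLOWED_ACTIONS:
        return "reject: action not permitted"
    if action == "purchase" and cost > SPEND_CAP_USD:
        # The $100-socks case: technically allowed by the instructions,
        # but escalated for review instead of auto-complied with.
        return "escalate: cost exceeds cap, ask the user"
    return "approve"

# The model's output is treated as untrusted input to the policy layer.
print(enforce('{"action": "purchase", "cost_usd": 100.0}'))
print(enforce('{"action": "purchase", "cost_usd": 12.5}'))
```

Because the policy lives in code rather than in the prompt, every decision is deterministic and logged the same way any other access-control check would be.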

My takeaways:

  • Extend trust progressively. Scale agent authority based on demonstrated competence, not upfront permission grants.
  • Define competence explicitly. Task completion is not enough. Build in criteria for side effects, cost, and judgment before you deploy.
  • Separate reasoning from enforcement. The model generates the signals, and the code enforces policy.

Talk 2: Tearing down a $5 OAuth attack stack

Speaker: Rajan Kapoor, VP of Security, Material Security

Rajan did a live teardown of a working OAuth phishing attack he built in two hours for $5. The stack: GCP free tier, Cloudflare Pages as the lure, a Cloudflare Worker for token exchange, Cloudflare KV for storage, and Google Sheets as the exfiltration destination.

Every component runs on infrastructure you already trust: Cloudflare IPs don't appear on threat feeds, the consent screen is served by Google itself, and all exfiltration traffic goes to googleapis.com. There's nothing to block.

What makes it particularly hard to contain: resetting the victim's password has no effect on OAuth tokens, MFA protects the login but consent happens after authentication, and killing active sessions doesn't touch OAuth grants. The refresh token persists until it is explicitly revoked.
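Since neither a password reset nor a session kill helps, containment means revoking the grant itself. A minimal sketch of that call using Google's documented OAuth 2.0 revocation endpoint (the token value here is a placeholder, and the helper name is mine):

```python
from urllib import parse, request

# Google's documented OAuth 2.0 token revocation endpoint.
REVOKE_ENDPOINT = "https://oauth2.googleapis.com/revoke"

def build_revoke_request(token: str) -> request.Request:
    """Build the POST that explicitly revokes an OAuth 2.0 token
    (access or refresh). Revocation -- not a password reset or a
    session kill -- is what actually invalidates the grant."""
    data = parse.urlencode({"token": token}).encode()
    return request.Request(
        REVOKE_ENDPOINT,
        data=data,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        method="POST",
    )

# Placeholder token for illustration; sending the request is as simple as
# urllib.request.urlopen(req) -- Google returns HTTP 200 on success.
req = build_revoke_request("stolen-refresh-token-example")
print(req.full_url, req.method)
```

In a real incident you would pull the offending token from the Admin Console or audit logs rather than hardcoding it.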

As Rajan put it, $5 and two hours is the gap between your organization and a compromised mailbox.

Three recommendations:

  • Restrict third-party app access. Set API controls to "restricted" in the Google Admin Console and maintain an allowlist of reviewed apps.
  • Get visibility into new OAuth grants. Monitor OAuth consent events so new grants surface quickly and unusual activity stands out.
  • Audit and revoke existing grants. Every dormant OAuth token is a potential persistence mechanism, so make sure to clean up continuously.
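The third recommendation boils down to a dormancy check. Here's a sketch, assuming you've already exported grants (for example, from the Admin SDK's token listing) into records with a client ID and a last-used timestamp; the field names and the 90-day cutoff are illustrative assumptions:

```python
from datetime import datetime, timedelta, timezone

DORMANCY_THRESHOLD = timedelta(days=90)  # illustrative cutoff

def find_dormant_grants(grants, now=None):
    """Return client IDs of OAuth grants not used within the threshold.
    Each grant is a dict with 'client_id' and 'last_used' (a datetime);
    this schema is an assumption for the sketch."""
    now = now or datetime.now(timezone.utc)
    return [
        g["client_id"]
        for g in grants
        if now - g["last_used"] > DORMANCY_THRESHOLD
    ]

# Sample data: one recently used grant, one untouched for months.
now = datetime(2026, 3, 1, tzinfo=timezone.utc)
grants = [
    {"client_id": "mail-sync-app",
     "last_used": datetime(2026, 2, 20, tzinfo=timezone.utc)},
    {"client_id": "old-crm-plugin",
     "last_used": datetime(2025, 6, 1, tzinfo=timezone.utc)},
]
print(find_dormant_grants(grants, now))  # ['old-crm-plugin']
```

Run on a schedule, anything this flags becomes a candidate for review and explicit revocation rather than a quiet persistence mechanism.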

Staying connected

The conversations after both talks were, as always, the best part of the evening. 

We'll keep building on this format. More events are coming, including a Happy Hour as part of NY Tech Week in June and a Paris edition later this year, and we'll share details as they're confirmed.

If you want to stay connected and be the first to hear about upcoming Dashlane events, sign up below to receive news and updates from Dashlane.
