Beyond the Engineering Team: How We Governed AI Coding for Everyone

AI tools mean everyone ships code, not just engineers. Here's how Dashlane built a 4-tier governance framework that keeps everyone accountable.

This is the second post in our series on AI adoption at Dashlane. The first post covered how we secured AI coding tools for our engineering team: Threat modeling, Dev Containers as an isolation layer, and MCP permission audits. This one covers how we approach coding for everyone else.

The first question most companies ask when rolling out AI is: Which AI tools do we allow? The next, more important question is: Who’s accountable when something goes wrong?

At Dashlane, engineers aren’t the only people producing code anymore. With AI tools, anybody can start coding their own prototype, tool, or script. A product manager can build a prototype to demonstrate a new feature to a customer, a sales engineer can ship a customer-facing script, and a customer support agent can build a tool to automate some of their daily work.

These people are making the same kind of decisions engineers make, but with less information about the risks. Clearly, we needed a governance framework that covered everyone. 

What we built is a four-tier system with one principle at its core: You own the output.

The principle: The human is accountable

Even if AI generates the code (or any other type of content), as a human, you’re accountable.

This applies regardless of role. If you share, deploy, or hand over an AI-assisted artifact, you’re responsible for every line it contains.

Our governance framework was created to make that accountability concrete and provide guidance to teams.

The four tiers of our governance framework

The framework assigns requirements based on two variables: Who will use the output, and what data it touches. Who created the code is irrelevant.

| Tier | What | Data touched |
|------|------|--------------|
| T1 Exploratory | Local prototypes and experiments | No real data |
| T2 Internal Tool | Employee-only scripts and automation | Any internal data |
| T3 External/Shared | Code shared outside Dashlane, POCs, customer-facing scripts | Varies |
| T4 Production | Product code, infrastructure, public GitHub repos | Anything |

These tiers determine which guardrails apply. 
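The tier assignment boils down to a small decision procedure over those two variables. As a minimal sketch (the names and the `classify` helper are illustrative, not part of our actual tooling):

```python
from enum import Enum

class Tier(Enum):
    T1_EXPLORATORY = 1
    T2_INTERNAL_TOOL = 2
    T3_EXTERNAL_SHARED = 3
    T4_PRODUCTION = 4

def classify(audience: str, touches_real_data: bool, is_production: bool = False) -> Tier:
    """Assign a tier from who uses the output and what data it touches.

    audience: "self", "internal" (employees only), or "external".
    Note: who *created* the code plays no role here.
    """
    if is_production:
        return Tier.T4_PRODUCTION          # product code, infra, public repos
    if audience == "external":
        return Tier.T3_EXTERNAL_SHARED     # anything leaving the company
    if audience == "internal" and touches_real_data:
        return Tier.T2_INTERNAL_TOOL       # employee-facing tools on real data
    return Tier.T1_EXPLORATORY             # local prototypes, no real data
```

Production status dominates the other inputs: once code ships in the product, the audience question no longer matters.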

The baseline for all tiers

A common set of rules applies across all tiers:

  • You own the output. Again, even if AI generates it, you’re accountable. Review everything before you use, share, or deploy it.
  • Never put secrets in prompts. No API keys, tokens, passwords, or credentials ever.
  • Never put customer data in prompts. No personally identifiable information (PII), sensitive data, or anything covered by our internal “Client and User Data Acceptable Use Policy.”
  • Use only approved AI tools. And follow our internal AI policy.
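The secrets rule above is also easy to enforce mechanically before a prompt ever leaves the machine. Here's a minimal sketch of such a pre-prompt check; the patterns are illustrative only, and a real deployment would use a dedicated scanner with a far richer rule set:

```python
import re

# Illustrative patterns only: a real scanner covers many more credential shapes.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                     # AWS access key ID shape
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),   # PEM private key header
    re.compile(r"(?i)(api[_-]?key|token|passw(or)?d)\s*[:=]\s*\S+"),  # key=value style
]

def prompt_is_safe(prompt: str) -> bool:
    """Return False if the prompt appears to contain a credential."""
    return not any(p.search(prompt) for p in SECRET_PATTERNS)
```

A check like this makes a good last line of defense, but it complements rather than replaces the human rule: pattern matching can't catch every secret, so "never put secrets in prompts" remains the policy.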

Requirements at a glance

On top of those shared rules, requirements increase as you go through the tiers.

| Requirement | T1 Exploratory | T2 Internal Tool | T3 External/Shared | T4 Production |
|-------------|----------------|------------------|--------------------|---------------|
| Code ownership and maintenance | ✅ | ✅ | ✅ | ✅ |
| Human review of output | ✅ | ✅ | ✅ | ✅ |
| Approved AI tools only | ✅ | ✅ | ✅ | ✅ |
| No secrets or sensitive data in prompts | ✅ | ✅ | ✅ | ✅ |
| Engineering code review | ❌ | ✅ | ✅ | ✅ |
| Engineer sponsor | ❌ | ⚠️ | ✅ | ✅ |
| SAST and secret detection | ❌ | ✅ | ✅ | ✅ |
| Dependency scanning | ❌ | ✅ | ✅ | ✅ |
| Full SSDLC | ❌ | ❌ | ❌ | ✅ |
| Quality gates | ❌ | ❌ | ❌ | ✅ |

✅ required · ⚠️ recommended · ❌ not required
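Because each requirement simply kicks in at a minimum tier, the whole matrix can be encoded as a small lookup table. The sketch below is a hypothetical encoding for illustration (the names and the `requirements_for` helper are not our real tooling), treating the one "recommended" cell as required from the next tier up:

```python
# Minimum tier (1-4) at which each requirement becomes mandatory.
REQUIRED_FROM_TIER = {
    "code_ownership": 1,
    "human_review": 1,
    "approved_tools_only": 1,
    "no_secrets_in_prompts": 1,
    "engineering_code_review": 2,
    "sast_and_secret_detection": 2,
    "dependency_scanning": 2,
    "engineer_sponsor": 3,   # recommended at T2, required from T3 up
    "full_ssdlc": 4,
    "quality_gates": 4,
}

def requirements_for(tier: int) -> set[str]:
    """Return the set of mandatory requirements for a given tier (1-4)."""
    return {name for name, min_tier in REQUIRED_FROM_TIER.items() if tier >= min_tier}
```

The monotonic structure is the point: moving up a tier only ever adds requirements, never removes them, so a self-assessment can't accidentally relax a guardrail by reclassifying.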

Key takeaways

Securing AI for engineering alone isn't sufficient. The risk surface expands the moment non-engineers start producing AI-assisted artifacts, and they already are. A governance framework that stops at the engineering boundary leaves most of the company uncovered.

This four-tier model gives every person a self-assessment they can run easily. The accountability principle gives them a mental model that holds even when the rules don't cover a specific situation: You generate it, you own it.

Now that we have a clear framework, we also have the foundation to make it a structural part of our workflows and tools. That way, following the rules doesn't depend on goodwill alone; it becomes embedded in our practices.

Stay tuned for the next installment in our AI adoption blog series.
