Beyond the Engineering Team: How We Governed AI Coding for Everyone

This is the second post in our series on AI adoption at Dashlane. The first post covered how we secured AI coding tools for our engineering team: threat modeling, Dev Containers as an isolation layer, and MCP permission audits. This one covers how we govern AI coding for everyone else.
The first question most companies ask when rolling out AI is: Which AI tools do we allow? The next, more important question is: Who’s accountable when something goes wrong?
At Dashlane, engineers aren’t the only people producing code anymore. With AI tools, anybody can start coding their own prototype, tool, or script. A product manager can build a prototype to demonstrate a new feature to a customer, a sales engineer can ship a customer-facing script, and a customer support agent can build a tool to automate some of their daily work.
These people are making the same kinds of decisions engineers make, but with less information about the risks. Clearly, we needed a governance framework that covered everyone.
What we built is a four-tier system with one principle at its core: You own the output.
The principle: The human is accountable
Even if AI generates the code (or any other type of content), you, the human, are accountable.
This applies regardless of role. If you share, deploy, or hand over an AI-assisted artifact, you’re responsible for every line it contains.
We created our governance framework to make that accountability concrete and to give teams clear guidance.
The four tiers of our governance framework
The framework assigns requirements based on two variables: Who will use the output, and what data it touches. Who created the code is irrelevant.

These tiers determine which guardrails apply.
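
To make the self-assessment concrete, here's a minimal sketch of the two-variable decision logic in Python. The audience and data categories and the tier mapping below are illustrative assumptions for the sketch, not our exact internal definitions.

```python
# Illustrative sketch of the two-variable tier self-assessment.
# Category names and the 1-4 mapping are assumptions, not exact policy wording.

# Who will use the output? (ordered from lowest to highest exposure)
AUDIENCES = ["just_me", "my_team", "company_wide", "external_or_customer_facing"]
# What data does it touch? (ordered from lowest to highest sensitivity)
DATA = ["none_or_public", "internal", "customer_or_sensitive"]

def assess_tier(audience: str, data: str) -> int:
    """Return a governance tier from 1 (lightest) to 4 (strictest)."""
    audience_risk = AUDIENCES.index(audience)  # 0..3
    data_risk = DATA.index(data)               # 0..2
    # Whichever dimension carries more risk drives the tier.
    # Who *wrote* the code is deliberately not an input.
    return 1 + max(audience_risk, data_risk)

print(assess_tier("just_me", "none_or_public"))                # -> 1
print(assess_tier("external_or_customer_facing", "internal"))  # -> 4
```

The point of the sketch is the shape of the check: two questions, one answer, and no dependence on the author's role.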
The baseline for all tiers
A set of common rules applies to all tiers:
- You own the output. Again, even if AI generates it, you’re accountable. Review everything before you use, share, or deploy it.
- Never put secrets in prompts. No API keys, tokens, passwords, or credentials, ever. (One way to check this automatically is sketched after this list.)
- Never put customer data in prompts. No personally identifiable information (PII), sensitive data, or anything covered by our internal “Client and User Data Acceptable Use Policy.”
- Use only approved AI tools. And follow our internal AI policy.
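
The "never put secrets in prompts" rule is also the easiest one to check mechanically. Here's a minimal sketch of a pre-flight scan a script could run before sending a prompt to any AI tool. The patterns are generic examples, not our actual tooling, and a real setup would lean on a maintained secret scanner.

```python
import re
import sys

# Generic patterns for strings that commonly look like credentials.
# Illustrative only; a real check would use a maintained secret scanner.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key ID shape
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # private key header
    re.compile(r"(?i)\b(api[_-]?key|token|password)\b\s*[:=]\s*\S+"),
]

def looks_safe(prompt: str) -> bool:
    """Return False if the prompt appears to contain a secret."""
    return not any(pattern.search(prompt) for pattern in SECRET_PATTERNS)

if __name__ == "__main__":
    prompt = sys.stdin.read()
    if not looks_safe(prompt):
        sys.exit("Refusing to send: prompt appears to contain a secret.")
    print("Pre-flight check passed.")
```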
Requirements at a glance
On top of those shared rules, requirements increase as you go through the tiers.

Key takeaways
Securing AI for engineering alone isn't sufficient. The risk surface expands the moment non-engineers start producing AI-assisted artifacts, and those artifacts are now everywhere. A governance framework that stops at the engineering boundary leaves most of the company uncovered.
This four-tier model gives every person a self-assessment they can run quickly. The accountability principle gives them a mental model that holds even when the rules don't cover a specific situation: You generate it, you own it.
A clear framework also gives us the foundation to make governance a structural part of our workflows and tools, so following the rules doesn't depend on goodwill alone; it becomes embedded in our practices.
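
As a purely hypothetical illustration of what "structural" can look like, a repository could declare its tier in a small metadata file that a CI step validates before anything ships. The file name, format, and check below are assumptions for the sketch, not something we've built or published.

```python
# Hypothetical CI step: fail the build if an AI-assisted artifact's
# tier isn't declared. File name and format are assumptions for this sketch.
import json
import pathlib
import sys

TIER_FILE = pathlib.Path("ai-governance.json")  # e.g. {"tier": 3}

def check_tier_declaration() -> None:
    if not TIER_FILE.exists():
        sys.exit("Missing ai-governance.json: declare the artifact's tier before merging.")
    tier = json.loads(TIER_FILE.read_text()).get("tier")
    if tier not in (1, 2, 3, 4):
        sys.exit(f"Invalid tier {tier!r}: expected an integer from 1 to 4.")
    print(f"Tier {tier} declared; the matching guardrails apply.")

if __name__ == "__main__":
    check_tier_declaration()
```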
Stay tuned for the next installment in our AI adoption blog series.