AI Code Governance
Scan → Fix → Certify → Govern

Your AI-built code works.
Prove it's safe.

LaunchGuard scans AI-built apps for launch blockers and helps you fix them. VertaAI Governance enforces declared specs across your team — pre-flight checks, PR gates, and production drift detection. From solo builder to enterprise team.

Scan Your App →
Works with:
🤖 Claude Code
🐙 Copilot
Cursor
🌊 Windsurf
🔮 Augment
18 Launch Checks · 56 PR Comparators · 5 AI Editors · 0 AI Hallucinations
The Problem

AI made building cheap.
It didn't make launching safe.

Whether you're a solo builder shipping your first AI-built app, or a team of 50 developers using AI tools across a dozen services — the same gap exists: nobody can prove the code is safe, governed, and launch-ready.

🚀

Builders Can't Verify

You built it with Cursor or Claude Code. It works. But is auth enforced? Are secrets exposed? Is Supabase RLS configured? You don't know — and neither does your AI.

👥

Teams Can't Govern

Developer A's AI writes IAM code. Developer B doesn't know. Your engineering manager has no dashboard, no approval queue, no audit trail across 3 tools and 5 developers.

📊

Nobody Can Prove It

When the investor asks “is this secure?” or the auditor asks “who authorized this?” — you can't answer. There's no evidence trail from AI prompt to production.

See It In Action

Your developer's AI tries to
write IAM code.

Here's what happens — automatically, with no developer action required.

Claude Code session — payment-service
Developer: “Add an IAM role for the deploy pipeline”
Pre-flight check fires automatically...
✗ iam_modify is BLOCKED by workspace policy
  3 developers on your team declared this last week.
  alice@corp scoped it to iam:user-service-role only.
→ Engineering manager receives notification on dashboard
→ Manager clicks [Approve] — developer's editor updates in seconds
→ Decision recorded in audit trail with policy version hash

The edit was blocked before it reached the filesystem. The team knowledge was injected. The manager approved. The audit trail is complete. Total time: 90 seconds. No CLAUDE.md could do this.

How It Works

Four stages.
One governance loop.

From the developer's first keystroke to production CloudTrail — every stage checks the same declared spec.

Governance pipeline: Start (Open Editor) → Pre-flight (Permission Check) → Live Scanner → PR Gate (56 Checks) → Production (Drift Monitor)
Pre-flight
Blocks unauthorized code before it's written
Before the AI writes infrastructure code, VertaAI checks: is this capability allowed by your org's policy? Has this service declared it? If not, the AI gets advisory context from your team's prior declarations — not a wall, but institutional memory.
  • System-level hook blocks policy-prohibited capabilities
  • Advisory context for undeclared capabilities: “3 developers declared this last week”
  • Approved waivers automatically override blocks
  • Risk level computed from service tier + capability sensitivity
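The decision logic above can be sketched in a few lines. This is a minimal illustration, not VertaAI's actual implementation — the tier names, sensitivity weights, and risk formula are assumptions:

```python
# Hypothetical sketch of a pre-flight capability check.
# Tier names, sensitivity weights, and the risk formula are illustrative.

BLOCKED, DECLARE_FIRST, ALLOWED = "blocked", "declare_first", "allowed"

POLICY = {"iam_modify": BLOCKED, "s3_write": DECLARE_FIRST, "db_read": ALLOWED}
SENSITIVITY = {"iam_modify": 3, "s3_write": 2, "db_read": 1}

def preflight(capability, declared, waivers, service_tier):
    """Decide what happens before the AI writes code using this capability."""
    tier = POLICY.get(capability, DECLARE_FIRST)
    # Risk level computed from service tier x capability sensitivity.
    risk = service_tier * SENSITIVITY.get(capability, 2)
    if capability in waivers:            # approved waiver overrides a block
        return {"action": "allow", "reason": "waiver", "risk": risk}
    if tier == BLOCKED:
        return {"action": "block", "reason": "workspace policy", "risk": risk}
    if tier == DECLARE_FIRST and capability not in declared:
        return {"action": "advise", "reason": "undeclared", "risk": risk}
    return {"action": "allow", "reason": "policy", "risk": risk}
```

For example, `preflight("iam_modify", set(), set(), service_tier=3)` blocks, while the same call with an approved `iam_modify` waiver allows.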
Live Scanner
Detects capabilities as the developer codes
On every file save, 14+ patterns detect S3 calls, IAM changes, database writes, and more. Each detection is compared against the declared spec in real time. A coverage metric shows what percentage of detected capabilities have been governance-checked.
  • 14+ patterns: S3, IAM, Secrets, Prisma, raw SQL, and more
  • Coverage metric: “89% of capabilities checked”
  • Session budget: soft cap visible to your manager, not a hard block
  • Coding-time hints: “Tests required for db_write” surfaced before PR
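In spirit, the scanner's save-time detection and coverage metric look something like this sketch. The patterns and function names here are placeholders — the real 14+ patterns aren't shown on this page:

```python
import re

# Illustrative detection patterns; the actual scanner's rule set is larger.
PATTERNS = {
    "s3_write": re.compile(r"s3\.(put_object|upload_file)"),
    "iam_modify": re.compile(r"iam\.(attach_role_policy|create_role)"),
    "db_write": re.compile(r"prisma\.\w+\.(create|update|delete)"),
}

def scan_file(source: str) -> set:
    """Return the capability types detected in one saved file."""
    return {cap for cap, pat in PATTERNS.items() if pat.search(source)}

def coverage(detected: set, checked: set) -> float:
    """Percentage of detected capabilities that have been governance-checked."""
    if not detected:
        return 100.0
    return 100.0 * len(detected & checked) / len(detected)
```

Saving a file that calls `s3.put_object` would register an `s3_write` capability; if only half of the detected capabilities have been checked, the coverage metric reads 50%.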
PR Gate
56 deterministic comparators on every PR
When a PR opens, 56 comparators evaluate the diff. No LLM — structural analysis only. Declared capabilities verified against actual code. Policy-blocked declarations flagged. Results posted as a GitHub Check with configurable name and conclusion mapping.
  • Security: secrets, auth, PII, hardcoded URLs
  • Architecture: service boundaries, import direction, schema stability
  • Agent quality: debug output, N+1 queries, duplicate abstractions
  • Declared spec vs actual code — with policy cross-reference
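To make "deterministic, no LLM" concrete, here is one hypothetical comparator in that style — a structural hardcoded-secret check over added diff lines. The rule name and regex are illustrative, not one of the actual 56:

```python
import re

# A structural check on added ("+") diff lines: flag what looks like a
# hardcoded credential. Pure pattern matching, no LLM in the loop.
SECRET_RE = re.compile(
    r"""(api[_-]?key|secret|token)\s*[:=]\s*['"][^'"]{8,}['"]""", re.I
)

def check_secrets(diff_lines):
    """Return a GitHub-Check-style result for one comparator."""
    findings = []
    for n, line in enumerate(diff_lines, 1):
        if line.startswith("+") and SECRET_RE.search(line):
            findings.append({"line": n, "rule": "hardcoded_secret"})
    return {"conclusion": "failure" if findings else "success",
            "findings": findings}
```

Because the check is structural, the same diff always yields the same conclusion — which is what makes the result safe to post as a gating GitHub Check.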
Production Monitor
Compares production against the declared spec
After deployment, VertaAI ingests CloudTrail, GCP Audit Logs, and database query logs. Compares what's actually running against what was declared. If production shows S3 writes to a bucket nobody authorized — that's drift. Closes the Spec→Build→Run loop.
  • CloudTrail, GCP Audit Logs, DB query log ingestion
  • Materiality classification: critical / operational / petty
  • Bi-directional feedback: drift alerts go back to the declaring developer
  • One-click “Update declaration” closes the loop
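The core drift comparison is a set difference between what runtime logs show and what was declared. A minimal sketch, assuming a simplified stand-in for parsed CloudTrail records and illustrative materiality rules:

```python
def detect_drift(declared: set, observed_events: list) -> list:
    """Compare runtime audit-log events against the declared spec.

    `observed_events` is a simplified stand-in for parsed CloudTrail
    records; the materiality classification here is illustrative.
    """
    CRITICAL = {"iam_modify", "secret_write", "s3_write"}
    drift = []
    for event in observed_events:
        cap = event["capability"]
        if cap not in declared:  # running in prod, but never declared
            drift.append({
                "capability": cap,
                "resource": event.get("resource"),
                "materiality": "critical" if cap in CRITICAL
                               else "operational",
            })
    return drift
```

An S3 write to a bucket nobody authorized shows up as a critical drift item, which is what routes the alert back to the declaring developer.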
Why Not Just CLAUDE.md?

Instructions vs.
enforcement.

CLAUDE.md tells one AI what to do in one session. VertaAI tells you what all your AIs actually did — across every session, every PR, and in production.

Can you… — compared across CLAUDE.md, Code Review, Snyk / SAST, and VertaAI:
  • Block IAM writes before the AI generates the code?
  • Tell which AI agent wrote this code and who approved it?
  • See what all 5 AI tools did to your codebase this week?
  • Check if production matches what was declared?
  • Give your auditor a compliance report per service?
  • Report a governance coverage score to your VP Eng?
  • Push policy changes to every developer in real-time?
  • See per-developer trust scores across your team?

Your code never leaves your machine. VertaAI sees capability types and file paths — not source code. The pre-flight hook runs locally. Track A reads PRs through GitHub's API.

Integrations

Every AI editor.
One MCP server.

Your team connects to a shared MCP server. Every AI tool gets the same governance policy, the same declared specs, the same team knowledge — no per-tool configuration.

🤖
Claude Code
Native MCP. Team-wide governance with JWT auth. Advisory context injected.
Cursor
MCP connection. Same policy, same specs, zero per-tool setup.
🌊
Windsurf
MCP connection. Governance travels with the developer, not the editor.
🔮
Augment
MCP connection. Policy enforcement + advisory context for every session.
🐙
Copilot
VS Code extension required (Copilot does not support MCP).
Two Products. One Journey.

Start as a builder.
Grow into a team.

LaunchGuard gets your app launch-ready. VertaAI Governance keeps your team governed as you scale. Same infrastructure. Natural upgrade path.

Stage 1 — Solo Builders & Small Teams

LaunchGuard

Scan your AI-built app for launch blockers. Fix them with AI-builder prompts. Get a shareable certification.

🔍 18 checks: secrets, auth, authorization, Supabase RLS
🔧 Copy-paste fix prompts for Cursor, Claude Code, or any AI tool
✅ Shareable certification page for investors and customers
📦 Supports: Next.js + Supabase + Stripe + Vercel
Try LaunchGuard →
Stage 2 — Growing Teams & Enterprises

VertaAI Governance

Govern AI-generated code across every developer and every tool. Pre-flight checks, PR gates, production monitoring.

🔒 Pre-flight blocks before code is written
⚖️ 56 deterministic PR comparators at merge time
📊 EM/VP/CISO dashboards with action cards
🌐 CloudTrail drift detection vs declared spec
See Governance →

Start with LaunchGuard (free scan). When your team grows, upgrade to Governance. Same GitHub connection. Same evidence engine.

Get Started

Governed in
2 minutes.

No extension required. No YAML to write. The declared spec is generated from your existing code.

Team Deployment (recommended)

1
Your EM configures the policy
Capability tiers (blocked / declare-first / allowed), session budgets, approval routing. One policy pack — governs all developers, all AI tools.
2
Each developer adds 1 line to their config
Add your team's MCP server URL to .claude/settings.json. Works with Claude Code, Cursor, Windsurf, Augment. No extension to install.
3
First session auto-scans your project
VertaAI detects capabilities from your existing code and auto-generates a declared spec. One-click confirm. Your AI agent is governed.
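Step 2's one-line change might look like the fragment below. The server URL and the exact key names are placeholders — your team's onboarding instructions have the real values and schema:

```json
{
  "mcpServers": {
    "vertaai": {
      "url": "https://mcp.your-team.example.com"
    }
  }
}
```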

Solo Developer / Evaluation

Install the free VS Code extension from the marketplace. Same auto-scan flow. Also supports GitHub Copilot (which does not support MCP).

For Engineering Managers

One number
to report upward.

AI Governance Coverage Score: what percentage of your AI-active services are governed. Per-developer trust scores. Compliance export for your auditor. Policy changes propagate to every connected developer in real-time.

📊

Governance Coverage Score

One number: what percentage of AI-active services have a current, policy-compliant declared spec with no open critical drift. Report it to your VP Eng. Watch it go up as you onboard more services.

📈

Trust Scores

Per-developer, per-service trust score (0-100). Builds from clean PR history, accurate declarations, and absence of drift. See who's most reliable. Spot declining trust before it becomes a problem.

📄

Compliance Export

One-click structured document for your SOC 2 auditor: declaration history, PR gate decisions, runtime drift events, approved exceptions. Per-service, per-time-window. JSON or markdown.

📋

Policy Pack Wizard

Configure capability tiers, session budgets, PR gate rules, and approval routing. One policy — governs every developer, every AI tool. Changes propagate to connected editors in real-time via SSE.

Security Model

15 capability types.
Three enforcement tiers.

Every cloud and infrastructure action your AI agent can take is classified into one of three tiers. Your engineering manager controls which tier each capability lives in via the policy pack.

🚫 Blocked (5): iam_modify · secret_write · db_admin · infra_delete · deployment_modify

⚠ Declare First (7): s3_delete · s3_write · schema_modify · network_public · infra_create · infra_modify · secret_read

✅ Always Allowed (3): db_read · s3_read · api_endpoint

All tiers are configurable via the policy pack. Your EM can move any capability between tiers without editing code. Changes propagate to every connected developer in seconds.
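A policy pack along these lines is just data, which is why moving a capability between tiers is a config edit rather than a code change. The field names below are assumptions, not the real schema:

```python
# Hypothetical policy-pack shape; field names are illustrative.
policy_pack = {
    "version": "2024-06-01",
    "tiers": {
        "blocked": ["iam_modify", "secret_write", "db_admin",
                    "infra_delete", "deployment_modify"],
        "declare_first": ["s3_delete", "s3_write", "schema_modify",
                          "network_public", "infra_create",
                          "infra_modify", "secret_read"],
        "allowed": ["db_read", "s3_read", "api_endpoint"],
    },
}

def move_capability(pack: dict, capability: str, new_tier: str) -> dict:
    """Move a capability between tiers — a config edit, not a code change."""
    for caps in pack["tiers"].values():
        if capability in caps:
            caps.remove(capability)
    pack["tiers"][new_tier].append(capability)
    return pack
```

For example, an EM who wants `secret_read` fully blocked would call `move_capability(policy_pack, "secret_read", "blocked")` and let the change propagate to connected editors.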

Who It's For

Built for the team,
not the individual.

Tech Lead

Speed without
the blast radius.

Know exactly who wrote what. 56 deterministic comparators on every PR. Quality score (0-100) across 5 governance dimensions. No LLM in the governance loop.

Zero ungoverned PRs
Engineering Manager

One number
to report upward.

AI Governance Coverage Score. Per-developer trust trajectories. Compliance export for the auditor. Policy changes propagate in real-time across all AI tools.

Measurable governance
CISO / Security

The audit trail
you actually need.

5 critical capabilities blocked at the system level. CloudTrail drift detection. Every declaration stamped with the policy version. Compliance export per service per time window.

Deterministic audit trail
56 deterministic comparators · 10 MCP governance tools · 5 AI editors supported · 15 capability types tracked

Private beta. We're onboarding engineering teams deploying AI coding agents at scale.

Early Access

Ready to govern what your AI agents build?

VertaAI is in private beta. We're onboarding engineering teams that need real enforcement — not just instructions — for their AI coding agents.

Request Early Access →
Book a Demo