v0.4 — open source, actively developed safety harness for coding agents

Your AI agent is fast. Make it safe.

Auto-review before every merge. Dangerous ops blocked. Plan first, code second. One command — full safety harness for any coding agent.
Works with Claude Code

the problem
AI agents ship code fast. But without guardrails, they ship bugs faster.

The numbers don't lie. Stanford, 6K sessions.

9×
“Vibe coding introduces 9× more security vulnerabilities than manual coding.”
0.76 vulnerabilities per 1,000 lines. SQL injection, command injection, path traversal — agents reproduce what's in training data.
56%
“56% of AI-generated code gets thrown away or rewritten by humans.”
Only 44% of agent-written code survives to commit. Planning before coding cuts waste by 3×.
1.1%
“AI agents ask for clarification in only 1.1% of interactions. The rest — they guess.”
Agents confidently write wrong code instead of asking. Your review catches it — or nobody does.
3×
“Collaborative mode is 3× cheaper than full autonomy — and produces fewer bugs.”
Full vibe coding: $0.13 per 100 lines. Collaborative: $0.05. GYRD nudges agents toward the efficient mode.
Without harness
  • 9× more vulnerabilities (Stanford, 6K sessions)
  • 56% of AI code thrown away
  • Agent guesses instead of asking (1.1% clarification)
  • No review before merge — bugs ship to prod
With GYRD
  • Dangerous ops blocked. Security patterns checked.
  • Plan first → 3× less waste, 3× cheaper
  • Agent asks before acting on ambiguous tasks
  • Auto-review on every merge — three-angle check
what you get

one command. full safety harness.

GYRD doesn't replace your AI tools. It's the safety layer between the agent and your codebase — built on research, not vibes.

automatic code review

Three-angle review runs before every merge — break it, understand it, secure it. Plain language findings with concrete fixes. Zero manual trigger.

dangerous ops blocked

DROP TABLE, rm -rf, shell=True, force push — explicitly banned. 10 CWE patterns with safe alternatives. Because instructions alone don't stop agents.
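The blocklist idea can be sketched in a few lines of Python. This is an illustrative sketch only, not GYRD's actual implementation — the patterns, advice strings, and function name are assumptions for the example.

```python
import re

# Illustrative dangerous-operation patterns (a small subset; GYRD's real
# blocklist and CWE mappings may differ).
DANGEROUS_PATTERNS = [
    (r"\bDROP\s+TABLE\b", "destructive SQL: use a reviewed migration instead"),
    (r"\brm\s+-rf\b", "recursive force delete: target explicit paths, keep backups"),
    (r"shell\s*=\s*True", "shell injection risk (CWE-78): pass an argument list"),
    (r"push\s+--force\b", "force push: prefer --force-with-lease"),
]

def check_snippet(snippet: str) -> list[str]:
    """Return a plain-language finding for each dangerous pattern matched."""
    findings = []
    for pattern, advice in DANGEROUS_PATTERNS:
        if re.search(pattern, snippet, flags=re.IGNORECASE):
            findings.append(advice)
    return findings

# A pre-commit hook would reject the change when any finding is returned.
print(check_snippet("subprocess.run(cmd, shell=True)"))
```

The point of pattern-level blocking, as opposed to prompt instructions, is that it runs outside the agent: the agent cannot talk its way past a hook that rejects the commit.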

plan first, code second

Agent discusses the plan before writing code. 3× cheaper, 3× less waste. Collaborative mode beats full autonomy on every metric.

living updates

gyrd update --check shows which rules need updating and why. New model? Changed tool? GYRD maps ecosystem changes to your specific rules.

“I didn't understand what I was doing before doing it.”
— an AI agent, after deleting a production database
team features

one config. whole team aligned.

When your team uses GYRD, every agent — on every machine — follows the same conventions. No more “works on my machine” for AI.

shared conventions

Commit CLAUDE.md and .cursor/rules/ to your repo. Every teammate's agent inherits the same rules — code style, review process, security.

shared context

Agents share PROGRESS.md and DECISIONS.md — status, blockers, architectural decisions. Your PM's agent knows what the dev's agent just shipped.

shared memory (coming soon)

Persistent memory across machines — patterns learned, mistakes avoided. Your team's AI gets smarter with every session.

how it works
01
npx @gyrd/cli init
Pick your preset and stack. GYRD generates CLAUDE.md, .cursorrules, agents, hooks — for every tool at once. 30 seconds.
02
code with AI
Your agents now follow your rules. Pre-commit hooks catch mistakes. Code review runs automatically. You ship faster.
03
gyrd update
Claude changed? Cursor updated? We ship new rules. One command — your setup stays current. No manual maintenance.
get started

stop configuring. start building.

30 sec to full setup
9 agents for review + security
7+ tools: Claude, Cursor, Codex...
$0: open source, MIT license, free forever
FAQ

common questions

I already set up my AI agent rules. Why switch?
Hand-written rules rot in a month — models change, tools update, your rules don't. GYRD adds security patterns, auto-review, dangerous ops blocklist, and gyrd update to keep everything current. Works with Claude Code, Cursor, Codex, Cline — one config for all your tools. Your custom rules stay untouched.
I'm not a developer. Can I use this?
Yes — that's the point. If you build with any AI coding agent, GYRD gives you the safety net that experienced developers set up manually. Auto-review catches bugs in plain language. Dangerous ops get blocked before they happen. You don't need to know what shell=True means — GYRD blocks it for you.
How is this different from built-in security tools?
Built-in security scanners (like Claude Code Security or GitHub code scanning) find vulnerabilities after code is written — post-hoc, often enterprise-only. GYRD works during coding: plan-first workflow, auto-review before merge, dangerous patterns blocked in real time. Free, works with any agent, any repo. Prevention > detection.
Which AI coding tools does it support?
Claude Code, Cursor, GitHub Copilot, Codex CLI, Cline, Windsurf, and any tool that reads CLAUDE.md, AGENTS.md, or .cursor/rules/. One gyrd.toml generates configs for all of them at once.
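To make the one-config-many-tools idea concrete, here is a hypothetical gyrd.toml sketch. The keys and values below are assumptions for illustration — GYRD's documented schema may look different; only the file name (gyrd.toml) and the generated targets (CLAUDE.md, AGENTS.md, .cursor/rules/) come from this page.

```toml
# Hypothetical gyrd.toml — illustrative only, not the documented schema.
[project]
preset = "strict"
stack = ["typescript", "postgres"]

# One source of truth, many generated configs.
[targets]
claude_code = "CLAUDE.md"
agents = "AGENTS.md"
cursor = ".cursor/rules/"
```

Whatever the exact schema, the design is the same: edit one file, regenerate, and every tool's agent reads the same rules.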
Will it slow me down?
The opposite. The Stanford study shows planning before coding is 3× cheaper and produces 3× less waste. Auto-review catches bugs before they ship — not after. 30 seconds on gyrd init saves hours of debugging AI-generated code.
Is it free?
Yes. Open source, MIT license, free forever. Premium team features (memory sync, audit trail) are planned — the core safety harness stays free.