Show HN: Plan-linter – pre-flight safety checker for AI agent plans


We just released plan-lint – a tiny static-analysis tool that catches "obvious-stupid" failures in AI agent plans before they reach runtime.

GitHub repo → https://github.com/cirbuk/plan-lint

There's also a write-up on how we approach safety with a 4-step safety stack (“No Safe Words”) → https://mercurialsolo.substack.com/p/no-safe-words

Why?

Agents now emit machine-readable JSON/DSL plans. Most prod incidents (runaway loops, privilege escalation, raw secrets in tool arguments) could have been caught by scanning those plans offline, yet most of the attention goes to runtime guardrails.
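For a sense of what gets scanned, here's a hypothetical plan in that style – the schema, tool names, and values are invented for illustration and are not plan-lint's actual format. It hides two of the failures listed below: a raw API key in the args and a step cycle (s1 → s2 → s1):

```json
{
  "goal": "refund customer order",
  "steps": [
    {"id": "s1", "tool": "db.query",
     "args": {"sql": "SELECT * FROM orders WHERE id = 42"},
     "next": ["s2"]},
    {"id": "s2", "tool": "payments.refund",
     "args": {"api_key": "sk-abc123def456ghi789", "amount": 99999},
     "next": ["s1"]}
  ]
}
```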

What it does (each check is sketched below the list)

* Schema + policy validation (JSONSchema / YAML / OPA)

* Data-flow + taint checks for secrets & PII

* Loop detection (graph cycle)

* Risk score 0-1, fail threshold configurable

* Plugin rules via entry_points
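Rough, illustrative sketches of each check follow; all schemas, patterns, weights, and group names here are assumptions, not plan-lint's actual API. First, schema validation, shown with the `jsonschema` package:

```python
import jsonschema

# Invented plan schema for illustration: every step needs an id and a tool.
PLAN_SCHEMA = {
    "type": "object",
    "required": ["steps"],
    "properties": {
        "steps": {
            "type": "array",
            "items": {
                "type": "object",
                "required": ["id", "tool"],
                "properties": {
                    "id": {"type": "string"},
                    "tool": {"type": "string"},
                    "args": {"type": "object"},
                },
            },
        },
    },
}

def schema_violations(plan: dict) -> list[str]:
    """Return all schema violations; an empty list means the plan passes."""
    validator = jsonschema.Draft202012Validator(PLAN_SCHEMA)
    return [err.message for err in validator.iter_errors(plan)]
```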
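The taint check for raw secrets can be as simple as walking every string value in the plan and matching known secret shapes (the patterns here are a tiny illustrative subset):

```python
import re

SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{16,}"),                 # OpenAI-style keys
    re.compile(r"AKIA[0-9A-Z]{16}"),                    # AWS access key IDs
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # PEM private keys
]

def find_secrets(obj, path="$"):
    """Recursively scan plan values, yielding (json_path, matched_pattern)."""
    if isinstance(obj, dict):
        for key, value in obj.items():
            yield from find_secrets(value, f"{path}.{key}")
    elif isinstance(obj, list):
        for i, value in enumerate(obj):
            yield from find_secrets(value, f"{path}[{i}]")
    elif isinstance(obj, str):
        for pattern in SECRET_PATTERNS:
            if pattern.search(obj):
                yield path, pattern.pattern
```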
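Loop detection is textbook graph-cycle detection over the step graph; a three-color DFS is enough, assuming each step lists the ids it can jump to:

```python
def has_cycle(steps: dict[str, list[str]]) -> bool:
    """DFS cycle check. `steps` maps a step id to the ids it can jump to."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in steps}

    def visit(node: str) -> bool:
        color[node] = GRAY
        for succ in steps.get(node, []):
            if color.get(succ, WHITE) == GRAY:  # back edge => cycle
                return True
            if color.get(succ, WHITE) == WHITE and visit(succ):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in steps)

# The s1 -> s2 -> s1 plan from the example above is an unbounded loop:
assert has_cycle({"s1": ["s2"], "s2": ["s1"]})
assert not has_cycle({"s1": ["s2"], "s2": []})
```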
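The 0-1 risk score with a configurable fail threshold could be a clamped weighted sum over findings; the weights and default threshold below are invented:

```python
# Invented per-finding weights; a real tool would load these from config.
WEIGHTS = {"loop": 0.5, "raw_secret": 0.4, "privilege_escalation": 0.3}

def risk_score(findings: list[str]) -> float:
    """Aggregate finding kinds into a score clamped to [0, 1]."""
    return min(1.0, sum(WEIGHTS.get(kind, 0.1) for kind in findings))

def lint_result(findings: list[str], fail_threshold: float = 0.8) -> int:
    """CLI-style exit code: non-zero when the score crosses the threshold."""
    return 1 if risk_score(findings) >= fail_threshold else 0
```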
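And plugin rules via entry_points: installed packages register rule callables under a shared group, and the linter discovers them at startup with importlib.metadata (Python 3.10+ shown). The group name plan_lint.rules is a guess:

```python
from importlib.metadata import entry_points

def load_rules(group: str = "plan_lint.rules"):
    """Discover rule callables that installed plugins registered.

    A plugin would declare in its pyproject.toml:

        [project.entry-points."plan_lint.rules"]
        no_deletes = "my_pkg.rules:forbid_delete_steps"
    """
    return {ep.name: ep.load() for ep in entry_points(group=group)}
```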

Runs in <50 ms for 100-step plans, zero token cost.

How are you dealing with safety (budget overruns, token leaks) when deploying agents in prod with tool access?