Deterministic safety gate for robots and high-stakes agents.
On every pull request, Saykai reruns your scenarios and log replays, checks them against your Safety Spec (your versioned definition of "safe enough"), and blocks changes that fall below your safety bar.
For teams running agentic systems, physical AI, and other high-stakes software that cannot afford regressions.
Safety Spec = versioned rules • Safety Pack = JSON evidence per run
Teams ship faster than they can validate real-world behavior changes.
Saykai turns CI into the first safety line before anything reaches production hardware, money, or users.
How it fits into your pipeline.
Drop Saykai into your existing CI as a required check.
- Every PR runs safety scenarios and log replays
- Results are compared to your Safety Spec and baseline
- A Safety Pack explains the decision and what changed
- The PR passes or is blocked based on policy, not gut feel
Runs in CI. You control what data is included in scenarios and log replays.
> Drop in Saykai once, then every change is screened against the same safety bar.
Example check result:
- Latency regression: +0.2% (allowed)
- Safety Pack generated: sp-8a7b9c.json
Start with one CI integration.
Define your Safety Spec (saykai.yml)
Describe what "safe enough" means for one system. Scenarios, metrics, thresholds, and blocking rules live in version control with your code.
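A minimal sketch of what a saykai.yml could look like. The field names below (scenarios, metrics, policy, block_on) are illustrative assumptions, not Saykai's documented schema:

```yaml
# saykai.yml — illustrative Safety Spec sketch.
# Field names are assumptions, not Saykai's documented schema.
version: 1
system: warehouse-picker-arm

scenarios:
  - name: pallet-approach-replay
    source: logs/pallet_approach/          # replay of recorded runs
  - name: human-in-aisle
    source: scenarios/human_in_aisle.yaml  # synthetic scenario definition

metrics:
  min_obstacle_clearance_m:
    threshold: ">= 0.30"
  e_stop_latency_ms:
    threshold: "<= 150"
  task_success_rate:
    threshold: ">= 0.98"

policy:
  block_on: any_threshold_violation   # the PR is blocked if any threshold fails
  compare_to: baseline                # diffs are reported against the merge base
```

Because the spec lives next to the code, changes to the safety bar itself go through review like any other change.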
Install the GitHub Action
Run Saykai as a required check on every pull request in one critical repo.
If you are not on GitHub Actions, we will help you wire it in during the pilot.
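One way the wiring could look, assuming a published action and an API secret; the action reference, inputs, and secret name below are placeholders, not the real integration:

```yaml
# .github/workflows/saykai.yml — illustrative sketch.
# The action reference, inputs, and secret name are placeholders.
name: Saykai safety gate
on: [pull_request]

jobs:
  safety-gate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run Saykai safety check
        uses: saykai/safety-check@v1         # placeholder action reference
        with:
          spec: saykai.yml                   # Safety Spec in the repo root
          baseline: ${{ github.base_ref }}   # compare against the target branch
        env:
          SAYKAI_API_KEY: ${{ secrets.SAYKAI_API_KEY }}   # placeholder secret name
```

Marking the safety-gate job as a required status check in the repository's branch protection rules is what makes it blocking rather than advisory.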
Get a Safety Pack on every PR
Each run produces a JSON Safety Pack with metrics, diffs vs baseline, and a clear pass-or-block decision you can hand to safety and risk teams.
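The exact schema is Saykai's; a sketch of the shape such a pack might take (all field names here are assumptions) looks roughly like this:

```json
{
  "_note": "Illustrative sketch; field names are assumptions, not the real Safety Pack schema.",
  "safety_pack": "sp-8a7b9c",
  "decision": "pass",
  "spec": "saykai.yml",
  "baseline": "main",
  "metrics": [
    {
      "name": "p95_control_loop_latency_ms",
      "value": 41.2,
      "baseline_value": 41.1,
      "diff_pct": 0.2,
      "threshold": "regression <= 1%",
      "status": "allowed"
    },
    {
      "name": "min_obstacle_clearance_m",
      "value": 0.34,
      "baseline_value": 0.34,
      "diff_pct": 0.0,
      "threshold": ">= 0.30",
      "status": "pass"
    }
  ],
  "blocking_rules_triggered": []
}
```

Because the pack is plain JSON, it can be archived alongside the PR as an audit trail for safety and risk review.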
Private beta. Accepting a small number of pilot teams in industrial robotics and high-risk agent workflows.
Gate your next deployment.
The first step is simple: add saykai.yml and the Saykai CI check to one critical pipeline. From there, every change to that system is screened against the same deterministic safety bar.