canonry

$ canonry --help

The operating system for AEO.

Canonry's agent tracks how ChatGPT, Gemini, Claude, and Perplexity cite your pages, finds the gap, and ships the fix. All local; all open source.

$ brew install ainyc/canonry/canonry

Then run canonry init && canonry serve. Need another option? All install options →


Stack

Same client. Four entry points.

  • 30+ CLI commands · every command supports --format json
  • 118 REST API endpoints · self-served OpenAPI 3.1
  • 48 MCP tools · 5 toolkits, load on demand
  • 6 webhook events · signed, per-project

The browser provider captures the live ChatGPT, Gemini, and Claude UIs over CDP. Compatible with Claude Code, OpenAI Codex, Cursor, and the bundled Aero agent.

Built for agents

Observability. For humans.

[Screenshot: Canonry overview dashboard for ainyc.ai showing answer visibility 5/11, gap key phrases 1/11, index coverage 17%, and visibility breakdown by model]
Visibility · gap phrases · index coverage · run delta — per project.

Frequently asked

Common questions.

Definitions, how-tos, comparisons, and stack questions.

Definitions · What is... · 4 questions
  • What is AEO (Answer Engine Optimization)?

    AEO is the discipline of making sure your content shows up accurately and prominently in the synthesized answers produced by generative engines like ChatGPT, Gemini, Claude, and Perplexity. It is the successor to SEO: instead of optimizing for ten blue links, you optimize for the model's citation graph, retrieval surface, and the structured signals (JSON-LD, llms.txt, schema completeness) that decide whether a model trusts your page enough to cite it.
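
    One of those structured signals in practice is a schema.org JSON-LD block. A minimal sketch, with placeholder values rather than anything Canonry emits:

    $ cat snippets/faq.jsonld        # hypothetical file, shown for illustration
    {
      "@context": "https://schema.org",
      "@type": "FAQPage",
      "mainEntity": [{
        "@type": "Question",
        "name": "What is AEO?",
        "acceptedAnswer": { "@type": "Answer", "text": "AEO is the discipline of..." }
      }]
    }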

  • What is Canonry?

    Canonry is the open-source, agent-native operating system for AEO. It tracks how AI engines cite your site, sweeps standard SEO surfaces (Google Search Console, Bing, GA4), diagnoses regressions to the deploy that caused them, drafts schema patches, and re-verifies the fix. All of it runs through a typed tool surface designed to be called by a coding agent.

  • What is Aero?

    Aero is the bundled agent that ships with every Canonry install. It uses the exact same typed surface as Claude Code, Codex, or Cursor, just pre-wired with no setup. Aero is backed by pi-agent-core and works with 15+ LLM providers. Use it as a fallback when you don't want to bring your own coding agent.

  • What is the agent harness?

    The harness is the typed surface Canonry hands to a local coding agent: 30+ CLI commands, 118 REST endpoints, 48 MCP tools across 5 toolkits, and 6 webhook events. The CLI, REST API, MCP adapter, and webhook subscriber all wrap the same public client, so whatever your agent speaks, Canonry speaks back.
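
    A sketch of one lookup over two of those surfaces; the subcommand and endpoint path below are illustrative assumptions, not documented names:

    # CLI: JSON out, ready for any shell-driven agent
    $ canonry phrases list --project ainyc --format json

    # REST: the same client underneath (endpoint path is an assumption)
    $ curl "http://localhost:4100/api/phrases?project=ainyc"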

How it works · How does... · 5 questions
  • How does Canonry monitor AI citations?

    Canonry sweeps each provider's API on a schedule (Gemini, OpenAI, Anthropic, Perplexity), captures the answer and citation list per phrase, and stores results in local SQLite. For consumer surfaces where the API differs from the web UI (e.g. ChatGPT with web search), the CDP browser provider drives a real Chrome session against the live UI to capture citations as a user actually receives them.
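
    Because results land in local SQLite, you can audit a sweep with stock sqlite3; the table and column names here are assumptions about the schema, not documentation:

    $ sqlite3 ~/.canonry/data.db \
        "SELECT provider, phrase, cited FROM sweep_results ORDER BY created_at DESC LIMIT 10;"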

  • How do I install Canonry?

    Three options: Homebrew (brew install ainyc/canonry/canonry), npm (npm install -g @ainyc/canonry on Node 22+), or Docker (docker run --rm -p 4100:4100 -v canonry-data:/data arberx/canonry). After install, run canonry init to create the SQLite store at ~/.canonry/data.db, then canonry serve to start the dashboard at localhost:4100.
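
    Spelled out as a shell session:

    # Pick one installer:
    $ brew install ainyc/canonry/canonry
    $ npm install -g @ainyc/canonry            # Node 22+
    $ docker run --rm -p 4100:4100 -v canonry-data:/data arberx/canonry

    # Then create the store and start the dashboard:
    $ canonry init     # writes ~/.canonry/data.db
    $ canonry serve    # dashboard at localhost:4100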

  • How does Canonry fix regressions automatically?

    When a sweep completes, Canonry's bundled agent (or yours, via webhook) reads the result, identifies lost citations, calls get_evidence and diff_snapshot to trace the cause to a deploy, drafts a schema patch or content fix, calls request_indexing against Google Search Console, and schedules a verify sweep in 6 hours. You wake up to a triaged decision, not an inbox of alerts.
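
    As a rough CLI sketch of that loop (these subcommands are hypothetical stand-ins; get_evidence, diff_snapshot, and request_indexing are the real tool names):

    # Hypothetical commands, for shape only
    $ canonry evidence get --phrase "what is aeo" --format json     # cf. get_evidence
    $ canonry snapshot diff --before <run-id> --after <run-id>      # cf. diff_snapshot
    $ canonry gsc request-indexing https://example.com/fixed-page   # cf. request_indexing
    $ canonry sweep schedule --in 6h                                # the verify sweep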

  • How do I connect my coding agent?

    The recommended path is the CLI. Every command supports --format json so any shell-driven agent can pipe results in and out. For Claude Desktop, Cursor, and Codex you can also use the MCP adapter: run canonry mcp install --client claude-desktop (or --client cursor / --client codex) and the 48 tools register automatically.
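
    For example (the mcp install lines are as documented; the status subcommand in the first line is an illustrative placeholder):

    # Shell-driven agents: pipe JSON straight through
    $ canonry status --format json | jq '.'

    # MCP clients: registers all 48 tools automatically
    $ canonry mcp install --client claude-desktop
    $ canonry mcp install --client cursor
    $ canonry mcp install --client codex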

  • How is Canonry self-hosted?

    Canonry runs entirely on your machine or your server. The data store is a single SQLite file at ~/.canonry/data.db. Your disk, your backup, your data. No cloud account required, no per-prompt pricing, no shared multi-tenant database. Docker images are published to Docker Hub and GHCR; one-click Railway deploys are supported.
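
    Because the store is a single file, backup is one command with stock sqlite3 (the destination path is yours to choose):

    $ sqlite3 ~/.canonry/data.db ".backup '/backups/canonry-$(date +%F).db'"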

Comparisons · How is it different... · 3 questions
  • How is Canonry different from a CLI bolted onto a SaaS dashboard?

    Canonry is agent-native: it was built from the ground up to be driven by agents. Every CLI command supports --format json, every dashboard view has a matching REST endpoint, the MCP adapter exposes the same surface to Claude Desktop, Cursor, and Codex, and webhooks let any agent wake up unprompted. Legacy AEO products added a CLI later. Canonry started there.

  • How is Canonry different from other AEO tools?

    Canonry is source-available, self-hosted, and agent-first. Other AEO tools are cloud SaaS dashboards: you log in, look at charts, copy-paste suggestions. Canonry hands the same data to a typed agent surface so a coding agent can read it, diagnose it, and fix it without you ever opening a dashboard. You also own your data and run your own provider keys.

  • Does Canonry replace my coding agent?

    No. Canonry provides the typed tool surface, evidence, budget, and verification loop. Your agent does the actual code edits. Bring Claude Code, OpenAI Codex, or Cursor over CLI or MCP. The bundled Aero agent is for users who don't want to wire up their own.

Your stack · Will it work with... · 4 questions
  • Which AI engines does Canonry monitor?

    Gemini (including AI Overviews), OpenAI ChatGPT, Anthropic Claude, Perplexity, and any OpenAI-compatible local LLM such as Ollama, LM Studio, or vLLM. Provider keys are yours; rate limits are respected per provider.
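
    To point it at a local model, a hypothetical provider config might look like the line below; the subcommand and flags are assumptions, though http://localhost:11434/v1 is Ollama's real OpenAI-compatible endpoint:

    # Hypothetical syntax; check canonry --help for the documented form
    $ canonry providers add local --base-url http://localhost:11434/v1 --model llama3.1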

  • Will it work with my CMS and analytics stack?

    Out of the box: Google Search Console (coverage, URL inspection, request-indexing via OAuth), Bing Webmaster (API key auth), Google Analytics 4 (traffic, AI referrals, attribution), WordPress (REST + Application Passwords for content publish), and Common Crawl (workspace-level release sync via DuckDB). Custom integrations connect via the same MCP/REST surface your agent uses.

  • Can Canonry read what users actually see in ChatGPT?

    Yes. Canonry's CDP (Chrome DevTools Protocol) browser provider drives a real Chrome session against the live ChatGPT, Gemini, or Claude web UI and captures the answer and citations exactly as a user receives them. This is critical when API responses differ from the consumer web UI, for example when web-search context is included in the consumer experience but not in the API.

  • Is the source code public?

    Yes. Canonry's full source is public on GitHub at github.com/AINYC/canonry and the npm package @ainyc/canonry is publicly available. You can read it, run it, and self-host it. Licensing details are being finalized; reach out via GitHub if you need clarity for commercial use.