
Cross-Site Agent Intelligence: Why We Built the ARP Profile

By AgentMinds · 11 min read

Tags: arp · cross-site agent intelligence · mission · open-spec · early-access · agent observability

There is a quiet pattern hiding inside every production engineering team that runs AI agents in 2026.

Site A's content agent learns that Claude 3.5 occasionally fabricates ISBN-13 numbers when it can't find a citation. Site A's engineer fixes it: a pre-flight regex check, a retry with a stricter system prompt, a validation step before the answer ships. The fix takes a Tuesday afternoon. It works. The team moves on.

Six months later, site B's content agent — different team, different stack, different industry — fabricates an ISBN-13 in production. A customer screenshots it on Twitter. The on-call engineer pulls up the logs, traces the model, runs through the same investigation site A did six months ago, lands on roughly the same fix.

Six months wasted. The fix existed. Nobody who needed it could see it.

This is the problem AgentMinds is trying to solve. The rest of this post is the long version of why, what we built, and what we explicitly chose not to build.


The two facts that don't talk to each other

Fact one: Every team building AI agents emits a stream of telemetry. Errors, costs, latency, eval scores, learned patterns, recommendations.

Fact two: That telemetry never crosses the org boundary.

Sentry sees the errors but not the agent's reasoning. LangSmith sees the agent runs but not the production warnings. Datadog sees infrastructure but not what the model decided. Each tool is single-tenant by design — your data is yours alone. That privacy guarantee is a feature for compliance, but it's a wall for collective learning.

So every engineering team building agents in 2026 is solving the same problems three times: once when they write the agent, once when they debug the first production failure, once when they discover the same failure has been hitting six other companies for the past quarter.

We think there's a missing primitive in this stack. Not a new tool, not a new vendor — a missing wire shape for the bit of agent telemetry that's *safe to share*. The fingerprint of the bug. The shape of the fix. The confidence the fix actually worked. None of that is sensitive customer data. All of it is exactly what other teams need to skip the six-month rediscovery.

That missing wire shape is what we built.


What AgentMinds actually does

The product, in three steps:

1. Connect. pip install agentminds && python -m agentminds connect. One command. Your site joins the network, gets an API key, your entry file is patched once.
2. Push. Your agents emit warnings, learned patterns, and runtime telemetry into a wire format we publish openly (the AgentMinds Reporting Profile, ARP). Anonymized in transit.
3. Pull. Your code queries the network. Recommendations come back ranked for *your* stack — patterns that peer sites running similar tech have already observed, fingerprinted, and resolved.
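
To make the push step concrete, here is a minimal sketch of what a pattern report could look like on the wire. The endpoint path, auth header, and field names are illustrative assumptions, not the published ARP shape; the open spec is authoritative.

```python
import hashlib
import json
import urllib.request

# A minimal, hypothetical push. The URL, header, and field names below
# are assumptions for illustration; consult the published ARP profile.
API_KEY = "am_live_..."  # issued by `python -m agentminds connect`

report = {
    # Fingerprint: a stable hash of a normalized failure signature,
    # never raw content. Site identity travels only in the API key and
    # is stripped before the pattern enters the shared pool.
    "fingerprint": hashlib.sha256(
        b"content-agent|isbn13-fabrication|missing-citation"
    ).hexdigest(),
    "lifecycle": "solved",          # active | solved | resurfaced
    "stack": ["python", "claude"],  # coarse tags used for peer matching
    "description": "Model fabricates ISBN-13 when citation lookup fails",
    "resolution": "Pre-flight regex check, retry with stricter prompt",
}

req = urllib.request.Request(
    "https://api.agentminds.dev/v1/patterns",  # illustrative path
    data=json.dumps(report).encode(),
    headers={"Authorization": f"Bearer {API_KEY}",
             "Content-Type": "application/json"},
)
urllib.request.urlopen(req)  # POST; raises on non-2xx responses
```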

The novel piece is step three. Specifically: when a pattern's fingerprint matches across sites, the network doesn't just count occurrences. It tracks the lifecycle — active, solved, resurfaced. A solved pattern at site A becomes a Tier-2 recommendation at site B if the same fingerprint reappears, ranked by Beta-Bernoulli confidence weighted against your stack's applicability.

Two sentences of math, using the Beta-Bernoulli posterior mean (solves + 1)/(observations + 2) under a uniform prior: a pattern solved at one site has confidence (1+1)/(1+2) ≈ 0.67, while a pattern observed independently at three sites that all solved it has (3+1)/(3+2) = 0.80. The ranker boosts patterns that *peer* sites in *your* tech stack solved, not patterns that universally exist.
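
A minimal sketch of that ranking rule, assuming a uniform Beta(1, 1) prior and a simple stack-overlap weight (both are assumptions; the names below are illustrative, not the spec's):

```python
from dataclasses import dataclass

@dataclass
class Pattern:
    fingerprint: str
    sites_observed: int   # independent peer sites that saw this fingerprint
    sites_solved: int     # how many of those marked it solved
    stack: set[str]       # coarse stack tags attached to the pattern

def confidence(p: Pattern) -> float:
    # Beta-Bernoulli posterior mean with a uniform Beta(1, 1) prior:
    # (solves + 1) / (observations + 2). One solve -> 0.67, three -> 0.80.
    return (p.sites_solved + 1) / (p.sites_observed + 2)

def rank(patterns: list[Pattern], my_stack: set[str]) -> list[Pattern]:
    # Weight confidence by stack overlap so patterns solved by *peer*
    # sites outrank patterns that merely exist everywhere.
    def score(p: Pattern) -> float:
        overlap = len(p.stack & my_stack) / max(len(p.stack), 1)
        return confidence(p) * overlap
    return sorted(patterns, key=score, reverse=True)
```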

That's the loop. It's a small loop. It's also the loop nobody else builds because every adjacent vendor is correctly afraid of cross-tenant data sharing.


What we deliberately do not do

It's important to be specific about this because the failure modes here are real.

  • We do not share what one tenant saw with another tenant. Pattern fingerprints + counts + confidence travel across the network. Site identifiers and site URLs never do. Receivers see "this pattern hit at N peer sites and was solved by M of them" — they never see which sites. (A sketch of what a privacy-safe fingerprint looks like follows this list.)
  • We do not store your business data. Customer names, payment data, conversation contents — none of it lands in our system. The data we ingest is *agent telemetry*: warnings, fingerprints, learned-pattern descriptions, runtime spans. Anything sensitive is meant to live in your existing tools (Sentry, Datadog, LangSmith), not in ours.
  • We do not replace your existing observability. AgentMinds is a thin layer on top. We adopt Sentry's fingerprint+lifecycle, OpenTelemetry GenAI's attribute namespace, MCP's tool envelope, Anthropic Claude Skills' frontmatter, and AGNTCY OASF's descriptor. If you already emit any of those formats, you're 80% of the way there. Pair AgentMinds with Sentry — don't replace it.
  • We do not publish a public knowledge browser. Tier-1 universal web hygiene rules are public — those are the rules every site needs anyway (HSTS, robots.txt, schema.org, etc.). Tier-2 cross-tenant learnings stay gated behind connect. The pool isn't a public dataset to be scraped; it's personalized delivery to authenticated sites that contributed to it.
  • We do not build a competing standard. ARP is a *profile* — not a new spec. If an upstream standard ships a primitive that overlaps with one of ours, we have a §7 reorientation clause that defers to upstream within 30 days. Our novel piece is precisely one thing: the cross-site Pattern lifecycle. Everything else is borrowed and credited.
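
As promised above, a sketch of how a privacy-safe fingerprint could be derived, in the spirit of Sentry's grouping: normalize away anything volatile or site-identifying, then hash. The normalization rules here are illustrative assumptions, not the ARP-specified derivation.

```python
import hashlib
import re

def fingerprint(agent_kind: str, error_class: str, message: str) -> str:
    """Derive a stable, privacy-safe fingerprint for a failure pattern.

    The normalization below is an illustrative assumption: strip anything
    volatile or site-specific (URLs, emails, numbers) so the same failure
    at two unrelated sites hashes to the same value.
    """
    normalized = message.lower()
    normalized = re.sub(r"https?://\S+", "<url>", normalized)
    normalized = re.sub(r"[\w.+-]+@[\w-]+\.\w+", "<email>", normalized)
    normalized = re.sub(r"\d+", "<n>", normalized)
    signature = f"{agent_kind}|{error_class}|{normalized}"
    return hashlib.sha256(signature.encode()).hexdigest()

# Two sites, different concrete messages, same underlying pattern:
a = fingerprint("content-agent", "FabricatedCitation",
                "ISBN 978-0131103627 not found for source 41")
b = fingerprint("content-agent", "FabricatedCitation",
                "ISBN 978-3161484100 not found for source 7")
assert a == b  # identical fingerprints, zero site-identifying bytes
```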

Why we're posting this in early access

The unbiased competitive report we recently commissioned was right about one thing: the visible scale of the network doesn't match the visible scale of the marketing surface. The product is fully built — onboard, push, pull, recommend, all working end-to-end — but only three sites are actively contributing to the pool today: mimari.ai (the architectural-rendering SaaS we operate ourselves to keep AgentMinds honest), gridera.io, and a video-to-text knowledge engine pushing 55 agents' worth of telemetry.

Three sites is not a network. It's a starting point.

So the framing changed. Every marketing surface on this site now carries an "early-access" badge with the live counts: contributing sites, patterns observed, percentage solved. Nothing will claim "production sites", plural, until we genuinely have five or more independent contributors. We'd rather tell you the real number than imply scale we don't have. The cross-site value compounds with each new site, and we're explicitly opening 10 slots for the next round of contributors.

The promise to early-access sites:

  • Free during early access. The first 100 sites that connect lock in a "free forever" status when paid add-ons eventually land.
  • Direct line to the maintainers. Your bug becomes priority-zero in the next release.
  • Co-design role. If the spec direction matters to you, we'd rather have your fingerprints on the §4.1 Pattern shape than draft it in isolation.
  • Permanent free tier on the foundational platform. We'll add enterprise add-ons later (SSO, dedicated support, on-prem) but the core never moves to a paywall.

Why this is hard, and why we still think it's worth it

The reason no major vendor has shipped cross-tenant pattern pooling isn't technical — it's commercial. Every existing observability vendor has a single-tenant trust model baked into their product. Sentry can't share your error fingerprints with another company; that's a feature of how Sentry was sold. Datadog can't either. The structural constraint isn't the math, it's the business contract.

Building cross-tenant pattern pooling as the *first* thing we ship lets us put the privacy boundary in the right place from the beginning. The wire format separates Tier-1 (universal hygiene rules — public), Tier-2 (cross-tenant learned patterns — gated behind connect), and Tier-3 (site-private patterns — never leave the originating tenant). The choice of which tier a pattern lands in is encoded in the spec, not handled out-of-band.
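
A sketch of how that tier choice could travel inside the payload itself, so routing is enforced mechanically rather than by convention (the enum values and field name are illustrative assumptions, not the spec's wording):

```python
from enum import Enum

class Tier(Enum):
    TIER_1 = "universal"     # public hygiene rules (HSTS, robots.txt, ...)
    TIER_2 = "cross_tenant"  # anonymized learned patterns, gated behind connect
    TIER_3 = "site_private"  # never leaves the originating tenant

def route(pattern: dict) -> str:
    # The tier travels with the pattern, so the pool can enforce the
    # privacy boundary mechanically instead of by out-of-band policy.
    tier = Tier(pattern["tier"])
    if tier is Tier.TIER_3:
        return "retain-locally"      # never forwarded off-site
    if tier is Tier.TIER_2:
        return "anonymize-and-pool"  # fingerprint + counts only
    return "publish"                 # Tier-1 rules are public anyway
```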

Could a big vendor ship cross-org pattern sharing tomorrow? Maybe — if they were willing to relitigate their entire trust model with their existing customer base. The history of that decision in the observability industry is "no, almost never." Cloudflare's Agent Memory is intra-org. Sentry's grouping is per-environment. LangSmith's hub is for prompts (single-tenant artifacts), not cross-customer learnings.

The most likely path for cross-org pattern sharing to become standard is: an open spec demonstrates the privacy-safe wire shape, a few production deployments show it working, and either a foundation absorbs it (LF AI & Data — AGNTCY's home — Discussion #314 is exactly this conversation) or it stabilizes as an interoperable format that vendors implement. We'd be happy with either outcome.


What success looks like at six months

We're not optimizing for a hockey-stick chart. The three things we'll be measuring at the end of October 2026:

1. Network size. From 3 contributing sites to at least 25. Each new contributor has a multiplicative effect on cross-site recommendation quality.
2. Recommendation actionability. Right now 38.4% of actionable patterns in the pool have moved to the "solved" state. That's a healthy starting baseline; we want it above 60% by then. A higher solved rate means recommendations carry higher confidence, which means consumers actually use them.
3. Spec adoption signal. Either OASF v2 absorbs ARP §4.1's Pattern primitive (good — we become the reference implementation), or a parallel implementation of ARP appears in the wild (also good — we have a real interoperable wire format), or neither (then we re-evaluate the §7 reorientation clause and ship plan B).

Not on the list: revenue, employee count, valuation, fundraising round. None of those are signals for whether the *product idea* is right. The signals above are.


How to help

The most useful thing you can do right now if this resonates:

1. Connect a site. pip install agentminds && python -m agentminds connect — five minutes, free, no card. Even if you only push for two weeks, your agents' learned patterns make the recommendations measurably better for the next site.
2. Read the spec, file an issue. agentmindsdev/profile is open under CC-BY-4.0. The §4.1 Pattern shape is the load-bearing piece — feedback there compounds the most.
3. Send the post. If a friend's team is solving the same bugs three times, this is the post that explains what we built and why. The most valuable contribution to a small network is the next site that connects.

The address: agentminds.dev/onboard. Two minutes. Early access is open.

*If you want the operational truth: we publish live network counts at api.agentminds.dev/health. The live number is what's correct; this post can drift, the JSON can't. We'd rather you trust the JSON.*
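
A quick way to check those counts (the printed field names are assumptions inferred from the badge counts named above, not a documented schema):

```python
import json
import urllib.request

# Fetch the live network counts; trust whatever the real JSON returns.
with urllib.request.urlopen("https://api.agentminds.dev/health") as resp:
    health = json.load(resp)

print(json.dumps(health, indent=2))
# Assumed shape, based on the badge counts named in this post:
# {"contributing_sites": 3, "patterns_observed": ..., "percent_solved": ...}
```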
