
How Cross-Site Intelligence Works — The Technical Deep Dive

technical · architecture · ai-agents

The Architecture

AgentMinds has three layers:

Layer 1: Data Collection

Sites push agent data to Central via API. Each report includes:

  • Metrics — real numbers (response time, error rate, cache hit rate)
  • Warnings — issues found by the agent
  • Learned patterns — what the agent discovered (most valuable)

We enforce data quality: Grade F data is rejected. You must share meaningful patterns to access collective intelligence.
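To make the report concrete, here is a sketch of what a push payload could look like. The field names (`metrics`, `warnings`, `learned_patterns`) come from the list above; the specific metric keys and values are illustrative, not the actual API schema:

```python
import json

# Hypothetical report a site pushes to Central.
# Top-level fields mirror the list above; metric keys are illustrative.
report = {
    "metrics": {
        "response_time_ms": 142,
        "error_rate": 0.003,
        "cache_hit_rate": 0.91,
    },
    "warnings": ["error rate rising over the last 6 hours"],
    "learned_patterns": [
        "cache misses spike when the sitemap crawler runs",
    ],
}

# Serialized for the API call
payload = json.dumps(report)
```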

Layer 2: Processing Pipeline

Every 6 hours (and on every data push), the pipeline runs:

1. Pull — Fetch brain data from all connected sites
2. Normalize — Convert all formats to canonical schema
3. Store — Save to persistent storage
4. Wiki — Generate knowledge pages
5. Knowledge Pool — Build cross-site pattern library
6. Central Agents — Our own AI brain analyzes everything
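The six stages above can be sketched as plain composable functions. Function names mirror the stage names; every data shape here is made up for illustration, not the real pipeline:

```python
def pull(sites):
    # Stage 1: fetch raw brain data from each connected site
    return [{"site": s, "raw": {"error_rate": 0.01}} for s in sites]

def normalize(records):
    # Stage 2: convert site-specific formats to a canonical schema
    return [{"site": r["site"], "metrics": r["raw"]} for r in records]

def store(records, db):
    # Stage 3: persist normalized records
    db.extend(records)

def build_wiki(db):
    # Stage 4: one knowledge page per site
    return {r["site"]: f"wiki page for {r['site']}" for r in db}

def build_knowledge_pool(db):
    # Stage 5: pool metrics into a cross-site library
    return [r["metrics"] for r in db]

def run_central_agents(pool):
    # Stage 6: central analysis over the pooled data
    return {"sites_analyzed": len(pool)}

def run_pipeline(sites):
    db = []
    store(normalize(pull(sites)), db)
    return build_wiki(db), run_central_agents(build_knowledge_pool(db))

wiki, intel = run_pipeline(["site_1", "site_2"])
```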

Layer 3: Central Agents

This is what makes AgentMinds more than a data relay. We have 5 agents that produce original intelligence no single site could see:

PatternMiner — Discovers hidden correlations. "When health is critical AND performance is critical, it's always an infrastructure issue, not two separate problems."

SolutionRanker — Scores every solution by real effectiveness (0-100). Tracks which fixes actually worked across sites. Builds a "proven playbook" — the top 20 things any site should do.
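A minimal sketch of how such a ranker could work, assuming the score is simply the cross-site success rate (the real scorer is surely more nuanced; the formula and field names here are hypothetical):

```python
def effectiveness(outcomes):
    """Score a fix 0-100 by how often it actually worked.
    `outcomes` holds one boolean per site that tried the fix."""
    if not outcomes:
        return 0
    return round(100 * sum(outcomes) / len(outcomes))

def proven_playbook(solutions, top_n=20):
    """Rank solutions by effectiveness and keep the top N."""
    ranked = sorted(
        solutions,
        key=lambda s: effectiveness(s["outcomes"]),
        reverse=True,
    )
    return [s["name"] for s in ranked[:top_n]]

solutions = [
    {"name": "enable page cache", "outcomes": [True, True, True]},
    {"name": "raise PHP memory limit", "outcomes": [True, False, False]},
]
playbook = proven_playbook(solutions)
```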

ThreatHunter — Detects coordinated attacks. "Same IP attacking 3 different sites — block everywhere." Inspired by the CrowdSec trust-scoring model.
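The "same IP attacking 3 different sites" check boils down to cross-site correlation. A sketch, with illustrative event shapes and threshold:

```python
from collections import defaultdict

def coordinated_attackers(events, min_sites=3):
    """Flag IPs seen attacking at least `min_sites` distinct sites.
    `events` is a list of (ip, site) pairs; shapes are illustrative."""
    sites_by_ip = defaultdict(set)
    for ip, site in events:
        sites_by_ip[ip].add(site)
    return {
        ip for ip, sites in sites_by_ip.items() if len(sites) >= min_sites
    }

events = [
    ("203.0.113.9", "site_1"),
    ("203.0.113.9", "site_2"),
    ("203.0.113.9", "site_3"),
    ("198.51.100.4", "site_1"),
]
blocklist = coordinated_attackers(events)
```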

Advisor — Creates personalized recommendations per site. Considers site type, maturity, current problems, and what's worked for similar sites.

CentralSupervisor — Orchestrates all agents, combines their outputs, and produces the final intelligence report.

Memory and Learning

All central agents have memory — they remember previous runs, learn from results, detect trends, and self-evaluate their output quality. Based on the Reflexion pattern.
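A Reflexion-style loop can be sketched as a small memory object that records each run's output, a self-assigned quality score, and a reflection to feed into the next run. The structure and fields here are illustrative, not the production implementation:

```python
class AgentMemory:
    """Keeps past runs, self-scores, and reflections (illustrative)."""

    def __init__(self):
        self.history = []

    def remember(self, output, score, reflection):
        # Record one run: what was produced, how good it was,
        # and what to do differently next time.
        self.history.append(
            {"output": output, "score": score, "reflection": reflection}
        )

    def last_reflection(self):
        # The note the agent feeds into its next run
        return self.history[-1]["reflection"] if self.history else None

    def trend(self):
        # Crude trend detection across runs
        scores = [run["score"] for run in self.history]
        improving = len(scores) >= 2 and scores[-1] > scores[0]
        return "improving" if improving else "flat"

mem = AgentMemory()
mem.remember("report v1", 0.6, "too few cross-site examples")
mem.remember("report v2", 0.8, "good coverage; tighten summaries")
```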

The Anonymization Layer

Sites never see each other's names or raw data. Everything goes through anonymization:

  • Site names → "site_1", "site_2"
  • Pattern counts are offset to start from 100, so real counts stay hidden
  • Cross-site tips show the fix, not who found it
  • Intelligence endpoints are admin-only
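The first two rules can be sketched in a few lines — stable aliases for site names, and counts offset by 100. Field names are illustrative:

```python
def anonymize(reports):
    """Replace site names with stable aliases ("site_1", "site_2", ...)
    and offset pattern counts by 100, per the rules above."""
    aliases = {}
    out = []
    for r in reports:
        # setdefault keeps the alias stable across repeated reports
        alias = aliases.setdefault(r["site"], f"site_{len(aliases) + 1}")
        out.append(
            {"site": alias, "pattern_count": r["pattern_count"] + 100}
        )
    return out

safe = anonymize([
    {"site": "acme-store.example", "pattern_count": 7},
    {"site": "blog.example", "pattern_count": 3},
])
```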

The Data Quality Gate

We learned the hard way: if you accept empty data, you get empty intelligence. Now we enforce it:

| Data Quality | Grade | Result |
|---|---|---|
| Just a name and score | F (0-19) | REJECTED — 422 error |
| Metrics + summary | D (20-39) | Accepted, limited recommendations |
| + warnings + recommendations | C (40-59) | Good recommendations |
| + learned_patterns | B (60-79) | Full cross-site intelligence |
| + recurring_issues + detail | A (80-100) | Maximum value |
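One way to implement such a gate is to score each report by the fields it fills in, then map the score to a grade band. The weights below are hypothetical — chosen only so richer reports land in the table's bands:

```python
# Hypothetical weights; the real scorer is more nuanced.
WEIGHTS = {
    "metrics": 20,
    "warnings": 10,
    "recommendations": 10,
    "learned_patterns": 20,
    "recurring_issues": 20,
}

def quality_score(report):
    """Sum weights for every non-empty field in the report."""
    return sum(w for field, w in WEIGHTS.items() if report.get(field))

def grade(score):
    """Map a 0-100 score to the letter bands in the table above."""
    if score < 20:
        return "F"   # rejected with a 422
    if score < 40:
        return "D"
    if score < 60:
        return "C"
    if score < 80:
        return "B"
    return "A"
```

A report with only metrics lands in D; adding warnings and recommendations reaches C; sharing learned patterns unlocks B.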

Self-Healing

The Guardian system protects the pipeline:

  • Retry — exponential backoff on all external calls
  • Circuit breaker — auto-disable failing services
  • Self-healing — detect stale data and corrupted files, then fix them automatically
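The first two protections can be sketched in a few lines — retry with exponential backoff, plus a circuit breaker that stops calling a service after repeated failures. Delays, thresholds, and names are illustrative, not Guardian's actual code:

```python
import time

def with_retry(fn, attempts=3, base_delay=0.01):
    """Call fn, retrying with exponential backoff (0.01s, 0.02s, ...).
    Production delays would be larger."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * (2 ** i))

class CircuitBreaker:
    """Refuse calls after `threshold` consecutive failures (a sketch)."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.failures = 0

    def call(self, fn):
        if self.failures >= self.threshold:
            raise RuntimeError("circuit open")
        try:
            result = fn()
            self.failures = 0  # success resets the counter
            return result
        except Exception:
            self.failures += 1
            raise

# A service that fails twice, then recovers
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient")
    return "ok"

result = with_retry(flaky)
```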

What's Next

  • PostgreSQL for persistent storage (no more data loss on deploy)
  • More central agents (cost optimizer, content strategist)
  • Plugin system for custom agent types
  • Dashboard UI for visual intelligence

---

*Want to see the code? It's on GitHub.*

Ready to try AgentMinds?

Scan your site for free. No signup required.

Scan Your Site