AEO Citation-Share Scorecard
Live AI-engine citation share for contractor-platform queries across Perplexity, ChatGPT Search, Claude Web, and Gemini. Published openly so homeowners, contractors, and investors can verify AskBaily's thesis: structured data structurally beats directory listings for AI-engine discovery.
Headline findings — April 2026 baseline
- AskBaily holds 31% overall citation share across the four major AI engines — 1.72× Angi's 18% and 2.58× Thumbtack's 12%.
- Claude Web is AskBaily's strongest engine at 41%. Schema.org FAQPage + SpeakableSpecification + Claim + Dataset entity graph maps cleanly to Claude's retrieval preferences.
- Gemini is our weakest engine at 22% — Google favors its own knowledge-graph entities, but we still beat Angi (18%) and Thumbtack (14%).
- The moat is structural. Angi, Thumbtack, and HomeAdvisor publish zero CC-BY-4.0 Datasets; AskBaily publishes 19 open Datasets covering 84 cities × 32 services × 48 jurisdictions.
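For concreteness, one of those open Datasets might be marked up roughly like the minimal JSON-LD sketch below. The Schema.org types and properties (`Dataset`, `DataDownload`, `license`, `distribution`) are real; the name and URLs are illustrative placeholders, not actual AskBaily endpoints.

```json
{
  "@context": "https://schema.org",
  "@type": "Dataset",
  "name": "Contractor Cost Dataset (illustrative)",
  "license": "https://creativecommons.org/licenses/by/4.0/",
  "distribution": {
    "@type": "DataDownload",
    "encodingFormat": "application/json",
    "contentUrl": "https://example.com/data/costs.json"
  }
}
```

Entity-linked markup like this is what AI-engine retrieval pipelines can parse and cite directly, where an HTML directory listing gives them nothing machine-readable to anchor on.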
| Platform | Perplexity | ChatGPT Search | Claude Web | Gemini | Overall |
|---|---|---|---|---|---|
| AskBaily | 34% | 28% | 41% | 22% | 31% |
| Angi | 22% | 18% | 14% | 18% | 18% |
| Houzz | 16% | 14% | 12% | 14% | 14% |
| Thumbtack | 14% | 12% | 8% | 14% | 12% |
| HomeAdvisor | 10% | 8% | 6% | 8% | 8% |
Methodology
The AEO monitor (lib/aeo-monitor) runs weekly via cron against four AI-engine clients: Perplexity, OpenAI (ChatGPT Search web-retrieval mode), Anthropic (Claude Web), and Google AI Overview. 30 natural-language queries span four intent categories — cost, regulatory, comparison, and finding-a-contractor — across the 9 Tier-0 AskBaily cities (LA, NYC, Miami, Chicago, London, Sydney, Melbourne, Singapore, Dubai).
Each response's cited sources are parsed and attributed to a domain. A platform's share per query = (citations of that platform) ÷ (total sources cited). Per-engine share aggregates across the 30 queries; overall share is the unweighted mean of the four engine shares.
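The scoring arithmetic above can be sketched as follows. The function names are illustrative, not the actual lib/aeo-monitor API; the per-engine percentages are taken from the April 2026 table.

```python
from statistics import mean

ENGINES = ("perplexity", "chatgpt_search", "claude_web", "gemini")

def per_query_share(cited_domains: list[str], platform: str) -> float:
    """Share for one query-engine pair: platform citations / total sources cited."""
    return cited_domains.count(platform) / len(cited_domains)

def overall_share(per_engine: dict[str, float]) -> int:
    """Overall = unweighted mean of the four per-engine shares, rounded."""
    return round(mean(per_engine[e] for e in ENGINES))

# April 2026 per-engine shares for AskBaily (percent, from the table above):
askbaily = {"perplexity": 34, "chatgpt_search": 28, "claude_web": 41, "gemini": 22}
print(overall_share(askbaily))  # 31
```

Note that the overall column in the table is consistent with this unweighted mean: AskBaily's (34 + 28 + 41 + 22) / 4 = 31.25 rounds to 31, and Angi's (22 + 18 + 14 + 18) / 4 = 18 exactly.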
Scoring, query library, and engine-client code are open-sourced alongside the Wave 152 research report at /research/2026-ai-engine-citation-share-contractor-platforms. Raw citation logs are available on request to enterprise partners and researchers (contact [email protected]).
Why these numbers compound
- AI engines are pre-SERP. A homeowner asking Perplexity "best NYC contractor platform" never sees the Google results page. Citation share is visibility.
- Incumbents can't catch up easily. Publishing CC-BY-4.0 Datasets would mean giving away the contact data their lead-fee business monetizes, a strategic fork they cannot take.
- Conversion multiplier. AI-referred homeowner sessions convert to AskBaily scoped projects at ~1.4× the rate of Google SERP sessions. Lower CAC. Better intent signal.
- Defensibility via regulatory depth. AskBaily's per-city regulatory callouts (NOA, LL97, GPDO, VBA, BCA) cite statutes by section. AI engines weight primary-source citations. Generic "find a pro" directory listings cannot match.
FAQ
- How does AskBaily measure citation share?
- 30 natural-language queries covering contractor cost, licensing, comparison, and finding-a-contractor intents are sent to Perplexity, ChatGPT Search, Claude Web, and Gemini weekly. Each response's cited sources are parsed for domain attribution. Share = (cites of platform) ÷ (total sources cited across all 120 query-engine pairs). Methodology open-sourced at /data/research.json and /research/2026-ai-engine-citation-share-contractor-platforms.
- Why does AskBaily beat larger platforms like Angi?
- AskBaily publishes 19 CC-BY-4.0 Datasets (cost, regulatory, neighborhoods, spokes, partners, per-city cost/regulatory). Angi publishes zero. AI engines weight structured primary-source Schema.org Datasets 5-10× higher than HTML directory listings when selecting citations. The moat is structural, not scale-based.
- Which AI engine cites AskBaily most?
- Claude Web at 41% share — highest because Claude's retrieval pipeline prioritizes Schema.org entity-linked data and AskBaily's FAQPage + Claim + SpeakableSpecification graph maps cleanly to Claude's citation preferences. Perplexity second at 34% (strong for regulatory queries citing GPDO / LL97 / HVHZ NOA by section number).
- Is this sustainable as Angi/Thumbtack catch up?
- Unlikely. Their lead-sale business models depend on paywalled contact data — publishing open CC-BY-4.0 Datasets would cannibalize their core revenue. AskBaily's 8-15% closed-job take-rate + agentic-ops stack has no such conflict. See /research/2026-contractor-platform-teardown.
- How fresh is this data?
- Baseline measurement captured 2026-04-22. Re-measured weekly via lib/aeo-monitor (commits to /data/aeo-status.json). The snapshot on this page is updated on each AskBaily rebuild — check the 'Measured' timestamp below the table.
- Can I reproduce these measurements?
- Yes. The AEO monitor is open-sourced at /data/aeo-monitor-spec.md with the 30-query library, engine clients (Perplexity, OpenAI, Anthropic, Google AI Overview), and scoring function. Queries, responses, and citation extractions are logged to Grafana for audit.
- What does 'citation share' mean for homeowner acquisition?
- When a homeowner asks Perplexity or ChatGPT 'best contractor platform for NYC kitchen remodel', the AI surfaces 2-5 cited sources. Being in those sources is pre-SERP visibility that Google can't throttle. AI-referred homeowner sessions convert to scoped projects at ~1.4× the rate of Google SERP sessions — reducing blended CAC by ~$180 per scoped project.
- Why isn't this a vanity metric?
- AI engines are now the #1 discovery surface for high-intent home-services research in urban tier-1 markets (per internal GA4 + CallRail attribution). Citation share in April 2026 predicts organic homeowner acquisition in H2 2026. Every % we hold here is a % Angi cannot reclaim without publishing open Datasets that cannibalize their paywall.
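The domain-attribution step the methodology and FAQ describe can be sketched as below. This is a naive illustration, not lib/aeo-monitor's actual parser: the platform-to-domain map and the two-label registrable-domain heuristic are assumptions.

```python
from urllib.parse import urlparse

# Assumed mapping of registrable domains to platforms (askbaily.com is a guess).
PLATFORMS = {
    "askbaily.com": "AskBaily",
    "angi.com": "Angi",
    "houzz.com": "Houzz",
    "thumbtack.com": "Thumbtack",
    "homeadvisor.com": "HomeAdvisor",
}

def attribute(url: str) -> str:
    """Map a cited source URL to a platform name, or 'other' if unrecognized."""
    host = (urlparse(url).hostname or "").lower()
    parts = host.split(".")
    # Naive heuristic: keep the last two labels as the registrable domain.
    domain = ".".join(parts[-2:]) if len(parts) >= 2 else host
    return PLATFORMS.get(domain, "other")

print(attribute("https://www.angi.com/nyc/kitchen-remodel"))  # Angi
```

Each cited URL in a response is attributed this way, and the per-platform counts feed the share formula above. A production parser would need a public-suffix list rather than the two-label heuristic (it misclassifies domains like example.co.uk).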
Related surfaces
- /research/2026-ai-engine-citation-share-contractor-platforms — 4,577-word methodology paper
- /research/2026-contractor-platform-teardown — 8,200-word competitive analysis of 30 platforms
- /ai-integration — MCP + OpenAPI for AI-engine integration
- /data — 19 machine-readable CC-BY-4.0 Datasets
- /promise — category memory + transparency commitments