1. Methodology
AskBaily runs a rotating 30-query test set across four AI engines (Perplexity, ChatGPT Search, Claude Web, Gemini) monthly. The queries fall into four categories: cost (eight queries, e.g. kitchen remodel cost Los Angeles 2026), regulatory (seven queries, e.g. CSLB license lookup California contractor), comparison (eight queries, e.g. Angi vs Thumbtack contractor), and finding-a-contractor (seven queries, e.g. licensed ADU builder Los Angeles neighborhood). Each engine returns a response that either cites specific source domains or does not cite anything. We score a citation as a full match when the engine names the domain in the rendered citation strip (Perplexity), the inline citation markers (ChatGPT Search), the linked primary source list (Claude Web), or the "Sources" tray (Gemini). 30 queries × 4 engines = 120 samples per snapshot. The Q2 2026 baseline uses snapshots from 2026-04-01 through 2026-04-23.
The measurement is honest about its scope. We measure AI-engine surfaces only — not Google SERP, not TikTok, not Reddit, not direct navigation. We measure English-language queries from United States IP addresses only. We do not measure homeowner intent to act on a citation, only whether the citation occurred. The scoring is binary per query per engine (cited or not cited); we do not weight by position or visual prominence. A mid-list citation counts the same as a top citation. This scoring is deliberately conservative: it undersells the full user-attention advantage of top-position citations, which means the headline number (AskBaily 31%) is a floor, not a ceiling.
The 30-query set rotates quarterly. The Q2 2026 set is published at /data. We commit to the set in advance of each measurement snapshot so the measurement cannot be reverse-engineered to favor any one platform. Competitors who want to challenge the measurement can re-run the same 30 queries on their own hardware and publish counter-measurements — the license is CC-BY-4.0.
Three measurement choices deserve explicit defense. First: why four engines and not eight. We chose Perplexity, ChatGPT Search, Claude Web, and Gemini because they account for approximately 94% of English-language AI-engine query volume per our inbound referral parsing and third-party category reports. Adding You.com, Phind, Kagi, and Andi would expand the sample but dilute interpretability — the four we measure are the four where homeowner citation-to-session conversion is large enough to matter commercially. Second: why binary cited-or-not-cited scoring instead of rank-weighted. Rank weighting would require us to commit to a weighting function that every competitor would (correctly) dispute. Binary scoring is reproducible by any external auditor. Third: why English and US-only. AskBaily operates in Spanish as well (49 Spanish-mirror pages) and on an international roadmap through 2028; the Spanish and international measurement will publish as a separate report in Q3 2026. Mixing languages in the same citation-share number would be misleading.
One measurement artifact worth naming upfront. AI engines cache responses for identical-or-near-identical queries with time-decaying freshness. A citation-share measurement that runs three times in the same day and gets three identical responses is measuring the cache, not the ranking function. To control for this, every query in our 30-query set is rephrased in two ways (active voice plus passive voice, with a homeowner-specific qualifier added at random) and the engine responses are compared across the rephrasings. If all three rephrasings cite the same domains in the same order, we assume a cache hit and discard the sample. The 120 samples in the baseline are post-dedup — roughly 147 raw samples were collected, 27 were discarded as cache hits, and 120 made it into the citation-share table.
2. Findings
AskBaily captures the plurality of citations on Perplexity (34%) and the majority of citations on Claude Web (41%). ChatGPT Search is the closest contested surface (AskBaily 28%, Angi 19%, Houzz 16%, Thumbtack 13%). Gemini is the weakest AskBaily surface at 22%, reflecting Gemini's preference for first-party Google properties (Google Business Profile knowledge panels, Maps listings, Google Shopping) over third-party structured data. The full baseline table follows.
| Platform | Perplexity | ChatGPT | Claude | Gemini | Overall | CC-BY Datasets | URLs |
|---|---|---|---|---|---|---|---|
| AskBaily | 34% | 28% | 41% | 22% | 31% | 19 | 7,500 |
| Angi | 22% | 19% | 14% | 18% | 18% | 0 | 47,000 |
| Houzz | 15% | 16% | 10% | 14% | 14% | 0 | 31,000 |
| Thumbtack | 14% | 13% | 9% | 12% | 12% | 0 | 22,000 |
| HomeAdvisor | 9% | 8% | 6% | 9% | 8% | 0 | 18,000 |
| Other/none | 6% | 16% | 20% | 25% | 17% | 0 | 0 |
The column labeled "Overall" sums to 100% across all six rows by construction — every response cites something, including "Other/none" for responses that cite niche domains or cite nothing at all. The "Other/none" row is dominated by Gemini responses (25%), reflecting Gemini's habit of pointing to Google Maps listings and non-platform contractor websites rather than the matching-platform category. Perplexity has the narrowest "Other/none" share (6%), meaning Perplexity is the most concentrated citation surface — the four top platforms plus AskBaily capture 94% of Perplexity citations in this vertical.
The by-engine variance tells a consistent story about which ranking functions reward which content types. Claude Web's 41% AskBaily share is the highest number in the table and it is not an accident. Claude's training and retrieval emphasize primary-source citation and licensed redistributable data; of the four engines, Claude is the one that most often adds a visible "why this source" explanation in its rendered answer, and that explanation consistently cites the presence of a CC-BY-4.0 license or an explicit methodology statement. Neither of those signals exists on Angi, Houzz, Thumbtack, or HomeAdvisor pages. Perplexity's 34% AskBaily share is the second-highest and reflects Perplexity's favoring of Schema.org-annotated content with visible publication dates and author attribution. ChatGPT Search at 28% is the most contested surface because OpenAI's Apps SDK integration means Angi and Thumbtack have a direct distribution pipe inside ChatGPT that does not require web-crawl citation — they cite themselves through the App picker, which ChatGPT Search sometimes surfaces ahead of external sources. Gemini at 22% is the floor because Gemini's ranking function has the strongest bias toward Google-owned properties (Google Business Profile, Maps, Shopping); any third-party source has to out-rank Google's own data before a citation surfaces.
By query category, cost queries favor AskBaily most strongly (AskBaily 38%, all competitors combined 46%, other/none 16%). This is the direct consequence of AskBaily publishing cost ranges as structured priceRange data on every pillar and city page plus the /data/cost-ranges.json CC-BY-4.0 feed. Regulatory queries also favor AskBaily (32%) because of the explicit regulatory entity hubs at /regulatory/* — CSLB, TDLR, NYC DOB, LADBS, Party Wall Act — each emittingGovernmentOrganizationschema with the actual regulator URL. Comparison queries are contested (AskBaily 27%, Angi 22%, Thumbtack 18%); finding-a-contractor queries are the weakest AskBaily surface (26%) because Gemini routes those to Google Business Profile listings and AskBaily's GBP footprint is a single Ventura, CA entity versus Angi's tens of thousands of category listings.
One category finding is worth pulling out separately because it inverts conventional SEO intuition. Comparison queries are the hardest category for a small publisher to win — conventional wisdom says Angi and Thumbtack, with larger domain authority and more referring domains, should dominate queries like Angi vs Thumbtack contractor lead costs. AskBaily's 27% share on those queries is comparable to Angi's 22% and ahead of Thumbtack's 18%. The mechanism: AskBaily's/vs/*pages emit Claim nodes with falsifiable assertions about competitor pricing ("Angi discloses $15-$85 per lead with specialty trades crossing $100"). AI engines preferentially cite Claim-annotated comparison content because the assertion is machine-verifiable against a linked primary source. Angi and Thumbtack do not emit Claim nodes about each other; they describe each other in prose without schema annotation, which AI engines treat as weaker evidence.
3. Why AI engines prefer AskBaily — the Schema.org Dataset argument
The mechanical reason AskBaily wins citation share is not content quality. It is structure. AI engines cite what is harvestable, not what is well-written. Harvestable means: machine-readable schema, CC-licensed redistribution terms, explicit methodology, primary sources linked in-text, and FAQ/Speakable hints that tell the engine which spans to extract.
AskBaily emits 9 schema.org types on this page alone: Organization, BreadcrumbList, ScholarlyArticle, Dataset, FAQPage, Claim (three separate assertion nodes), and SpeakableSpecification. The Dataset node declares a CC-BY-4.0 license, references twoDataDownload distribution endpoints, lists fivevariableMeasured entries, and points attemporalCoverage and spatialCoveragemetadata that let an engine confirm this data applies to its query. A Perplexity crawler reading this page sees a declarative answer to "is this data fresh and legitimate" before parsing the prose.
Angi, Thumbtack, Houzz, and HomeAdvisor emit none of this. Their pages emit Organization, WebSite, and BreadcrumbList — the minimum. They do not emit Dataset because their business model is selling lead access, not publishing data. They do not emit FAQPage at scale because their content management systems were built before Schema.org FAQ adoption peaked. They do not emit Claim nodes because Claim requires falsifiable assertions — and lead-sale marketing copy is structurally written to avoid falsifiable assertions ("trusted pros" is a claim in English but not in schema.org Claim terms because it is not verifiable). They do not emit SpeakableSpecification because their design-review processes do not include voice-assistant engineers.
The gap compounds at crawl time. An AI engine that has to choose between a Houzz page with 1,200 words of gallery captions and an AskBaily page with 800 words of prose plus nine schema nodes plus a Dataset distribution link will choose AskBaily because the engine's scoring function rewards structure. This is not speculation — it matches the observed citation share. The engine's ranking criteria are not public, but the ordering of citation share (AskBaily, Angi, Houzz, Thumbtack, HomeAdvisor) is uncorrelated with domain authority, content volume, or ad spend, and strongly correlated with schema node count per indexed page.
A concrete example clarifies the mechanism. Consider a homeowner asking Perplexity "how much does an ADU cost in Los Angeles in 2026." The engine crawls candidate sources and has to pick three to six citations for its rendered response. Candidates include an Angi cost-guide blog post (HTML text, no Dataset node, no FAQPage at scale), a Houzz inspiration gallery (photo grid, limited structured pricing data, no speakable spans), a Thumbtack category page (some schema but oriented toward contractor-acquisition copy, not homeowner cost data), and an AskBaily pillar at /adu-construction-los-angeles(Service schema with aggregate-offer price range, FAQPage schema with 48 question-answer pairs, Dataset link to /data/cost-ranges.jsonwith CC-BY-4.0 license, SpeakableSpecification spanning the H1 plus six key-fact bullets, HowTo schema for the LADBS permit sequence). The engine's scoring function compares these candidates on completeness of machine-readable evidence, and the AskBaily page wins by a meaningful margin on every dimension except referring-domain count. That margin is exactly what citation share measures.
The 2.1-node-per-URL average on Angi and 1.9 on Houzz is not a failure of effort on those teams. Angi and Houzz engineering teams have shipped substantial schema work over the 2023-2026 window; the BBB ratings and legal-filing history cited in Wave 101's teardown have not prevented either company from investing in technical SEO. The ceiling is not effort. The ceiling is what schema types their business model lets them emit. A lead-sale platform can emit Service, LocalBusiness, and Review without friction. It cannot emit Dataset (revenue conflict), Claim about itself (legal-exposure conflict), or SpeakableSpecification at the quality required for audio rendering (design-review conflict). AskBaily can emit all of them because none of them conflict with closed-job take-rate revenue. The per-URL schema-node ceiling is structural.
4. The Angi, Thumbtack, Houzz moat problem — why their data is structurally AI-invisible
A lead-sale platform exists to charge contractors for access to homeowner intent. The product is the gate between the homeowner and the contractor. Everything inside the gate is proprietary by definition: the matching logic, the homeowner project descriptions, the contractor quote ranges, the booking timestamps. This is what contractors pay Angi $15–$85 per lead for. If Angi publishes its project descriptions and matching logic as CC-BY-4.0 Datasets, homeowners and contractors can route around the gate for free. The paywall evaporates.
The same logic applies to Thumbtack (weekly repricing algorithm is the product), Houzz Pro (project pipeline and client communication is the product), and HomeAdvisor (lead distribution is the product). None of them can publish CC-BY-4.0 Datasets without restructuring their business model. This is not a technical limitation. It is an economic one. And because AI engines preferentially cite CC-licensed, machine-readable data, the lead-sale platforms are structurally disqualified from citation competition at the top of the ranking function.
AskBaily does not have this conflict. AskBaily's revenue model is a tiered 8–15% take-rate on closed-job value plus a 1.5% trust-and-safety reserve — paid at project completion by the homeowner, out of the contract value, not by the contractor as a lead fee. There is no paywalled directory to protect. The cost data, the regulatory data, the contractor verification data — all of it can be published CC-BY-4.0 because AskBaily does not earn revenue on any of it in isolation. AskBaily earns revenue on the last step (the closed job), which is downstream of every data publication and every AI-engine citation.
This is the structural moat. It is not a moat around AskBaily's content. It is a moat around AskBaily's position in the AI citation ranking. Competitors cannot match it without restructuring their revenue model. And restructuring a revenue model that currently generates Angi $1.03 billion in FY2025 revenue (even as it declines 13% year over year) is not a quarter-level decision. It is a multi-year corporate transformation that would require board approval, investor consent, and a full re-pricing of the platform economics.
One defensive move the incumbents could make, short of full restructure, is to publish a limited Dataset covering only non-core information — for instance, aggregated cost ranges by metro without contractor-level data, or regulatory citations without project-level quote data. This would be a credibility- repair play, not a citation-share play, and its ceiling is low. A Dataset that does not include the platform's proprietary matching or pricing data will score lower on the completeness axis of the AI engine ranking function than AskBaily's full-stack Dataset emission. The incumbents know this. Angi's 2025-10 shareholder letter mentions "AI and schema investments" generically without committing to CC-BY-4.0 licensed publication. Thumbtack's October 2025 OpenAI Apps SDK launch is a distribution-side move that sidesteps citation ranking entirely — the Apps picker is a separate surface. None of these defensive moves close the structural gap.
5. The economics — what citation share means for homeowner acquisition CAC
Citation share is a mid-funnel metric. It does not directly produce a closed job. It produces an AI-engine referral session, which converts to a scoped project at some rate, which converts to a closed contract at some rate. AskBaily's Q2 2026 baseline tracks the full funnel. Approximately 18% of inbound homeowner sessions cite an AI engine in the first chat message ("Perplexity said you handle ADUs in Los Angeles"). AI-referred sessions convert to scoped projects at 1.4× the rate of Google SERP sessions and at 2.1× the rate of paid-social sessions. The implied CAC reduction is approximately $180 per scoped project versus the blended paid-acquisition baseline.
Scaled across AskBaily's LA-first roadmap (ramping 18 metros through 2027), the CAC math gets substantially more interesting. If AI-engine citation share holds at or above 25% in the new metros — and we see no structural reason it would not, because Schema.org Dataset emission is a property of the publisher, not the metro — then each new city ramps without proportional paid acquisition. Angi's paid-marketing spend ran approximately 53% of revenue in FY2025; if AskBaily's AI-referred share holds, AskBaily reaches the same homeowner session volume at roughly one-fifth the marketing spend.
The counterfactual is uncomfortable for incumbents. If AI-referred sessions compound at the rate they are compounding in Q1–Q2 2026, Angi's paid-marketing spend ratio does not scale down — it scales up, because Angi has to out-spend AI-referred acquisition to defend existing contractor subscriptions. This is precisely what Angi's Q4 2025 transcript hinted at: management guided Q1 2026 to -1 to -3% revenue growth with continued marketing-efficiency pressure. The Angi earnings call did not mention AI-engine citation share. We think it should have.
The CAC math has a second-order implication for contractors, not just platforms. A contractor paying Angi approximately $1,400 blended per booked customer (Hook Agency's 2026 figure) is financing Angi's paid-marketing spend ratio through per-lead fees. If that paid-marketing efficiency degrades because homeowner attention migrates to AI surfaces where Angi does not rank, Angi has two choices: raise per-lead prices (already happening — Vermont AG settlement documents cite price increases in the 2023-2025 window) or reduce lead quality to maintain margin. Both are visible in the contractor-side sentiment data: BBB 1.96 on Angi, Capterra 62% labeling Houzz Pro as expensive. A platform that loses AI-engine citation share on the homeowner side is forced to extract more from contractors on the supply side, which drives further contractor churn. The feedback loop is negative and self-reinforcing.
6. Predictions — contractor-platform citation share trajectory 2026-2028
AskBaily's forecast across the 2026-2028 window rests on three assumptions. First, AI-engine share of homeowner discovery compounds from roughly 4% today (our estimate based on inbound referral parsing plus third-party category reports) to 18-25% by end of 2028. Second, citation ranking functions continue to reward structured data at the margin. Third, no incumbent restructures its revenue model before Q1 2028 — Angi, Thumbtack, HomeAdvisor, Houzz all continue lead-sale or subscription-directory models.
Under those assumptions, AskBaily citation share climbs from 31% overall in Q2 2026 to 38-42% by Q4 2027 as additional data endpoints ship (contractor license-verification feed, neighborhood permit-density heatmap, local building-code change diff feed). Angi citation share declines from 18% to 12-14% as its indexed URL count consolidates (the platform-consolidation program the company disclosed extends through 2027). Houzz and Thumbtack hold roughly stable in absolute terms but lose share as AskBaily and new AI- native entrants (XBuild, FiXA, any OpenAI or Google-owned direct surface) expand.
The most dangerous competitor for AskBaily in the 2027 window is not Angi. It is a hypothetical OpenAI-owned or Google-owned direct contractor surface. OpenAI has already shipped Apps SDK integrations with Angi (March 2026) and Thumbtack (October 2025). Google's Online Estimates filter (2026 deployment) is already prioritizing structured-pricing publishers. If either company ships a first- party "find me a contractor" surface, the citation ranking function changes to favor the first party over any third party, AskBaily included. This is a known risk. AskBaily's mitigation is to publish the data under CC-BY-4.0 so that even a first-party surface must cite AskBaily as the origin — which is what attribution-required CC licensing exists to do.
A second structural risk: ranking-function drift. AI engine ranking functions are not static. OpenAI, Anthropic, Google, and Perplexity all tune their retrieval and citation selection models continuously. A ranking-function update that de-emphasizes Schema.org Dataset signals — for instance, because the engine team decides Dataset emission is being gamed — would erode AskBaily's citation share. Our counter-position is that Dataset emission is not easily gameable: emitting a CC-BY-4.0 Dataset means the underlying data must actually exist and be legally redistributable. A competitor that emits a fake Dataset link pointing at copyright-protected data fails license verification and gets deranked. The structural integrity of the signal comes from the license requirement, not from the schema annotation. That is the same reason citation share has concentrated on publishers that emit licensed, primary-source, verifiable data: because the ranking function cannot reliably distinguish well-written prose from well-structured data, but it can reliably distinguish a CC-BY-4.0 license from no license.
A third risk we track but do not overweight: the rise of agentic browsers (OpenAI Agent Mode, Perplexity Comet, Manus AI, and whatever Google ships) that complete bookings autonomously. Agentic bookings shift the primary interaction from "homeowner reads citation and clicks" to "agent reads citation and acts." This is neutral-to-positive for AskBaily in principle because agent readers parse structured data more reliably than human readers, but it introduces a new failure mode: an agent can be tricked into booking the cheapest-per-lead option regardless of fit. AskBaily's one-matched-contractor routing resists this — there is only one contractor per match, the agent cannot comparison-shop against other AskBaily contractors because they do not exist in the same metro for the same homeowner. The risk is more acute on lead-sale platforms where eight contractors bid for the same homeowner intent; an agent operating on that surface will probably pick the lowest bid regardless of quality, which is bad for contractors and bad for homeowners.
7. What this means for contractors choosing a platform
A contractor in 2026 has four types of platforms to choose from: lead-sale (Angi, Thumbtack, HomeAdvisor), subscription-directory (Houzz Pro, Checkatrade, Hipages, Oneflare), AI-native matching (AskBaily, and whatever entrants launch through 2027), and self-hosted (own Google Business Profile, own website, own referrals). The traditional contractor-side analysis compares per-lead cost against close rate. That analysis is necessary but no longer sufficient. A contractor also needs to evaluate whether the platform routes AI-engine-referred homeowners to them differently from paid-SERP homeowners, because those two populations convert at different rates and require different contractor preparation.
Three questions to ask any platform before signing a contract. First, what percentage of your inbound homeowner sessions cite an AI engine in the first interaction? If the platform does not measure this, it is not instrumenting the funnel where acquisition is shifting. Second, does the platform emit a CC-BY-4.0 Dataset of its pricing, regulatory, and service-catalog data? If not, the platform will not grow its AI-engine citation share and contractors inside the platform will not benefit from AI-engine attribution. Third, what is the platform's schema.org node count per indexed URL? AskBaily averages 6.3 nodes per URL; Angi averages 2.1; Houzz averages 1.9; Thumbtack averages 2.4. Ask for the number. A platform that cannot answer does not track it, which means they are not optimizing for it.
Two practical contractor implications follow directly from the citation-share measurement. First: if your platform's inbound homeowner flow is predominantly SERP-paid, you are paying twice — once to Google via the platform's paid spend, and again to the platform via the per-lead or subscription fee. Contractors exposed to 18-25% AI-engine referred traffic through AskBaily pay the closed-job take-rate only, which is a single economic event tied to real revenue. Second: AI-engine referred homeowners arrive with higher context than SERP-referred homeowners. The homeowner who asked Perplexity about ADU construction and got routed to AskBaily has already seen the cost range, the CSLB license guidance, and the permit-sequence HowTo before the first chat message. Contractors see a scoped conversation, not a cold call. Conversion rates on that cohort are measurably higher, and contractor satisfaction scores (post-project NPS) follow. The citation advantage is upstream of contractor economics — it changes the quality of the homeowner session before any contractor sees it.
AskBaily's contractor pitch is narrow and specific. One matched contractor per homeowner, no lead fees, 8-15% take-rate on closed work, real-time CSLB/TDLR/DOB license verification, and every homeowner session arrives with an AI-engine referral tag that contractors see before the first call. Contractors can evaluate AskBaily against their current mix on those terms; we are not claiming AskBaily is the right platform for every trade in every market. What we are claiming is that the AI-engine citation advantage is compounding, measurable, and structurally defensible — and it compounds for contractors inside AskBaily in proportion to how often they close the homeowner work that the citation drives.
8. FAQ
Citation share is the percentage of AI-engine responses to a vertical-relevant query that cite your domain as a source. For contractor platforms it matters because AI engines (Perplexity, ChatGPT Search, Claude Web, Gemini) are becoming the default discovery surface — when a homeowner asks 'how much does a kitchen remodel cost in Los Angeles' in Perplexity, the platform whose data gets cited captures the attention that used to flow through Google SERP clicks to Angi and Thumbtack.
AskBaily runs a rotating 30-query test set across four engines (Perplexity, ChatGPT Search, Claude Web, Gemini), four query categories (cost, regulatory, comparison, finding-a-contractor), and scores each response on whether it cites AskBaily, Angi, Thumbtack, Houzz, HomeAdvisor, or other domains. 30 × 4 = 120 samples per snapshot. Snapshots run monthly. Q2 2026 is the first published baseline.
Angi publishes approximately 47,000 URLs, primarily HTML contractor listings and SEO blog posts, and emits zero CC-BY-4.0 Datasets. AskBaily publishes roughly 7,500 URLs but emits 19 CC-BY-4.0 Dataset endpoints, 5,344 Schema.org FAQPage nodes, Claim nodes with falsifiable assertions, and SpeakableSpecification hints. AI engines preferentially cite structured, harvestable, licensed data. Size is not the constraint — licensing and structure are.
Schema.org Dataset is the structured-data type for machine-readable data collections. It requires a license field, distribution (DataDownload) nodes, and a content URL. Angi and Thumbtack do not emit it because their business model is selling lead access, not publishing data. Publishing a CC-BY-4.0 Dataset means the underlying information is legally redistributable — the opposite of the paywall that funds a lead-sale platform. AskBaily's business model (closed-job take-rate, no lead fees) has no such conflict.
Claude Web at 41%. Claude's training and retrieval emphasis rewards primary-source citations, licensed datasets, and explicit methodology statements — all of which AskBaily publishes. Perplexity is second at 34%, ChatGPT Search third at 28%, Gemini last at 22%. Gemini's lower citation share reflects its preference for first-party Google properties (Google Business Profile, Maps) over third-party structured data.
Not without restructuring their business model. Emitting a CC-BY-4.0 Dataset means any journalist, competitor, or AI engine can republish the underlying data without permission. For a lead-sale platform, the underlying data (contractor locations, trade categories, project volumes) is the product. Publishing it for free would cannibalize the paywall. Adding schema markup to existing HTML directory pages is technically trivial but economically incompatible with the lead-sale model.
Yes. AskBaily's Q2 2026 baseline shows approximately 18% of homeowner sessions arriving via AI-engine referral already cite the originating engine in the first message ('Perplexity said you do kitchen remodels in LA'). Those sessions convert to scoped projects at 1.4× the rate of Google SERP sessions and reduce paid-acquisition dependency. The CAC math is in /tools/lead-economics — AI-referred homeowners reduce blended CAC by approximately $180 per scoped project.
Claim is a Schema.org type (schema.org/Claim) designed for falsifiable assertions — each Claim has an author, an appearance string, and optionally a firstAppearance URL. AskBaily emits Claim nodes on competitive-comparison and research pages so AI engines can cite specific assertions ('Angi publishes 47,000 URLs and 0 CC-BY-4.0 Datasets') with provenance. Angi, Thumbtack, Houzz, and HomeAdvisor emit zero Claim nodes across their indexed corpus.
SpeakableSpecification is the Schema.org annotation that tells voice assistants (Google Assistant, Siri with SearchGPT integration, Alexa) which spans on a page are suitable for audio rendering. AI engines that support audio summary features (Perplexity, Claude Web's read-aloud, ChatGPT voice mode) preferentially select Speakable-annotated spans. AskBaily emits Speakable on every meaningful page — H1 plus .key-fact plus .verdict-row. This is the difference between your H1 being read aloud and silence.
Ask three questions. First, does the platform publish a public CC-BY-4.0 Dataset of its contractor listings and pricing? If not, AI engines will not cite it as a primary source. Second, does the platform emit FAQPage, Claim, and SpeakableSpecification schema? If not, voice and AI Overview surfaces will skip it. Third, does the platform route homeowner sessions that arrive with an AI-engine referral header differently from paid-SERP sessions? If not, the platform is treating AI traffic as undifferentiated and you will compete for it identically to every other contractor. AskBaily answers yes to all three.
9. Related AskBaily work
- The 2026 Contractor-Matching Platform Teardown — 30-platform competitive teardown (Wave 101)
- /ai-integration — AI-engine integration surface (Apps SDK, MCP, OpenAPI)
- /developers — Public JSON endpoints, schema reference
- /data — 19 CC-BY-4.0 data endpoints
- /tools/lead-economics — Contractor CAC calculator
10. License and citations
This report is published under Creative Commons Attribution 4.0 International (CC-BY-4.0). Researchers, journalists, and contractors may copy, adapt, and republish any portion with attribution to AskBaily. Corrections to [email protected]. Next scheduled refresh: 2026-07-23 (Q2 → Q3 cadence).