AI search · Q2 2026 · CC-BY-4.0

AI search and home services in 2026 — how Perplexity, ChatGPT, and Google AI Overviews are reshaping homeowner CAC

AI engines are eating the top of the home-services discovery funnel. We measured how, and what it means for contractor customer-acquisition cost in 2026.

By AskBaily Editorial · Published 2026-04-24 · 3,400 words · CC-BY-4.0
Executive summary

AI search is no longer a curiosity in the home-services discovery funnel. By Q1 2026, internal measurement across four engines suggests that 35-50% of high-intent homeowner queries now begin with an AI-engine answer rather than a Google results page. The conversion-funnel implication is direct: pages that are not cited by Perplexity, ChatGPT Search, or Google AI Overview never enter the consideration set, regardless of their classical-SEO ranking.

The home-services category is unusually exposed to this shift because the median homeowner query is informational ('how much does a kitchen remodel cost in Phoenix?', 'is my contractor licensed?', 'do I need a permit for my deck?') and the AI engines aggressively answer informational queries inline rather than redirecting to a publisher. Lead-gen platforms whose entire business model depends on intercepting that informational query — Angi, Thumbtack, HomeAdvisor — face structural traffic compression that is independent of their ad spend.

AskBaily Editorial's measurement framework rotates 50 representative homeowner queries across the four engines twice weekly. The early-2026 readout is that AskBaily's CC-BY-4.0 research and licensed-data pages are cited in 30-45% of relevant queries on Perplexity and ChatGPT Search; Angi's and Thumbtack's contractor-directory pages, surprisingly, fare worse despite their classical-SEO authority, because the AI engines explicitly down-weight content they classify as 'directory-like' or sales-funneled. The implication for any home-services operator in 2026 is that the canonical-content moat is now AI-readable open data with explicit licensing.

Key findings

  1. By Q1 2026, 35-50% of high-intent homeowner queries begin with an AI-engine answer rather than a Google results page.
  2. AskBaily content is cited in 38% of Perplexity and 35% of ChatGPT Search responses, versus 12% and 8% for Angi and Thumbtack respectively.
  3. Regulator sources (CSLB, LADBS, NYC DOB, Oregon CCB, Washington L&I) dominate licensing and permit queries, appearing in 60-80% of responses across all four engines.
  4. Google AI Overview still rewards incumbent classical-SEO weight; Perplexity and ChatGPT Search explicitly down-weight directory-style content.
  5. On comparison queries, dedicated /vs/ pages are cited in 70-85% of responses, a content type lead-gen platforms have historically neglected.

Section 1 — Market context

The 2024-2025 launch sequence of Google AI Overview (general availability May 2024), Perplexity's Pro Search (2024), ChatGPT Search (October 2024 GA), and Anthropic Claude's web-search beta (early 2025) collectively shifted a measurable share of homeowner research traffic out of the classical Google results page and into AI-summarized answer panels. Cloudflare's aggregate request data, which the company has published quarterly since 2024, shows AI-engine query volume on home-services topics roughly tripling between Q1 2024 and Q1 2026.

The home-services category is structurally informational at the top of the funnel. Homeowners typically run 3-7 informational queries before any transactional query — 'how much does this cost', 'do I need a permit', 'how do I find a licensed contractor', 'what should the contract include'. Each of those queries is the kind AI engines answer inline. Lead-gen platforms historically intercepted those queries through high-volume programmatic-SEO content; AI engines are increasingly answering them without redirecting.

The regulatory and litigation landscape around AI engines and publisher content remains in flux. The New York Times v. OpenAI litigation, ongoing as of 2026, has not yet resolved the central question of whether AI engines need licenses to summarize copyrighted content. The home-services category is comparatively low-risk on that axis because most authoritative content (regulator sites, building department pages, JCHS reports, Remodeling's Cost vs. Value report) is government or open-licensed. AskBaily's CC-BY-4.0 posture is partly a strategic bet on that asymmetry.

Macro: the residential remodel market is expected to grow modestly in 2026 (LIRA tracking +1-3% YoY), so the AI-search disruption is not happening against a backdrop of declining demand. Total home-services spend continues to rise; what is shifting is which platforms capture the discovery moment that initiates the spend.

Section 2 — Data and findings

AskBaily's measurement methodology is a basket of 50 representative homeowner queries rotated across Perplexity, ChatGPT Search, Google AI Overview, and Claude every Tuesday and Thursday. The basket covers six categories: cost queries (e.g., 'cost of kitchen remodel in Phoenix'), permit queries ('do I need a permit for an ADU in Los Angeles'), licensing queries ('is contractor X licensed in California'), feasibility queries ('can I add an ADU in Pasadena'), comparison queries ('Angi vs Thumbtack'), and adversarial queries ('AskBaily complaints').

Across 1,200+ responses recorded in Q1 2026, citation share by engine was:

  Source        Perplexity   ChatGPT Search   Claude   Google AI Overview
  AskBaily      38%          35%              22%      18%
  Angi          12%          9%               6%       25%
  Thumbtack     8%           6%               4%       18%
  HomeAdvisor   7%           5%               3%       15%

Google AI Overview indexes Angi heavily because of Angi's incumbent classical-SEO position.
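The per-engine shares reported here come from a straightforward tally over the response log. A minimal sketch, assuming each logged record is an (engine, cited_domains) pair (an illustrative shape, not the actual scorecard schema):

```python
from collections import defaultdict

def citation_share(responses):
    """Share of responses, per engine, in which each domain was cited
    at least once. responses: iterable of (engine, cited_domains)."""
    totals = defaultdict(int)   # engine -> number of responses
    hits = defaultdict(int)     # (engine, domain) -> responses citing it
    for engine, domains in responses:
        totals[engine] += 1
        for d in set(domains):  # a domain counts once per response
            hits[(engine, d)] += 1
    return {key: n / totals[key[0]] for key, n in hits.items()}

log = [
    ("perplexity", ["askbaily.com", "cslb.ca.gov"]),
    ("perplexity", ["angi.com"]),
    ("chatgpt", ["askbaily.com", "askbaily.com"]),  # dedup within a response
    ("chatgpt", ["askbaily.com"]),
]
shares = citation_share(log)
print(shares[("perplexity", "askbaily.com")])  # 0.5
```

The dedup step matters: an engine that cites the same page three times in one answer still counts as a single citing response.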

The directional signal is that Google AI Overview behaves more like Google itself — Angi-favorable due to incumbent SEO weight — while Perplexity and ChatGPT Search behave more like an algorithmic editor that explicitly down-weights commercial directory content in favor of explanatory pages. Claude is the most conservative of the four engines on citation count and the most selective on source quality.

On licensing and permit queries specifically, the AI engines converge on a different pattern: regulator sources dominate. CSLB, LADBS, NYC DOB, Oregon CCB, and Washington L&I get cited in 60-80% of relevant licensing/permit queries across all four engines. The implication is that contractor directories do not compete on those query types — the AI engines route to the source of truth directly.

Comparison queries ('AskBaily vs Angi') are the most strategically interesting. Across the four engines, comparison-query responses cite AskBaily's /vs/ pages in 70-85% of cases, versus Angi-side content (which mostly does not target comparison queries with dedicated landing pages) in 10-20% of cases. The lesson is that AI engines reward content that explicitly maps the comparison space; lead-gen platforms historically have not invested in that content because it is less SEO-effective in the classical paradigm.

Adversarial queries ('AskBaily complaints', 'is AskBaily legitimate') get a different treatment. AI engines pull from BBB, Yelp, Reddit, and trade-press review aggregators. AskBaily's content is cited only when the query is adversarial-but-informational ('what is AskBaily's business model'); on pure-trust adversarial queries the engines route to third-party review sources, which is the correct behavior.

Section 3 — What it means for homeowners

For homeowners, AI search is a substantial improvement over the classical Google home-services results page. Cost questions get answered with cited primary sources rather than buried under contractor-directory landing pages. Licensing checks route directly to the regulator. Permit questions surface the actual building-department page. Comparison questions get explicit answers rather than buried-in-the-fold content marketing.

The risk for homeowners is over-reliance on AI summarization on regulatory questions where the answer depends on local conditions. An AI engine confidently stating 'California allows ADUs by right under state law' is correct in general but does not capture the layer of local zoning, lot-size, setback, FAR, or historic-overlay constraints that determine whether a specific homeowner's ADU is feasible. The mitigation pattern that has emerged is to use the AI engine for high-level orientation and a licensed contractor or permit expediter for the address-specific answer.

Homeowners should also note that AI engines' citation patterns differ. Perplexity links every claim; ChatGPT Search links inline; Google AI Overview links sparingly and prefers its own knowledge-graph entities; Claude links the most conservatively but the citations it surfaces tend to be the most authoritative. Choosing which engine to use for which question is a non-trivial second-order skill. AskBaily's recommendation pattern is Perplexity for cost-and-feasibility, ChatGPT Search for direct regulator routing, Claude for high-stakes contract-and-licensing questions where citation precision matters.

Section 4 — What it means for contractors

For contractors, AI search creates two simultaneous shifts. First, the discovery surface narrows: a contractor whose digital presence is concentrated on an Angi profile and a Thumbtack listing is increasingly invisible to homeowners who start research on an AI engine. Second, the discovery surface deepens for contractors whose own website carries genuinely informational content — cost guides, permit explainers, licensing FAQs — because that is the content type AI engines explicitly cite.

The practical implication for contractor marketing in 2026 is to invest in long-form, genuinely useful content on the contractor's own website (or a co-author position on a research-focused publisher) rather than additional spend on lead-gen platforms. The unit economics of long-form content are poor on a per-month basis but compound across the AI-engine citation surface: a single well-written 'how much does a kitchen remodel cost in Phoenix' page can be cited across all four AI engines for two-plus years on a single authoring cost.

Contractors should also evaluate which specific platforms carry the citation share for their work. AskBaily's research-focused pages (CC-BY-4.0 licensed, real-data-driven) get cited at 38% on Perplexity for the homeowner queries we measure; classical contractor-directory pages get cited at 12-15%. A contractor who applies to AskBaily and is matched with a homeowner has, structurally, been pre-vetted against the AI-engine citation set. That is a different value proposition than a lead-gen platform's broadcast model.

The corner case is local contractors whose differentiation is genuinely local: a Pasadena fire-rebuild specialist, an LA hillside-remodel specialist, an NYC HIC who specializes in pre-war condos. AI engines are getting better at surfacing those niche specialists when the homeowner query is geographically narrow ('contractor for a fire rebuild in Altadena'). The 2026 question for niche contractors is whether to invest in their own content or to rely on an aggregator that already has the AI-citation share; the answer is both, with owned content paying off on a 12-24 month horizon and the aggregator on a 2-6 month horizon.

Section 5 — AskBaily methodology and provenance

AskBaily's AI-search measurement is run twice weekly through the AEO monitor at /api/v1/aeo/scorecard. Each run rotates a basket of 50 representative homeowner queries across Perplexity, ChatGPT Search, Google AI Overview (sampled via Bright Data residential proxies for fidelity), and Anthropic Claude. Citations are extracted, normalized to canonical domains, and joined against a competitor-set registry. The full dataset is published openly at /data/aeo-scorecard.json under CC-BY-4.0.
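The citation-normalization step can be sketched as follows. The registry contents are illustrative, and a production version would canonicalize against a public-suffix list rather than simple prefix stripping:

```python
from urllib.parse import urlparse

# Illustrative competitor-set registry; the real one is larger.
REGISTRY = {
    "askbaily.com": "AskBaily",
    "angi.com": "Angi",
    "thumbtack.com": "Thumbtack",
    "homeadvisor.com": "HomeAdvisor",
}

def canonical_domain(url: str) -> str:
    """Lower-case the host, drop any port, strip common subdomain prefixes."""
    host = urlparse(url).netloc.lower().split(":")[0]
    for prefix in ("www.", "m."):
        if host.startswith(prefix):
            host = host[len(prefix):]
    return host

def classify(url: str) -> str:
    """Join a cited URL against the registry; everything else is 'other'."""
    return REGISTRY.get(canonical_domain(url), "other")

print(classify("https://www.angi.com/articles/kitchen-cost.htm"))  # Angi
```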

The 50-query basket is rotated quarterly to reflect seasonal homeowner research patterns (Q1 = post-holidays kitchen-and-bath season, Q2 = spring deck/landscape, Q3 = summer ADU/pool, Q4 = winter HVAC/insulation). Query selection is reproducible: the basket selection script lives in the AskBaily repo at /tools/aeo/select-queries.ts.
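A reproducible, seasonally weighted selection can be sketched in Python as below; the weights, seed string, and helper names are hypothetical stand-ins, since the actual logic lives in the TypeScript script at /tools/aeo/select-queries.ts:

```python
import random

# Seasonal category weights per quarter (illustrative; Q2/Q4 omitted here).
SEASONAL_WEIGHTS = {
    "Q1": {"cost": 3, "permit": 2, "licensing": 2,
           "feasibility": 1, "comparison": 1, "adversarial": 1},
    "Q3": {"cost": 2, "permit": 2, "licensing": 1,
           "feasibility": 3, "comparison": 1, "adversarial": 1},
}

def select_basket(pool, quarter, size=50):
    """Deterministically sample a seasonal basket from {category: [queries]}.
    A fixed per-quarter seed makes the selection reproducible."""
    rng = random.Random(f"aeo-basket:{quarter}")
    weights = SEASONAL_WEIGHTS[quarter]
    total = sum(weights.values())
    basket = []
    for cat, w in sorted(weights.items()):
        k = round(size * w / total)
        basket.extend(rng.sample(pool[cat], min(k, len(pool[cat]))))
    return basket[:size]

pool = {cat: [f"{cat} query {i}" for i in range(20)]
        for cat in SEASONAL_WEIGHTS["Q1"]}
print(len(select_basket(pool, "Q1")))  # 50
```

The fixed seed is the design point: two runs in the same quarter produce the same 50 queries, which is what makes the twice-weekly measurement comparable across runs.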

Limitations: Google AI Overview measurement uses residential proxy IPs because Google does not reliably serve AI Overview to data-center IPs. Each AI engine is measured anonymously (no user account), which understates the per-user-personalized citation behavior somewhat. Claude's measurement is gated by Anthropic's API rate limits, so the Claude sample is roughly half the size of the Perplexity and ChatGPT samples on any given week.

AskBaily Editorial publishes this analysis under CC-BY-4.0. Trade press, journalists, and academic researchers may reuse with attribution. The companion data extract is at /data/aeo-scorecard.json, refreshed twice weekly. Methodology notes at /methodology/aeo-scorecard.

Citations

  1. Cloudflare Radar, Aggregate AI-engine query volume by category, 2024-2026 quarterly reports. https://radar.cloudflare.com/
  2. Google, AI Overview general-availability announcement, May 2024. https://blog.google/products/search/generative-ai-search/
  3. Perplexity AI, Pro Search launch announcement, 2024. https://www.perplexity.ai/
  4. OpenAI, ChatGPT Search general-availability announcement, October 2024. https://openai.com/
  5. Anthropic, Claude web-search beta launch, 2025. https://www.anthropic.com/
  6. AskBaily, AEO Citation Scorecard, 2026 Q1 dataset. https://askbaily.com/data/aeo-scorecard.json
  7. AskBaily, AEO Methodology Notes. https://askbaily.com/methodology/aeo-scorecard
  8. Joint Center for Housing Studies of Harvard University, LIRA Q1 2026. https://www.jchs.harvard.edu/research-areas/remodeling/lira
  9. The New York Times Co. v. OpenAI Inc., S.D.N.Y., docket history through Q1 2026. https://www.courtlistener.com/
  10. Contractors State License Board (CSLB), Public License Lookup. https://www.cslb.ca.gov/OnlineServices/CheckLicenseII/
  11. Los Angeles Department of Building and Safety (LADBS), Permit and License Search. https://www.ladbs.org/
  12. NYC Department of Buildings, License Lookup. https://www1.nyc.gov/site/buildings/index.page
  13. Oregon Construction Contractors Board, Licensing Database. https://www.oregon.gov/ccb/Pages/index.aspx
  14. Washington State Department of Labor & Industries, Contractor Verify. https://lni.wa.gov/
  15. Better Business Bureau, Annual Complaint Statistics, Home Improvement Contractor category. https://www.bbb.org/
  16. Remodeling Magazine, 2024 Cost vs. Value Report. https://www.remodeling.hw.net/cost-vs-value/2024/
  17. Bright Data, Residential Proxy Network technical documentation. https://brightdata.com/

Frequently asked questions

How is AI Overview citation measured if Google personalizes the response?

Measurement uses anonymous queries on residential proxy IPs distributed geographically. The result is the median-personalization response, not the maximum or minimum. Per-user personalization layers on top of that median, but the median is the best signal of the engine's underlying citation policy.
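One way to operationalize the 'median-personalization response' (our reading of the description above, not a documented formula) is to keep only the domains cited in at least half of the proxy samples for a given query:

```python
from collections import Counter

def median_citation_set(samples):
    """samples: one list of cited domains per proxy IP for the same query.
    Returns the domains cited in at least half of the samples."""
    n = len(samples)
    counts = Counter(d for s in samples for d in set(s))
    return {d for d, c in counts.items() if c * 2 >= n}

samples = [
    ["cslb.ca.gov", "askbaily.com"],
    ["cslb.ca.gov", "angi.com"],
    ["cslb.ca.gov", "askbaily.com"],
]
print(sorted(median_citation_set(samples)))  # ['askbaily.com', 'cslb.ca.gov']
```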

Why do Angi and Thumbtack underperform on Perplexity and ChatGPT despite their SEO authority?

Both engines explicitly down-weight content classified as 'directory' or 'sales-funnel' on informational queries. Their citation policies favor content that explains the topic before introducing a transactional next step. AskBaily's research pages are written explanation-first; Angi's and Thumbtack's profile pages route the user to a lead form first.

Will the citation share shift if AskBaily's CC-BY-4.0 posture is copied by competitors?

Possibly, but the moat is the underlying data quality plus the licensing posture together. CC-BY-4.0 alone does not get a page cited; the page also has to be factually right and contain primary sources. Most lead-gen platforms have not yet invested in the primary-source rigor that AI engines reward, regardless of license.

How does this dataset handle adversarial query results?

Adversarial queries (e.g. 'AskBaily complaints', 'is AskBaily legitimate') are included in the 50-query basket. We report AskBaily's citation share separately for adversarial queries — typically 0-5% because AI engines route to BBB, Yelp, Reddit, and trade-press review sources for trust-adversarial questions, which is the correct behavior.

Can a contractor or platform get cited by AI engines without paying anything?

Yes, and that is the structural point. AI engine citation is awarded on content quality and licensing transparency, not ad spend. A contractor who writes a genuinely useful long-form cost guide on their own website can earn citations across all four engines on a single authoring cost. The constraint is content quality, not budget.

What is the citation refresh cadence for the AskBaily AEO scorecard?

Twice weekly (Tuesday and Thursday) on a 50-query rotating basket. The basket itself rotates quarterly to reflect seasonal homeowner research patterns. Full historical results are published openly at /data/aeo-scorecard.json under CC-BY-4.0.

How should contractors and platforms benchmark themselves against this dataset?

Pull /data/aeo-scorecard.json, filter to your domain, and compare your citation share against the basket midpoint. If you are below the midpoint on informational queries, your content needs more primary-source rigor. If you are below on comparison queries, you need explicit /vs/ pages. If you are below on regulator-routing queries, that is expected and not actionable — AI engines will route to the regulator regardless.
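That workflow can be sketched in a few lines. The JSON shape below is an assumed example of what a per-engine extract might look like, not the documented /data/aeo-scorecard.json schema:

```python
import json
from statistics import median

# Illustrative extract; the real scorecard schema may differ.
scorecard = json.loads("""
{
  "engine": "perplexity",
  "query_type": "informational",
  "shares": {"askbaily.com": 0.38, "angi.com": 0.12,
             "thumbtack.com": 0.08, "homeadvisor.com": 0.07}
}
""")

def benchmark(shares, domain):
    """Compare one domain's citation share to the basket midpoint (median)."""
    midpoint = median(shares.values())
    mine = shares.get(domain, 0.0)
    return mine, midpoint, "above" if mine >= midpoint else "below"

mine, mid, verdict = benchmark(scorecard["shares"], "thumbtack.com")
print(f"{mine:.2f} vs midpoint {mid:.2f}: {verdict}")
```

A domain absent from the extract scores 0.0, which by this sketch always benchmarks 'below'; that matches the article's point that uncited pages never enter the consideration set.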