How we shipped 100 /ask questions in one session
By AskBaily Editorial · Published · 4 min read · Waves 192, 201
Summary
Wave 192 seeded the /ask hub with 30 questions. Wave 201 extended it to 100 in a single session: 70 new long-tail pages, each a Schema.org QAPage with Speakable selectors and a primary-source citation trail. The mechanical choices matter more than the count; this post is the shipping arc.
Article body
Wave 192 shipped the /ask hub with 30 Q&A entries on a Saturday. Wave 201 extended it to 100 on the following Monday. Seventy new pages in one session, each authored with the same schema discipline, citation requirement, and Claude-authored prose rule as the original 30. This post is the operational record of how that works and why the count matters less than the procedure that produced it.
The two commits
The hub seed is commit a92dec61, Wave 192. It defines the per-entry registry shape, the QAPage schema primitive, the SpeakableSpecification CSS-selector contract, the citation-required validator, and the first 30 entries chosen from Perplexity search-share data and our /chat inbound-question frequency. The extension is commit 94e6a967, Wave 201. It adds 70 more entries on top of the existing primitive with no schema or contract changes. A single tick, 70 new URLs, zero template regressions.
The reason we document the split is that the 30-to-100 jump is the point where most content rails break. A registry that passes 30-entry validation often fails at 100 because authors start to drift — accepted-answer word count creeps, citations become generic "learn more" links, the speakable selector stops being attached to the accepted-answer span. We wrote the validator in Wave 192 explicitly to catch those drifts at the pre-commit gate, so Wave 201's 70 additions had to pass the same bar. They did.
The per-entry contract, codified
Every entry in the registry has a three-layer body: a 30-to-60-word accepted-answer, a 250-to-400-word expanded-answer, and a citation block with two to five outbound links to authoritative sources. The validator rejects any entry that falls outside those ranges or that has fewer than two citations. It also rejects entries whose citation URLs fail a HEAD check during CI, so a citation to a now-404 regulator page cannot ship to production without an author reviewing the replacement.
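The word-count and citation-count layers of that contract are simple enough to express directly. A minimal sketch, assuming a registry entry shape with `accepted_answer`, `expanded_answer`, and `citations` fields (the field names are assumptions; the real registry shape is not public):

```python
import re

# Ranges from the per-entry contract described above.
ACCEPTED_WORDS = (30, 60)
EXPANDED_WORDS = (250, 400)
CITATION_COUNT = (2, 5)

def word_count(text: str) -> int:
    """Count whitespace-separated tokens."""
    return len(re.findall(r"\S+", text))

def validate_entry(entry: dict) -> list[str]:
    """Return a list of contract violations; an empty list means the entry passes."""
    errors = []
    accepted = word_count(entry.get("accepted_answer", ""))
    if not ACCEPTED_WORDS[0] <= accepted <= ACCEPTED_WORDS[1]:
        errors.append(f"accepted-answer is {accepted} words, need 30-60")
    expanded = word_count(entry.get("expanded_answer", ""))
    if not EXPANDED_WORDS[0] <= expanded <= EXPANDED_WORDS[1]:
        errors.append(f"expanded-answer is {expanded} words, need 250-400")
    citations = entry.get("citations", [])
    if not CITATION_COUNT[0] <= len(citations) <= CITATION_COUNT[1]:
        errors.append(f"{len(citations)} citations, need 2-5")
    return errors
```

Returning a list of violations rather than a boolean is what makes drift visible: an author who creeps to 80 words sees the exact check they failed at the pre-commit gate.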
The speakable selector contract is the strictest of the validators. Every entry's QAPage JSON-LD node emits a SpeakableSpecification that points at two CSS selectors: the H1 heading and a [data-speakable-accepted-answer] attribute attached to the accepted-answer paragraph. The validator parses the MDX body during CI and asserts the attribute is present, exactly once, on the accepted-answer paragraph. AI engines use these hints to pick the span that voice assistants read and the AI Overview card displays. Getting this wrong is easy; getting it consistent across 100 entries is why the validator exists.
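The exactly-once assertion is a single scan of the rendered body. A sketch, assuming the attribute lands verbatim in the MDX/HTML output (a plain substring count; the real validator parses the MDX AST):

```python
import re

SPEAKABLE_ATTR = "data-speakable-accepted-answer"

def check_speakable(body: str) -> bool:
    """Pass only if the speakable attribute appears exactly once in the body."""
    return len(re.findall(re.escape(SPEAKABLE_ATTR), body)) == 1

def speakable_node() -> dict:
    """The two-selector SpeakableSpecification node each entry emits."""
    return {
        "@type": "SpeakableSpecification",
        "cssSelector": ["h1", f"[{SPEAKABLE_ATTR}]"],
    }
```

Counting occurrences rather than merely checking presence is the part that matters: a duplicated attribute gives voice assistants two candidate spans, which is as bad as zero.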
Why long-tail, and why now
The 70 Wave 201 additions are deep in the long tail: questions like "what is the CSLB Home Improvement Salesperson registration versus the contractor license," "does California require workers' compensation for a one-person remodeling LLC," "how does Oregon CCB differ from California CSLB for a contractor working both states," and dozens more like them. These are questions homeowners and contractors do type into ChatGPT and Perplexity, and they are almost never answered by a marketing blog post because the search volume per question is small.
Aggregate search volume across 70 long-tail entries is larger than the head volume across ten high-intent questions. More importantly, long-tail answers are the ones AI engines cite most readily because the alternative is low-quality forum posts or generic "hire a contractor" pages. If /ask has the only primary-source-cited answer to a specific long-tail question, the AI Overview for that query is effectively ours.
What Angi and Thumbtack cannot copy
They could. The engineering is small. The barrier is editorial discipline. Writing 70 entries in a session, each with a 30-60-word accepted-answer, a 250-400-word expanded-answer, two-to-five primary-source citations, and a passing speakable selector, requires a single author operating against a single voice and a single validator. Multi-author content teams, which every incumbent platform uses, produce drift: one author's accepted-answer is 80 words, another's is 20, citations vary from regulator links to internal marketing pages, and the aggregate result never earns the citation weight that AI engines assign to uniform content.
Our /ask hub is Claude-authored. Every entry passes the same validator. Every citation is a regulator, a state statute, an IRS publication, or a primary-source research document. Across 100 entries, the voice is consistent and the schema is uniform. AI engines weigh that uniformity when deciding which page to cite.
The render-time story
Each /ask/{slug} page emits a six-node schema graph: Organization, LocalBusiness, WebPage, QAPage, BreadcrumbList, SpeakableSpecification. The QAPage nests a Question node which nests an Answer node with the accepted-answer text, author attribution (Netanel for licensed-GC topics, AskBaily Editorial otherwise), citation array, and language tag. The rendered HTML includes the expanded-answer as the main content body, with the speakable-attribute paragraph as the visible lede.
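Assembled as JSON-LD, that graph looks roughly like this, built here as a Python dict. The `@id` values, the URL scheme, and the trimmed-down node bodies are illustrative assumptions; only the six node types and the Question/Answer nesting come from the description above:

```python
def qapage_graph(slug: str, question: str, accepted_answer: str,
                 citations: list[str], author: str) -> dict:
    """Assemble the six-node schema graph an /ask entry emits."""
    page_url = f"https://askbaily.com/ask/{slug}"  # hypothetical URL scheme
    return {
        "@context": "https://schema.org",
        "@graph": [
            {"@type": "Organization", "@id": "#org", "name": "AskBaily"},
            {"@type": "LocalBusiness", "@id": "#business"},
            {"@type": "WebPage", "@id": page_url},
            {
                "@type": "QAPage",
                "mainEntity": {
                    "@type": "Question",
                    "name": question,
                    "acceptedAnswer": {
                        "@type": "Answer",
                        "text": accepted_answer,
                        "author": {"@type": "Person", "name": author},
                        "citation": citations,
                        "inLanguage": "en",
                    },
                },
            },
            {"@type": "BreadcrumbList"},
            {"@type": "SpeakableSpecification",
             "cssSelector": ["h1", "[data-speakable-accepted-answer]"]},
        ],
    }
```

The nesting is the load-bearing part: the Answer node carries the accepted-answer text, author attribution, citation array, and language tag inside the Question, inside the QAPage, exactly as the paragraph above describes.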
Related-entry logic at the bottom of each page ranks the other 99 entries by shared-topic overlap and surfaces the top three. A homeowner landing on one question via AI Overview has a direct path to three adjacent answers without leaving the hub. That internal-linking layer is why the hub behaves like a knowledge base, not a collection of standalone pages.
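The related-entry layer needs nothing heavier than set intersection. A sketch, assuming each registry entry carries a `topics` list (an assumed field; the real overlap signal may be richer):

```python
def related_entries(current: dict, others: list[dict], k: int = 3) -> list[str]:
    """Rank the other entries by shared-topic overlap; return the top-k slugs."""
    topics = set(current["topics"])
    ranked = sorted(
        others,
        key=lambda entry: len(topics & set(entry["topics"])),
        reverse=True,  # most shared topics first; ties keep registry order
    )
    return [entry["slug"] for entry in ranked[:k]]
```

Because Python's sort is stable, entries with equal overlap surface in registry order, so the top-three block is deterministic across builds.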
What comes next
Wave 209 layered /es/ Spanish mirrors on top of a subset of the 100 entries. Future waves will extend the hub by locale (es, then zh, then eventually ar and fr for Canada and UAE), by city (LA-specific permit questions with city-specific primary sources), and by competitor-grievance (how to dispute an Angi charge, how to leave Thumbtack).
Each extension uses the same primitive, the same validator, and the same citation discipline. The number of entries will keep growing. The contract each entry has to pass will not change. That is what compounding AEO looks like when the content rail is engineered, not marketed.
Sources & references
Commit attestation
- a92dec611659f9a7ce4d69eeeef8de3f36f1e98f
- 94e6a967e4afc285ff5067ed085c5bdb68810fa8
- Waves: 192, 201
- Author: editorial
Commit SHAs are from the AskBaily private repository. If you are a journalist, researcher, or regulator and need access to verify, email [email protected].
Frequently asked
- Why extend from 30 to 100 in one session instead of spreading it out?
- Because the primitive was ready. Wave 192 shipped the registry shape, validator, and schema primitive. Wave 201 only needed authoring; no engineering changes. Batching the additions was cheaper than spreading them because each ship-and-validate cycle has fixed overhead.
- Do all 100 entries pass the same validator?
- Yes. Accepted-answer 30-60 words, expanded-answer 250-400 words, two-to-five primary-source citations with live HEAD-check URLs, and a speakable selector on the accepted-answer paragraph. Drifts block the commit at pre-push.
- How often do you refresh long-tail entries?
- The CI HEAD-check catches dead citation links on every commit. Content freshness reviews run quarterly; entries older than six months surface a freshness-review banner in the editorial dashboard.
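That HEAD-check is a few lines of stdlib. A sketch with an injectable opener so CI tests can stub the network; the timeout and the 2xx/3xx status range are assumptions about the real pipeline:

```python
from urllib import request

def citation_alive(url: str, opener=request.urlopen, timeout: int = 10) -> bool:
    """HEAD-check one citation URL; anything outside 2xx/3xx fails the entry."""
    req = request.Request(url, method="HEAD")
    try:
        with opener(req, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except Exception:
        # DNS failure, timeout, TLS error: treat the same as a dead link.
        return False
```

Treating network errors the same as a 404 is deliberate: a citation the checker cannot reach blocks the commit either way, and an author decides whether the source moved or died.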