A developer going by morinaga launched three AI-curated directory sites on April 23, 2026 (Top AI Tools, Find Games Like, and Open Alternative To) with a total infrastructure bill of roughly $25 per month. Traffic is essentially nonexistent right now, which you'd think would be the end of the story. But morinaga just published a detailed breakdown of exactly why they're not panicking, staking out a falsifiable claim: at least one site will hit 200 organic clicks per month sustained for two consecutive months by October 2026, or they'll publish their Search Console data and explain what went wrong.

The Obvious Problem Nobody's Ignoring

The counterargument to everything morinaga is building writes itself: Google already does it. AI Overviews now synthesize curated lists with one-sentence descriptions directly in the search results page for queries like "open source alternative to Notion." That's zero clicks, instant answer, and it's baked into the world's dominant search engine. The pessimistic read, that Google extracts your signal and stops sending traffic, has supporting evidence too. Industry-wide click-through rates on informational queries dropped measurably as AI Overviews expanded through 2025, and that trend hasn't reversed.

Three Places Where AI Overviews Have Structural Blind Spots

Morinaga identified specific gaps where traditional directory-style sites still outperform generative synthesis.

Attribute-based filtering is the first: if someone wants "open source Notion alternatives that work offline and have a mobile app," AI Overviews respond with hedged prose because they're synthesizing text, not querying structured fields. The underlying database uses columns like works_offline, has_mobile_app, and last_commit_date for faceted browsing, something an LLM writing paragraphs about the general landscape can't match.

Editorial negative space is the second gap: the game recommender includes "avoid if" caveats generated by a Claude Haiku prompt that forces critical answers. AI Overviews default to positive framing because they synthesize existing content, which means someone with a specific disqualifying requirement gets an unhelpful answer.

The third blind spot is freshness on maintenance status: ETL jobs pull GitHub commit activity weekly, marking tools inactive after 14 months without commits. AI Overviews don't distinguish between actively maintained projects in 2026 and ones that peaked in 2024.
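To make the first and third gaps concrete, here is a minimal sketch of faceted filtering over typed columns combined with a commit-freshness cutoff. Only the column names (works_offline, has_mobile_app, last_commit_date) and the 14-month inactivity rule come from the article; the Tool class, the function names, and the sample catalog entries are illustrative assumptions, not morinaga's actual code.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Tool:
    # Hypothetical record mirroring the columns the article names.
    name: str
    works_offline: bool
    has_mobile_app: bool
    last_commit_date: date

# "Inactive after 14 months without commits", approximated in days
# (14 × ~30.44 days per month ≈ 426 days).
INACTIVITY_CUTOFF = timedelta(days=426)

def is_active(tool: Tool, today: date) -> bool:
    return today - tool.last_commit_date <= INACTIVITY_CUTOFF

def faceted_filter(tools: list[Tool], today: date, **required: bool) -> list[Tool]:
    """Return active tools matching every required boolean attribute."""
    return [
        t for t in tools
        if is_active(t, today)
        and all(getattr(t, attr) == want for attr, want in required.items())
    ]

# Sample entries with made-up dates, purely for illustration.
catalog = [
    Tool("Appflowy", True, True, date(2026, 4, 1)),
    Tool("OldNote", True, False, date(2024, 9, 1)),  # stale: filtered out
]

matches = faceted_filter(catalog, date(2026, 4, 23),
                         works_offline=True, has_mobile_app=True)
print([t.name for t in matches])  # prints ['Appflowy']
```

The point of the sketch is that the query "works offline AND has a mobile app AND is still maintained" resolves to exact boolean and date comparisons, which is precisely what prose synthesis can't guarantee.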

Why Comparison Queries Are the Real Bet

The thesis isn't about winning discovery queries; that's acknowledged as a loss for directory sites competing against zero-click AI answers. The bet is on downstream research. The pattern morinaga expects: someone searches "Notion alternatives," gets an AI Overview naming four tools, then types "Appflowy vs Anytype performance" to compare the two they're considering. That second query has commercial intent and wants a verdict, not another list. For that query type, a page with structured attribute comparison, a clear verdict, and fast load time competes directly against another AI-style answer, and static pages with typed comparison fields beat generative prose for "which one wins on attribute X." That's why morinaga chose static site generation over dynamic AI rendering: the format needs to be indexable and fast for second-stage research clicks.
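The static-generation idea above can be sketched as a build-time step that renders typed comparison fields into plain HTML with a stated verdict. Everything here is an assumption for illustration: the SPECS dictionary, the attribute names, the placeholder numbers, and the single-attribute verdict rule are invented, not the sites' actual build code.

```python
# Placeholder spec data; the numbers are made up for illustration only.
SPECS = {
    "Appflowy": {"cold_start_ms": 900, "works_offline": True},
    "Anytype": {"cold_start_ms": 1400, "works_offline": True},
}

def render_comparison(a: str, b: str) -> str:
    """Render one static 'X vs Y' page from typed attribute fields."""
    rows = "".join(
        f"<tr><td>{attr}</td><td>{SPECS[a][attr]}</td><td>{SPECS[b][attr]}</td></tr>"
        for attr in SPECS[a]
    )
    # A plain verdict on one measurable attribute (lower startup wins),
    # computed once at build time rather than generated per request.
    winner = a if SPECS[a]["cold_start_ms"] < SPECS[b]["cold_start_ms"] else b
    return (
        f"<h1>{a} vs {b}</h1>"
        f"<table><tr><th>Attribute</th><th>{a}</th><th>{b}</th></tr>{rows}</table>"
        f"<p>Verdict on startup performance: {winner}.</p>"
    )

html = render_comparison("Appflowy", "Anytype")
```

Because the page is emitted once at build time, it is a cacheable static asset: indexable, fast to serve, and identical for every searcher, which is the property the thesis depends on for second-stage research clicks.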

Key Takeaways

  • AI Overviews win discovery queries; directories win comparison, filtered browse, and freshness queries. These are different intent types serving different stages of research
  • The $25/month cost structure means morinaga can run this experiment without pressure to interpret ambiguous signals optimistically or justify burning cash on infrastructure
  • Three outcomes would invalidate the bet: near-zero clicks on comparison pages at 90 days (signal extraction), AdSense rejections after genuine content depth improvements, or users migrating "X vs Y" queries away from Google entirely toward LLM chat interfaces

The Bottom Line

This is a bet worth watching precisely because it's falsifiable: morinaga committed to publishing the verdict regardless of outcome. The real question isn't whether AI Overviews are good; they're clearly getting better at synthesis. It's whether structured, filterable, negative-space-rich directories can win on the queries that come after discovery, when people actually need to make a decision rather than just learn what exists.