An open-source RAG engine for municipal content — starting with Needham, MA.
And I thought: What if there were just a clean, AI-native search engine for Needham?
That was it. We (aka Claude and I) started by scraping the town website. Then the school district site. Then the library. Then parks & rec and public documents — 5 content sources total, processed through a 4-tier ingestion pipeline with 38 custom boilerplate-stripping patterns. The search system combines semantic similarity, full-text search, recency weighting, and source-tier ranking to surface answers with confidence scores.
Then Claude suggested scaling the ingestion. Then I blinked. And realized we had scraped nearly the entirety of Mass.gov: 49,547 of 50,000 pages (99.1%).
Out of thin air, Claude created: structured ingestion pipelines, semantic chunking, vector embeddings, hybrid search, answer synthesis, and fully automated deployment.
Total cost to process and summarize ~70,000 pages? About eleven dollars, or roughly $0.00016 per page. That's less than a Sweetgreen bowl.
At a high level, Needham Navigator is a 5-layer system: user browser → Next.js on Vercel → AI services → data stores → external municipal sources.
The development process became an almost-automated flywheel. Claude CoWork critiques the UX, generates structured prompts, Claude Code implements features, GitHub Actions validates, and Vercel auto-deploys. Lather. Rinse. Repeat.
I asked GPT, Gemini, and Claude to estimate how long a traditional full-stack engineer + QA would need to architect, build, secure, and deploy this system. Then I normalized everything into person-days and weeks.
| Model | Team | Person-Days | Calendar | FTE Months | Key Constraint |
|---|---|---|---|---|---|
| 🟢GPT | 1 Eng + QA | 320 PD | ~48 weeks | ~11 months | Serial engineering bottleneck; long integration/regression tail |
| 🟢GPT | 2 Eng + QA | 320 PD | ~28-30 weeks | ~6-7 months | Parallelism helps, but coordination + QA gates final stretch |
| 🔵Gemini | 1 Eng + QA | 320 PD | ~43-52 weeks | ~10-12 months | Context-switching across RAG, UI, DevOps; burnout risk |
| 🔵Gemini | 2 Eng + QA | 340 PD | ~26-30 weeks | ~6-7 months | Optimal parallel streams; ~15 PD coordination overhead |
| 🟠Claude | 1 Eng + QA | 139 PD | ~14-16 weeks | ~3.5-4 months | Partial QA overlap; mostly serial |
| 🟠Claude | 2 Eng + QA | 92 PD | ~9-11 weeks | ~2-2.5 months | Strong parallelism; ~15% context overhead |
Not one model said “two weeks.”
The single-engineer estimates from GPT and Gemini hovered around a year.
I built it in roughly fourteen days.
Not because I typed faster. Because I orchestrated.
A custom web scraper built with Cheerio + Mozilla Readability processes each URL through 6 stages — from raw HTTP fetch to structured storage. 38 CivicPlus boilerplate patterns are stripped during the Clean + Normalize step.
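The Clean + Normalize stage can be sketched as a pass that strips known boilerplate patterns from the extracted text. This is a minimal illustration, not the project's actual code: the real pipeline uses 38 CivicPlus-specific patterns, and the two regexes below are hypothetical stand-ins.

```typescript
// Hypothetical sketch of the Clean + Normalize stage. The real pipeline
// strips 38 CivicPlus boilerplate patterns; these two are illustrative.
const BOILERPLATE_PATTERNS: RegExp[] = [
  /Skip to Main Content/gi,
  /©\s*\d{4}.*?All Rights Reserved\.?/gi,
];

function cleanAndNormalize(raw: string): string {
  let text = raw;
  for (const pattern of BOILERPLATE_PATTERNS) {
    text = text.replace(pattern, "");
  }
  // Collapse the whitespace runs left behind by removed boilerplate.
  return text.replace(/\s+/g, " ").trim();
}
```

Running the full pattern list before chunking keeps navigation chrome and footer text out of the embeddings, which otherwise pollutes semantic search with near-identical vectors.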
Every user query goes through a hybrid search pipeline that combines semantic vector search (pgvector cosine similarity) with PostgreSQL full-text search, then reranks results using a 4-factor formula before generating a confidence-scored answer via GPT streaming.
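The 4-factor rerank might look something like the sketch below. The field names, weights, and decay constant are assumptions for illustration; the source only states that vector similarity, full-text rank, recency, and source tier are combined into one score.

```typescript
// Hypothetical rerank sketch. Weights and the recency decay are
// assumptions, not the project's actual formula.
interface Candidate {
  vectorScore: number; // pgvector cosine similarity, 0..1
  textRank: number;    // PostgreSQL ts_rank, normalized to 0..1
  ageDays: number;     // days since the page was last updated
  sourceTier: number;  // 1 = most authoritative source, higher = less
}

function rerank(c: Candidate): number {
  const recency = Math.exp(-c.ageDays / 365); // decays over ~a year
  const tierBoost = 1 / c.sourceTier;         // tier 1 scores highest
  return (
    0.5 * c.vectorScore +
    0.25 * c.textRank +
    0.15 * recency +
    0.1 * tierBoost
  );
}
```

A blend like this lets an exact-phrase match on an official town page beat a semantically similar but stale forum result, even when the cosine similarity alone would rank them the other way.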
Every push triggers a full CI pipeline: lint, type-check, build verification, and static code analysis via GitHub Actions. Three scheduled workflows handle daily article generation, nightly health checks, and weekly security scans.
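One of the three scheduled workflows might look roughly like this in GitHub Actions. The workflow name, cron time, and script name are hypothetical; only the trigger mechanism and runner setup reflect standard Actions usage.

```yaml
# Hypothetical sketch of the nightly health-check workflow.
# Cron time and the npm script name are assumptions.
name: nightly-health-check
on:
  schedule:
    - cron: "0 2 * * *" # 02:00 UTC, every night
jobs:
  health:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm run health-check # hypothetical script
```

The daily article generation and weekly security scan would follow the same shape with different cron expressions and steps.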
