I had recipes everywhere.
Cookie & Kate links in a "Recipes" thread in my email. Apple Notes from the time I copied a tofu marinade off Instagram before it scrolled into the void. A photo of a Pike Place Market chalkboard. Three browser tabs open since the Obama administration. Two cookbooks on the shelf I never opened because I couldn't remember which one had the lentil dal. A printed PDF of an endurance fueling recipe shoved between two issues of Cartographic Perspectives.
When I wanted to actually cook something on a weeknight, the search cost was higher than the cooking cost. I'd give up and make pasta.
This is the post about how I fixed that, and how the fix turned into a small, real piece of personal infrastructure I now use multiple times a week. The cookbook itself is the demo. The interesting part is what's underneath it.
What I wanted
A single searchable page on my own site at brooksgroves.com/recipes.html that holds every recipe I cook. Filterable by ingredient, cuisine, cook time, dietary tags. Designed in the same parchment palette as the rest of the site so it doesn't feel like a third-party tool bolted on. Mine. Owned. Static. Fast.
That part was easy. Static HTML, a recipes.json file, a touch of JavaScript to filter and render. Day one work.
The hard part was the input pipeline. Hand-typing recipes into a JSON file is a quick way to ensure that no recipe gets added past the third one. The whole reason the bookmarks-everywhere problem existed is that the friction of structuring a recipe is too high. Solve the input problem or the cookbook just becomes the next graveyard.
What I built
Three things, layered.
1. A Cloudflare Worker
About 230 lines of code holding two encrypted secrets (an Anthropic API key and a fine-grained GitHub Personal Access Token) with four HTTP routes:
| Route | What it does |
|---|---|
| POST / | Relays requests to Anthropic's Messages API |
| POST /save-recipe | Appends a recipe to recipes.json via the GitHub Contents API |
| POST /delete-recipe | Removes one by slug |
| POST /update-recipe | Edits one in place |
Free tier. Encrypted secrets the browser never sees. CORS allowlisted to my domain.
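The Worker's shape can be sketched from the table above. This is a minimal reconstruction, not the actual 230 lines: the handler names, allowlist logic, and dispatcher structure are my assumptions; only the four route paths come from the post.

```javascript
// Sketch of the Worker: route table, CORS allowlist, dispatcher.
// Handler bodies are stubs; the real internals aren't published.
const ALLOWED_ORIGIN = "https://brooksgroves.com";

async function relayToAnthropic(request) { /* POST /            : forward to the Messages API */ }
async function saveRecipe(request)       { /* POST /save-recipe : append via GitHub Contents API */ }
async function deleteRecipe(request)     { /* POST /delete-recipe : remove one recipe by slug   */ }
async function updateRecipe(request)     { /* POST /update-recipe : edit one recipe in place    */ }

const routes = {
  "/": relayToAnthropic,
  "/save-recipe": saveRecipe,
  "/delete-recipe": deleteRecipe,
  "/update-recipe": updateRecipe,
};

function corsHeaders(origin) {
  // Only the allowlisted site gets CORS headers; every other origin gets none.
  return origin === ALLOWED_ORIGIN
    ? { "Access-Control-Allow-Origin": origin, "Access-Control-Allow-Methods": "POST, OPTIONS" }
    : {};
}

// The Worker's fetch handler would call something like this per request.
async function handleRequest(request) {
  const handler = routes[new URL(request.url).pathname];
  if (!handler || request.method !== "POST") {
    return new Response("Not found", { status: 404 });
  }
  return handler(request, corsHeaders(request.headers.get("Origin")));
}
```

The encrypted secrets never appear in this code; in a real Worker they arrive as environment bindings configured in the Cloudflare dashboard.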
2. A Recipe Extractor at /recipes/extract.html
Three input modes: paste a URL, paste raw text (for paywalled sites, Apple Notes, screenshot OCR), or drop a photo. Every mode sends the content to Claude with a schema prompt. A single Save to Cookbook button commits the structured JSON straight to recipes.json on GitHub. The site rebuilds. The recipe is live on my phone in about thirty seconds.
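The "schema prompt" part looks roughly like this. The prompt wording, model id, and function name here are illustrative assumptions, not the site's actual code; only the field list (matching the recipes.json shape below) and the vegetarian-substitution rule come from the post.

```javascript
// Sketch of the request body the extractor builds for Anthropic's
// Messages API, relayed through the Worker's POST / route.
const SCHEMA_PROMPT =
  "Extract this recipe as JSON with keys: slug, title, cuisine, category, " +
  "cook_time_min, active_time_min, servings, source, dietary, added, " +
  "ingredients (qty/item pairs), steps, notes. If it calls for meat stock, " +
  "substitute vegetable stock and record the swap in notes.";

function buildExtractionRequest(rawText) {
  return {
    model: "claude-sonnet-4-5",   // assumed model id
    max_tokens: 2048,
    system: SCHEMA_PROMPT,        // the dietary preference lives in the prompt
    messages: [{ role: "user", content: rawText }],
  };
}
```

The browser POSTs this body to the Worker, which attaches the API key server-side and forwards it on.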
3. The cookbook at /recipes.html
Searchable, filterable, expand-a-card to read. An Admin button gates edit and delete controls. A Share button lets me text a recipe to anyone; it deep-links straight to the right card, auto-expanded.
That's the whole system. I see a recipe on Instagram, screenshot it, paste it into the extractor's Photo tab, hit Save. Done. Searchable forever.
Why it works the way it works
The Worker as the linchpin
Every API key in this system lives encrypted in Cloudflare. Browser code never touches them. This isn't paranoia; it's the only sane shape for something I want to use from my phone, which means it has to work in any browser on any network without typing in credentials.
The Worker is also general infrastructure. It's not a "recipe Worker." It's a Worker, with routes. The same encrypted-keys + GitHub-commit pattern is already on my list for: a Save Concert button on my setlist log, a Save Book flow that takes a Goodreads URL, an auto-summarizer for the day's Giro d'Italia stage. Build the substrate once. Features get cheaper from then on.
Structure first
Every recipe in recipes.json follows the same shape:
{
  "slug": "miso-glazed-tofu",
  "title": "Miso-Glazed Tofu",
  "cuisine": "Japanese",
  "category": "Main",
  "cook_time_min": 30,
  "active_time_min": 15,
  "servings": 2,
  "source": "Pike Place",
  "dietary": ["vegetarian", "vegan"],
  "added": "2026-04-10",
  "ingredients": [
    { "qty": "1 (14 oz)", "item": "extra-firm tofu, pressed and sliced 3/4-inch thick" }
  ],
  "steps": ["..."],
  "notes": "Watch closely under the broiler; sugar burns fast."
}
Boring. Predictable. The whole point. Filtering, scaling, sorting, exporting: anything I want to do later works on this shape without re-typing recipes. The schema also encodes my dietary preference directly into the Claude system prompt. If a recipe contains chicken stock, the extractor automatically swaps in vegetable stock and notes the change. I can extract from omnivore food blogs without thinking about it.
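For instance, the filter controls on the cookbook page reduce to a few lines over this shape. The function and option names here are mine, not the site's actual code; the field names come from the schema above.

```javascript
// Client-side filtering over the recipes.json shape: every option is
// optional, and an omitted option matches everything.
function filterRecipes(recipes, { dietary, cuisine, maxCookMin } = {}) {
  return recipes.filter((r) =>
    (!dietary || r.dietary.includes(dietary)) &&
    (!cuisine || r.cuisine === cuisine) &&
    (!maxCookMin || r.cook_time_min <= maxCookMin)
  );
}

const sample = [
  { slug: "miso-glazed-tofu", cuisine: "Japanese", cook_time_min: 30, dietary: ["vegetarian", "vegan"] },
  { slug: "lentil-dal", cuisine: "Indian", cook_time_min: 45, dietary: ["vegetarian"] },
];

filterRecipes(sample, { dietary: "vegan" }); // matches only the tofu
```

Because the shape never varies, the same few lines serve search, the cuisine dropdown, and any future tag without special cases.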
Static site, live edits
The whole site is GitHub Pages: a glorified file server, no database, no runtime. By routing writes through a Worker that hits the GitHub Contents API, I get the appearance of a database with none of the maintenance. Every save is a real commit. Every edit is a real commit. If I delete a recipe by accident, git log recipes.json shows me every prior version, recoverable with a single command.
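A save commit against the Contents API boils down to a PUT with the new content as base64 plus the existing file's blob SHA. The function and commit-message shapes below are my assumptions; the API fields (message, content, sha) are the Contents API's own. This sketch uses Node's Buffer for the UTF-8-safe base64 step; inside a Worker that encode has to be done by hand, which is exactly where the mojibake bug described later came from.

```javascript
// Build the body for PUT /repos/<owner>/<repo>/contents/recipes.json.
function buildContentsPut(recipes, newRecipe, currentSha) {
  const updated = JSON.stringify([...recipes, newRecipe], null, 2);
  return {
    message: `Add recipe: ${newRecipe.slug}`,                   // the commit message
    content: Buffer.from(updated, "utf8").toString("base64"),   // UTF-8-safe base64
    sha: currentSha,  // blob SHA of the current recipes.json; required for updates
  };
}
// The Worker sends this body with the encrypted PAT in the
// Authorization header; GitHub records it as a normal commit.
```

Omitting the sha on an existing file makes GitHub reject the write, which doubles as a cheap guard against clobbering a concurrent edit.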
Versioned data with no database is one of the most underrated tricks in small-scale software.
What broke
Plenty. In roughly increasing order of how long it took to diagnose.
Mojibake spiral
The first save worked fine. The fifth corrupted recipes.json. The Worker's base64 decode was using atob() directly, which returns a binary string where each character represents one byte. For ASCII that looks right. For an em-dash or a degree symbol, which are multi-byte in UTF-8, every read-and-rewrite cycle re-interpreted the bytes and added a layer of garbage. By the time I noticed, one recipe's notes field had ballooned from "Watch closely; sugar burns fast" to 393,301 characters of nested Ã and Â sequences. The whole file was 1.46 MB.
The fix was a proper UTF-8 decoder using TextDecoder. The lesson is as old as Unicode: if you find yourself doing String.fromCharCode(...new TextEncoder().encode(s)), ask whether you actually want bytes or characters, because the error compounds with every read-and-rewrite cycle.
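The fix in miniature (a reconstruction of the pattern, not the Worker's exact code): atob() yields one character per byte, so hand those bytes to TextDecoder instead of treating the binary string as text.

```javascript
// Correct base64 -> UTF-8 string decode.
function decodeBase64Utf8(b64) {
  const binary = atob(b64);                                    // one char per byte
  const bytes = Uint8Array.from(binary, (c) => c.charCodeAt(0));
  return new TextDecoder("utf-8").decode(bytes);               // real UTF-8 decode
}
```

With this in place, a round-trip through base64 returns the original string no matter how many multi-byte characters it contains, so repeated read-and-rewrite cycles stay stable.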
Stack overflow on save
A spread operator into String.fromCharCode works fine when the array is small. Above ~30 KB of input, the engine refuses. Fixed by chunking the encode loop into 32 KB slices.
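The encode direction, chunked. This is the standard workaround rather than the Worker's literal code: every byte spread into String.fromCharCode becomes one function argument, so slicing the byte array keeps each call under the engine's argument limit.

```javascript
// UTF-8 string -> base64, without overflowing the argument stack.
function encodeUtf8Base64(s) {
  const bytes = new TextEncoder().encode(s);   // real UTF-8 bytes
  const CHUNK = 0x8000;                        // 32 KB of bytes per fromCharCode call
  let binary = "";
  for (let i = 0; i < bytes.length; i += CHUNK) {
    binary += String.fromCharCode(...bytes.subarray(i, i + CHUNK));
  }
  return btoa(binary);
}
```

The 32 KB figure is comfortably under the tens-of-thousands-of-arguments ceiling most engines enforce, and small enough that the string concatenation stays cheap.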
Git treating HTML as binary
A stray *.html binary line in .gitattributes from years ago meant every commit showed Binary files differ instead of a real diff. Merge conflicts were unrecoverable. Fixed by writing a proper .gitattributes declaring HTML/JSON/CSS/JS as text with LF line endings.
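The replacement file is tiny. A version matching the fix described above (exact whitespace is mine):

```
# Diff and merge web assets as text, normalized to LF line endings
*.html text eol=lf
*.json text eol=lf
*.css  text eol=lf
*.js   text eol=lf
```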
Cache poisoning
GitHub Pages serves static files with caching hints that, in practice, meant my browser would show yesterday's JavaScript long after I'd shipped today's. Fixed with Cache-Control: no-cache, no-store, must-revalidate meta tags on the tool pages. Pages I actively iterate on should never cache.
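The meta tag in question, on the tool pages' head (a standard incantation; where you control response headers directly, the HTTP header form of Cache-Control is the more reliable lever, but GitHub Pages doesn't let you set those):

```html
<meta http-equiv="Cache-Control" content="no-cache, no-store, must-revalidate">
```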
Each of these bugs took 10 to 90 minutes to diagnose. Each was a four-line fix. Together they turned a proof-of-concept into something I actually trust to hold years of recipes.
What it's like to use
Here's an actual recipe-add, start to finish:
- I'm in the kitchen and want to save the Cookie & Kate enchilada recipe I've been meaning to try.
- I tap the Recipe Extractor bookmark on my phone.
- I paste the URL, hit Extract. About 8 seconds later, structured JSON appears.
- I tap Save to Cookbook. A real GitHub commit fires. A green confirmation shows the commit SHA.
- About 30 seconds later, GitHub Pages has rebuilt. The recipe is live, searchable, filterable by cuisine.
Total elapsed time: under a minute. Total typing: zero.
When the extractor mis-categorizes something, say calling a Mediterranean dish "American" because Cookie & Kate's metadata is fuzzy, I open the card, hit the pencil icon, edit the JSON in-place, save. Real commit. Site rebuilds. Done.
What this is actually about
The recipe cookbook is not really the project. The project is a small, growing-over-time AI infrastructure for my personal site. The cookbook is the first feature. The Worker is the load-bearing wall.
A few things this project clarified:
Static sites with one tiny serverless backend punch far above their weight. No database, no Docker, no deploy pipeline, no monitoring stack. Total operating cost: zero dollars. Total places this can break in the middle of the night: approximately zero. The Worker has been running for two weeks and has executed thousands of requests. I have not thought about its health a single time.
LLMs are the right substrate for messy-input-to-clean-output problems. Every recipe page on the internet has a different shape. Some have JSON-LD schema, some don't. Some hide behind ad walls. Some are an infinite-scroll life story before the recipe begins. None of that matters when you can ask Claude to structure content into a fixed schema, substitute meat with vegetarian equivalents, and infer reasonable values for missing fields. The schema does the rest.
Friction kills personal projects. The cookbook would already be abandoned if adding recipes required hand-typing JSON. Every minute shaved off the add-a-recipe loop multiplied the probability I'd actually use the thing. The fix had to be a friction fix.
Versioned data with no database is one of the great free lunches in small-scale software. Every save is a commit. Every commit is recoverable. Audit logs come for free. If I want to know when I added a recipe, what it looked like before I edited it, or who made a change β git log answers all of that without building anything.
What's next
The same Worker pattern is already on the list for other parts of the site: a Save Concert button on the setlist log, a Save Book flow, maybe a geocache save route. Build the substrate once, features get cheaper.
On the cookbook itself: a better cooking-from-the-recipe view (scale to N servings, imperial/metric toggle, checkbox ingredient tracking), and freeform tags beyond cuisine and category: "potluck," "weeknight," "from a friend."
But mostly the answer to "what's next" is use it. Two weeks of using it told me more about what to build than any amount of planning. The cookbook has 8 recipes now. By the end of the year it'll have 80, and the system will have answered for itself what the next feature should be.
The friction is gone. That was the whole project.
The cookbook is at brooksgroves.com/recipes.html. The whole site is open source on GitHub. The Worker is not; it's holding my keys.