TL;DR: Preflight Checks is a free, open-source set of production readiness playbooks. Copy a structured prompt into your AI coding assistant, and it runs a comprehensive audit of your project — quality, security, compliance, accessibility, performance, SEO, and ops — with severity tiers and dated reports. Stack-agnostic, tool-agnostic, MIT licensed. Stop freestyling your launch checklist.

Every project I've shipped has had the same last mile. The code works, the features are done, and then I open a conversation with my AI coding assistant and type something like "hey, check the security headers on this" or "audit the accessibility real quick." And every time, the assistant does a decent job. The problem is that "decent" looks different every time. Sometimes it checks three things. Sometimes it checks thirty. It's like asking a friend to "look over your resume" — you might get a typo check or a full career intervention, and you won't know which until it's done.

I realized I was running the same ad-hoc audits on every project, just phrased slightly differently each time. The coverage depended entirely on how specific my prompt was and how much I remembered to ask about. That's not a system. That's vibes-based quality assurance.

So I made it a system.

What Preflight Checks Is

Preflight Checks is an open-source collection of production readiness playbooks. Not checklists you read and check off yourself — structured prompts you copy into your AI coding assistant. Each one runs a comprehensive audit of your project against explicit categories, with severity classifications (CRITICAL, HIGH, MEDIUM, LOW) and dated output files saved to docs/plans/ so you have a record of what was checked and what was found.

The idea is simple: if you're going to ask an AI to audit your project anyway, give it a proper checklist instead of freestyling every time.

What's in the Box

Five documents, each covering a different phase of the "oh god, are we actually ready to ship this?" process:

  • Production Readiness Playbook — Seven systematic audits covering quality, security, compliance and legal, accessibility, performance, SEO and discoverability, and operational readiness. This is the big one.
  • Pre-Launch Checklist — A binary go/no-go gate with MUST, SHOULD, and NICE priority tiers. Either the item passes or it doesn't.
  • Post-Launch Runbook — Hour-by-hour monitoring guidance for the first 72 hours after you ship. Because launching is the fun part; not breaking things afterward is the hard part.
  • Rollback Decision Tree — For when something breaks at 11pm and you need to decide between "roll it back," "hotfix it," or "pretend this is fine" in under five minutes.
  • Start Here — One prompt that orchestrates the entire process end-to-end. Copy it in, point it at your project, and let it run through everything.

Design Decisions That Matter

A few choices that shaped how these playbooks work — and why they don't just gather dust like every other checklist you've bookmarked.

Stack-agnostic. The playbooks don't assume you're using React, or Python, or any specific framework. The prompts detect your tech stack and adapt the audit accordingly. I use these on static HTML sites, Flutter apps, and React projects — same playbooks, different findings each time.
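The "detect and adapt" step is conceptually simple: look for the marker files a stack leaves at the project root. A minimal sketch of that idea — the file-to-stack mapping here is my own illustration, not anything shipped in the repo:

```python
from pathlib import Path

# Marker files → stacks. Illustrative mapping, not part of the playbooks.
MARKERS = {
    "package.json": "Node/JavaScript",
    "pubspec.yaml": "Flutter/Dart",
    "requirements.txt": "Python",
    "pyproject.toml": "Python",
    "Cargo.toml": "Rust",
    "go.mod": "Go",
}

def detect_stacks(project_root: str) -> list[str]:
    """Return the stacks whose marker files exist at the project root."""
    root = Path(project_root)
    return sorted({stack for marker, stack in MARKERS.items()
                   if (root / marker).exists()})
```

An AI assistant does this more fluidly (it also reads the files, not just their names), but the principle is the same: the audit branches on what it finds, not on what you remembered to mention.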

Tool-agnostic. These work with any AI coding assistant. Claude, Copilot, Cursor, whatever you're using. They're structured prompts, not tool plugins. You can also use them manually if you prefer reading a checklist the old-fashioned way — the categories and severity tiers are useful on their own.

Code smells belong to humans and LLMs. Linters and static analysis tools are great at catching syntax violations and known anti-patterns, but code smells — the stuff that makes you say "this works, but something feels wrong" — require understanding intent. Is this function doing too much? Does this abstraction actually fit the domain? Is this naming misleading? Those are judgment calls, and judgment is exactly what humans and LLMs are good at. Deterministic tools can't evaluate whether your code communicates its purpose clearly. The quality audit leans into that.

Severity tiers. Not everything is equally urgent, and treating every finding as a blocker is a fast path to launch paralysis. CRITICAL means "fix this before you ship or you'll regret it." LOW means "nice to have, file it for later." The tiers help you make rational decisions about what to fix now versus what can wait.
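If you want to work a findings list in priority order, the tiers map directly onto a sort key. A tiny sketch — the finding structure below is hypothetical (the playbooks emit markdown reports, not JSON):

```python
# Lower number = more urgent. Matches the playbooks' four tiers.
SEVERITY_ORDER = {"CRITICAL": 0, "HIGH": 1, "MEDIUM": 2, "LOW": 3}

# Hypothetical findings, shaped for illustration only.
findings = [
    {"issue": "Missing alt text on hero image", "severity": "MEDIUM"},
    {"issue": "No Content-Security-Policy header", "severity": "CRITICAL"},
    {"issue": "Favicon missing dark-mode variant", "severity": "LOW"},
    {"issue": "Forms lack CSRF protection", "severity": "HIGH"},
]

# Fix order: blockers first, nice-to-haves last.
for f in sorted(findings, key=lambda f: SEVERITY_ORDER[f["severity"]]):
    print(f'{f["severity"]:>8}  {f["issue"]}')
```

The ordering is the whole trick: fix from the top of that list down, and stop wherever your launch deadline says to stop.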

Skip what doesn't apply. Running an SEO audit on a CLI tool is a waste of everyone's time. The playbooks are designed to be modular — skip the sections that don't apply to your project. No guilt, no completionism theater.

Dated output files. Every audit saves its results to docs/plans/YYYY-MM-DD-audit-name.md. This means you have a paper trail. You can see what was flagged last month, what got fixed, and what's still outstanding. Over time, these files become a history of your project's production readiness — which is surprisingly useful when you come back to a project after a few months away and can't remember what state you left it in.
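The naming convention is easy to reproduce if you want the same paper trail from your own scripts. Something like this (a sketch assuming the `docs/plans/YYYY-MM-DD-audit-name.md` pattern described above):

```python
from datetime import date
from pathlib import Path

def audit_report_path(audit_name: str, base: str = "docs/plans") -> Path:
    """Dated report path following the playbooks' naming convention,
    e.g. docs/plans/2026-02-03-security-audit.md."""
    return Path(base) / f"{date.today():%Y-%m-%d}-{audit_name}.md"

print(audit_report_path("security-audit"))
```

Because the date leads the filename, an alphabetical directory listing doubles as a chronological history of every audit you've run.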

Seven audits, four severity tiers: the Production Readiness Playbook covers quality, security, compliance, accessibility, performance, SEO, and operational readiness, with every finding classified as CRITICAL, HIGH, MEDIUM, or LOW so you know what to fix first.

Tested on a Real Project

I didn't publish these and hope they worked. I ran the full flow against this very site — penguinboisoftware.com — before making the repo public. The production readiness audit found real issues: security headers that needed tightening, accessibility gaps I hadn't noticed, legal compliance items I'd been conveniently ignoring. The pre-launch checklist caught things the playbook missed. The post-launch runbook gave me a concrete monitoring plan instead of my usual strategy of refreshing the analytics page every ten minutes and interpreting silence as success.

The whole process took about an hour. That's an hour to systematically verify that a project is ready to ship, with a written record of every finding and its severity. Compare that to the alternative: spending twenty minutes asking scattered questions, getting scattered answers, and launching with the vague feeling that you probably forgot something important.

Why Open Source This

Because I needed it, which usually means other people do too. Production readiness is one of those things that everybody nods along about in blog posts (hi) but nobody has a great process for — especially solo developers and small teams who don't have a dedicated SRE to yell at them about monitoring. These playbooks are the process I wish I'd had from the start.

Here's the thing about being a solo dev in 2026: you're not actually solo anymore. With an AI coding assistant, you've got something closer to a one-person robot army — one that can run seven comprehensive audits across your entire project in about an hour, check things you'd never remember to check yourself, and write up the findings in a format you can actually use later. I couldn't do this much production readiness work by hand. Not at this scale, not at this speed, and definitely not at 1am before a launch. The playbooks exist because humans and LLMs working together can cover ground that neither could alone — you bring the judgment and domain knowledge, the AI brings the tireless thoroughness. That's a better path to production than either one flying solo — and at Penguinboi Software, we know a thing or two about not flying solo.

They're also living documents. As I ship more projects and find more gaps, the playbooks get better. If you use them and find something missing, that's what pull requests are for.

Go Use It

The repo is at github.com/penguinboi/preflight-checks. MIT licensed. Clone it, copy the prompts into your AI coding assistant, and run them against whatever you're about to ship. Or read them as checklists. Or adapt them to your workflow. The point is to stop freestyling your production readiness checks and start running them consistently.

Your future self — the one debugging a production issue at 2am — will thank you for the paper trail.