GenAI can feel like a brilliant teammate—until it misreads the assignment. Just like software, it’s garbage in, garbage out. Treat it like a black-box API, and your prompts become the key to better results.
Over time, I’ve landed on a simple, repeatable protocol—rooted in Test‑Driven Development (TDD)—that treats prompting like a coding task. It gives you control and consistently better results.
Prompt Like an Architect
Before you call an unfamiliar API, you’d skim its docs or a Swagger/OpenAPI endpoint. I do the same with GenAI—I start by asking it how I should ask it.
I’ll paste in the requirements from a work item and say something like:
“Here are the details for the work item: [paste details]. I want to generate TypeScript unit tests that satisfy these requirements. What’s the best way to prompt you for that?”
This “meta‑prompt” gives you a task‑specific “API spec” from the model itself. It tells you:
- What structure it prefers
- Which keywords or context are most useful
- How to frame the request for best results
Armed with those insights, you can build a sharper prompt on the next turn.
Pro tip: Save good meta‑prompts as snippets. They’re reusable across tasks and teams.
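If you keep snippets in code, a meta‑prompt can be as simple as a template function. Here's a purely illustrative sketch; the template text mirrors the prompt above, and the parameter names are hypothetical:

```typescript
// Hypothetical snippet: a reusable meta-prompt template.
// `workItem` and `goal` are placeholder parameters, not part of any real API.
const metaPrompt = (workItem: string, goal: string): string =>
  `Here are the details for the work item: ${workItem}. ` +
  `I want to ${goal}. ` +
  `What's the best way to prompt you for that?`;

// Example usage:
// metaPrompt(ticketText, "generate TypeScript unit tests that satisfy these requirements")
```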
Drive GenAI with TDD
TDD pairs surprisingly well with GenAI because it turns fuzzy requirements into concrete, verifiable targets. Instead of asking for a feature and its tests at the same time, run the classic red‑green‑refactor loop with the model.
Here’s the flow:
- Test first (Red): Ask the AI to write a failing unit test that defines the behavior. For example: “Write a Jest test for a function calculateDiscount that returns a 10% discount for orders over $100.” This forces clarity before any implementation exists.
- Code second (Green): Feed the test back and ask for the minimal implementation: “Here is a Jest test. Write the calculateDiscount function to make this test pass.”
This gives the model a clear, verifiable goal. Keep it in the same conversation so it retains context of the tests and prior outputs.
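To make that concrete, here's roughly what those two turns might produce. The function name comes from the prompts above; the file names, boundary case, and exact assertions are illustrative:

```typescript
// calculateDiscount.test.ts (Red): a test that fails before any code exists
import { calculateDiscount } from "./calculateDiscount";

describe("calculateDiscount", () => {
  it("returns a 10% discount for orders over $100", () => {
    expect(calculateDiscount(200)).toBe(20);
  });

  it("returns no discount for orders of $100 or less", () => {
    expect(calculateDiscount(100)).toBe(0);
  });
});

// calculateDiscount.ts (Green): the minimal implementation that passes
export function calculateDiscount(orderTotal: number): number {
  return orderTotal > 100 ? orderTotal * 0.1 : 0;
}
```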
Then iterate—because requirements always evolve:
- Add a new rule (Red again): “Add a new Jest test so VIP customers receive a 20% discount regardless of order total.” Ask for the full, updated test file.
- Update the implementation (Green again): Provide the updated tests and request the smallest change to pass everything. “Preserve existing behavior for non‑VIP customers.” Both turns are sketched below.
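Under the same assumptions, those two turns might land like this; passing VIP status as an options parameter is one plausible shape the model could choose:

```typescript
// calculateDiscount.test.ts (Red again): the full, updated test file with the VIP rule
import { calculateDiscount } from "./calculateDiscount";

describe("calculateDiscount", () => {
  it("returns a 10% discount for orders over $100", () => {
    expect(calculateDiscount(200)).toBe(20);
  });

  it("returns no discount for orders of $100 or less", () => {
    expect(calculateDiscount(100)).toBe(0);
  });

  it("gives VIP customers a 20% discount regardless of order total", () => {
    expect(calculateDiscount(50, { isVip: true })).toBe(10);
  });
});

// calculateDiscount.ts (Green again): the smallest change that passes everything
export function calculateDiscount(
  orderTotal: number,
  options: { isVip?: boolean } = {}
): number {
  if (options.isVip) {
    return orderTotal * 0.2;
  }
  return orderTotal > 100 ? orderTotal * 0.1 : 0; // non-VIP behavior preserved
}
```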
Rinse and repeat for edge cases (negative totals, rounding, discount caps). Tests drive clarity. Code satisfies the spec. The suite prevents regressions.
Polish Like a Pro
No one ships code without a final pass. Same here. Once the tests pass, ask GenAI to review and refine its own work. Don’t settle for “looks good”—be specific about what you want improved:
- Readability and structure
  - “Refactor for readability; extract well‑named helpers and reduce nesting without changing behavior.”
  - “Add concise JSDoc/docstrings and only the comments that add value.”
- Modern language features
  - “Rewrite callbacks/promises using async/await; handle errors explicitly.”
  - “Favor immutability where practical (const, spread) without harming performance.”
- Design and SOLID
  - “Check this against SOLID; point out violations and propose minimal refactors.”
  - “Identify responsibilities to decouple; suggest interface boundaries or DI if helpful.”
- Error handling and edge cases
  - “List likely edge cases; update code to handle them safely.”
  - “Ensure errors are actionable and logged with enough context.”
- Performance
  - “Identify hot paths; propose optimizations only if they simplify or speed things up meaningfully.”
  - “Assess algorithmic complexity; can we reduce from O(n^2) to O(n log n) or O(n)?”
- Tests and coverage
  - “Suggest additional unit tests for branches and edge cases; provide concrete cases.”
  - “Include property‑based tests where appropriate to uncover input extremes.”
- Observability and security
  - “Add minimal telemetry (metrics/logs) useful in prod; avoid PII.”
  - “Quick security review for injection, path traversal, deserialization, secrets handling.”
These prompts nudge the model into a real code review and refactor pass—not just a superficial check. The result is usually clearer code, stronger tests, and fewer production surprises.
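As one example of what that pass can surface, the property‑based suggestion might come back as something like this sketch. It assumes the fast-check library alongside Jest, and the invariant chosen (a discount never exceeds the order total) is illustrative:

```typescript
import fc from "fast-check";
import { calculateDiscount } from "./calculateDiscount";

// Property: for any non-negative total and any VIP status,
// the discount stays between zero and the order total.
test("discount is always between 0 and the order total", () => {
  fc.assert(
    fc.property(
      fc.integer({ min: 0, max: 1_000_000 }),
      fc.boolean(),
      (total, isVip) => {
        const discount = calculateDiscount(total, { isVip });
        expect(discount).toBeGreaterThanOrEqual(0);
        expect(discount).toBeLessThanOrEqual(total);
      }
    )
  );
});
```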
Fix the Feedback Loop
This loop isn’t magic; sometimes it drifts. Use the same TDD mindset to debug the process:
- Bad test? Don’t accept it. Correct it with a follow‑up: “This test ignores negative values. Regenerate the test and include a negative input case.” A regenerated case is sketched after this list.
- Code fails the test? Great—now you have signal. Paste the error and the failing test: “It fails with: [error]. Please fix the code.”
- Stuck conversation? Reset concisely: “Let’s start over on this function. The most important requirement is [state it clearly].”
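For instance, pushing back on the test that ignored negative values might yield an added case like this; the expected behavior (no discount owed) is an assumption the test makes explicit:

```typescript
it("returns no discount for negative order totals", () => {
  // Assumed behavior: a negative total yields no discount rather than a refund.
  expect(calculateDiscount(-50)).toBe(0);
});
```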
Try this TDD‑style prompting on your next coding task. Start with a test, guide GenAI through the loop, and ship with confidence.

