AI generates a component, it doesn't work, you paste the error back, AI gives you a "fix," that fix creates a new error, and twenty minutes later you have a Frankenstein component that's worse than when you started. This lesson is about breaking that cycle.
The effective debugging prompt
The quality of AI's debugging help is proportional to the information you provide. Compare:
Bad prompt:
My component doesn't work. Can you fix it?
Good prompt:
My ProductList component crashes when the page loads.
Error: TypeError: Cannot read properties of undefined (reading 'map')
at ProductList (ProductList.jsx:12:24)
Here's the component:
[paste the relevant code]
I expected it to render a list of products from the API.
The API returns { data: { products: [...] } }.
The good prompt gives AI three essential things:
| Element | Why it matters |
|---|---|
| The exact error message + stack trace | AI can identify the error pattern without guessing |
| The relevant code | AI sees the actual problem, not an imagined one |
| What you expected vs what happened | AI understands the intent, not just the symptom |
When AI debugging works well
Pattern-matching common errors
For common patterns such as missing null checks, incorrect imports, or wrong hook usage, AI can often provide the exact fix immediately:
// You show AI this error:
// Error: Rendered more hooks than during the previous render
// AI correctly identifies the problem:
// You have a conditional before a hook call
function MyComponent({ showExtra }) {
if (!showExtra) return null; // ← this return is before the hook
const [count, setCount] = useState(0); // ← hook after conditional return
}
// AI provides the correct fix:
function MyComponent({ showExtra }) {
const [count, setCount] = useState(0); // ← hooks first, always
if (!showExtra) return null; // ← conditional return after hooks
}
AI is also good at spotting typos (missing commas, misspelled variables) and explaining unfamiliar errors: it can translate cryptic framework messages into plain language.
When AI debugging fails
The fix spiral
- You report error A
- AI changes the code to fix error A
- The change introduces error B
- You report error B
- AI changes the code to fix error B
- The change reintroduces error A (or creates error C)
- Repeat until the code is a mess
The fix spiral happens because AI is treating symptoms, not the root cause. If you've gone back and forth more than twice, stop. Do not paste the error back in. Instead, try the techniques below.
Logic bugs and state/timing bugs
AI is weak at debugging logic errors (wrong output, no error message) and state/timing bugs (stale closures, race conditions). These depend on execution order and runtime behavior: things not visible in the code alone.
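To see why such bugs are invisible in the source alone, here is a minimal stale-closure sketch in plain JavaScript (no framework required; all names are illustrative):

```javascript
// Each arrow function closes over the same function-scoped `var i`,
// so by the time the callbacks run, `i` has already reached 3.
// Reading the source top-to-bottom suggests [0, 1, 2]; only running
// the code reveals the actual values.
function makeCounters() {
  const counters = [];
  for (var i = 0; i < 3; i++) {
    counters.push(() => i); // all three closures share one `i`
  }
  return counters.map((read) => read());
}

console.log(makeCounters()); // [3, 3, 3], not [0, 1, 2]
```

Switching `var` to `let` gives each iteration its own binding and restores `[0, 1, 2]`; the point is that the bug lives in execution order, not in any single line AI can pattern-match.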
Techniques that actually work
Ask AI to explain, not fix
Instead of "fix this bug," try: "Explain what this code does step by step, including what each variable contains at each point."
Walk me through this function line by line.
For each line, tell me:
1. What it does
2. What the value of each variable is at that point
3. Any assumptions it makes
[paste the function]
This works because the explanation often reveals the mismatch between what you intended and what the code actually does. You spot the bug yourself while reading AI's explanation.
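As a sketch of the kind of bug this tracing exposes, the hypothetical function below runs without any error, but a line-by-line trace of `total` shows it divides on every iteration instead of once at the end (names are illustrative):

```javascript
// Intended: return the mean of the array.
// Actual: divides inside the loop, so earlier values keep being re-divided.
function averageBuggy(numbers) {
  let total = 0;
  for (const n of numbers) {
    total = (total + n) / numbers.length; // trace for [2,4,6]: 0.67, 1.56, 2.52
  }
  return total;
}

// The fix the trace points to: sum first, divide once.
function average(numbers) {
  let total = 0;
  for (const n of numbers) {
    total += n;
  }
  return total / numbers.length;
}

console.log(average([2, 4, 6])); // 4
```

There is no error message to paste here, which is exactly why "fix this" fails and "trace each variable" works.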
Provide the working context
AI often generates code that would work in isolation but fails in your specific project. Give AI the surrounding context:
This component receives props from this parent:
[paste parent component's relevant section]
The API response looks like this:
[paste an actual response from the Network tab]
The component is supposed to:
[describe the expected behavior]
Ask for multiple solutions
When a fix doesn't work, don't ask for another fix. Ask AI: "Give me three different approaches to solve this problem, and explain the tradeoffs of each." This forces AI to think more broadly instead of making another incremental patch.
Reduce to the minimum failing case
Strip the code down to the smallest version that still shows the bug. This helps AI focus, and the process often reveals the problem to you first.
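As a sketch, the opening ProductList crash reduces to a few lines of plain JavaScript (the response shape comes from the prompt example earlier in this lesson; the function names are hypothetical):

```javascript
// The API wraps the array one level deeper than the code expects.
const apiResponse = { data: { products: [{ name: 'Mug' }, { name: 'Pen' }] } };

function renderNames(response) {
  // Bug: response.products is undefined, so .map throws the TypeError
  return response.products.map((p) => p.name);
}

function renderNamesFixed(response) {
  return response.data.products.map((p) => p.name); // read the real path
}

try {
  renderNames(apiResponse);
} catch (err) {
  console.log(err.name); // TypeError
}
console.log(renderNamesFixed(apiResponse)); // ['Mug', 'Pen']
```

Once the bug fits on one screen with no component lifecycle around it, the wrong property path is obvious, to you or to AI.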
The rubber duck effect
"Rubber duck debugging" means explaining your code line by line out loud, the act of explaining reveals the bug. AI makes a perfect rubber duck. Many developers find the bug while writing the prompt, before AI even responds.
When to stop asking AI
Clear signals that AI won't solve your bug:
- You've gone back and forth more than 3 times on the same issue
- AI's fixes keep getting longer and more complicated
- AI is suggesting you install new libraries or rewrite major sections
- The error message changes with every "fix" but the code never works
- AI starts contradicting its previous suggestions
When you hit these signals, close the chat. Read the documentation, add console.log statements (next lesson), or ask a human.
| Situation | Best approach |
|---|---|
| Common error pattern (TypeError, missing import) | Ask AI to fix it, it's good at these |
| Unfamiliar error message | Ask AI to explain the error, then fix it yourself |
| Fix spiral (3+ back-and-forth) | Stop. Add console.logs or read docs instead |
| Logic bug (wrong output, no error) | Ask AI to trace the code step by step |
| Works in isolation, fails in your app | Provide surrounding context (parent, API shape, state) |
| You're completely stuck | Explain the whole problem from scratch in a fresh chat |
Starting a fresh chat is underrated. AI conversations accumulate confusion; a clean slate with a well-structured problem description often gets a better answer in one shot.