The most common mistake in skill building is starting with instructions before knowing what problem you're solving. Without a clear plan, you end up with a skill that doesn't reliably handle anything.
Start with concrete use cases
Before writing your SKILL.md, identify 2-3 concrete use cases: specific scenarios with defined triggers, steps, and outcomes.
Good use case definition:
Use Case: Sprint Planning
Trigger: User says "help me plan this sprint" or "create sprint tasks"
Steps:
1. Fetch current project status from Linear (via MCP)
2. Analyze team velocity and capacity
3. Suggest task prioritization
4. Create tasks with proper labels and estimates
Result: Fully planned sprint with tasks created in Linear

Bad use case definition:
Use Case: Help with projects
(That's it. This is too vague to build anything from.)

The good definition gives you everything you need to write instructions. The bad one gives you nothing.
For each use case, work through these questions:
| Question | Why it matters | Example answer |
|---|---|---|
| What does the user want to accomplish? | Defines the end goal | "A fully planned sprint with tasks in Linear" |
| What triggers this workflow? | Controls when the skill loads | "User says 'plan sprint' or 'create sprint tasks'" |
| What multi-step workflow is required? | Becomes your instruction steps | "Fetch backlog → analyze capacity → prioritize → create tasks" |
| Which tools are needed? | Determines MCP dependencies | "Linear MCP server for task creation" |
| What domain knowledge should be embedded? | Things you'd otherwise explain each time | "Team uses Fibonacci story points, 2-week sprints" |
| What does success look like? | Your testing criteria | "Sprint created with all tasks properly labeled" |
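The answers in the table map directly onto a first draft of the skill file. As a rough sketch (the frontmatter follows the common `name`/`description` convention; all sprint details are illustrative, not prescriptive):

```markdown
---
name: sprint-planning
description: Plan a sprint in Linear. Use when the user says "plan sprint",
  "help me plan this sprint", or "create sprint tasks".
---

# Sprint Planning

## Workflow
1. Fetch the current backlog and project status from Linear (via MCP).
2. Analyze team velocity and capacity for the upcoming sprint.
3. Suggest a prioritized task list and confirm it with the user.
4. Create the tasks in Linear with labels and estimates.

## Domain knowledge
- The team uses Fibonacci story points and 2-week sprints.

## Success check
- Every created task has a label and an estimate.
```

Each section comes straight from a row of the table: the description encodes the trigger, the workflow encodes the steps, and the domain-knowledge section captures what you'd otherwise explain every time.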
Three skill categories
Three categories of skills have emerged as the most common. Understanding which category yours falls into helps you make design decisions early.
Category 1: Document & Asset Creation
Purpose: Create consistent, high-quality output: documents, code, designs, presentations.
Real example: A frontend-design skill that generates production-grade UI components matching your design system with correct color tokens, spacing, and accessibility attributes.
Key techniques:
- Embed your style guide and brand standards directly in the instructions
- Include template structures for consistent output format
- Add a quality checklist Claude runs before finalizing
- Provide before/after examples of acceptable output
No external tools required: this category works with Claude's built-in capabilities alone.
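One way to embed a style guide and quality checklist directly in the instructions (the tokens and rules below are placeholders, not a real design system):

```markdown
## Style rules
- Colors: use design tokens (`--color-primary`, `--color-surface`), never raw hex.
- Spacing: multiples of the 4px base unit only.

## Quality checklist (run before finalizing)
- [ ] Every interactive element has an accessible name
- [ ] Color contrast meets WCAG AA
- [ ] Output matches the template structure above
```

Because the checklist lives in the skill, Claude applies it on every run instead of only when the user remembers to ask.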
Category 2: Workflow Automation
Purpose: Multi-step processes that benefit from consistent methodology.
Real example: A skill-creator skill that walks you through building a new skill, asking for use cases, generating frontmatter, suggesting test cases, and iterating until solid.
Key techniques:
- Define step-by-step workflow with validation gates between steps
- Include templates for common structures
- Build in review and improvement loops
- Clear decision points: "If X, do Y. If Z, do W."
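The techniques above can be sketched as instructions with validation gates and explicit decision points (the step names are illustrative, loosely modeled on the skill-creator example):

```markdown
## Workflow
1. Ask the user for 2-3 concrete use cases.
   - Gate: do not continue until each use case has a trigger, steps, and a result.
2. Draft the frontmatter and show it to the user.
   - If the user approves, continue. If not, revise and show it again.
3. Generate test queries and run them.
   - If the skill fails to trigger, rewrite the description and retest.
```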
Category 3: MCP Enhancement
(MCP, the Model Context Protocol, is a standard that lets AI tools connect to external services like databases, issue trackers, or APIs.)
Purpose: Add workflow guidance on top of MCP tool access.
Real example: A sentry-code-review skill that analyzes GitHub PRs using Sentry error data from an MCP server, coordinating calls to both APIs and producing a review no single tool could generate alone.
Key techniques:
- Coordinate multiple MCP calls in sequence
- Embed domain expertise the user would otherwise specify each time
- Add error handling for common MCP failures (timeouts, auth issues, rate limits)
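A sketch of instructions that sequence MCP calls and handle the common failures listed above (the tool names are illustrative, not the real Sentry or GitHub MCP interfaces):

```markdown
## Workflow
1. Fetch the PR diff from the GitHub MCP server.
2. Query the Sentry MCP server for errors in the files the PR touches.
3. Cross-reference: flag changed lines that appear in recent stack traces.

## Error handling
- On timeout: retry once, then continue with partial data and say so.
- On auth failure: stop and tell the user which server needs reconnecting.
- On rate limit: batch the remaining queries and warn that results may be incomplete.
```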
Define success criteria before you build
How will you know if your skill works? Define success criteria upfront: both numbers you can measure and qualities you can observe.
Quantitative targets
| Metric | Target | How to measure |
|---|---|---|
| Trigger rate | 90% of relevant queries | Run 10-20 test queries, count automatic loads |
| Workflow efficiency | Fewer tool calls vs baseline | Compare with/without skill enabled |
| API reliability | 0 failed calls per workflow | Monitor MCP logs during test runs |
| Token usage | Lower than manual prompting | Compare token counts with/without skill |
| Completion rate | 95% of workflows finish without errors | Track how often Claude completes the full workflow |
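One lightweight way to track the trigger-rate target is a test log you fill in by hand (the queries shown are illustrative):

```markdown
| # | Test query                     | Skill loaded? |
|---|--------------------------------|---------------|
| 1 | "help me plan this sprint"     | yes           |
| 2 | "create sprint tasks"          | yes           |
| 3 | "what should we work on next?" | no            |

Trigger rate: 2/3 = 67%, below the 90% target -> rewrite the description and retest.
```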
Qualitative targets
- Users don't need to prompt Claude about next steps; the skill handles the flow
- Workflows complete without user correction
- Consistent results across sessions
- A new user can succeed on first try
The fastest path to a working skill
The most effective approach is to solve a real problem first and extract the skill from that solution:
1. Start with one hard task: pick the most challenging workflow you want to automate
2. Solve it in a conversation: work with Claude until you get the result you want
3. Extract the winning approach: what instructions and steps led to success?
4. Write those into your skill: instructions that worked in conversation will work in a skill
5. Expand from there: once the hardest case works, add more scenarios
This is faster than designing a complete skill from scratch because you're working from proven instructions rather than theoretical ones.
Common planning mistakes
- Scope creep: A "project management" skill that handles sprints, docs, hiring, and budgets will do all of them poorly. Split it into focused skills.
- Premature optimization: Adding scripts and complex folder structures before the basic instructions work. Start with just SKILL.md.
- Copying without understanding: Modifying a skill you found online without understanding why it was designed that way. Build from your own use cases.
- Skipping the conversation test: Writing instructions theoretically without testing them in a real conversation first.