Success comes down to two things: the YAML frontmatter (especially the description) and the quality of your instructions.
The YAML frontmatter
The frontmatter is everything between the --- delimiters at the top of your SKILL.md. It's in the system prompt for every message, making it the highest-impact text in your entire skill.
Minimal required format
---
name: your-skill-name
description: What it does. Use when user asks to [specific trigger phrases].
---

That's all you need to start. You can always add fields later.
All available fields
---
name: sprint-planner # Required: kebab-case, matches folder name
description: | # Required: what + when
  Manages sprint planning in Linear including task creation and velocity
  tracking. Use when user says "plan sprint", "create sprint tasks",
  "Linear sprint", or "help me prioritize this backlog".
license: MIT # Optional: for open source skills
compatibility: | # Optional: environment requirements
  Requires Linear MCP server. Works in Claude.ai and Claude Code.
metadata: # Optional: custom key-value pairs
  author: YourName
  version: 1.0.0
  mcp-server: linear
---

| Field | Required? | Purpose | When to use |
|---|---|---|---|
| name | Yes | Identifies the skill; must match folder name | Always |
| description | Yes | Controls when Claude loads the skill | Always |
| license | No | Declares licensing for shared skills | When open-sourcing |
| compatibility | No | Notes platform/dependency requirements | When scripts or MCP needed |
| metadata | No | Custom key-value pairs for organization | When versioning or tracking |
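These rules are mechanical enough to check before shipping a skill. The sketch below is a rough, unofficial validator — it only handles flat `key: value` frontmatter (a `description: |` block scalar keeps just its first line), and it encodes the required fields from this table plus the description constraints covered later in this article:

```python
import re

def check_frontmatter(text):
    """Rough sanity checks for SKILL.md frontmatter; returns a list of problems."""
    problems = []
    m = re.match(r"^---\n(.*?)\n---", text, re.DOTALL)
    if m is None:
        return ["no frontmatter found between --- delimiters"]
    meta = {}
    for line in m.group(1).splitlines():
        # Naive flat key: value parsing; block scalars (description: |)
        # keep only their first line here.
        if ":" in line and not line[:1].isspace():
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    if not re.fullmatch(r"[a-z0-9]+(-[a-z0-9]+)*", meta.get("name", "")):
        problems.append("name must be kebab-case")
    desc = meta.get("description", "")
    if not desc:
        problems.append("description is required")
    if len(desc) > 1024:
        problems.append("description exceeds 1024 characters")
    if "<" in desc or ">" in desc:
        problems.append("description contains angle brackets")
    return problems
```

Running `check_frontmatter(open("SKILL.md").read())` returns an empty list when the basics look right.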
Writing descriptions that work
The description is how Claude decides whether to load your skill. A perfect set of instructions behind a bad description will never get loaded. A decent set of instructions behind a great description will get loaded every time.
Formula: [What it does] + [When to use it] + [Key capabilities]
Good descriptions
# Specific and actionable
description: Analyzes Figma design files and generates developer handoff
documentation. Use when user uploads .fig files, asks for "design specs",
"component documentation", or "design-to-code" guidance.

# Includes clear trigger phrases
description: Manages Linear project workflows including sprint planning,
task creation, and status tracking. Use when user mentions "sprint",
"Linear tasks", "project planning", or asks to "create tasks".

# Covers multiple trigger patterns
description: Generates weekly engineering status reports from GitHub activity.
Use when user says "status report", "weekly update", "what did we ship",
or asks for a summary of recent PRs and commits.Bad descriptions
# Too vague - Claude can't determine when to load this
description: Helps with projects

# Missing trigger conditions - what should Claude look for?
description: Creates sophisticated multi-page documentation

# Too technical - users don't say these things
description: Implements the Project entity model with hierarchical relationships

Description constraints
Two hard rules that will silently break your skill:
- Maximum 1024 characters. Longer descriptions get truncated, cutting off your trigger phrases.
- No XML angle brackets (`<` or `>`). The description is embedded in system prompts using XML, and angle brackets break parsing.
# This will break:
description: Handles <project> setup for <team> workflows
# This works:
description: Handles project setup for team workflows

Writing the main instructions
After the frontmatter comes the skill body, where Claude learns how to execute the workflow. Use this template as your starting point:
---
name: your-skill
description: [Your description here]
---
# Your Skill Name
## Instructions
### Step 1: [First major step]
Clear explanation of what to do.
Example:
`python scripts/fetch_data.py --project-id PROJECT_ID`
Expected output: [describe what success looks like]
### Step 2: [Next step]
...
## Examples
### Example 1: [common scenario]
User says: "Set up a new marketing campaign"
Actions:
1. Fetch existing campaigns via MCP
2. Create campaign with provided parameters
Result: Campaign created with confirmation link
## Troubleshooting
### Error: [Common error message]
Cause: [Why it happens]
Solution: [How to fix]

Be specific and actionable
Vague instructions produce vague results. The skill author knows what they mean, but Claude doesn't have that context; it needs explicit detail.
| Quality level | Example instruction | Problem |
|---|---|---|
| Vague | "Validate the data" | Claude doesn't know what tool, what format, or what counts as valid |
| Slightly better | "Validate the CSV data" | Claude knows the format but not the tool or criteria |
| Good | "Run validate.py on the CSV, check for required fields" | Claude knows tool and criteria but not error handling |
| Excellent | "Run validate.py --input {file}. Check for id, name, date columns. On failure, show the missing fields." | Claude knows exactly what to do and what to do when it fails |
Good instruction:
Run `python scripts/validate.py --input {file}` to check data format.
If validation fails, common issues include:
- Missing required fields (add them to the CSV)
- Invalid date formats (use YYYY-MM-DD)
- Encoding errors (ensure UTF-8, not Latin-1)
Expected output on success:
"Validation passed: 142 rows, 0 errors"

Bad instruction:
Validate the data before processing.

A good test: could someone who has never seen this workflow follow your instructions exactly? If not, add more detail.
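The validate.py referenced in the good instruction isn't shown in this article, but a minimal sketch of what such a script might look like helps make the test concrete. The flag name, required columns, and error messages below are assumptions taken from the instruction itself, not a real script:

```python
# Hypothetical validate.py sketch matching the instruction above;
# the real script's flags and checks are assumptions.
import argparse
import csv
import datetime
import sys

REQUIRED = ["id", "name", "date"]

def validate(path):
    """Return (row_count, errors) for a CSV at `path`."""
    errors = []
    rows = 0
    # UTF-8 is required; a Latin-1 file raises UnicodeDecodeError here.
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        missing = [c for c in REQUIRED if c not in (reader.fieldnames or [])]
        if missing:
            return 0, [f"missing required fields: {', '.join(missing)}"]
        for lineno, row in enumerate(reader, start=2):
            rows += 1
            try:
                datetime.date.fromisoformat(row["date"])
            except ValueError:
                errors.append(f"line {lineno}: invalid date {row['date']!r} (use YYYY-MM-DD)")
    return rows, errors

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--input", required=True)
    args = parser.parse_args()
    rows, errors = validate(args.input)
    if errors:
        print("\n".join(errors))
        sys.exit(1)
    print(f"Validation passed: {rows} rows, 0 errors")
```

Note how each failure mode in the instruction maps to a specific check and message — that is what makes the instruction followable.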
Include examples in your instructions
A single concrete example communicates more than a paragraph of abstract instructions. When Claude sees an example of the desired output, it calibrates its own output to match.
## Output format
Always format the sprint summary like this example:
**Sprint 24 Summary (Jan 15-28)**
- Completed: 18/22 tasks (82%)
- Carried over: 4 tasks (auth-migration, api-v2, perf-audit, docs-update)
- Velocity: 42 story points (team average: 38)
- Key wins: Shipped OAuth integration, reduced API latency by 40%
- Risks: auth-migration blocked on security review

Without this example, Claude would generate a sprint summary in whatever format seems reasonable. With the example, Claude matches your format precisely.
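If the format really must not drift, a skill can also pin it down in a bundled script and tell Claude to use that instead of freehand formatting. This helper is hypothetical — the field names are illustrative, not from an actual skill:

```python
def format_sprint_summary(s):
    """Render a sprint dict in the pinned summary format shown above.

    Hypothetical helper; field names are illustrative.
    """
    pct = round(s["completed"] / s["total"] * 100)
    return "\n".join([
        f"**Sprint {s['number']} Summary ({s['dates']})**",
        f"- Completed: {s['completed']}/{s['total']} tasks ({pct}%)",
        f"- Carried over: {len(s['carryover'])} tasks ({', '.join(s['carryover'])})",
        f"- Velocity: {s['velocity']} story points (team average: {s['avg_velocity']})",
    ])
```

A script like this turns a soft formatting convention into something deterministic.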
Reference bundled files clearly
If you have documentation in references/, tell Claude explicitly when to use it, and link to files at the right point in your instructions:
Before writing any API calls, consult `references/api-patterns.md` for:
- Rate limiting guidance (max 100 requests/minute)
- Pagination patterns (cursor-based, not offset)
- Error codes and their meanings
For the complete list of available endpoints, see `references/endpoints.md`.

Without explicit references, Claude may never look at your reference files.
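The folder structure these references assume can be scaffolded in a few commands. Every file name below is an example, not a requirement:

```shell
# Illustrative skill layout with bundled reference docs and scripts;
# the file names are examples only.
mkdir -p my-skill/references my-skill/scripts
touch my-skill/SKILL.md
touch my-skill/references/api-patterns.md my-skill/references/endpoints.md
touch my-skill/scripts/fetch_data.py
```

Keeping references/ and scripts/ as separate folders makes it obvious in the instructions which path Claude should read versus run.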
Include error handling
Skills that handle errors gracefully get used more. When something goes wrong, Claude should know what to do.
## Common issues
### MCP connection failed
If you see "Connection refused":
1. Verify MCP server is running: Check Settings > Extensions
2. Confirm API key is valid
3. Try reconnecting: Settings > Extensions > [Your Service] > Reconnect
### Rate limit exceeded
If you see "429 Too Many Requests":
1. Wait 60 seconds before retrying
2. If persistent, batch operations into fewer API calls
3. Check if another skill is making concurrent requests
### Unexpected empty response
If an MCP call returns empty data:
1. Verify the resource exists (project ID, task ID, etc.)
2. Check permissions - the API key may lack access
3. Try the call with a known-good ID to confirm connectivity

Error handling sections encode hard-won knowledge. When a user hits one of these errors without a skill, they're stuck. With a skill, Claude already knows the fix.
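For the rate-limit case, a skill can even bundle the fix as a script in scripts/ rather than describing it in prose. This is a hedged sketch of an exponential-backoff helper — the exception type and wait times are illustrative, not tied to any real client library:

```python
import time

def with_backoff(call, retries=3, base_delay=60):
    """Retry `call` after rate-limit errors, doubling the wait each attempt.

    RuntimeError stands in for whatever rate-limit exception the
    client library actually raises.
    """
    for attempt in range(retries):
        try:
            return call()
        except RuntimeError as e:
            if "429" not in str(e) or attempt == retries - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)
```

With a helper like this, the troubleshooting entry can simply say "wrap the call in `with_backoff`" instead of asking Claude to improvise the retry logic.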
The writing process in practice
- Write the frontmatter first. Get the name and description right before anything else.
- Write the instructions by replaying your successful conversation. What did you tell Claude step by step?
- Add one example showing the most common use case with expected input and output.
- Add error handling for the top 2-3 issues you've encountered.
- Test immediately: a rough skill that works is better than a polished skill that doesn't.
Most skills go through 3-5 revision cycles before they're solid.