
Success comes down to two things: the YAML frontmatter (especially the description) and the quality of your instructions.

The YAML frontmatter

The frontmatter is everything between the --- delimiters at the top of your SKILL.md. It's in the system prompt for every message, making it the highest-impact text in your entire skill.

Minimal required format

```yaml
---
name: your-skill-name
description: What it does. Use when user asks to [specific trigger phrases].
---
```

That's all you need to start. You can always add fields later.

All available fields

```yaml
---
name: sprint-planner          # Required: kebab-case, matches folder name
description: |                # Required: what + when
  Manages sprint planning in Linear including task creation and velocity
  tracking. Use when user says "plan sprint", "create sprint tasks",
  "Linear sprint", or "help me prioritize this backlog".
license: MIT                  # Optional: for open source skills
compatibility: |              # Optional: environment requirements
  Requires Linear MCP server. Works in Claude.ai and Claude Code.
metadata:                     # Optional: custom key-value pairs
  author: YourName
  version: 1.0.0
  mcp-server: linear
---
```
| Field | Required? | Purpose | When to use |
|---|---|---|---|
| name | Yes | Identifies the skill; must match folder name | Always |
| description | Yes | Controls when Claude loads the skill | Always |
| license | No | Declares licensing for shared skills | When open-sourcing |
| compatibility | No | Notes platform/dependency requirements | When scripts or MCP needed |
| metadata | No | Custom key-value pairs for organization | When versioning or tracking |
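The two required fields are easy to sanity-check programmatically. Here is a minimal sketch using only the Python standard library; the `check_frontmatter` function name and the crude `---` parsing are illustrative, not part of any official tooling:

```python
import re

def check_frontmatter(text: str) -> list[str]:
    """Return a list of problems with a SKILL.md's frontmatter."""
    problems = []
    # Grab everything between the opening and closing --- delimiters.
    match = re.match(r"^---\n(.*?)\n---", text, re.DOTALL)
    if not match:
        return ["no --- frontmatter block at the top of the file"]
    block = match.group(1)
    for field in ("name", "description"):
        if not re.search(rf"^{field}:", block, re.MULTILINE):
            problems.append(f"missing required field: {field}")
    return problems

skill = """---
name: sprint-planner
description: Manages sprint planning. Use when user says "plan sprint".
---

# Sprint Planner
"""
print(check_frontmatter(skill))  # []
```

A real YAML parser would be more robust, but a regex check like this is enough to catch a missing field before you ship the skill.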

Writing descriptions that work

The description is how Claude decides whether to load your skill. A perfect set of instructions behind a bad description will never get loaded. A decent set of instructions behind a great description will get loaded every time.

Formula: [What it does] + [When to use it] + [Key capabilities]

Good descriptions

```yaml
# Specific and actionable
description: Analyzes Figma design files and generates developer handoff
  documentation. Use when user uploads .fig files, asks for "design specs",
  "component documentation", or "design-to-code" guidance.
```

```yaml
# Includes clear trigger phrases
description: Manages Linear project workflows including sprint planning,
  task creation, and status tracking. Use when user mentions "sprint",
  "Linear tasks", "project planning", or asks to "create tasks".
```

```yaml
# Covers multiple trigger patterns
description: Generates weekly engineering status reports from GitHub activity.
  Use when user says "status report", "weekly update", "what did we ship",
  or asks for a summary of recent PRs and commits.
```

Bad descriptions

```yaml
# Too vague - Claude can't determine when to load this
description: Helps with projects
```

```yaml
# Missing trigger conditions - what should Claude look for?
description: Creates sophisticated multi-page documentation
```

```yaml
# Too technical - users don't say these things
description: Implements the Project entity model with hierarchical relationships
```

AI pitfall
AI-generated descriptions tend to be too generic: "A comprehensive solution for managing complex project workflows." This tells Claude nothing useful. Always add specific trigger phrases that match how real users talk.

Description constraints

Two hard rules that will silently break your skill:

  • Maximum 1024 characters. Longer descriptions get truncated, cutting off your trigger phrases.
  • No XML angle brackets (< or >). The description is embedded in system prompts using XML. Angle brackets break parsing.
```yaml
# This will break:
description: Handles <project> setup for <team> workflows

# This works:
description: Handles project setup for team workflows
```
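Both constraints are mechanical, so they can be linted before a skill ever reaches Claude. A small sketch (the 1024-character limit and the angle-bracket rule come from this section; the `lint_description` function name is made up):

```python
def lint_description(description: str) -> list[str]:
    """Flag the two description constraints that silently break a skill."""
    problems = []
    if len(description) > 1024:
        problems.append(f"too long: {len(description)} chars (max 1024)")
    if "<" in description or ">" in description:
        problems.append("contains angle brackets, which break XML parsing")
    return problems

print(lint_description("Handles <project> setup for <team> workflows"))
# ['contains angle brackets, which break XML parsing']
print(lint_description("Handles project setup for team workflows"))
# []
```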

Writing the main instructions

After the frontmatter comes the skill body, where Claude learns how to execute the workflow. Use this template as your starting point:

```markdown
---
name: your-skill
description: [Your description here]
---

# Your Skill Name

## Instructions

### Step 1: [First major step]
Clear explanation of what to do.

Example:
`python scripts/fetch_data.py --project-id PROJECT_ID`

Expected output: [describe what success looks like]

### Step 2: [Next step]
...

## Examples

### Example 1: [common scenario]
User says: "Set up a new marketing campaign"
Actions:
1. Fetch existing campaigns via MCP
2. Create campaign with provided parameters
Result: Campaign created with confirmation link

## Troubleshooting

### Error: [Common error message]
Cause: [Why it happens]
Solution: [How to fix]
```

Good to know
This template is a starting point. Some skills work better with different structures; match your structure to your workflow.

Be specific and actionable

Vague instructions produce vague results. The skill author knows what they mean, but Claude doesn't have that context; it needs explicit detail.

| Quality level | Example instruction | Problem |
|---|---|---|
| Vague | "Validate the data" | Claude doesn't know what tool, what format, or what counts as valid |
| Slightly better | "Validate the CSV data" | Claude knows the format but not the tool or criteria |
| Good | "Run validate.py on the CSV, check for required fields" | Claude knows tool and criteria but not error handling |
| Excellent | "Run validate.py --input {file}. Check for id, name, date columns. On failure, show the missing fields." | Claude knows exactly what to do and what to do when it fails |

Good instruction:

```markdown
Run `python scripts/validate.py --input {file}` to check data format.

If validation fails, common issues include:
- Missing required fields (add them to the CSV)
- Invalid date formats (use YYYY-MM-DD)
- Encoding errors (ensure UTF-8, not Latin-1)

Expected output on success:
"Validation passed: 142 rows, 0 errors"
```

Bad instruction:

```markdown
Validate the data before processing.
```

A good test: could someone who has never seen this workflow follow your instructions exactly? If not, add more detail.
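For concreteness, here is one shape the referenced validate.py script could take. The script itself is hypothetical (the instructions only promise a check for id, name, and date columns), so treat this as a sketch rather than the actual file:

```python
import csv
import sys

REQUIRED = {"id", "name", "date"}

def validate_rows(lines):
    """Check CSV lines for the required columns; return (row_count, missing)."""
    reader = csv.reader(lines)
    header = next(reader, [])
    rows = sum(1 for _ in reader)
    missing = sorted(REQUIRED - set(header))
    return rows, missing

def main(path: str) -> int:
    with open(path, newline="", encoding="utf-8") as f:
        rows, missing = validate_rows(f)
    if missing:
        print(f"Validation failed, missing fields: {', '.join(missing)}")
        return 1
    print(f"Validation passed: {rows} rows, 0 errors")
    return 0

if __name__ == "__main__" and "--input" in sys.argv:
    # Invoked as: python scripts/validate.py --input data.csv
    sys.exit(main(sys.argv[sys.argv.index("--input") + 1]))
```

Note that the success message matches the "Expected output" promised in the instructions; keeping script output and skill instructions in sync is what lets Claude recognize success.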

Edge case
Sometimes instructions should be flexible. If your skill handles multiple file formats, saying "Run validate.py with the appropriate --format flag (csv, json, or xml)" is better than listing every combination. Be specific about important parts and flexible about parts Claude can figure out.

Include examples in your instructions

A single concrete example communicates more than a paragraph of abstract instructions. When Claude sees an example of the desired output, it calibrates its own output to match.

```markdown
## Output format

Always format the sprint summary like this example:

**Sprint 24 Summary (Jan 15-28)**
- Completed: 18/22 tasks (82%)
- Carried over: 4 tasks (auth-migration, api-v2, perf-audit, docs-update)
- Velocity: 42 story points (team average: 38)
- Key wins: Shipped OAuth integration, reduced API latency by 40%
- Risks: auth-migration blocked on security review
```

Without this example, Claude would generate a sprint summary in whatever format seems reasonable. With the example, Claude matches your format precisely.

AI pitfall
Claude sometimes treats examples as the only acceptable format. Add a note like "Adapt this format as needed; the structure matters, not the exact wording."

Reference bundled files clearly

If you have documentation in references/, tell Claude explicitly when to use it, and link to files at the right point in your instructions:

```markdown
Before writing any API calls, consult `references/api-patterns.md` for:
- Rate limiting guidance (max 100 requests/minute)
- Pagination patterns (cursor-based, not offset)
- Error codes and their meanings

For the complete list of available endpoints, see `references/endpoints.md`.
```

Without explicit references, Claude may never look at your reference files.
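A related failure mode is the reverse: instructions that mention a reference file that doesn't exist. A quick way to catch that is to scan SKILL.md for references/ paths and check them against the skill folder. A sketch, assuming the folder layout described here (function names are illustrative):

```python
import re
from pathlib import Path

def find_reference_paths(skill_text: str) -> list[str]:
    """Collect every `references/...` path mentioned in a SKILL.md body."""
    return sorted(set(re.findall(r"`(references/[\w./-]+)`", skill_text)))

def missing_references(skill_dir: str) -> list[str]:
    """Return mentioned reference paths that don't exist on disk."""
    root = Path(skill_dir)
    text = (root / "SKILL.md").read_text(encoding="utf-8")
    return [p for p in find_reference_paths(text) if not (root / p).exists()]
```

Running `missing_references(".")` from the skill folder before publishing catches broken links that would otherwise fail silently at load time.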


Include error handling

Skills that handle errors gracefully get used more. When something goes wrong, Claude should know what to do.

```markdown
## Common issues

### MCP connection failed
If you see "Connection refused":
1. Verify MCP server is running: Check Settings > Extensions
2. Confirm API key is valid
3. Try reconnecting: Settings > Extensions > [Your Service] > Reconnect

### Rate limit exceeded
If you see "429 Too Many Requests":
1. Wait 60 seconds before retrying
2. If persistent, batch operations into fewer API calls
3. Check if another skill is making concurrent requests

### Unexpected empty response
If an MCP call returns empty data:
1. Verify the resource exists (project ID, task ID, etc.)
2. Check permissions - the API key may lack access
3. Try the call with a known-good ID to confirm connectivity
```

Error handling sections encode hard-won knowledge. When a user hits one of these errors without a skill, they're stuck. With a skill, Claude already knows the fix.
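The rate-limit advice above ("wait 60 seconds before retrying") can also be baked into a helper script bundled with the skill, so Claude doesn't have to improvise the retry. A hedged sketch; the RuntimeError-with-"429" convention is a placeholder for however your script surfaces API errors:

```python
import time

def with_retry(call, max_attempts=3, wait_seconds=60):
    """Call `call()`, retrying on rate-limit errors with a fixed wait."""
    for attempt in range(1, max_attempts + 1):
        try:
            return call()
        except RuntimeError as err:
            # Only retry rate-limit errors, and give up on the last attempt.
            if "429" not in str(err) or attempt == max_attempts:
                raise
            time.sleep(wait_seconds)
```

Anything else (missing resources, bad permissions) is re-raised immediately, matching the troubleshooting guidance above: only rate limits are worth waiting out.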


The writing process in practice

  1. Write the frontmatter first. Get the name and description right before anything else.
  2. Write the instructions by replaying your successful conversation. What did you tell Claude step by step?
  3. Add one example showing the most common use case with expected input and output.
  4. Add error handling for the top 2-3 issues you've encountered.
  5. Test immediately: a rough skill that works is better than a polished skill that doesn't.

Most skills go through 3-5 revision cycles before they're solid.