
Skill Creator

A skill for creating new skills and iteratively improving them.

At a high level, the process of creating a skill goes like this:

  • Decide what you want the skill to do and roughly how it should do it
  • Write a draft of the skill
  • Create a few test prompts and run claude-with-access-to-the-skill on them
  • Help the user evaluate the results both qualitatively and quantitatively
  • While the runs happen in the background, draft some quantitative evals if there aren't any (if some already exist, use them as-is or modify them if something needs to change). Then explain them to the user (or, if they already existed, explain the existing ones)
  • Use the eval-viewer/generate_review.py script to show the user the results, and also let them look at the quantitative metrics
  • Rewrite the skill based on feedback from the user's evaluation of the results (and also if there are any glaring flaws that become apparent from the quantitative benchmarks)
  • Repeat until you're satisfied
  • Expand the test set and try again at larger scale

Your job when using this skill is to figure out where the user is in this process and then jump in and help them progress through these stages. So for instance, maybe they're like "I want to make a skill for X". You can help narrow down what they mean, write a draft, write the test cases, figure out how they want to evaluate, run all the prompts, and repeat.

On the other hand, maybe they already have a draft of the skill. In this case you can go straight to the eval/iterate part of the loop.

Of course, you should always be flexible and if the user is like "I don't need to run a bunch of evaluations, just vibe with me", you can do that instead.

Then after the skill is done (but again, the order is flexible), you can also run the skill description improver, which we have a whole separate script for, to optimize the triggering of the skill.

Cool? Cool.

Communicating with the user

The skill creator is liable to be used by people across a wide range of familiarity with coding jargon. If you haven't heard (and how could you have, this all started only very recently), there's a trend where the power of Claude is inspiring plumbers to open up their terminals, and parents and grandparents to google "how to install npm". On the other hand, the bulk of users are probably fairly computer-literate.

So please pay attention to context cues to understand how to phrase your communication! In the default case, just to give you some idea:

  • "evaluation" and "benchmark" are borderline, but OK
  • for "JSON" and "assertion" you want to see serious cues from the user that they know what those things are before using them without explaining them

If you're in doubt about whether the user will get a term, it's fine to clarify it briefly with a short definition.


Creating a skill

Capture Intent

Start by understanding the user's intent. The current conversation might already contain a workflow the user wants to capture (e.g., they say "turn this into a skill"). If so, extract answers from the conversation history first — the tools used, the sequence of steps, corrections the user made, input/output formats observed. Ask the user to fill in any gaps, and get their confirmation before proceeding to the next step.

  1. What should this skill enable Claude to do?
  2. When should this skill trigger? (what user phrases/contexts)
  3. What's the expected output format?
  4. Should we set up test cases to verify the skill works? Skills with objectively verifiable outputs (file transforms, data extraction, code generation, fixed workflow steps) benefit from test cases. Skills with subjective outputs (writing style, art) often don't need them. Suggest the appropriate default based on the skill type, but let the user decide.

Interview and Research

Proactively ask questions about edge cases, input/output formats, example files, success criteria, and dependencies. Wait to write test prompts until you've got this part ironed out.

Check available MCPs. If any would be useful for research (searching docs, finding similar skills, looking up best practices), do the research in parallel via subagents if available, otherwise inline. Come prepared with context to reduce the burden on the user.

Write the SKILL.md

Based on the user interview, fill in these components:

  • name: Skill identifier
  • description: When to trigger, what it does. This is the primary triggering mechanism - include both what the skill does AND specific contexts for when to use it. All "when to use" info goes here, not in the body. Note: currently Claude has a tendency to "undertrigger" skills -- to not use them when they'd be useful. To combat this, please make the skill descriptions a little bit "pushy". So for instance, instead of "How to build a simple fast dashboard to display internal Anthropic data.", you might write "How to build a simple fast dashboard to display internal Anthropic data. Make sure to use this skill whenever the user mentions dashboards, data visualization, internal metrics, or wants to display any kind of company data, even if they don't explicitly ask for a 'dashboard.'"
  • compatibility: Required tools, dependencies (optional, rarely needed)
  • the rest of the skill :)

Skill Writing Guide

Anatomy of a Skill

skill-name/
├── SKILL.md (required)
│   ├── YAML frontmatter (name, description required)
│   └── Markdown instructions
└── Bundled Resources (optional)
    ├── scripts/    - Executable code for deterministic/repetitive tasks
    ├── references/ - Docs loaded into context as needed
    └── assets/     - Files used in output (templates, icons, fonts)

Progressive Disclosure

Skills use a three-level loading system:

  1. Metadata (name + description) - Always in context (~100 words)
  2. SKILL.md body - In context whenever skill triggers (<500 lines ideal)
  3. Bundled resources - As needed (unlimited, scripts can execute without loading)

These word counts are approximate and you can feel free to go longer if needed.

Key patterns:

  • Keep SKILL.md under 500 lines; if you're approaching this limit, add an additional layer of hierarchy along with clear pointers about where the model using the skill should go next to follow up.
  • Reference files clearly from SKILL.md with guidance on when to read them
  • For large reference files (>300 lines), include a table of contents

Domain organization: When a skill supports multiple domains/frameworks, organize by variant:

cloud-deploy/
├── SKILL.md (workflow + selection)
└── references/
    ├── aws.md
    ├── gcp.md
    └── azure.md
Claude reads only the relevant reference file.

Principle of Lack of Surprise

This goes without saying, but skills must not contain malware, exploit code, or any content that could compromise system security. If a skill's contents were described to the user, its intent should not surprise them. Don't go along with requests to create misleading skills or skills designed to facilitate unauthorized access, data exfiltration, or other malicious activities. Things like "roleplay as an XYZ" are OK though.

Writing Patterns

Prefer using the imperative form in instructions.

Defining output formats - You can do it like this:

## Report structure
ALWAYS use this exact template:
# [Title]
## Executive summary
## Key findings
## Recommendations

Examples pattern - It's useful to include examples. You can format them like this (but if the examples themselves contain "Input" and "Output", you might want to deviate a little):

## Commit message format
**Example 1:**
Input: Added user authentication with JWT tokens
Output: feat(auth): implement JWT-based authentication

Writing Style

Try to explain to the model why things are important in lieu of heavy-handed musty MUSTs. Use theory of mind and try to make the skill general and not super-narrow to specific examples. Start by writing a draft and then look at it with fresh eyes and improve it.

Test Cases

After writing the skill draft, come up with 2-3 realistic test prompts — the kind of thing a real user would actually say. Share them with the user: [you don't have to use this exact language] "Here are a few test cases I'd like to try. Do these look right, or do you want to add more?" Then run them.

Save test cases to evals/evals.json. Don't write assertions yet — just the prompts. You'll draft assertions in the next step while the runs are in progress.

{
  "skill_name": "example-skill",
  "evals": [
    {
      "id": 1,
      "prompt": "User's task prompt",
      "expected_output": "Description of expected result",
      "files": []
    }
  ]
}

See references/schemas.md for the full schema (including the assertions field, which you'll add later).


Running and evaluating test cases

This section is one continuous sequence β€” don't stop partway through. Do NOT use /skill-test or any other testing skill.

Put results in <skill-name>-workspace/ as a sibling to the skill directory. Within the workspace, organize results by iteration (iteration-1/, iteration-2/, etc.) and within that, each test case gets a directory (eval-0/, eval-1/, etc.). Don't create all of this upfront — just create directories as you go.

Step 1: Spawn all runs (with-skill AND baseline) in the same turn

For each test case, spawn two subagents in the same turn — one with the skill, one without. This is important: don't spawn the with-skill runs first and then come back for baselines later. Launch everything at once so it all finishes around the same time.

With-skill run:

Execute this task:
- Skill path: <path-to-skill>
- Task: <eval prompt>
- Input files: <eval files if any, or "none">
- Save outputs to: <workspace>/iteration-<N>/eval-<ID>/with_skill/outputs/
- Outputs to save: <what the user cares about — e.g., "the .docx file", "the final CSV">

Baseline run (same prompt, but the baseline depends on context):

  • Creating a new skill: no skill at all. Same prompt, no skill path, save to without_skill/outputs/.
  • Improving an existing skill: the old version. Before editing, snapshot the skill (cp -r <skill-path> <workspace>/skill-snapshot/), then point the baseline subagent at the snapshot. Save to old_skill/outputs/.

Write an eval_metadata.json for each test case (assertions can be empty for now). Give each eval a descriptive name based on what it's testing — not just "eval-0". Use this name for the directory too. If this iteration uses new or modified eval prompts, create these files for each new eval directory — don't assume they carry over from previous iterations.

{
  "eval_id": 0,
  "eval_name": "descriptive-name-here",
  "prompt": "The user's task prompt",
  "assertions": []
}

Step 2: While runs are in progress, draft assertions

Don't just wait for the runs to finish — you can use this time productively. Draft quantitative assertions for each test case and explain them to the user. If assertions already exist in evals/evals.json, review them and explain what they check.

Good assertions are objectively verifiable and have descriptive names — they should read clearly in the benchmark viewer so someone glancing at the results immediately understands what each one checks. Subjective skills (writing style, design quality) are better evaluated qualitatively — don't force assertions onto things that need human judgment.

Update the eval_metadata.json files and evals/evals.json with the assertions once drafted. Also explain to the user what they'll see in the viewer — both the qualitative outputs and the quantitative benchmark.

Step 3: As runs complete, capture timing data

When each subagent task completes, you receive a notification containing total_tokens and duration_ms. Save this data immediately to timing.json in the run directory:

{
  "total_tokens": 84852,
  "duration_ms": 23332,
  "total_duration_seconds": 23.3
}

This is the only opportunity to capture this data — it comes through the task notification and isn't persisted elsewhere. Process each notification as it arrives rather than trying to batch them.
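
For instance, a minimal sketch of that bookkeeping (the helper name and paths are illustrative; the field names match the example above):

import json
from pathlib import Path

def save_timing(notification: dict, run_dir: str) -> None:
    # Persist the numbers from the task notification before they're lost.
    timing = {
        "total_tokens": notification["total_tokens"],
        "duration_ms": notification["duration_ms"],
        "total_duration_seconds": round(notification["duration_ms"] / 1000, 1),
    }
    Path(run_dir).mkdir(parents=True, exist_ok=True)
    Path(run_dir, "timing.json").write_text(json.dumps(timing, indent=2))

save_timing({"total_tokens": 84852, "duration_ms": 23332}, "iteration-1/eval-0/with_skill")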

Step 4: Grade, aggregate, and launch the viewer

Once all runs are done:

  1. Grade each run — spawn a grader subagent (or grade inline) that reads agents/grader.md and evaluates each assertion against the outputs. Save results to grading.json in each run directory. The grading.json expectations array must use the fields text, passed, and evidence (not name/met/details or other variants) — the viewer depends on these exact field names. For assertions that can be checked programmatically, write and run a script rather than eyeballing it — scripts are faster, more reliable, and can be reused across iterations.
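
    For example, a check like "the output CSV has a profit_margin column" can be scripted directly. A minimal sketch (the paths and column name are illustrative), writing the exact field names the viewer expects:

    import csv, json

    # Read just the header row of the produced CSV.
    with open("with_skill/outputs/result.csv", newline="") as f:
        header = next(csv.reader(f))

    passed = "profit_margin" in header
    grading = {"expectations": [{
        "text": "Output CSV contains a profit_margin column",
        "passed": passed,
        "evidence": f"Header row: {header}",
    }]}
    with open("with_skill/grading.json", "w") as f:
        json.dump(grading, f, indent=2)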

  2. Aggregate into benchmark — run the aggregation script from the skill-creator directory:

    python -m scripts.aggregate_benchmark <workspace>/iteration-N --skill-name <name>
    
    This produces benchmark.json and benchmark.md with pass_rate, time, and tokens for each configuration, with mean ± stddev and the delta. If generating benchmark.json manually, see references/schemas.md for the exact schema the viewer expects. Put each with_skill version before its baseline counterpart.

  3. Do an analyst pass — read the benchmark data and surface patterns the aggregate stats might hide. See agents/analyzer.md (the "Analyzing Benchmark Results" section) for what to look for — things like assertions that always pass regardless of skill (non-discriminating), high-variance evals (possibly flaky), and time/token tradeoffs.

  4. Launch the viewer with both qualitative outputs and quantitative data:

    nohup python <skill-creator-path>/eval-viewer/generate_review.py \
      <workspace>/iteration-N \
      --skill-name "my-skill" \
      --benchmark <workspace>/iteration-N/benchmark.json \
      > /dev/null 2>&1 &
    VIEWER_PID=$!
    
    For iteration 2+, also pass --previous-workspace <workspace>/iteration-<N-1>.

Cowork / headless environments: If webbrowser.open() is not available or the environment has no display, use --static <output_path> to write a standalone HTML file instead of starting a server. Feedback will be downloaded as a feedback.json file when the user clicks "Submit All Reviews". After download, copy feedback.json into the workspace directory for the next iteration to pick up.

Note: please use generate_review.py to create the viewer; there's no need to write custom HTML.

  5. Tell the user something like: "I've opened the results in your browser. There are two tabs — 'Outputs' lets you click through each test case and leave feedback, 'Benchmark' shows the quantitative comparison. When you're done, come back here and let me know."

What the user sees in the viewer

The "Outputs" tab shows one test case at a time: - Prompt: the task that was given - Output: the files the skill produced, rendered inline where possible - Previous Output (iteration 2+): collapsed section showing last iteration's output - Formal Grades (if grading was run): collapsed section showing assertion pass/fail - Feedback: a textbox that auto-saves as they type - Previous Feedback (iteration 2+): their comments from last time, shown below the textbox

The "Benchmark" tab shows the stats summary: pass rates, timing, and token usage for each configuration, with per-eval breakdowns and analyst observations.

Navigation is via prev/next buttons or arrow keys. When done, they click "Submit All Reviews" which saves all feedback to feedback.json.

Step 5: Read the feedback

When the user tells you they're done, read feedback.json:

{
  "reviews": [
    {"run_id": "eval-0-with_skill", "feedback": "the chart is missing axis labels", "timestamp": "..."},
    {"run_id": "eval-1-with_skill", "feedback": "", "timestamp": "..."},
    {"run_id": "eval-2-with_skill", "feedback": "perfect, love this", "timestamp": "..."}
  ],
  "status": "complete"
}

Empty feedback means the user thought it was fine. Focus your improvements on the test cases where the user had specific complaints.
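
A quick way to pull out just the runs that need attention (a sketch using the structure above):

import json

with open("feedback.json") as f:
    feedback = json.load(f)

# Keep only the reviews where the user actually wrote something.
complaints = {r["run_id"]: r["feedback"] for r in feedback["reviews"] if r["feedback"]}
for run_id, note in complaints.items():
    print(f"{run_id}: {note}")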

Kill the viewer server when you're done with it:

kill $VIEWER_PID 2>/dev/null

Improving the skill

This is the heart of the loop. You've run the test cases, the user has reviewed the results, and now you need to make the skill better based on their feedback.

How to think about improvements

  1. Generalize from the feedback. The big-picture thing happening here is that we're trying to create skills that can be used a million times (maybe literally, maybe even more, who knows) across many different prompts. You and the user are iterating on only a few examples over and over because it helps move faster: the user knows these examples inside and out, so it's quick for them to assess new outputs. But if the skill you're codeveloping works only for those examples, it's useless. Rather than putting in fiddly, overfitty changes or oppressively constrictive MUSTs, if there's some stubborn issue, try branching out: use different metaphors, or recommend different patterns of working. It's relatively cheap to try, and maybe you'll land on something great.

  2. Keep the prompt lean. Remove things that aren't pulling their weight. Make sure to read the transcripts, not just the final outputs — if it looks like the skill is making the model waste a bunch of time doing things that are unproductive, you can try getting rid of the parts of the skill that are making it do that and seeing what happens.

  3. Explain the why. Try hard to explain the why behind everything you're asking the model to do. Today's LLMs are smart. They have good theory of mind and when given a good harness can go beyond rote instructions and really make things happen. Even if the feedback from the user is terse or frustrated, try to actually understand the task, what the user actually wrote, and why they wrote it, and then transmit this understanding into the instructions. If you find yourself writing ALWAYS or NEVER in all caps, or using super rigid structures, that's a yellow flag — if possible, reframe and explain the reasoning so that the model understands why the thing you're asking for is important. That's a more humane, powerful, and effective approach.

  4. Look for repeated work across test cases. Read the transcripts from the test runs and notice if the subagents all independently wrote similar helper scripts or took the same multi-step approach to something. If all 3 test cases resulted in the subagent writing a create_docx.py or a build_chart.py, that's a strong signal the skill should bundle that script. Write it once, put it in scripts/, and tell the skill to use it. This saves every future invocation from reinventing the wheel.

This task is pretty important (we are trying to create billions a year in economic value here!) and your thinking time is not the blocker; take your time and really mull things over. I'd suggest writing a draft revision and then looking at it anew and making improvements. Really do your best to get into the head of the user and understand what they want and need.

The iteration loop

After improving the skill:

  1. Apply your improvements to the skill
  2. Rerun all test cases into a new iteration-<N+1>/ directory, including baseline runs. If you're creating a new skill, the baseline is always without_skill (no skill) — that stays the same across iterations. If you're improving an existing skill, use your judgment on what makes sense as the baseline: the original version the user came in with, or the previous iteration.
  3. Launch the reviewer with --previous-workspace pointing at the previous iteration
  4. Wait for the user to review and tell you they're done
  5. Read the new feedback, improve again, repeat

Keep going until:

  • The user says they're happy
  • The feedback is all empty (everything looks good)
  • You're not making meaningful progress


Advanced: Blind comparison

For situations where you want a more rigorous comparison between two versions of a skill (e.g., the user asks "is the new version actually better?"), there's a blind comparison system. Read agents/comparator.md and agents/analyzer.md for the details. The basic idea is: give two outputs to an independent agent without telling it which is which, and let it judge quality. Then analyze why the winner won.

This is optional, requires subagents, and most users won't need it. The human review loop is usually sufficient.


Description Optimization

The description field in SKILL.md frontmatter is the primary mechanism that determines whether Claude invokes a skill. After creating or improving a skill, offer to optimize the description for better triggering accuracy.

Step 1: Generate trigger eval queries

Create 20 eval queries — a mix of should-trigger and should-not-trigger. Save as JSON:

[
  {"query": "the user prompt", "should_trigger": true},
  {"query": "another prompt", "should_trigger": false}
]

The queries must be realistic and something a Claude Code or Claude.ai user would actually type. Not abstract requests, but requests that are concrete and specific and have a good amount of detail. For instance, file paths, personal context about the user's job or situation, column names and values, company names, URLs. A little bit of backstory. Some might be in lowercase or contain abbreviations or typos or casual speech. Use a mix of different lengths, and focus on edge cases rather than making them clear-cut (the user will get a chance to sign off on them).

Bad: "Format this data", "Extract text from PDF", "Create a chart"

Good: "ok so my boss just sent me this xlsx file (its in my downloads, called something like 'Q4 sales final FINAL v2.xlsx') and she wants me to add a column that shows the profit margin as a percentage. The revenue is in column C and costs are in column D i think"

For the should-trigger queries (8-10), think about coverage. You want different phrasings of the same intent — some formal, some casual. Include cases where the user doesn't explicitly name the skill or file type but clearly needs it. Throw in some uncommon use cases and cases where this skill competes with another but should win.

For the should-not-trigger queries (8-10), the most valuable ones are the near-misses — queries that share keywords or concepts with the skill but actually need something different. Think adjacent domains, ambiguous phrasing where a naive keyword match would trigger but shouldn't, and cases where the query touches on something the skill does but in a context where another tool is more appropriate.

The key thing to avoid: don't make should-not-trigger queries obviously irrelevant. "Write a fibonacci function" as a negative test for a PDF skill is too easy — it doesn't test anything. The negative cases should be genuinely tricky.

Step 2: Review with user

Present the eval set to the user for review using the HTML template:

  1. Read the template from assets/eval_review.html
  2. Replace the placeholders:
     • __EVAL_DATA_PLACEHOLDER__ → the JSON array of eval items (no quotes around it — it's a JS variable assignment)
     • __SKILL_NAME_PLACEHOLDER__ → the skill's name
     • __SKILL_DESCRIPTION_PLACEHOLDER__ → the skill's current description
  3. Write to a temp file (e.g., /tmp/eval_review_<skill-name>.html) and open it: open /tmp/eval_review_<skill-name>.html
  4. The user can edit queries, toggle should-trigger, add/remove entries, then click "Export Eval Set"
  5. The file downloads to ~/Downloads/eval_set.json — check the Downloads folder for the most recent version in case there are multiple (e.g., eval_set (1).json)
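
The substitution itself is plain string replacement. A minimal sketch (the skill name, description, and eval items are illustrative):

import json

with open("assets/eval_review.html") as f:
    template = f.read()

eval_items = [
    {"query": "the user prompt", "should_trigger": True},
    {"query": "another prompt", "should_trigger": False},
]

html = (
    template
    .replace("__EVAL_DATA_PLACEHOLDER__", json.dumps(eval_items))  # raw JSON, since it's a JS variable assignment
    .replace("__SKILL_NAME_PLACEHOLDER__", "my-skill")
    .replace("__SKILL_DESCRIPTION_PLACEHOLDER__", "Current description here")
)

with open("/tmp/eval_review_my-skill.html", "w") as f:
    f.write(html)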

This step matters — bad eval queries lead to bad descriptions.

Step 3: Run the optimization loop

Tell the user: "This will take some time — I'll run the optimization loop in the background and check on it periodically."

Save the eval set to the workspace, then run in the background:

python -m scripts.run_loop \
  --eval-set <path-to-trigger-eval.json> \
  --skill-path <path-to-skill> \
  --model <model-id-powering-this-session> \
  --max-iterations 5 \
  --verbose

Use the model ID from your system prompt (the one powering the current session) so the triggering test matches what the user actually experiences.

While it runs, periodically tail the output to give the user updates on which iteration it's on and what the scores look like.

This handles the full optimization loop automatically. It splits the eval set into 60% train and 40% held-out test, evaluates the current description (running each query 3 times to get a reliable trigger rate), then calls Claude to propose improvements based on what failed. It re-evaluates each new description on both train and test, iterating up to 5 times. When it's done, it opens an HTML report in the browser showing the results per iteration and returns JSON with best_description — selected by test score rather than train score to avoid overfitting.
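
To make the mechanics concrete, here's a simplified sketch of the evaluation step inside that loop (invoke_claude is a hypothetical stand-in for the claude -p call the real script makes):

import random

def invoke_claude(query: str, description: str) -> bool:
    """Stand-in: report whether Claude consulted the skill for this query."""
    return True  # the real loop shells out to `claude -p` and inspects the run

def trigger_rate(evals: list, description: str, runs: int = 3) -> float:
    # Each query runs several times because triggering is stochastic.
    correct = sum(
        invoke_claude(e["query"], description) == e["should_trigger"]
        for e in evals
        for _ in range(runs)
    )
    return correct / (len(evals) * runs)

evals = [
    {"query": "the user prompt", "should_trigger": True},
    {"query": "another prompt", "should_trigger": False},
]
random.shuffle(evals)
split = int(len(evals) * 0.6)  # 60% train / 40% held-out test
train, test = evals[:split], evals[split:]
print(trigger_rate(train, "candidate description"))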

How skill triggering works

Understanding the triggering mechanism helps design better eval queries. Skills appear in Claude's available_skills list with their name + description, and Claude decides whether to consult a skill based on that description. The important thing to know is that Claude only consults skills for tasks it can't easily handle on its own β€” simple, one-step queries like "read this PDF" may not trigger a skill even if the description matches perfectly, because Claude can handle them directly with basic tools. Complex, multi-step, or specialized queries reliably trigger skills when the description matches.

This means your eval queries should be substantive enough that Claude would actually benefit from consulting a skill. Simple queries like "read file X" are poor test cases — they won't trigger skills regardless of description quality.

Step 4: Apply the result

Take best_description from the JSON output and update the skill's SKILL.md frontmatter. Show the user before/after and report the scores.


Package and Present (only if present_files tool is available)

Check whether you have access to the present_files tool. If you don't, skip this step. If you do, package the skill and present the .skill file to the user:

python -m scripts.package_skill <path/to/skill-folder>

After packaging, direct the user to the resulting .skill file path so they can install it.


Claude.ai-specific instructions

In Claude.ai, the core workflow is the same (draft → test → review → improve → repeat), but because Claude.ai doesn't have subagents, some mechanics change. Here's what to adapt:

Running test cases: No subagents means no parallel execution. For each test case, read the skill's SKILL.md, then follow its instructions to accomplish the test prompt yourself. Do them one at a time. This is less rigorous than independent subagents (you wrote the skill and you're also running it, so you have full context), but it's a useful sanity check — and the human review step compensates. Skip the baseline runs — just use the skill to complete the task as requested.

Reviewing results: If you can't open a browser (e.g., Claude.ai's VM has no display, or you're on a remote server), skip the browser reviewer entirely. Instead, present results directly in the conversation. For each test case, show the prompt and the output. If the output is a file the user needs to see (like a .docx or .xlsx), save it to the filesystem and tell them where it is so they can download and inspect it. Ask for feedback inline: "How does this look? Anything you'd change?"

Benchmarking: Skip the quantitative benchmarking — it relies on baseline comparisons which aren't meaningful without subagents. Focus on qualitative feedback from the user.

The iteration loop: Same as before — improve the skill, rerun the test cases, ask for feedback — just without the browser reviewer in the middle. You can still organize results into iteration directories on the filesystem if you have one.

Description optimization: This section requires the claude CLI tool (specifically claude -p) which is only available in Claude Code. Skip it if you're on Claude.ai.

Blind comparison: Requires subagents. Skip it.

Packaging: The package_skill.py script works anywhere with Python and a filesystem. On Claude.ai, you can run it and the user can download the resulting .skill file.

Updating an existing skill: The user might be asking you to update an existing skill, not create a new one. In this case:

  • Preserve the original name. Note the skill's directory name and name frontmatter field -- use them unchanged. E.g., if the installed skill is research-helper, output research-helper.skill (not research-helper-v2).
  • Copy to a writeable location before editing. The installed skill path may be read-only. Copy to /tmp/skill-name/, edit there, and package from the copy.
  • If packaging manually, stage in /tmp/ first, then copy to the output directory -- direct writes may fail due to permissions.


Cowork-Specific Instructions

If you're in Cowork, the main things to know are:

  • You have subagents, so the main workflow (spawn test cases in parallel, run baselines, grade, etc.) all works. (However, if you run into severe problems with timeouts, it's OK to run the test prompts in series rather than parallel.)
  • You don't have a browser or display, so when generating the eval viewer, use --static <output_path> to write a standalone HTML file instead of starting a server. Then proffer a link that the user can click to open the HTML in their browser.
  • For whatever reason, the Cowork setup seems to disincline Claude from generating the eval viewer after running the tests, so just to reiterate: whether you're in Cowork or in Claude Code, after running tests, always generate the eval viewer with generate_review.py (not your own boutique HTML) so the human can look at examples before you revise the skill and try to make corrections yourself. Sorry in advance but I'm gonna go all caps here: GENERATE THE EVAL VIEWER BEFORE evaluating outputs yourself. You want to get them in front of the human ASAP!
  • Feedback works differently: since there's no running server, the viewer's "Submit All Reviews" button will download feedback.json as a file. You can then read it from there (you may have to request access first).
  • Packaging works β€” package_skill.py just needs Python and a filesystem.
  • Description optimization (run_loop.py / run_eval.py) should work in Cowork just fine since it uses claude -p via subprocess, not a browser, but please save it until you've fully finished making the skill and the user agrees it's in good shape.
  • Updating an existing skill: The user might be asking you to update an existing skill, not create a new one. Follow the update guidance in the claude.ai section above.

Reference files

The agents/ directory contains instructions for specialized subagents. Read them when you need to spawn the relevant subagent.

  • agents/grader.md — How to evaluate assertions against outputs
  • agents/comparator.md — How to do blind A/B comparison between two outputs
  • agents/analyzer.md — How to analyze why one version beat another

The references/ directory has additional documentation:

  • references/schemas.md — JSON structures for evals.json, grading.json, etc.


Repeating one more time the core loop here for emphasis:

  • Figure out what the skill is about
  • Draft or edit the skill
  • Run claude-with-access-to-the-skill on test prompts
  • With the user, evaluate the outputs:
      • Create benchmark.json and run eval-viewer/generate_review.py to help the user review them
      • Run quantitative evals
  • Repeat until you and the user are satisfied
  • Package the final skill and return it to the user.

Please add steps to your TodoList, if you have such a thing, to make sure you don't forget. If you're in Cowork, please specifically put "Create evals JSON and run eval-viewer/generate_review.py so human can review test cases" in your TodoList to make sure it happens.

Good luck!

About Skills

Skills are modular, self-contained packages that extend Claude's capabilities by providing specialized knowledge, workflows, and tools. Think of them as "onboarding guides" for specific domains or tasks—they transform Claude from a general-purpose agent into a specialized agent equipped with procedural knowledge that no model can fully possess.

What Skills Provide

  1. Specialized workflows - Multi-step procedures for specific domains
  2. Tool integrations - Instructions for working with specific file formats or APIs
  3. Domain expertise - Company-specific knowledge, schemas, business logic
  4. Bundled resources - Scripts, references, and assets for complex and repetitive tasks

Anatomy of a Skill

Every skill consists of a required SKILL.md file and optional bundled resources:

skill-name/
├── SKILL.md (required)
│   ├── YAML frontmatter metadata (required)
│   │   ├── name: (required)
│   │   └── description: (required)
│   └── Markdown instructions (required)
└── Bundled Resources (optional)
    ├── scripts/          - Executable code (Python/Bash/etc.)
    ├── references/       - Documentation intended to be loaded into context as needed
    └── assets/           - Files used in output (templates, icons, fonts, etc.)

SKILL.md (required)

Metadata Quality: The name and description in YAML frontmatter determine when Claude will use the skill. Be specific about what the skill does and when to use it. Use the third-person (e.g. "This skill should be used when..." instead of "Use this skill when...").

Bundled Resources (optional)

Scripts (scripts/)

Executable code (Python/Bash/etc.) for tasks that require deterministic reliability or are repeatedly rewritten.

  • When to include: When the same code is being rewritten repeatedly or deterministic reliability is needed
  • Example: scripts/rotate_pdf.py for PDF rotation tasks
  • Benefits: Token efficient, deterministic, may be executed without loading into context
  • Note: Scripts may still need to be read by Claude for patching or environment-specific adjustments
References (references/)

Documentation and reference material intended to be loaded as needed into context to inform Claude's process and thinking.

  • When to include: For documentation that Claude should reference while working
  • Examples: references/finance.md for financial schemas, references/mnda.md for company NDA template, references/policies.md for company policies, references/api_docs.md for API specifications
  • Use cases: Database schemas, API documentation, domain knowledge, company policies, detailed workflow guides
  • Benefits: Keeps SKILL.md lean, loaded only when Claude determines it's needed
  • Best practice: If files are large (>10k words), include grep search patterns in SKILL.md
  • Avoid duplication: Information should live in either SKILL.md or references files, not both. Prefer references files for detailed information unless it's truly core to the skill—this keeps SKILL.md lean while making information discoverable without hogging the context window. Keep only essential procedural instructions and workflow guidance in SKILL.md; move detailed reference material, schemas, and examples to references files.
Assets (assets/)

Files not intended to be loaded into context, but rather used within the output Claude produces.

  • When to include: When the skill needs files that will be used in the final output
  • Examples: assets/logo.png for brand assets, assets/slides.pptx for PowerPoint templates, assets/frontend-template/ for HTML/React boilerplate, assets/font.ttf for typography
  • Use cases: Templates, images, icons, boilerplate code, fonts, sample documents that get copied or modified
  • Benefits: Separates output resources from documentation, enables Claude to use files without loading them into context

Progressive Disclosure Design Principle

Skills use a three-level loading system to manage context efficiently:

  1. Metadata (name + description) - Always in context (~100 words)
  2. SKILL.md body - When skill triggers (<5k words)
  3. Bundled resources - As needed by Claude (Unlimited*)

*Unlimited because scripts can be executed without reading into context window.

Skill Creation Process

To create a skill, follow the "Skill Creation Process" in order, skipping steps only if there is a clear reason why they are not applicable.

Step 1: Understanding the Skill with Concrete Examples

Skip this step only when the skill's usage patterns are already clearly understood. It remains valuable even when working with an existing skill.

To create an effective skill, clearly understand concrete examples of how the skill will be used. This understanding can come from either direct user examples or generated examples that are validated with user feedback.

For example, when building an image-editor skill, relevant questions include:

  • "What functionality should the image-editor skill support? Editing, rotating, anything else?"
  • "Can you give some examples of how this skill would be used?"
  • "I can imagine users asking for things like 'Remove the red-eye from this image' or 'Rotate this image'. Are there other ways you imagine this skill being used?"
  • "What would a user say that should trigger this skill?"

To avoid overwhelming users, don't ask too many questions in a single message. Start with the most important questions and follow up as needed.

Conclude this step when there is a clear sense of the functionality the skill should support.

Step 2: Planning the Reusable Skill Contents

To turn concrete examples into an effective skill, analyze each example by:

  1. Considering how to execute on the example from scratch
  2. Identifying what scripts, references, and assets would be helpful when executing these workflows repeatedly

Example: When building a pdf-editor skill to handle queries like "Help me rotate this PDF," the analysis shows:

  1. Rotating a PDF requires re-writing the same code each time
  2. A scripts/rotate_pdf.py script would be helpful to store in the skill
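
Such a bundled script might look like this (a minimal sketch using the pypdf library; the exact interface is up to the skill author):

import sys
from pypdf import PdfReader, PdfWriter

def rotate_pdf(in_path: str, out_path: str, degrees: int = 90) -> None:
    reader = PdfReader(in_path)
    writer = PdfWriter()
    for page in reader.pages:
        page.rotate(degrees)  # clockwise; must be a multiple of 90
        writer.add_page(page)
    with open(out_path, "wb") as f:
        writer.write(f)

if __name__ == "__main__":
    # Usage: python rotate_pdf.py input.pdf output.pdf [degrees]
    rotate_pdf(sys.argv[1], sys.argv[2], int(sys.argv[3]) if len(sys.argv) > 3 else 90)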

Example: When designing a frontend-webapp-builder skill for queries like "Build me a todo app" or "Build me a dashboard to track my steps," the analysis shows:

  1. Writing a frontend webapp requires the same boilerplate HTML/React each time
  2. An assets/hello-world/ template containing the boilerplate HTML/React project files would be helpful to store in the skill

Example: When building a big-query skill to handle queries like "How many users have logged in today?" the analysis shows:

  1. Querying BigQuery requires re-discovering the table schemas and relationships each time
  2. A references/schema.md file documenting the table schemas would be helpful to store in the skill

For Claude Code plugins: When building a hooks skill, the analysis shows:

  1. Developers repeatedly need to validate hooks.json and test hook scripts
  2. scripts/validate-hook-schema.sh and scripts/test-hook.sh utilities would be helpful
  3. references/patterns.md for detailed hook patterns to avoid bloating SKILL.md

To establish the skill's contents, analyze each concrete example to create a list of the reusable resources to include: scripts, references, and assets.

Step 3: Create Skill Structure

For Claude Code plugins, create the skill directory structure:

mkdir -p plugin-name/skills/skill-name/{references,examples,scripts}
touch plugin-name/skills/skill-name/SKILL.md

Note: Unlike the generic skill-creator which uses init_skill.py, plugin skills are created directly in the plugin's skills/ directory with a simpler manual structure.

Step 4: Edit the Skill

When editing the (newly-created or existing) skill, remember that the skill is being created for another instance of Claude to use. Focus on including information that would be beneficial and non-obvious to Claude. Consider what procedural knowledge, domain-specific details, or reusable assets would help another Claude instance execute these tasks more effectively.

Start with Reusable Skill Contents

To begin implementation, start with the reusable resources identified above: scripts/, references/, and assets/ files. Note that this step may require user input. For example, when implementing a brand-guidelines skill, the user may need to provide brand assets or templates to store in assets/, or documentation to store in references/.

Also, delete any example files and directories not needed for the skill. Create only the directories you actually need (references/, examples/, scripts/).

Update SKILL.md

Writing Style: Write the entire skill using imperative/infinitive form (verb-first instructions), not second person. Use objective, instructional language (e.g., "To accomplish X, do Y" rather than "You should do X" or "If you need to do X"). This maintains consistency and clarity for AI consumption.

Description (Frontmatter): Use third-person format with specific trigger phrases:

---
name: Skill Name
description: This skill should be used when the user asks to "specific phrase 1", "specific phrase 2", "specific phrase 3". Include exact phrases users would say that should trigger this skill. Be concrete and specific.
version: 0.1.0
---

Good description examples:

description: This skill should be used when the user asks to "create a hook", "add a PreToolUse hook", "validate tool use", "implement prompt-based hooks", or mentions hook events (PreToolUse, PostToolUse, Stop).

Bad description examples:

description: Use this skill when working with hooks.  # Wrong person, vague
description: Load when user needs hook help.  # Not third person
description: Provides hook guidance.  # No trigger phrases

To complete SKILL.md body, answer the following questions:

  1. What is the purpose of the skill, in a few sentences?
  2. When should the skill be used? (Include this in frontmatter description with specific triggers)
  3. In practice, how should Claude use the skill? All reusable skill contents developed above should be referenced so that Claude knows how to use them.

Keep SKILL.md lean: Target 1,500-2,000 words for the body. Move detailed content to references/:

  • Detailed patterns → references/patterns.md
  • Advanced techniques → references/advanced.md
  • Migration guides → references/migration.md
  • API references → references/api-reference.md

Reference resources in SKILL.md:

## Additional Resources

### Reference Files

For detailed patterns and techniques, consult:
- **`references/patterns.md`** - Common patterns
- **`references/advanced.md`** - Advanced use cases

### Example Files

Working examples in `examples/`:
- **`example-script.sh`** - Working example

Step 5: Validate and Test

For plugin skills, validation is different from generic skills:

  1. Check structure: Skill directory in plugin-name/skills/skill-name/
  2. Validate SKILL.md: Has frontmatter with name and description
  3. Check trigger phrases: Description includes specific user queries
  4. Verify writing style: Body uses imperative/infinitive form, not second person
  5. Test progressive disclosure: SKILL.md is lean (~1,500-2,000 words), detailed content in references/
  6. Check references: All referenced files exist
  7. Validate examples: Examples are complete and correct
  8. Test scripts: Scripts are executable and work correctly
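
Several of these checks can be scripted. A minimal sketch for check 2 (the path is illustrative and the frontmatter parsing is deliberately naive):

from pathlib import Path

text = Path("plugin-name/skills/skill-name/SKILL.md").read_text()
assert text.startswith("---"), "SKILL.md must begin with YAML frontmatter"

frontmatter = text.split("---")[1]  # naive: grabs the block between the first two --- markers
for field in ("name:", "description:"):
    assert field in frontmatter, f"frontmatter is missing {field!r}"
print("frontmatter OK")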

Use the skill-reviewer agent:

Ask: "Review my skill and check if it follows best practices"

The skill-reviewer agent will check description quality, content organization, and progressive disclosure.

Step 6: Iterate

After testing the skill, users may request improvements. Often this happens right after using the skill, with fresh context of how the skill performed.

Iteration workflow:

  1. Use the skill on real tasks
  2. Notice struggles or inefficiencies
  3. Identify how SKILL.md or bundled resources should be updated
  4. Implement changes and test again

Common improvements:

  • Strengthen trigger phrases in description
  • Move long sections from SKILL.md to references/
  • Add missing examples or scripts
  • Clarify ambiguous instructions
  • Add edge case handling

Plugin-Specific Considerations

Skill Location in Plugins

Plugin skills live in the plugin's skills/ directory:

my-plugin/
├── .claude-plugin/
│   └── plugin.json
├── commands/
├── agents/
└── skills/
    └── my-skill/
        ├── SKILL.md
        ├── references/
        ├── examples/
        └── scripts/

Auto-Discovery

Claude Code automatically discovers skills:

  • Scans skills/ directory
  • Finds subdirectories containing SKILL.md
  • Loads skill metadata (name + description) always
  • Loads SKILL.md body when skill triggers
  • Loads references/examples when needed

No Packaging Needed

Plugin skills are distributed as part of the plugin, not as separate ZIP files. Users get skills when they install the plugin.

Testing in Plugins

Test skills by installing plugin locally:

# Test with --plugin-dir
claude --plugin-dir /path/to/plugin

# Ask questions that should trigger the skill
# Verify skill loads correctly

Progressive Disclosure in Practice

What Goes in SKILL.md

Include (always loaded when skill triggers):

  • Core concepts and overview
  • Essential procedures and workflows
  • Quick reference tables
  • Pointers to references/examples/scripts
  • Most common use cases

Keep under 3,000 words, ideally 1,500-2,000 words
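
A quick way to run this check (a sketch; the path is illustrative and the frontmatter split is naive):

from pathlib import Path

text = Path("skills/my-skill/SKILL.md").read_text()
body = text.split("---", 2)[-1]  # drop the YAML frontmatter block
print(len(body.split()), "words in SKILL.md body")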

What Goes in references/

Move to references/ (loaded as needed):

  • Detailed patterns and advanced techniques
  • Comprehensive API documentation
  • Migration guides
  • Edge cases and troubleshooting
  • Extensive examples and walkthroughs

Each reference file can be large (2,000-5,000+ words)

What Goes in examples/

Working code examples:

  • Complete, runnable scripts
  • Configuration files
  • Template files
  • Real-world usage examples

Users can copy and adapt these directly

What Goes in scripts/

Utility scripts:

  • Validation tools
  • Testing helpers
  • Parsing utilities
  • Automation scripts

Should be executable and documented

Writing Style Requirements

Imperative/Infinitive Form

Write using verb-first instructions, not second person:

Correct (imperative):

To create a hook, define the event type.
Configure the MCP server with authentication.
Validate settings before use.

Incorrect (second person):

You should create a hook by defining the event type.
You need to configure the MCP server.
You must validate settings before use.

Third-Person in Description

The frontmatter description must use third person:

Correct:

description: This skill should be used when the user asks to "create X", "configure Y"...

Incorrect:

description: Use this skill when you want to create X...
description: Load this skill when user asks...

Objective, Instructional Language

Focus on what to do, not who should do it:

Correct:

Parse the frontmatter using sed.
Extract fields with grep.
Validate values before use.

Incorrect:

You can parse the frontmatter...
Claude should extract fields...
The user might validate values...

Validation Checklist

Before finalizing a skill:

Structure:

  - [ ] SKILL.md file exists with valid YAML frontmatter
  - [ ] Frontmatter has name and description fields
  - [ ] Markdown body is present and substantial
  - [ ] Referenced files actually exist

Description Quality:

  - [ ] Uses third person ("This skill should be used when...")
  - [ ] Includes specific trigger phrases users would say
  - [ ] Lists concrete scenarios ("create X", "configure Y")
  - [ ] Not vague or generic

Content Quality:

  - [ ] SKILL.md body uses imperative/infinitive form
  - [ ] Body is focused and lean (1,500-2,000 words ideal, <5k max)
  - [ ] Detailed content moved to references/
  - [ ] Examples are complete and working
  - [ ] Scripts are executable and documented

Progressive Disclosure:

  - [ ] Core concepts in SKILL.md
  - [ ] Detailed docs in references/
  - [ ] Working code in examples/
  - [ ] Utilities in scripts/
  - [ ] SKILL.md references these resources

Testing:

  - [ ] Skill triggers on expected user queries
  - [ ] Content is helpful for intended tasks
  - [ ] No duplicated information across files
  - [ ] References load when needed

Common Mistakes to Avoid

Mistake 1: Weak Trigger Description

❌ Bad:

description: Provides guidance for working with hooks.

Why bad: Vague, no specific trigger phrases, not third person

✅ Good:

description: This skill should be used when the user asks to "create a hook", "add a PreToolUse hook", "validate tool use", or mentions hook events. Provides comprehensive hooks API guidance.

Why good: Third person, specific phrases, concrete scenarios

Mistake 2: Too Much in SKILL.md

❌ Bad:

skill-name/
└── SKILL.md  (8,000 words - everything in one file)

Why bad: Bloats context when skill loads, detailed content always loaded

✅ Good:

skill-name/
├── SKILL.md  (1,800 words - core essentials)
└── references/
    ├── patterns.md (2,500 words)
    └── advanced.md (3,700 words)

Why good: Progressive disclosure, detailed content loaded only when needed

Mistake 3: Second Person Writing

❌ Bad:

You should start by reading the configuration file.
You need to validate the input.
You can use the grep tool to search.

Why bad: Second person, not imperative form

✅ Good:

Start by reading the configuration file.
Validate the input before processing.
Use the grep tool to search for patterns.

Why good: Imperative form, direct instructions

Mistake 4: Missing Resource References

❌ Bad:

# SKILL.md

[Core content]

[No mention of references/ or examples/]

Why bad: Claude doesn't know references exist

✅ Good:

# SKILL.md

[Core content]

## Additional Resources

### Reference Files
- **`references/patterns.md`** - Detailed patterns
- **`references/advanced.md`** - Advanced techniques

### Examples
- **`examples/script.sh`** - Working example

Why good: Claude knows where to find additional information

Quick Reference

Minimal Skill

skill-name/
└── SKILL.md

Good for: Simple knowledge, no complex resources needed

Standard Skill

skill-name/
├── SKILL.md
├── references/
│   └── detailed-guide.md
└── examples/
    └── working-example.sh

Good for: Most plugin skills with detailed documentation

Complete Skill

skill-name/
├── SKILL.md
├── references/
│   ├── patterns.md
│   └── advanced.md
├── examples/
│   ├── example1.sh
│   └── example2.json
└── scripts/
    └── validate.sh

Good for: Complex domains with validation utilities

Best Practices Summary

✅ DO:

  • Use third-person in description ("This skill should be used when...")
  • Include specific trigger phrases ("create X", "configure Y")
  • Keep SKILL.md lean (1,500-2,000 words)
  • Use progressive disclosure (move details to references/)
  • Write in imperative/infinitive form
  • Reference supporting files clearly
  • Provide working examples
  • Create utility scripts for common operations

❌ DON'T:

  • Use second person anywhere ("You should...")
  • Have vague trigger conditions
  • Put everything in SKILL.md (>3,000 words without references/)
  • Leave resources unreferenced
  • Include broken or incomplete examples
  • Skip validation

Implementation Workflow

To create a skill for your plugin:

  1. Understand use cases: Identify concrete examples of skill usage
  2. Plan resources: Determine what scripts/references/examples are needed
  3. Create structure: mkdir -p skills/skill-name/{references,examples,scripts}
  4. Write SKILL.md:
     • Frontmatter with third-person description and trigger phrases
     • Lean body (1,500-2,000 words) in imperative form
     • Reference supporting files
  5. Add resources: Create references/, examples/, scripts/ as needed
  6. Validate: Check description, writing style, organization
  7. Test: Verify skill loads on expected triggers
  8. Iterate: Improve based on usage

Focus on strong trigger descriptions, progressive disclosure, and imperative writing style for effective skills that load when needed and provide targeted guidance.