Your skill’s code is perfect. The gotchas section covers every edge case. The file structure follows progressive disclosure. And nobody is using it.
The problem is almost certainly your description field.
When Anthropic engineer Thariq shared how the Claude Code team builds skills internally, one point kept getting overlooked in the discussion that followed: the description field is for the model, not for humans. It’s not a summary. It’s not marketing copy. It’s the single piece of text that determines whether your skill ever gets activated at all.
This article is for skill creators, whether you’re publishing to AgentSkillExchange, sharing within a team, or building for your own workflow. If your skill exists but people report it “doesn’t seem to trigger,” this is the fix.
How Claude Decides Which Skill to Load
Before we get into writing better descriptions, you need to understand the activation pipeline. When a user sends a message to Claude Code, here’s what happens:
- Claude reads every installed skill’s description: not the full SKILL.md, just the description field from the frontmatter.
- It evaluates which skill (if any) matches the current request. This is a semantic matching step, not keyword search. Claude is reading for intent and context alignment.
- If a skill matches, Claude loads the full SKILL.md and follows its instructions.
- If multiple skills could match, Claude picks the most specific one. Vague descriptions lose this competition every time.
The critical insight: steps 1 and 2 happen before your skill’s actual instructions are ever read. If the description doesn’t convince Claude that this skill is relevant to the current task, the rest of your SKILL.md, no matter how brilliant, never gets loaded.
Think of it like a search engine index. Google doesn’t read your entire page to decide whether to rank it for a query. It uses signals. The description field is your primary signal.
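To make that first, index-only pass concrete, here is a minimal sketch of reading only the frontmatter description from each skill. The skills/&lt;name&gt;/SKILL.md layout and the parsing code are illustrative assumptions, not Claude Code’s actual implementation:

```python
import os
import re

def read_descriptions(skills_dir):
    """First-pass index: read only the frontmatter description from each
    SKILL.md, never the body, mirroring the activation pipeline's step 1."""
    descriptions = {}
    for name in sorted(os.listdir(skills_dir)):
        path = os.path.join(skills_dir, name, "SKILL.md")
        if not os.path.isfile(path):
            continue
        with open(path, encoding="utf-8") as f:
            text = f.read()
        # Frontmatter is the block between the first pair of --- lines.
        fm = re.match(r"---\n(.*?)\n---", text, re.DOTALL)
        if not fm:
            continue
        # Handle both single-line values and the folded '>' multi-line form.
        m = re.search(
            r"^description:[ \t]*(?:>[ \t]*\n((?:[ \t]+.*\n?)+)|(.+))",
            fm.group(1), re.MULTILINE)
        if m:
            raw = m.group(1) or m.group(2)
            descriptions[name] = " ".join(raw.split())
    return descriptions
```

The point of the sketch: nothing below the closing `---` ever reaches the activation decision, so the body of your SKILL.md cannot rescue a weak description.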
What Bad Descriptions Look Like
Here are real patterns we see in skills submitted to AgentSkillExchange that result in near-zero activation rates:
The vague summary
description: A skill for working with databases.
This tells Claude nothing about when to use it. Working with databases how? Schema design? Query optimization? Migration generation? Connection debugging? Claude has no way to distinguish this from any other database-related skill, so it defaults to not loading it unless the user explicitly mentions the skill name.
The feature list
description: >
This skill supports PostgreSQL, MySQL, and SQLite.
It can generate queries, optimize indexes, and create ERD diagrams.
Compatible with Docker and local installations.
Feature lists describe what a skill can do but not when it should activate. Claude doesn’t need to know your compatibility matrix during the activation decision; it needs to know what kind of user request maps to this skill.
The marketing pitch
description: >
The ultimate database management solution for modern development teams.
Streamline your workflow and boost productivity with intelligent query assistance.
“Streamline your workflow” triggers nothing. Claude doesn’t respond to adjectives and aspirational language in the same way a human reading a landing page might. The description needs to contain the actual phrases and scenarios that users will express.
What Good Descriptions Look Like
An effective description has three components: trigger scenarios, specific phrases, and explicit exclusions.
A complete example
description: >
Use when debugging slow database queries, optimizing SQL performance,
or analyzing query execution plans. Triggers on: "query is slow",
"EXPLAIN ANALYZE", "missing index", "full table scan", "n+1 queries",
"query optimization", "why is this query slow". Also handles index
recommendations and query rewriting. NOT for: schema design (use
db-architect skill), migrations (use db-migrate skill), or
connection issues (use db-connect skill).
Let’s break down why each part matters:
Trigger scenarios (“Use when debugging slow database queries…”) tell Claude the high-level situations where this skill applies. This is the semantic matching layer: Claude understands intent, so describing the scenario in natural language is effective.
Specific phrases (“query is slow”, “EXPLAIN ANALYZE”, “full table scan”…) cover the exact strings users actually type or paste. When someone pastes an error message containing “full table scan” or asks “why is this query slow,” these phrases create a direct match. Include error messages, common question patterns, and technical terms your users would use.
Explicit exclusions (“NOT for: schema design… migrations…”) are equally important. They prevent your skill from stealing activations from more appropriate skills. Without exclusions, a broad database skill might activate when the user actually needs a migration tool, leading to a poor experience and wasted context window space.
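The three components can be pulled apart mechanically, which is useful for reviewing descriptions in bulk. A rough sketch, assuming the “Use when … Triggers on: … NOT for: …” layout shown above (the regexes are illustrative, not a formal grammar):

```python
import re

def dissect(description):
    """Split a description into its three components: trigger scenarios,
    quoted trigger phrases, and explicit exclusions. Assumes the
    'Use when ... Triggers on: ... NOT for: ...' layout."""
    scenario = re.search(r"Use when (.+?)(?:\.|Triggers)", description, re.DOTALL)
    phrases = re.findall(r'"([^"]+)"', description)
    exclusions = re.search(r"NOT for:\s*(.+)\Z", description, re.DOTALL)
    return {
        "scenarios": scenario.group(1).strip() if scenario else None,
        "trigger_phrases": phrases,
        "exclusions": exclusions.group(1).strip() if exclusions else None,
    }
```

If any of the three comes back empty for one of your descriptions, that component is the one to add first.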
The Exclusion Problem
This is the part most skill authors skip entirely, and it causes real problems at scale.
When a user has 15 or 20 skills installed, overlapping descriptions create activation confusion. Claude has to pick one, and without explicit boundary markers, it picks based on whichever description seems broadest, which is often wrong.
Consider a team that has three skills installed:
- db-query-optimizer: for slow query analysis
- db-schema-designer: for schema design and ERD generation
- db-migration-tool: for generating and running migrations
If db-query-optimizer’s description says “A skill for working with databases and SQL,” it will intercept requests meant for the other two skills. The user asks “generate a migration for adding an index” and gets the query optimizer instead of the migration tool.
Exclusions fix this. When the optimizer’s description says “NOT for: migrations (use db-migration-tool),” Claude knows to route correctly.
On AgentSkillExchange, we’ve seen skills go from 12% activation accuracy to over 85% just by adding clear exclusions. The content of the skill didn’t change at all; only the description field did.
Measuring Whether Your Description Works
There’s no built-in analytics dashboard for skill activation rates (yet), but you can test empirically. Here’s a practical approach:
1. Write ten test prompts
Come up with ten realistic messages that should trigger your skill. Not contrived ones, but actual things a user would type. For a code review skill, that might include:
- “Review this PR for security issues”
- “Check if this function handles edge cases properly”
- “Does this code follow our team’s style guide?”
- “I’m seeing a linting error on line 42”
- “What’s wrong with this implementation?”
2. Write five negative test prompts
Come up with five messages that should not trigger your skill:
- “Write unit tests for this function” (testing skill, not review)
- “Refactor this class to use composition” (refactoring skill)
- “Deploy this to staging” (deployment skill)
3. Test with your skill installed
Send each prompt and check whether your skill activated. If activation accuracy is below 80%, revise the description and test again.
Anthropic’s Skill Creator tool can help with this iteration cycle: it includes measurement steps for evaluating skill performance against real tasks.
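Before running live prompts, you can also smoke-test phrase coverage offline. This is only a crude lexical proxy (Claude’s matching is semantic, not substring search), but it catches descriptions whose quoted trigger phrases never actually appear in realistic prompts:

```python
import re

def phrase_hits(description, prompt):
    """Count quoted trigger phrases from a description that appear verbatim
    (case-insensitively) in a test prompt. Real activation is semantic, so
    treat zero hits as a warning sign, not a definitive miss."""
    phrases = re.findall(r'"([^"]+)"', description)
    prompt_lower = prompt.lower()
    return sum(1 for p in phrases if p.lower() in prompt_lower)
```

Run your ten positive prompts through this first; if most of them score zero, your trigger phrases don’t match how users actually phrase things, and the live test will likely fail too.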
Common Patterns That Work
After reviewing hundreds of skills on AgentSkillExchange, here are the description patterns with the highest activation accuracy:
The scenario-first pattern
description: >
Use when [scenario 1], [scenario 2], or [scenario 3].
Triggers on: "[phrase]", "[phrase]", "[phrase]".
NOT for: [out-of-scope task] (use [other-skill]).
This is the most reliable format. It covers both semantic understanding (scenarios) and exact matching (trigger phrases).
The error-message pattern
For skills that handle specific error states, include the actual error text:
description: >
Use when Kubernetes pods are failing or restarting unexpectedly.
Triggers on: "CrashLoopBackOff", "OOMKilled", "ImagePullBackOff",
"pod keeps restarting", "container exit code 137", "kubectl describe
pod shows error". NOT for: cluster provisioning (use k8s-setup),
Helm chart creation (use helm-skill).
Users frequently paste error messages directly into their prompts. Including those exact strings in your description creates near-perfect activation matching.
The workflow pattern
For skills that handle multi-step processes:
description: >
Use when deploying code to production, setting up CI/CD pipelines,
or managing deployment environments. Covers the full deploy workflow:
pre-deploy checks, deployment execution, post-deploy verification,
and rollback. Triggers on: "deploy to prod", "release to staging",
"CI pipeline failing", "rollback deployment". NOT for: code review
(use review-skill), testing (use test-runner).
Length and Token Considerations
Descriptions should be comprehensive but not excessive. Every installed skill’s description is loaded into context during the activation check, which means unnecessarily long descriptions burn tokens across every single request, even when the skill isn’t relevant.
Based on what we’ve observed:
- Too short (under 30 words): Not enough signal for accurate matching
- Ideal range (50-120 words): Enough for scenarios, trigger phrases, and exclusions
- Too long (over 200 words): Wastes tokens and can actually reduce matching accuracy because the signal gets diluted
If you find yourself writing a 300-word description, that’s a sign your skill’s scope is too broad. Consider splitting it into multiple focused skills.
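These thresholds are easy to enforce in a pre-publish check. A small sketch using the word-count ranges above (the thresholds are the article’s observations, not hard limits):

```python
def check_length(description):
    """Classify a description by word count, using the ranges above:
    under 30 too short, 50-120 ideal, over 200 too long."""
    n = len(description.split())
    if n < 30:
        return f"too short ({n} words): add scenarios and trigger phrases"
    if n > 200:
        return f"too long ({n} words): consider splitting the skill"
    if 50 <= n <= 120:
        return f"ok ({n} words)"
    return f"borderline ({n} words): aim for 50-120"
```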
Real Before-and-After: The SEO Content Writer Skill
Here’s an actual example from a skill published on AgentSkillExchange. The original description:
description: Creates high-quality, SEO-optimized content.
The revised description:
description: >
Use when the user asks to "write SEO content", "create a blog post",
"write an article", "content writing", "draft optimized content",
"write me an article", "create a blog post about", "help me write
SEO content", or "draft content for". Creates high-quality,
SEO-optimized content that ranks in search engines. Applies on-page
SEO best practices, keyword optimization, and content structure for
maximum visibility. For AI citation optimization, see
geo-content-optimizer. For updating existing content, see
content-refresher.
The revised version includes eight specific trigger phrases, a brief functional description, and two explicit exclusions pointing to related skills. This is what comprehensive coverage looks like.
The Description Checklist
Before publishing any skill, verify your description against this list:
- Starts with “Use when…” to establish activation context
- Includes at least 5 trigger phrases users would actually type
- Includes relevant error messages or technical terms if applicable
- Has “NOT for:” exclusions if other skills cover adjacent territory
- Falls between 50 and 120 words
- Describes when to activate, not just what the skill does
- Avoids marketing language (“ultimate,” “streamline,” “boost”)
- Tested against at least 10 realistic prompts
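The mechanical items on this list can be linted automatically; prompt testing and semantic quality still need the manual process above. A heuristic sketch:

```python
import re

def lint_description(description):
    """Check a description against the mechanical checklist items.
    Heuristic only; cannot verify prompt-testing or semantic quality."""
    problems = []
    if not description.lstrip().startswith("Use when"):
        problems.append("does not start with 'Use when'")
    if len(re.findall(r'"([^"]+)"', description)) < 5:
        problems.append("fewer than 5 quoted trigger phrases")
    if "NOT for" not in description:
        problems.append("no 'NOT for:' exclusions")
    words = len(description.split())
    if not 50 <= words <= 120:
        problems.append(f"length {words} words (aim for 50-120)")
    marketing = {"ultimate", "streamline", "boost"}
    hits = marketing & {w.strip(".,").lower() for w in description.split()}
    if hits:
        problems.append(f"marketing language: {sorted(hits)}")
    return problems
```

An empty result means the description passes the mechanical checks; it still needs the ten-prompt activation test before publishing.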
Frequently Asked Questions
Does the description field support markdown or formatting?
No. The description field is plain text. Claude reads it as a single string. Avoid markdown headers, bullet points, or other formatting; they don’t render and can introduce noise. Use natural language sentences and comma-separated lists for trigger phrases.
Can I update the description after publishing?
Yes, and you should. The best descriptions are iterative. Publish, observe which prompts trigger your skill correctly (and which don’t), then refine. Each revision cycle improves activation accuracy. On ClawHub and AgentSkillExchange, updating your description is a version bump.
How many trigger phrases should I include?
Between 5 and 15 is the practical range. Fewer than 5 and you’ll miss common phrasings. More than 15 and you risk diluting the signal: Claude might match your skill to tangentially related requests just because one of your many phrases partially overlaps.
Should I include the skill name in the description?
Not necessary. Claude already knows the skill name from the frontmatter. Use the description space for trigger context, not for restating metadata that’s already available elsewhere.
What Comes Next
The description field is your skill’s front door. Get it right, and everything else (the gotchas, the progressive disclosure, the reference files) gets a chance to do its job. Get it wrong, and none of that work matters because Claude never loads it.
If you have an existing skill with a vague description, revise it now. Use the checklist above, test with ten realistic prompts, and iterate. You’ll likely see immediate improvement in activation rates.
For more on building effective skills, check out our complete guide based on Thariq’s insights and the step-by-step tutorial for building your first skill. The AgentSkills.io specification also covers the full frontmatter reference, including description field formatting.
And if you’ve built a skill with a well-crafted description that’s getting strong activation rates, submit it to AgentSkillExchange. The marketplace needs more skills that actually trigger when they should.