How to Write Your First SKILL.md — Complete Guide for AI Coding Agents


If you have ever copied the same instructions into CLAUDE.md, .cursorrules, and Copilot config files, you already understand the problem Agent Skills solve. One modular SKILL.md file replaces all of that — and it works across more than 30 AI coding tools without modification.
This tutorial walks you through writing your first SKILL.md from an empty directory to a fully functional, cross-platform skill.
Prerequisites
Before starting, ensure you have:
- An AI coding tool installed (Claude Code, GitHub Copilot, Cursor, OpenAI Codex, or Google Gemini CLI)
- A project repository where you want to add skills
- Basic familiarity with YAML and Markdown syntax
- A terminal and text editor
If you are new to the Agent Skills standard, read our overview article on Agent Skills and the SKILL.md specification first.
What You Will Build
By the end of this tutorial you will have two working Agent Skills:
- A code review skill that activates when you ask for PR reviews, code audits, or quality checks
- A deployment checker skill that runs pre-deploy validation with a bundled shell script
Both skills will work across Claude Code, Codex, Copilot, Cursor, Gemini CLI, and JetBrains Junie.
Step 1: Create the Skills Directory
Agent Skills live in a .agents/skills/ directory at your repository root. This is the cross-platform convention recognized by all compatible tools.
mkdir -p .agents/skills/code-review
mkdir -p .agents/skills/deploy-checker
Each skill gets its own directory. The directory name is for human organization — the agent reads the name field from SKILL.md, not the folder name.
Your project structure now looks like this:
your-project/
├── .agents/
│   └── skills/
│       ├── code-review/
│       │   └── (SKILL.md goes here)
│       └── deploy-checker/
│           └── (SKILL.md goes here)
├── src/
└── package.json
Tool-specific directories also work. Claude Code reads from .claude/skills/, Cursor from .cursor/skills/, and so on. But .agents/skills/ is the universal location that every tool recognizes. Use it for maximum portability.
Step 2: Write the YAML Frontmatter
Every SKILL.md starts with YAML frontmatter wrapped in triple dashes. This is the metadata the agent reads at startup.
Create .agents/skills/code-review/SKILL.md and add:
---
name: code-review
description: >
  Reviews code changes for quality, security, and adherence to project
  conventions. Use when the user asks to 'review code', 'check this PR',
  'audit this change', 'look at my diff', or 'review my changes'.
---
Required Fields
| Field | Purpose |
|---|---|
| name | Unique identifier for the skill. Use kebab-case. |
| description | Tells the agent what the skill does and when to activate it. This is the most critical field. |
Optional Fields
| Field | Purpose |
|---|---|
| allowed-tools | Restricts which tools the agent can use when this skill activates (e.g., Read, Grep, Glob for read-only skills). |
| version | Semantic version for tracking changes. |
The description field deserves special attention — it is the trigger mechanism that determines whether your skill ever activates.
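Putting the required and optional fields together, a complete frontmatter block might look like this (the allowed-tools list and version value here are illustrative, not required):

```yaml
---
name: code-review
description: >
  Reviews code changes for quality, security, and adherence to project
  conventions. Use when the user asks to 'review code', 'check this PR',
  'audit this change', 'look at my diff', or 'review my changes'.
# Optional: read-only tools are enough for a review skill
allowed-tools: Read, Grep, Glob
# Optional: track revisions to the skill
version: 1.0.0
---
```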
Step 3: Write a Strong Trigger Description
The description is not documentation for humans. It is the primary signal an AI agent uses to decide whether to load this skill for a given request.
Weak description (will rarely activate):
description: Helps with code quality
Strong description (activates reliably):
description: >
  Reviews code changes for quality, security, and adherence to project
  conventions. Use when the user asks to 'review code', 'check this PR',
  'audit this change', 'look at my diff', or 'review my changes'.
What Makes a Description Effective
- State what the skill does in the first sentence
- List specific trigger phrases the user might say — quoted and comma-separated
- Include synonyms for the same action (review, audit, check, look at)
- Keep it under 200 words — this is loaded on every conversation start
Testing Your Description
After writing it, mentally test with these prompts:
- "Can you review this PR?" — should activate
- "Check my code for security issues" — should activate
- "Write a new function" — should NOT activate
- "Deploy to production" — should NOT activate
If any test fails, refine the trigger phrases.
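If you want a quick offline sanity check, you can grep the description for the verbs you expect users to type. Note that this is only a heuristic: the agent matches skills by reading the description with the model, not by keyword search.

```shell
#!/bin/sh
# Heuristic coverage check: confirm each expected trigger verb
# appears somewhere in the description text.
desc="Reviews code changes for quality, security, and adherence to project
conventions. Use when the user asks to 'review code', 'check this PR',
'audit this change', 'look at my diff', or 'review my changes'."

for verb in review check audit look; do
  if printf '%s' "$desc" | grep -qi "$verb"; then
    echo "covered: $verb"
  else
    echo "MISSING: $verb"
  fi
done
```

A verb reported as MISSING is a candidate trigger phrase to add before you test against the real agent.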
Step 4: Write the Instruction Body
Below the frontmatter, write the markdown body. This is the actual instruction set loaded when the skill activates.
---
name: code-review
description: >
  Reviews code changes for quality, security, and adherence to project
  conventions. Use when the user asks to 'review code', 'check this PR',
  'audit this change', 'look at my diff', or 'review my changes'.
---
# Code Review
## Workflow
1. Read the diff or changed files using `git diff` or the provided file paths
2. Check for security vulnerabilities against the OWASP Top 10
3. Verify error handling — no empty catch blocks, no swallowed errors
4. Check code style against project conventions (linting rules, naming)
5. Look for performance issues — unnecessary loops, missing indexes, N+1 queries
6. Verify test coverage — new public methods should have tests
7. Provide structured feedback
## Output Format
Structure every review as three sections:
### Critical (must fix before merge)
- Security vulnerabilities
- Data loss risks
- Breaking changes without migration
### Warning (should fix)
- Missing error handling
- Performance concerns
- Incomplete test coverage
### Suggestion (nice-to-have)
- Code style improvements
- Readability enhancements
- Documentation gaps
## Rules
- Reference specific line numbers in feedback
- Provide a fix suggestion for every issue flagged
- If the change looks good, say so — do not invent problems
- Never approve code with known security vulnerabilities
Writing Tips
- Use imperative voice. "Read the diff" not "The diff should be read."
- Be specific. "Check for SQL injection in query parameters" not "Check for security issues."
- Keep it under 5,000 tokens. If longer, split into multiple skills or move reference material to separate files.
- Structure with headers. The agent navigates sections by heading.
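There is no built-in token counter, but a common rule of thumb is roughly four characters per token, so a quick wc-based estimate (an approximation; real tokenizers vary) can warn you when a body drifts past the 5,000-token guideline:

```shell
#!/bin/sh
# Rule-of-thumb token estimate: ~4 characters per token.
# Real tokenizers vary; treat this as a rough guide only.
skill_md=".agents/skills/code-review/SKILL.md"
[ -f "$skill_md" ] || skill_md=/dev/null   # fall back if run outside the repo
chars=$(wc -c < "$skill_md")
tokens=$(( chars / 4 ))
echo "~$tokens tokens"
if [ "$tokens" -gt 5000 ]; then
  echo "Consider splitting the skill or moving material to references/"
fi
```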
Step 5: Understand Progressive Disclosure
Agent Skills use a three-tier loading system that saves tokens:
Tier 1: Metadata (always loaded)
At startup, the agent reads only the name and description from frontmatter — roughly 100 tokens per skill. With 50 skills installed, that is about 5,000 tokens total. The agent knows what each skill does without reading a single instruction.
Tier 2: Instructions (loaded on activation)
When a user request matches a skill description, the full SKILL.md body is loaded — typically under 5,000 tokens. This is where your workflow, rules, and output format live.
Tier 3: Resources (loaded as needed)
Scripts, reference files, and templates in the skill directory are loaded only when the instructions reference them. A skill can bundle megabytes of documentation, but only the relevant section enters the context window.
This means 50 focused skills consume fewer tokens than one monolithic config file. The monolithic file loads everything on every request. Skills load only what is needed.
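The arithmetic behind that claim is easy to check (the 100-token and 5,000-token figures are this article's rough estimates, not spec guarantees):

```shell
#!/bin/sh
# Back-of-envelope token budget for progressive disclosure.
# Figures are illustrative estimates from this article.
skills=50
tier1_per_skill=100     # name + description metadata, always loaded
tier2_body=5000         # one activated skill body

startup=$(( skills * tier1_per_skill ))       # paid on every conversation start
one_request=$(( startup + tier2_body ))       # startup plus one active skill

echo "startup metadata: $startup tokens"
echo "request with one active skill: $one_request tokens"
```

Fifty skills cost about 5,000 tokens at startup, and a request that activates one of them stays around 10,000 tokens; a monolithic config of the same total size would pay its full cost on every request.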
Step 6: Build the Deployment Checker Skill
Now let us build a more advanced skill that bundles an executable script.
Create .agents/skills/deploy-checker/SKILL.md:
---
name: deploy-checker
description: >
  Validates deployment readiness before pushing to staging or production.
  Use when the user says 'check deploy', 'pre-deploy check', 'is this
  ready to deploy', 'deployment readiness', or 'can we ship this'.
allowed-tools: Read, Grep, Glob, Bash
---
# Deployment Readiness Check
## Workflow
1. Run the deployment readiness script:
```bash
bash .agents/skills/deploy-checker/scripts/check_deploy.sh
```
2. Review the script output for any FAIL results
3. If all checks pass, confirm deployment readiness
4. If any check fails, explain the failure and suggest a fix
## Interpreting Results
- **PASS** — check succeeded, no action needed
- **WARN** — not blocking but worth noting
- **FAIL** — must be resolved before deployment
## Escalation
If the script cannot determine status for a check, manually verify:
- Database migrations are up to date
- Environment variables are set for the target environment
- No unmerged dependencies in package-lock.json or yarn.lock
Create the Bundled Script
Create .agents/skills/deploy-checker/scripts/check_deploy.sh:
#!/bin/bash
# Pre-deployment readiness checks
echo "=== Deployment Readiness Check ==="
echo ""

# Check 1: No uncommitted changes
if [ -z "$(git status --porcelain)" ]; then
  echo "PASS: Working directory clean"
else
  echo "FAIL: Uncommitted changes detected"
  git status --short
fi

# Check 2: Tests pass
if npm test --silent 2>/dev/null; then
  echo "PASS: Tests passing"
else
  echo "FAIL: Tests failing"
fi

# Check 3: Build succeeds
if npm run build --silent 2>/dev/null; then
  echo "PASS: Build succeeds"
else
  echo "FAIL: Build broken"
fi

# Check 4: No TODO/FIXME in staged changes
if git diff --cached | grep -qi "TODO\|FIXME\|HACK"; then
  echo "WARN: TODO/FIXME found in staged changes"
else
  echo "PASS: No TODO/FIXME in staged changes"
fi

# Check 5: Dependencies up to date
if [ -f "package-lock.json" ]; then
  if git diff HEAD --name-only | grep -q "package.json"; then
    if git diff HEAD --name-only | grep -q "package-lock.json"; then
      echo "PASS: Lock file updated with package.json"
    else
      echo "FAIL: package.json changed but lock file not updated"
    fi
  else
    echo "PASS: Dependencies unchanged"
  fi
fi

echo ""
echo "=== Check Complete ==="
Notice the allowed-tools field in the frontmatter. This skill needs Bash to execute the script, but a read-only skill like a code analyzer could restrict itself to Read, Grep, Glob — limiting the blast radius if the skill is compromised.
Step 7: Add Reference Files (Optional)
For skills that need extensive documentation, add reference files rather than stuffing everything into SKILL.md.
code-review/
├── SKILL.md
├── references/
│ ├── owasp-top-10-checklist.md
│ └── style-guide-summary.md
└── assets/
└── review-template.md
In your SKILL.md, reference them when needed:
## Security Checks
For the full OWASP checklist, read `references/owasp-top-10-checklist.md`.
Focus on these five for quick reviews:
1. SQL injection in query parameters
2. XSS in user-rendered content
3. Broken authentication flows
4. Sensitive data exposure in logs
5. Missing access control checks
The agent only loads owasp-top-10-checklist.md if it needs the full list — saving tokens on routine reviews.
Step 8: Test Your Skills
Activation Testing
Open your AI coding tool and test these prompts:
Should activate code-review:
- "Review this PR"
- "Check my latest changes for issues"
- "Audit the auth module"
- "Look at my diff"
Should activate deploy-checker:
- "Are we ready to deploy?"
- "Run pre-deploy checks"
- "Can we ship this to production?"
Should NOT activate either:
- "Write a new API endpoint"
- "Explain how this function works"
- "Refactor the database layer"
Instruction Quality Testing
After activation, verify the agent follows your instructions:
- Does the code review use the three-tier output format (Critical / Warning / Suggestion)?
- Does the deploy checker run the shell script?
- Are line numbers referenced in review feedback?
If the skill activates but instructions are not followed, the body needs clearer imperative language.
Step 9: Install Personal Skills (Global)
Skills in .agents/skills/ are project-scoped. For skills you want available across all projects, install them in your home directory:
mkdir -p ~/.agents/skills/my-global-skill
Global skills are useful for personal workflows — your preferred commit message format, your debugging checklist, your documentation style. Project skills are for team-shared procedures.
Precedence: Project skills override global skills with the same name.
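That precedence rule can be sketched as a lookup function. The paths are the conventional locations; each tool implements its own resolution, so treat this as an illustration rather than any tool's actual code:

```shell
#!/bin/sh
# Illustrative lookup: a project skill shadows a global skill of the same name.
resolve_skill() {
  name="$1"
  if [ -d ".agents/skills/$name" ]; then
    echo ".agents/skills/$name"        # project-scoped wins
  elif [ -d "$HOME/.agents/skills/$name" ]; then
    echo "$HOME/.agents/skills/$name"  # global fallback
  else
    return 1                            # skill not installed anywhere
  fi
}
```

With a code-review skill defined in both places, resolve_skill code-review prints the project path.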
Step 10: Cross-Platform Verification
Your skills should work without modification across all compatible tools. Here is a verification checklist:
| Tool | Skill Location | How to Verify |
|---|---|---|
| Claude Code | .agents/skills/ or .claude/skills/ | Run claude in the project directory |
| OpenAI Codex | .agents/skills/ | Open Codex in the project |
| GitHub Copilot | .agents/skills/ or .github/skills/ | Open VS Code with Copilot |
| Cursor | .agents/skills/ or .cursor/skills/ | Open project in Cursor |
| Google Gemini CLI | .agents/skills/ or .gemini/skills/ | Run gemini in the project directory |
| JetBrains Junie | .agents/skills/ | Open project in JetBrains IDE with Junie |
If you placed your skills in .agents/skills/, all six tools will find them automatically. No duplication needed.
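Before opening each tool, a quick find from the repository root confirms what they should all discover:

```shell
#!/bin/sh
# List every skill manifest the cross-platform tools should discover.
find .agents/skills -name 'SKILL.md' 2>/dev/null | sort
```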
Troubleshooting
Skill never activates
Cause: The description does not match user prompts closely enough.
Fix: Add more trigger phrases. Include both formal ("review code") and informal ("check my stuff") variations.
Skill activates but instructions are ignored
Cause: Instructions are too vague or use passive voice.
Fix: Rewrite in imperative voice with specific actions. "Run npm test" instead of "Tests should be run."
Token budget exceeded
Cause: SKILL.md body exceeds 5,000 tokens.
Fix: Move reference material to separate files in references/. Only keep the core workflow in SKILL.md.
Script permission denied
Cause: Bundled scripts lack execute permissions.
Fix: Run chmod +x .agents/skills/deploy-checker/scripts/check_deploy.sh, or have the skill instructions invoke the script with bash explicitly.
Security Considerations
Skills are privileged instructions. A malicious skill can direct an agent to:
- Execute arbitrary shell commands
- Exfiltrate code or credentials
- Modify files outside the project
- Fetch content from external URLs (prompt injection vector)
Before installing third-party skills:
- Read every line of SKILL.md and all bundled scripts
- Check for unexpected network calls (curl, wget, fetch)
- Look for file access outside the project directory
- Verify the source is reputable
- Use allowed-tools to restrict capabilities
Best Practices Summary
| Practice | Why |
|---|---|
| One skill per task domain | Focused activation, lower token cost |
| Under 5,000 tokens for SKILL.md body | Stays within context budget |
| Imperative voice throughout | Agents follow commands, not suggestions |
| 5-10 test prompts per skill | Catch activation failures early |
| .agents/skills/ directory | Maximum cross-platform compatibility |
| Version control your skills | Team shares the same procedures |
| allowed-tools on sensitive skills | Limits blast radius |
Next Steps
- Read the full Agent Skills specification overview for architecture details
- Explore Agent Skills for business teams to scale skills across your organization
- Browse the community awesome-agent-skills repository on GitHub for inspiration
- Consider a professional skill audit if you want expert optimization of your team's skill library
Conclusion
Writing your first SKILL.md takes under ten minutes. The payoff is immediate — your AI coding agent activates the right instructions for the right task, uses fewer tokens, and works identically across every tool in your stack.
Start with one skill for your most repeated workflow. Test it, refine the description, and commit it to your repository. Then build from there.
Need help building a custom skill library for your engineering team? Noqta offers Agent Skills consulting — from individual skill development to enterprise-wide skill governance and security audits.