Key Takeaway
The Command, Agent, and Skill orchestration pattern is the difference between using Claude Code as a fancy autocomplete and using it as a development team. Commands encode repeatable workflows that run identically every time. Agents handle autonomous sub-problems with their own context and tool access. Skills inject on-demand capabilities without polluting your main conversation. Together, they let you build complex development workflows that are reproducible, parallelizable, and context-efficient.
The Three-Layer Architecture
Most developers use Claude Code in a single conversation, typing instructions one at a time and waiting for results. This works for simple tasks, but it breaks down for complex workflows. You end up repeating the same instructions across sessions, losing context when conversations get long, and manually coordinating tasks that could run in parallel. The three-layer architecture solves each of these problems with a dedicated mechanism.
Commands are stored prompts that you invoke with a slash. They live in .claude/commands/ as markdown files with YAML frontmatter. When you type /check, Claude loads the corresponding command file and executes its instructions exactly. No ambiguity, no drift, no forgotten steps. They are the foundation layer.
Agents are sub-Claude instances that run in their own context with their own tool access. They handle a specific sub-problem and return a result. The main Claude session orchestrates them like a tech lead delegating to team members. They are the autonomy layer.
Skills are on-demand capability injections. When Claude encounters a task that requires specialized knowledge, it can load a skill that provides context, instructions, and tool access for that specific domain. Once the task is complete, the skill context is released. They are the specialization layer.
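On disk, these three layers map to subdirectories of .claude/. The commands/ path is the one described below; the agents/ and skills/ directory names shown here follow the common convention but are illustrative:

```
.claude/
├── commands/   # stored workflows invoked as slash commands
├── agents/     # agent definitions with their own model and tool access
└── skills/     # skill modules loaded on demand
```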
Commands: Repeatable Stored Workflows
A command is a markdown file in .claude/commands/ that Claude executes when you type the corresponding slash command. The file can contain YAML frontmatter for metadata and a body with instructions. The body supports string substitutions that inject dynamic values at runtime, making commands flexible without making them ambiguous.
Command Frontmatter Fields
The frontmatter block at the top of a command file defines its metadata. The allowed-tools field restricts which tools the command can use. The description field appears in the slash command autocomplete menu. The model field pins the command to a specific model, useful for cost control on routine tasks.
String Substitutions
Commands support several dynamic substitutions that are resolved at runtime. $ARGUMENTS inserts whatever text the user typed after the slash command. $FILE inserts the file currently open in the editor. $SELECTION inserts the currently selected text. These let you build commands that adapt to context without hardcoding paths or content.
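As a sketch of how these substitutions combine, here is a hypothetical .claude/commands/explain.md (the filename and body are illustrative, not part of the command library below):

```markdown
---
description: Explain the selected code, honoring any extra instructions
allowed-tools:
  - Read
---
Explain the following code from $FILE:

$SELECTION

Additional instructions from the user, if any: $ARGUMENTS
```

Typing /explain make this simpler would resolve $FILE and $SELECTION from the editor state and $ARGUMENTS to "make this simpler" before Claude sees the prompt.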
Essential Commands Library
```markdown
---
description: Run full quality check — types, lint, build
allowed-tools:
  - Bash
---
Run the following checks in order. Stop at the first failure and report what went wrong.
1. pnpm --filter web check-types
2. pnpm --filter web lint
3. pnpm --filter web build
If all pass, respond with a summary of what was checked and confirmed clean.
If any fail, show the errors and suggest specific fixes.
```

```markdown
---
description: Review code changes before committing
allowed-tools:
  - Bash
  - Read
---
Review the current staged and unstaged changes:
1. Run git diff to see all changes
2. For each changed file, analyze:
   - Does it follow the conventions in CLAUDE.md?
   - Are there any type safety issues?
   - Are there any security concerns (exposed secrets, XSS vectors)?
   - Is the error handling adequate?
3. Provide a summary with:
   - Overall assessment (ship it / needs work / blocking issues)
   - Specific line-level feedback for any issues found
   - Suggestions for improvement (if any)
```

```markdown
---
description: Scaffold a new feature with standard structure
allowed-tools:
  - Bash
  - Read
  - Write
  - Edit
---
Create the scaffolding for a new feature: $ARGUMENTS
1. Determine which directory the feature belongs in based on the architecture in CLAUDE.md
2. Create the necessary files following existing patterns in the codebase
3. Add TypeScript types first, then implement the component or route
4. Include a basic test file if the project has a test directory
5. Update any barrel exports or route registrations
6. Run check-types to verify the scaffolding compiles
```

```markdown
---
description: Generate a new component with tests
allowed-tools:
  - Bash
  - Read
  - Write
  - Edit
model: sonnet
---
Scaffold a new component: $ARGUMENTS
1. Check existing component patterns in the components/ directory
2. Create the component file with proper TypeScript types and props interface
3. Use the project's styling approach (Tailwind classes, design tokens from CLAUDE.md)
4. Export from the nearest barrel file
5. Verify with check-types
```

Share commands across your team by committing the project's .claude/commands/ directory to version control. Personal commands live in ~/.claude/commands/ and are available in every project.
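Setting up both locations is plain shell work. The check.md content below is a placeholder standing in for one of your real command files:

```shell
# Personal commands: available in every project on this machine
mkdir -p "$HOME/.claude/commands"

# Team commands: live in the repository and travel with it
mkdir -p .claude/commands

# Placeholder command file standing in for a real workflow
cat > .claude/commands/check.md <<'EOF'
---
description: Run full quality check
allowed-tools:
  - Bash
---
Run the project checks in order and report the first failure.
EOF

# Then commit it so the whole team gets /check:
#   git add .claude/commands && git commit -m "add /check command"
```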
Agents: Autonomous Sub-Problem Solvers
Agents are sub-Claude instances spawned by the main session to handle a specific task. They run in their own context window, have their own tool access, and return a result to the parent session. The key benefit is isolation: an agent can explore a large codebase, run lengthy tests, or debug a complex issue without consuming context in the main conversation.
Use agents when a task is self-contained and its intermediate steps are not useful to the main conversation. For example, searching an entire codebase for all usages of a deprecated API and proposing replacements is a perfect agent task. The main session only needs the final list of changes, not the hundreds of file reads the agent performed to produce it.
Agent Configuration Template
```markdown
---
description: Analyze test coverage and suggest improvements
model: sonnet
allowed-tools:
  - Bash
  - Read
  - Glob
  - Grep
---
You are a test coverage analyst. Your job is to:
1. Find all test files in the project
2. Identify source files that have no corresponding tests
3. For each untested file, assess its complexity and risk level
4. Produce a prioritized list of files that need tests, ordered by risk
5. For the top 3 priorities, outline what the tests should cover
Return a structured report with:
- Coverage summary (files with tests vs without)
- Prioritized list with risk ratings
- Test outlines for top 3 priorities
```

Model Selection for Agents
Choosing the right model for each agent is a cost and quality tradeoff. Haiku is fast and cheap, ideal for routine tasks with clear instructions. Sonnet balances speed and quality for most development work. Opus provides the deepest reasoning for architectural decisions and complex debugging. The model field in the agent frontmatter pins it to a specific model regardless of what the main session is using.
| Model | Best For | Speed | Cost | Reasoning Depth |
|---|---|---|---|---|
| Haiku | File search, formatting, simple edits, boilerplate generation | Very fast | Lowest | Shallow, follows instructions literally |
| Sonnet | Feature implementation, code review, test writing, refactoring | Fast | Moderate | Good, handles multi-step reasoning well |
| Opus | Architecture decisions, complex debugging, security audits, system design | Slower | Highest | Deep, excels at nuanced tradeoffs and edge cases |
Skills: On-Demand Capability Injection
Skills are specialized capability modules that Claude can load on demand. There are two types: on-demand skills and agent skills. On-demand skills are loaded into the current conversation when needed and unloaded when done. Agent skills are attached to an agent and provide it with domain-specific knowledge and tools.
The practical difference is context management. An on-demand skill consumes tokens in your main conversation, so use it when you need the skill output to inform subsequent decisions. An agent skill runs in the agent context, so use it when the skill is needed for an isolated task whose intermediate results do not matter to the main conversation.
Skills solve the expertise breadth problem. No single prompt can make Claude an expert in your frontend framework, your database schema, your deployment pipeline, and your testing strategy simultaneously. Skills let you inject the right expertise at the right time without paying the context cost when that expertise is not needed.
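A skill is ultimately just a focused document. As an illustrative sketch (the file, its rules, and the exact frontmatter fields here are hypothetical; check your setup's skill format), a database-migration skill might look like:

```markdown
---
name: db-migrations
description: Conventions and safety checks for writing database migrations
---
When writing or reviewing a migration:
1. Every migration must be reversible; include a down step.
2. Never drop a column in the same release that stops writing to it.
3. Backfills on large tables run in batches, outside the schema migration.
4. Flag any migration that locks a hot table for review before merge.
```

Because the skill loads only when migration work comes up, these rules cost no context the rest of the time.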
The Orchestration Pattern
The full orchestration pattern combines all three layers. You start a session and use the main conversation for high-level planning and coordination. When you need to execute a repeatable workflow, you invoke a command. When you need an autonomous investigation or a parallel workstream, you spawn an agent. When you or an agent needs specialized knowledge, you load a skill. The main conversation stays lean and focused on decisions while the work happens in dedicated contexts.
1. Define your repeatable workflows as commands. Start with the workflows you repeat most often: quality checks, code review, scaffolding, deployment. Write each as a command file with clear instructions and appropriate tool restrictions. Test each command independently before combining them.
2. Identify agent-worthy sub-problems. Look for tasks that are self-contained, require significant exploration, and whose intermediate steps are not needed by the main conversation. Common examples: codebase-wide search and replace, test coverage analysis, dependency audit, performance profiling review.
3. Build skills for domain expertise. Create skills for the specialized domains your team works in: database migration patterns, API design conventions, security hardening checklists, accessibility audit procedures. Each skill should be a focused document that gives Claude the context to be an expert in that specific area.
4. Wire them together with progressive disclosure. Start simple. Use commands for the first week. Add agents when you find yourself repeating multi-step investigations. Add skills when you notice Claude lacking domain knowledge in specific areas. The system should grow organically from your actual pain points, not from a theoretical architecture.
Progressive Disclosure Principle
The biggest mistake teams make with Claude Code workflows is trying to build everything at once. They create twenty commands, five agents, and ten skills before anyone has used the system. The result is a maintenance burden with no proven value. Instead, follow the progressive disclosure principle: start with the simplest approach that works, and add complexity only when you have evidence that it is needed.
Week one: write three to five commands for your most common workflows. Week two: identify one or two tasks that would benefit from agent isolation and build those. Week three: notice where Claude lacks domain expertise and create targeted skills. Each addition should solve a specific, observed problem. If you cannot point to a concrete frustration that a new command, agent, or skill resolves, you do not need it yet.
- 5 min: average time saved per repeatable workflow. Commands eliminate the need to re-type and re-explain common workflows every session.
- 40%: context savings from agent delegation. Agents handle exploration-heavy tasks in their own context, keeping the main conversation lean.
- 3x: throughput increase with parallel agents. Multiple agents can work simultaneously on different parts of the codebase.