
The Subagent Void: Why Your AI Sub-Agents Are Working Blind

Max Jürschik·March 23, 2026·5 min read

You gave the sub-agent a task. You forgot to give it a brain.

Claude Code's Agent tool is powerful. You dispatch a sub-agent, it runs in parallel, comes back with results. You can fan out three, five, ten agents at once. Security review on one thread, performance audit on another, documentation sweep on a third.

Here is the problem nobody talks about: every sub-agent starts naked.

No project context. No domain expertise. No loaded skills. No memory of what the parent agent knows. Just base Claude with a task description and whatever files it can find. It is the equivalent of hiring three senior contractors and handing each of them a sticky note that says "fix the thing."

I call this the Subagent Void.


What actually happens when you spawn a sub-agent

When Claude Code uses the Agent tool, it creates a fresh execution context. The sub-agent inherits the working directory and MCP connections. That is it. Everything else is gone:

  • Skills loaded by the parent? Gone. The sub-agent has no system prompt beyond the task.
  • Project architecture understanding? Gone. The sub-agent has to rediscover it from files.
  • Domain methodology? Gone. No frameworks, no checklists, no expert heuristics.

So your parent agent spent 30 seconds loading a security skill with 47 OWASP-aligned checks, threat modeling frameworks, and supply chain patterns. Then it dispatches a sub-agent to "review the auth module for security issues." The sub-agent gets... vibes. Generic best practices. "Make sure to validate inputs."

This is not a bug. It is an architectural gap in how agentic AI systems handle context propagation. The parent has expertise. The children do not. And nobody bridges that gap automatically.


Why MCP connections are the fix

Here is what most people miss: sub-agents inherit MCP server connections from the parent.

That means if you have SupaSkills connected as an MCP server, every sub-agent can call search_skills, load_skill, and ask_skills. The parent's loaded skills do not transfer. But the ability to load skills does.

The fix is dead simple. Instead of dispatching a sub-agent with:

"Review the authentication module for security vulnerabilities"

You dispatch it with:

"Review the authentication module for security vulnerabilities.
First, use the search_skills MCP tool to find relevant security
review skills. Load the best match. Then apply its methodology
to the auth module."

Three extra sentences. The sub-agent goes from generic to expert-level in one tool call. It loads a skill with specific checklists, threat models, and frameworks. The review quality jumps from "surface-level suggestions" to "production-grade audit."
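The augmentation is just string assembly, so it is easy to make systematic. A minimal sketch in Python; the helper name `with_skill_loading` is mine for illustration, not part of any SDK:

```python
def with_skill_loading(task: str, domain: str) -> str:
    """Prefix a sub-agent task with the skill-loading preamble."""
    return (
        f"{task}\n"
        f"First, use the search_skills MCP tool to find relevant "
        f"{domain} skills. Load the best match with load_skill. "
        "Then apply its methodology to the task above."
    )

prompt = with_skill_loading(
    "Review the authentication module for security vulnerabilities.",
    "security review",
)
print(prompt)
```

Dispatching every sub-agent through a wrapper like this means no one on the team can forget the preamble.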


A concrete example: parallel expert dispatch

I tested this with a real codebase review. Three sub-agents, dispatched in parallel:

Agent 1: Security review. The prompt includes "search for security code review skills on supaskills, load the best match." The agent finds security-code-reviewer (SupaScore 86.5), loads it, and runs a review using STRIDE threat modeling, checking for 12 injection vectors and validating RBAC patterns. Output: 23 specific findings with severity ratings.

Agent 2: Performance audit. Same pattern. Finds performance-debugging-guide (SupaScore 84.25), loads it. Gets the USE method (Utilization, Saturation, Errors), flame graph analysis patterns, N+1 query detection. Output: 8 bottlenecks with measured impact estimates.

Agent 3: Git workflow. Finds git-worktree-workflow, loads it. Gets proper worktree setup, branch isolation, parallel development patterns. Output: restructured workflow with CI integration.

Without skills, each agent produced 400 to 600 words of generic advice. With skills, each produced 1,200 to 2,000 words of methodology-driven, specific analysis. Same model, same codebase, same task descriptions. The only difference: three sentences telling the agent to load a skill first.
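The fan-out itself is plain parallel dispatch. A runnable sketch of the shape, with everything stubbed: `dispatch_subagent` is a placeholder standing in for Claude Code's Agent tool, and the task list mirrors the three agents above:

```python
from concurrent.futures import ThreadPoolExecutor

def dispatch_subagent(prompt: str) -> str:
    # Placeholder for the Agent tool: in Claude Code, this is where the
    # sub-agent actually runs. Here it just echoes the prompt head.
    return f"[result for: {prompt[:40]}...]"

def build_prompt(domain: str, task: str) -> str:
    # Same three-sentence preamble as in the text.
    return (
        f"{task} First, use the search_skills MCP tool to find "
        f"{domain} skills, load the best match, and apply its methodology."
    )

tasks = [
    ("security review", "Review the auth module for vulnerabilities."),
    ("performance audit", "Profile the API layer for bottlenecks."),
    ("git workflow", "Restructure branching for parallel development."),
]

# Fan out all three sub-agents at once.
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(
        lambda t: dispatch_subagent(build_prompt(*t)), tasks
    ))

for r in results:
    print(r)
```

The point of the sketch is only the structure: each dispatch carries its own skill-loading instruction, so no sub-agent starts in the void.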


SkillStreaming makes it even cheaper

Loading a full skill uses an activation slot. Free users get 3 slots; Pro gets 42. If you are fanning out 5 sub-agents, that is 5 slots consumed.

SkillStreaming (ask_skills) solves this. Instead of loading a full 12,000-token skill, the sub-agent calls ask_skills and gets focused SubSkill fragments from across 1,279 skills. No activation slot required. Token budget between 2,500 and 5,500 depending on query complexity. Cross-domain assembly happens automatically.

The sub-agent prompt becomes:

"Review the authentication module for security vulnerabilities.
Use the ask_skills MCP tool to get relevant security expertise
before starting. Apply whatever frameworks and checklists it
returns."

One MCP call, ~500ms, and the sub-agent has methodology from 3 to 5 relevant skills without burning a single slot. The 13,381 SubSkill chunks are searchable by any agent with MCP access.

For heavy parallel workloads (10+ sub-agents), SkillStreaming is the correct pattern. You get 80% of the expertise at 20% of the token cost, with no slot pressure.
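Choosing between the two patterns can be a one-line branch on fan-out size. A hedged sketch; the function name and the slot-budget default are illustrative, not a SupaSkills API:

```python
def skill_prompt(task: str, domain: str, fanout: int, slot_budget: int = 3) -> str:
    """Pick the skill-loading preamble based on how many agents fan out."""
    if fanout > slot_budget:
        # SkillStreaming path: ask_skills consumes no activation slot,
        # so it scales to heavy parallel workloads.
        return (
            f"{task}\nUse the ask_skills MCP tool to get relevant "
            f"{domain} expertise before starting. Apply whatever "
            "frameworks and checklists it returns."
        )
    # Full-skill path: one activation slot per sub-agent, full methodology.
    return (
        f"{task}\nFirst, use the search_skills MCP tool to find "
        f"{domain} skills and load the best match with load_skill."
    )

print(skill_prompt("Review the auth module.", "security", fanout=10))
```

With a default budget of 3, a ten-agent fan-out gets the streaming preamble, while a two-agent dispatch still loads full skills for depth.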


The pattern for your own agents

If you are building agentic systems with Claude Code, here is the minimal setup:

  1. Connect SupaSkills MCP in your claude_desktop_config.json or .mcp.json
  2. Include skill-loading instructions in every sub-agent dispatch prompt
  3. Use ask_skills for breadth (multi-domain, no slots) and load_skill for depth (single-domain, full methodology)
  4. Be specific about the domain in the prompt. "Find security skills" beats "find relevant skills"
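For step 1, the entry follows Claude Code's standard mcpServers shape. The server name and launch command below are placeholders; check the SupaSkills docs for the actual install command:

```json
{
  "mcpServers": {
    "supaskills": {
      "command": "npx",
      "args": ["-y", "supaskills-mcp"]
    }
  }
}
```

Once this is in .mcp.json (project scope) or claude_desktop_config.json (desktop), every sub-agent spawned in that session inherits the connection.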

The Subagent Void is a solvable problem. Your sub-agents do not need to be born experts. They just need access to a skill catalog and three sentences of instruction telling them to use it.

1,279 skills. 13,381 SubSkill chunks. 6 domains. All accessible from any sub-agent with an MCP connection.

The void is optional.