We Rated 22 Viral Vibe Coding Tips: Here's What Actually Works
Every week, a new thread goes viral: "50 Claude Code tips that changed my life." We read 22 of the most widely-shared ones, traced them back to their original sources, and scored each against a 4-criteria rubric. The results: 8 are genuinely high-leverage, 8 are context-dependent half-truths, and 6 are outright dangerous.
The Rubric
We score every tip on four dimensions (0-10 each):
| Criterion | What It Measures |
|---|---|
| Measurability | Can you objectively verify the outcome? (test pass rate, build time, defect rate) |
| Security & Scope | Does it reduce blast radius and prevent unreviewed code in production? |
| Context-Cost | Does it conserve or waste the LLM context window? |
| Portability | Is it repo-level config (portable) or personal ritual (fragile)? |
Only tips scoring 7+ average qualify as High-Leverage.
HIGH-LEVERAGE (Do These)
1. "Give Claude a way to verify its own work"
Source: Boris Cherny (Claude Code creator) | Score: 9.5/10
The single most impactful tip. Boris reports a 2-3x quality improvement when Claude can run tests, type-check, and lint its own output. John Kim: "If there's one tip that matters more than the other 49, it's this."
Implementation: Write tests yourself, then let Claude implement against them. Set up hooks that auto-run tsc --noEmit && eslint . after every file edit.
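The verification hook described above can be sketched as a small script Claude Code runs after every edit. A minimal sketch, assuming a TypeScript repo with tsc and eslint available via npx; the path and messages are illustrative:

```shell
# Sketch of a PostToolUse verification hook for a TypeScript project.
# The script re-runs the project's own checks after every file edit, so
# failures are fed straight back to Claude instead of waiting for CI.
mkdir -p .claude/hooks
cat > .claude/hooks/verify.sh <<'EOF'
#!/bin/sh
# Type-check and lint; exit code 2 surfaces the failure to Claude.
npx tsc --noEmit && npx eslint . || {
  echo "Verification failed -- fix the errors before continuing" >&2
  exit 2
}
EOF
chmod +x .claude/hooks/verify.sh
```

Register the script under a PostToolUse matcher in .claude/settings.json so it fires on every Write/Edit tool call.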
2. "Never send an LLM to do a linter's job"
Source: HumanLayer | Score: 9.0/10
Every CLAUDE.md instruction competes for model attention. ESLint enforces "use single quotes" at 100% with zero context cost. CLAUDE.md achieves ~70%. The math is clear.
Implementation: Move all style rules to .eslintrc and .prettierrc. Reserve CLAUDE.md for architecture and conventions that linters can't express.
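The split might look like this. A sketch only: the specific rules are illustrative, and it assumes the legacy .eslintrc.json format rather than ESLint's newer flat config:

```shell
# Move mechanical style rules out of CLAUDE.md into linter config,
# where they are enforced at 100% with zero context cost.
cat > .eslintrc.json <<'EOF'
{
  "rules": {
    "quotes": ["error", "single"],
    "semi": ["error", "always"]
  }
}
EOF
cat > .prettierrc <<'EOF'
{ "singleQuote": true, "printWidth": 100 }
EOF
# CLAUDE.md now keeps only what linters cannot express:
# architecture decisions, naming conventions, domain rules.
```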
3. "Use hooks for enforcement, not CLAUDE.md"
Source: HumanLayer, Anthropic Docs | Score: 9.0/10
CLAUDE.md instructions are probabilistic (~70% compliance). Shell hooks are deterministic (100%). For rules that must always hold ("never force-push", "always run tests before commit") hooks are the only reliable mechanism.
Implementation: Add a PreToolUse hook that blocks dangerous Bash commands. Add a PostToolUse hook that runs formatters after file writes. Exit code 2 = block.
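A minimal sketch of such a PreToolUse hook. Claude Code pipes the pending tool call as JSON on stdin; exiting 2 blocks it. The path and the blocked patterns are illustrative:

```shell
# Deterministic enforcement: a hook that blocks dangerous Bash commands
# before they run, regardless of what CLAUDE.md says.
mkdir -p .claude/hooks
cat > .claude/hooks/block-dangerous.sh <<'EOF'
#!/bin/sh
input=$(cat)   # the tool call arrives as JSON on stdin
# Block force-pushes and recursive deletes anywhere in the command.
if printf '%s' "$input" | grep -Eq 'git push[^"]*--force|rm -rf'; then
  echo "Blocked: dangerous command" >&2
  exit 2   # exit code 2 = block the tool call
fi
exit 0
EOF
chmod +x .claude/hooks/block-dangerous.sh
# Dry-run the hook with a fake tool call:
echo '{"tool_input":{"command":"git push --force"}}' \
  | .claude/hooks/block-dangerous.sh || echo "exit=$?"
```

Unlike a CLAUDE.md instruction, this check fires on every single Bash call with no attention budget spent.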
4. "Write tests yourself, let Claude implement"
Source: Addy Osmani, Eric Elliott | Score: 8.5/10
TDD as a safety net. When you write both tests and implementation, the AI marks its own homework. Eric Elliott: "The test becomes the specification. You remain the architect."
Warning: Only works if your tests are actually good. Shallow tests (just checking typeof result === 'object') give false confidence.
5. "Start a fresh session for execution after exploration"
Source: Multiple, Boris Cherny | Score: 8.5/10
Context pollution from exploratory reading degrades implementation quality. At 70% context, precision drops. At 85%, hallucinations increase. A fresh session with a clear spec produces better code than a long conversation that explored, then tried to implement.
Implementation: Explore in one session, write a spec or pseudo-code, then /clear or start a new session for implementation.
6. "Delete most of what /init generates"
Source: Anthropic PDF, HumanLayer | Score: 8.0/10
The default CLAUDE.md from /init is bloated. Keep your CLAUDE.md under 200 lines. Remove anything a linter can enforce. Remove anything obvious. Every line costs context tokens on every conversation turn.
Warning: Delete the noise, but make sure you replace it with targeted rules. An empty CLAUDE.md is worse than a bloated one.
7. "Add a rule the second time you see the same mistake"
Source: Builder.io | Score: 8.0/10
Reactive rules beat pre-filled templates. First occurrence: correct it manually. Second occurrence: add a CLAUDE.md rule. This prevents rule bloat while catching real patterns.
Implementation: Keep a scratch note. When Claude repeats an error, write a rule that prevents it. Include the "why" in the rule so the model understands the reasoning.
8. "If you do it more than once a day, make it a slash command"
Source: Boris Cherny, YK Dojo | Score: 7.5/10
Custom slash commands and skills encode your team's workflows. They load on demand (not always in context like CLAUDE.md), and they're shareable across projects.
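Creating one is just dropping a markdown file in the right place. A sketch, with a hypothetical command name and body:

```shell
# A project slash command is a markdown file under .claude/commands/.
# This one becomes /fix-lint inside Claude Code and loads only when invoked.
mkdir -p .claude/commands
cat > .claude/commands/fix-lint.md <<'EOF'
Run the linter, then fix every reported issue without changing behavior:

1. Run: npx eslint . --format unix
2. Fix each finding; re-run until the output is clean.
EOF
```

Commit the file and every teammate gets the same workflow for free.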
CONTEXT-DEPENDENT (Know When to Apply)
9. "Run multiple Claude instances in parallel with git worktrees"
Source: Boris Cherny | Score: 6.5/10
Boris runs 5 terminals + 5-10 web sessions. Powerful if you can review diffs across parallel streams. Dangerous if you rubber-stamp changes. Prerequisite: Good test coverage and the ability to read diffs quickly.
10. "Use Plan Mode for architecture decisions"
Source: Anthropic Docs | Score: 6.5/10
Useful for complex multi-file changes. Overkill for simple edits. The overhead is context tokens spent on planning that could go toward implementation. Use when: >3 files touched, new architecture pattern, unclear approach.
11. "Autonomous backlog processing loops"
Source: Ralph Wiggum pattern (HN) | Score: 6.0/10
A bash loop that feeds Claude a PRD and lets it work through tasks autonomously. Only works with: comprehensive test suites, well-specified tasks, and budget constraints (--max-budget-usd). Without those guardrails, you get expensive garbage.
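The loop itself is short. A sketch only: claude -p is print (non-interactive) mode, the task file layout is hypothetical, and --max-budget-usd is the budget flag the tip itself names:

```shell
# Autonomous backlog runner: one Claude invocation per task file,
# with a per-task budget cap and an early exit on failure.
mkdir -p tasks
cat > run-backlog.sh <<'EOF'
#!/bin/sh
for task in tasks/*.md; do
  claude -p "Implement the task in $task, then run the full test suite." \
    --max-budget-usd 5 || break   # stop on failure or budget exhaustion
done
EOF
chmod +x run-backlog.sh
```

Without the test suite acting as a gate inside each task, the loop just compounds mistakes.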
12. "Write pseudo-code, let the LLM translate"
Source: Multiple | Score: 6.0/10
LLMs are "fuzzy compilers," better at translation than design from scratch. This works well for algorithmic code but adds unnecessary overhead for simple CRUD operations.
13. "For debugging, ask for ideas first, not code"
Source: Community practice | Score: 6.0/10
Forces hypothesis exploration before jumping to solutions. Good for complex bugs. Unnecessary for obvious errors. The risk is spending context tokens on discussion instead of just fixing the issue.
14. "Keep CLAUDE.md under 500 tokens"
Source: Various listicles | Score: 5.5/10
The spirit is right (short is better), but the number is arbitrary. 500 tokens is too aggressive for complex projects. 200 lines (not tokens) is a better guideline. The real rule: every line should earn its place.
15. "Use @mentions to reference files instead of pasting code"
Source: Community practice | Score: 5.5/10
True for Cursor/Copilot. Partially true for Claude Code (which reads files directly). The deeper insight: let the AI read the source of truth, don't copy-paste stale code into the prompt.
16. "Commit after every successful change"
Source: Multiple | Score: 5.0/10
Good for reverting AI mistakes, but creates noisy git history. Better: use /rewind checkpoints within Claude Code sessions, and commit at logical milestones.

DANGEROUS (Never Do These)
17. "Use --dangerously-skip-permissions for speed"
Source: Various tips threads | Score: 1.0/10
Disables ALL safety rails: file writes, command execution, network access. Only acceptable in disposable containers with no network. On a real machine with SSH keys, AWS credentials, and production configs? Never.
18. "Don't read the code. If it runs, ship it."
Source: "Vibe coding" ethos | Score: 1.5/10
Creates "comprehension debt" (Addy Osmani). In the METR trial, experienced developers were 19% slower with AI assistance than without it, even while estimating they were faster. Stop reviewing, and you also lose the ability to maintain what the AI generates.
19. "Let the AI install whatever packages it suggests"
Source: Community practice | Score: 1.0/10
Slopsquatting: attackers register package names that AI models hallucinate. Lovable CVE-2025-48757 exposed 10.3% of apps through AI-suggested dependencies. Always verify package names independently.
20. "Just paste your error into Claude and let it fix everything"
Source: Beginner threads | Score: 2.0/10
Without context about what you changed, why, and what the expected behavior is, Claude will "fix" the error by introducing a new one. Debugging requires understanding, not pattern matching.
21. "One massive CLAUDE.md with everything"
Source: Over-enthusiastic adopters | Score: 2.5/10
A 2,000-line CLAUDE.md consumes context on every turn and creates instruction conflicts. Modular approach: CLAUDE.md (core 200 lines) + .claude/Instructions/*.md (loaded on demand) + skills (loaded only when relevant).
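The modular layout the tip describes can be bootstrapped in a few lines. A sketch with illustrative file names and rules:

```shell
# Core rules stay in a small CLAUDE.md; everything else moves to files
# that load on demand instead of on every conversation turn.
mkdir -p .claude/Instructions .claude/skills
cat > CLAUDE.md <<'EOF'
# Core rules (keep under ~200 lines)
- Run `npm test` before every commit.
- Per-area conventions live in .claude/Instructions/.
EOF
printf '%s\n' "Testing conventions live here." \
  > .claude/Instructions/testing.md
```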
22. "AI-generated tests are fine if they pass"
Source: Various | Score: 2.0/10
45-75% of vibe-coded apps contain security vulnerabilities (Veracode, SusVibes benchmark). AI writing both tests and implementation produces circular validation. The tests "prove" the code works because they were written to match the code, not the requirements.
Three Takeaways
- Verification beats instruction. The gap between "telling Claude what to do" and "giving Claude a way to check its work" is a 2-3x quality multiplier.
- Deterministic enforcement beats probabilistic compliance. Hooks > CLAUDE.md > comments > nothing. If a rule must always hold, make it a hook.
- Context is the scarce resource. Every CLAUDE.md line, every MCP tool, every conversation turn consumes context. Treat tokens like memory in an embedded system. Budget carefully.
Want Rules That Actually Work?
We distilled these 22 tips into an actionable, classified skill with a built-in evaluation rubric.
Load the Vibe Coding Playbook in Claude Code and get an evidence-based framework for evaluating any AI coding tip, including the ones that haven't been invented yet.
For advanced Claude Code patterns (subagents, hooks, CI integration), check out the Claude Code Power User skill.