So, after quite a bit of reluctance, I started using Claude Code a while ago, combined with Superpowers and some essential plugins. It really is the ultimate always-on (when Claude isn't down, lol) on-call pair coder.
What Claude Code Actually Is
Claude Code is a command-line agent. You install it, open it by running claude in the directory of a codebase, and it can read files, write files, run shell commands, execute tests, and iterate. It displays each action it's about to take and asks for permission before doing it. Working with it is closer to pairing with a really book-smart junior engineer.
The key mental shift: you are orchestrating, not prompting. You set up the context (what files matter, what constraints exist, you define what "done" looks like) and then Claude Code works toward that target. You intervene when it drifts.
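Claude Code picks up this kind of context from a CLAUDE.md file in the project root. Here's a minimal sketch of what that setup can look like; the paths and rules are hypothetical examples, not a required schema:

```markdown
# CLAUDE.md

## What matters
- Core logic lives in src/auth/ and src/payments/; ignore legacy/.

## Constraints
- Node 20. No new runtime dependencies without asking.
- All public functions need JSDoc.

## Definition of done
- `npm test` passes and `npm run lint` is clean before claiming completion.
```

With a file like this in place, you spend less time re-explaining the same constraints at the start of every session.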
```shell
$ npm install -g @anthropic-ai/claude-code
$ claude
# ** ready. Working directory: ~/Git/test
# Type a task or / for commands. Ctrl+C to exit.
> Refactor the auth module to use async/await throughout and add JSDoc to all public functions
```
That last line is a real task I gave it on day one. It read the file, made a plan I could see, executed it, ran my test suite, and fixed the one test that broke because it hadn't noticed a callback-style mock. Without me touching a keyboard except to hit enter twice.
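To make the task concrete, here's a sketch of the kind of transformation that refactor involves. These are hypothetical functions, not my actual auth module: a callback-style helper wrapped into an async/await interface with JSDoc.

```javascript
// before: callback style (hypothetical example)
function verifyTokenCb(token, cb) {
  setTimeout(() => {
    if (token === "valid") cb(null, { user: "alice" });
    else cb(new Error("invalid token"));
  }, 0);
}

/**
 * Verify an auth token and resolve with the session.
 * @param {string} token - Opaque auth token.
 * @returns {Promise<{user: string}>} Resolved session object.
 * @throws {Error} If the token is invalid.
 */
async function verifyToken(token) {
  return new Promise((resolve, reject) => {
    verifyTokenCb(token, (err, session) => (err ? reject(err) : resolve(session)));
  });
}
```

The callback-style mock that broke my test suite was exactly this shape: the tests stubbed the `cb`-based function, so the promisified version initially bypassed the stub.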
Superpowers
Superpowers is the plugin layer that sits on top of Claude Code. Think of it as a curated set of system prompts on steroids. Each plugin gives Claude Code a specialized toolset and knowledge base for that particular kind of task.
If you want to install it, type /plugins in a Claude Code session, search for "superpowers", highlight it, and hit enter to install. If you'd like to use it right away in the same session, you'll also need to run /reload-plugins.
Once Superpowers is installed, run /using-superpowers at the start of any new session. This is the one I missed for the first few weeks and it made a real difference when I found it. It bootstraps the whole skill-discovery system, so Claude knows how to find and load the right plugin before it responds to anything, including clarifying questions. Without it, you can end up getting generic answers before the right context has loaded.
Claude will now trigger the appropriate skill before any response.
You'll need to make this a habit. First thing you type in a new session. Beyond that, there are a handful of slash commands worth knowing. Three of them are technically deprecated but still work, and honestly the old names are clearer than the new ones:
- `/superpowers:brainstorming` kicks off an open-ended ideation session scoped to your codebase.
- `/superpowers:writing-plans` was my go-to for laying out a multi-step refactor before touching any code. Same idea: Superpowers helps you produce a structured plan document you can then feed back into an execution session.
- `/superpowers:executing-plans` is the other half of that pair. You hand it a plan, it works through the steps.
The Superpowers Commands I Use Most
Most of Superpowers' value isn't in the plugins. It's in these six slash commands. They're not modes you switch into, they're more like protocols you invoke at specific moments in your workflow. Once I started using them in the right order, the whole thing clicked.
/using-superpowers
Run this first. Every session. Before anything else. What it actually does is establish skill discovery, meaning Claude won't just guess at a response based on its general training. It will find and trigger the right skill before replying to anything, including your first clarifying question.
(I skipped this for weeks because it felt ceremonial. It isn't. Without it you're getting generic Claude, not Superpowers-aware Claude.)
/using-git-worktrees
Run this before any feature work that touches something you don't want to break mid-session. This creates an isolated git worktree for the work about to happen, with smart directory selection so it doesn't clobber your current workspace. The practical upside: Claude Code can work on the feature branch in isolation while you stay on main. If things go sideways you just discard the worktree. I now run this before executing any plan that touches more than two files.
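For context, here's roughly what the command sets up, done by hand. This is a self-contained demo in a scratch repo, not the command's actual implementation; the branch and directory names are made up:

```shell
# Hand-rolled version of an isolated worktree workflow (scratch repo for demo)
cd "$(mktemp -d)" && git init -q demo && cd demo
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "init"

# Create an isolated worktree on a new feature branch.
# Claude (or you) works in ../demo-feature while this checkout stays put.
git worktree add ../demo-feature -b feature/refactor

# If things go sideways, discard the worktree and the branch:
git worktree remove ../demo-feature
git branch -D feature/refactor
```

The point is that nothing in your main working copy is touched; the feature work lives in its own directory until you decide to keep or discard it.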
/executing-plans
This one pairs with the planning workflow. You write a plan first (in a separate session or via /superpowers:writing-plans), then hand it to this command to execute with review checkpoints between steps.
/verification-before-completion
This one I wish I'd had from the start. Run it before Claude declares anything done, fixed, or passing.
What it enforces is simple but important: Claude has to actually run the verification commands and show you the output before it can claim success. No more "the tests should be passing now." It has to prove it. I've caught a bunch of cases where it was about to commit something broken, purely because it was confident based on what it had written rather than on what had actually run.
Before most commits or PRs, I run this command.
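The discipline it enforces boils down to something you can sketch in a few lines of shell: run the check, capture the exit status, and only claim success on zero. The function name and the idea of passing your test runner as an argument are mine, purely for illustration:

```shell
# Hypothetical sketch: never claim "done" without a passing exit status.
claim_done() {
  "$@"   # run the verification command, e.g. claim_done npm test
  local status=$?
  if [ $status -eq 0 ]; then
    echo "verified"
  else
    echo "not done (exit $status)" >&2
  fi
  return $status
}
```

That "show the output, check the exit code" loop is all the command really adds, but it's the step Claude happily skips without it.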
/subagent-driven-development
I don't use this one much, to be honest. It mostly comes out when I have an implementation plan with tasks that can run independently of each other in the current session.
The difference from /executing-plans is subtle but real. This one spins up independent subagents per task rather than working sequentially. Parallelism, basically.
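The sequential-vs-parallel distinction can be sketched with plain promises. This is an analogy, not the actual subagent API; `task`, `sequential`, and `parallel` are hypothetical names:

```javascript
// A stand-in for an independent unit of work (hypothetical)
const task = (name) => new Promise((resolve) => setTimeout(() => resolve(name), 10));

// /executing-plans style: one step at a time, each awaited before the next
async function sequential(names) {
  const results = [];
  for (const n of names) results.push(await task(n));
  return results;
}

// /subagent-driven-development style: independent tasks in flight at once
async function parallel(names) {
  return Promise.all(names.map(task));
}
```

Same results either way; the difference is wall-clock time and the requirement that the tasks genuinely don't depend on each other.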
/systematic-debugging
Run this the moment you hit a bug, a test failure, or anything that behaves unexpectedly. Before you ask Claude to propose a fix.
Without this command, Claude's default instinct seems to be to just jump to a fix. With it, you get a structured investigation first: reproduce the issue, understand the actual failure, identify the root cause, then fix.
The Plugins I Actually Use
There are a lot of plugins in the ecosystem. Here are some that I reach for fairly regularly:
code-simplifier
My most-used plugin on legacy codebases. It untangles overcomplicated logic, extracts magic constants, and flags premature abstractions. It doesn't just make code shorter, it makes code legible to a human reading it six months later.
```javascript
// before
function processUserData(data) {
  const result = {}
  if (data !== null && data !== undefined) {
    if (data.user !== null && data.user !== undefined) {
      if (data.user.profile) {
        if (data.user.profile.name) {
          result.name = data.user.profile.name
        }
        if (data.user.profile.email) {
          result.email = data.user.profile.email
        }
      }
    }
  }
  return result
}
```

```javascript
// after code-simplifier
function processUserData(data) {
  const profile = data?.user?.profile ?? {}
  return {
    ...(profile.name && { name: profile.name }),
    ...(profile.email && { email: profile.email }),
  }
}
```
Functionally identical. Half the lines. Zero nested conditionals. Not as ugly.
code-review
I run this before every PR now. Not instead of human review, alongside it. The persona it adopts is deliberately critical. It's not trying to make you feel good; it's trying to find problems.
```
/superpower code-review
> Review the changes in git diff HEAD~1

# Reviewing 3 files, 147 additions, 32 deletions...

CRITICAL (1)
src/payments/charge.js:47 - Promise chain is not handling rejection.
  stripe.charges.create().then(handleSuccess) will silently drop errors.
  Add .catch() or convert to async/await with try/catch.

WARNINGS (3)
src/payments/charge.js:23 - hardcoded USD currency string.
src/users/update.js:88 - N+1 query risk in the loop starting here.
src/users/update.js:102 - function is 94 lines; consider splitting.

STYLE (2)
- Inconsistent error message casing across the two files.
- Missing return type hints on three exported functions.
```
The critical finding about the Promise rejection was real. I missed it initially. It would have caused silent payment failures in prod.
security-guidance
This plugin makes me feel slightly paranoid in a healthy way. It approaches every file as a potential attack surface.
A recent example output: "The user ID is taken directly from req.params.id and interpolated into the SQL query on line 34. Even with parameterised queries elsewhere in the file, this one isn't. It's string concatenation. Classic SQL injection vector. Fix: use db.query('SELECT * FROM users WHERE id = ?', [req.params.id])."
What I dig is it doesn't just identify issues, it explains the attack scenario. "An attacker could pass 1 OR 1=1 and receive all rows" is far more motivating than "use parameterised queries." I find myself actually learning a lot from the fixes rather than cargo-culting them.
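The attack scenario is easy to see in two lines. With string concatenation, the attacker's input becomes part of the query's logic rather than its data (the variable names here are mine, for illustration):

```javascript
// Attacker-controlled value, e.g. from req.params.id
const userId = "1 OR 1=1";

// Vulnerable: the WHERE clause is now "id = 1 OR 1=1", which matches every row
const unsafe = `SELECT * FROM users WHERE id = ${userId}`;

// Safe: a placeholder keeps the value out of the SQL text entirely;
// the driver passes it as data, never as SQL
const safeSql = "SELECT * FROM users WHERE id = ?";
const params = [userId]; // "1 OR 1=1" stays a literal string here
```

With the parameterised version, the malicious input can only ever be compared against the id column; it can't rewrite the query.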
python-lsp
My Python code is aight. It's not perfect. I write it the way someone who primarily does JavaScript writes Python: syntactically correct, semantically loose, typed approximately never.
The python-lsp superpower essentially yells at me like a properly configured Pylance would, but in natural language, in context, with explanations I can actually learn from.
```python
# Original
def get_user(id):
    result = db.execute(f"SELECT * FROM users WHERE id = {id}")
    if result:
        return result["name"], result["email"]
    return None
```

```python
# after python-lsp feedback
from typing import Optional, Tuple

def get_user(user_id: int) -> Optional[Tuple[str, str]]:
    """Fetch a user's name and email by ID. Returns None if not found."""
    result = db.execute(
        "SELECT name, email FROM users WHERE id = %s",
        (user_id,),
    )
    if result:
        return result["name"], result["email"]
    return None
```
Three issues caught in one pass: missing type annotations, SQL injection via f-string, and selecting * when only two columns were needed. The superpower explained each one. I understood all three better afterward than I did before.
The Rough Edges
I'd be lying if I said it was all smooth. A few honest frustrations after some time:
- File context limits bite on large codebases. If your project has thousands of files, Claude Code needs to be guided about which directories matter. It won't read everything by default, and when it guesses wrong about file relevance, the output suffers.
- Superpower switching resets conversational context. When you `/superpower` switch, the previous mode's personality goes away. This is actually correct behaviour (you want a clean slate for a security review), but it means you occasionally have to re-explain something you already discussed.
- The `python-lsp` superpower struggles with complex type inference. It's great at catching simple type issues but starts to break down with heavily generic code or complex decorator patterns. Think of it as a fast linter, not a full `mypy` replacement.
- Shit uses A LOT of tokens. When prompting, you can't be too open-ended or broad.