
AI & Automation · February 11, 2026 · 5 min read

Inside My Claude Code Workflow: What the Insights Report Revealed

Claude Code's /insights report showed me exactly where to optimize: what it found (single-goal vs. multi-task success, friction points), what it recommended (hooks, task agents), and what I'm changing as a result. Takeaways: CLAUDE.md files, skills, and running your own report.

Claude Code usage insights dashboard showing Top Tools Used, Languages, and Session Types

Claude Code tracks everything. Every message, every file touched, every tool called, every session outcome. And after 6 weeks of intensive use, it generated something I didn’t expect: a detailed usage report that showed me exactly how I work, and where I was leaving productivity on the table.

Messages: 475
Lines changed: +66,937 / −3,046
Files: 492
Days: 29
Messages/day: 16.4

The data was fascinating. But more importantly, it was actionable. The report didn’t just show me what I did; it identified patterns, highlighted friction points, and suggested specific features I wasn’t using that could eliminate my biggest bottlenecks.

This isn’t a post about AI replacing developers. It’s about using data to understand your own workflow, identify waste, and systematically improve. The insights from this report are what I’m now using to improve my Claude session success rate.

Here’s what the report showed, and how you can use the same approach to optimize your own Claude Code workflow.


TL;DR

The discovery: Claude Code’s usage insights revealed my exact workflow patterns, friction points, and optimization opportunities across 50 sessions.

What the report showed:

  1. Single-goal sessions had 80% success vs 18.75% for multi-task sessions
  2. 47 “buggy code” friction events: Claude declaring work complete without testing
  3. 42 “wrong approach” events: Claude suggesting tools/infrastructure I don’t use

Who this is for: Developers using Claude Code (or any AI coding assistant) who want data-driven insights into their own workflow patterns.


1. What the Claude Code report showed

Claude Code’s /insights report breaks down everything. Here are the highlights from 50 sessions.

Usage snapshot

Session distribution: roughly 50 sessions over 29 days, split between single-goal and multi-task work.

Outcomes and success rates

Session outcomes: single-goal sessions succeeded about 80% of the time; multi-task sessions only 18.75%.

Top friction points

The two biggest categories: 47 “buggy code” events (work declared complete without testing) and 42 “wrong approach” events (suggestions for tools and infrastructure I don’t use).

What helped most: Multi-file changes

The usage report revealed something I wasn’t expecting:

“Multi-file Changes” was Claude’s top capability (26 instances)

What Helped Most - Claude's capabilities bar chart with Multi-file Changes at 26

This was actually a positive pattern: when sessions worked well, they involved coordinated changes across multiple files.

The key insight: AI assistants excel at multi-file refactoring when the goal is clear and atomic.
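
In practice, these wins came from prompts that were single-goal but explicit about what was in scope. A hypothetical prompt in that style:

Rename the `Post` content type to `Article` across the codebase:
update the type definition, imports, components, and tests.
One goal, nothing else. When done, run the build and show the output.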


2. What the report recommended

The insights report didn’t just diagnose patterns - it recommended product features that reduce the two biggest failure modes (buggy code + wrong approach).

Hooks (auto-run verification after edits)

The idea: Automatically run build/test checks when Claude edits code.

// .claude/settings.json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "npx tsc --noEmit 2>&1 | head -20"
          }
        ]
      }
    ]
  }
}

Why this matters: My biggest friction source was buggy code. A post-edit hook that auto-runs type-checking would catch build breaks instantly before Claude moves to the next file.
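
If the check you want is heavier than a one-liner, the hook command can point at a small script instead. A minimal sketch, assuming a hypothetical .claude/hooks/verify.sh and a standard npm test script:

#!/usr/bin/env bash
# .claude/hooks/verify.sh - hypothetical post-edit verification script
set -euo pipefail

# Type-check first; pipefail ensures a tsc failure isn't hidden by head
npx tsc --noEmit 2>&1 | head -20

# Then run the test suite (assumes an "npm test" script is defined)
npm test

Point the hook’s command at that script and the same verification runs after every edit.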

Task agents (parallel investigation)

The idea: Spawn focused sub-agents for exploratory work while your main session continues.

Example:

"Spawn a task agent to investigate the MDX rendering issue -
check version compatibility, component passing, and provider support -
while you continue fixing the CSS layout separately."

When to use: exploratory or diagnostic work (like the version-compatibility digging above) that can run in parallel while the main session keeps building.

On the horizon

The report also included a “horizon” section highlighting three capability directions worth watching.


3. What I’m doing about it

Based on the findings above, I’m standardizing three habits: one goal per session, explicit verification before calling anything done, and reusable defaults (CLAUDE.md, skills, hooks - detailed in the takeaways below).

Here’s the single prompt pattern I use most often now:

Implement [feature].

Before declaring done:
1. Run build - show output
2. Run tests - show results
3. Fix any errors
4. Only report completion when everything passes

Do not skip verification.

If you want one practical next step: open your /insights report, identify your top friction category, and run your next session with one goal plus explicit verification.


Actionable takeaways

1. Build meta-workflows (so Claude stops repeating mistakes)

The report made it obvious that a lot of friction is systemic: the same context gaps and verification misses happen across sessions. The fastest fix is to turn your preferences into reusable defaults.

Two CLAUDE.md files: Use a global file for preferences that apply everywhere, and a project file for repo-specific details. Claude reads both: the global ~/.claude/CLAUDE.md first, then the project’s ./CLAUDE.md in each repo. That way you document “use Wrangler, verify builds” once globally, and only add stack details and commands per project.

Example skill (keep it short and specific):

# .claude/skills/deploy/SKILL.md
---
name: deploy
description: Build, deploy with Wrangler, and verify production is healthy
---

1. Run `npm run build` and confirm zero errors
2. Deploy using wrangler (no API tokens)
3. Verify production responds with 200

Example CLAUDE.md rules (the stuff you wish you didn’t have to repeat):

## Verification
After code changes, always run the build and show output.

## Deployment
Use Wrangler CLI. Do not suggest API tokens.
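
A project-level CLAUDE.md then only needs the repo-specific details on top of those global rules. A minimal sketch (the stack and commands here are hypothetical placeholders, swap in your own):

## Stack
Astro on Cloudflare Workers. Deploy with `wrangler deploy`.

## Commands
Build: `npm run build` · Test: `npm test`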

2. Turn recommendations into defaults (Hooks)

Hooks are the most direct response to the report’s biggest friction category (buggy code). Set them up in .claude/settings.json (see the Hooks example above) so verification runs automatically right after edits.

3. Use your report like a feedback loop

To generate (or refresh) the report, run /insights inside Claude Code, then open the HTML output (use open instead of xdg-open on macOS):

xdg-open ~/.claude/usage-data/report.html



Go check your usage report. What’s your success rate? What patterns emerge? What’s blocking you?

Run /insights, pick one friction to fix, and let the next report confirm the improvement.

