Prompt Reviews, Shared Sessions, and Why Your Team Should Talk About How They Use AI
Peter Steinberger, the creator of OpenClaw, said something on a podcast recently that stuck with me: he reviews prompts more carefully than code. He asks contributors to attach their prompts to pull requests. He finds more signal in the prompt than in the diff.
I manage an SRE team at Billie. We use Claude Code and Gemini CLI daily. And I’ve noticed a pattern: everyone’s figured out their own way of working with AI, but nobody talks about it. One person has a brilliant technique for debugging Terraform. Another figured out how to use browser snapshots for visual regression. But these tricks stay locked in individual heads.
That’s a waste.
The problem with silent AI adoption
Most teams adopted AI coding tools the same way: someone started using Copilot or Claude Code, got faster, and kept going. No process change. No knowledge sharing. Just individuals getting individually better.
The result? Your team has wildly different levels of AI effectiveness, and nobody knows it. The person struggling with vague prompts sits next to someone who discovered that pointing the agent at an existing solution in another folder makes it 10× more reliable. They never talk about it because “using AI” feels like a personal productivity thing, not a team practice.
Start with what you have
You don’t need new tools. Here’s what actually works, ordered from zero-effort to proper infrastructure.
1. Add a “Prompt Context” section to your PR template
This is the single highest-value change you can make this week. Add a section to your PR template:
## AI Context (optional)
- Tool used: Claude Code / Gemini CLI / other
- Key prompt or approach:
- What didn't work:
The “what didn’t work” field is the most valuable part. Failures teach more than successes. If someone spent 20 minutes fighting a hallucination before finding the right framing, that’s exactly what the team needs to see.
2. Share sessions, not just results
Claude Code saves transcripts in ~/.claude/projects/. Gemini CLI has similar logs. When you solve something interesting — a tricky refactor, a debugging session, a migration — drop the key prompt chain in a shared channel.
Not the whole session. Just the turning point. “I was stuck until I said this, and then it clicked.”
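Finding that turning point usually means locating the right transcript first. A minimal sketch, assuming the transcripts are JSONL files under the path above (the layout isn't documented and may change between versions):
# list your most recent Claude Code sessions, newest first
ls -t ~/.claude/projects/*/*.jsonl | head -5
# skim a session for your own turns before copying the relevant prompt
# (the "role" field is an assumption -- open a transcript and check first)
grep '"role":"user"' ~/.claude/projects/your-project/session.jsonl | less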
3. Build a shared AGENTS.md
GitHub now officially supports AGENTS.md — both Copilot coding agent and Claude Code read it for repo-level instructions. Make it a team effort. Every time someone discovers a gotcha — “always use CSS variables in this project, never !important” or “always include the Jira ticket in PR titles” — add it to the shared file.
This is how you create institutional knowledge for AI. Your agent gets smarter every time someone adds a line. And unlike tribal knowledge that lives in people’s heads, this survives when someone leaves or switches teams.
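The file itself can stay as simple as a running list. A sketch, reusing the gotchas above (the section heading is just a suggestion):
## Conventions
- Always use CSS variables in this project, never !important.
- Always include the Jira ticket in PR titles.
One line per discovery is enough.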
4. Shared skills repo
Once you’ve been sharing prompts for a few weeks, patterns emerge. The same types of problems get solved the same way. At that point, a flat collection of prompts isn’t enough — you want something structured.
If you’re using OpenClaw, this is exactly what skills are: reusable packages that bundle a prompt with context, scripts, reference files, and conventions. But even without OpenClaw, the concept works with AGENTS.md or any agent that reads markdown instructions.
Create a shared repo:
team-skills/
  terraform/
    SKILL.md                   # when to use, how it works
    plan-and-fix.md            # the actual prompt + workflow
    validate.sh                # verification script
  debugging/
    SKILL.md
    log-analysis.md
  refactoring/
    SKILL.md
    css-visual-comparison.md
Each skill has a description (so the agent knows when to use it), the prompt itself, and optionally scripts that close the feedback loop. They're runbooks for AI-assisted work, except the AI can actually read and follow them.
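For example, the validate.sh in the terraform skill above could be as small as a format-and-validate pass (a sketch; swap in whatever checks your repo already runs):
#!/usr/bin/env bash
# terraform/validate.sh -- a quick verification pass the agent can run after editing
set -euo pipefail

terraform init -backend=false -input=false > /dev/null   # no remote state needed for validation
terraform fmt -check -recursive                          # fail on formatting drift
terraform validate                                       # catch syntax and reference errors
echo "terraform checks passed"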
The beauty of a shared skills repo: when someone figures out a reliable way to do X, they package it once. Everyone’s agent gets that capability. New team members inherit months of accumulated technique on day one.
5. Session replay in standups
Dedicate 2 minutes per standup (or one slot per week) to “here’s an interesting AI session.” Show the prompt, the result, and what you’d do differently. More useful than showing code diffs, and it normalizes talking about AI usage openly.
The harder stuff
Once the basics are flowing, you can invest in tooling:
Git notes for prompt context. Git has a built-in feature most people forget about: git notes. You can attach metadata to any commit without changing its hash. A post-commit hook could automatically capture the key prompt from your Claude Code or Gemini session and attach it as a note:
# attach prompt context to the last commit
git notes --ref=ai-prompts add -m "Prompt: Refactor auth module to use DI, run tests after each change"
# view prompts alongside history
git log --notes=ai-prompts
# share with the team
git push origin 'refs/notes/*'
One caveat: notes aren’t fetched by default. Add this to your team’s git config:
git config --add remote.origin.fetch '+refs/notes/*:refs/notes/*'
Now reviewers see what was asked, not just what was generated — without cluttering the commit history.
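The hook itself can stay tiny. A sketch, assuming you (or a wrapper script) drop the prompt into a .git/AI_PROMPT file before committing; that file name is a convention made up for this example, not something Claude Code or Gemini CLI writes:
#!/usr/bin/env bash
# .git/hooks/post-commit -- attach the captured prompt to the commit as a note
set -euo pipefail

prompt_file="$(git rev-parse --git-dir)/AI_PROMPT"
if [ -f "$prompt_file" ]; then
  git notes --ref=ai-prompts add -f -F "$prompt_file" HEAD
  rm "$prompt_file"   # one prompt per commit; clear it so stale context isn't attached next time
fi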
Wrapper scripts with logging. A thin shell around claude and gemini that logs sessions to a shared store. Searchable by author, repo, outcome. Over time this becomes a dataset you can analyze: which prompts lead to successful outcomes, and where people get stuck.
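A sketch of what that wrapper can look like for claude, as a shell function in your rc file (the log location and TSV fields are assumptions; gemini gets the same treatment):
# in ~/.bashrc or ~/.zshrc: log every Claude Code invocation to a shared store
claude() {
  local log_dir="${AI_SESSION_LOG_DIR:-$HOME/team-ai-logs}"   # point this at a synced or shared folder
  local start status
  mkdir -p "$log_dir"
  start="$(date -u +%Y-%m-%dT%H:%M:%SZ)"
  command claude "$@"   # 'command' skips this function and runs the real CLI
  status=$?
  printf '%s\t%s\t%s\t%s\t%s\n' \
    "$start" "$USER" "$(basename "$PWD")" "$status" "$*" >> "$log_dir/sessions.tsv"
  return "$status"
}
Exit status is a weak proxy for outcome, so the interesting fields (did it work, what went wrong) still need a human or a follow-up script to fill in.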
Team metrics. I ran an analysis of my own Claude Code usage: 135 sessions, 1,499 messages, 80% success rate. The failures clustered around specific patterns — wrong initial approach, misunderstood scope, auth issues. That kind of data at the team level would be gold for identifying where to invest in better tooling or training.
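If you want the first two numbers for your own usage, the local transcripts are enough. A rough sketch, assuming one JSONL file per session and roughly one message per line (true of current Claude Code transcripts, but not guaranteed):
# sessions
find ~/.claude/projects -name '*.jsonl' | wc -l
# messages
cat ~/.claude/projects/*/*.jsonl | wc -l
The success rate is the part you still have to label yourself.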
Close the loop, together
The same principle that makes AI coding effective — closing the feedback loop — applies to teams. If you’re using AI in isolation, you have no feedback. You don’t know if your approach is good, bad, or just different.
Share the prompts. Review the thinking, not just the output. Talk about what failed.
The code your AI writes is temporary. The way your team learns to work with AI compounds.