On most teams, productivity hits a weird ceiling. New tools make us faster, then we bottleneck on context, reviews, and decision time. The blocker is rarely typing speed. It’s waiting for the right information to show up.
AI doesn’t make the job different. It changes when parts of the job happen. The moment a ticket is clear enough for a human, it can be clear enough for a model that knows your repo. That single shift moves context earlier. It turns Monday morning from archaeology into editing.
This is a perspective on helping teams build more while keeping the pace sustainable. It’s not a prescriptive guide, just an approach that has proven effective in practice.
The bottleneck is us, not the tools#
Two-week sprints, estimates, planning, PRs, CI, retros. These rituals mostly work. The problem is their timing. Context arrives late, so developers spend the first hour of every ticket just getting oriented. Models help most when they create good starting points before the work begins.
A small, useful shift#
When a ticket is truly ready, it should be ready for both a person and a model. Ready means: goal, relevant paths in the repo, constraints, acceptance examples. With that, you can ask a model for four things:
- A change outline - what needs to touch what
- A thin scaffold - something that compiles and runs
- Tests that fail for the right reasons - executable specifications
- A short risk list - what could break
You wake up to a draft you can run and critique. The first hour becomes review and naming, not searching and guessing.
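To make the third item concrete, here is the kind of test that fails for the right reason. It’s a sketch, not prescribed tooling: the module path, the helper name, and the fake response are hypothetical stand-ins (they echo the retry story later in this post), and pytest is just one option. The point is that the assertions encode the acceptance example, so the test stays red until the behavior exists.

```python
# A sketch of a test that fails for the right reason: the import target does not
# exist yet, and the names (utils.backoff, retry_with_backoff) are illustrative.
from utils.backoff import retry_with_backoff  # stays red until someone writes this


class FakeResponse:
    """Just enough of an HTTP response for the spec: a status code and a JSON body."""

    def __init__(self, status_code: int, payload: dict):
        self.status_code = status_code
        self._payload = payload

    def json(self) -> dict:
        return self._payload


def test_retries_transient_failures_then_succeeds():
    calls = []

    def send():
        calls.append(1)
        if len(calls) < 3:
            raise ConnectionError("transient failure")
        return FakeResponse(200, {"status": "delivered"})

    body = retry_with_backoff(send, max_attempts=5, base_delay=0.01)
    assert body == {"status": "delivered"}  # the acceptance example, as an assertion
    assert len(calls) == 3                  # two failures, one success: the retry path ran
```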
What changes, what doesn’t#
Changes: sequence, not ownership. Planning, scaffolding, and tests move earlier. Pull systems work better because tickets carry context with them.
Doesn’t change: taste, trade-offs, responsibility. Humans still decide shapes, enforce style and architecture, and say no when a fast path breaks the system.
A day in this rhythm#
Maya opens a ticket about retry logic for webhooks. The ticket links two specific modules (`webhooks/handlers.py` and `utils/backoff.py`), shows the current handler function, sets a 200ms performance budget, and mentions idempotency concerns.
Overnight, someone asked the model for an outline, tests, and a sketch of the exponential backoff. Maya pulls the branch, runs the failing tests, fixes the import paths, renames `retryWithDelay` to `retryWithBackoff`, and adds the edge case the model missed: what happens when the webhook endpoint returns a 2xx but with an error payload.
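For flavor, here is a minimal sketch of the shape Maya might end up with, assuming a requests-style response object. The snake_case name, the narrow exception handling, and the way the delay cap relates to the budget are all illustrative; this is not the actual code from the story.

```python
import random
import time


class WebhookDeliveryError(Exception):
    """Raised when every retry attempt has been exhausted."""


def retry_with_backoff(send, max_attempts=5, base_delay=0.05, max_delay=0.2):
    """Call `send()` with exponential backoff and jitter.

    A 2xx response that still carries an error payload counts as a failure:
    the edge case the model's first draft missed.
    """
    for attempt in range(max_attempts):
        try:
            response = send()
        except ConnectionError:            # a real version would catch the client's transient errors
            response = None

        if response is not None and 200 <= response.status_code < 300:
            body = response.json()
            if not body.get("error"):      # 2xx and a clean payload: done
                return body
            # 2xx with an error payload falls through to the retry path

        if attempt == max_attempts - 1:
            break
        delay = min(base_delay * (2 ** attempt), max_delay)   # cap each wait; the budget math is simplified here
        time.sleep(delay + random.uniform(0, base_delay))     # jitter to avoid thundering herds

    raise WebhookDeliveryError(f"giving up after {max_attempts} attempts")
```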
The pull request explains why this retry shape fits the existing error-handling patterns. Review is quicker because the tests tell a coherent story and the implementation follows established conventions.
Other days the draft is wrong. That’s fine. Treat model output like a junior colleague who works nights. Useful, not in charge.
Rules that work#
Move work left. Earlier context beats later speed. A well-prepared ticket with model scaffolding saves more time than the fastest possible code review.
Tests first, always. A failing test is a better specification than three paragraphs. It’s also harder for models to misinterpret.
Keep context near code. Prompt fragments, architectural decisions, and constraint notes live in the repo, in README files, in draft PRs, embedded in comments. Not buried in tickets.
Guardrails on by default. Lint, types, security scanning, secret detection. Machines excel at boring compliance checks.
Measure flow, not effort. Track cycle time per PR, lead time per ticket, escaped defects. Forecast by readiness and risk, not by story points.
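If those metrics feel abstract, the arithmetic is small. A sketch with made-up timestamps and simplified definitions (first commit to merge for cycle time, ready to merge for lead time); in practice the data would come from your tracker and code host, and your definitions may differ.

```python
from datetime import datetime
from statistics import median

# Made-up records; in practice pull these timestamps from your tracker and code host.
tickets = [
    {"ready": "2024-03-04T09:00", "first_commit": "2024-03-04T13:00", "merged": "2024-03-05T16:00"},
    {"ready": "2024-03-04T10:00", "first_commit": "2024-03-06T09:00", "merged": "2024-03-07T11:00"},
    {"ready": "2024-03-05T15:00", "first_commit": "2024-03-06T10:00", "merged": "2024-03-06T17:00"},
]


def hours_between(start: str, end: str) -> float:
    fmt = "%Y-%m-%dT%H:%M"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 3600


cycle_times = [hours_between(t["first_commit"], t["merged"]) for t in tickets]  # per PR
lead_times = [hours_between(t["ready"], t["merged"]) for t in tickets]          # per ticket

print(f"median cycle time: {median(cycle_times):.1f}h")
print(f"median lead time:  {median(lead_times):.1f}h")
```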
Privacy and security require explicit protocols. The risk of accidentally sharing sensitive code with public AI models is real and costly. Establish clear guidelines: never include API keys, connection strings, or personally identifiable information in prompts. Use enterprise-grade, secure AI platforms that offer data residency guarantees and audit trails for proprietary codebases. Train teams to craft prompts that describe patterns and requirements without sharing actual sensitive business logic. When in doubt, use masked or synthetic data for sensitive workflows.
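One way to back the “nothing sensitive in prompts” rule with tooling is a redaction pass that runs before anything leaves the developer’s machine. The patterns below are illustrative and deliberately incomplete; a real setup should pair them with a dedicated secret scanner rather than rely on a handful of regexes.

```python
import re

# Illustrative patterns only; treat this as a last-resort filter, not a scanner.
REDACTIONS = [
    (re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*\S+"), r"\1=<REDACTED>"),
    (re.compile(r"(?i)(password|passwd)\s*[:=]\s*\S+"), r"\1=<REDACTED>"),
    (re.compile(r"postgres(?:ql)?://\S+"), "<REDACTED_CONNECTION_STRING>"),
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<REDACTED_EMAIL>"),
]


def redact(prompt: str) -> str:
    """Strip obvious secrets and PII from a prompt before it is sent anywhere."""
    for pattern, replacement in REDACTIONS:
        prompt = pattern.sub(replacement, prompt)
    return prompt


print(redact("Retry against postgres://admin:hunter2@db.internal/orders, api_key=sk-123"))
# -> "Retry against <REDACTED_CONNECTION_STRING> api_key=<REDACTED>"
```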
If you manage people#
Your job is removing friction, not managing output. Give time for tickets to become model-ready. This is planning work, not overhead. Keep a small library of prompt examples and improve them during retrospectives.
Tighten CI so “fast” doesn’t mean “sloppy.” Publish a simple flow dashboard that shows where work gets stuck. Hire for judgment and systems thinking. These are the skills that matter when the typing is handled.
The jargon, decoded#
- CI/CD: Continuous Integration/Deployment. Scripts that build, test, and deploy code automatically
- PR: Pull Request. A proposed change waiting for review and approval
- Scaffold: A minimal starter that compiles and runs, giving structure without implementation
- SAST: Static Application Security Testing. Automated scans that catch risky code patterns
Common challenges and practical fixes#
This isn’t magic. Not every ticket will be well-prepared, and AI-generated code comes with predictable problems. Here’s what we’ve learned from teams making this transition:
Ambiguous requirements lead to hallucinated features. When tickets say “make it faster” or “improve error handling,” models invent requirements that sound reasonable but miss the point. Fix: Break vague tickets into smaller, well-defined tasks with specific success criteria. “Reduce webhook timeout from 30s to 10s” beats “improve webhook performance.”
AI misreads context and creates plausible but wrong solutions. Models excel at patterns but struggle with business logic edge cases. Fix: Implement a quick human-in-the-loop review before any AI-generated code gets merged. Treat the first commit as a draft that needs validation, not a solution that needs polish.
Legacy systems resist model understanding. Older codebases with inconsistent patterns, missing documentation, or complex implicit contracts confuse models. Fix: Start with greenfield features or well-documented modules. Let models learn your patterns gradually rather than throwing them at your most complex legacy code first.
Developer experience matters#
Efficiency gains mean nothing if developers lose engagement. The teams seeing the best results from this approach focus as much on satisfaction as speed.
Automating repetitive tasks creates space for creative problem-solving. Developers report higher job satisfaction when they spend less time on boilerplate and more time on architecture, user experience, and system design. The cognitive overhead of switching between mundane tasks and complex decisions is real.
Maintaining ownership prevents AI dependency. The key is ensuring developers still feel ownership over their work. AI provides starting points, not finished solutions. Developers should be critiquing, refining, and ultimately deciding what ships. When people feel like code reviewers rather than code authors, engagement drops.
Recognition and growth paths need updating. Traditional metrics like lines of code or features shipped become less meaningful. Focus instead on system design contributions, code review quality, and mentoring newer team members on effective AI collaboration patterns.
Practical patterns that work#
Here are specific workflows teams are using to shift work earlier and run processes in parallel:
Background ticket processing. Set up automation that starts working on tickets as soon as they’re marked “ready for development.” While you finish your current task, AI generates scaffolding, tests, and implementation sketches for the next three tickets in your queue. You arrive to find branches with failing tests and working code that needs review, not blank files.
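A rough sketch of what the nightly half of this could look like. Every function here is a placeholder: fetch_ready_tickets and generate_scaffold stand in for your tracker’s API and your model provider’s SDK, and the branch naming is just a convention to pick.

```python
import subprocess

BASE_BRANCH = "main"  # adjust to your default branch


def fetch_ready_tickets(limit: int = 3) -> list[dict]:
    """Placeholder: return tickets marked 'ready for development' from your tracker."""
    raise NotImplementedError


def generate_scaffold(ticket: dict) -> dict[str, str]:
    """Placeholder: ask the model for an outline, failing tests, and a sketch.

    Returns a mapping of file path -> file contents."""
    raise NotImplementedError


def prepare_branch(ticket: dict) -> None:
    subprocess.run(["git", "switch", BASE_BRANCH], check=True)
    subprocess.run(["git", "switch", "-c", f"draft/{ticket['key'].lower()}"], check=True)
    for path, contents in generate_scaffold(ticket).items():
        with open(path, "w") as handle:
            handle.write(contents)
    subprocess.run(["git", "add", "-A"], check=True)
    subprocess.run(["git", "commit", "-m", f"draft: scaffold for {ticket['key']}"], check=True)


if __name__ == "__main__":
    for ticket in fetch_ready_tickets(limit=3):
        prepare_branch(ticket)  # one draft branch per ready ticket, waiting for morning review
```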
Automated test generation on PR creation. Every time someone opens a pull request, trigger automation that generates comprehensive test cases based on the code changes. The developer reviews and refines these tests using AI feedback loops. Multiple processes run in parallel: the original feature development, test generation, security scanning, and performance analysis.
Proactive code review preparation. Before requesting human review, run AI analysis that identifies potential issues, suggests improvements, and generates explanatory comments. The reviewer gets a pre-analyzed PR with highlighted concerns and suggested fixes, turning review from detective work into decision-making.
Context-aware documentation updates. When code changes, automatically generate documentation updates and README modifications. AI identifies which docs are affected and creates draft updates that maintainers can approve or refine.
Dependency and impact analysis. For every change, run background analysis of what else might be affected. Generate migration guides, update scripts, and compatibility notes before anyone asks for them.
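A deliberately low-tech version of this for a Python repo: list every file that imports the modules changed on the current branch, so the blast radius is visible before anyone asks. The diff base and the regex-based import detection are simplifications, not a substitute for real dependency tooling.

```python
import pathlib
import re
import subprocess


def changed_modules(base: str = "main") -> set[str]:
    """Dotted module names of the .py files changed relative to `base`."""
    diff = subprocess.run(
        ["git", "diff", "--name-only", base], capture_output=True, text=True, check=True
    ).stdout.splitlines()
    return {path[:-3].replace("/", ".") for path in diff if path.endswith(".py")}


def importers(modules: set[str], root: str = ".") -> dict[str, set[str]]:
    """Map each changed module to the files that import it (regex heuristic, not a parser)."""
    hits: dict[str, set[str]] = {module: set() for module in modules}
    for path in pathlib.Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for module in modules:
            if re.search(rf"^\s*(from|import)\s+{re.escape(module)}\b", text, re.MULTILINE):
                hits[module].add(str(path))
    return hits


if __name__ == "__main__":
    for module, files in importers(changed_modules()).items():
        print(f"{module} is imported by {len(files)} file(s): {sorted(files)}")
```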
Parallel environment management. While you work on feature A, have automation preparing environments, running tests, and validating deployments for features B and C. Manage multiple workstreams simultaneously without context switching.
The key is treating AI like a team of junior developers working different shifts. They prepare, you decide. They draft, you refine. They analyze, you prioritize.
Start small, learn fast#
Pick one team, one project type, one workflow. See what works. The goal isn’t perfect tickets overnight; it’s better starting points for the work that matters most.
We’ll keep our rituals. We’ll move their weight. When a developer opens a ticket and sees tests, a sketch, and a plan, the day starts on step two. The work that remains is the part that needs judgment. That’s the part worth getting faster at.
The sequence changed. The responsibility didn’t. AI gives us starting points; humans decide where to go.
Teams moving in this direction often find the trickiest part isn’t the technical implementation; it’s the cultural shift. If you’re curious how this might work for your organization, feel free to reach out.