Why Your Engineering Team Is Slow (It's the Codebase, Not the People)

A two-minute audit to score whether technical debt is dragging your engineering team. Five signals that separate people problems from code problems.

Ally Piechowski · 9 min read

A client’s team spent a full week adding a CSV export to their admin panel. Two engineers, clear requirements, maybe a day of actual work. The rest of the time went to understanding existing code well enough to change it safely. That’s what I call codebase drag: when the codebase makes every task take longer than it should. It doesn’t show up in any dashboard or sprint report.

When the team slows down, leadership decides it’s a people problem every time. Maybe the seniors got complacent, maybe the new hires need more support. So they reorganize, add process, sometimes let people go. Then the next team hits the same wall.

Because it was never the people. It was the codebase.

Not bugs, not missing features. Not even what most teams mean when they say “technical debt.”

After years of codebase audits, I kept seeing the same five signals, so I finally put them into a scoring rubric: The Codebase Drag Audit. Five signals, scored 0 to 2. If you hit 4 or above, the code needs direct investment before anything else will help. (Skip to the audit)

5 Signs Your Codebase Is Punishing Your Team

1. The Apology Estimate

“Knowing this codebase, it’ll probably take about two weeks.” I hear some version of this on every engagement. The feature should take three days. The engineer says two weeks. Leadership assumes they’re padding. Or just slow.

They’re pricing in drag.

They know that changing the billing module means touching the notification system, because somewhere along the way those two got coupled through a shared service object that nobody remembers writing. They know the last person who modified that module broke checkout for three hours. Hidden patterns like default scopes or deeply nested callbacks mean the blast radius of a change is impossible to predict without reading half the codebase. The estimate isn’t padded. It’s honest. The codebase just costs that much to work in.

When estimates consistently run 2-3x what the work “should” take, it’s not an estimation problem. Your engineers know what the codebase costs. They’ve just stopped trying to explain it.

2. Deploy Fear

When’s the last time your team deployed on a Friday? If that question gets a laugh, you have deploy fear.

Deploy fear shows up as batching. Instead of shipping as they go, the team groups releases into big, infrequent deploys.

One client’s team had an unofficial rule: no deploys after Wednesday. Nobody wrote it down. It just became the rule after three Thursday deploys in a row caused weekend incidents. No rollback strategy, tests you couldn’t trust, and a deploy pipeline that took 45 minutes. Of course they stopped deploying on Thursdays. What else would you do? DORA defines elite teams as deploying on demand with change failure rates under 5%. This team was doing one deploy a week and holding their breath.

3. The “Don’t Touch That” File

“Don’t touch that file.” I hear it on almost every engagement. Usually within the first two days, always casual. Like it’s just how things work around here.

A billing controller with 30 before_actions. A model that tops git log every time but that nobody’s touched structurally in years. I run git log --oneline --since="2 years ago" on the models directory to see which files have been touched repeatedly. The file at the top is almost always the one people warned me about. And if its history is all small patches with no structural work, that tells you everything: people are treating the symptoms and leaving the disease alone.
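That churn check can be sketched as a small pipeline. A minimal sketch, assuming a Rails-style app/models layout; to make it self-contained, the demo first builds a throwaway repo with one "hot" file so the ranking has history to chew on:

```shell
set -eu
workdir=$(mktemp -d)
cd "$workdir"
git init -q
git config user.email you@example.com
git config user.name you
mkdir -p app/models

# Simulate a file everyone keeps patching, plus one touched only once.
for i in 1 2 3; do
  echo "patch $i" >> app/models/billing.rb
  git add . && git commit -qm "hotfix billing ($i)"
done
echo "class User; end" > app/models/user.rb
git add . && git commit -qm "add user model"

# The churn query: one filename line per commit, counted and ranked.
# billing.rb should top the list.
churn=$(git log --pretty=format: --name-only --since="2 years ago" -- app/models \
  | grep -v '^$' | sort | uniq -c | sort -rn)
echo "$churn"
```

The ranking alone won’t tell you whether the work was structural, but cross-referencing the top file against the team’s “don’t touch that” warnings usually will.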

The real cost isn’t the file. It’s that features which should live in that module get built somewhere else instead. New engineers learn to stay away within their first week. Over time the codebase grows around the dead zone like a tree growing around a fence post.

Ask your tech leads which files they’d be nervous to refactor. You’ll learn more from that conversation than from any metrics dashboard.

4. The Coverage Lie

80% test coverage. The dashboard looks healthy. But the three models that handle money have zero tests, and the coverage number is carried by hundreds of tests on serializers, helpers, and utility methods that rarely break.

I’ve started calling this the coverage lie: when a test suite exists to make a metric look good rather than to catch regressions. Tests pass, production breaks anyway. Engineers stop trusting the suite and start manual-testing critical paths before deploys, which feeds deploy fear.

CI takes 40 minutes, so developers stop running tests locally. Now the coverage number is lying twice: the tests don’t cover what matters, and the ones that exist aren’t even being run. Bugs surface later. By the time someone notices, the engineer who wrote that code is three tickets deep into something else.

Forget the coverage number. The real question: when’s the last time a test actually caught a bug before production? If your team has to think about it, the suite isn’t doing its job.
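One cheap, mechanical guard against the coverage lie is to fail CI when a critical file has no spec file at all. The critical-file list and the spec/models layout below are assumptions for illustration (a per-file coverage gate, such as SimpleCov’s per-file minimum, is the stronger version); the demo creates a tiny tree so the check is runnable:

```shell
set -eu
workdir=$(mktemp -d)
cd "$workdir"
mkdir -p app/models spec/models
touch app/models/invoice.rb app/models/payment.rb spec/models/invoice_spec.rb
# payment.rb deliberately has no spec, standing in for an untested money path.

missing=0
for f in app/models/invoice.rb app/models/payment.rb; do   # your critical list
  spec="spec/models/$(basename "$f" .rb)_spec.rb"
  if [ ! -f "$spec" ]; then
    echo "NO SPEC: $f"
    missing=$((missing + 1))
  fi
done
echo "missing specs: $missing"   # make CI fail when this is nonzero
```

It’s an existence check, not a quality check, but it makes “the three models that handle money have zero tests” impossible to miss.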

5. Time to First Commit

Hand a new engineer a laptop. How long until they open a pull request with a real change? Not a README fix. An actual bug fix or small feature.

In a healthy codebase, this takes a day or two. The README has setup instructions that actually work and the test suite runs locally. In a dragging codebase, I’ve seen this take two weeks or more. One client’s dev environment setup took weeks before I got involved. After I fixed it, a new dev was up and running in 15 to 20 minutes.

The thing that mattered more: devs could reset to a clean slate at any time. Before, if your dev environment broke, you were manually rebuilding and reinstalling everything from scratch. So people were afraid to experiment. They’d tiptoe around their own setup the same way they tiptoed around the code.

The culprit is usually setup rot. The bin/setup script hasn’t been updated since the last developer-environment change. Seed data references tables or columns that no longer exist. There are three undocumented environment variables that you only learn about when the app crashes on boot. Preventing setup rot is cheap, but nobody owns the setup path, so it decays quietly.
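A sketch of the cheapest antidote to setup rot: have bin/setup validate required environment variables up front and fail loudly, instead of letting the app crash on boot. The variable names here are invented for illustration:

```shell
check_env() {
  missing=""
  for var in "$@"; do
    # printenv exits nonzero when the variable is not set in the environment
    if ! printenv "$var" >/dev/null; then
      missing="$missing $var"
    fi
  done
  if [ -n "$missing" ]; then
    echo "Missing required env vars:$missing" >&2
    return 1
  fi
  echo "env ok"
}

# Example: two of the three hypothetical vars are set, one is forgotten.
export STRIPE_KEY=sk_test_123 SMTP_HOST=localhost
check_env STRIPE_KEY SMTP_HOST ANALYTICS_TOKEN || echo "setup aborted early, as intended"
```

The point is where the failure happens: at setup time, with the variable named, rather than at boot time with a stack trace.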

Time to first commit matters because it’s the one signal you can’t work around. Your existing engineers have already internalized all the undocumented steps. A new hire exposes exactly how much accumulated knowledge the codebase demands before anyone can be productive in it.

Why Good Engineers Look Slow in Bad Codebases

These five signals compound. Every task carries overhead that nobody can point to in a standup. An engineer who shipped features in two days at their last job takes a week here. When they try to explain why, it sounds like excuses even to them.

A 2025 METR study found experienced developers were 19% slower with AI tools. Typing was never the bottleneck.

Your best engineers slow down the most. They see the risk. They know that changing this file might break that flow. So they move carefully, write defensive code, pad their estimates. A less experienced engineer might ship faster by not seeing the danger, then cause the production incident that makes everyone even more cautious next sprint.

One client cycled through six engineering teams in ten years, including two full acquisitions. The pattern repeated every time: leadership pushes for features, debt remediation gets skipped, the code starts to feel unrecoverable. Someone proposes a rewrite or a microservices extraction. That makes things worse, because now you have two systems instead of one and the original is still there. The next team inherits all of it.

When leadership reads slowness as a people problem, they make it worse. They add process on top of friction the team is already struggling under. The only intervention that actually helps is fixing the paths the team works in every day: the codebase itself.

The Codebase Drag Audit: A Diagnostic You Can Run This Week

Score each signal from 0 to 2, then total the five for a score out of 10.

- Apology Estimate
- Deploy Fear
- “Don’t Touch That” File
- Coverage Lie
- Time to First Commit

What to Do When Technical Debt Is Dragging Your Team

The audit gives the developer productivity problem a name and a number. Something you can put in front of a stakeholder who controls the roadmap.

Start with the highest-scoring signal. Don’t try to fix everything. If deploy fear scored a 2, the first investment is CI speed, rollback automation, and smaller deploy units. If the apology estimate is highest, start by decoupling the modules with the widest blast radius. If the codebase is also several Rails versions behind, the version upgrade is often the forcing function that justifies the investment. If time to first commit scored a 2, a single day fixing bin/setup and documenting the environment will pay for itself with every future hire.

Give it two weeks. Pick the top signal, run a focused sprint, and measure something concrete. Deploy frequency is the easiest one to track, but estimate accuracy or time-to-first-PR work too depending on which signal you’re targeting. Not a rewrite. One targeted investment the team can feel.
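If each deploy gets a git tag (an assumption; adopt the convention if your pipeline doesn’t tag releases), deploy frequency over a window reduces to a count. A minimal sketch against a throwaway repo:

```shell
set -eu
workdir=$(mktemp -d)
cd "$workdir"
git init -q
git config user.email you@example.com
git config user.name you

# Two releases, each tagged at deploy time.
echo v1 > app.txt && git add app.txt && git commit -qm "release 1"
git tag deploy-2025-05-01
echo v2 >> app.txt && git commit -aqm "release 2"
git tag deploy-2025-05-08

# Deploy frequency for the window is then just a count of matching tags.
count=$(git tag -l 'deploy-*' | wc -l | tr -d '[:space:]')
echo "deploys: $count"
```

Track the number per week before and after the focused sprint; the trend is the evidence you bring back to stakeholders.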

The hardest part is getting the investment approved, because the cost of not doing it is invisible. That’s where the audit earns its keep. “Every feature takes roughly twice as long because of coupling in these three modules” is a different conversation than “we have tech debt.”

If you scored 7 or above, that’s the range where most of my client engagements start. I usually begin with a one-week codebase audit and work from there. Happy to talk through it.

