
What's Holding You Back from Succeeding in the AI Era?

Pini Shvartsman

I’ve been experimenting with AI in development teams. Some experiments have gone well. Developers shipping faster, workflows getting streamlined, genuine productivity gains. Others… not so much. I’m still figuring this out, honestly, but I keep running into patterns that concern me.

Last week, something happened that crystallized these concerns.

A developer I know (let’s call him Marcus) was excited to show me his GitHub stats. Impressive numbers: 247 commits in a month, 23 features shipped, velocity charts trending up. His manager was thrilled. Out of curiosity, I asked him to walk me through the architecture of a feature he’d shipped recently. Simple question: “Why did you structure the caching layer this way?”

He paused. Then admitted he wasn’t sure. The AI had suggested it. It worked. He shipped it. Three days later, that feature caused a production incident. Forty minutes of downtime. Significant revenue impact. All because he’d implemented architecture decisions he didn’t fully understand.

Marcus isn’t failing because AI isn’t good enough. He’s failing because he’s gotten really good at using AI without building the judgment to evaluate what it produces.

This got me thinking about something I’m noticing more often. Not that AI will replace developers (I don’t think that’s the real risk), but that we might be accidentally creating developers who move fast but think shallow, and managers who confuse speed with capability. The numbers are striking: by 2028, 90% of enterprise software engineers will likely be using AI code assistants, up from less than 14% in early 2024. Yet 77% of engineering leaders see integrating AI as a major challenge.

Maybe the issue isn’t AI itself. Maybe it’s that AI amplifies whatever approach you already have. If you think deeply about problems, AI helps you think faster. If you don’t… well, AI helps you not-think faster too.

I’m starting to see a pattern in how this plays out, and I think it’s worth sharing what I’ve noticed.

The Great Divide: Marcus vs. Sarah
#

We’re accidentally creating a divide. Not between people who use AI and people who don’t, but between those who let AI carry them and those who use it to leap forward.

Marcus represents the first group. There’s another developer I’ll call Sarah who seems to represent the second. Same company, similar experience level, both use AI heavily. But when I asked Sarah the same architecture question, she didn’t just answer. She walked me through her reasoning: the trade-offs she’d considered, why she’d rejected the AI’s first two suggestions (one would have created a memory leak under load, the other couldn’t scale horizontally), what she’d validated before shipping, and what monitoring she’d added because she knew this approach had specific failure modes under network latency.

Sarah’s velocity? Nearly identical to Marcus’s. But Sarah’s code doesn’t cause incidents. When it does break (because all code eventually breaks), she diagnoses it in minutes, not hours. She’s using AI to move faster, but her understanding of systems architecture is actually deepening. She treats AI as a thinking partner that suggests solutions, which she then stress-tests against her mental model of how distributed systems behave.
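To make one of those rejected suggestions concrete: a caching approach that “creates a memory leak under load” is often nothing more exotic than an in-process cache with no eviction. Here’s a hypothetical sketch (not Sarah’s or Marcus’s actual code) of how small the visible difference can be between a version that looks fine in review and a bounded version that stays flat under sustained load.

```python
import time
from collections import OrderedDict

# Version 1: looks correct in review, and works in every demo.
# Under sustained load with many distinct keys, the dict grows forever:
# a slow-motion memory leak.
_cache = {}

def get_profile_naive(user_id, fetch):
    if user_id not in _cache:
        _cache[user_id] = fetch(user_id)
    return _cache[user_id]

# Version 2: bounded size with LRU eviction plus a TTL, so memory stays
# flat no matter how many distinct keys show up.
class BoundedTTLCache:
    def __init__(self, max_entries=10_000, ttl_seconds=300):
        self.max_entries = max_entries
        self.ttl = ttl_seconds
        self._data = OrderedDict()  # key -> (value, expires_at)

    def get(self, key, fetch):
        now = time.monotonic()
        entry = self._data.get(key)
        if entry is not None and entry[1] > now:
            self._data.move_to_end(key)      # mark as recently used
            return entry[0]
        value = fetch(key)
        self._data[key] = (value, now + self.ttl)
        self._data.move_to_end(key)          # new or refreshed entries go to the end
        if len(self._data) > self.max_entries:
            self._data.popitem(last=False)   # evict the least recently used entry
        return value
```

The two versions behave identically in a demo and only diverge under sustained load with many distinct keys, which is exactly the kind of difference a velocity chart will never show you.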

The difference between them isn’t talent. It’s approach. Marcus accepts AI suggestions that look good on the surface. Sarah interrogates them. Marcus ships fast. Sarah ships right. Marcus is becoming dependent. Sarah is becoming more capable.

And here’s what makes this dangerous: for the first six months, they look identical on paper. Same velocity, same feature throughput, same commit frequency. The difference only emerges when systems hit scale, when architectural decisions made months ago come home to roost. By then, Marcus has shipped dozens of features built on shaky foundations, and the technical debt is crushing.

The Self-Deception Patterns
#

Beyond the Marcus/Sarah divide, I’m noticing three patterns that seem to lead people into trouble:

The Resisters refuse to engage with AI at all. I know a brilliant engineer who was convinced Copilot would “rot their brain.” Six months later, they were frustrated and behind, trying to catch up with tools they didn’t understand while everyone else had already learned to use them thoughtfully.

The Checkbox Adopters use AI just enough to say they’re using it. They’ll accept a Copilot suggestion here and there, maybe prompt ChatGPT when really stuck, but fundamentally they’re doing things the old way with a thin veneer of AI adoption. They think this is a safe middle ground. It’s actually the worst of both worlds. They’re not building deep AI collaboration skills because they’re not truly engaging. And they’re not building deep foundational skills because they’re using AI as a crutch for the things they don’t want to learn properly.

Meanwhile, the AI world makes huge leaps forward monthly. Not yearly. Monthly. If you learned Copilot in 2023 and called it done, you’re falling behind while convincing yourself you’re staying current. The gap between you and people actively learning these tools isn’t just widening. It’s compounding like interest you can’t afford.

The Manager’s Blind Spot might be the most concerning. I’m hearing more managers wonder if they still need developers at all. AI can write code, ship features, fix bugs. Why keep investing in expensive engineering talent when AI does it faster and cheaper?

I think this is a dangerous miscalculation. They do still need developers. Desperately. But they need a fundamentally different kind. They need developers who can see the whole picture, who can challenge AI when it’s wrong, who understand both the product vision and the code architecture deeply enough to orchestrate AI effectively.

From Privates to Generals
#

Think about it this way: if AI can write the code, you don’t need code writers anymore. You need generals who can command an AI army.

I mean this literally. In military terms, a private follows orders and executes tasks. A general orchestrates entire campaigns: seeing the terrain, understanding the objective, marshaling resources, adapting to changing conditions, and making strategic decisions that ripple across the entire operation.

That’s what developers need to become: people who can define the business problem, set architectural constraints, establish quality bars, plan rollout strategy, and then marshal multiple AI tools to execute on that vision while maintaining coherence across the system. People who spot when the AI is headed down the wrong path, not because they read every line of generated code, but because they understand the system deeply enough to catch the architectural smell.

The private-to-general shift isn’t about seniority. It’s about thinking level. I’ve seen 25-year-old developers who think like generals and 45-year-old senior engineers who still think like privates. The generals understand systems, trade-offs, second-order effects. The privates understand syntax.

Most managers are still hiring and evaluating for privates while wondering why their team can’t handle complexity. They’re measuring lines of code, tickets closed, features shipped (all private-level metrics). They should be measuring systems thinking, architectural coherence, the ability to spot when AI suggestions don’t fit the bigger picture, and the judgment to maintain quality at AI-augmented speed.

The Invisible Barriers
#

From what I’ve observed working with teams going through this transition, there seem to be five core barriers:

The Fundamentals Gap: I’ve interviewed developers who learned to code entirely in the AI era. They’ve never written a hundred lines without Copilot running. They can ship features fast, but they can’t debug when the AI steers them wrong because they’re missing the mental models that tell you when something smells off. It’s like someone who learned to navigate exclusively with GPS suddenly needing to read a map and orient themselves by landmarks. The skill atrophied before it fully developed.

The Management Gap: When AI handles syntax, what remains is collaboration, problem decomposition, and creative solutions to ambiguous problems. But many engineering managers rose through the ranks by being excellent individual contributors. They know how to review a pull request, but not how to review someone’s AI collaboration process. They can spot a memory leak, but they can’t spot a team that’s becoming dependent on tools that mask their fundamental skill gaps.

The Ethics and Security Blind Spot: Bias in AI-generated code isn’t just a headline. I’ve heard about recommendation algorithms that worked perfectly in testing but systematically disadvantaged certain user groups in production because the training data was skewed. Data privacy leaks happen when someone prompts ChatGPT with actual customer data to debug an issue, and suddenly proprietary information is in OpenAI’s training corpus. These risks are real and can be project killers.

The Burnout Nobody Saw Coming: I know a developer (call him Jason) who went from energized to exhausted in several months of heavy AI use. He wasn’t working more hours. But the cognitive load was crushing him. Before AI, natural breaks were built into his workflow: write code, get stuck, think through the problem, research solutions. With AI, the suggestions come instantly. The code appears. The tests pass. The features ship. There’s no natural stopping point. Jason told me: “I used to finish a feature and feel done. Now I finish a feature and immediately have three AI-generated options for the next one waiting for review. I’m not coding more, but I’m deciding constantly. My brain never gets to rest.” The pressure isn’t about hours anymore. It’s about attention.

The Skill Gap: AI won’t make engineers obsolete. It’ll automate the repetitive work and free you for complex problem-solving. But only if you develop those complex problem-solving skills. If you spend all your time prompting and none of your time learning fundamentals, you’re not building a career. You’re becoming an AI operator. And when the AI gets better, what value do you bring?

What Works for Managers
#

If you lead a team or a group, you’re in a position to shape how AI gets adopted. But first, get honest with yourself about what you actually need. You don’t need a team that can write code faster. You need a team of AI generals.

Here’s what seems to be working from what I’ve observed:

Institute AI literacy training, but make it real. I suggested one team try “fundamentals Fridays”: for two hours every Friday afternoon, no AI tools. Period. They worked through algorithm problems from scratch, debugged performance issues with just a profiler and their understanding of systems, and reviewed code the old-fashioned way. The first few weeks, developers hated it. Three months in, something shifted. They started catching subtle bugs in AI-generated code they would have missed before. They became the team’s quality gatekeepers, not because they rejected AI, but because they could evaluate it critically. Meanwhile, teams I know of that went all-in on AI without any fundamentals training have seen much higher incident rates and more senior-engineer burnout.
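If “debugged performance issues with just a profiler” sounds abstract, here’s a rough sketch of what one of those no-AI exercises can look like in Python. The scenario and names are invented for illustration; the exercise is reading the profiler output yourself and explaining the hot spot before touching the code.

```python
import cProfile
import pstats

def count_duplicates(order_ids):
    # Deliberately quadratic: a membership test against a list inside a loop.
    seen = []
    duplicates = 0
    for order_id in order_ids:
        if order_id in seen:          # O(n) scan on every iteration
            duplicates += 1
        else:
            seen.append(order_id)
    return duplicates

if __name__ == "__main__":
    order_ids = [i % 2_000 for i in range(20_000)]
    profiler = cProfile.Profile()
    profiler.enable()
    count_duplicates(order_ids)
    profiler.disable()
    # Nearly all the time lands in count_duplicates itself rather than in
    # anything it calls; that's the cue to look at its per-iteration work.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
```

The fix (a set instead of a list) is trivial. The value of the exercise is being able to read the profile and explain why the function is slow without asking an assistant.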

Set KPIs around quality, not just speed. Track code review depth. Measure incident resolution time and root cause quality. Monitor technical debt accumulation. If you only measure velocity, you’ll get velocity at the cost of everything else that matters.

Prioritize soft skills development. Run exercises where developers explain AI outputs in plain English to non-technical stakeholders. If they can’t explain why the AI suggested an approach, they probably shouldn’t ship it.

Implement ethical guidelines before you need them. Create clear policies for AI use: what data can go into prompts, what outputs require human review, how to audit for bias, what the security boundaries are. The teams that avoid serious incidents aren’t the lucky ones; they’re the ones that thought through the risks ahead of time.

Promote work-life balance aggressively. Enforce no-AI-after-hours rules if you need to. Set clear boundaries to prevent the 24/7 treadmill. Burnout destroys teams slowly, then all at once.

Invest in upskilling with real budget and real time. McKinsey’s research highlights that AI accelerates innovation in software development, but only with skilled teams. Make continuous learning part of the job, not something people do on weekends.

What Works for Developers
#

If you’re a developer, you have more control over your trajectory than you might think. Don’t wait for your company to figure this out. Take ownership of your growth.

Master the fundamentals alongside the tools. Spend time every week coding without AI. Implement algorithms from scratch. Debug performance issues using only profiling tools and your understanding of systems. This feels inefficient in the moment. You could ship faster with Copilot. But this is the time investment that makes you valuable. When you’re the person in the room who can debug the AI’s output, who can spot architectural problems before they ship, who can make trade-offs that the model can’t understand, that’s when you become indispensable.

Stay actively current, not passively aware. The AI landscape moves at a pace I’ve never seen before in my career. What’s cutting-edge this month is table stakes next month. One way to stay up to date is to follow me - I regularly share insights about new AI developments and how they impact software development. Beyond that, learn one new AI-related skill or tool every month, minimum. Not just surface-level “I tried it once.” Actually integrate it into your workflow and understand its strengths and limitations. Read about what’s working in production. Try new models when they drop. Understand what changes when context windows expand from 200K to 1M tokens. Stop lying to yourself that minimal engagement is enough. The gap is widening monthly.

Hone your soft skills deliberately. This isn’t fluffy advice. It’s career-critical. Join every code review you can. Present your work to the team regularly. Practice explaining technical decisions to non-technical people. Work on your writing. Clear documentation is a superpower in an AI-augmented world. AI can’t replace your storytelling. It can’t replicate your ability to build consensus, to read the room, to know when to push an idea and when to let it go.

Stay ethical and secure by default. Always validate AI outputs for bias and security implications. Make it a habit. Study real cases of AI projects that failed, not to be scared, but to learn the patterns of what goes wrong. When you’re prompting, be paranoid about what data you’re including.
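As a small example of what that paranoia can look like in practice, here’s a sketch of a redaction pass you might run over text before it goes anywhere near a prompt. The patterns are illustrative only, and the log line is invented; a real setup would lean on a vetted PII- and secret-scanning tool plus a policy for data that must never leave your environment at all.

```python
import re

# Illustrative patterns only; not a complete PII or secrets filter.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "<SSN>"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "<CARD_NUMBER>"),
    (re.compile(r"(api[_-]?key|token)\s*[:=]\s*\S+", re.IGNORECASE), r"\1=<SECRET>"),
]

def scrub(text: str) -> str:
    """Replace obvious identifiers and secrets before text leaves your machine."""
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text

# The log line you were about to paste into a chat window:
log_line = "Payment failed for jane.doe@example.com, card 4111 1111 1111 1111, api_key=sk-live-abc123"
print(scrub(log_line))
# -> Payment failed for <EMAIL>, card <CARD_NUMBER>, api_key=<SECRET>
```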

Manage your time and energy like the finite resources they are. Track your productivity not just in features shipped, but in energy levels and work satisfaction. When you notice the treadmill speeding up, push back. The fastest way to stall your career is to burn out and need many months to recover.

The Uncomfortable Truth About What Comes Next
#

Remember Marcus and Sarah from the beginning? Same tools, same company, similar experience. One caused a six-figure production incident. The other is becoming a more capable engineer every week.

The gap between them isn’t widening linearly. It’s widening exponentially.

One year from now, Marcus will be even more dependent on AI because that’s the only way he knows how to work. When the AI fails (and it will, because all tools fail), he’ll be stuck. When his manager finally realizes he’s been shipping fast but shallow, his career trajectory will have already calcified.

Sarah will be leading architecture discussions. She’ll be mentoring other developers on how to use AI effectively. She’ll be the person who gets pulled into critical incidents because she can diagnose systemic problems, not just fix symptoms. She’ll be positioned for the next level of responsibility because she’s demonstrated judgment, not just velocity.

The market is already splitting, and it’s splitting fast. There are developers who think deeply, paired with AI that moves fast. There are managers who lead boldly, building teams that thrive because of AI, not despite it. These people are pulling ahead at a pace that would have seemed impossible five years ago. They’re not working longer hours. They’re working with deeper understanding and sharper judgment.

Then there are people getting left behind, not because they’re not using AI, but because they’re using it wrong. They’re over-relying without building foundations. They’re resisting out of fear. They’re engaging halfway and calling it done. They look productive today, but they’re accumulating a debt (technical, intellectual, professional) that will come due in ways they don’t yet understand.

McKinsey’s 2025 outlook shows that AI’s impact grows when combined with human ingenuity, not when it replaces it. The differentiator isn’t whether you use AI. By 2028, everyone will. The differentiator is whether you use it as a boost or a crutch. Whether you’re becoming more capable or more dependent. Whether you’re building judgment or eroding it.

Your Move
#

Marcus can still become Sarah. Sarah could still become Marcus if she gets lazy. The trajectory isn’t fixed, but it’s compounding, and the gap widens every month.

If you’re a manager: Your job right now is to build teams of generals, not privates. That means investing in skills deliberately, setting boundaries aggressively, creating psychological safety for experimentation, and holding quality bars even when it’s easier to ship fast and sloppy. It means measuring the right things: systems thinking, architectural coherence, AI collaboration effectiveness, judgment under pressure.

If you’re a developer: Your job is to become someone who elevates AI, not someone who’s elevated by it. That means mastering fundamentals while learning tools. Staying actively current, not passively aware. Building soft skills that AI can’t replicate. Maintaining the judgment that separates generals from privates. Treating AI as a thinking partner, not an autopilot.

The AI era isn’t about surviving. It’s about succeeding. The people who succeed will be the ones who overcome these barriers deliberately, who build both their AI collaboration skills and their independent judgment in parallel, who understand that velocity without understanding is just speed toward the cliff.

Six months from now, you’ll either be further ahead or further behind than you are today. The compounding has already started. The question isn’t whether the AI era is here. It’s whether you’ll be one of the people who define it or one of the people left wondering what happened.

So here’s my question for you: Which path are you choosing today?

Not tomorrow. Not when you have more time. Not when things settle down. Today.

What’s your first step?


The gap between teams that successfully navigate the AI transition and those that struggle often comes down to intentional strategy around skill development and quality standards. If you’re wrestling with how to build AI-augmented teams that maintain deep engineering capability, I’m always up for a conversation.
