
The End of Courses: Learn From AI Like a Toddler, Or Become Obsolete

·2183 words·11 mins·
Pini Shvartsman
Started in server rooms. Now I run engineering orgs where AI agents ship alongside humans. I’ve built teams across continents, infrastructure from first commit, and an AI hackathon that changed how 50+ engineers think about their craft. I write about all of it.

Remember when building an application required months of upfront learning? You’d buy a 40-hour video course, read through documentation, and painstakingly memorize syntax before writing a single line of logic.

Today, an AI agent builds that same application in three minutes from a single prompt.

We’re standing at a massive crossroads—not just in software development, but in how humans acquire knowledge. And most people haven’t realized yet that the entire learning model they grew up with just broke. We need to completely relearn how to learn.

The old learning model is dead

For twenty years the path was the same. Read the book. Buy the course. Follow the tutorial. Build the toy project. Then, eventually, attempt something real. Learning was structured, linear, and almost entirely theory-first.

That model is finished. And the data is already catching up to what everyone can feel.

The coding bootcamp industry—the market that turned “learn to code in 12 weeks” into a $4B business—collapsed through 2024–2025. Entry-level roles got automated or outsourced. Programs that didn’t rebuild around AI shut down. The survivors pivoted from “teach you to write code” to “teach you to work alongside agents.” On Udemy and Coursera, the courses people actually buy now have to have been updated in the last 12 months, or they’re teaching deprecated APIs. The half-life of “learned knowledge” collapsed.

But the deeper shift isn’t the market. It’s the cognitive model underneath.

I wrote before that AI didn’t change the work, it changed the sequence. The same thing is happening to learning. You’re no longer supposed to load the theory first and then apply it. You apply first, and the theory arrives on demand, exactly when you need it.

Learning is now intuitive, experiential, and strictly on-the-job.

Learn like a toddler

Think about how toddlers learn to speak.

Nobody hands a two-year-old a grammar textbook. They don’t attend a workshop on verb conjugation. They hear words in context, try them, get corrected, try again. They absorb meaning through constant exposure, trial, error, and interaction with their environment. The adult in the loop isn’t delivering lectures. The adult is a patient partner who keeps responding, correcting, and raising the bar.

That’s exactly how we have to work with AI now.

There’s actual learning science behind this. Piaget’s stages of cognitive development put hands-on experience and interaction at the center of how humans build real understanding. A recent Springer paper on developmentally aligned AI argues that AI tools work best when they act as scaffolding, not substitution—temporary support that strengthens the learner’s internal capacity and is gradually removed as competence grows.

Scaffolding means every time the agent generates something, you engage with it, understand it, and internalize what you didn’t know before. Substitution means the agent does it for you, and next time you need the same thing, you still can’t do it without the agent. Both look identical in the commit history. They feel completely different six months in.

This is the choice hiding in every single prompt.

As agents expose us to new architectures, libraries, frameworks, and design patterns on the fly, we have a choice: we can blindly accept the output, or we can choose to learn from it critically. I choose to learn. I choose to treat the agent—which has access to effectively all the knowledge available in the world—as a sparring partner for deep, on-the-job learning.

A sparring partner is different from a thinking partner. A thinking partner you lean on. A sparring partner pushes back. The first makes you weaker over time. The second makes you stronger. Pick the right one.

The crossroads: Operator vs. Kolboynik Architect

Every developer right now is standing at the same fork. Two paths. Very different outcomes.

Path 1: The Operator (accept and ship)

You accept exactly what the agent generated. You never interrogate the design. You never ask why this database, this pattern, this trade-off. You optimize for throughput.

Honestly? This is perfectly fine for a while. Nobody expects you to match the agent’s raw output speed or carry its encyclopedic knowledge of every framework. If your only goal is absolute scale—ship more, faster, cheaper—you can craft excellent skill.md files, feed the agent the right instructions, and trust it almost blindly to produce working applications. With a small asterisk, but you get the point.
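The article mentions skill.md files without showing one. As a purely hypothetical sketch—the sections, stack, and wording here are my own invention, not a documented format—an Operator-style instruction file might look like:

```markdown
# skill.md — hypothetical agent instructions (illustrative only)

## Stack
- TypeScript + Node, Postgres, single-container deployment.

## Non-negotiables
- Every endpoint ships with tests; missing coverage means "not done".
- Prefer boring, well-documented libraries over novel ones.

## When unsure
- List the alternatives you rejected, and why, before writing code.
```

Note that even this Operator artifact encodes judgment: the "non-negotiables" section is exactly the kind of standard-setting the article describes later.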

But here’s the warning. If all you do is operate the AI and accept its outputs, you’re a prompt-runner. And a prompt-runner can—and will—be replaced by a motivated middle-schooler.

This isn’t hyperbole. Job postings for prompt engineer roles fell 79% from their 2023 peak. Microsoft’s own workforce survey ranked “prompt engineer” second-to-last among roles companies planned to add in the next 18 months. The reason is brutal: as models got better at intent resolution, expert prompting went from providing 40% improvement in 2023 to 4–6% today. The specialty evaporated because the skill stopped being scarce.

I’ve also written about this danger before—the quiet divide between AI operators (fast with prompts, lost when tools fail) and AI-augmented engineers (fast and capable of reasoning from first principles). Both look identical for six months. The gap between them compounds forever after that.

Path 2: The Kolboynik Architect (critical learning)

If you want to stay relevant, you have to shift from coder to “Kolboynik”—the Hebrew term for the ultimate generalist who knows a bit of everything, about everything. Not a master of one domain. A master of connecting domains.

The market is already pricing this shift in. Roles requiring five or more distinct skill domains grew 27% in Q3 2025. Roles with a single dominant skill cluster fell 31% year-over-year. The reason is painfully simple: narrow specialization is exactly what AI replicates most efficiently. Depth in one narrow thing doesn’t make you irreplaceable anymore. It makes you replaceable.

Generalists win because they do the thing agents are still bad at—synthesizing across ambiguous, contradictory, unstructured problem spaces. Bridging systems. Catching second-order effects. Knowing which question to ask next.

Becoming a Kolboynik doesn’t mean you read every book in the library. It means you treat every agent output as a doorway into a new domain you now need to understand just enough to judge. Instead of treating the AI’s output as the finish line, you treat it as the starting point for a deep conversation.

Don’t dive into the lines of code. Zoom out.

  • Question the design. Why did the agent choose this specific database structure? What alternatives did it silently reject? What would fail at 10x scale?
  • Challenge the constraints. Ask it about security vulnerabilities, edge cases, cloud costs, compliance implications. Make it show its work.
  • Interrogate the defaults. Every framework choice is an opinion. Every pattern comes with a cost. If you can’t articulate the trade-off, you don’t understand what shipped.
  • Guide the process. The agent knows it should write tests. Reminding it sets the standard. Over time, it learns that test coverage is a non-negotiable part of what “done” means on your team.

This deep-dive conversation will probably take longer than the agent took to write the code in the first place. And that is exactly the point. You are the human in the loop, bringing judgment, context, and critical thinking to the table. Everything else got cheap. Judgment is the only thing still scarce.

The cost of skipping the conversation

Here’s what the data says about developers who skip the deep-dive and just accept output.

A 2025 MIT Media Lab study found students using AI assistants showed measurably decreased neural engagement and less ownership over their work. Anthropic ran a randomized trial where developers learning a new library with AI scored 17 percentage points lower on mastery than those who learned without it. The biggest gap was in debugging—the one skill you most need when AI-generated code breaks.

More recent research has given this pattern names. Comprehension debt is the gap between how much code you’ve shipped and how much you actually understand. Cognitive debt is the gradual degradation of your team’s problem-solving capability from disuse. Intent debt is the loss of documented rationale in code and commits—the “why” that goes missing when the prompt is the only record.

A 2026 paper on cognitive offloading in agile teams found that AI-only planning significantly degraded risk capture rates. The teams performing best had a hybrid pattern: let AI do estimation and formatting, but require human deliberation for risk assessment and ambiguity resolution. The “boring” cognitive work is exactly the work you can’t offload.

And on the perception side, the numbers keep embarrassing us. Developers feel about 20% faster with AI. Objective measurement shows many of them are actually slower. I’ve referenced METR’s experienced-developer study before: 20% perceived speedup, 19% measured slowdown. The feeling is real. The feeling is wrong.

Karpathy—who literally hasn’t typed a line of code since December 2025—is the clearest voice on what replaces typing. Not passivity. Direction, taste, judgment, oversight, iteration. His own work on MicroGPT was explicitly designed “to demystify the algorithm so both humans and future agents can understand and extend it.” Even the person farthest along the agent curve is obsessed with understanding, not acceptance.

The developers who will compound in value over the next five years aren’t the ones shipping the most agent output. They’re the ones who, for every shipped feature, can also tell you exactly why it exists, what it costs, where it breaks, and what it looked like before they pushed back on the agent’s first answer.

What critical learning looks like in practice

This isn’t abstract. It’s a set of small habits you either have or you don’t.

Pause after every accepted suggestion. Before merging an agent’s output, ask yourself one question: if the agent disappeared tomorrow, could I modify this confidently? If no, you haven’t learned anything from this PR. You just shipped borrowed knowledge.

Turn every unfamiliar pattern into a 10-minute tangent. The agent used an event-sourced pattern you’ve never seen? Stop. Ask it to explain why. Ask for two alternatives it considered. Ask for the trade-offs. Ten minutes of critical conversation now beats a 40-hour course later that you’ll never take.

Ask for the rejected options. “What did you consider before choosing this?” is the single highest-leverage prompt I use. It forces the model to expose trade-off space that it otherwise collapses into a confident recommendation.

Argue with the model on purpose. Even when it’s probably right. Especially when it’s probably right. The act of constructing a counter-argument is where your understanding actually forms. A sparring-partner workflow beats a thinking-partner workflow every time, for exactly this reason.

Keep a “things I didn’t know yesterday” log. One file. One line per learning. Review it weekly. It’s the cheapest learning system you’ll ever run, and it’s the closest replacement we have for the structured curriculum that just died.

Re-derive the answer without the model occasionally. The AI-off hours idea I wrote about earlier applies to learning, not just execution. Your mental models don’t build themselves—they atrophy unless you use them.

If that sounds slower than just shipping the agent’s output, it is. By design. Slower code path, faster growth curve. You’re choosing to invest the difference, not spend it.

The big picture

We’re past the era where your value was measured by execution speed. Execution is the cheap part now. Generation is the cheap part. First drafts are free.

Your value is now determined by your ability to connect the dots, see the big picture, and deeply understand how systems behave together. It’s determined by the questions you choose to ask, the constraints you choose to enforce, and the second-order effects you choose to catch before they ship. The industry calls this being an AI Architect Programmer. I still prefer Kolboynik. Same idea. Less buzzword.

And here’s the part that makes me hopeful rather than cynical: the barrier to becoming that person just got dramatically lower. The agent is the best teacher any of us have ever had access to. Infinite patience. Infinite availability. Knowledge of every framework, paper, and pattern. The only thing it can’t do is decide to learn. That’s still on you.

So stop buying courses. Stop pretending that six more hours of passive video will prepare you for the next thing. The next thing ships in three minutes from someone else’s prompt.

Stop learning syntax. Start learning architecture. The agent has all the answers. You are the only one who knows which questions to ask.

Which path are you on—Operator or Kolboynik? And what’s the last thing the agent taught you that you couldn’t have Googled? Find me on X, Telegram, or LinkedIn. I’d genuinely like to hear it.


Disclaimer: This article references specific studies, surveys, and public commentary for illustrative and educational purposes, including work from Anthropic, METR, MIT Media Lab, Microsoft Research, arXiv preprints, Andrej Karpathy, and industry analyses available at the time of writing. I have not independently verified all claims. The analysis and opinions expressed are my own. I have no financial interest, business relationship, or affiliation with any companies or tools mentioned. This is commentary, not investment, legal, career, or business advice.
