
When CI/CD Speaks Human: A Friendly Nudge to DevOps (and Developers)

Pini Shvartsman

I spend my days thinking about how to make engineering teams more effective. Whether it’s rolling out AI tooling that boosts developer productivity or exploring automation that eliminates the tedious parts of our workflow, I’m always looking for that next breakthrough that will let us focus on what actually matters: building great software.

That’s why GitHub Next’s Agentic Workflows project hit me like a lightning bolt. This isn’t just another automation tool; it’s a fundamental shift in how we’ll think about CI/CD, repository management, and team coordination.

What’s the idea?

GitHub Agentic Workflows transforms natural language markdown files into GitHub Actions that are executed by AI agents. You write automation in markdown instead of complex YAML, letting AI-powered decision making handle the details while maintaining GitHub’s native security and collaboration model.

The workflow is straightforward: install the GitHub CLI extension with gh extension install githubnext/gh-aw, describe your automation in a markdown file with frontmatter specifying triggers and permissions, then compile it to standard Actions YAML with gh aw compile. The system supports multiple AI engines (Claude, Codex, and others) and maintains security through sandboxed execution with minimal permissions.
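
In practice the authoring loop is just a couple of commands. The commands below are the ones named above; the file path is my assumption about where the workflow markdown lives, so treat it as illustrative:

# one-time setup: install the CLI extension
gh extension install githubnext/gh-aw
# author the automation as a markdown file with frontmatter,
# e.g. .github/workflows/issue-clarifier.md (the path is an assumption)
# compile it to standard GitHub Actions YAML that you can review and commit
gh aw compile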

This is explicitly a research demonstrator from GitHub Next and Microsoft Research, not a production product. The goal is to explore “Continuous AI”, the systematic, automated application of AI to software collaboration, and learn out in the open.

The design is Actions-first (familiar GitHub execution model) and engine-neutral (swap AI backends as needed). Your markdown source remains the source of truth, while the compiled YAML integrates seamlessly with existing GitHub workflows and governance.

How this will transform DevOps teams (if used carefully)

I’ve been watching multiple DevOps teams spend countless hours on repetitive investigative work—debugging CI failures, triaging flaky tests, writing post-mortems that follow the same patterns. Agentic Workflows could automate the tedious parts while keeping humans firmly in control.

Here’s what I’m most excited about:

Automated CI failure investigation — Think “CI Doctor” workflows that automatically investigate build failures and flakiness, then open Issues with their findings and suggested actions (a rough sketch follows at the end of this section). No more hours lost to repetitive post-mortem analysis. The AI does the legwork; your team makes the decisions.

Effortless status reporting — Weekly research reports and daily status updates delivered as scheduled Issues. Better visibility into what’s happening across your infrastructure without modifying a single pipeline. The information just appears where your team already looks.

Organization-specific guardrails — This is crucial. Role-based execution limits, “plan→apply” workflows with human approval checkpoints, and integrated MCP tools all running in sandboxed, network-confined environments. You get the automation benefits without losing control.

The key insight: your governance model doesn’t change. These workflows compile to standard GitHub Actions, so your existing review processes, permissions, and audit trails remain intact.
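
To make the CI Doctor idea concrete, here is a minimal sketch of what such a workflow file might look like, in the same frontmatter format as the example later in this post. The workflow name "CI" is a placeholder, and the create-issue safe output is my assumption about gh-aw's options (the post itself only shows add-comment), so verify it against the project docs:

---
on:
  workflow_run:
    workflows: ["CI"]
    types: [completed]
permissions: read-all
safe-outputs:
  create-issue:
---
# CI Doctor
If the triggering CI run failed, investigate the logs, identify the most
likely cause (including flaky tests), and open an issue summarizing the
findings and the suggested next steps.

Because the agent job itself stays read-only, the only write path is the issue it proposes, which keeps the blast radius small.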

How this will supercharge development teams

I’ve watched developers get buried under the administrative overhead of modern development—triaging issues, chasing missing PR details, manually updating documentation that should sync automatically. Here’s where I see Agentic Workflows making the biggest difference:

Intelligent triage that actually works — Workflows that request missing details from issue reporters, automatically categorize and label new issues, and reduce the noise that constantly interrupts focused development time. Finally, a way to maintain issue quality without developers playing 20 questions.

PR assistance with real context — Code-aware workflows that update documentation when APIs change, check dependencies for known issues, suggest fixes when PR builds fail, and identify opportunities to improve test coverage or performance. Crucially, all delivered through PRs that developers can review and approve—never silent changes to your codebase.

Continuous research and knowledge sharing — Workflows that create Issues with summaries of relevant trends, new tools, or techniques in your domain. Instead of wondering what you’re missing in the ecosystem, the information comes to you where you already work.

Here’s a simple example that captures the magic—an issue clarifier that runs when issues are opened:

---
on:
  issues:
    types: [opened]
permissions: read-all
safe-outputs:
  add-comment:
---
# Issue Clarifier
Analyze the current issue and ask for additional details if the issue is unclear.

That’s it. English instructions that compile to Actions YAML your team can review and govern.

Special caution regarding code changes

Here’s where I want to be crystal clear: any workflow that touches your actual codebase must go through pull requests for human review. The beauty of this system is that AI agents can suggest changes, improvements, and fixes, but they deliver them through the same PR process your team already trusts.

I’ve seen too many automation projects fail because they bypassed human oversight. The GitHub team got this right—workflows that modify code create PRs, not direct commits. This preserves your team’s ability to review, discuss, and reject changes that don’t make sense.
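
As a hedged sketch of what a code-adjacent workflow could look like under that constraint: the trigger and instructions below are illustrative, and the create-pull-request safe output is my assumption about what gh-aw offers, not something shown in this post.

---
on:
  push:
    branches: [main]
permissions: read-all
safe-outputs:
  create-pull-request:
---
# Docs Updater
If this push changed any public APIs, open a pull request that updates the
affected documentation. Do not touch application code; documentation only.

The agent can propose whatever it likes, but the only thing that ever lands in the repository is a reviewable pull request.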

My pragmatic advice

  • Start small and specific. Pick one repetitive task that’s eating your team’s time—issue triage, status reporting, or CI failure investigation.
  • Security is non-negotiable. Use the read-only defaults, explicit tool allow-lists, and human-visible outputs. This is research-grade software; treat it accordingly.
  • Governance doesn’t change. Because it compiles to Actions YAML, your existing review processes, branch protections, and policies still apply. This is an authoring tool, not a permission bypass.
  • Keep humans in the loop. The goal isn’t to eliminate human judgment—it’s to eliminate human busy work.

Why I think this is the future

I’ve spent years watching teams struggle with the gap between intent and implementation. Developers know what they want their CI/CD to do, but getting there requires wrestling with YAML syntax, learning platform-specific APIs, and debugging workflows that should just work.

Agentic Workflows flips this: you describe what you want, and the system handles the how. Your DevOps team keeps control over policies, permissions, and infrastructure. Your developers get to focus on features instead of YAML archaeology.

Most importantly, everything stays auditable, reviewable, and governed through the same processes your team already trusts.

Ready to try it?

If you’re curious (and you should be), the quick start is genuinely quick:

  1. Install the extension: gh extension install githubnext/gh-aw
  2. Add a sample workflow: gh aw add weekly-research -r githubnext/agentics --pr
  3. Set up your AI secret: gh secret set ANTHROPIC_API_KEY -a actions --body "<your-key>"
  4. Run it: gh aw run weekly-research
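
The weekly-research sample itself lives in githubnext/agentics and I haven't reproduced it here, but a scheduled report workflow in the same spirit might look roughly like this (the cron expression and the create-issue safe output are my assumptions):

---
on:
  schedule:
    - cron: "0 9 * * 1"
permissions: read-all
safe-outputs:
  create-issue:
---
# Weekly Research
Summarize notable changes, new tools, and discussions relevant to this
repository from the past week, and open an issue with the highlights.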

Start with something low-risk—issue triage, status reports, or CI failure investigation. Keep approvals enabled, review everything the system generates, and learn what works for your team.

Key resources:

  • githubnext/gh-aw (the GitHub CLI extension used throughout this post)
  • githubnext/agentics (the sample workflows, including weekly-research)

This is where engineering productivity is heading. The question isn’t whether AI will change how we automate our workflows—it’s whether we’ll be ready when it does.
