The Agile Manifesto established a principle in 2001 that has guided software teams for nearly a quarter of a century: "At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behaviour accordingly." That principle is the foundation of the retrospective — arguably the most important ceremony in agile practice.

And yet, the data suggests most retrospectives aren't working.

Platform data from EasyRetro, one of the largest retrospective tools globally, shows the average completion rate of retrospective action items is approximately 0.33%. Easy Agile's product usage data from TeamRhythm tells a less extreme but still sobering story: teams were completing only 40-50% of their retrospective action items before the company introduced specific tracking features.

The retrospective isn't broken as a concept. It's broken as a practice. Teams talk, identify issues, generate action items — and then those items disappear into the noise of the next sprint. AI has the potential to change this. But the research says it's more complicated than bolting a language model onto your retro board.

0.33%
Average action item completion rate across retrospective meetings on the EasyRetro platform.
Source: EasyRetro, "Retrospective Statistics." Platform-wide data across 100+ countries.

The Harvard Evidence: AI as a "Cybernetic Teammate"

In March 2025, researchers from Harvard Business School, the University of Pennsylvania's Wharton School, ESSEC Business School, and Procter & Gamble published "The Cybernetic Teammate" — a pre-registered, randomised controlled trial with 776 professionals at P&G. It is one of the most rigorous field experiments on AI's impact on collaborative work conducted to date.

The study's design was a 2x2 experiment: participants were randomly assigned to work either with or without AI (GPT-4, accessed via Microsoft Azure), and either individually or in two-person teams, on real product innovation challenges across four P&G business units.

The findings have direct implications for how we think about AI in collaborative ceremonies like retrospectives:

Individuals with AI matched the performance of two-person teams without AI. AI-enabled individuals improved solution quality by 0.37 standard deviations on average, while teams without AI improved by 0.24. This suggests AI can replicate certain collaborative benefits — pattern recognition, idea diversification, and knowledge synthesis — that traditionally required multiple human perspectives.

AI broke down functional silos. Without AI, R&D professionals tended to suggest technical solutions while commercial professionals leaned towards commercial proposals. With AI, both groups produced more balanced solutions regardless of their functional background. In a retrospective context, this suggests AI could help teams see beyond their immediate domain.

AI-assisted participants worked 12-16% faster and reported higher positive emotions and lower negative emotions during the work. The emotional finding is notable — it suggests AI may reduce the cognitive load of information processing, freeing humans to focus on relationship and judgment work.

However, the study also found that teams with AI and teams of individuals with AI performed comparably — meaning AI doesn't necessarily multiply the benefit of human collaboration. It augments individuals powerfully, but its additive value to an already-collaborative team is more modest.

What AI Can Actually Do in Retrospectives

Based on the available research and current tooling landscape, there are four areas where AI adds genuine value to the retrospective process. None of them involve replacing the human conversation.

1. Pattern Detection Across Multiple Sprints

The single most valuable AI application for retrospectives is identifying patterns that humans miss because they're too close to the work. As noted in Scrum.org's analysis of AI in agile practice, a skilled Scrum Master using AI "can analyse team dynamics across six retrospectives to identify systemic impediments that manual review would probably miss."

This isn't theoretical. If your retrospective notes are stored digitally (in Confluence, Miro, or a retro tool), AI can surface recurring themes, detect when the same issue appears sprint after sprint in different language, and flag improvement areas that the team has identified but never resolved.
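As a minimal illustration of the idea, the sketch below flags themes that recur across several sprints' worth of retro notes. The theme keywords and note data are hypothetical, and real tooling would use an LLM or embedding model to cluster free text rather than crude substring matching:

```python
from collections import defaultdict

# Hypothetical theme keywords. A production system would cluster free-text
# notes with an LLM or embeddings; substring matching is just for illustration.
THEMES = {
    "slow code review": ["review", "pr", "approval"],
    "flaky tests": ["flaky", "intermittent"],
    "unclear requirements": ["requirements", "scope", "acceptance"],
}

def detect_recurring_themes(retro_notes, min_sprints=3):
    """Flag themes appearing in at least `min_sprints` different sprints.

    retro_notes: dict mapping sprint id -> list of free-text notes.
    Returns {theme: sorted sprint ids where the theme appeared}.
    """
    hits = defaultdict(set)
    for sprint, notes in retro_notes.items():
        text = " ".join(notes).lower()
        for theme, keywords in THEMES.items():
            if any(kw in text for kw in keywords):
                hits[theme].add(sprint)
    return {t: sorted(s) for t, s in hits.items() if len(s) >= min_sprints}

notes = {
    12: ["PRs waiting days for approval", "demo went well"],
    13: ["review queue still long", "intermittent CI failures"],
    14: ["PR approval is a bottleneck again"],
}
print(detect_recurring_themes(notes))
```

Note that the "slow code review" theme surfaces even though each sprint phrased it differently — the same-issue-in-different-language problem the paragraph above describes.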

2. Sentiment Analysis of Team Communications

A 2025 study published in MDPI's Applied Sciences journal examined the integration of sentiment analysis into agile feedback loops. The research explored how natural language processing (NLP) could analyse team feedback to complement traditional retrospective methods, which the authors noted "are often constrained by their episodic nature and susceptibility to subjective interpretation."

Sentiment analysis can process communication data from tools like Slack or Jira comments to identify morale trends that the team may not articulate in a retrospective. A comment like "The data team has been very slow in responding, but when they finally do, their work is accurate" reads as positive on the surface. Sentiment analysis can flag the underlying frustration — the kind of signal that often gets lost in group discussion.

Tools like TeamRetro have already implemented AI-powered meeting summaries that capture key topics, themes, and overall team sentiment from retrospective sessions, providing a documented baseline that teams can track over time.

3. Action Item Tracking and Accountability

Given the dire action item completion rates documented by EasyRetro (0.33%) and Easy Agile (40-50% before tracking), AI-powered follow-up is arguably the highest-ROI application. Easy Agile's data showed completion rates jumping to 65% after introducing features that surfaced and tracked incomplete actions — a 15-25 percentage-point gain from visibility alone.

AI can extend this further by automatically linking retrospective action items to backlog items, sending contextual reminders at the right moment in the sprint, and flagging at the start of each retrospective exactly which previous items remain unresolved and how long they've been open.
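The "flag what's still open and how long it's been open" step doesn't even require AI — it's a small piece of bookkeeping, sketched below with hypothetical item names. An AI layer would add the contextual reminders and backlog linking on top:

```python
from dataclasses import dataclass

@dataclass
class ActionItem:
    title: str
    created_sprint: int
    done: bool = False

def open_items_report(items, current_sprint):
    """List unresolved action items, oldest first, with age in sprints.

    A hypothetical retro-opening helper: surface exactly which previous
    items remain open and how long they've been waiting.
    """
    open_items = [i for i in items if not i.done]
    open_items.sort(key=lambda i: i.created_sprint)
    return [(i.title, current_sprint - i.created_sprint) for i in open_items]

items = [
    ActionItem("Automate release notes", created_sprint=10, done=True),
    ActionItem("Split oversized stories", created_sprint=11),
    ActionItem("Fix flaky checkout test", created_sprint=13),
]
for title, age in open_items_report(items, current_sprint=14):
    print(f"{title}: open for {age} sprint(s)")
```

Opening each retrospective with this report is the visibility mechanism behind Easy Agile's completion-rate improvement.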

4. Preparation and Pre-Analysis

AI can prepare a pre-retrospective analysis of the sprint: what the sprint goal was, whether it was achieved, how cycle time compared to recent sprints, what the deployment and incident data shows, and what themes emerged from code review comments or PR discussions. This replaces the 10-15 minutes of "what happened this sprint?" warmup with a shared factual foundation, leaving more time for the high-value discussion: why things happened and what to do about them.
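The factual foundation described above amounts to a small structured summary. A minimal sketch, with illustrative field names rather than any specific tool's schema, might look like:

```python
def sprint_summary(goal, goal_met, cycle_times_days, deployments, incidents):
    """Assemble a factual pre-retrospective summary.

    Field names are illustrative, not a specific tool's schema. In practice
    the inputs would come from your issue tracker and deployment pipeline.
    """
    avg_cycle = sum(cycle_times_days) / len(cycle_times_days)
    return {
        "sprint_goal": goal,
        "goal_achieved": goal_met,
        "avg_cycle_time_days": round(avg_cycle, 1),
        "deployments": deployments,
        "incidents": incidents,
    }

summary = sprint_summary(
    goal="Ship self-serve billing",
    goal_met=True,
    cycle_times_days=[2.0, 4.5, 3.0, 6.5],
    deployments=5,
    incidents=1,
)
print(summary["avg_cycle_time_days"])  # -> 4.0
```

Presenting something this plain at the start of the session is what frees the discussion time for the why, not the what.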

What the DORA Data Warns Us About

DORA's 2024 Accelerate State of DevOps Report found that AI adoption was accompanied by an estimated 1.5% decrease in delivery throughput and a 7.2% reduction in delivery stability. For the second consecutive year, AI tooling correlated with worsened software delivery performance.

The report concludes that "improving the development process does not automatically improve software delivery — at least not without proper adherence to the basics." Additionally, 39% of respondents reported little to no trust in AI-generated code quality.

The implication for retrospectives is clear: AI is not a substitute for disciplined practice. Teams that don't have functioning retrospectives won't fix them by adding AI. Teams that do have functioning retrospectives can use AI to make them substantially more effective.

Where AI Doesn't Belong in Retrospectives

Not every application of AI to retrospectives is beneficial. Some are actively harmful.

AI should not replace the facilitator. The retrospective exists as a space for human reflection, psychological safety, and interpersonal trust. The Scrum.org analysis of AI and agile is direct on this point: the biggest value of AI in agile is that it "handles information processing while humans focus on judgment, relationship-building, and collaborative decision-making." An AI facilitator removes the human element that makes retrospectives work.

AI should not generate the improvement actions. The team needs to own its improvements. AI-generated action items may be technically sound but will lack the team's commitment and context. AI's role is to surface the data and patterns; the team decides what to do about them.

AI should not assess individual performance. Retrospectives are team events. Using AI to analyse individual contribution patterns or sentiment can rapidly destroy psychological safety. The moment team members believe their comments are being individually assessed, honest reflection stops.

AI should not make the retrospective feel automated. Atlassian's research on retrospectives emphasises that teams with higher levels of reflexivity — "the extent to which team members collectively reflect upon the team's objectives, strategies, and processes" — are more likely to innovate, identify problems, adapt to change, and implement new ideas. Reflexivity is an active, human process. AI should enhance it, not mechanise it.

The Expertise Multiplier Effect

The Scrum.org Agile AI Manifesto analysis articulated a principle that applies directly to AI-powered retrospectives: "Expertise plus AI creates a competitive advantage. AI without expertise creates expensive noise."

An experienced Scrum Master who already knows how to facilitate effective retrospectives will use AI to surface deeper patterns, track action items more rigorously, and make pre-retrospective preparation faster. An inexperienced facilitator using AI may produce retrospectives that look polished — complete with sentiment scores and trend analyses — but fail to create the conditions for honest reflection and genuine improvement.

The Dell'Acqua et al. P&G study supports this. While AI boosted performance for everyone, the researchers noted that AI was particularly powerful for individuals working alone — suggesting it functions best as a capability amplifier for skilled practitioners, not as a replacement for the skills themselves.

Key Takeaway

AI-powered retrospectives are not about automating team reflection. They're about giving skilled facilitators better data, identifying patterns across sprints that humans miss, and solving the action item follow-through problem that undermines continuous improvement.

The research is clear: AI amplifies whatever capability already exists. For organisations looking to build that capability, Agility Ops' FACT Applied AI Training is designed to turn non-technical professionals into confident AI operators — starting from the workflows they already use. Explore FACT Training.

A Practical Starting Point

If you want to introduce AI into your retrospective practice, here's an evidence-based approach:

Start with action item tracking. This addresses the biggest documented gap (completion rates of 0.33% on EasyRetro's platform and 40-50% in Easy Agile's data) and requires the least cultural change. Use your existing retro tool's tracking features, or create Jira issues for every action item and review them at the start of the next retrospective. Easy Agile's data shows this alone can push completion rates toward 65%.

Introduce sprint data summaries. Before each retrospective, prepare (or have AI prepare) a factual sprint summary: sprint goal achieved (yes/no), average cycle time, deployment count, incident count, and any notable patterns in the work data. Present this as the opening context — not as a judgment, but as shared evidence.

Pilot sentiment analysis carefully. If you choose to analyse communication sentiment, be transparent with the team about what's being analysed and why. Use it at the team level only, never at the individual level. Present it as one input among many, not as ground truth. The MDPI research notes that sentiment analysis is "constrained" and should complement — not replace — direct human feedback.

Protect the human core. Keep the facilitation human. Keep the conversation human. Keep the decision-making human. AI prepares the ground and follows up on the actions. Humans do the reflecting, the relating, and the deciding. That division of labour is where the value lives.

Build AI capability where it matters most — in your teams.

Agility Ops helps enterprises integrate AI into their existing workflows through strategic consulting, intelligent Jira tools, and Applied AI training designed for non-technical professionals. Australian-owned, practitioner-built.

Book a Discovery Call

References & Sources

  1. Dell'Acqua, F., Ayoubi, C., Lifshitz-Assaf, H., Sadun, R., Mollick, E., Mollick, L., et al. "The Cybernetic Teammate: A Field Experiment on Generative AI Reshaping Teamwork and Expertise." Harvard Business School Working Paper No. 25-043. March 2025. 776 professionals at Procter & Gamble. hbs.edu
  2. Mollick, E. "The Cybernetic Teammate." One Useful Thing blog, March 2025. Summary of findings. oneusefulthing.org
  3. Fortune. "What a study at consumer giant P&G says about AI's potential impact on teamwork." March 2025. Reports 12-16% speed improvement. fortune.com
  4. Google Cloud / DORA. "2024 Accelerate State of DevOps Report." AI adoption findings: -1.5% throughput, -7.2% stability, 39% low/no trust. dora.dev
  5. "Integrating Sentiment Analysis into Agile Feedback Loops for Continuous Improvement." MDPI Applied Sciences, Vol. 15(22), 12329. November 2025. mdpi.com
  6. EasyRetro. "Retrospective Statistics." Action item completion rate: ~0.33%. Data across 100+ countries. easyretro.io
  7. Easy Agile. "Why Retrospectives Fail: Fixing Action Item Follow-Through." 40-50% completion rate, 65% with tracking. easyagile.com
  8. Scrum.org. "The Agile AI Manifesto." Analysis of AI's role in agile practice. References Dell'Acqua et al. scrum.org
  9. Scrum.org. "Generative AI in Agile: A Strategic Career Decision." December 2025. scrum.org
  10. TeamRetro. "AI Tools That Make Agile Retrospectives Easier and Smarter." November 2024. AI-powered meeting summaries with sentiment analysis. teamretro.com
  11. Atlassian. "What are agile retrospectives?" Reflexivity research and retrospective best practices. atlassian.com
  12. MIT Sloan Management Review. "How Procter & Gamble Uses AI to Unlock New Insights From Data." December 2025. Davenport, T.H. & Bean, R. sloanreview.mit.edu