Sprint Reports — Measure, Learn, Improve

Every sprint generates a report. Use it during the sprint to monitor progress, during standups to review daily work, and after completion to run your retrospective. Sprint reports are the feedback loop that makes each sprint better than the last.


Accessing Sprint Reports

From the Sprints page, click Report on any sprint card — whether it's active or completed. The report opens as a full-page view with multiple panels.

Reports are available for all sprint states:

  • Planning — Limited data; useful for confirming assignment counts
  • Active — Live data, including real-time timers
  • Completed — Full historical snapshot

Summary Metrics

The top of the report shows key sprint-level metrics at a glance:

  • Completion Rate: Percentage of items that reached a terminal status (Done, Cancelled, etc.) by the end of the sprint
  • Estimation Accuracy: Ratio of estimated hours to actual logged hours; values closer to 1.0 indicate better accuracy
  • Total Items: All items assigned to this sprint
  • Completed Items: Items in a terminal status
  • In Progress: Items still in an active status at report time

Completion rate is the headline number. Track it sprint-over-sprint to see if your team's ability to scope and commit is improving.

Estimation accuracy is the learning number. If you consistently log 2× your estimates, your estimates are half what they should be — and you'll keep overcommitting sprints until that's addressed.

Tip: An estimation accuracy of 0.8–1.2 is healthy. Outside that range, investigate: are estimates being set at all? Are tasks scoped too broadly? Are items getting added mid-sprint without re-estimation?
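Both headline metrics can be derived from raw item data. A minimal sketch, assuming a simple list of item records (the field names and the set of terminal statuses are illustrative, not the product's actual schema):

```python
# Hypothetical item records; field names are illustrative, not the real schema.
items = [
    {"status": "Done",        "estimated_hours": 4.0, "logged_hours": 5.0},
    {"status": "Done",        "estimated_hours": 2.0, "logged_hours": 2.5},
    {"status": "In Progress", "estimated_hours": 6.0, "logged_hours": 3.0},
    {"status": "Cancelled",   "estimated_hours": 3.0, "logged_hours": 0.5},
]
TERMINAL = {"Done", "Cancelled"}  # assumed terminal statuses

# Completion rate: share of items in a terminal status.
completed = [i for i in items if i["status"] in TERMINAL]
completion_rate = len(completed) / len(items)  # 3 of 4 -> 0.75

# Estimation accuracy: estimated hours divided by logged hours.
total_estimated = sum(i["estimated_hours"] for i in items)
total_logged = sum(i["logged_hours"] for i in items)
estimation_accuracy = total_estimated / total_logged  # 15.0 / 11.0 ≈ 1.36

print(f"Completion rate: {completion_rate:.0%}")
print(f"Estimation accuracy: {estimation_accuracy:.2f}")
```

An accuracy above 1.0 (as here) means the team estimated more hours than it logged; below 1.0 means work took longer than estimated.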


Status Breakdown

The Status Breakdown section shows a table of every status and how many items currently sit in each one. This gives a fast read on where work is piling up.

Common patterns to watch:

  • Many items in "In Review" or "Code Review" → review bottleneck
  • Many items in "QA" → QA bandwidth constraint
  • Items stuck in "Blocked" → dependencies not resolved before sprint start

Use this table during standups to direct the conversation: where is flow being interrupted?
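The breakdown itself is just a count of items per status, sorted so the pile-ups surface first. A sketch with hypothetical statuses:

```python
from collections import Counter

# Hypothetical snapshot of item statuses mid-sprint.
statuses = ["In Review", "In Review", "QA", "Blocked", "Done", "In Review"]

breakdown = Counter(statuses)
for status, count in breakdown.most_common():
    print(f"{status:>10}: {count}")
# "In Review" sitting at the top of this list is the review-bottleneck pattern.
```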


Member Statistics

The per-member breakdown shows individual performance across the sprint:

  • Member: Avatar and name
  • Items Assigned: Total items in the sprint assigned to this person
  • Estimated Hours: Sum of estimated hours for their items
  • Logged Hours: Actual hours logged (timers + manual entries)
  • Completion Rate: % of their items in a terminal status

This view is useful for retrospectives — not for performance management. Use it to identify patterns (someone consistently under-estimates, a team member is being over-assigned) rather than to rank individuals.

Tip: If a member's logged hours are much higher than estimated, the issue is usually estimation — not the person. Use this data to calibrate future estimates, not to question effort.
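The per-member table is a group-by over the same item data. A minimal sketch, again with illustrative field names rather than the product's real schema:

```python
from collections import defaultdict

# Hypothetical item records; names and fields are illustrative.
items = [
    {"assignee": "Ana", "status": "Done",        "estimated": 3.0, "logged": 4.0},
    {"assignee": "Ana", "status": "In Progress", "estimated": 5.0, "logged": 2.0},
    {"assignee": "Bo",  "status": "Done",        "estimated": 2.0, "logged": 2.0},
]
TERMINAL = {"Done", "Cancelled"}  # assumed terminal statuses

stats = defaultdict(lambda: {"assigned": 0, "estimated": 0.0,
                             "logged": 0.0, "completed": 0})
for item in items:
    row = stats[item["assignee"]]
    row["assigned"] += 1
    row["estimated"] += item["estimated"]
    row["logged"] += item["logged"]
    row["completed"] += item["status"] in TERMINAL  # bool adds as 0 or 1

for member, row in stats.items():
    rate = row["completed"] / row["assigned"]
    print(f"{member}: {row['assigned']} items, "
          f"{row['estimated']}h est / {row['logged']}h logged, {rate:.0%} done")
```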


Sprint Changes Log

The changes log is a chronological feed of everything that happened during the sprint:

  • Status transitions on items
  • Assignment changes
  • Sprint field changes (items added or removed)
  • Priority changes

This is invaluable for retrospectives: you can reconstruct the story of the sprint, see when things shifted, and understand whether the plan changed mid-flight (and why).

Filter the log by member, item, or change type to focus on specific threads.
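Conceptually, each filter narrows the same chronological feed. A sketch of that filtering logic over hypothetical log entries (the entry fields are assumptions, not the actual data model):

```python
# Hypothetical change-log entries; field names are illustrative.
log = [
    {"member": "Ana", "type": "status",     "item": "TASK-12"},
    {"member": "Bo",  "type": "assignment", "item": "TASK-7"},
    {"member": "Ana", "type": "priority",   "item": "TASK-12"},
]

def filter_log(entries, member=None, item=None, change_type=None):
    """Keep entries that match every filter actually supplied."""
    return [
        e for e in entries
        if (member is None or e["member"] == member)
        and (item is None or e["item"] == item)
        and (change_type is None or e["type"] == change_type)
    ]

ana_changes = filter_log(log, member="Ana")  # everything Ana touched
```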


Active Timers Panel

When viewing an active sprint's report, the Active Timers panel shows real-time work in progress:

  • Which team members currently have a running timer
  • Which item they're working on
  • How long the current session has been running

This is a live broadcast — it updates in real time without refreshing the page. Use it during standups ("I can see three people are actively in timers right now") or to get a pulse on the team mid-sprint.

Active timers are only visible during the Active sprint state. Completed sprints show a historical view instead.


Daily Breakdown

The Daily Breakdown is one of the most useful panels for retrospectives and standups.

It shows a day-by-day view of hours logged across the sprint:

  • Each day appears as a bar or row in the timeline
  • Each bar shows the total hours logged that day across the team

Drilling Into a Day

Click any day to expand a detailed view:

  • Member details — Who logged hours and how many
  • Dev vs. QA split — Hours broken down by tracking type
  • Item-level entries — Specific time logs for each item worked that day
  • Notes — Any notes attached to individual time log entries

This makes standups faster. Instead of asking "what did everyone do yesterday?", you can see it at a glance and focus the standup on blockers and coordination.
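The panel boils down to bucketing time-log entries by day, with a dev/QA split per bucket. A sketch over hypothetical entries (the "dev"/"qa" tracking types and field names are assumptions):

```python
from collections import defaultdict

# Hypothetical time-log entries; fields and tracking types are illustrative.
time_logs = [
    {"day": "2024-05-06", "member": "Ana", "type": "dev", "hours": 3.0},
    {"day": "2024-05-06", "member": "Bo",  "type": "qa",  "hours": 2.0},
    {"day": "2024-05-07", "member": "Ana", "type": "dev", "hours": 4.5},
]

daily = defaultdict(lambda: {"total": 0.0, "dev": 0.0, "qa": 0.0})
for entry in time_logs:
    bucket = daily[entry["day"]]
    bucket["total"] += entry["hours"]
    bucket[entry["type"]] += entry["hours"]

for day in sorted(daily):
    d = daily[day]
    print(f"{day}: {d['total']}h total ({d['dev']}h dev / {d['qa']}h qa)")
```

Drilling into a day is then just reading one bucket's entries instead of the totals.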

Using It in Retrospectives

Walk through the daily breakdown at the end of the sprint:

  1. Were there days with very low logging? Why — was the team blocked, in meetings, or working on unlogged work?
  2. Did hours spike near the end of the sprint? That's a sign of crunch — scope was too large.
  3. Were QA hours proportional to dev hours? Imbalance can signal QA bottlenecks.

Tip: Review sprint reports in your retrospective, not after it. Open the report during the meeting. Share the screen. Walk through completion rate, estimation accuracy, and the daily breakdown together. The data makes the conversation specific instead of anecdotal.


Comparing Sprints

Sprint reports don't show cross-sprint comparison views directly, but you can open multiple sprint reports in separate tabs to compare them. Key metrics to track sprint-over-sprint:

  • Is completion rate trending up?
  • Is estimation accuracy converging toward 1.0?
  • Are items per member balancing out?

Building this habit over 4–6 sprints will give you a reliable picture of your team's actual velocity — which is far more useful than any velocity estimate made on day one.
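If you jot down the headline numbers from each report, the trend checks above are simple to automate. A sketch with made-up per-sprint metrics:

```python
# Hypothetical per-sprint metrics, copied by hand from each sprint report.
sprints = [
    {"name": "Sprint 1", "completion_rate": 0.60, "accuracy": 0.55},
    {"name": "Sprint 2", "completion_rate": 0.70, "accuracy": 0.70},
    {"name": "Sprint 3", "completion_rate": 0.85, "accuracy": 0.95},
]

# Completion rate trending up: each sprint beats the previous one.
rates = [s["completion_rate"] for s in sprints]
trending_up = all(a < b for a, b in zip(rates, rates[1:]))

# Accuracy converging toward 1.0: each sprint's gap from 1.0 shrinks.
gaps = [abs(s["accuracy"] - 1.0) for s in sprints]
converging = all(a > b for a, b in zip(gaps, gaps[1:]))

print(f"Completion rate trending up: {trending_up}")    # True
print(f"Estimation accuracy converging: {converging}")  # True
```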


Related