RICE Scoring — Data-Driven Prioritization
Stop arguing about what to build next. RICE is a prioritization framework that scores every item on four dimensions and produces a single number you can rank by. In Lifecycle OS, RICE scoring is built on top of custom number fields — flexible enough to adapt to your team's specific scales, powerful enough to drive your entire product roadmap.
What Is RICE?
RICE is an acronym for the four inputs to the scoring formula:
- R — Reach: How many users or customers will this affect?
- I — Impact: How much will it improve the experience for those users?
- C — Confidence: How certain are you about your Reach and Impact estimates?
- E — Effort: How much work will this take?
RICE Score = (Reach × Impact × Confidence) / Effort
Higher scores mean higher priority. A feature that reaches 10,000 users with high impact, high confidence, and low effort scores much higher than a complex feature that only a handful of users will notice.
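To make that comparison concrete, here is a quick back-of-the-envelope calculation with hypothetical numbers (Confidence written as a fraction, 0.8 = 80%; Effort in person-weeks):

```python
# Hypothetical numbers contrasting the two features described above.
broad_win = (10_000 * 2 * 0.8) / 2   # wide reach, high impact, small effort
niche_epic = (50 * 2 * 0.8) / 12     # tiny reach, heavy build

print(broad_win)    # 8000.0
print(niche_epic)   # ~6.7
```

The broad, cheap feature outscores the niche epic by three orders of magnitude.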
Tip: RICE scoring is especially powerful on Feature Requests boards. It turns subjective "we should build this" discussions into data-driven decisions your whole team can reason about.
Setting Up RICE in Lifecycle OS
RICE is not a built-in scoring system — it's implemented through four custom number fields plus a fifth field that holds the score. This approach gives you full flexibility to adapt the scales to your team.
Step 1 — Create the Four Input Fields
Go to Settings → Fields → + New Field and create these four fields:
| Field Name | Type | Applies To | Notes |
|---|---|---|---|
| Reach | Number | Stories, Bugs | Number of users affected |
| Impact | Number | Stories, Bugs | Impact scale (see below) |
| Confidence | Number | Stories, Bugs | Percentage (50–100) |
| Effort | Number | Stories, Bugs | Person-weeks or months |
Step 2 — Create the Score Field
Create a fifth field:
| Field Name | Type | Notes |
|---|---|---|
| RICE Score | Number | Computed manually or via formula |
Currently, RICE Score is filled in manually: calculate (Reach × Impact × Confidence) / Effort and enter the result in this field. If you record Confidence as a percentage (e.g. 80), either divide it by 100 first or apply the same convention to every item; a consistent scaling changes the absolute scores but not the ranking.
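If you prefer not to do the arithmetic by hand, the calculation is easy to script. This is a minimal sketch (the `rice_score` helper is hypothetical, not part of Lifecycle OS) that takes Confidence as a percentage, matching the field setup above:

```python
def rice_score(reach: float, impact: float, confidence_pct: float, effort: float) -> float:
    """RICE score with Confidence entered as a percentage (50-100)."""
    if effort <= 0:
        raise ValueError("Effort must be a positive number")
    return (reach * impact * (confidence_pct / 100)) / effort

# Example: 10,000 users, Impact 2, 80% confidence, 6 person-weeks of effort.
print(round(rice_score(10_000, 2, 80, 6), 1))  # 2666.7
```

Paste the result into the RICE Score field for each item.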
Step 3 — Configure Your Views
- Create a view on your Feature Requests or Backlog board
- Enable all five RICE columns in table view
- Sort by RICE Score descending to see your highest-priority items at the top
- Optionally filter to only stories or bugs (exclude tasks/sub-tasks)
Scoring Each Dimension
Reach
What it measures: How many users, customers, or accounts will be affected by this feature over a given time period (typically per quarter)?
How to score it: Use raw numbers. Examples:
| Score | Interpretation |
|---|---|
| 10,000 | Affects all paying customers |
| 2,500 | Affects a major customer segment |
| 500 | Affects power users of a specific feature |
| 50 | Affects a small niche use case |
Use the same time period consistently across all items. Most teams use "number of users affected per quarter."
Impact
What it measures: How much will this improve the experience for the users it reaches?
Use the standard scale:
| Score | Interpretation |
|---|---|
| 3 | Massive — transforms the experience |
| 2 | High — significant positive change |
| 1 | Medium — noticeable improvement |
| 0.5 | Low — slight improvement |
| 0.25 | Minimal — barely noticeable |
This scale is intentionally nonlinear — it rewards high-impact work over marginal improvements.
Confidence
What it measures: How certain are you about your Reach and Impact estimates?
Use percentage values:
| Score | Interpretation |
|---|---|
| 100 | High confidence — based on data, research, or clear user feedback |
| 80 | Medium confidence — reasonable assumptions, some research |
| 50 | Low confidence — mostly guessing |
Be honest. Overconfident estimates inflate RICE scores artificially. If you haven't done user research for a feature, score it at 50.
Tip: Confidence below 50% usually means you need more discovery before prioritizing this feature at all. Consider adding a discovery task to your backlog first.
Effort
What it measures: How much work will this require from your team?
Common scales:
- Person-weeks: A 1-person effort for 1 week = 1. A 2-person effort for 3 weeks = 6.
- Story points: Use your team's existing estimation scale.
- T-shirt sizes mapped to numbers: S=1, M=3, L=8, XL=20.
Choose one scale and apply it consistently. The specific scale doesn't matter as long as you're comparing items on the same scale.
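If your team mixes inputs (some items sized in t-shirt sizes, others in people × weeks), it helps to normalize everything onto one number before scoring. A small sketch, using the hypothetical helpers below and the S=1, M=3, L=8, XL=20 mapping from the list above:

```python
# Normalize different effort inputs onto a single numeric Effort scale.
TSHIRT_TO_EFFORT = {"S": 1, "M": 3, "L": 8, "XL": 20}

def effort_from_tshirt(size: str) -> int:
    return TSHIRT_TO_EFFORT[size.upper()]

def effort_person_weeks(people: int, weeks: float) -> float:
    return people * weeks

print(effort_from_tshirt("L"))     # 8
print(effort_person_weeks(2, 3))   # 6
```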
Reading Your RICE Scores
After scoring, sort your board by RICE Score descending. A few patterns to watch for:
High Score, Low Priority — Reassess
If an item has a high RICE score but is currently marked Low priority, investigate. Either the RICE data is wrong, or you've been undervaluing this feature.
Low Score, High Priority — Challenge Assumptions
If a High priority item scores low on RICE, it may mean:
- High effort but low reach (internal tooling, tech debt)
- Low confidence (speculative features)
- Low impact per user
These are fine to build, but be explicit that the decision is qualitative, not data-driven.
High Effort Items Need to Earn Their Score
A 20-week effort needs massive reach and impact to compete with a 1-week effort with medium reach and impact. RICE naturally surfaces high-ROI small wins.
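A quick hypothetical calculation shows how steep that bar is (Confidence as a fraction; Effort in person-weeks):

```python
# A 1-week medium win vs a 20-week bet with maximum reach and impact.
small_win = (2_000 * 1 * 0.8) / 1    # medium reach, medium impact, 1 week
big_bet = (10_000 * 3 * 0.8) / 20    # all customers, massive impact, 20 weeks

print(small_win)  # 1600.0
print(big_bet)    # 1200.0
```

Even with maximum Impact across every customer, the 20-week bet scores below the quick win.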
Validating Priority with RICE
After scoring all items on your feature backlog:
- Sort by RICE Score descending
- Note whether the current Priority assignments (Urgent, High, Mid, etc.) match the ranking
- Items near the top of the RICE ranking with Low priority are candidates for reprioritization
- Items near the bottom with High priority deserve a closer look
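The mismatch check above can be sketched in a few lines. The backlog data here is hypothetical, and "near the top" is simply the upper half of the RICE-sorted list:

```python
# Flag items whose manual Priority disagrees with their RICE ranking.
backlog = [
    {"name": "SSO login",     "priority": "Low",  "rice": 1200.0},
    {"name": "Export to CSV", "priority": "High", "rice": 900.0},
    {"name": "Dark mode",     "priority": "High", "rice": 35.0},
]

ranked = sorted(backlog, key=lambda item: item["rice"], reverse=True)
midpoint = len(ranked) // 2

for position, item in enumerate(ranked):
    near_top = position < midpoint
    if near_top and item["priority"] == "Low":
        print(f"Reprioritization candidate: {item['name']}")
    elif not near_top and item["priority"] == "High":
        print(f"Deserves a closer look: {item['name']}")
```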
Tip: Group by Priority after RICE scoring to validate your team's intuitions. If your "High" priority group is full of low-RICE items, you may be building for the loudest voice in the room rather than the greatest impact.
RICE in Practice — Feature Requests Board
Feature Requests boards are the ideal home for RICE scoring. Here's a suggested workflow:
- Capture: Add every feature request as a story. Use the "Requested By" text field to track the source.
- Score: During weekly product review, score new items on all four RICE dimensions.
- Rank: Sort by RICE Score descending — this is your data-driven priority order.
- Plan: When planning a sprint, pull from the top of the RICE-sorted backlog.
- Review: Monthly, revisit scores as you learn more. Confidence scores often increase after user interviews.
Tips
- Score items as a team, not alone. Product, Engineering, and Design often have different estimates — the discussion is valuable.
- RICE is a starting point, not a final answer. Legal requirements, strategic partnerships, or CEO priorities may override the score. That's fine — just be explicit about it.
- Don't let perfect be the enemy of good. Rough estimates scored consistently across all items are more useful than perfect estimates on a few.
- Add a "RICE Reviewed" date field to track when each item was last scored. Stale scores (6+ months old) may not reflect current product reality.