Build Phase: From Strategy to Product
The build phase answers three questions: which assumptions to test first (RAT tests), which features to build first (prioritization), and whether the economics work (unit economics).
RAT Tests (Riskiest Assumption Tests)
Methodology: 6 Risk Categories
RAT is a framework for systematically validating risky hypotheses. Instead of building an MVP and hoping for the best, you first validate key assumptions with minimal resources. AI automatically extracts assumptions from collected facts and scores them by P×I.
6 Risk Categories
| # | Category | What It Tests | Kill Signal | Priority |
|---|---|---|---|---|
| 1 | Market Demand | Is anyone performing this job? | Zero search demand + zero competitors + zero budget | Existential — product dies |
| 2 | Segment Attractiveness | Is the segment economically viable? | TAM too small, LTV/CAC < 3, segment shrinking | Existential — product dies |
| 3 | Value / WTP | Will the segment pay for OUR solution to THIS job? | Zero conversion after 10+ qualified demos | Existential — product dies |
| 4 | Unit Economics | Does each sale generate profit? | Negative margin per cohort by M3, CAC > LTV | Limiting — product stagnates |
| 5 | Scaling Demand | Can you find enough customers? | No repeatable channel with CAC < ⅓ LTV | Limiting — product stagnates |
| 6 | Operational / Regulatory / Tech | Can you deliver and support the solution? | Regulatory block, tech infeasible, team can't deliver | Limiting — product stagnates |
P×I Scoring (Probability × Impact)
Probability (P: 1-5) — likelihood the assumption is wrong
| P | Evidence Level | Description |
|---|---|---|
| 1 | Strong empirical | Existing sales, validated cohorts, market data |
| 2 | Partial empirical | Indirect signals + real data (competitor revenue, adjacent market) |
| 3 | Analogies | Similar patterns in neighboring markets/segments |
| 4 | Weak indicators | Expert opinions, surveys, stated intentions (unreliable) |
| 5 | Pure hypothesis | No evidence — intuition only |
Impact (I: 1-5) — business impact
| I | Impact | Description |
|---|---|---|
| 1 | Local failure | Cosmetic or edge case — doesn't affect survival |
| 2 | Metric dip | One metric drops, recoverable |
| 3 | Growth freeze | Key metrics stagnate, roadmap blocked |
| 4 | Serious hit | Revenue drop, team loss, pivot needed |
| 5 | Business death | Product/company dies — regulatory, economic, or market collapse |
Score = P × I (range 1-25). Sort descending. On ties, prioritize higher I, then higher P.
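The scoring and tie-break rule can be sketched in a few lines of Python (the `Assumption` class and example names are illustrative, not part of the framework):

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    name: str
    p: int  # 1-5: likelihood the assumption is wrong
    i: int  # 1-5: business impact if it is wrong

    @property
    def score(self) -> int:
        return self.p * self.i

def rank(assumptions: list[Assumption]) -> list[Assumption]:
    # Sort by Score descending; on ties, higher I wins, then higher P.
    return sorted(assumptions, key=lambda a: (a.score, a.i, a.p), reverse=True)

deck = rank([
    Assumption("WTP for $10/mo plan", p=4, i=5),  # Score 20, I=5
    Assumption("Repeatable channel",  p=5, i=4),  # Score 20, I=4 — loses the tie
    Assumption("Regulatory block",    p=2, i=5),  # Score 10
])
```

Note that both 20-point assumptions tie on Score, so the higher-Impact one ("WTP for $10/mo plan") comes out on top, exactly as the tie-break rule requires.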
Risk Card Format
Assumption: "≥ 40% of surveyed will agree to a $10/month subscription after demo"
Risk: "Zero revenue from the main segment → pivot or shutdown within 3 months"
Category: Value / WTP
P (1-5): 4 — no real sales data, only stated intentions
I (1-5): 5 — if they don't pay — no business
Score: P×I = 20 — CRITICAL
Quick Tests:
1. Landing page + ad traffic — pre-order conversion — $50, 3 days
2. 5 solution interviews → sales attempt at the end — free, 1-2 weeks
3. Pre-sale: collect deposits before building — free, 1-2 weeks
Evidence Threshold:
— CONFIRMED: > 2% landing conversion OR 3+ pre-orders
— REFUTED: < 0.5% conversion AND 0 pre-orders after 10 attempts
— UNCLEAR: 0.5-2% conversion — need more data
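The evidence thresholds from the card above can be expressed as a small classifier; the numeric cut-offs are the ones from this example card, not universal constants:

```python
def classify_evidence(conversion: float, pre_orders: int, attempts: int) -> str:
    """Apply the risk card's evidence thresholds.

    conversion: landing-page conversion as a fraction (0.02 == 2%)
    pre_orders: number of collected pre-orders
    attempts:   number of qualified sales attempts made
    """
    if conversion > 0.02 or pre_orders >= 3:
        return "CONFIRMED"
    if conversion < 0.005 and pre_orders == 0 and attempts >= 10:
        return "REFUTED"
    return "UNCLEAR"
```

For instance, a 1% conversion with one pre-order lands in UNCLEAR: collect more data before calling the assumption either way.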
Quick Test Library
| Method | Best For | Cost | Time | Evidence Strength |
|---|---|---|---|---|
| Solution interviews (5-10) | Value, Demand | Free | 1-2 weeks | Medium-High |
| Landing page + paid traffic | Demand, Value | $200-500 | 3-5 days | Medium |
| Prototype / UX test | Value, Operational | $0-200 | 1 week | Medium |
| A/B test (existing traffic) | Value, Unit Econ | Free | 2-4 weeks | High |
| Competitor revenue analysis | Market, Segment | Free | 2-3 days | Medium |
| Pre-sale / deposit collection | Value, WTP | Free | 1-2 weeks | Very High |
| Expert interviews (3-5) | Operational, Regulatory | Free | 1 week | Low-Medium |
| Manual service delivery | Value, Operational | Time | 2-4 weeks | Very High |
- Pre-sale / manual service = strongest validation. Real money > stated intentions
- A/B test = the only strict causal method. Everything else = directional signal
- Product DNA rule: Interviews about PAST behavior, never about future intentions
RAT and ABCDX Connection
RAT results directly influence ABCDX segment classification:
| RAT Result | ABCDX Grade |
|---|---|
| All key assumptions confirmed | A — ideal fit |
| Most confirmed, 1-2 still being tested | B — good fit |
| Key assumptions unclear (insufficient data) | C → needs more research |
| Some assumptions refuted | D — poor fit |
| Not tested | X — unknown risk profile |
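The mapping can be sketched as a small classifier. The decision rules (e.g. how many still-open tests count as B) are one interpretation of the table, not a spec:

```python
def abcdx_grade(statuses: list[str]) -> str:
    """Map RAT outcomes for a segment's key assumptions to an ABCDX grade.

    Each status is one of: 'confirmed' | 'testing' | 'unclear' | 'refuted'.
    An empty list means nothing was tested.
    """
    if not statuses:
        return "X"  # not tested: unknown risk profile
    if any(s == "refuted" for s in statuses):
        return "D"  # some assumptions refuted: poor fit
    if any(s == "unclear" for s in statuses):
        return "C"  # insufficient data: needs more research
    open_tests = sum(s == "testing" for s in statuses)
    if open_tests == 0:
        return "A"  # all key assumptions confirmed
    if open_tests <= 2:
        return "B"  # most confirmed, 1-2 still in flight
    return "C"      # too many open tests: treat as needing research
```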
Over-Generation Pattern
- Generate: List ALL assumptions (10-20) from the product description
- Score: Apply P×I to each
- Filter: Take top-5 by Score — this is your RAT deck
- Gate: Score ≥ 20 → STOP and test BEFORE building
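The four steps above can be sketched as one pipeline (tuple layout and the helper name are illustrative):

```python
RAT_GATE = 20  # Score >= 20: stop and test BEFORE building
TOP_N = 5      # deck size after filtering

def build_rat_deck(assumptions: list[tuple[str, int, int]]):
    """assumptions: (name, P, I) tuples from the over-generation step.

    Returns the top-5 deck plus the subset that trips the build gate.
    """
    # Score: P x I, sorted descending with the I-then-P tie-break.
    ranked = sorted(assumptions, key=lambda a: (a[1] * a[2], a[2], a[1]), reverse=True)
    deck = ranked[:TOP_N]                                       # Filter: top-5 by Score
    blocked = [name for name, p, i in deck if p * i >= RAT_GATE]  # Gate
    return deck, blocked

deck, blocked = build_rat_deck([
    ("a1", 4, 5), ("a2", 3, 3), ("a3", 2, 2), ("a4", 1, 5),
    ("a5", 5, 2), ("a6", 3, 4), ("a7", 1, 1),
])
```

Here only "a1" (Score 20) crosses the gate, so it must be tested before any build work starts; the rest of the deck can be tested in parallel with development.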
Feature Prioritization
Methodology: Job-Based Prioritization
Feature prioritization in AI CPO is not based on ICE/RICE (which are subjective), but on a job-based approach: which feature solves the most important job for the most valuable segment.
Prioritization Criteria
| Criterion | Source | Scale |
|---|---|---|
| Job Importance | From the Job Map — how important the job is | 1-10 |
| Job Satisfaction | How well current solutions solve the job (low = big opportunity) | 0.0-1.0 (0=unsolved, 1=fully solved) |
| Segment Value | Segment value by ABCDX | A=10, B=7, C=3, D=1 |
| Implementation Effort | Implementation complexity | S=1, M=2, L=4, XL=8 |
| RAT Score | Level of untested risk | 1-25 (modifier) |
Priority Formula
Priority = (Job Importance × (1 − Job Satisfaction) × Segment Value) ÷ Effort
RAT Modifier: if the RAT Score for this feature is ≥ 15, multiply Effort by 2 (high untested risk means it is more efficient to test first than to build).
Features with high Priority and low Effort = your MVP.
| Feature | Job Imp. | Satisf. | Segment | Effort | Priority |
|---|---|---|---|---|---|
| Auto-tracking via Figma | 9 | 0.2 | A (10) | M (2) | 36 |
| PDF client report | 8 | 0.3 | A (10) | S (1) | 56 |
| Team dashboard | 6 | 0.5 | B (7) | L (4) | 5.25 |
| Accounting integration | 4 | 0.4 | C (3) | XL (8) | 0.9 |
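The Priority column can be reproduced directly. The formula below (Job Importance × (1 − Satisfaction) × Segment Value ÷ Effort, with the RAT modifier doubling Effort) is inferred from the example rows; it matches all four of them:

```python
EFFORT = {"S": 1, "M": 2, "L": 4, "XL": 8}
SEGMENT = {"A": 10, "B": 7, "C": 3, "D": 1}

def priority(job_importance: int, satisfaction: float,
             segment: str, effort: str, rat_score: int = 0) -> float:
    """Job-based priority score. satisfaction is 0.0-1.0 (1 = fully solved)."""
    # RAT modifier: untested risk >= 15 doubles the effective effort.
    e = EFFORT[effort] * (2 if rat_score >= 15 else 1)
    return job_importance * (1 - satisfaction) * SEGMENT[segment] / e

# Reproducing the example table:
auto_tracking = priority(9, 0.2, "A", "M")   # -> 36.0
pdf_report    = priority(8, 0.3, "A", "S")   # -> 56.0
dashboard     = priority(6, 0.5, "B", "L")   # -> 5.25
accounting    = priority(4, 0.4, "C", "XL")  # -> 0.9
```

With a RAT Score of 15+ on the auto-tracking feature, its priority would halve to 18.0, pushing the low-effort PDF report even further ahead.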
Unit Economics
Methodology: Unit Economics by Segment
Unit economics answers the main question: "Can this business be profitable?" Critical rule: calculate unit economics by job segments, not by "average customer". A-segment LTV can be 5-10x higher than C-segment LTV.
Key Metrics and Formulas
| Metric | Formula | Benchmark | Meaning |
|---|---|---|---|
| LTV | ARPU × Avg. Lifetime (months) | Depends on niche | How much revenue one customer generates over their lifetime |
| CAC | Marketing Spend ÷ New Customers | LTV/CAC > 3x | How much it costs to acquire one customer |
| Payback Period | CAC ÷ Monthly Revenue per Customer | < 12 months | How many months until the customer pays back acquisition cost |
| Gross Margin | (Revenue - COGS) ÷ Revenue | > 70% for SaaS | Gross margin after direct costs |
| Contribution Margin | Revenue - Variable Costs per Customer | > 0 per cohort by M3 | Each customer's contribution margin |
| MRR | Customers × ARPU | Growth > 10% MoM | Monthly Recurring Revenue |
| Churn Rate | Lost Customers ÷ Total Customers | < 5% for B2B | Percentage of customers leaving per month |
| NRR | (Starting MRR − Churned MRR + Expansion MRR) ÷ Starting MRR | > 100% | Net Revenue Retention — is revenue growing from existing customers |
Bootstrap SaaS Formula
Example TimeFlow:
ARPU = $10/month, Avg. Lifetime = 14 months
LTV = $10 × 14 = $140
CAC (paid search) = $32
LTV/CAC = 4.4x ✓ (benchmark > 3x)
Payback = 3.2 months ✓ (benchmark < 12 months)
Gross Margin = 82% ✓ (server costs ~18%)
Breakeven: at $32 CAC and 82% margin, 100 customers bring $1,000 MRR ($12K ARR).
For that: Marketing spend = 100 × $32 = $3,200 (one-time).
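The TimeFlow arithmetic can be checked directly (all inputs are the values from the example above):

```python
arpu = 10.0            # $/month
lifetime_months = 14   # average customer lifetime
cac = 32.0             # paid search acquisition cost
cogs_share = 0.18      # server costs ~18% of revenue

ltv = arpu * lifetime_months         # $140
ltv_to_cac = ltv / cac               # 4.375x, above the 3x benchmark
payback_months = cac / arpu          # 3.2 months, well under 12
gross_margin = 1 - cogs_share        # 0.82
spend_for_100_customers = 100 * cac  # $3,200 one-time marketing spend
```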
Unit Economics by Segment
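A sketch of the per-segment split this section prescribes. The segment inputs below are made up purely for illustration; real ARPU, lifetime, and CAC must come from your own cohort data:

```python
# Hypothetical per-segment inputs (illustrative only, not real data).
segments = {
    "A": {"arpu": 15.0, "lifetime": 24, "cac": 40.0},
    "B": {"arpu": 10.0, "lifetime": 14, "cac": 35.0},
    "C": {"arpu": 8.0,  "lifetime": 5,  "cac": 45.0},
}

def segment_economics(s: dict) -> dict:
    """LTV and LTV/CAC for one segment, with the 3x health check."""
    ltv = s["arpu"] * s["lifetime"]
    return {"ltv": ltv, "ltv_cac": ltv / s["cac"], "healthy": ltv / s["cac"] > 3}

report = {name: segment_economics(s) for name, s in segments.items()}
```

In this made-up example the A-segment LTV ($360) is 9x the C-segment LTV ($40), and the C segment fails the 3x check outright: averaging the two would hide both facts, which is exactly why the blended "average customer" number is misleading.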
Red Flags
| Signal | What It Means | Action |
|---|---|---|
| LTV/CAC < 3x | Acquisition too expensive or retention too low | Reconsider pricing or channels |
| Payback > 12 months | Too slow to recoup — cash flow problem | Raise price, add annual plan, reduce CAC |
| Gross Margin < 50% | High direct costs (COGS) | Optimize delivery cost, automate |
| Churn > 10% (B2B) | Product doesn't solve Core Job well enough | Job Scorecard → find gap → improve |
| 80% of support from C/D | Resources spent on wrong customers | Filter C/D at intake, focus on A/B |
- Share ad cost data for your niche in chat — CAC will be more accurate
- For early stage, use conservative (pessimistic) estimates
- Test the business model BEFORE the UX. Button color comes last