

Build Phase: From Strategy to Product

The build phase answers three questions: which assumptions to test first (RAT tests), which features to build first (prioritization), and whether the economics work (unit economics).

RAT Tests (Riskiest Assumption Tests)

Methodology: 6 Risk Categories

RAT is a framework for systematically validating risky hypotheses. Instead of building an MVP and hoping for the best, you first validate key assumptions with minimal resources. AI automatically extracts assumptions from collected facts and scores them by P×I.

6 Risk Categories

| # | Category | What It Tests | Kill Signal | Priority |
|---|----------|---------------|-------------|----------|
| 1 | Market Demand | Is anyone performing this job? | Zero search demand + zero competitors + zero budget | Existential — product dies |
| 2 | Segment Attractiveness | Is the segment economically viable? | TAM too small, LTV/CAC < 3, segment shrinking | Existential — product dies |
| 3 | Value / WTP | Will the segment pay for OUR solution to THIS job? | Zero conversion after 10+ qualified demos | Existential — product dies |
| 4 | Unit Economics | Does each sale generate profit? | Negative margin per cohort by M3, CAC > LTV | Limiting — product stagnates |
| 5 | Scaling Demand | Can you find enough customers? | No repeatable channel with CAC < ⅓ LTV | Limiting — product stagnates |
| 6 | Operational / Regulatory / Tech | Can you deliver and support the solution? | Regulatory block, tech infeasible, team can't deliver | Limiting — product stagnates |
Priority Rule
Categories 1-3 are existential (product dies). Categories 4-6 are limiting (product stagnates). Always test 1-3 first. It's pointless to optimize unit economics if Market Demand = 0.

P×I Scoring (Probability × Impact)

Probability (P: 1-5) — probability of being wrong

| P | Evidence Level | Description |
|---|----------------|-------------|
| 1 | Strong empirical | Existing sales, validated cohorts, market data |
| 2 | Partial empirical | Indirect signals + real data (competitor revenue, adjacent market) |
| 3 | Analogies | Similar patterns in neighboring markets/segments |
| 4 | Weak indicators | Expert opinions, surveys, stated intentions (unreliable) |
| 5 | Pure hypothesis | No evidence — intuition only |

Impact (I: 1-5) — business impact

| I | Impact | Description |
|---|--------|-------------|
| 1 | Local failure | Cosmetic or edge case — doesn't affect survival |
| 2 | Metric dip | One metric drops, recoverable |
| 3 | Growth freeze | Key metrics stagnate, roadmap blocked |
| 4 | Serious hit | Revenue drop, team loss, pivot needed |
| 5 | Business death | Product/company dies — regulatory, economic, or market collapse |

Score = P × I (range 1-25). Sort descending. If scores tie, prioritize higher I, then higher P.

Gate Rule
If the top assumption has Score ≥ 20 (P ≥ 4 AND I ≥ 5) → STOP and test it BEFORE any building. Building a product with an untested Score-20 risk = playing roulette.
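The scoring, tie-break, and gate rules above can be sketched in a few lines of Python. The `Assumption` helper and the example deck are illustrative (only the first item comes from this document's risk card), not part of the methodology itself:

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    name: str
    p: int  # P (1-5): probability of being wrong
    i: int  # I (1-5): business impact

    @property
    def score(self) -> int:
        return self.p * self.i  # range 1-25

def rank(assumptions):
    # Sort by Score descending; ties broken by higher I, then higher P.
    return sorted(assumptions, key=lambda a: (a.score, a.i, a.p), reverse=True)

def gate(assumptions):
    # Gate Rule: a top assumption with Score >= 20 must be tested before building.
    top = rank(assumptions)[0]
    return top if top.score >= 20 else None

deck = [
    Assumption("Freelancers won't pay", p=4, i=5),  # Score 20
    Assumption("No repeatable channel", p=3, i=4),  # Score 12
    Assumption("Regulatory block", p=2, i=5),       # Score 10
]
blocker = gate(deck)
print(blocker.name if blocker else "no gate")  # prints: Freelancers won't pay
```

Taking the top 5 of `rank(...)` gives the RAT deck described in the Over-Generation Pattern later in this section.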

Risk Card Format

Risk Card Template
RISK #1: Freelancers won't pay

Assumption: "≥ 40% of surveyed will agree to a $10/month subscription after demo"
Risk: "Zero revenue from the main segment → pivot or shutdown within 3 months"
Category: Value / WTP

P (1-5): 4 — no real sales data, only stated intentions
I (1-5): 5 — if they don't pay — no business
Score: P×I = 20 — CRITICAL

Quick Tests:
1. Landing page + ad traffic — pre-order conversion — $50, 3 days
2. 5 solution interviews → sales attempt at the end — free, 1-2 weeks
3. Pre-sale: collect deposits before building — free, 1-2 weeks

Evidence Threshold:
— CONFIRMED: > 2% landing conversion OR 3+ pre-orders
— REFUTED: < 0.5% conversion AND 0 pre-orders after 10 attempts
— UNCLEAR: 0.5-2% conversion — need more data
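The evidence thresholds in the card read as a simple decision rule. A sketch, with the thresholds taken from the card above (the function and argument names are mine):

```python
def classify_evidence(conversion: float, preorders: int, attempts: int) -> str:
    """Apply the risk card's thresholds (conversion as a fraction: 0.02 == 2%)."""
    if conversion > 0.02 or preorders >= 3:
        return "CONFIRMED"   # > 2% landing conversion OR 3+ pre-orders
    if conversion < 0.005 and preorders == 0 and attempts >= 10:
        return "REFUTED"     # < 0.5% conversion AND 0 pre-orders after 10 attempts
    return "UNCLEAR"         # 0.5-2% conversion: need more data

print(classify_evidence(0.025, preorders=0, attempts=40))  # CONFIRMED
print(classify_evidence(0.003, preorders=0, attempts=12))  # REFUTED
print(classify_evidence(0.012, preorders=1, attempts=5))   # UNCLEAR
```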

Quick Test Library

| Method | Best For | Cost | Time | Evidence Strength |
|--------|----------|------|------|-------------------|
| Solution interviews (5-10) | Value, Demand | Free | 1-2 weeks | Medium-High |
| Landing page + paid traffic | Demand, Value | $200-500 | 3-5 days | Medium |
| Prototype / UX test | Value, Operational | $0-200 | 1 week | Medium |
| A/B test (existing traffic) | Value, Unit Econ | Free | 2-4 weeks | High |
| Competitor revenue analysis | Market, Segment | Free | 2-3 days | Medium |
| Pre-sale / deposit collection | Value, WTP | Free | 1-2 weeks | Very High |
| Expert interviews (3-5) | Operational, Regulatory | Free | 1 week | Low-Medium |
| Manual service delivery | Value, Operations | Time | 2-4 weeks | Very High |
Validation Rules
  • Pre-sale / manual service = strongest validation. Real money > stated intentions
  • A/B test = the only strict causal method. Everything else = directional signal
  • Product DNA rule: Interviews about PAST behavior, never about future intentions

RAT and ABCDX Connection

RAT results directly influence ABCDX segment classification:

| RAT Result | ABCDX Grade |
|------------|-------------|
| All key assumptions confirmed | A — ideal fit |
| Most confirmed, 1-2 still being tested | B — good fit |
| Key assumptions unclear (insufficient data) | C — needs more research |
| Some assumptions refuted | D — poor fit |
| Not tested | X — unknown risk profile |

Over-Generation Pattern

  1. Generate: List ALL assumptions (10-20) from the product description
  2. Score: Apply P×I to each
  3. Filter: Take top-5 by Score — this is your RAT deck
  4. Gate: Score ≥ 20 → STOP and test BEFORE building

Feature Prioritization

Methodology: Job-Based Prioritization

Feature prioritization in AI CPO is not based on ICE/RICE (which are subjective), but on a job-based approach: which feature solves the most important job for the most valuable segment.

Prioritization Criteria

| Criterion | Source | Scale |
|-----------|--------|-------|
| Job Importance | From the Job Map — how important the job is | 1-10 |
| Job Satisfaction | How poorly current solutions perform | 0.0-1.0 (0 = fully solved, 1 = unsolved) |
| Segment Value | Segment value by ABCDX | A=10, B=7, C=3, D=1 |
| Implementation Effort | Implementation complexity | S=1, M=2, L=4, XL=8 |
| RAT Score | Level of untested risk | 1-25 (modifier) |

Priority Formula

Formula
Priority = (Job Importance × (1 - Satisfaction) × Segment Value) ÷ Effort

RAT Modifier: If RAT Score for this feature ≥ 15 → multiply Effort by 2 (high risk = more efficient to test first than build)

Features with high Priority and low Effort = your MVP.
Example
| Feature | Job Imp. | Satisf. | Segment | Effort | Priority |
|---------|----------|---------|---------|--------|----------|
| Auto-tracking via Figma | 9 | 0.2 | A (10) | M (2) | 36 |
| PDF client report | 8 | 0.3 | A (10) | S (1) | 56 |
| Team dashboard | 6 | 0.5 | B (7) | L (4) | 5.25 |
| Accounting integration | 4 | 0.4 | C (3) | XL (8) | 0.9 |
MVP: PDF report (Priority 56) + auto-tracking (Priority 36). The rest — later iterations.
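The formula and the worked example can be reproduced in a few lines of Python. The weights come from the criteria table above; the RAT modifier is included but unused in the example rows:

```python
SEGMENT_VALUE = {"A": 10, "B": 7, "C": 3, "D": 1}
EFFORT = {"S": 1, "M": 2, "L": 4, "XL": 8}

def priority(job_importance: int, satisfaction: float,
             segment: str, effort: str, rat_score: int = 0) -> float:
    e = EFFORT[effort]
    if rat_score >= 15:
        e *= 2  # RAT modifier: high untested risk doubles effective effort
    return job_importance * (1 - satisfaction) * SEGMENT_VALUE[segment] / e

# Reproducing the example table (rounded to 2 decimals):
print(round(priority(9, 0.2, "A", "M"), 2))   # Auto-tracking via Figma: 36.0
print(round(priority(8, 0.3, "A", "S"), 2))   # PDF client report: 56.0
print(round(priority(6, 0.5, "B", "L"), 2))   # Team dashboard: 5.25
print(round(priority(4, 0.4, "C", "XL"), 2))  # Accounting integration: 0.9
```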
Focus Rule
For every new feature request, ask: "Is this for the A-segment or for the C-segment that's screaming loudly?" ABCDX: 80% of resources go to A+B, even if C+D are louder.

Unit Economics

Methodology: Unit Economics by Segment

Unit economics answers the main question: "Can this business be profitable?" Critical rule: calculate unit economics by job segments, not by "average customer". A-segment LTV can be 5-10x higher than C-segment LTV.

Key Metrics and Formulas

| Metric | Formula | Benchmark | Meaning |
|--------|---------|-----------|---------|
| LTV | ARPU × Avg. Lifetime (months) | Depends on niche | How much revenue one customer generates over their lifetime |
| CAC | Marketing Spend ÷ New Customers | LTV/CAC > 3x | How much it costs to acquire one customer |
| Payback Period | CAC ÷ Monthly Revenue per Customer | < 12 months | How many months until the customer pays back acquisition cost |
| Gross Margin | (Revenue - COGS) ÷ Revenue | > 70% for SaaS | Gross margin after direct costs |
| Contribution Margin | Revenue - Variable Costs per Customer | > 0 per cohort by M3 | Each customer's contribution margin |
| MRR | Customers × ARPU | Growth > 10% MoM | Monthly Recurring Revenue |
| Churn Rate | Lost Customers ÷ Total Customers | < 5% for B2B | Percentage of customers leaving per month |
| NRR | (Revenue - Churn + Expansion) ÷ Revenue | > 100% | Net Revenue Retention — is revenue growing from existing customers |

Bootstrap SaaS Formula

Sweet Spot for a Solo Founder
Formula: 80-150 customers × $150-$300/month = $12K-$45K MRR

Example TimeFlow:
ARPU = $10/month, Avg. Lifetime = 14 months
LTV = $10 × 14 = $140
CAC (paid search) = $32
LTV/CAC = $140 ÷ $32 = 4.4x ✓ (benchmark > 3x)
Payback = 3.2 months ✓ (benchmark < 12 months)
Gross Margin = 82% ✓ (server costs ~18%)

Breakeven: at $10 ARPU, 100 customers = $1,000 MRR.
To acquire them: marketing spend = 100 × $32 CAC = $3,200 (one-time).
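A minimal calculator for the core formulas, checked against the TimeFlow numbers (the function names are mine; the figures come from the example above):

```python
def ltv(arpu: float, avg_lifetime_months: float) -> float:
    return arpu * avg_lifetime_months

def cac(marketing_spend: float, new_customers: int) -> float:
    return marketing_spend / new_customers

def payback_months(cac_value: float, monthly_revenue_per_customer: float) -> float:
    return cac_value / monthly_revenue_per_customer

arpu = 10.0                    # $/month
customer_ltv = ltv(arpu, 14)   # $140
acq_cost = cac(3200, 100)      # $32 per customer

print(customer_ltv)                       # 140.0
print(round(customer_ltv / acq_cost, 1))  # 4.4  (LTV/CAC, benchmark > 3x)
print(payback_months(acq_cost, arpu))     # 3.2  (months, benchmark < 12)
```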

Unit Economics by Segment

Don't Calculate for "Average Customer"
If you mix A-segment (LTV $250, Churn 3%) and C-segment (LTV $40, Churn 12%), you get "average" LTV $145 and churn 7.5% — and make wrong decisions about channels and pricing. Calculate separately for A, separately for B, separately for C.
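To see the distortion in numbers, here is the blend from the warning above, assuming a 50/50 A/C mix and reusing the $32 CAC from the TimeFlow example (both assumptions are mine, for illustration only):

```python
segments = {
    "A": {"ltv": 250.0, "churn": 0.03},
    "C": {"ltv": 40.0,  "churn": 0.12},
}
CAC = 32.0  # reusing the TimeFlow CAC for illustration

# "Average customer" view (assumed 50/50 mix):
blended_ltv = (segments["A"]["ltv"] + segments["C"]["ltv"]) / 2        # LTV $145
blended_churn = (segments["A"]["churn"] + segments["C"]["churn"]) / 2  # ~7.5%

print(round(blended_ltv / CAC, 2))           # 4.53 -> blend looks healthy
print(round(segments["A"]["ltv"] / CAC, 2))  # 7.81 -> A is excellent
print(round(segments["C"]["ltv"] / CAC, 2))  # 1.25 -> C fails LTV/CAC > 3x
```

The blended ratio passes the 3x benchmark while the C-segment quietly loses money, which is exactly the wrong-decision trap the rule warns about.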

Red Flags

| Signal | What It Means | Action |
|--------|---------------|--------|
| LTV/CAC < 3x | Acquisition too expensive or retention too low | Reconsider pricing or channels |
| Payback > 12 months | Too slow to recoup — cash flow problem | Raise price, add annual plan, reduce CAC |
| Gross Margin < 50% | High direct costs (COGS) | Optimize delivery cost, automate |
| Churn > 10% (B2B) | Product doesn't solve Core Job well enough | Job Scorecard → find gap → improve |
| 80% of support from C/D | Resources spent on wrong customers | Filter C/D at intake, focus on A/B |
Tips
  • Share ad cost data for your niche in chat — CAC will be more accurate
  • For early stage, use conservative (pessimistic) estimates
  • Test the business model BEFORE the UX. Button color comes last