# Validation Experiments & Hypotheses
## Hypothesis #1: Problem Existence 🔴 Critical
We believe that [suburban homeowners and retirees] will [actively seek ways to exchange skills with neighbors] because they [want to build community while getting help without spending money]. We will know this is true when we see [60%+ of surveyed neighbors confirm this as a top-3 need AND a 5%+ landing page signup rate].
**Risk Level:** 🔴 Critical (product fails if wrong)
**Current Evidence:**
- Supporting: Time banking movement (350+ active time banks); Nextdoor complaints about the lack of community help
- Contradicting: None identified
- Gaps: No direct user interviews yet
**Experiment Design:** Customer discovery interviews + landing page test | **Sample:** 20 interviews, 1,000 landing page visitors | **Duration:** 2 weeks | **Cost:** $500 (ads) + 20 hours
| Metric | Fail | Minimum | Success | Home Run |
|---|---|---|---|---|
| Problem confirmation rate | <40% | 40-60% | 60-80% | >80% |
| Landing page signup rate | <2% | 2-5% | 5-10% | >10% |
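The thresholds above can be applied mechanically when scoring results. A minimal sketch of that scoring, assuming the band boundaries in the table; the measured counts are hypothetical:

```python
def classify(rate, bands):
    """Place a measured rate into a fail/minimum/success/home-run band.

    `bands` = (minimum_at, success_at, home_run_at): below the first
    threshold is a fail, and each threshold opens the next band.
    """
    minimum_at, success_at, home_run_at = bands
    if rate < minimum_at:
        return "fail"
    if rate < success_at:
        return "minimum"
    if rate < home_run_at:
        return "success"
    return "home run"

# Band boundaries taken from the table above.
SIGNUP_BANDS = (0.02, 0.05, 0.10)
CONFIRMATION_BANDS = (0.40, 0.60, 0.80)

# Hypothetical results: 62 signups from 1,000 visitors; 14 of 20
# interviewees confirm the problem.
print(classify(62 / 1000, SIGNUP_BANDS))       # 6.2% -> "success"
print(classify(14 / 20, CONFIRMATION_BANDS))   # 70% -> "success"
```

Keeping the bands in one place means every experiment reports against the same scale when results are compiled in week 7.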
## Hypothesis #2: Solution Fit 🔴 Critical
We believe that [neighbors seeking skill exchange] will [use a time-credit platform instead of informal favors] if we [provide a trusted, easy-to-use system that tracks exchanges fairly]. We will know this is true when we see [70%+ of prototype users rate the experience as "useful" or "very useful"].
**Risk Level:** 🔴 Critical
**Current Evidence:**
- Supporting: Existing time banks show demand
- Contradicting: Informal networks may suffice
- Gaps: No prototype testing yet
**Experiment Design:** Wizard of Oz prototype with manual matching | **Sample:** 15 users | **Duration:** 3 weeks | **Cost:** 30 hours of manual effort
## Hypothesis #3: Willingness to Pay 🔴 Critical
We believe that [active community members] will [pay $4.99/month for premium features] if we [provide unlimited exchanges and priority matching that save significant time]. We will know this is true when we see [15+ pre-orders at the target price point from engaged users].
**Risk Level:** 🔴 Critical
**Current Evidence:**
- Supporting: Freemium models work in community apps
- Contradicting: Time banks are typically free
- Gaps: No pricing validation yet
**Experiment Design:** Pre-order page with premium feature explanation | **Sample:** 100 engaged users | **Duration:** 2 weeks | **Cost:** $200 (targeted ads)
## Hypothesis #4: Trust Mechanism Effectiveness 🟡 High
We believe that [new users] will [feel safe participating] if we [implement community vouching and optional background checks]. We will know this is true when we see [80%+ of users complete verification and 90%+ report feeling "safe" or "very safe"].
**Risk Level:** 🟡 High
## Hypothesis #5: Community Champion Model 🟡 High
We believe that [HOAs and community associations] will [adopt and promote SkillSwap] if we [provide them with a free community dashboard and launch support]. We will know this is true when we see [3+ HOAs commit to pilot programs with 50+ member signups each].
**Risk Level:** 🟡 High
## Hypothesis #6: Credit System Adoption 🟢 Medium
We believe that [users] will [understand and use the time-credit system] if we [provide clear onboarding and equal-value framing]. We will know this is true when we see [75%+ of users complete at least one exchange within 2 weeks of signup].
**Risk Level:** 🟢 Medium
## Hypothesis #7: Chicken-and-Egg Solution 🟢 Medium
We believe that [new communities] will [achieve critical mass quickly] if we [seed new members with a 3-credit starter balance and recruit community champions]. We will know this is true when we see [50+ active members and 20+ skills listed within the first month of launch].
**Risk Level:** 🟢 Medium
## Hypothesis #8: Channel Efficiency 🟢 Medium
We believe that [HOA partnerships] will [drive efficient user acquisition] if we [provide turnkey community launch packages]. We will know this is true when we see [customer acquisition cost (CAC) under $10 and 30%+ 30-day retention from this channel].
**Risk Level:** 🟢 Medium
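Hypothesis #8's pass bar is simple arithmetic: CAC is channel spend divided by users acquired, and 30-day retention is the share of an acquisition cohort still active a month later. A minimal sketch, with all cohort numbers hypothetical:

```python
def channel_passes(spend, acquired, active_day_30,
                   cac_max=10.0, retention_min=0.30):
    """Check the Hypothesis #8 bar: CAC under $10 and 30%+ 30-day retention."""
    cac = spend / acquired                 # dollars per acquired user
    retention = active_day_30 / acquired   # share of cohort active at day 30
    return cac < cac_max and retention >= retention_min

# Hypothetical HOA-channel cohort: $450 spent, 60 signups,
# 21 still active on day 30 (CAC $7.50, 35% retention).
print(channel_passes(spend=450, acquired=60, active_day_30=21))  # True
```

Tracking spend and cohort activity per channel from day one makes this check trivial to run when the sprint results are compiled.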
## Experiment Catalog
### Experiment #1: Problem Discovery Interviews
**Hypotheses:** #1
**Method:** 20-30 semi-structured interviews with suburban homeowners and retirees
**Success:** ✅ 60%+ confirm the problem as significant
**Cost:** $1,000-$1,500 | **Timeline:** 2 weeks
### Experiment #2: Landing Page Smoke Test
**Hypotheses:** #1, #2
**Method:** Single-page waitlist driven by ad traffic
**Success:** ✅ >5% signup rate
**Cost:** $500-$1,000 | **Timeline:** 2 weeks
### Experiment #3: Wizard of Oz MVP
**Hypotheses:** #2, #3
**Method:** Manual matching and delivery via email
**Success:** ✅ 8+/10 satisfaction; 50%+ would pay
**Cost:** Time only | **Timeline:** 4 weeks
### Experiment #4: Pricing Survey
**Hypotheses:** #3
**Method:** Van Westendorp price sensitivity testing
**Success:** ✅ Clear optimal price point identified
**Cost:** $200 | **Timeline:** 1 week
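Van Westendorp asks each respondent four price questions (too cheap, a bargain, getting expensive, too expensive) and reads price points from where the cumulative answer curves cross. A minimal sketch of one such reading, using only the "too cheap" and "too expensive" curves; all survey answers below are hypothetical:

```python
def optimal_price_point(too_cheap, too_expensive, grid):
    """Simplified Van Westendorp reading: scan candidate prices and
    return the one where the share calling it 'too cheap' comes closest
    to the share calling it 'too expensive'.  (A full analysis also
    uses the 'bargain' and 'getting expensive' curves.)
    """
    n = len(too_cheap)
    best_price, best_gap = None, float("inf")
    for p in grid:
        pct_too_cheap = sum(1 for v in too_cheap if v >= p) / n
        pct_too_expensive = sum(1 for v in too_expensive if v <= p) / n
        gap = abs(pct_too_cheap - pct_too_expensive)
        if gap < best_gap:
            best_price, best_gap = p, gap
    return best_price

# Hypothetical survey answers ($/month) from 5 respondents.
too_cheap = [1, 2, 2, 3, 3]
too_expensive = [6, 7, 8, 8, 10]
grid = [i * 0.5 for i in range(25)]   # candidate prices $0.00 .. $12.00
print(optimal_price_point(too_cheap, too_expensive, grid))
```

If the crossing point lands well below the $4.99 target, that is exactly the Trigger #3 signal described later in this document.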
### Experiment #5: Trust Mechanism Test
**Hypotheses:** #4
**Method:** Test verification completion rates and perceived safety
**Success:** ✅ 80%+ complete verification; 90%+ feel safe
**Cost:** $300 | **Timeline:** 2 weeks
### Experiment #6: HOA Pilot Outreach
**Hypotheses:** #5
**Method:** Pitch 10 HOAs with a community dashboard demo
**Success:** ✅ 3+ HOAs commit to a pilot
**Cost:** $500 | **Timeline:** 3 weeks
## Experiment Prioritization Matrix
| Experiment | Hypothesis | Impact | Effort | Priority |
|---|---|---|---|---|
| Discovery Interviews | #1 | 🔴 Critical | Medium | 1 |
| Landing Page Test | #1, #2 | 🔴 Critical | Low | 2 |
| Wizard of Oz MVP | #2, #3 | 🔴 Critical | High | 3 |
| HOA Pilot Outreach | #5 | 🟡 High | Medium | 4 |
| Pricing Survey | #3 | 🟡 High | Low | 5 |
## 8-Week Validation Sprint
**Weeks 1-2: Discovery**
- Launch landing page
- Recruit 20+ interviewees
- Run $500 ad campaign
- Conduct discovery calls

**Weeks 3-4: Prototype**
- Analyze interview insights
- Build manual MVP workflow
- Deliver to 15 users
- Collect satisfaction data

**Weeks 5-6: Pricing & Partnerships**
- Run pricing survey
- Test pre-order page
- Reach out to 10 HOAs
- Test trust mechanisms

**Weeks 7-8: Decision**
- Compile all results
- Score against criteria
- Make the Go/No-Go decision
- Plan Phase 2 or pivot
## Minimum Success Criteria (Go/No-Go)
| Category | Metric | Must Achieve | Nice-to-Have |
|---|---|---|---|
| Problem | Interview confirmation | 60%+ | 80%+ |
| Problem | Landing page signup | 5%+ | 10%+ |
| Solution | Prototype satisfaction | 7/10+ | 8.5/10+ |
| Solution | Exchange completion | 50%+ | 75%+ |
| Pricing | Pre-orders collected | 15+ | 30+ |
| Partnerships | HOA commitments | 3+ | 5+ |
- **Go:** all "Must Achieve" criteria met
- **Conditional Go:** 70%+ of criteria met, with a clear path to the remainder
- **No-Go:** <70% of criteria met and no clear fixes
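The decision rules reduce to a pass-rate calculation over the must-achieve criteria. A minimal sketch; the criterion names and outcomes below are illustrative, and the "clear path to the remainder" judgment stays with the team:

```python
def go_no_go(results):
    """Apply the decision rules: Go if every must-achieve criterion
    passes, Conditional Go at a 70%+ pass rate (pending a clear path
    to the remainder), otherwise No-Go."""
    share = sum(results.values()) / len(results)
    if share == 1.0:
        return "Go"
    if share >= 0.70:
        return "Conditional Go"
    return "No-Go"

# Hypothetical outcomes for the six must-achieve criteria above.
results = {
    "interview confirmation 60%+": True,
    "landing page signup 5%+": True,
    "prototype satisfaction 7/10+": True,
    "exchange completion 50%+": False,
    "pre-orders 15+": True,
    "HOA commitments 3+": True,
}
print(go_no_go(results))  # 5/6 pass (~83%) -> "Conditional Go"
```

A Conditional Go result is only a candidate verdict: the team still has to argue that the failing criteria have a credible fix before proceeding.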
## Pivot Triggers & Contingency Plans
### Trigger #1: Problem Doesn't Exist
**Signal:** <40% of interviewees confirm the problem
**Action:** Re-interview the audience about their actual top problems
**Pivot:** A different problem in the same audience, or the same problem in a different audience
### Trigger #2: Solution Doesn't Resonate
**Signal:** <50% satisfaction with the prototype
**Action:** Run deep-dive interviews on the missing value
**Pivot:** Simplify the scope, change the format, or add a human touch
### Trigger #3: Won't Pay Enough
**Signal:** Acceptable price comes in below 50% of target
**Action:** Look for a higher-value use case
**Pivot:** Freemium with an upsell, an enterprise pivot, or cost optimization
### Trigger #4: Can't Scale via HOAs
**Signal:** Fewer than 1 of 10 pitched HOAs commits
**Action:** Test organic/viral channels
**Pivot:** Product-led growth, community-first launch, or partnership distribution
## Experiment Documentation Template
**Date:** [Start - End]
**Hypothesis Tested:** #X
### Setup
- What we did
- Sample size
- Tools used
- Cost incurred
### Results
| Metric | Target | Actual | Pass/Fail |
|--------|--------|--------|-----------|
| [Metric name] | [Target value] | [Measured value] | ✅ / ❌ |
### Key Learnings
- Insight #1
- Insight #2
- Surprise finding
### Evidence
- [Link to data]
- [Quotes/screenshots]
### Next Steps
- [What this means for the product]
- [Follow-up experiments needed]