Section 05: User Research & Validation Plan
VendorShield: De-risking the Idea Before De-risking the Vendors
1. Key Assumptions to Validate
The success of VendorShield hinges on these core assumptions. Our validation plan is designed to systematically test them before committing significant resources.
Problem Assumptions
| Assumption | Risk | Validation Method | Target Evidence |
|---|---|---|---|
| Mid-market security teams are overwhelmed by the volume of manual vendor assessments. | High | User Interviews, Surveys | >80% of interviewees managing 50+ vendors describe the process as "painful" or "overwhelming". |
| Current solutions (enterprise GRC) are perceived as too expensive and complex for the mid-market. | High | Interviews, Competitive Analysis | >70% of targets have evaluated but rejected enterprise tools due to price/complexity. |
| Static, self-reported questionnaires are widely considered "security theater" and untrustworthy. | Medium | User Interviews | >60% express low confidence in vendor-provided questionnaires. |
Solution Assumptions
| Assumption | Risk | Validation Method | Target Evidence |
|---|---|---|---|
| Users will trust and act upon a single, composite risk score generated by an algorithm. | Critical | Wizard of Oz Prototype, User Testing | Users can explain the score's meaning and identify a clear next action based on it. |
| Continuous monitoring is a "must-have" feature that justifies switching from periodic reviews. | High | Landing Page Test, Pricing Interviews | "Continuous" or "Real-time" messaging drives a >20% lift in signup conversion. |
| A unified view of Security, Financial, and Operational risk is more valuable than a security-only tool. | Medium | Feature Ranking in Surveys, Interviews | Financial/Operational risk is ranked in the top 3 needs by >50% of security and procurement personas. |
Business Assumptions
| Assumption | Risk | Validation Method | Target Evidence |
|---|---|---|---|
| Mid-market companies will pay ~$1,000/month for this solution. | Critical | Pricing Interviews, Pre-order Test | ≥5 pre-orders secured at a ~$500/mo early-adopter price point. |
| The primary buyer (CISO/Security Lead) has the budget and authority to purchase independently. | High | User Interviews (Buying Process) | >50% of Security Leads confirm they control a budget of at least $25k/year for security tools. |
| We can acquire high-quality risk data from third-party APIs at a cost that supports our unit economics. | Critical | Technical Spikes, API Trials | Key data points (e.g., credit scores, breach data) are available for <$5 per vendor per year. |
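The data-cost assumption above can be sanity-checked with a simple unit-economics sketch. All figures below are illustrative placeholders, not sourced API pricing; the function names and sample sources are hypothetical.

```python
# Illustrative unit-economics check for the <$5/vendor/year data-cost target.
# All per-vendor prices below are assumptions to be replaced with real
# quotes from the API trials.

def data_cost_per_vendor(api_costs: dict[str, float]) -> float:
    """Sum annual per-vendor costs across third-party data sources."""
    return sum(api_costs.values())

# Hypothetical annual per-vendor pricing discovered during technical spikes.
sample_costs = {
    "breach_data": 1.50,
    "credit_score": 2.00,
    "domain_security_scan": 0.75,
}

total = data_cost_per_vendor(sample_costs)
print(f"Data cost per vendor per year: ${total:.2f}")  # $4.25
print("Meets <$5 target:", total < 5.00)               # True
```

If real trial pricing pushes the total above $5, the unit-economics assumption is invalidated and either the price point or the data sources need to change.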
2. Customer Discovery Interview Guide
A 60-minute interview script designed to uncover deep insight into the user's world, focusing on pains and current workflows rather than pitching our solution.
Part 1: Context (10 min) - "Tell me about your role. What does a typical week look like when it comes to managing third-party risk?"
Part 2: Problem Deep Dive (20 min) - "Walk me through the last time you onboarded a new critical vendor. What were the steps? What was the most frustrating part of that process?"
Part 3: Current Solutions (15 min) - "What tools are you using today? Spreadsheets? Email? A specific platform? What do you love and hate about your current system?"
Part 4: Quantifying Pain (10 min) - "Roughly how many hours does your team spend per month on vendor risk activities? If you had a magic wand to fix one thing about this process, what would it be?"
Part 5: Closing & Ask (5 min) - "This is incredibly helpful. Based on our chat, how would you rate the pain of vendor risk management on a 1-10 scale? Would you be open to looking at early concepts we're developing in this space?"
Logistics: Target 20-30 interviews with Security/Procurement leaders at companies with 500-5,000 employees. Recruit via LinkedIn and offer a $100 gift card incentive.
3. Phased Validation Experiments
A series of low-cost experiments to test demand, willingness-to-pay, and the core value proposition before writing a line of code.
A. Landing Page Test
Goal: Validate problem/solution resonance.
Setup: Unbounce page with a clear value prop ("Automated Vendor Risk. Zero Questionnaires.") and an email signup for a waitlist.
Traffic: $1,000 budget for LinkedIn ads targeting "Head of Security" at mid-market tech companies.
Success: >5% visitor-to-waitlist conversion rate (>50 qualified leads).
B. "Wizard of Oz" MVP
Goal: Test the value of the core output (the risk report).
Setup: A simple form where users submit a vendor's domain. We manually generate a 5-page PDF report using public tools and data, then email it to them.
Offer: "Get a free risk report on one of your vendors."
Success: Net Promoter Score (NPS) above 40 from the first 20 users who receive a report.
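The NPS threshold above can be computed with the standard formula (promoters score 9-10, detractors 0-6, NPS = %promoters minus %detractors). The sample responses below are hypothetical, included only to show the calculation.

```python
def nps(scores: list[int]) -> float:
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical 0-10 responses from the first 20 report recipients.
responses = [10, 9, 9, 8, 10, 9, 7, 9, 10, 6, 9, 8, 10, 9, 5, 9, 10, 7, 9, 8]

score = nps(responses)
print(f"NPS: {score:.0f}")                    # NPS: 55
print("Passes >40 threshold:", score > 40)    # True
```

With only 20 responses a single detractor swings the score by 5 points, so the threshold should be read as a directional signal, not a precise measurement.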
C. Pre-Order Test
Goal: Validate willingness-to-pay.
Setup: After a user receives their "Wizard of Oz" report, follow up with a limited-time offer to pre-purchase the platform.
Offer: "Be one of our first 10 customers. Get the Professional Plan for $499/mo (50% off) for the first year. Fully refundable."
Success: ≥ 5 paying customers, generating ~$2,500 in committed monthly revenue.
4. 8-Week Validation Sprint
- Weeks 1-2 (Problem Validation): Conduct 10-15 customer interviews. Deploy screening survey. Synthesize pain points.
- Weeks 3-4 (Solution & Demand Validation): Build and launch landing page A/B test. Drive traffic via LinkedIn ads. Build waitlist.
- Weeks 5-6 (Value & Pricing Validation): Launch "Wizard of Oz" MVP for waitlist users. Conduct pricing interviews. Initiate pre-order campaign.
- Weeks 7-8 (Synthesize & Decide): Analyze all experiment data. Collect NPS from prototype users. Make Go/No-Go decision.
Go/No-Go Decision Criteria
We proceed to a seed round and product build only if we meet at least 4 of these 5 criteria.
| Metric | Target | Pass? |
|---|---|---|
| Interview Pain Signal | >80% rate pain ≥ 8/10 | ☐ |
| Landing Page Signup Rate | >5% (qualified leads) | ☐ |
| Price Acceptance | >50% accept ~$1k/mo anchor | ☐ |
| Pre-Orders (Willingness to Pay) | ≥ 5 customers @ ~$500/mo | ☐ |
| Prototype NPS | > 40 | ☐ |
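The "at least 4 of 5" gate above can be scored mechanically. The criterion keys and the sample results below are illustrative, not real experiment outcomes.

```python
# Sketch of the Go/No-Go gate: proceed only if at least 4 of the 5
# criteria pass. Sample results are hypothetical.

def go_no_go(results: dict[str, bool], required: int = 4) -> bool:
    """Return True (Go) if at least `required` criteria passed."""
    return sum(results.values()) >= required

sample = {
    "interview_pain_signal": True,   # >80% rate pain >= 8/10
    "landing_page_signup": True,     # >5% qualified conversion
    "price_acceptance": False,       # <50% accepted the ~$1k/mo anchor
    "pre_orders": True,              # >=5 customers @ ~$500/mo
    "prototype_nps": True,           # NPS > 40
}

print("Go decision:", go_no_go(sample))  # True (4 of 5 passed)
```

Encoding the gate this way forces an explicit pass/fail call on each criterion before the decision meeting, rather than an impressionistic reading of the data.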
5. Research Synthesis Framework
After each phase, we will synthesize findings into a standardized format to ensure learnings are captured and inform the next step.
- Validated Assumptions: Which of our initial beliefs were confirmed by evidence? Include direct quotes.
- Invalidated Assumptions: Which beliefs were proven wrong? What is the pivot or change required?
- Top 3 User Pains: Rank the most severe, frequently mentioned problems in the user's own words.
- "Aha!" Moments: What were the most surprising or unexpected insights from the research?
- Persona Refinements: How has our understanding of the target user (their role, goals, environment) changed?
- Actionable Next Steps: What is the single most important thing we should do next based on this data?