PromptVault: Prompt Library Manager


User Research & Validation Plan

Systematic validation of PromptVault's core assumptions before engineering investment

1 Key Assumptions to Validate

  • Problem assumptions: 8 (critical assumptions)
  • Solution assumptions: 7 (product-market fit)
  • Business assumptions: 5 (revenue viability)

| Assumption | Risk | Validation Method | Target Evidence |
|---|---|---|---|
| PROBLEM ASSUMPTIONS | | | |
| AI practitioners manage 20+ prompts across multiple platforms | High | User interviews + screen-recording analysis | ≥80% of practitioners confirm managing 15+ prompts |
| Teams waste ≥5 hours/week finding and recreating prompts | High | Time-tracking diary study (5 teams, 1 week) | Avg. 4+ hours/week lost to prompt management |
| Version control is a critical pain point (can't find "what worked") | Critical | Scenario-based interview questions | ≥70% report losing good prompt versions |
| SOLUTION ASSUMPTIONS | | | |
| Git-like versioning is intuitive for non-developers | Medium | Figma prototype usability testing (10 users) | ≥80% complete core tasks without guidance |
| Multi-model testing saves significant manual effort | High | Wizard of Oz MVP with manual execution | Users report ≥50% time savings vs. manual testing |
| BUSINESS ASSUMPTIONS | | | |
| Teams will pay $49/user/month for collaboration features | Critical | Pricing-page A/B test + pre-order commitment | ≥3% conversion to Team plan at target price |
| CAC < $150 for Pro users via content marketing | Medium | $1,000 test campaigns across 3 channels | CAC < $120 in at least 2 channels |

2 Customer Discovery Interview Guide (75 Minutes)

Interview Targets

  • AI Engineers (8-10)
  • Product Managers using LLMs (6-8)
  • Content Creators/Analysts (4-6)
  • Consultants/Agency leads (4-6)

Logistics

  • Recruitment: LinkedIn, AI Discord communities
  • Incentive: $75 Amazon gift card
  • Tools: Zoom + Otter.ai + Airtable notes
  • Target: 25 interviews minimum

Interview Structure

Part 1: Context & Workflow (15 min)

Discovery
  • "Walk me through your typical week working with LLMs"
  • "What percentage of your work involves prompt engineering?"
  • "Show me how you currently save and organize prompts"
  • "What's been your biggest prompt-related frustration recently?"

Part 2: Pain Point Deep Dive (20 min)

Problem Validation
  • "Tell me about the last time you lost a prompt that worked well"
  • "How do you currently test prompts across different models?"
  • "What happens when you need to share prompts with team members?"
  • "On a scale of 1-10, how painful is prompt management for you?"

Part 3: Solution Reaction (25 min)

Solution Testing
  • "If I showed you a tool that did X, what would be most valuable?"
  • "Review this Figma prototype - what's confusing or missing?"
  • "Would this solve your version control problem? Why/why not?"
  • "How much would you expect to pay for something like this?"

Part 4: Wrap-up & Referrals (15 min)

Network Building
  • "Who else on your team struggles with this?"
  • "Would you be interested in beta testing?"
  • "Can I follow up in 2 weeks with an early version?"
  • "Any other communities where I should share this?"

3 Validation Experiments & Timeline

Landing Page Test

Quantify demand & messaging effectiveness
Budget: $750 | Duration: 2 weeks
  • A/B test 3 value propositions
  • Measure waitlist conversion rate
  • Test pricing page fake door
  • Target: 5%+ signup rate
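
A raw signup rate on a small sample can clear 5% by luck alone. A minimal sketch of a more conservative check, using the Wilson score lower bound on the observed conversion rate (the visitor and signup counts here are illustrative, not from the plan):

```python
from math import sqrt

def wilson_lower_bound(successes: int, n: int, z: float = 1.96) -> float:
    """Lower bound of the 95% Wilson score interval for a binomial proportion.

    Judging the 5% signup target against this lower bound (rather than the
    raw rate) avoids declaring success on a small, lucky sample.
    """
    if n == 0:
        return 0.0
    p = successes / n
    denom = 1 + z**2 / n
    center = p + z**2 / (2 * n)
    margin = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (center - margin) / denom

# Illustrative: 70 waitlist signups from 1,000 landing-page visitors (7% raw).
lb = wilson_lower_bound(70, 1000)
print(f"raw rate: 7.0%, Wilson 95% lower bound: {lb:.1%}")
print("5% target met" if lb >= 0.05 else "inconclusive -- keep the test running")
```

With the same 7% raw rate on only 100 visitors, the lower bound drops below 5% and the test stays inconclusive, which is the point of the check.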

Wizard of Oz MVP

Manual service to validate core workflow
Cost: $0 + time | Users: 15-20
  • Manual prompt versioning via Google Sheets
  • Email-based multi-model testing
  • Collect feedback after each interaction
  • Measure time savings reported

Pre-Order Test

Validate willingness to pay
Target: 10 customers | Price: $49-99
  • Offer 50% discount for 6-month commitment
  • Refundable if product doesn't launch
  • Measure conversion rate from waitlist
  • Test different pricing models

8-Week Validation Timeline

Week 1-2: Problem Discovery

Interviews: 10-12 | Survey: 200+ responses

Validate core pain points, document current workflows, identify most frustrated users

Week 3-4: Solution Testing

Landing page live | A/B tests running | $750 ad spend

Test messaging, measure demand, build waitlist, validate user interest

Week 5-6: Pricing Validation

Pricing interviews: 15 | Pre-order test | Van Westendorp survey

Test price sensitivity, validate willingness to pay, optimize pricing tiers
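
The Van Westendorp analysis in this phase comes down to finding where the cumulative "too cheap" and "too expensive" curves cross. A simplified sketch of that optimal-price-point calculation (the survey responses below are illustrative):

```python
def optimal_price_point(too_cheap, too_expensive, grid):
    """Simplified Van Westendorp OPP: the price at which the share of
    respondents calling it 'too cheap' crosses the share calling it
    'too expensive' (smallest absolute gap on a price grid)."""
    def share_too_cheap(p):
        # Respondents whose "too cheap" price is at or above p.
        return sum(tc >= p for tc in too_cheap) / len(too_cheap)

    def share_too_expensive(p):
        # Respondents whose "too expensive" price is at or below p.
        return sum(te <= p for te in too_expensive) / len(too_expensive)

    return min(grid, key=lambda p: abs(share_too_cheap(p) - share_too_expensive(p)))

# Illustrative responses ($/user/month): the price each respondent called
# "too cheap to be credible" and "too expensive to consider".
too_cheap = [20, 30, 40]
too_expensive = [50, 60, 70]
opp = optimal_price_point(too_cheap, too_expensive, range(20, 71, 5))
print(f"optimal price point: ${opp}/user/month")  # -> $45/user/month
```

A full analysis also derives the acceptable price range from the "cheap" and "expensive" curves; this sketch covers only the crossing used for the optimal price point.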

Week 7-8: Prototype Validation

Wizard of Oz MVP | 20 users testing | NPS collection

Test core workflow, collect qualitative feedback, measure time savings
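
The NPS collection above reduces to a simple formula: percent promoters (scores 9-10) minus percent detractors (scores 0-6). A small sketch with illustrative ratings from the 20 Wizard of Oz testers:

```python
def nps(scores):
    """Net Promoter Score from 0-10 'how likely to recommend' ratings:
    % promoters (9-10) minus % detractors (0-6), ranging -100 to +100."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return round(100 * (promoters - detractors) / len(scores))

# Illustrative ratings from 20 testers: 12 promoters, 3 detractors.
scores = [10, 9, 9, 8, 10, 7, 9, 6, 10, 8, 9, 5, 10, 9, 7, 8, 9, 10, 6, 9]
print(nps(scores))  # -> 45, clearing the >=40 target
```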

4 Go/No-Go Decision Criteria

| Metric | Target | Threshold | Weight | Status |
|---|---|---|---|---|
| Problem Validation Score | ≥80% confirm pain points | 70% | 30% | Not Tested |
| Waitlist Signup Rate | ≥5% conversion | 3% | 25% | Not Tested |
| Price Acceptance | ≥60% at target price | 40% | 20% | Not Tested |
| Pre-Orders Secured | 10+ customers | 5 | 15% | Not Tested |
| Prototype NPS | ≥40 | 30 | 10% | Not Tested |

Go Decision Criteria

Proceed if the total weighted score is ≥70% AND at least 3 of the 5 metrics meet their targets. At minimum, validation requires confirmed problem pain plus demonstrated willingness to pay.
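
How the weighted score combines with the per-metric targets is not spelled out above, so this sketch assumes a simple scheme: a metric contributes full weight if it hits its target, half weight if it only clears its threshold, and zero otherwise. The metric outcomes shown are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    weight: float        # fraction of the total score (weights sum to 1.0)
    target_met: bool
    threshold_met: bool

def go_no_go(metrics, pass_score=0.70, min_targets=3):
    """Go/no-go per the decision criteria: weighted score >= 70% AND at
    least 3 of 5 metrics at target. Scoring scheme is an assumption:
    full weight at target, half weight at threshold, zero otherwise."""
    score = sum(
        m.weight * (1.0 if m.target_met else 0.5 if m.threshold_met else 0.0)
        for m in metrics
    )
    targets_hit = sum(m.target_met for m in metrics)
    return score >= pass_score and targets_hit >= min_targets, score

# Illustrative outcome after the 8-week plan.
metrics = [
    Metric("Problem Validation Score", 0.30, True,  True),
    Metric("Waitlist Signup Rate",     0.25, True,  True),
    Metric("Price Acceptance",         0.20, False, True),
    Metric("Pre-Orders Secured",       0.15, True,  True),
    Metric("Prototype NPS",            0.10, False, False),
]
go, score = go_no_go(metrics)
print(f"weighted score: {score:.0%} -> {'GO' if go else 'NO-GO'}")  # -> 80% -> GO
```

Under this scheme, three targets plus one threshold yield 80%, so the example proceeds; missing the waitlist target instead would drop the score to 67.5% and flip the decision.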

5 Research Synthesis Template

Problem Validation Summary

  • Top 3 validated pain points
  • User quotes as evidence
  • Unexpected findings
  • Invalidated assumptions

Solution Validation Summary

  • Most compelling features
  • Features users don't care about
  • UX concerns raised
  • Integration needs identified

Pricing Validation Summary

  • Optimal price point: $______
  • Price sensitivity by segment
  • Value anchors (what they compare to)
  • Preferred pricing model

Go-to-Market Insights

  • Where target users hang out
  • How they discover solutions
  • Decision-making process
  • Key buying objections

Recommended Next Steps

  • If GO (≥70%): proceed to MVP development with the validated features
  • If NO-GO (<70%): pivot or kill the project based on the weakest validation areas

This validation plan requires 8 weeks and approximately $2,600 in incentives and ad spend ($75 × 25 interview incentives plus $750 in landing-page ads).

Success criteria are based on industry benchmarks for B2B SaaS products.