PM Skills Arsenal — Product Strategy: Async Video AI Pivot

Executive Summary

Current ARR: $12M
Target ARR (+40%): $16.8M
Registered Users: 2M
Team Size: 17
Planning Horizon: 6 months

This company has a $12M ARR async video product with 2M registered users, board pressure to grow 40% in the next year, and a three-way tension between improving the existing video editor, building AI meeting summaries, and launching a team knowledge base. The correct answer is not "do all three" — with 12 engineers, 3 PMs, and 2 designers, you can fund two bets well or three bets badly (H: capacity model; T1: headcount).

The strategy thesis: AI meeting summaries is the highest-conviction bet because it converts the existing user base from a recording tool (used once per meeting) to an intelligence layer (used every day), directly attacking the retention and expansion metrics that drive ARR growth (H: usage data + market signals, T2-T3). The video editor improvements are necessary but bounded — they protect the base without growing it. The team knowledge base is a high-option-value transformational bet that creates an entirely new product surface, but only after AI summaries prove the "post-meeting value" hypothesis.

Recommended allocation: 55% of capacity to AI Meeting Intelligence (Core/Adjacent), 30% to Video Editor Hardening (Core), 15% to Knowledge Base Exploration (Transformational). Sequence: AI summaries first (months 1-3 build, month 4 beta), editor hardening in parallel, knowledge base exploration in months 4-6 gated on AI summary adoption. If AI summary adoption exceeds 40% of recordings by month 4, double down on the knowledge base. If it falls below 25%, pivot the knowledge base investment to AI summary iteration.

Board-ready summary: We are pivoting from "async video recorder" to "async communication intelligence." The 40% ARR growth target requires converting passive recording users into daily active users of AI-powered meeting insights. We are NOT building a full knowledge base this half — we are building the foundation (AI summaries) that makes a knowledge base valuable later. This is a sequenced bet, not a portfolio spray.


Step 0: Context Gate & Framework Selection

Question Type: Half-year strategic roadmap for established product facing pivot decision — "what do we build, in what order, and what do we stop doing?"

Context Gate Checks

| Check | Status | Assessment |
|---|---|---|
| Is the problem defined? | Pass | Core problem is clear: async video is a commodity (Loom, Vimeo Record, Google Meet recording all offer this), and the company needs differentiation to sustain growth. AI-powered communication intelligence is the hypothesis. |
| Do you have market context? | Pass | Async video market well-mapped: Loom ($4.4B acquisition by Atlassian, 2023), Grain, Otter.ai, Fireflies.ai all converging on AI meeting intelligence. Competitive context is sufficient. T2 |
| Is this a strategy question? | Pass | Yes — the ask is "which of three investment areas to prioritize and how to sequence them," not "when will feature X ship." |
| Time horizon appropriate? | Pass | 6-month horizon (H2 2026). Appropriate for a strategic roadmap. |
| Resource constraints known? | Pass | 12 engineers, 3 PMs, 2 designers. No planned hiring in H2 (budget frozen pending ARR growth proof). T1 |
| Company-level strategy anchor? | Partial | Board mandate: 40% ARR growth ($12M → $16.8M). No specific direction on how — the board cares about the number, not the path. This gives strategic freedom but also accountability. |

Framework Selection

Strategy question type: Half-year roadmap for an established product with a pivot decision and a constrained team.

Primary frameworks (apply in full): Vision-to-Roadmap Cascade (translate pivot vision to executable bets), Strategic Bet-Sizing (size 3 competing investments with confidence levels), Option-Value Sequencing (order matters — each bet changes the value of subsequent bets), Strategic Tension Surfacing (3 competing priorities = guaranteed tensions).

Supporting frameworks (scan only): NOT-Doing Section (explicit deprioritization), Resource Allocation (17-person team, every person counts), Roadmap Communication (the board sees a different story than the team).

Skipped (why): Quarterly Gates — reduced to monthly checkpoints given the 6-month horizon. Quarterly gates are too coarse for a half-year plan with a critical month-4 decision point.

Framework selection rationale: The core strategic challenge is sequencing — all three initiatives have legitimate claims, but the team can only fund two well. Option-Value Sequencing is the critical framework because the order in which these bets are placed changes the information available for subsequent decisions. AI summaries before knowledge base is not the same strategy as knowledge base before AI summaries.


1. Vision-to-Roadmap Cascade

The structural bridge from aspiration to execution — a framework that makes it impossible to hide the gap between vision and bets, or bets and quarterly focus.

Vision Statement

Become the default async communication intelligence layer for distributed teams (10-500 employees) by end of FY2027, measured by >30% of registered users engaging with AI-powered features weekly.

Falsification test: If weekly AI feature engagement is below 15% of registered users by month 6, the "intelligence layer" thesis is wrong — users want a recorder, not an intelligence product. Kill the pivot and double down on video quality. M

Strategy Pillars

| # | Pillar | Strategic Intent | How It Maps to the Vision |
|---|---|---|---|
| P1 | AI-Powered Meeting Intelligence | Transform recordings from passive archives into active intelligence — summaries, action items, searchable transcripts, follow-up nudges | Directly creates the "intelligence layer" that differentiates from commodity recorders. Primary driver of daily engagement (vs. record-once-watch-once pattern). |
| P2 | Recording Foundation Hardening | Make the core recording + editing experience reliable, fast, and competitive with Loom/Grain on table-stakes features | Protects the base. Intelligence features are worthless if users churn due to recording bugs, slow processing, or missing basic editing tools. Retention moat. |
| P3 | Team Knowledge Compounding | Enable teams to build institutional knowledge from accumulated meeting intelligence — searchable, referenceable, compounding over time | Creates the long-term moat: switching costs compound as organizational knowledge accumulates. This is what makes the product "sticky" at the team level vs. individual level. |

Vision Cascade: From Mission to Monthly Milestones

MISSION: Make async communication smarter, not just recorded
    │
    ▼
VISION (FY2027): Default async intelligence layer for distributed teams
    │
    ├── P1: AI Meeting Intelligence ──── "Every meeting produces a reusable artifact"
    │       │
    │       ├── Bet 1: AI Summaries + Action Items (Core/Adjacent)
    │       └── Bet 3: Knowledge Base MVP (Transformational)
    │
    ├── P2: Recording Foundation ──────── "Recording just works, always"
    │       │
    │       └── Bet 2: Editor + Reliability Hardening (Core)
    │
    └── P3: Team Knowledge ──────────── "Your team's memory, always searchable"
            │
            └── Bet 3: Knowledge Base MVP (Transformational)

QUARTERLY MILESTONES:
    Q3 2026 (Months 1-3):
      • AI Summary v1 shipped to 100% of users
      • Editor reliability: crash rate <0.1%, processing time <30s
      • Knowledge Base: technical spike + design exploration

    Q4 2026 (Months 4-6):
      • AI Summary v2 (action items, follow-up nudges)
      • Gate decision: Knowledge Base build or AI Summary deepening
      • Target: 25% WAU on AI features, NPS >45

O → I → R → C → W (Observation → Insight → Response → Confidence → Watch)

O: Current product usage pattern: 68% of recordings are watched once, 23% are never watched after recording, and only 9% are watched 3+ times (T1, internal analytics). The average user records 2.3 videos/week but engages with the platform 1.8 times/week — recording frequency exceeds engagement frequency.

I: The product is a "record and forget" tool, not a communication platform. Users create content they rarely consume. The value is in the act of recording (replacing a meeting), not in the artifact produced. This is a one-sided value problem — the recorder gets value, the audience doesn't get enough value to come back.

R: AI summaries flip the value equation: the recording becomes an input to an intelligence artifact (summary, action items, searchable transcript) that the entire team consumes daily. If successful, engagement frequency should exceed recording frequency — the inverse of today's pattern.

C: H — usage data is T1 (internal analytics, behavioral, not self-reported). The "record and forget" pattern is unambiguous. The response (AI summaries) is the hypothesis — confidence drops to M that AI summaries specifically solve this (vs. other interventions like better notifications or shorter video formats).

W: Leading indicator: AI summary open rate in the first 48 hours post-recording. If >40% of team members open the summary (vs. ~9% who watch the full video), the value-flip hypothesis is confirmed. Measure weekly starting month 2.
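
The open-rate indicator can be computed directly from summary-view events. A minimal sketch, assuming a hypothetical event schema (recording timestamp, recipient count, per-member first-open times); the product's actual analytics model will differ:

```python
# Hypothetical sketch of the Watch-step leading indicator: share of team
# members who open a recording's AI summary within 48 hours. The schema
# below is illustrative, not the product's real event model.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Recording:
    recorded_at: datetime
    team_size: int                 # members who received the summary
    summary_opens: list[datetime]  # first-open timestamp per member

def open_rate_48h(recordings: list[Recording]) -> float:
    """Fraction of recipients who opened the summary within 48h of recording."""
    recipients, openers = 0, 0
    for rec in recordings:
        cutoff = rec.recorded_at + timedelta(hours=48)
        recipients += rec.team_size
        openers += sum(1 for t in rec.summary_opens if t <= cutoff)
    return openers / recipients if recipients else 0.0

# Hypothesis check: >40% open rate confirms the value flip, against the
# ~9% baseline of members who watch a full recording.
```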

2. Strategic Bet-Sizing

Three competing investments, explicitly sized with hypotheses, confidence levels, and kill criteria. A bet without all six components is a wish, not a strategy.

Bet 1: AI Meeting Intelligence (Core/Adjacent)

Hypothesis

If we build AI-powered meeting summaries, action item extraction, and searchable transcripts, then weekly active engagement will increase from 1.8x/week to 4.5x/week because team members (not just recorders) will consume AI artifacts daily, driving expansion revenue through team-tier upgrades.

Expected Outcome

+$3.2M incremental ARR from team-tier upgrades and reduced churn. DAU/MAU from 12% to 28%. Net revenue retention from 95% to 115%.
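
For reference, the NRR target translates to dollars straightforwardly. A minimal sketch, assuming NRR is measured annually over the existing customer cohort (new-logo ARR comes on top of this):

```python
# What moving NRR from 95% to 115% means on the $12M base. NRR here is the
# existing cohort's ARR after 12 months divided by its starting ARR.
def cohort_arr_after_year(starting_arr: float, nrr: float) -> float:
    return starting_arr * nrr

BASE = 12_000_000
print(f"NRR 0.95 -> ${cohort_arr_after_year(BASE, 0.95)/1e6:.1f}M (base shrinks by $0.6M)")
print(f"NRR 1.15 -> ${cohort_arr_after_year(BASE, 1.15)/1e6:.1f}M (+$1.8M before any new logos)")
```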

Evidence Quality

Otter.ai reached $100M ARR with meeting transcription alone T2. Grain's pivot to AI summaries increased paid conversion 3x T3. Internal survey: 72% of users want "key takeaways without watching" T1.

| Component | Detail |
|---|---|
| Pillar | P1: AI-Powered Meeting Intelligence |
| Category | Core (summaries) + Adjacent (action items, follow-ups) |
| Confidence | H — demand is validated (72% survey, T1), competitive precedent exists (Otter, Grain, T2-T3) |
| Investment | 6 engineers, 2 PMs, 1 designer = 55% of total capacity for 6 months |
| Success Criteria | AI summary adoption >40% of recordings by month 4. Team-tier upgrade rate +15% vs. baseline. DAU/MAU >25% by month 6. |
| Kill Criteria | If AI summary open rate <20% after 60 days of availability, the feature is not generating enough pull. Reduce team to 3 engineers and redirect capacity. |
| Key Assumption | Users want intelligence artifacts from their own meetings (not generic AI). If users treat summaries as novelty (read once, ignore after), the engagement thesis fails. |

Bet 2: Video Editor & Reliability Hardening (Core)

Hypothesis

If we reduce recording processing time to <30 seconds, eliminate the top 5 crash scenarios, and add basic trimming + chapter markers, then churn rate will decrease from 5.2% to 3.8% monthly because users who leave cite reliability and missing editor basics as primary reasons.

Expected Outcome

Churn reduction saves $1.4M ARR annually (1.4pp churn improvement on $12M base). NPS improvement from 38 to 46.
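
A rough compounding check on that figure, assuming churn applies uniformly to revenue and ignoring expansion; it lands in the same ballpark, and the gap between the naive and compounded numbers is why the claim should be read as directional:

```python
# Compounding check: monthly churn 5.2% -> 3.8% on a $12M base, assuming
# churn applies uniformly to revenue and ignoring expansion.
def retained_after(monthly_churn: float, months: int = 12) -> float:
    return (1 - monthly_churn) ** months

BASE_ARR = 12_000_000
saved = BASE_ARR * (retained_after(0.038) - retained_after(0.052))
print(f"~${saved/1e6:.1f}M ARR retained")  # ~$1.2M, same ballpark as the $1.4M above
```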

Evidence Quality

Churn survey: 41% cite "unreliable recording" as top reason T1. Support tickets: 340/month for processing failures T1. Loom comparison: their editor has 14 features we lack T2.

| Component | Detail |
|---|---|
| Pillar | P2: Recording Foundation Hardening |
| Category | Core — protect and retain existing revenue |
| Confidence | H — churn drivers are T1 evidence (direct user feedback + behavioral data) |
| Investment | 4 engineers, 0.5 PM, 1 designer = 30% of total capacity for 6 months |
| Success Criteria | Processing time <30s (p95). Crash rate <0.1%. Monthly churn <4.0% by month 6. Support tickets for reliability <150/month. |
| Kill Criteria | N/A — this is a must-do. Core reliability is not optional. If churn doesn't improve despite fixes, the problem is product-market fit, not reliability — trigger a deeper strategic review. |
| Key Assumption | Reliability is a necessary but not sufficient condition for growth. Fixing reliability alone does not drive 40% ARR growth — it prevents further erosion. |

Bet 3: Team Knowledge Base MVP (Transformational)

Hypothesis

If we build a searchable, taggable repository of AI-generated meeting artifacts (summaries, decisions, action items), then teams will embed the product into their daily workflow as institutional memory, increasing switching costs and enabling a new pricing tier ($25/user/month vs. current $8/user/month).

Expected Outcome

New Enterprise tier at $25/user/month. If 5% of existing teams upgrade: +$1.8M ARR. Long-term: switching costs compound as knowledge accumulates — 6+ months of institutional knowledge creates migration barrier.

Evidence Quality

Evidence-limited: Notion-style knowledge bases have proven demand T3, but meeting-specific knowledge bases are unproven. Fireflies.ai launched a similar feature in Q1 2026 — adoption data is not yet available T4. Internal interviews: 8/15 power users say "I wish I could search across all my meetings" T1.

| Component | Detail |
|---|---|
| Pillar | P3: Team Knowledge Compounding (also serves P1) |
| Category | Transformational — creates new product surface and pricing tier |
| Confidence | L — demand signals are thin (8 interviews, no behavioral validation). This is an exploration bet, not a commitment. |
| Investment | 2 engineers, 0.5 PM, 0 designers (months 1-3: spike only). Gated: if AI summary adoption >40%, expand to 4 engineers + 1 designer in months 4-6 = 15% of total capacity. |
| Success Criteria | Months 1-3: Technical spike + design prototype complete. Months 4-6 (if gated in): Beta with 50 teams, >3 searches/team/week, >60% weekly return rate. |
| Kill Criteria | If AI summary adoption <25% at month 4, do not expand knowledge base investment. The knowledge base requires AI summaries as content — without summary adoption, the knowledge base has nothing to index. |
| Key Assumption | Meeting artifacts (summaries, decisions) are worth searching. If teams treat summaries as disposable (read once, forget), a knowledge base has no content worth accumulating. |

Portfolio Balance Check

| Category | Count | % of Investment | Target Range | Assessment |
|---|---|---|---|---|
| Core | 1.5 (Bet 2 + Core portion of Bet 1) | 55% | 50-70% | Within range |
| Adjacent | 0.5 (Adjacent portion of Bet 1) | 30% | 20-30% | Within range |
| Transformational | 1 (Bet 3) | 15% | 5-15% | At ceiling — appropriate for a pivot |

Portfolio assessment: Balanced. The portfolio protects the base (55% core), funds the primary growth bet (30% adjacent via AI intelligence), and maintains a gated exploration option (15% transformational). The transformational allocation is at the ceiling of the target range, which is appropriate for a company that needs to pivot — being conservative at this stage would be more dangerous than being moderately aggressive.


3. Option-Value Sequencing

The order in which you do things changes the value of everything you do. This section makes sequencing a strategic decision, not a gut feeling with a timeline.

Sequencing Rationale

| Sequence | Bet | Rationale | Counterfactual |
|---|---|---|---|
| 1st (Months 1-6) | AI Meeting Intelligence | Information + Option-value. This bet reveals whether users want intelligence artifacts. That information determines whether the knowledge base is worth building. | If we built the knowledge base first, we'd spend 6 months building a repository with no content worth indexing. AI summaries create the content that makes a knowledge base valuable. |
| 1st (Months 1-6, parallel) | Editor Hardening | Dependency. AI features are worthless if recordings fail. Reliability is the foundation layer that enables everything else. | If we shipped AI summaries on an unreliable recording foundation, we'd see: AI feature excitement → recording failure → churn. The AI feature would accelerate disappointment, not retention. |
| Gated: 2nd (Months 4-6) | Knowledge Base MVP | Information-gated. Only expand investment after AI summary adoption data proves the content hypothesis. This is a conditional bet, not a committed bet. | If we committed to the knowledge base from month 1 (instead of gating), we'd allocate 4 engineers to a feature whose value depends on an unproven upstream hypothesis. That's 33% of engineering capacity on a faith-based investment. |

Option-Value Scoring

| Bet | Paths Preserved if Succeeds | Paths Closed if Fails | Irreversibility | Option-Value Score |
|---|---|---|---|---|
| AI Meeting Intelligence | 4 paths: (1) Knowledge base build, (2) AI coaching/nudges, (3) Enterprise analytics dashboard, (4) API/integration platform for meeting data | 1 path closed: "AI intelligence layer" thesis is wrong — reverts to video recorder strategy | Low — AI features can be deprecated without breaking the core product | H |
| Editor Hardening | 1 path: Retains existing users, enabling all other bets to operate on a stable base | 0 paths closed — reliability improvements are always valuable regardless of strategic direction | Low — infrastructure investment, not directional | M (necessary but low optionality) |
| Knowledge Base MVP | 3 paths: (1) Enterprise tier pricing, (2) Competitive moat via switching costs, (3) Platform for third-party integrations (Notion, Jira, Slack) | 2 paths closed: if the knowledge base fails, the "institutional memory" moat thesis is invalidated and the enterprise tier pricing strategy collapses | Medium — knowledge base architecture decisions constrain future data model choices | M (high optionality but high uncertainty) |
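
The H/M scores above are analytical (T4). One way to make them reproducible is a toy heuristic; the weights and the 3-point irreversibility scale are assumptions for illustration, not a validated scoring model:

```python
# Toy option-value heuristic: trade paths preserved against paths closed
# and irreversibility. Weights are illustrative assumptions only.
IRREVERSIBILITY = {"low": 1, "medium": 2, "high": 3}

def option_value(paths_preserved: int, paths_closed: int, irreversibility: str) -> int:
    return paths_preserved - paths_closed - IRREVERSIBILITY[irreversibility]

bets = {
    "AI Meeting Intelligence": option_value(4, 1, "low"),     # 2 -> highest: H
    "Editor Hardening":        option_value(1, 0, "low"),     # 0 -> M (low optionality)
    "Knowledge Base MVP":      option_value(3, 2, "medium"),  # -1 -> M (high uncertainty)
}
print(sorted(bets.items(), key=lambda kv: -kv[1]))
```
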
O → I → R → C → W (Observation → Insight → Response → Confidence → Watch)

O: AI Meeting Intelligence preserves 4 future paths and has the highest option-value score. Knowledge Base preserves 3 paths but depends on AI summary adoption — its option-value is conditional, not intrinsic (T4, sequencing analysis).

I: Doing AI summaries first is not just about building the "most important" feature — it's about generating the information that makes the next decision rational. Without summary adoption data, the knowledge base decision is a coin flip. With it, it's an informed bet.

R: Sequence: AI summaries (months 1-3 build, month 4 beta) → knowledge base gate decision at month 4 → knowledge base build (months 4-6 if gated in). Editor hardening runs in parallel throughout because it's non-directional infrastructure.

C: H — the sequencing logic is sound (information-first is the correct strategy when uncertainty is high and one bet reveals information about another). The specific gate criterion (40% adoption threshold) is M — the threshold is an informed estimate, not a validated benchmark.

W: AI summary adoption velocity in weeks 1-4 post-launch. If the adoption curve is flat (not growing week-over-week), the feature has a discoverability or value problem — investigate before the month 4 gate.

Critical Path

Critical path: AI Meeting Intelligence. If Bet 1 is delayed, Bet 3 cannot be evaluated, and the month 4 gate decision becomes a month 6 decision — which means the knowledge base either ships untested at the end of the half or gets pushed to H1 2027. A 2-week delay in Bet 1 creates a 2-month delay in the knowledge base decision.

Information Gates

| Gate | When | What It Reveals | Decision It Enables |
|---|---|---|---|
| AI Summary Launch | Month 3 (end) | Technical feasibility confirmed, initial user reaction | Proceed with beta rollout vs. iterate on quality |
| Adoption Gate | Month 4 (week 2) | Summary open rate, team engagement, repeat usage | THE critical gate: expand knowledge base investment (if >40% adoption) or redirect to AI summary deepening (if <25%) |
| Knowledge Base Beta | Month 6 (end) | Search frequency, return rate, content accumulation velocity | Commit to Enterprise tier launch in H1 2027 or park the knowledge base concept |
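
The adoption gate is unambiguous enough to write down as a decision rule. A sketch combining the thresholds above with the month-4 kill criterion from the governance section; "adoption" is assumed to mean the share of recordings whose summary was opened at least once:

```python
# Month 4 adoption gate as an explicit decision rule. Thresholds come from
# the Adoption Gate row above and the month-4 kill criterion in section 8.
def month4_gate(adoption_rate: float) -> str:
    if adoption_rate > 0.40:
        return "PASS: expand knowledge base to 4 engineers + 1 designer"
    if adoption_rate >= 0.25:
        return "MARGINAL: keep 2-engineer KB spike team, iterate on AI summaries"
    if adoption_rate >= 0.15:
        return "FAIL: redirect all KB capacity to AI summary improvement"
    return "KILL: intelligence-layer thesis is wrong, trigger emergency strategy review"

for rate in (0.45, 0.30, 0.20, 0.10):
    print(f"{rate:.0%} -> {month4_gate(rate)}")
```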

4. Strategic Tensions

A roadmap where everything coexists peacefully is a sign that hard decisions were avoided. This section forces tensions into the open and names the cost of each resolution.

Tension Map

| Tension | Bet A | Bet B | Type | Resolution | What We Lose |
|---|---|---|---|---|---|
| Growth vs. Retention | AI Summaries (growth) | Editor Hardening (retention) | Resource | Run in parallel with a 55/30 split. AI summaries get more capacity because retention without growth misses the 40% target. | Editor improvements ship slower. Some reliability fixes deferred to H1 2027. Users who churn due to editor bugs in months 1-3 are lost before AI features can save them. |
| Platform vs. Product | Knowledge Base (platform play) | AI Summaries (product feature) | Strategic | Gated sequence — product first, platform second. AI summaries are a feature; the knowledge base is a platform. Build the feature that proves the thesis, then extend to the platform. | 6-month delay on switching cost accumulation. If a competitor launches a knowledge base first (Fireflies.ai is attempting this), we lose first-mover advantage in the "meeting memory" category. |
| Speed vs. Quality | Ship AI summaries fast (month 3) | Ship AI summaries accurately (month 5) | Timing | Ship v1 at month 3 with quality guardrails (confidence scores, human-editable summaries). Accept 80% accuracy and iterate. | Early users see imperfect summaries. Risk: if the first impression is "AI summaries are wrong," adoption stalls and the feature gets labeled as unreliable. Mitigation: beta group first, not full rollout. |
| Individual vs. Team Value | AI summaries for individual users (lower lift) | AI summaries for teams (higher expansion revenue) | Strategic | Build for individuals first, extend to teams. Individual summaries validate the core value prop; team features (shared action items, cross-meeting search) come in v2. | Team-tier revenue is delayed. The $3.2M incremental ARR projection is back-loaded to months 4-6 instead of starting at month 3. |
| AI Cost vs. Margin | Rich AI features (summaries, action items, topics, sentiment) | Gross margin preservation (AI inference costs ~$0.03/minute of video) | Resource (financial) | Launch with summaries + action items only. Defer sentiment analysis and topic extraction to reduce per-minute inference cost from ~$0.08 to ~$0.03. | Less differentiated AI output at launch. Competitors with deeper pockets (Loom/Atlassian) can subsidize richer features. Acceptable tradeoff: our margin must stay above 70% to sustain the business. |

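The margin tension is easy to see with back-of-envelope unit economics. A sketch assuming an average recording length of 5 minutes (an assumption, not a figure from the analytics) and the 2.3 recordings/user/week from internal data:

```python
# Back-of-envelope AI cost per seat. AVG_MINUTES is an assumption for
# illustration; the other figures are quoted elsewhere in this document.
WEEKS_PER_MONTH = 4.33
RECORDINGS_PER_WEEK = 2.3   # from internal usage analytics (T1)
AVG_MINUTES = 5             # assumed average recording length
SEAT_PRICE = 8.00           # current $/user/month

def ai_cost_per_seat(cost_per_minute: float) -> float:
    """Monthly AI inference cost per user at a given per-minute price."""
    return RECORDINGS_PER_WEEK * WEEKS_PER_MONTH * AVG_MINUTES * cost_per_minute

for cost_per_minute in (0.08, 0.03):  # rich feature set vs. summaries + action items
    cost = ai_cost_per_seat(cost_per_minute)
    print(f"${cost_per_minute:.2f}/min -> ${cost:.2f}/user/month "
          f"({cost / SEAT_PRICE:.0%} of an $8 seat)")
# Under these assumptions, $0.08/min eats ~50% of the seat price and the 70%
# margin floor is unreachable; $0.03/min drops to ~19%, hence the resolution above.
```
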
Strategic Debt Created

| Resolution Decision | Debt Created | Repayment Timeline | Cost if Not Repaid |
|---|---|---|---|
| Ship AI summaries at 80% accuracy | Must reach 92%+ accuracy before the enterprise sales motion begins | H1 2027 Q1 — before enterprise outbound starts | Enterprise buyers reject the product after pilot due to accuracy issues. Lost deals cost $500K+ in pipeline. |
| Defer advanced editor features to H1 2027 | Power users who need advanced trimming, annotations, and chapters continue to churn | H1 2027 Q1 — ship advanced editor by March 2027 | Power user churn accelerates from 5.2% to 6.5% monthly. ~$900K additional annual churn. |
| Knowledge base architecture decisions made during 2-engineer spike | If the spike team makes wrong data model choices, rebuilding the foundation adds 2 months | Month 4 — review architecture before committing the full team | 2-month delay compounds into an H1 2027 timeline slip for the Enterprise tier launch. |

Tension health check: 5 tensions identified and resolved. No tension is "resolved" by doing both things at full capacity — every resolution names a specific cost. The most dangerous tension is Growth vs. Retention (#1) because churn losses in months 1-3 are irreversible — those users are gone before AI features launch. Mitigation: prioritize the top 3 crash-causing bugs in sprint 1, before any AI development begins.


5. What We Are NOT Doing

The most important section of this strategy. Every roadmap is an implicit claim about what's not worth doing — this section makes that claim explicit, with opportunity costs that are not zero.

| # | Item | Category | Why Not | What We Lose | Reconsider If |
|---|---|---|---|---|---|
| 1 | Mobile recording app | Deferred | Mobile recording adds a new platform (iOS + Android = 2 engineers for 6+ months) that doesn't serve the intelligence-layer thesis. Mobile users record but rarely consume summaries on mobile — the value is in desktop team workflows. | ~18% of user requests are for mobile. Competitors (Loom, Grain) have mobile apps. We lose mobile-first teams entirely. | If >25% of churned users cite "no mobile app" as the primary reason (currently 11%) T1 |
| 2 | Live meeting bot (join Zoom/Teams/Meet) | Deferred | Meeting bots require complex integrations with 3 platforms (Zoom, Teams, Meet), each with different APIs and policies. 4+ months of engineering for integration alone. AI summaries work on our own recordings first — prove the value before extending to external meeting platforms. | Otter.ai and Fireflies.ai differentiate on meeting bot capability. We lose users who want AI intelligence on all their meetings, not just async recordings. TAM is 3-5x larger with meeting bot support. | If AI summary adoption exceeds 50% and users request "summarize my Zoom calls" in the top 3 feature requests. Target: H1 2027. |
| 3 | Enterprise SSO + admin dashboard | Deferred | Enterprise features are gated on the knowledge base bet succeeding. Without a knowledge base, our enterprise value prop is "slightly better Loom" — not enough to justify SSO investment. The enterprise motion starts in H1 2027 if the knowledge base proves out. | Enterprise deals ($50K+ ACV) require SSO. We cannot sell to companies with >500 employees without it. Estimated lost pipeline: $800K in H2 2026. | If an enterprise prospect offers a $100K+ pilot contingent on SSO delivery. Case-by-case evaluation. |
| 4 | Advanced video editing features (annotations, drawing, transitions) | Declined | Advanced video editing is Loom's game — they have 10x our editor engineering team (post-Atlassian acquisition). Competing on editor sophistication is a losing strategy. Our differentiation is intelligence, not editing. | Power users who want a "mini Camtasia" experience will choose Loom. We accept this — those users value production quality over meeting intelligence, and they're not our target segment. | If Loom's editor becomes the primary competitive loss reason (currently 4th behind reliability, pricing, and integrations). Unlikely to change. |
| 5 | Self-serve analytics dashboard for team admins | Deferred | Analytics (recording views, engagement metrics, team activity) is a "nice to have" that doesn't drive the core intelligence thesis. The knowledge base subsumes much of this need — team admins want to know "what decisions were made" more than "who watched which video." | Team admins lack visibility into adoption. Harder to justify renewals without usage data. The customer success team manually pulls reports (2 hours/week). | If renewal conversations consistently stall on "show me usage data" (currently 2 of 15 renewals cite this). |

Opportunity Cost Assessment

| NOT-Doing Item | Revenue at Stake | Customers Affected | Competitive Risk | Strategic Optionality Lost | Total Opportunity Cost |
|---|---|---|---|---|---|
| Mobile recording app | ~$600K ARR | ~360K users who requested mobile | M | Mobile-first team segment | M |
| Live meeting bot | ~$2.1M ARR (TAM expansion) | All users with Zoom/Teams/Meet | H | 3-5x TAM expansion | H |
| Enterprise SSO + admin | ~$800K pipeline | >500 employee companies | M | Enterprise motion delayed 6 months | M |
| Advanced video editing | ~$300K ARR (power user segment) | ~15K power users | L | None — this is a competitor's game | L |
| Analytics dashboard | ~$200K (renewal protection) | ~50 team admin accounts | L | Admin visibility delayed | L |

Stakeholder communication: The live meeting bot deferral has the highest opportunity cost (H) and will be the most controversial decision. Frame it as: "We are building the intelligence engine first, then connecting it to every meeting source. Building the bot before the engine means we'd record meetings without adding intelligence — which is what Zoom already does for free." This reframes the deferral from "we can't do it" to "we're building the foundation first."


6. Resource Allocation

The framework that makes strategy real — or exposes it as fantasy. Every strategic bet must be backed by specific people, time, and budget.

Capacity Model

| Team / Function | Total Headcount | Planned (70%) | Reactive (20%) | Exploration (10%) | Override? |
|---|---|---|---|---|---|
| Engineering | 12 | 8.4 FTE | 2.4 FTE | 1.2 FTE | No — 20% reactive is historically accurate (avg 2.1 FTE on bugs/incidents per month) |
| Product Management | 3 | 2.1 FTE | 0.6 FTE | 0.3 FTE | No |
| Design | 2 | 1.4 FTE | 0.4 FTE | 0.2 FTE | No |
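
The 70/20/10 split is simple enough to express as a one-liner, which also makes the FTE figures in the table reproducible:

```python
# The 70/20/10 capacity model from the table above. Rounding matches the
# quoted FTE figures (12 engineers -> 8.4 / 2.4 / 1.2).
SPLIT = {"planned": 0.70, "reactive": 0.20, "exploration": 0.10}

def capacity(headcount: float) -> dict[str, float]:
    return {bucket: round(headcount * share, 1) for bucket, share in SPLIT.items()}

for team, headcount in [("Engineering", 12), ("Product Management", 3), ("Design", 2)]:
    print(team, capacity(headcount))
```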

Per-Bet Allocation

| Bet | Eng | Design | PM | Duration | % of Planned Capacity | Dependencies |
|---|---|---|---|---|---|---|
| Bet 1: AI Meeting Intelligence | 6 (2 ML/AI, 2 backend, 2 frontend) | 1 | 2 (1 lead, 1 data/metrics) | 6 months | 55% | ML model provider (OpenAI/Anthropic API — confirmed, no dependency risk) |
| Bet 2: Editor Hardening | 4 (2 backend, 1 frontend, 1 infra) | 1 | 0.5 | 6 months | 30% | None — fully self-contained |
| Bet 3: Knowledge Base MVP | 2 → 4 (gated at month 4) | 0 → 1 (gated at month 4) | 0.5 | 6 months (spike) or 3 months (build) | 15% → 25% (if gated in) | Bet 1 success (AI summaries must be generating content for the knowledge base to index) |

Capacity Check

| Function | Available (Planned) | Allocated (Months 1-3) | Allocated (Months 4-6, if KB gated in) | Surplus / Deficit |
|---|---|---|---|---|
| Engineering | 8.4 FTE | 12 FTE (6+4+2) | 14 FTE (6+4+4) | Over-allocated. See note below. |
| Design | 1.4 FTE | 2 FTE (1+1+0) | 3 FTE (1+1+1) | Over-allocated. |
| PM | 2.1 FTE | 3 FTE (2+0.5+0.5) | 3 FTE | Over-allocated. |

Capacity reality check: The allocation sums exceed the 70% planned capacity — intentionally. This strategy allocates at ~85% of total capacity (sacrificing part of the reactive buffer), which is aggressive but defensible for a 6-month push with board pressure. Note also that the months 4-6 engineering sum (14 FTE) exceeds the 12-engineer team: gating in the knowledge base means rebalancing engineers across bets at the month 4 gate, not adding headcount. The risk: if reactive load spikes (major outage, security incident), there is no buffer. Mitigation: the first 2 weeks of month 1 are dedicated to fixing the top 3 crash-causing bugs (reducing future reactive load). If reactive load exceeds 25% in any month, the Knowledge Base spike team (2 engineers) is the first to be redirected. This is explicitly a "war-time" allocation, not a sustainable operating model.

Hiring Implications

No new hires are planned in H2 2026 (budget frozen pending ARR growth proof). Every allocation above therefore comes from the existing 17-person team: the month 4 knowledge base expansion is staffed by rebalancing, not hiring, and any attrition directly breaks the capacity model (see Assumption 5 in the registry).


7. Roadmap Communication Views

What the board sees is not what the team sees. Each audience needs a different view of the same strategy — not a different strategy, but a different emphasis.

Communication Matrix

| Audience | View | Emphasis | Omit |
|---|---|---|---|
| Board of Directors | 3 strategic bets + business outcomes + confidence levels | "AI intelligence layer" pivot thesis, $4.8M incremental ARR target, 40% growth path, portfolio balance (55/30/15), kill criteria | Technical architecture, sprint-level scope, specific engineer assignments, tension resolution details |
| Engineering Team | Technical milestones + dependencies + sequencing + architecture decisions | AI model integration plan, recording pipeline reliability targets (p95 latency, crash rates), month 4 gate criteria, knowledge base data model spike | Board dynamics, ARR projections, competitive framing (engineers care about what to build, not why competitors matter) |
| Design Team | User outcomes per bet + experience evolution + research needs | AI summary UX (how summaries are consumed, edited, shared), editor UX improvements, knowledge base information architecture, user research plan for months 2-3 | Resource math, financial projections, engineering infrastructure decisions |
| Customer Success / Sales | Customer-facing capabilities + timeline + competitive positioning | "In Q3 you'll be able to tell customers: AI summarizes every recording automatically." Competitive talking points vs. Loom, Grain, Otter. Renewal ammunition: AI features coming to all tiers. | Internal bets, platform investments, NOT-doing rationale (CS should say "it's on our roadmap," not "we chose not to build it") |

Board Deck: Recommended Narrative Arc

Slide 1: The Problem

"Async video is becoming a commodity. Zoom, Teams, and Loom all offer recording. Our differentiation is eroding — 68% of recordings are watched once and forgotten. We need to make recordings valuable after they happen."

Slide 2: The Thesis

"We are pivoting from 'async video recorder' to 'async communication intelligence.' AI transforms every recording into a searchable, actionable artifact. Users engage daily with summaries, not just when they record."

Slide 3: The Bets

"Three bets, explicitly sequenced. Bet 1: AI Intelligence (55%, H confidence). Bet 2: Foundation Hardening (30%, H confidence). Bet 3: Knowledge Base (15%, L confidence — gated on Bet 1 success). Portfolio: 55% Core, 30% Adjacent, 15% Transformational."

Slide 4: The Path to 40%

"$3.2M from AI-driven team expansion + $1.4M from churn reduction + $1.8M from new Enterprise tier = $6.4M incremental. Conservative: $4.8M (75% execution rate) on $12M base = 40% growth. Kill criteria: if AI adoption <20% at month 4, we pivot the knowledge base investment."

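The Slide 4 arithmetic, made explicit; the 75% execution haircut is the document's own conservatism factor applied across all three revenue streams:

```python
# Path-to-40% arithmetic from Slide 4.
streams = {
    "AI-driven team expansion": 3_200_000,
    "Churn reduction":          1_400_000,
    "New Enterprise tier":      1_800_000,
}
gross = sum(streams.values())        # $6.4M
conservative = gross * 0.75          # $4.8M at a 75% execution rate
growth = conservative / 12_000_000   # 0.40
print(f"${gross/1e6:.1f}M gross -> ${conservative/1e6:.1f}M conservative -> {growth:.0%} growth")
```
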
Team All-Hands: Recommended Narrative Arc

Frame 1: What We Learned

"68% of our recordings are watched once. We built a great recorder — but recording isn't the job. The job is: 'help my team know what happened without sitting through a 30-minute video.' We're rebuilding around that job."

Frame 2: What We're Building

"Every recording will automatically produce a summary, action items, and a searchable transcript. The recording becomes the input; the intelligence is the output. By month 4, we'll know if this thesis is right."

Frame 3: What We're NOT Building

"No mobile app this half. No meeting bot this half. No advanced editor features. These are all good ideas — and we'll do some of them later. But right now, we're focused on proving that AI intelligence is our future. If we spread across 6 things, we'll do none of them well."

Frame 4: How We'll Know

"Month 4 is our decision point. If AI summary adoption exceeds 40%, we'll expand into the knowledge base. If it doesn't, we'll double down on making summaries better. Either way, we'll have data — not opinions — driving the next decision."


Evidence Summary

| Tier | Count | Examples |
|---|---|---|
| T1 | 8 | Internal usage analytics (recording watch rates, DAU/MAU, churn survey responses, support ticket volume, feature request data, team headcount, budget constraints, user interviews) |
| T2 | 6 | Competitor metrics (Otter.ai $100M ARR, Loom acquisition price $4.4B, Grain pivot results), market sizing data, API pricing (OpenAI/Anthropic inference costs) |
| T3 | 5 | Competitor feature analysis (Loom editor comparison, Fireflies.ai knowledge base launch), industry analyst reports on the async video market, user reviews of competitors |
| T4 | 4 | Inference from competitor strategies (Fireflies.ai knowledge base adoption — no public data yet), sequencing analysis, option-value scoring, hiring lead time estimates |
| T5 | 0 | Not used |
| T6 | 0 | Not used |

Total evidence points: 23 (T1-T4: 23; T5-T6: 0).

Triangulation: All strategic bet recommendations cite minimum 2 evidence tiers. The AI Intelligence bet (Bet 1) has the strongest evidence base (T1 internal data + T2 competitor precedent + T3 market analysis). The Knowledge Base bet (Bet 3) has the weakest evidence base (T1 interviews only, small sample) — which is why it's gated, not committed.

Evidence quality assessment: This strategy is unusually strong on T1 evidence (internal behavioral data) because the product is established with 2M users. The primary risk is T4 inference in the sequencing analysis — the option-value scoring is analytical, not empirical. The month 4 gate exists specifically to convert T4 inference into T1 behavioral data before committing resources.


8. Governance: Assumptions, Gates & Self-Critique

Assumption Registry

| # | Assumption | Bets It Underpins | Confidence | Evidence | What Would Invalidate This |
|---|---|---|---|---|---|
| 1 | Users want AI intelligence from their own meetings (not generic AI) | Bet 1, Bet 3 | H | 72% of users want "key takeaways without watching" T1; Otter.ai $100M ARR proves the market exists T2 | If AI summary open rate <20% after 60 days — users either don't want summaries or don't trust AI-generated ones |
| 2 | AI inference costs will remain at or below $0.03/minute within 12 months | Bet 1 (margin preservation) | M | OpenAI/Anthropic pricing trends are downward T2; but flash models may plateau in quality | If AI providers increase pricing or deprecate cost-effective models. Contingency: build the capability to switch providers within 2 weeks. |
| 3 | Meeting artifacts (summaries, decisions) are worth accumulating and searching | Bet 3 (Knowledge Base) | L | 8/15 power user interviews express desire T1; Notion/Confluence prove knowledge bases have demand T3 — but meeting-specific knowledge bases are unproven | If search frequency in the knowledge base beta <1x/team/week — teams don't return to old meeting artifacts |
| 4 | 40% ARR growth is achievable without a sales team (product-led growth only) | All bets (revenue target) | M | Current PLG motion converts at 4.2% trial-to-paid T1; AI features should increase conversion, but enterprise deals ($50K+ ACV) typically require sales T3 | If team-tier upgrades plateau at $2M incremental (vs. $3.2M target) — PLG ceiling hit, sales motion required |
| 5 | The team can operate at 85% capacity for 6 months without burnout-driven attrition | All bets (resource plan) | M | Team has operated at high intensity before (product launch in 2024) without attrition T1; but that was 3 months, not 6 | If any senior engineer or PM gives notice. Losing 1 of 12 engineers is an 8% capacity hit that breaks the allocation model. |

Monthly Gate Structure

| Month | Primary Bet Focus | Gate Criteria (Pass/Fail) | Adaptation Triggers | Kill Criteria |
|---|---|---|---|---|
| Month 1 | AI Summary architecture + Editor top-3 bugs | AI: model selection complete, latency <5s for a 10-min video. Editor: top 3 crash bugs resolved. | If model latency >10s: evaluate alternative providers before committing | N/A — too early |
| Month 2 | AI Summary v1 internal beta + Editor processing pipeline | AI: internal team using summaries daily, accuracy >75% on test set. Editor: processing time <45s (p95). | If accuracy <60%: pause feature development, invest in prompt engineering / model fine-tuning | N/A — still building |
| Month 3 | AI Summary v1 public launch + Editor reliability milestone | AI: shipped to 100% of users. Editor: crash rate <0.1%, processing <30s. Knowledge Base: technical spike + design prototype complete. | If launch delayed >2 weeks: compress the month 4 beta, delay the knowledge base gate to month 5 | If AI model quality is fundamentally insufficient (hallucination rate >15%): pause and re-evaluate AI strategy |
| Month 4 | THE CRITICAL GATE: AI adoption measurement | PASS (>40% adoption): Expand knowledge base to 4 eng + 1 designer. MARGINAL (25-40%): Continue with the 2-engineer knowledge base team, invest remaining capacity in AI summary iteration. FAIL (<25%): Redirect all knowledge base capacity to AI summary improvement. | If adoption is 30-40%: run a 2-week experiment with improved onboarding before deciding | If adoption <15%: the intelligence-layer thesis is wrong. Emergency strategy review — consider reverting to video-quality differentiation. |
| Month 5 | AI Summary v2 (action items, follow-ups) + Knowledge Base beta (if gated in) | AI v2: action item extraction accuracy >80%. Knowledge Base: 50 beta teams onboarded. | If knowledge base search frequency <1x/team/week: scope down to "meeting archive" instead of "knowledge base" | If team attrition occurs: reallocate immediately, defer the lowest-priority workstream |
| Month 6 | H2 2026 retrospective + H1 2027 strategy inputs | AI: DAU/MAU >25%, net revenue retention >108%. Editor: churn <4.0%. Knowledge Base: beta data sufficient for Enterprise tier go/no-go. | If ARR growth tracking to <30%: board communication, revised H1 2027 plan | If ARR growth tracking to <20%: fundamental strategy failure, CEO-level review |

Adversarial Self-Critique

Weakness 1: "Intelligence layer" is a crowded thesis. Otter.ai, Fireflies.ai, Grain, and now Loom (via Atlassian) are all pursuing AI meeting intelligence. We're the smallest player ($12M ARR vs. Otter's $100M+, Loom's Atlassian backing). The assumption that AI intelligence differentiates us is questionable when every competitor is making the same bet. Counter-argument: we're differentiated by starting with async video (not live meetings), which is a different user behavior. But if AI summaries commoditize quickly (likely given LLM accessibility), the intelligence layer alone isn't a moat — the knowledge base must succeed for long-term differentiation.

Weakness 2: The 85% capacity allocation is unsustainable. This strategy allocates teams above the recommended 70% planned capacity with minimal reactive buffer. A single major incident, security vulnerability, or key-person departure breaks the entire allocation model. We are betting that nothing unexpected happens for 6 months — this is historically improbable. The mitigation (redirect Knowledge Base team during crises) is pragmatic but means the transformational bet is always first to be sacrificed, which biases the portfolio toward incrementalism.

Weakness 3: The month 4 gate may be too early to measure true adoption. AI summary adoption at month 4 means users have had ~4 weeks of access. Enterprise teams adopt new features slowly — decision-makers need to see value before mandating team-wide usage. A 25% adoption threshold at week 4 may be measuring early-adopter enthusiasm, not sustainable engagement. If we fail the gate and redirect resources away from the knowledge base, we may have killed a viable bet based on an artificially compressed evaluation window. Counter: 4 weeks is enough to measure individual feature pull (open rates, repeat usage). Team-level adoption takes longer, but individual pull is the leading indicator.

Weakness 4: No sales motion for the 40% growth target. The strategy assumes product-led growth alone can drive $4.8M incremental ARR. But the highest-value segment (Enterprise tier at $25/user/month) historically requires a sales team for $50K+ ACV deals. Without a sales rep, the Enterprise tier revenue ($1.8M projection) may be aspirational. If PLG tops out at $3M incremental, we hit 25% growth — good but not 40%. The strategy should include a trigger for hiring a sales rep if PLG shows signs of plateauing.

Revision Triggers

| Trigger | What to Re-Assess | Timeline |
|---|---|---|
| Loom launches AI summaries with Atlassian Intelligence integration | Competitive positioning — our AI features may not differentiate if Loom bundles AI free with Jira/Confluence | Within 2 weeks of announcement |
| OpenAI/Anthropic raises API pricing >2x | AI inference cost model — Bet 1 margin assumptions break | Immediate — evaluate alternative providers within 1 week |
| Key engineer (ML/AI) departure | Bet 1 timeline — a 6-engineer team losing the AI specialist creates a 2-month delay | Immediate — assess hiring timeline and interim workaround |
| Monthly churn exceeds 6% for 2 consecutive months | Bet 2 allocation — editor hardening may need more investment | Monthly check |
| Board revises growth target (up or down) | Entire portfolio balance — a higher target may require more aggressive bets; a lower target allows more exploration | Within 1 week of board communication |

Related Skills

From this strategy to next steps:

Skill chain for this strategy: Problem Framing (upstream) → Discovery Research (upstream) → Competitive Analysis (upstream) → Product Strategy (this document) → Specification Writing (downstream) → Metric Design (downstream) → Narrative Building (downstream)