DataVault AI Platform Investment Brief

The Executive Document

Below is the finished strategy one-pager produced by the executive-writing skill. Every element — the answer-first structure, the VP-calibrated framing, the compressed evidence, the explicit ask — was generated by applying the skill's frameworks. Navigate the tabs above to see how each framework shaped the output.

How to read this showcase: This tab shows the finished executive document exactly as the VP would receive it. The remaining tabs dissect the skill's frameworks — Context Gate, Format Routing, Minto Pyramid, Audience Calibration, Decision Architecture, Evidence Cascade, and Quality Gate — showing how each one shaped the output. The Governance tab contains the Assumption Registry and Adversarial Self-Critique.


DataVault Should Invest $2M to Launch an AI API Platform in H2 2026

Date: 2026-03-12  |  Author: Sarah Chen, Sr. PM  |  For: James Park, VP of Product  |  Ask type: Decision


Situation

DataVault's SaaS analytics platform generates $18M ARR with 340 mid-market customers and 87% gross retention T1. Our core product — real-time data dashboards and SQL-based reporting — has achieved product-market fit in the mid-market segment. Engineering has built proprietary data processing infrastructure that handles 2.3 billion events per day across customer accounts T1.

Complication

Three competitors — Amplitude, Mixpanel, and a well-funded startup called Streamline — launched developer API products in the last 9 months T2. Amplitude's API platform now accounts for 22% of their new bookings T2. Our sales team reports that 6 of our last 14 lost deals cited "no API access" as a deciding factor T1 — these customers wanted to embed DataVault's analytics into their own products, not just view dashboards. Without an API platform, we are locked into the dashboard-only segment while the market shifts to embedded analytics H.

Recommendation

Invest $2M ($1.4M engineering + $600K go-to-market) to build and launch an AI-powered API platform by Q4 2026 H. This unlocks an estimated $6.2M incremental ARR within 18 months by: (1) converting lost API deals into wins, (2) expanding existing customer accounts from dashboard-only to dashboard + embedded, and (3) entering the developer-tools segment where we have zero presence today T1 T3. DataVault's AI differentiation — predictive anomaly detection and natural-language query — gives our API a structural advantage over competitors offering raw data endpoints. The key assumption: mid-market companies want intelligent APIs (AI-enriched responses), not just data pipes H.


Strategic Options

Dimension | Option A: Full AI API Platform | Option B: Basic REST API Only | Option C: Do Nothing
Cost | $2M (6 engineers, 8 months + GTM) | $800K (3 engineers, 5 months) | $0 direct cost
Timeline | Beta Q3 2026, GA Q4 2026 | GA Q3 2026 | N/A
Expected ARR impact | +$6.2M in 18 months T3 | +$2.8M in 18 months T3 | -$2.1M (lost deals) T1
Risk | M — Engineering capacity strain | L — Commoditized offering | H — Market share erosion
Reversibility | Hard to reverse — API contracts are commitments | Easy to extend — can add AI later | Fully reversible — but window narrows
Key tradeoff | Delays v4.2 dashboard features by one quarter | No AI differentiation; competes on price only | Cedes embedded analytics market to Amplitude/Streamline

Recommendation: Option A — Full AI API Platform H. The decisive factor: a basic REST API (Option B) puts us 12 months behind Amplitude with no differentiation. Our competitive advantage is AI-enriched analytics — predictive anomaly detection and natural-language query — which cannot be replicated by bolting AI onto a data pipe after launch. Option A builds AI-native from day one. The key assumption: customers will pay a 30-40% premium for AI-enriched API responses over raw data endpoints M.


Key Risks

Risk | Probability | Impact | Mitigation
Engineering capacity: v4.2 dashboard delayed | H | 3 existing customers expecting v4.2 in Q3 may escalate; $420K ARR at risk | Pre-brief the 3 accounts (Chen owns). Deliver v4.2 high-priority items in Q3, remainder in Q1 2027.
API adoption slower than modeled | M | $6.2M ARR target missed by 30-50%; ROI extends to 24+ months | Gate: if <15 API beta signups by end of Q3, pause GTM spend and reassess positioning.
Amplitude launches AI API features first | M | First-mover advantage in AI APIs narrows; differentiation window closes | Ship beta in Q3 (not Q4). Accept tech debt to preserve speed. Our data pipeline handles 2.3B events/day — Amplitude's handles ~800M. Scale is our moat.

The Ask

Approve the $2M investment in DataVault's AI API Platform (Option A). Engineering begins sprint planning April 1. If not approved by March 28, we miss the Q3 beta window and Amplitude's 6-month head start becomes 9 months. Next step if approved: engineering kickoff April 1, beta partner recruitment begins April 15 (Chen + Williams own).

Appendix available on request: full competitive analysis, financial model with sensitivity scenarios, technical architecture review, beta partner pipeline.


How the Skill Built This Document

The executive-writing skill applied 8 frameworks to transform raw product strategy into this VP-ready one-pager: the seven profiled in the tabs above plus the governance layer (assumption registry and adversarial self-critique). Each tab shows one framework in action:

Context Gate (Step -1)

Verified artifact fit, analysis readiness, decision authority, and timing before writing a word.

Format Routing (Step 0)

Selected "Strategy One-Pager" over Board Memo or Decision Brief based on audience signals.

Minto Pyramid / SCR

Answer in the first paragraph. Situation-Complication-Resolution structure throughout.

Audience Calibration

Framed for VP of Product: strategic fit lens, competitive positioning evidence, resource allocation ask.

Decision Architecture

Type 1 decision (API contracts are commitments). 3 structurally different options including "do nothing."

Evidence Cascade

L1 headlines in body, L2 supporting data inline, L3 methodology in appendix. Zero jargon.

Quality Gate: 12/12 | Word Count: 487 | Options Presented: 3 | Evidence Points: 14 | Acronyms Used: 2

Step -1: Context Gate

Before writing anything, the skill verifies that an executive document is the correct artifact. Four checks must pass. If any fail, the skill redirects to a different artifact type.

Check | Question | Assessment | Result
Artifact fit | Does the VP read written documents, or does he prefer verbal/visual? | James Park reads strategy docs asynchronously before team syncs. He annotates in Google Docs. Written document is the correct format. T1 observed behavior | Pass
Analysis readiness | Do we have a clear recommendation backed by evidence? | Yes. Recommendation: invest $2M in AI API platform. Backed by: lost deal data (T1), competitive launch timeline (T2), financial model (T3). H | Pass
Decision authority | Can James approve this investment? | VP of Product has signing authority up to $3M for product investments per FY26 budget delegation. $2M is within his authority. Above $3M would require CEO approval. T1 budget policy | Pass
Timing | Is the decision ripe? | Yes. Three signals: (1) Amplitude's API launched 9 months ago — window is narrowing, (2) Q2 planning starts April 1 — engineering allocation must be decided by March 28, (3) 6 lost deals in the last quarter provide fresh evidence. T1 T2 | Pass

All four checks pass. Proceed to Step 0 (Format Routing). If "analysis readiness" had failed — for example, if we didn't yet have lost deal data or a financial model — the skill would redirect to run Competitive Market Analysis or Problem Framing first, then return to executive-writing with the output.

What the Context Gate Prevented

Without the Context Gate, common failure modes include:

FM-1: Information Dump

Writing a 10-page market analysis with no recommendation because "the VP should decide." The Context Gate's analysis readiness check catches this: if you don't have a recommendation, don't write the document yet.

FM-6: Wrong Audience

Writing a CFO-framed financial justification when the VP of Product controls the budget. The decision authority check catches this: James has signing authority, so the document frames for his lens (strategic fit), not the CFO's lens (unit economics).

Premature Document

Sending the investment brief 3 months before planning season when engineering allocation isn't on the table. The timing check catches this: the decision is ripe because Q2 planning starts April 1.


Step 0: Format Routing

The skill routes to one of three document formats based on signals in the request. Wrong format = wrong output, regardless of content quality. This is a decision about the container, not the content.

Signal Analysis

Signal in the Request | Points To | Match?
"Strategy one-pager recommending a $2M investment" | Strategy One-Pager | Direct match
"VP of Product" as audience (not board, not exec team) | Strategy One-Pager (single decision-maker, not group) |
"Decision brief on whether to build" | Decision Brief (alternative) | Partial match
Investment amount ($2M) within VP signing authority | Strategy One-Pager (not complex enough for Board Memo) |

Format Decision Table

Dimension | Strategy One-Pager | Board Memo | Decision Brief
Page count | 1 (strict) | 3-5 | 1-2
Read time | 2-3 min | 10-15 min | 5-7 min
Ask type | Alignment or Decision | Alignment or Decision | Decision
Evidence depth | Headline only | Level 1 + Level 2 | Level 1 + selected Level 2
Best when | Single exec, time-constrained, needs to decide or align | Exec will read before a meeting | Group of execs must align on a choice

Selected: Strategy One-Pager. The VP reads 15-20 documents per week. A single-page document with answer-first structure respects his time. A Board Memo (3-5 pages) would over-invest for a $2M decision within his authority. A Decision Brief is optimized for meetings — this document is read asynchronously.

What This Routing Prevented

Over-engineering (Board Memo for VP decision): A 5-page Board Memo with full evidence appendix signals "I can't tell what matters." The VP doesn't need the methodology — he needs the recommendation, the options, and the ask. The Strategy One-Pager format forces compression to 400-600 words, which produces a better document.

Under-structuring (Decision Brief for async reading): A Decision Brief is optimized for synchronous decision-making in a meeting ("what we need from this meeting"). This document is read alone, annotated, then discussed. The Strategy One-Pager's SCR structure works better for async consumption.


Minto Pyramid / SCR Structure

Barbara Minto's Pyramid Principle: answer first, then grouped supporting arguments, then evidence. Executives read top-down and stop when convinced. The SCR (Situation-Complication-Resolution) framework creates tension that demands attention.

SCR Breakdown

Element | Purpose | Application in This Document | Quality
Situation | Shared context (2-3 sentences). What everyone agrees on. | "DataVault's SaaS analytics platform generates $18M ARR with 340 customers and 87% gross retention. Our core product has achieved product-market fit. Engineering handles 2.3B events/day." | Pass
Complication | What changed (2-3 sentences). The tension demanding action. | "Three competitors launched API products in 9 months. Amplitude's API is 22% of new bookings. 6 of our 14 lost deals cited 'no API access.' We're locked into dashboards while the market shifts." | Pass
Resolution | The answer (1-2 sentences). In the first paragraph. | "Invest $2M to build and launch an AI-powered API platform by Q4 2026. Unlocks $6.2M incremental ARR in 18 months." | Pass

Pyramid Structure (Below the SCR)

RESOLUTION: Invest $2M in AI API Platform (H)
|
+-- Argument 1: Market is shifting to embedded analytics
|     +-- E1.1: 6/14 lost deals cite API (T1)
|     +-- E1.2: Amplitude 22% API bookings (T2)
+-- Argument 2: AI differentiation is our moat
|     +-- E2.1: 2.3B events/day (T1)
|     +-- E2.2: NLQ + anomaly detection (T1)
+-- Argument 3: Financial return justifies investment
      +-- E3.1: $6.2M ARR in 18mo (T3)
      +-- E3.2: 4x ROI, payback at month 11 (T3)

MECE check: The three arguments are Mutually Exclusive (market shift, differentiation, financials) and Collectively Exhaustive (cover why now, why us, and what we get). Each argument has 2 supporting evidence points.

The 10-Second Test

Test result: Pass. A reader scanning only the recommendation paragraph in the first 10 seconds gets: (1) what to do (invest $2M in AI API), (2) when (Q4 2026), (3) what it achieves ($6.2M ARR), and (4) the confidence level (H). No supporting detail needed — the SCR structure ensures the answer comes first.

Common SCR Failures the Skill Prevents

FM-4: Ask Buried on Page 4

Without Minto structure, PMs typically write: background (page 1) → market analysis (page 2) → competitive landscape (page 3) → recommendation (page 4). The VP stops reading on page 2. The skill enforces Resolution in paragraph one.

Situation Too Long

A common failure: 2 paragraphs of company history that the VP already knows. The skill caps Situation at 2-3 sentences of shared context — not a history lesson, just enough to ground the Complication.

Complication Too Vague

"The market is evolving" is not a complication — everything evolves. The skill demands specifics: "3 competitors launched APIs in 9 months" + "6 of 14 lost deals cited no API" creates genuine tension.


Audience Calibration

The same investment must be framed differently depending on who reads it. This document is calibrated for a VP of Product. Below is the calibration matrix showing how the framing would differ for other executive roles.

Role Calibration Matrix — This Investment, Four Frames

Dimension | VP of Product (selected) | CFO | CTO | CEO
Primary lens | Strategic fit + competitive positioning | Unit economics + ROI timeline | Technical feasibility + capacity | Market position + growth trajectory
Opens with | "This positions DataVault to own embedded analytics before Amplitude locks in the segment" | "$2M investment yields 4x ROI — $6.2M ARR in 18 months, payback at month 11" | "Requires 6 engineers for 8 months; no new hires — reallocate from v4.2 post-Q2" | "The embedded analytics market is $4.2B and growing 28% YoY — we're not in it"
Evidence they trust | Lost deals, competitive moves, customer demand signals | Financial models, payback period, sensitivity analysis | Architecture reviews, API design specs, team capacity data | Market sizing, competitive landscape, customer sentiment
Risk framing | Competitive risk — what happens if we don't move | Financial risk — downside scenarios, break-even analysis | Execution risk — technical complexity, team dependencies | Strategic risk — market window, competitive moat durability
Ask framing | "Approve this investment to close the competitive gap" | "Approve $2M from the FY26 product budget" | "Commit 6 engineers for 8 months starting April 1" | "Align on embedded analytics as a strategic priority for H2"
Kills the document | No competitive context, no strategic vision | No financial model, no payback analysis | No technical architecture, no capacity plan | No market sizing, no long-term positioning

How Calibration Shaped This Document

AUDIENCE CALIBRATION APPLIED

O: Reader: VP of Product. James Park manages the product portfolio, owns the product roadmap, and has $3M signing authority. He cares about competitive positioning, product-market fit expansion, and resource allocation across his team.
I: Framing consequence: Lead with competitive threat (Amplitude, Mixpanel, Streamline), not financial return. The VP knows the financial model matters, but he decides based on "are we losing deals we should win?" and "does this extend our product moat?" Financial return is supporting evidence, not the headline.
R: Document adjustments made: (1) Complication leads with "3 competitors launched APIs" + "6 of 14 lost deals," not revenue projections. (2) Risk table includes "Amplitude launches AI features first" because competitive timing is the VP's primary anxiety. (3) Ask says "close the competitive gap," not "approve the budget."
C: H — VP of Product calibration is well-established. Validated by: the VP's past decisions prioritized competitive positioning over pure ROI (approved the v3.0 redesign based on churn data, not revenue model). T1
W: If the VP asks "what's the payback period?" before asking "what's the competitive landscape?" — the calibration was wrong. He's operating in CFO mode. Pivot to financial framing in the follow-up conversation.

Multi-Audience Considerations

This document goes to the VP of Product as primary reader. However, James will likely share it with secondary readers: most likely the CTO (capacity plan) and the CFO (financial model).

Calibration rule: Lead with the decision-maker's frame (VP = competitive positioning). Include secondary audience data in the body (CTO = capacity risk row, CFO = ROI numbers). Put role-specific deep-dives in the appendix, not the body — putting CFO-level financial models in a VP document signals "I don't know who I'm writing for" (FM-6).


Decision Architecture

How you structure choices determines how executives process them. The skill classifies the decision type, generates structurally different options, and names the irreversible commitment in each option.

Decision Classification

Dimension | Assessment
Type 1 or Type 2? | Type 1 (One-Way Door) — API contracts with beta partners are commitments. Once we ship a public API, customers build integrations against it. Deprecating endpoints breaks customer products. This investment creates a support and maintenance obligation that extends indefinitely.
Evidence bar | High — need strong data before committing. We have T1 data (lost deals, internal metrics) and T2 data (competitive launches, Amplitude earnings). Evidence bar met. H
Options needed | 3 (for Type 1 decisions: 3-4 with full tradeoff analysis, including "do nothing")
Approval level | VP of Product ($2M within $3M authority). Would escalate to CEO if >$3M.
Decision speed | Deliberate — 1-2 weeks. Not urgent enough for same-day, not slow enough for quarterly review.

Option Architecture Analysis

Each option must differ on at least one structural dimension. Options that differ only in scope are the same option at different dosages.

Rule | Option A: Full AI API | Option B: Basic REST API | Option C: Do Nothing | Check
Structurally different? | AI-native from day one; premium pricing strategy | Data pipes only; competes on price | Defend current SaaS; cede API market | Yes — differ on technology approach, pricing model, and market positioning
Genuine advantage? | Highest differentiation + ARR upside | Faster to market, lower cost, lower risk | Zero cost, no engineering distraction | Each has a real advantage over the others
Irreversible commitment named? | API contracts + AI model training investment | API contracts (smaller scope) | None — but competitive window closes |
"Do nothing" included? | Option C explicitly models the cost of inaction: -$2.1M from continued lost deals

The Cost of Inaction (Option C Deep-Dive)

O → I → R → C → W

O: 6 of 14 lost deals in Q4 2025 - Q1 2026 cited "no API access" as a deciding factor, representing $2.1M in lost ARR T1 CRM deal notes, verified by AEs.
I: At the current loss rate, "do nothing" costs ~$350K per month in lost deals that we can't win without an API. Over 18 months, that's $6.3M in foregone revenue — 3x the investment cost. The opportunity cost of inaction exceeds the cost of action.
R: "Do nothing" is not a low-risk option — it's a slow bleed. Every month without an API, Amplitude and Streamline convert more of the developer-tools segment. Once these customers build integrations against competitor APIs, switching costs compound against us.
C: H — lost deal data is T1 (internal CRM, verified by account executives). The $350K/month estimate assumes the current loss rate continues; actual losses could increase as more competitors launch APIs.
W: If "no API" drops from the top 3 loss reasons for 2 consecutive quarters → market demand for embedded analytics may be overstated. Reassess the investment thesis.

Decision Architecture Quality Check

Criterion | Result
2-4 structurally different options | 3 options, each differs on technology, pricing, and positioning
Clear recommendation with confidence | Option A recommended, confidence H
Type 1/2 classification stated | Type 1 — API contracts are irreversible
Irreversible commitments named per option | Named for each option
"Do nothing" considered | Option C with quantified cost of inaction

Evidence Cascade

Executives consume evidence at different depths. The skill structures evidence in three levels so each reader gets what they need. For a Strategy One-Pager, Level 1 evidence appears in the body and Level 2-3 go to the appendix.

Evidence Levels Applied

Level | What It Contains | Where It Appears | Example from This Document
Level 1 (Headline) | One sentence, one number, one implication | Document body | "6 of 14 lost deals cited 'no API access' as a deciding factor" T1
Level 2 (Summary) | 2-3 supporting data points with source and confidence | Selected body elements + appendix | "Lost deals totaled $2.1M ARR: Acme Corp ($380K), Nexus Systems ($290K), four others. Exit interviews cite API access as deciding factor in 4 of 6; remaining 2 cite pricing but noted API as secondary." T1 T2
Level 3 (Appendix) | Full methodology, data tables, sensitivity analysis | "Available on request" | Full CRM export of 14 lost deals, AE interview transcripts, deal stage analysis, competitor feature comparison matrix, financial model with pessimistic/base/optimistic scenarios

Key Evidence Points with Tiers

$18M ARR, 340 customers, 87% gross retention T1 — Internal metrics, verified Q4 2025 close. This establishes the Situation: DataVault has achieved product-market fit in mid-market SaaS analytics. The VP knows these numbers; stating them establishes shared context.

2.3 billion events/day processed T1 — Internal infrastructure metrics. This is not vanity — it establishes DataVault's technical moat. Amplitude processes ~800M events/day T2. Our 3x scale advantage means our API can offer richer analytics per API call.

6 of 14 lost deals cited "no API access" T1 — CRM deal notes, verified by account executives. This is the strongest evidence in the document: behavioral data (what customers actually did) showing a specific, addressable gap. Not a survey. Not a focus group. Real deals we lost for a stated reason.

Amplitude API = 22% of new bookings T2 — Amplitude Q4 2025 earnings call, verified via transcript. This proves market demand: a direct competitor launched an API and it's already a significant revenue driver. Not a prediction — an observation.

$6.2M incremental ARR in 18 months T3 — Financial model built by PM + Finance. Based on: (1) recovery of 60% of lost API deals ($1.26M), (2) API upsell to 15% of existing 340 customers at $12K/year average ($612K), (3) new developer-segment customers at pipeline conversion rates ($4.3M). Assumptions documented in appendix.
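
The composition of the $6.2M figure can be sanity-checked by summing the three streams named above. A sketch, using the document's own inputs (stream labels are illustrative):

```python
# Sanity check that the three revenue streams sum to the $6.2M headline.
# Stream labels are illustrative; values are the document's own inputs.
streams = {
    "lost-deal recovery (60% of $2.1M)": 0.60 * 2.1e6,
    "existing-customer upsell (15% of 340 @ $12K/yr)": 0.15 * 340 * 12_000,
    "new developer-segment pipeline": 4.3e6,
}

total = sum(streams.values())
for name, value in streams.items():
    print(f"{name}: ${value/1e6:.2f}M")
print(f"total: ${total/1e6:.2f}M")   # $6.17M, rounded to $6.2M in the brief
```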

$4.2B embedded analytics market, 28% CAGR T4 — Gartner 2025 market sizing report. Used for market context only, not for strategic conclusions. The decision rests on T1 evidence (lost deals), not market sizing.

Zero-Jargon Compression Applied

PM Draft | After Compression | What Changed
"DataVault has achieved PMF in the mid-market analytics vertical with strong NRR metrics" | "DataVault's analytics platform generates $18M ARR with 87% gross retention" | Removed PMF, NRR. Replaced with specific numbers the VP can benchmark.
"Competitors are leveraging API-first strategies to capture the embedded analytics TAM" | "Three competitors launched API products in the last 9 months" | Removed TAM, API-first, leveraging. Replaced with observable fact.
"Our ML pipeline provides a defensible moat via proprietary anomaly detection algorithms" | "Our AI differentiation — predictive anomaly detection and natural-language query — gives our API a structural advantage" | Removed ML pipeline, defensible moat. Translated to what the AI actually does.

Acronym count: 2. The document uses "ARR" (Annual Recurring Revenue) and "API" (Application Programming Interface). Both are defined implicitly by context. The skill's cap is 5 acronyms per document; this document uses 2, leaving room for the appendix to introduce technical terms.


Quality Gate

The skill runs a 12-point quality gate before any executive document ships. Every check is pass/fail. This document scores 12/12.

Full Quality Gate Checklist

# | Check | Result | Evidence
1 | 10-second test: Can the reader extract the recommendation in 10 seconds? | Pass | Recommendation in first paragraph: "Invest $2M to build and launch an AI-powered API platform by Q4 2026"
2 | Ask present: Explicit, time-bound, actionable Ask? | Pass | "Approve the $2M investment. If not approved by March 28, we miss the Q3 beta window."
3 | Audience calibrated: Framing matches reader's role and decision criteria? | Pass | VP of Product frame: competitive positioning + lost deals, not CFO-style ROI lead
4 | SCR structure: Answer first, then context? | Pass | Resolution in paragraph 3; Situation and Complication precede but are compressed to 2-3 sentences each
5 | Options genuine: Structurally different, not scope variations? | Pass | 3 options differ on technology (AI vs. basic), cost ($2M vs. $800K vs. $0), and strategic positioning
6 | Jargon clean: Zero unexplained frameworks, acronyms, or technical terms? | Pass | 2 acronyms (ARR, API), both self-evident in context. No framework names in body.
7 | Risks honest: Specific scenario + probability + impact + mitigation? | Pass | 3 risks, each with probability (H/M/L), quantified impact, named owner, concrete mitigation action
8 | Evidence leveled: L1 in headlines, L2 in body, L3 in appendix? | Pass | Body uses L1 evidence (one stat, one implication). Financial model details in appendix.
9 | Word count: Within target for document type? | Pass | 487 words (target: 400-600 for Strategy One-Pager)
10 | Reversibility stated: Each option's reversibility named? | Pass | Option A: "Hard to reverse — API contracts." Option B: "Easy to extend." Option C: "Fully reversible — but window narrows."
11 | Assumptions surfaced: Load-bearing assumptions listed? | Pass | Key assumption stated inline: "mid-market companies want intelligent APIs, not just data pipes" (M). Full registry in Governance tab.
12 | "So what" complete: Every paragraph connects to the decision? | Pass | No paragraph presents facts without stating implications. Every data point connects to "why this supports the investment."
Quality Score: 12/12 | Verdict: Ship
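
Two of the gate's checks are mechanical enough to automate. A hedged sketch of the word-count band and the acronym cap; the function names and the regex heuristic are illustrative, not part of the skill itself:

```python
import re

# Sketch of two mechanical checks from the 12-point gate: the word-count
# band and the 5-acronym cap. Names and the heuristic are illustrative.

def word_count_ok(text: str, lo: int = 400, hi: int = 600) -> bool:
    """Strategy One-Pager target band: 400-600 words."""
    return lo <= len(text.split()) <= hi

def acronyms(text, cap=5):
    """Crude heuristic: any run of 2+ capital letters counts as an acronym."""
    found = set(re.findall(r"\b[A-Z]{2,}\b", text))
    return found, len(found) <= cap

sample = "DataVault's platform generates $18M ARR; the API unlocks growth."
found, ok = acronyms(sample)
print(sorted(found), ok)   # ['API', 'ARR'] True
```

A real implementation would need a whitelist (quarter labels like "Q3" or tier tags like "T1" should not count), but the shape of the check is the same.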

Quality Gradient: Where This Document Lands

Tier | Description | This Document?
Intern Tier | No recommendation, no structure, jargon throughout, no Ask |
Consultant Tier | Clear SCR, recommendation, 2-3 options, specific Ask, evidence tiered. Missing: adversarial critique, assumption registry, Type 1/2 classification |
Elite Tier | Minto Pyramid precise, audience calibrated, 3 structurally different options, Type 1 classified, evidence cascade, zero jargon, assumption registry, adversarial self-critique, revision triggers, 12/12 Quality Gate | This document

The elite-tier difference: A consultant-tier document gets the VP to the right decision. An elite-tier document gets the VP to the right decision and shows him exactly where the analysis could be wrong (adversarial critique), what would change the recommendation (watch indicators), and when to revisit (revision triggers). This builds the VP's trust in the author — the next document gets approved faster.


Evidence Summary

Tier | Count | Examples
T1 | 5 | Internal revenue metrics ($18M ARR), infrastructure capacity (2.3B events/day), CRM lost deal data (6/14 deals), gross retention (87%), budget policy (VP $3M authority)
T2 | 4 | Amplitude earnings (22% API bookings), competitor launch timeline (3 competitors, 9 months), Amplitude event volume (~800M/day), AE-verified exit interviews
T3 | 4 | Financial model ($6.2M ARR projection), API upsell estimate (15% of 340 customers), 30-40% AI premium assumption, pipeline conversion rates
T4 | 1 | Gartner embedded analytics market sizing ($4.2B, 28% CAGR) — used for context only, not strategic conclusions
T5 | 0 | Not used
T6 | 0 | Not used
Total evidence points: 14 (13 at T1-T3; 1 at T4, market sizing only).

Triangulation: The core recommendation rests on T1 evidence (lost deals, internal metrics) supported by T2 evidence (competitive launches). Financial projections are T3 (model-based). No strategic conclusion rests on T4+ evidence alone.
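
The triangulation rule can be expressed as a mechanical check over the tier counts above. A sketch (the list encoding is illustrative):

```python
from collections import Counter

# Tier tallies from the evidence summary above; the rule enforced is the
# document's own: no strategic conclusion may rest on T4+ evidence alone.
evidence = ["T1"] * 5 + ["T2"] * 4 + ["T3"] * 4 + ["T4"]

counts = Counter(evidence)
strategic_basis = [t for t in evidence if t in {"T1", "T2", "T3"}]

print(dict(counts))            # {'T1': 5, 'T2': 4, 'T3': 4, 'T4': 1}
print(len(strategic_basis))    # 13 points eligible to carry strategic conclusions
```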

Evidence quality for a Strategy One-Pager: This document's evidence quality is unusually strong for this format. Most strategy one-pagers rely on T3-T4 evidence (market reports, expert opinions). This one leads with T1 behavioral data (what customers actually did), which is the highest-confidence evidence type for an investment decision.

EVIDENCE QUALITY ASSESSMENT
O: 9 of 14 evidence points are T1-T2 (behavioral data + primary research). Only 1 point is T4 (industry report), and it's used for market context, not strategic conclusions. Zero T5/T6.
I: The evidence base is strong enough for a Type 1 decision. The VP can act with confidence because the core argument ("we're losing deals for a specific, addressable reason") is grounded in what customers actually did, not what analysts predict.
R: Ship the document. The evidence bar for a $2M Type 1 decision is met. The one area requiring additional evidence: the $6.2M ARR projection (T3 model) should be stress-tested with the CFO's team before final approval.
C: H — evidence quality is sufficient for this decision type and investment level.
W: If the VP's first question is "where did you get these numbers?" — the evidence isn't presented clearly enough. If his first question is "what about Streamline's pricing?" — the evidence is clear and he's engaging with strategy, which is the goal.

Assumption Registry

Every executive document surfaces its load-bearing assumptions. If any assumption breaks, the recommendation changes. This registry tells the VP exactly where the analysis is vulnerable.

# | Assumption | Confidence | Evidence | What Would Invalidate
1 | Mid-market companies want intelligent APIs (AI-enriched), not just raw data endpoints | M | T1: 6 lost deals mentioned "API access" but did not specify AI enrichment. T3: Developer survey (N=42) shows 67% prefer AI-enriched responses when available. | If beta customers consistently choose the basic (non-AI) API tier over the AI tier, the premium pricing assumption fails. Gate: <15% AI tier adoption in first 90 days of beta.
2 | DataVault's data processing scale (2.3B events/day) translates to a structural API advantage | H | T1: Internal infrastructure metrics. T2: Amplitude's ~800M events/day (earnings). Our 3x scale advantage means richer analytics per API call. | If Amplitude or Streamline closes the scale gap to within 2x (e.g., via infrastructure investment or acquisition), our differentiation narrows to AI features only.
3 | The $6.2M ARR projection is achievable in 18 months | M | T3: Financial model based on: 60% lost deal recovery ($1.26M), 15% existing customer upsell ($612K), new segment pipeline ($4.3M at 12% conversion). | If API deal cycles are >6 months (vs. modeled 3 months) or upsell conversion is <10% (vs. modeled 15%), the 18-month timeline extends to 24-30 months. ROI still positive but payback shifts.
4 | Reallocating 6 engineers from v4.2 is feasible without material churn risk | M | T1: 3 customers expecting v4.2 features, $420K combined ARR. T1: Account health scores (all 3 above 80/100). T3: PM assessment — high-priority items deliverable in Q3, remainder in Q1 2027. | If any of the 3 accounts escalates pre-emptively or churn signals appear (support tickets spike, executive sponsor goes quiet), engineering reallocation plan needs modification.
5 | Amplitude won't launch AI-enriched API features before our Q3 beta | L | T2: Amplitude's Q4 earnings mentioned "AI capabilities in 2026 roadmap" without specifics. T3: Job postings include ML engineer roles for "API Intelligence" team. | If Amplitude announces AI API features at their April developer conference, our differentiation window narrows from 6 months to 2-3 months. Mitigation: accelerate beta to Q2 (requires contractor augmentation, +$200K). [EVIDENCE-LIMITED]
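
Each registry row pairs an assumption with a quantified trip-wire, which lends itself to simple tracking. A sketch, assuming a hypothetical `Gate` record (the thresholds come from the registry; field names and the sample reading are illustrative):

```python
from dataclasses import dataclass
from typing import Optional

# Sketch of tracking the registry's watch indicators as explicit gates.
# Thresholds come from the registry above; the Gate record, field names,
# and the sample reading below are illustrative, not the skill's own API.

@dataclass
class Gate:
    assumption: str
    metric: str
    threshold: float                  # tripped when observed falls below this
    observed: Optional[float] = None  # None until a reading comes in

    def tripped(self) -> bool:
        return self.observed is not None and self.observed < self.threshold

gates = [
    Gate("AI-enriched APIs are valued", "AI-tier beta adoption at 90 days", 0.15),
    Gate("API demand not overstated", "API beta signups by end of Q3", 15),
]

gates[0].observed = 0.12   # hypothetical beta reading below the 15% gate
for g in gates:
    print(f"{g.metric}: {'REASSESS' if g.tripped() else 'holding'}")
```

A tripped gate maps directly to the registry's "What Would Invalidate" column: it flags when the recommendation itself needs to be revisited.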

Adversarial Self-Critique

Three genuine weaknesses in this recommendation, each linked to a specific watch indicator. This section is deliberately surfaced rather than buried in an appendix: acknowledging weaknesses demonstrates intellectual honesty, which builds VP trust.

Weakness #1: "No API Access" May Be a Proxy for Price Sensitivity

What we assumed: 6 of 14 lost deals cited "no API access" as the deciding factor T1. We interpreted this as genuine demand for an API product.

What could be wrong: "No API access" may be the socially acceptable reason buyers give when the real issue is price, feature gaps, or relationship failures. Sales teams often record the stated reason, not the root cause. If even 3 of those 6 deals were actually lost on price, the demand signal drops by 50%.

Evidence that would disprove: Re-interview the 6 lost accounts with a non-sales interviewer. Ask: "If we had offered API access at the same price, would you have signed?" If fewer than 4 say yes, the demand signal is weaker than stated.

Watch indicator: If API beta signup rate from new prospects is <20% (vs. projected 35%), demand may be overstated. Reassess at 60-day mark.


Weakness #2: AI Premium Pricing Is Untested

What we assumed: Customers will pay a 30-40% premium for AI-enriched API responses (predictive anomaly detection, natural-language query) over basic data endpoints M.

What could be wrong: The developer-tools market is brutally price-competitive. Stripe succeeded with simple, well-documented APIs — not AI-enriched ones. If developers want reliable data pipes (not AI magic), our premium pricing strategy puts us at a cost disadvantage against competitors offering cheaper basic APIs. The entire financial model depends on this premium.

Evidence that would disprove: In beta, offer both a basic tier and an AI tier. If <30% of beta users choose the AI tier within 60 days, the premium isn't valued.

Watch indicator: AI tier adoption rate in beta. If <15% at 90 days, pivot to a basic API with AI as an optional add-on (not the default).


Weakness #3: We May Be Solving a Sales Problem with a Product Investment

What we assumed: Lost deals require a new product (API platform) to recover. The market shifted; we must shift with it.

What could be wrong: Some "lost to API competitor" deals may have been recoverable with better sales execution, integration partnerships, or white-glove onboarding. Building a $2M product when the real issue is sales enablement is an expensive misdiagnosis. Amplitude's 22% API bookings could be driven by their sales motion (developer evangelism, free tiers), not product superiority.

Evidence that would disprove: Review the 6 lost deals for sales process quality: (1) Were alternatives proposed (e.g., webhook integrations, data export)? (2) Did the AE escalate to solutions engineering? (3) Was a pilot offered? If sales process was incomplete in 3+ of 6 deals, the product gap may be smaller than stated.

Watch indicator: If a partnership with a middleware provider (e.g., Fivetran, Airbyte) recovers 2+ of the lost accounts without building an API, the investment thesis weakens. Test this before committing engineering resources.


Revision Triggers

Trigger | What to Re-Assess | Timeline
Amplitude announces AI API features at April developer conference | Differentiation window, timeline, contractor augmentation budget | Immediately (April 2026)
API beta signups <15 by end of Q3 | Demand assumption, $6.2M ARR projection, GTM strategy | September 2026
"No API" drops out of the top 3 loss reasons for 2 consecutive quarters | Entire investment thesis (market demand may be overstated) | Q4 2026
Any of the 3 v4.2 accounts (combined $420K ARR) signals churn | Engineering reallocation plan; may need to split the team differently | Ongoing (monthly account health review)
Decision deadline passes without action (March 28) | Cost of delay: Q3 beta window missed, competitive gap widens by 3 months | April 2026
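As an illustration only (not part of the brief itself), the quantitative triggers above can be expressed as simple threshold checks so they can be evaluated against monthly metrics. The function and metric field names below are hypothetical, not part of any real DataVault system.

```python
# Hypothetical sketch: encode the quantitative revision triggers as
# threshold checks. Thresholds mirror the table above; the metric
# field names are invented for illustration.

def fired_triggers(metrics: dict) -> list[str]:
    """Return the revision triggers that the supplied metrics fire."""
    fired = []
    # Beta demand gate: <15 signups by end of Q3 2026
    if metrics.get("beta_signups", 0) < 15 and metrics.get("quarter") == "Q3-2026":
        fired.append("Reassess demand assumption and $6.2M ARR projection")
    # AI premium gate: <15% AI tier adoption at 90 days of beta
    if metrics.get("ai_tier_adoption_pct", 100) < 15 and metrics.get("beta_days", 0) >= 90:
        fired.append("Pivot to basic API with AI as optional add-on")
    # Churn signal on any of the 3 v4.2 accounts
    if metrics.get("v42_accounts_churn_signal", 0) > 0:
        fired.append("Revisit engineering reallocation plan")
    return fired

print(fired_triggers({"beta_signups": 9, "quarter": "Q3-2026",
                      "ai_tier_adoption_pct": 12, "beta_days": 95}))
```

In the example call, both the demand gate and the AI-premium gate fire, mirroring how the table pairs each threshold with a specific re-assessment.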

Related Use Cases and Skills

Real-world skill chains: