📊 Design of experiment: presale crowdfunding

TLDR Phase 1 - asset prep checklist

  • highlight what problems the device solves, then answer how the tech solves them

  • compare against competitors' performance, compare against the cost of inaction

  • show human faces (me, derms, UGCs), show real use

  • share free value: informative and credible content in the same space

  • cut videos to open strongly in the first 3 seconds

  • waitlist now for a limited-time reward; keep signup smooth (type email, drop IG handle, scan QR code)

Phase 2 - running experiments checklist

  • incrementally increase spend, note down KPIs to check for diminishing returns

  • cross-platform comparison: Meta, Google, TikTok

  • engage the waitlist: interview, survey, poll, vote, easy tap IG story, email click

  • (if time allows) A/B test visuals, price point, copy text, length

Phase 3 - data analysis checklist

  • monitor ROAS and its levers, segmented by

    • their effort to engage

    • geographical region / timezone

  • (if time allows) quantify trust/traction: conversion percentage vs. list size

  • did higher ad spend also raise organic signups? did higher ad spend proportionally increase impressions?

  • recalculate CAC + COGS vs. LTV (see the sketch after this checklist)

  • iterate on Phases 1-2
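
A minimal sketch of this bookkeeping in Python; the spend ladder, signup counts, price, cost, and the 5% wait2buy figure are all placeholder assumptions, not campaign data:

```python
# Placeholder numbers: check marginal waitlist signups per ad dollar across spend levels
# (diminishing returns), and whether per-buyer margin still covers acquisition cost.

spend_levels = [200, 400, 800, 1600]      # $ ads per test period (assumed ladder)
signups      = [90, 160, 250, 310]        # # waitlisted observed at each level (made up)

price, cost = 149.0, 55.0                 # $ price and $ cost per device (assumed)
wait2buy = 0.05                           # assumed 5% waitlist-to-backer conversion

prev_spend, prev_signups = 0, 0
for spend, n in zip(spend_levels, signups):
    marginal = (n - prev_signups) / (spend - prev_spend)  # extra signups per extra $
    cac = spend / (n * wait2buy)                          # $ ads per expected buyer
    profitable = (price - cost) > cac                     # LTV > CAC + COGS check
    print(f"${spend:>5}: {marginal:.2f} signups/$ marginal, CAC ${cac:.0f}, profitable={profitable}")
    prev_spend, prev_signups = spend, n
```

A flattening marginal signups/$ number is the cue to stop scaling spend and go back to improving conversion instead.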

 

Documenting Econ101 crash course and campaign recipe with GPT.

My prompt:

Review my formula for ad spend. Here are the variables:

  • $ ads: total spending on paid Google/Meta/TikTok ads

  • # seen: number of people who saw the ad (impressions scale up with $ ads)

  • % seen2wait: conversion from seeing the ad to joining the waitlist (may or may not scale with $ ads)

  • # waitlisted: number of interested buyer emails

  • % wait2buy: assume 5% (ref below) converts to backers after seeing the email

  • # buyers: number of backers (may go up if $ price goes down)

  • $ price: selling price per device

  • $ cost: manufacturing cost per device

I see that:

  1. overall revenue > expense:

    (# waitlisted) Ă— (% wait2buy) Ă— ($ price) > (# buyers) Ă— ($ cost) + ($ ads)

    (# waitlisted) Ă— (% wait2buy) Ă— [($ price) - ($ cost)] > ($ ads)

  2. individual LTV > CAC + COGS:

    ($ price) > ($ ads) / [(# seen) Ă— (% seen2wait) Ă— (% wait2buy)] + ($ cost)

    ($ price) - ($ cost) > ($ ads) / [(# seen) Ă— (% seen2wait) Ă— (% wait2buy)]

They’re the same thing. My questions:

  1. What are other equations to quantify, limit, and guide ad spend?

  2. What hidden relationships did I miss (e.g., lowering $ price increases # buyers, raising $ ads might also raise conversion beyond 5%)?

  3. What experiments can we run to maximize revenue = (# seen) Ă— (% seen2wait) Ă— (% wait2buy) Ă— ($ price)?

    a) Tests for trading off a higher $ price against fewer # buyers, so their product is maximized
    b) Tests to maximize # buyers through higher % wait2buy and % seen2wait

(reference for 5%)
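
Quick numeric check that the two conditions above are the same inequality, using made-up placeholder numbers rather than real campaign data:

```python
# All values below are assumptions for illustration only.
ads, seen, seen2wait, wait2buy = 3000.0, 120_000, 0.02, 0.05
price, cost = 149.0, 55.0

waitlisted = seen * seen2wait        # # waitlisted
buyers = waitlisted * wait2buy       # # buyers

aggregate = waitlisted * wait2buy * (price - cost) > ads  # overall revenue > expense
per_unit  = (price - cost) > ads / buyers                 # individual LTV > CAC + COGS
print(aggregate, per_unit)  # always agree: the first is the second multiplied by # buyers
```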

GPT Response:

1. Other guiding equations:

  • Breakeven Ad Spend (max $ ads before losing money) = (# buyers) Ă— [($ price) - ($ cost)]

  • Return on Ad Spend (ROAS) = (# buyers) Ă— ($ price) / ($ ads); ROAS > 3 is a good benchmark for consumer hardware

  • Cost per Waitlist Lead (CPL) = ($ ads) / [(# seen) Ă— (% seen2wait)]; expect CPL < ($ price) Ă— (% wait2buy)

  • Payback Period beyond Kickstarter = CAC / monthly subscription-based revenue per buyer
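
These metrics as small Python helpers, with placeholder inputs; the ROAS > 3 benchmark and the CPL cap follow the bullets above:

```python
def roas(buyers, price, ads):
    """Return on Ad Spend: revenue per ad dollar."""
    return buyers * price / ads

def cpl(ads, seen, seen2wait):
    """Cost per Waitlist Lead."""
    return ads / (seen * seen2wait)

def payback_months(cac, monthly_revenue_per_buyer):
    """Payback period on post-launch recurring revenue."""
    return cac / monthly_revenue_per_buyer

# Placeholder numbers, not real campaign data:
print(roas(buyers=120, price=149, ads=3000) > 3)                 # benchmark check
print(cpl(ads=3000, seen=120_000, seen2wait=0.02) < 149 * 0.05)  # CPL < price Ă— wait2buy
```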

2. Hidden relationships:

  • Price elasticity: as $ price decreases, # buyers increases, but not linearly. Elasticity is usually > 1 for luxury or novelty goods, < 1 for must-haves

  • Diminishing returns due to ad fatigue: as $ ads increases, # seen increases, but the incremental % seen2wait drops because you start reaching less relevant audiences

  • Social proof effects: more waitlisted can improve %wait2buy above the assumed 5% because people see traction and trust

  • Cross effects: a higher price may reduce buyers, but can improve perceived quality; a larger ad spend can boost brand awareness and improve organic signups
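
A toy model of the first two relationships; the power-law reach curve and the constant-elasticity demand curve are assumed functional forms to plan experiments around, not fits to any data:

```python
def seen(ads, k=120.0, alpha=0.7):
    """Ad reach with diminishing returns: impressions grow sublinearly with spend (assumed alpha < 1)."""
    return k * ads ** alpha

def buyers(price, ref_price=149.0, ref_buyers=120.0, elasticity=1.3):
    """Constant-elasticity demand: buyers fall as price rises (elasticity assumed > 1, novelty good)."""
    return ref_buyers * (ref_price / price) ** elasticity

for ads in (1000, 2000, 4000):
    print(f"${ads} ads -> ~{seen(ads):,.0f} seen")                      # doubling spend less than doubles reach
for price in (129, 149, 169):
    print(f"${price} price -> ~${buyers(price) * price:,.0f} revenue")  # price vs. volume tradeoff
```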

3. Tests to run:

  • For balancing price and buyers:

    • A/B test landing page price points to measure waitlist signup rate & purchase intent

    • Survey willingness-to-pay (van Westendorp model)

    • Pilot campaigns with different price anchors and measure CTR and signup conversion

  • Maximizing buyers:

    • Increase %wait2buy by

      • showing social proof (“10,000 already on waitlist”)

      • offering early-bird tiers with urgency

      • building engagement (emails, sneak peeks, community)

    • Increase #waitlisted by

      • optimizing ad creatives for CTR and conversion (A/B test ad copy, visuals)

      • adding referral incentives (“invite 3 friends to get early access”)

      • layering in organic channels (PR, newsletters, TikTok/IG content)

  • Maximizing %seen2wait

    • Ad creative tests: run split-tests with different copy angles (problem-focused vs. aspiration-focused)

    • Landing page optimization: test headline, CTA wording, visuals, form length

    • Audience targeting experiments: compare lookalike audiences vs. broad interest targeting vs. retargeting

    • Retargeting sequences: test follow-up ads for people who clicked but didn’t sign up
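
A sketch of how one of these split tests (price anchor, creative, or landing page variant) could be called on waitlist signup rate, using a two-proportion z-test; the counts below are placeholders:

```python
from math import sqrt, erfc

def two_proportion_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in signup rates between variants A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))
    return p_a, p_b, z, p_value

# Assumed counts: 80 signups out of 4,000 impressions for A, 110 out of 4,000 for B.
p_a, p_b, z, p_val = two_proportion_test(conv_a=80, n_a=4000, conv_b=110, n_b=4000)
print(f"A {p_a:.2%} vs. B {p_b:.2%}, z={z:.2f}, p={p_val:.3f}")
```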

In practice, build a dashboard that updates

Revenue Forecast = (# seen) Ă— (% seen2wait) Ă— (% wait2buy) Ă— ($ price)

and compares it against ad spend to show CAC, ROAS, and profitability in real time.
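
A minimal sketch of that dashboard math with pandas, assuming placeholder per-platform numbers and the 5% wait2buy:

```python
import pandas as pd

df = pd.DataFrame({
    "platform":   ["Meta", "Google", "TikTok"],
    "ads":        [1500.0, 800.0, 700.0],      # $ ads (placeholder)
    "seen":       [60_000, 25_000, 45_000],    # # seen (placeholder)
    "waitlisted": [1300, 450, 1100],           # # waitlisted (placeholder)
})
price, cost, wait2buy = 149.0, 55.0, 0.05      # assumed

df["seen2wait"]        = df["waitlisted"] / df["seen"]
df["buyers_forecast"]  = df["waitlisted"] * wait2buy
df["revenue_forecast"] = df["buyers_forecast"] * price
df["cac"]              = df["ads"] / df["buyers_forecast"]
df["roas"]             = df["revenue_forecast"] / df["ads"]
df["profitable"]       = (price - cost) > df["cac"]
print(df[["platform", "seen2wait", "cac", "roas", "profitable"]])
```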

My response:

I like Return on Ad Spend (ROAS) because it gives me multiple levers to optimize:

ROAS = (# seen) Ă— (% seen2wait) Ă— (% wait2buy) Ă— ($ price) / ($ ads)

So we test improvements in 4 directions:

  1. Increase $ price by emphasizing value against competitors, highlighting pain points solved, creating FOMO, and targeting demographics with more spending power

  2. Increase waitlist-to-buyer conversion by building engagement and a sense of ownership: (high effort) 1-1 interviews on pain points, (low effort) one-click votes on design choices

  3. Increase impression-to-waitlist conversion by simplifying signup, making ads feel trustworthy rather than scammy, and delivering valuable information within the first 3 seconds of a video or a glance at the page

  4. Maximize # seen impressions by scaling ad spend gradually, and only after improving the other conversions (see the sensitivity sketch below)

Trust is hard to measure before launch, since the actual % wait2buy won’t be known until launch day (polls vs. election results), so it’s important to build trust prelaunch by putting a face to the corporatey brand and showing diverse peers using the device. Skip cross effects for now: they’re messy to predict and many can only be analyzed retrospectively. To avoid personal bias on price elasticity, anchor to competitor prices or to the cost of inaction (money wasted without our product).
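
A rough sensitivity sketch on those four levers, with assumed baseline numbers: bump each factor by 10% and watch ROAS.

```python
# Assumed baseline, not campaign data.
base = {"seen": 120_000, "seen2wait": 0.02, "wait2buy": 0.05, "price": 149.0}
ads = 3000.0

def roas(v):
    return v["seen"] * v["seen2wait"] * v["wait2buy"] * v["price"] / ads

for lever in base:
    bumped = dict(base, **{lever: base[lever] * 1.1})
    print(f"+10% {lever}: ROAS {roas(base):.2f} -> {roas(bumped):.2f}")
```

Because ROAS is a pure product, a 10% lift in any one lever moves it by the same amount; the experiments above are really about finding which lever is cheapest to move by 10%.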
