Chapter 01

Why PMs Need This

How data shapes product decisions, when to trust numbers, and how to push back without becoming a data scientist.

⏱ 10 min read 🔰 Start here

The meeting that should scare you

Picture this: your data team shares a slide. "Engagement is up 12% after the redesign." The room nods. Someone says "great work." You move on.

But nobody asked: 12% compared to what? Over what time window? Which users? Was anything else launched that week? Is 12% even meaningful given how noisy this metric is — meaning how much it bounces around randomly even when nothing real has changed?

This happens in almost every product review, every week. And the cost isn't just making a bad call on one feature — it's building a habit of mistaking noise for signal, correlation for causation, and activity for progress.

PM Insight

You don't need to run the analysis. You need to know which questions to ask before trusting it. That's the skill this guide builds.


Three roles data plays in product work

Data shows up differently depending on what stage of a decision you're in.

1. Data as evidence

You're deciding whether a problem is real and how big it is. Here, data answers: how many users hit this? how often? how much does it cost them? The risk is cherry-picking — finding data that confirms what you already believe.

2. Data as signal

You've shipped something and want to know if it worked. Here, data answers: did behavior change? did it change because of what we did? The risk is attribution — many things change at once, and it's easy to credit the thing you shipped.

3. Data as input to ML

You're building a feature that learns from data — a recommendation engine, a ranker (a system that orders results, like a search page or a feed), or a classifier (a system that puts things into categories, like a spam filter or a fraud detector). Here, data answers: what patterns exist that a model can learn? The risk is garbage in, garbage out — and not knowing that's what happened.

PM Insight

Each role requires different skepticism. "Is this problem real?" needs different rigor than "did this launch work?" — and both are different from "is this model good enough to ship?" Knowing which question you're answering changes what you should scrutinize.


What PMs actually need to know

The job isn't to run the analysis — it's to know which questions to ask, and what to do with the answers. That requires a specific mental toolkit:

1 — Distributions, not just averages

When someone reports an average, your first instinct should be: average of what? What does the spread look like? A feature used "on average 3 times a week" might be used 30 times a week by 10% of users and never by the rest.
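A quick sketch of that scenario, with invented numbers: two user populations with the identical weekly average but completely different realities. The mean hides it; the median exposes it.

```python
# Hypothetical usage data: same weekly average, very different distributions.
heavy_minority = [30] * 10 + [0] * 90   # 10% of users open it 30x/week, the rest never
steady_usage   = [3] * 100              # every user opens it 3x/week

def mean(xs):
    return sum(xs) / len(xs)

def median(xs):
    return sorted(xs)[len(xs) // 2]

print(mean(heavy_minority), median(heavy_minority))  # 3.0 0  -> the "average user" doesn't exist
print(mean(steady_usage), median(steady_usage))      # 3.0 3  -> the average describes everyone
```

Any dashboard that shows only the mean would report these two features identically.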

2 — Correlation vs causation

Two metrics moving together doesn't mean one caused the other. Both might be driven by a third thing you're not measuring. This is the most common error in product analytics.

3 — Statistical significance vs practical significance

A result can be statistically significant (very unlikely to be random) and practically meaningless (too small to matter). With large enough samples, almost anything clears the significance bar.
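A worked sketch of that trap, using a hypothetical A/B test: with a million users per arm, a lift of one tenth of a percentage point clears the significance bar comfortably.

```python
import math

# Invented A/B test: 1,000,000 users per arm, conversion 10.0% vs 10.1%.
n_a, conv_a = 1_000_000, 100_000   # control
n_b, conv_b = 1_000_000, 101_000   # variant

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)
# Standard two-proportion z-test.
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se

print(round(z, 2))        # above 1.96, so p < 0.05: "statistically significant"
print(p_b - p_a)          # absolute lift: 0.001, i.e. one extra conversion per thousand users
```

Whether one extra conversion per thousand users justifies the feature is a product judgment the p-value cannot make for you.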

4 — Model confidence ≠ model accuracy

"87% confident" is the model's internal probability score — how strongly it's leaning toward a prediction. It is not "this prediction is correct 87% of the time." What matters is the actual error rate: of every 100 users flagged as churners, how many actually churn? That's what tells you whether to act.
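A minimal sketch of that distinction, with made-up labels: every prediction below carries an 87% confidence score, but the observed hit rate among flagged users is what you should act on.

```python
# Hypothetical churn predictions: (model confidence, did the user actually churn?).
# All invented for illustration: the model reports 0.87 on every flagged user,
# but only 60 of the 100 flagged users actually churn.
predictions = [(0.87, True)] * 60 + [(0.87, False)] * 40

flagged = [churned for conf, churned in predictions if conf >= 0.5]
precision = sum(flagged) / len(flagged)

print(precision)  # 0.6: the score said 87%, the observed hit rate is 60%
```

The gap between the two numbers is a calibration problem, and it's why "how often are the flagged users actually churners?" is the question to ask.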

5 — What's feasible vs what sounds feasible

"Can't we just use AI to predict X?" is asked in every product meeting. The answer depends on whether you have labelled data (historical examples where you already know the outcome — e.g. past users who churned vs stayed), whether a predictive signal even exists, and whether the business case covers the engineering cost. This guide helps you ask those questions.
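One cheap sanity check behind those questions, sketched with invented records: before anyone builds a model, look at whether any candidate feature separates the known outcomes at all. No separation, no signal, no model worth funding.

```python
# Hypothetical labelled history: (sessions in last 30 days, churned?).
# All records are invented for illustration.
history = [
    (2, True), (1, True), (3, True), (2, True),       # users who churned
    (12, False), (9, False), (15, False), (11, False),  # users who stayed
]

churned = [sessions for sessions, c in history if c]
stayed  = [sessions for sessions, c in history if not c]

def mean(xs):
    return sum(xs) / len(xs)

# A wide gap between the groups hints a predictive signal exists;
# near-identical means suggest this feature can't distinguish them.
print(mean(churned), mean(stayed))  # 2.0 11.75
```

This is a back-of-envelope check, not a model evaluation, but it answers the "does a signal even exist?" question before the engineering cost is spent.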


How to use this guide

Each chapter covers one concept area. You don't need to read them in order, though the early chapters build vocabulary that later ones use.

Wherever a concept is clearer to see than to read, there's an interactive visualization. Play with it — the intuition you build from dragging a slider is stickier than a definition.

Every chapter ends with a PM Playbook — the specific questions you should ask in real meetings, scoped to that chapter's topic.

Before you continue

Think of a recent meeting where someone presented data that changed a product decision. What questions weren't asked? By the end of this guide, you'll know exactly what was missing.


PM Playbook — Questions to ask right now

The next time someone shares analysis in a meeting, try the questions from the opening scene: Compared to what baseline? Over what time window? Which users? What else changed at the same time? Is the effect large enough to matter, given how noisy the metric is?

These questions alone will change the quality of decisions your team makes.
