Product Analytics is Useless. Let's Fix That.
Here’s a dirty little secret: Product Analytics is useless.
It’s strange, right? After all, we’re a Product Analytics company. We literally built a tool that delivers product analytics in a new way.
Recently, though, we’ve been looking at the analytics landscape and making some realizations. Primary among these: despite the transformative power analytics has on product development (analytics are like a superpower for product), most of the time analytics ends up doing far less for its users than it could.
Why is this? In our experience, product analytics is useless because of a simple mistake companies (and people) tend to make, over and over. (It also happens to be a mistake promoted by marketing departments everywhere.) It’s this:
THE TOOL IS NOT THE ACTION
The tool is not the action. It’s just not. And yet product orgs proceed as if it is, all the time.
You know that guitar in your closet? Be honest: at some point weren’t you tempted by the idea that simply buying a guitar was the thing holding you back from being a rock star? Or that fancy closet organizer—wasn’t the dream that if there were finally a place for everything, your house would end up clean?
And yet the tool is not the action. Having a tool is not the same as putting the systems or methodologies in place that make that tool meaningful. Owning a guitar isn’t the same as practicing every day. Putting shelves in your closet isn’t the same as picking up your clothes from the floor.
Put more formally, owning a tool is not the same as committing to and/or executing the repeated activities that produce the result you want.
Unfortunately, this is exactly what most analytics companies promise. All you need is data! Get the data flowing, put a dashboard in front of the right person, and poof! Out pops a magical product!
Right now Product Analytics as a field tends to focus on bringing companies data. But what’s elided here is the path from data to decision. Product teams don’t need data for its own sake. They need data to make better products.
We’d like to change this. We want to make Product Analytics useful! After all, our goal in building a tool that collects customer data in an easy and comprehensive way wasn’t just to get data—it was to help people make better products.
"Product teams don’t need data for its own sake. They need data to make better products."
So how are we doing this?
Well, at Heap we’re busy developing something we’re tentatively calling the “Scientific Method for Product.” This is a way to leverage the tools of science—most notably, hypothesis and experiment—to generate more insights from your data. The goal is to raise the rigor and focus of product decisions.
Here are three core ideas that make up this method. We believe that these strategies, when fully incorporated, can help product teams be more rigorous, more creative, and more successful.
IDEA #1: Make Hypotheses, Not Decisions
That’s right — no decisions. We’re serious: we think that most things you can do in your product would benefit from being framed as hypotheses.
So instead of “deciding to ship a feature,” you say, “my hypothesis is that this feature will cause X users to do Y.” Instead of deciding to pursue a new market segment, you hypothesize that doing so will expand your TAM.
Why? Decisions imply finality. They’re hard to walk back. Hypotheses, on the other hand, are just strategies for gathering information. A bad decision is a failure; a wrong hypothesis is just another step on the path to knowledge.
Similarly, understanding choices as hypotheses forces you to focus your plans. We think hypotheses should specify in advance what should change, why you think that thing should change, and how you’ll know if you were correct. Here’s an example from Heap:
Decision: “We’ve decided to build the ability to autocapture previous-page data.”
Hypothesis: “We hypothesize that by building our tool to autocapture previous-page data, users (primarily eCommerce customers) will be better able to track the paths users take in their product, and/or to perform this analysis in their downstream warehouse. This should increase NPS and customer retention, improving ARR by tens to hundreds of thousands. We’ll know we’re successful if NPS scores increase among people who use this feature, and if churn is reduced among people who use this feature compared to people who don’t.”
Which of these better sets you up for success?
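To make the structure concrete, here’s a sketch of that hypothesis captured as a structured record. (A minimal illustration in Python; the fields and names are our own invention, not part of any particular tool or template.)

```python
from dataclasses import dataclass, field

@dataclass
class ProductHypothesis:
    """A product hypothesis, written down before any work begins.

    Illustrative only: adapt the fields to your own product brief.
    """
    change: str                 # what we will build or change
    rationale: str              # why we expect the change to matter
    expected_effect: str        # the behavior that should change as a result
    success_criteria: list = field(default_factory=list)  # how we'll know we were right

# The previous-page autocapture example from above, restated as a record:
previous_page_capture = ProductHypothesis(
    change="Autocapture previous-page data",
    rationale="eCommerce customers want to track the paths users take, "
              "in-product or in their downstream warehouse",
    expected_effect="Higher NPS and retention among users of the feature",
    success_criteria=[
        "NPS increases among people who use the feature",
        "Churn is lower for feature users than for non-users",
    ],
)
```

The format matters less than the discipline: every field has to be filled in before you ship, which is exactly what a bare decision lets you skip.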
"Product-Market Fit is fickle. Maintaining it requires constant iteration."
Finally, framing decisions as hypotheses promotes a vision of your product as ever-evolving. Product-Market Fit is fickle. Maintaining it requires constant iteration. Good thing hypotheses have a way of producing only more hypotheses. What assumptions haven’t you investigated? What additional information do you need?
IDEA #2: Analytics for Exploration
This is another one we believe deeply. Traditionally, product analytics has functioned as a kind of documentation. You decide that you care about a group of things—high-level metrics like pageviews and sessions—and then you track them.
We’re not saying there’s anything wrong with documentation-style analytics. We have these high-level dashboards up on monitors in our office, too.
But if this is where your data starts and ends, you’re missing a wealth of information. What product features drive these metrics? What events make up these features? And — finally — how can you change these events to improve your high-level KPIs?
For example, our customer Lending Club noticed a drop-off at a specific moment in their funnel. So they dug in. Among the questions they asked: “What happens after people experience a form error?”
Thanks to their analytics tool, they made an interesting observation: some people who received a form error dropped off and some didn’t, but the people who did drop off didn’t do so immediately. Often, users who received validation errors engaged in the same behaviors as those who didn’t, moving forward as if no error had occurred. Strange!
From this, Lending Club made a hypothesis: maybe the errors weren’t noticeable enough. Maybe customers who received an error didn’t see it, and so couldn’t correct it. Maybe they kept moving forward, but the site, waiting for the error to be corrected, wouldn’t let them. This produced confusion: why can’t I move forward? Something must be wrong with the site. I’d better leave.
The test was simple: make the error messages more prominent. The fix produced a near-immediate uptick in conversion.
When analytics only track things that are defined in advance, it’s simply not possible to diagnose a problem in this way. Fixes like these are only discoverable when Product Managers can ask multiple questions of their data and poke around in it.
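To give a flavor of what that poking around looks like, here’s a minimal sketch of the kind of segmentation the Lending Club analysis implies, assuming a made-up event log in Python: flag the sessions that hit a validation error, then compare their completion rate to everyone else’s.

```python
import pandas as pd

# Hypothetical autocaptured event log: one row per event, tagged by session.
events = pd.DataFrame({
    "session": [1, 1, 2, 2, 2, 3, 3, 4],
    "event": ["form_submit", "complete",
              "form_error", "form_submit", "complete",
              "form_error", "abandon",
              "complete"],
})

by_session = events.groupby("session")["event"]
saw_error = by_session.apply(lambda e: "form_error" in e.values)  # hit a validation error?
converted = by_session.apply(lambda e: "complete" in e.values)    # finished the funnel?

# Completion rate for sessions with vs. without an error.
summary = pd.DataFrame({"saw_error": saw_error, "converted": converted})
print(summary.groupby("saw_error")["converted"].mean())
```

None of this works if “form_error” was never captured in the first place, which is the point: exploration requires data you didn’t know you’d need.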
IDEA #3: Iterate Incrementally
This attitude aims to combat the myth that great products pop fully formed out of the heads of genius product managers. That’s always the story, at least. Maybe it’s happened somewhere, but we’ve never seen it.
What really happens (as many of us know but don’t always want to admit) is that you start with an idea, then you work on it, then you work on it, then you work on it, and after a long time you’ve created something you hope is valuable to someone. Aiming big is great, but you’re more likely to stumble upon buried treasure when you commit to searching every part of your product.
So how do you build this approach into your development process? Some great product companies provide examples. Airbnb, for example, is famous for experimenting with everything, for testing every single part of its site. Many of these experiments produce no meaningful results. That’s OK! Collectively, they dramatically improve the experience and value of the site.
At Netflix, every product change goes through a rigorous A/B testing process before becoming the default user experience.
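The statistical core of a test like this is small. Here’s a minimal sketch, with made-up numbers, of one common way to check whether a variant’s conversion lift is real rather than noise (a two-proportion z-test; we’re not claiming this is Netflix’s exact method):

```python
from math import sqrt
from statistics import NormalDist

def lift_is_significant(control_conv: int, control_n: int,
                        variant_conv: int, variant_n: int,
                        alpha: float = 0.05) -> bool:
    """Two-proportion z-test: did the variant really convert better?"""
    p1 = control_conv / control_n
    p2 = variant_conv / variant_n
    # Pooled rate under the null hypothesis that nothing changed.
    pooled = (control_conv + variant_conv) / (control_n + variant_n)
    se = sqrt(pooled * (1 - pooled) * (1 / control_n + 1 / variant_n))
    z = (p2 - p1) / se
    p_value = 1 - NormalDist().cdf(z)  # one-sided: is the lift positive?
    return p_value < alpha

# Hypothetical numbers: original error styling vs. more prominent styling.
print(lift_is_significant(480, 4000, 560, 4000))  # True: the uptick is unlikely to be chance
```

The machinery matters less than the habit: no change becomes the default until the data says it earned it.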
Luckily, it’s easy to follow these leads. Think small, not big, and never stop moving forward.
—
So these three ideas (make hypotheses, not decisions; analytics for exploration; iterate incrementally) are just three components of our general scientific method for product. Together with the method’s other components, they connect the dots from data to products.
Is Product Analytics useful? Of course it is. But to be useful, Product Analytics has to mean more than simply “data.” The tool isn’t the action. But combine tool and action, and you’ve got a reliable process.