How Forge Uses Product Data to Drive Experimentation
This post is guest-authored by Aaron Cripps, VP of Product at Heap customer Forge.
In 2019, when our Product team at Forge evaluated how the business defined and measured success, it became clear that very little of the true customer journey was visualized or measured. We had no reference point to identify the “why” driving behavior. And while we were running experiments, we evaluated the impact of changes over time only inconsistently. Increasing visibility into the journey and customer behaviors helped us deliver better solutions.
We’ve found that the key to iterating on product success has been moving data closer to the core of our decision-making. Having a trustworthy source of truth has increased our team’s accountability to measurable results. Using Heap’s product analytics has increased our comfort and agility in making product changes as we’re able to rely on a complete set of historical data, as well as measure the results of our experiments in real time. With Heap, we can measure the impact of product changes on client conversion and see how our experiments increase the number of closed transactions.
How experimenting drives measurable value
Every successful experiment includes the following: a hypothesis, a target outcome, a documented change, and a way to socialize the insights you gain. This process can be adopted across your organization to produce tangible insights that lead to measurable outcomes.
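As a minimal sketch of that process (the names here are illustrative, not part of any Heap API or Forge system), the four ingredients can be captured in a simple record so every experiment is documented the same way:

```python
from dataclasses import dataclass, field

@dataclass
class Experiment:
    """One experiment record: the four ingredients described above."""
    hypothesis: str       # what we believe, and why
    target_outcome: str   # the metric we expect to move
    change: str           # the documented change we shipped
    insights: list[str] = field(default_factory=list)  # learnings to socialize

# Hypothetical example, loosely based on the marketplace story below
exp = Experiment(
    hypothesis="Surfacing more company data earlier increases repeat visits",
    target_outcome="Week-1 retention above the historical baseline",
    change="Added historical trading data to the marketplace view",
)
exp.insights.append("Design-only changes produced a small lift")
```

Keeping experiments in a shared, uniform shape like this is one lightweight way to make the insights easy to socialize across teams.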
While there are many types of experiments one can run, here are three that have been especially easy to get started with:
Concierge: Manually delivering value via direct, person-to-person engagement. For example, we might have a broker or Ops person “white glove” a new concept with a customer, and then we’ll measure their response.
Wizard of Oz: Manually providing value behind the scenes, without the customer being aware. For example, we’ll sometimes run a query or calculation by hand, then present the results through the platform to see if it increases engagement.
Doing it live: Shipping changes straight to production, with split testing to see if the changes make a measurable difference in user behavior compared to a live baseline (assuming you have infrastructure capable of running two experiences at once).
What experimenting looks like in action
Forge offers investors access to pre-IPO companies through our private marketplace platform. Clients are able to closely track price trends, discover new opportunities, and invest in private market capital products and services.
Compared to the public stock market, the private market is extremely opaque and challenging to engage with. As a result, our product needs to be very clear to help investors navigate trading within the private marketplace. Unfortunately, the engagement numbers on our marketplace platform suggested it wasn’t as clear as we had thought. Instead, we saw too many potential clients sign up to learn more — and then leave without indicating interest in trading.
Forge’s marketplace discovery experience allows potential clients to find investment opportunities by searching and filtering through the private companies listed on the platform. About three years ago, our marketplace discovery experience presented just the company name and provided a brief summary about the business.
One of our product managers dug into Heap’s behavioral data, and found that initial marketplace engagement was lower than expected. We know that many investors will research multiple companies for a long period of time, but the data showed us that there was minimal engagement after the first (brief) visit.
This prompted us to go deeper in understanding our customer needs through interviews and segmentation. We found that some key information was buried too deep within the product — and potential clients were leaving the platform without ever finding that info.
Before jumping into any product changes, we used Heap to baseline our retention in quarterly cohorts and confirm that our platform’s performance was stable across cohorts. Historically, retention after the first week was 35%. We believed it needed to be much higher, and retention in subsequent weeks was similarly low.
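As an illustrative sketch (not Heap’s actual implementation, and with made-up data), baselining retention by quarterly signup cohort amounts to grouping users by their signup quarter and computing the share who return within the first week:

```python
from collections import defaultdict
from datetime import date, timedelta

# Hypothetical event log: user_id -> (signup date, later visit dates)
users = {
    "u1": (date(2019, 1, 10), [date(2019, 1, 14)]),  # returned within week 1
    "u2": (date(2019, 1, 20), []),                   # never returned
    "u3": (date(2019, 4, 5),  [date(2019, 4, 9)]),   # returned within week 1
}

def quarter(d: date) -> str:
    """Label a date with its calendar quarter, e.g. '2019-Q1'."""
    return f"{d.year}-Q{(d.month - 1) // 3 + 1}"

cohorts = defaultdict(lambda: [0, 0])  # quarter -> [signups, retained]
for signup, visits in users.values():
    q = quarter(signup)
    cohorts[q][0] += 1
    # Retained if any visit falls within 7 days after signup
    if any(signup < v <= signup + timedelta(days=7) for v in visits):
        cohorts[q][1] += 1

retention = {q: retained / signups for q, (signups, retained) in cohorts.items()}
print(retention)  # → {'2019-Q1': 0.5, '2019-Q2': 1.0}
```

Comparing these per-cohort rates side by side is what lets you claim the baseline is stable before attributing any later movement to a product change.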
We hypothesized that by adding more information to the initial marketplace view (which is the first experience users have after signup), we could increase the number of repeat visits. Our desired end goal was to give clients access to the right information earlier, in order to better inform investment conversations with their broker.
We first performed a heuristics audit and identified ways to improve the visual presentation of pre-existing information in the marketplace. We also considered how we could surface proprietary insights, like historical trading data, which we had access to behind the scenes but weren’t readily visible on the platform. Changing the presentation was a straightforward design activity, so we started there, while at the same time navigating the challenges of presenting more data on the platform.
There were two risks to bringing new data to the platform:
Execution risk: can we present data consistently and meet our high compliance standards?
Desirability risk: does this information resonate with our customers and drive repeated engagement with the platform?
In order to avoid overbuilding our first solution, we created a compliance-approved formula, business process, and internal tool which enabled the brokers to deliver this information “Wizard of Oz” style.
As we weren’t yet using split testing, we released a sequence of product changes over time and compared their effects on successive signup cohorts. The design changes produced a small lift, but the real benefit came from getting those internal insights out on the platform: our UX/UI updates drove a 17% lift in engagement, while the data we added to the marketplace view led to a 51%+ increase in potential client engagement.
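For readers new to these metrics, a lift figure like the ones above is just the relative change in an engagement rate between a baseline cohort and a post-change cohort. A minimal sketch, with purely illustrative numbers (not Forge’s actual rates):

```python
def lift(baseline_rate: float, new_rate: float) -> float:
    """Relative lift of new_rate over baseline_rate, as a percentage."""
    return (new_rate - baseline_rate) / baseline_rate * 100

# A cohort engaging at a 0.20 rate before a change and 0.234 after
# corresponds to a 17% lift.
print(round(lift(0.20, 0.234), 1))  # → 17.0
```

Because lift is relative, it is only meaningful against a stable baseline, which is why establishing one (as in the cohort analysis above) comes first.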
How to start running your own experiments
Hopefully our journey at Forge will inspire you to begin running experiments at your own company. Knowing where to start can be overwhelming, so here are three simple steps to get you going:
Find and focus on a test that creates a first point of leverage — an area that is already important to the organization and where you can prove the value of experiments. Don’t go too wide or too conceptual.
Make sure you have “just enough” governance that your data can withstand scrutiny. Learn more about how Heap helps you govern your data.
Look for partners who believe in your success and want to pair with you. For example, your Heap CSM can accelerate your progress by sharing tips on how to get the most out of Heap, as well as patterns they’ve seen work in other organizations.