How I shipped a mobile app without tracking and bad things™ happened
Here’s a story:
I was hired as a Senior Android engineer at [a big, famous company] in September 2014. My job was to help build out a brand new Android app that would ship in January. That’s right: we were supposed to build an entirely new app in 5 months.
Now, this wasn’t supposed to be a big deal. The iOS team had shipped the iOS app about six months earlier. The hope was that we could just copy the iOS app to Android. No thinking, just code until January. Unfortunately, the minute the Android eng team (all three of us) started making tickets for the work, we realized there was no way we could pull that off. If the goal was 5 months, we needed to cut scope. Fast.
No problem, we thought. We’ll just ask the iOS team which features weren’t being used in their iOS app, then cut those features out of our v1. I asked the iOS team which features were least used, only to immediately discover there was no tracking in the iOS app at all.
Digging in, we discovered that the team had discussed and planned for tracking. They even made tickets. However, the tracking project was estimated at a couple of weeks of work: creating a tracking plan, getting the data team to agree on a schema, implementing the tracking calls, and ensuring every event captured basic information like device type and OS version. When push came to shove, the team prioritized shipping additional features and left the tracking for later.
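For context, the instrumentation the team had planned really was modest. Here’s a minimal Kotlin sketch of what a base tracking call might have looked like; the `Analytics` singleton, `TrackEvent` type, and event names are all hypothetical (the real schema never got agreed on), but the shape is typical:

```kotlin
import android.os.Build

// Hypothetical event type; not a real schema from our codebase.
data class TrackEvent(
    val name: String,
    val properties: Map<String, Any?> = emptyMap(),
    // Basic device context, captured automatically on every event:
    val deviceModel: String = Build.MODEL,
    val osVersion: String = Build.VERSION.RELEASE,
    val timestampMs: Long = System.currentTimeMillis(),
)

object Analytics {
    fun track(name: String, properties: Map<String, Any?> = emptyMap()) {
        val event = TrackEvent(name, properties)
        // A real implementation would enqueue the event to a batching
        // uploader; logging stands in for that here.
        println("track: $event")
    }
}

// Usage: one line at each entry point, e.g.
// Analytics.track("comment_created", mapOf("entry_point" to "task_detail"))
```

That’s the whole idea: one cheap call per user action, with the device context attached for free. The expensive part was never the code; it was agreeing on the plan.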
Largely due to that decision, all sorts of bad things™ happened.
Bad thing #1: We copied stuff that didn’t work the first time
We had specific goals for the release of our mobile apps. We wanted users to use our product more often (DAU), create more content, and collaborate more with other users. Using server-side data, we could tell that more content was being created. But we couldn’t understand the workflows and journeys of the iOS app users. We had no idea if users were getting stuck along the way.
We wanted to fix these issues in our Android app! But without tracking, we couldn’t tell whether users simply couldn’t find those features in the iOS app, or whether we had built the wrong things entirely.
For example:
In our tool, there were multiple ways to write comments. Our hope was that by making commenting easy, more users would take advantage of it. But because we didn’t have tracking in the iOS app, we couldn’t tell whether users were learning how to write comments, which entry points to commenting mattered most, or where users were getting stuck in the workflow. And because we were mostly looking at API log data, it was hard to tell whether users were making more comments on the web or in the app.
We scheduled interviews with users to ask how they were using the app. Unfortunately, this was time-consuming and gave us very limited information. People couldn’t always remember exactly how they had learned to use the app, and instead tended to focus on top-of-mind feedback.
So what happened? Yep: when we built the Android app, we rebuilt all of the mistakes from the iOS app. You know how each iteration is supposed to make things better? Not in this case.
Bad thing #2: We built stuff our users didn’t want
Another reason the iOS team deprioritized tracking app behavior was that we already had pretty great user behavioral tracking on our web product. The thinking was that we could simply use the web data to understand things like which features were most used, then use that to decide what to build on mobile.
The problem was that people used web and mobile totally differently! People on the go weren’t setting up accounts and doing major project management on their phones; they were doing that at their desk.
Relying on web data might have seemed faster and cheaper than building mobile tracking, but it wasn’t. We wasted time building things mobile users didn’t need, like project-creation and account-configuration workflows. And we underinvested in critical mobile workflows like comments and notifications, because we didn’t know their usage was so much higher on mobile.
Bad thing #3: We prioritized unimportant stuff
The thing was, we had tons of customer feedback from the iOS app. We had app store reviews, folks tweeting at us, and customers who wrote into support. But without tracking, we had no idea which of these issues were most important, or most widespread.
Unfortunately, the response was to tackle all of it.
We spent months shipping improvements that, it turned out, didn’t move the needle. In retrospect, we were being 100% reactive. But the pressure to improve the app was intense, and the work involved in implementing tracking and getting data team resources to build reports and dashboards felt too slow and too expensive. So we kept building and building, without really making a difference for our users.
The one good thing™
After hitting a brick wall for so long, we came up with a new plan. We decided to use what we had learned from user interviews, our imperfect data from the server and web, and the customer feedback to ship a much smaller feature set.
Even better: the one non-negotiable was that we were going to implement tracking on everything.
We did make the tracking project a little smaller by taking some shortcuts: we put the event data in its own tables (harder to analyze alongside our other data, but faster to plan), accepted messy naming conventions, and captured only limited metadata. But even so, we were not going to move forward unless we could track every button click, every screen, and every field.
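To give a flavor of what “track everything” meant in practice, here’s a rough Kotlin sketch in the same hypothetical style as the earlier example: every screen view and every tap funnels through the same `Analytics.track()` helper, with flat, hastily chosen event names instead of a carefully designed taxonomy.

```kotlin
import android.app.Activity
import android.view.View

// Screens: every Activity reports itself on resume. The event name is
// just the class name; crude, but it meant no screen went untracked.
abstract class TrackedActivity : Activity() {
    override fun onResume() {
        super.onResume()
        Analytics.track("screen_${javaClass.simpleName}")
    }
}

// Taps: wrap the standard click listener so every button fires an event
// before doing its real work.
fun View.setTrackedOnClickListener(eventName: String, onClick: (View) -> Unit) {
    setOnClickListener { view ->
        Analytics.track("tap_$eventName")
        onClick(view)
    }
}
```

Crude naming like this is exactly the shortcut described above: painful to analyze later, but fast enough that tracking never became the reason to slip the schedule.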
What happened? Well, with tracking in place, we were able to understand basic things about the app, like how often users were looking at tasks versus projects, and when they were opening notifications.
We learned (for example) that most users still used the web app for the heavy lifting. But the mobile apps turned out to be the perfect platform for responding to notifications and reviewing the day’s tasks while commuting on the train.
This time, we doubled down on the features our mobile users truly cared about, and worked on removing friction from the workflows that gave them the most problems. We shipped way fewer features, but we tracked all of them and focused on the things that mattered to our users.
After doing this for just a few months, our Android app saw much higher app ratings, more engagement, and better user feedback.
The moral of the story: don’t let bad things™ happen to you!
Ultimately, I was lucky. I basically got to A/B test what it would look like to develop the same app with and without user behavioral data. It turned out that data-driven development and iteration won by a landslide.
As a result, I decided I would never again ship an app without tracking. Especially on mobile.
If you’re a mobile PM or engineer, my advice is: don’t repeat my mistakes. If you care about your users, start small, track everything, and focus on iterating on the things that matter. That’s how you turn bad things™ into good things™!
Want to learn more about mobile analytics? Check out our complete guide.