One of the most important aspects of building a company is building out the team. This is particularly challenging because you need to hire people for a lot of roles that you aren’t qualified to do yourself.
For example, my background is in software engineering. When it was time to hire our first engineers, I had some idea of what to look for and even some interviewing techniques I liked. But as the company scaled, we needed to hire for specific expertise, which meant hiring for jobs I had never done and wouldn’t have been very good at.
This creates a new kind of challenge: how should I go about hiring a first DevOps engineer? What about a first data scientist? What about engineering managers? What about a first salesperson, or a growth marketer, or a lead for the solutions team?
This turns out to be one of the repeated problems in building a company. How do you hire someone who will be fantastic at their job when you wouldn’t meet the requirements yourself? This is critical, because your team will be extremely limited if you can only hire for expertise you have personally.
In this post I’ll describe a system that has worked well when I’ve needed to do this at Heap.
The first step is to get a complete picture of the role. You’ll want to talk to some people who know a lot more about it than you.
You need to learn things like:
What makes someone great at X?
What are the different kinds of X? Which profile is right for us right now?
You’ll notice patterns, and those will turn out to be important ingredients in the interview loop you’ll construct later. 
Ask questions about team-level failure modes, as well. For example, when we were doing background research for our first data science hire, multiple people flagged that data scientists typically need data engineering support to assemble datasets or productize pipelines. This kind of advice is important, so that you can build the right team to set your “first X” hire up for success.
Learn the pitfalls
Another important thing to learn is the most likely reasons a hire might not work out. These are the pitfalls your interview process needs to de-risk.
When doing background research for our first data science, DevOps, and engineering management hires, here are some anti-patterns that came up:
Data scientists who use shiny modeling techniques when simple ones will suffice, or without first trying the simple ones.
Data scientists who can plug data into off-the-shelf models but don’t have enough statistical depth to iterate and tune effectively beyond that.
DevOps engineers who have inflexible preferences for specific tools, as opposed to a pragmatic, open-minded approach.
DevOps engineers who push for perfect automation, beyond the point of practicality.
Engineering managers who don’t have enough technical context to ensure high-quality execution.
Engineering managers who limit their role to feature delivery, instead of being strategic partners with their product and design peers to collectively ensure a successful overall product.
Note that these anti-patterns translate directly into interview steps when you design your loop later on.
You might need to lean on your network to source these discovery conversations, but I’ve found strangers to be extremely open to giving advice about their area of expertise.
Interview processes for roles you couldn’t do
Once you have a decent lay of the land, the next step is to build an interview process. This part is tricky, because your discriminative ability is limited. You’re looking for someone who has skills and expertise that nobody else on your team has, which means it will be hard to collectively discern whether a candidate is “great” or not.
I’ve found two strategies to be particularly helpful here:
When in doubt, emulate.
Tap external expertise.
Emulate the job!
The less you understand a role, the more you want to lean on making the interview process emulate the job. This works well because you can probably recognize great work, even if you couldn’t produce it. 
You might be totally unqualified to be a DevOps engineer, but you can probably still:
Assess which of five candidates made the most progress on a half-day DevOps problem.
Assess whether a candidate can explain their approach and the tradeoffs they made, in a way you understand. (They’re going to have to do that on the job, so if the communication is fraught during the interview, that’s a real problem!)
To build an interview process that emulates the job, start by listing some scenarios in which it’s particularly important that the person you hire shines. Work backwards from those to create an interview loop.
Examples from hiring our first DevOps engineer:
This person was going to need to build a lot of tooling from scratch. So, our onsite interview centered on building a deploy pipeline, from AWS primitives, for a toy service.
This person was going to inherit a huge backlog of high-leverage DevOps projects. So, we structured the onsite problem so as to test how well they could make pragmatic choices to limit scope.
Examples from hiring our first data scientist:
This person was going to need to model behavioral datasets with an eye for what could be useful for our customers. So, for our onsite interview, we had candidates build a model based on a behavioral dataset and use it to power a mock feature.
This person was going to need to hit the ground running with some messy, one-off data dumps. So, we used a messy, one-off data dump for the interview problem.
An example from hiring our first engineering manager:
This person was going to need to build rapport with their team and figure out ways to help each person grow. So, our interview process involved getting to know some engineers, generating some ideas for how to help each person grow, and talking through how they would go about executing those ideas.
You can also incorporate the pitfalls you’re trying to avoid into your evaluation criteria. For example, we knew from our research that we wanted to avoid hiring a data scientist who reaches for the shiniest, most sophisticated modeling techniques when simpler ones would suffice. So in our interview process, we asked candidates to walk us through the choices they made during their interview project and, if they didn’t start with the simplest models, tried to understand whether there was a good reason.
Lean on external expertise
You can recognize great work, even if you couldn’t produce it, which is why emulating the job works so well as the backbone of your interview process. But what if one of the key spikes for the role is something you won’t be able to recognize?
For example, it was important for our first data science hire to have deep statistics fundamentals. We can assess pragmatism or execution speed via an onsite problem, but what about the theoretical background our hire was going to need to be able to iterate on a model that wasn’t working well?
One of my favorite techniques for this kind of situation is to bring in an external expert for the spikes you can’t detect via emulating the job.
We’ve gotten great interview signal by incorporating folks from our network into the interview process. I don’t know why more people don’t do this. There’s no fundamental reason to limit your discriminative ability to that of your current team, especially for spikes you think are key for a role and which nobody on your team is qualified to assess.
Figure out which important aspects of the role you can assess well, and outsource the others to experts whom you trust. Ideally these experts are advisors or are otherwise “friends of the company,” but this can work well even if they aren’t.
For our first data science hire, we asked a machine learning PhD we knew well and thought highly of to join in as an interviewer for one part of our data science onsite, and they were able to give us confidence that a candidate we were excited about also had the technical depth we weren’t going to be able to gauge ourselves.
General advice for “first X” hires
Here is some general advice I’ve found helpful when making a “first X” hire.
Playtest the interview loop. Have an external expert playtest key parts of your interview loop. This can help you calibrate – you now have one datapoint for what a “good” performance looks like. It can also help flush out parts of the process that don’t work well, e.g. that the interview problem is too complex, or the dataset is a little too messy, or that you’d learn more from having the candidate extend a basic solution instead of starting from scratch.
Look for someone who teaches you a lot during the interview process. You’re going to need to learn a lot about their function in order to collaborate with them. It should be clear that you’re learning a lot from them already.
Look for a great first X, not just a great X. Whoever you hire won’t be inheriting much foundation or structure. So, I would be especially careful about hiring a “first X” who only has experience at companies with very mature X competencies. For example, a first SRE hire who has only worked within an extremely sophisticated, Google-level SRE stack might be a risk – their experience might not translate well, and they might wind up a fish out of water without the toolchain and organizational support they’re used to. You’ll want to make sure the candidate knows what they’re getting themselves into and is excited about it. Also, make sure your interview process appropriately simulates the environment the candidate would be joining, which has none of the support the candidate might be used to.
Target people who have “seen good” before. You want someone who knows what an excellent X function looks like, because they’re going to play an outsized role in building it out at your company. And, in particular, they’ll play an important role in hiring your second X.
Sell the opportunity for impact and leadership. The opportunity to build out a new function from scratch is often part of the appeal of a “first X” job. The ideal candidate is often an important contributor on a high-functioning team, who has “seen good” before, but who hasn’t necessarily had a chance to lead yet.
Build a 30-60-90 plan together. Hiring people who know more than you about their area of specialty is only the first step. You’ll need to manage them too! Iterating on a 30-60-90 day plan as part of the interview process can help align expectations and give you something to work off of, after the person joins.
Make it clear that you value their area of expertise. You know less about the candidate’s area of expertise than they do. It’s easy for a candidate to assume this is because you don’t take their specialty seriously (and thus haven’t bothered to learn more about it). If they think this, they won’t want to work with you. The candidate needs to know that you respect what they do, see their role as important and strategic, and are aware of your own gaps.
When not to hire your first X (yet)
For roles that are particularly central to the core feedback loop of the company, you might want to spend some time doing X instead of hiring for it immediately.
For example, it might not be a good idea to hire a first salesperson before you’ve figured out the basic selling motion on your own, as a founding team. The market learning you’ll get from doing it yourself is so valuable to the fundamental product-market fit discovery the company is doing that you might be better off trying to do it yourself, at first.
You won’t be able to do this for each of the dozens of distinct roles you’ll need to hire as the company scales, but it can make sense in some cases.
Thanks to Sidra Hussain for feedback on earlier drafts of this post.
Footnotes

1. With any rapid learning process like this, I prefer to pack the context into as short a time window as possible. I’d much rather have five such conversations in one week than spread over three weeks. I find this makes it much easier to spot the patterns.
2. Building interview processes that emulate the actual job a person will be doing is a great idea in general. The typical alternative – behavioral interviewing – has all manner of pitfalls. Behavioral interviews are more prone to interviewer biases, favor good storytellers, and overemphasize “polish”.
3. More specifically, you’re getting a datapoint about a performance that’s probably good. I did once come away from this exercise suspecting that my “expert” might not be that expert after all.