#Build 2: Failure is a Part of the Process

This is a podcast episode titled #Build 2: Failure is a Part of the Process. The summary for this episode is: What do you do about failure? I’m not talking about a test that you took that failed. I’m talking about true product failures. Products that, for whatever reason, just bombed. Failures far outside the acceptable scope of “fail fast.”

Maggie: Hey, what's up? This is Maggie, PM here at Drift. Welcome to the second episode of Build, a new channel within the Seeking Wisdom universe, where we're going to go deep on all things building software. So, as we heard about last time, I'm here at Drift to examine everything I know about building and learn from some incredible mentors. And since Seeking Wisdom is all about accelerating learning, I'm going to share everything I learn with you along the way. But before we get into more episodes on new ways to work, today I thought I would start at the very bottom: the failure. By failure, I don't mean a test that you shipped that failed. You know, that's the whole point of a test. What I mean are true product failures: releases that you worked on that, for whatever reason, just completely missed the mark. So I listened to DG's podcast the other day and stole his idea of polling Twitter for a quick response. I also reached out to a bunch of product people I knew to try and get a quick list together of some of our more obvious product failures. My goal with this was to see if I could quickly identify a common thread and figure out a better way to spot failures in advance, and what was fascinating was that pretty much every example had the exact same thing in common: losing sight of the customer. So to get to the examples, I'm going to call out some of the good ones I got on Twitter. We had a product that missed a key segment of users, who ended up hating the feature so much that it had to be rolled back, despite loud commitments from the team that they wouldn't do it. Another where they released a feature that customers said they needed, but since all the customers had already built workarounds, no one adopted it. And a really good one where we had a six-month interactive video platform, API, and SDK that no one used but the internal dev team. Shout out to Craig, our very own VP of Product at Drift, for that gem.
And I had one that even ended up working, where we hit our metric. But when I did some post-launch user feedback, I learned that the only reason anyone interacted with our feature was not all the work that we put into the context and the framing and persuasion, but really just that we made a button orange and users noticed it. And this list just keeps going. So after listening to all these stories, one thing became super clear: we all opened ourselves up to failure when we started to think that we knew what the customers wanted and needed. And because we were sure about what the customers needed, we just went ahead and built it, only to find out once we actually shipped those features that it wasn't really what customers wanted or needed at all. In every case, these failures stick out because I think they were outside of that acceptable fail fast method, right? No one said, "Oh, I tried out a test, but it failed." Like I mentioned before, tests are meant to include the possibility of failure. And what's really interesting to me, the more that I thought about it, is I bet all of us were in some kind of agile-esque environment that not only included the concept of testing features, but probably encouraged it. But for some reason, all of these features we talked about didn't go through that process. And the only reason I could think of why we would skip that is because, to us at the time, these features and these products were sure things: features that were a hundred percent gold, definitely going to work, rigorously backed up in our minds by all kinds of evidence. But in every single case, what we thought we knew about users was incorrect. And this is the thread that each of these failures had in common. The second we became sure of our point of view on users, the instant we thought that we knew absolutely what they were going to do or want, I think we became 10 times more likely to ship a failure. And this isn't new, right?
This isn't a special realization that I had. We all know this at some level. I think anyone who builds products knows some cliche about user feedback and "you are not your user," but there's a reason why it comes up over and over again: because we constantly fall into the trap. No matter how many times we hear "just ship it," we still do this. And none of the features I mentioned were by any means the first feature that any of us had worked on. We were pretty much all well along in our careers, some very well along in their careers, when these things happened. And hopefully most of you who are listening are nodding along, thinking about your own failures. And this isn't just something that happens if you don't know better, if you're brand new to product. It happens because we get comfortable. We build up expertise in our industries, our field, within our individual product spheres. And when this happens, I think we start to lose that beginner mindset that makes us less sure of what's going to happen and more willing to spend the time building a test rather than skipping ahead and just building the full feature. And since we're here to accelerate learning, most of us, myself included, are looking for new ways to get things done better and faster. But going back and looking at all of these examples of failures, I think it's possible that the highest leverage thing we could do would be to just focus on constantly following the mantra of "ship smaller and faster." If we want to be able to build and ship better features more consistently, we just can't allow ourselves to think that we know what our customers are going to do, because you literally cannot know until you ship something and see. And you know, you can absolutely build up, I think, a pretty good intuition that's right most of the time. But still, it's no substitute for following a process of research, shipping quickly, and learning.
And that's why those cliches and those processes exist: to remind us that no matter what, we're pretty much always better off shipping a smaller test faster than just going for the big event. So now what? I want to make sure everything we talk about on Build is immediately useful, no matter what kind of team you're on. So let's say you're working on something right now that you're pretty sure is going to work. Maybe you're starting to think about it and realizing that not everything you're basing your feature on comes from user feedback, or from something that you've actually shipped. So what can you do? You can't turn back the clock, but what you can do, or at least start to do, is try to ship just one day earlier: see what you can cut out, try to get your feature out the door, and speed up your learning by at least one day. It's something that I've done, and every time it's been worth it. So that's it for today. It turns out that failure in product is all about losing track of your users and customers, which is sort of amazing because our job in product is to literally be the voice of the customer on the team. And we all do it. And probably the more experienced you are, the bigger the consequences are for making this mistake. So just remember: test everything all the time, and ship earlier than seems to make sense. Thanks for listening. And now that we've gotten to the bottom and explored our failures together, I'm looking forward to finding a bunch of ways to get better and faster at shipping and sharing them all here with you, our Seeking Wisdom community. Hit me up at maggie@drift.com or Maggie Crowley on Twitter. Tell me what you want to hear, who you want me to interview, what secrets I can share with you. And remember, always leave a six star review for this new Build channel, and definitely come to Hypergrowth. Thanks.
