
#Growth 2: How to Pick, Plan and Execute a Growth Experiment So You Can 10X Product Activations

Episode 191  |  10:01 min  |  12.19.2018


On this episode of #Growth, host Matt Bilotti walks us through how to pick, plan and execute a growth experiment. Most crucial to getting started? Make sure the experiment you choose will give you a statistically significant dataset to run your experiment on and to base your conclusions on. This experiment framework can apply to all parts of your business. Ready to get started with your own growth experiment? Tune into #Growth now.

Matty B: What's up, what's up? Welcome to #Growth. It's your boy, Matty B, aka the guy with the beard from the videos. Today, we're going to be talking about how to pick, plan, and execute a growth experiment: all the things you need to know, step-by-step, to run a growth experiment. For all of you out there who are getting started with this or saying, "Maybe growth is something I should be thinking about," let's talk about what it means to make that happen.

I want to start off with an example around activation. Let's say we wanted to run an experiment to get people activated. People sign up on your website for your product, and then they take some kind of action to become activated. An example I've used in the past: if it were Dropbox, someone would be activated when they upload their first file; if it's Gmail, they would be activated once they send their first email. So if you're picking out an experiment to do, you have your lever, and in this case it's activation.

The first thing to know, and this is super critical, is that statistical significance matters. I learned this the very hard way when I started doing growth experiments. It's a scary, big-number mathematical thing, and to me it was like, "Whoa, what does that exactly mean and how do I measure that?" To really simplify it: if you're running an experiment and you only have a few data points, say you can only run it on 40 accounts, you're not going to get anything from it. Any results you get are meaningless, because there's not enough data to say the outcome was actually caused by the changes you made or the experiment you ran. To get a sense of statistical significance, there are a few really great resources out there; abtestcalc.com is a good one. Basically, the way to think about it is: if you're going to run this experiment, how long will it take to get to a point where you can reasonably say the thing worked? I made a lot of mistakes when we started doing experiments, saying, "Yeah, it'll be fine. We'll have enough data. We'll be able to pull some results from it," only to be two and a half weeks into an experiment, having spent a lot of time and energy, and look around and say, "Oh wow, we're going to have to run this thing for two more weeks to get data out of it." That is a bad place to be in. So just have a sense up front of whether this thing is going to provide you enough data.

Now that you have a sense that you're going to have enough data around something, go ahead and do that thing. If you're trying to get people activated, go sign up for your product and see what the experience is like for them when they're going through that same scenario. Maybe you're not sending any emails; it's surprising how many times you might realize that you have people signing up for your product and you're not sending them emails. Go click around in the product. See if there's anything that points you in the right direction. Go be in the shoes of whoever you're trying to move with that lever. If it's revenue, for example, and people trip a limit in your product and get to a point where they should pay, go do that: sign up for the product, put a bunch of data in, trip the limit somehow, and then see what happens. Does anything prompt you to upgrade? Is a salesperson reaching out? Is there a state in the product that actually locks something down until you upgrade?
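A quick aside on the statistical significance point above: Matt recommends calculators like abtestcalc.com, and the sketch below is only a rough illustration of the kind of math those tools do for you. The 20% baseline and 25% target activation rates, and the result counts, are made-up numbers for the example, not figures from the episode.

```python
import math

def sample_size_per_variant(p_baseline: float, p_target: float) -> int:
    """Rough sample size needed in EACH group to detect a lift from
    p_baseline to p_target (95% confidence, 80% power, normal approximation)."""
    z_alpha = 1.96    # two-sided 95% confidence
    z_power = 0.8416  # 80% power
    p_avg = (p_baseline + p_target) / 2
    numerator = (z_alpha * math.sqrt(2 * p_avg * (1 - p_avg))
                 + z_power * math.sqrt(p_baseline * (1 - p_baseline)
                                       + p_target * (1 - p_target))) ** 2
    return math.ceil(numerator / (p_target - p_baseline) ** 2)

def p_value_two_proportions(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates
    (pooled two-proportion z-test)."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = abs(conv_b / n_b - conv_a / n_a) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# Hypothetical numbers: 20% of signups activate today, and we hope to hit 25%.
print(sample_size_per_variant(0.20, 0.25))            # ~1,100 accounts per group
# After the run: 220/1100 activated in control, 270/1100 in the variant.
print(p_value_two_proportions(220, 1100, 270, 1100))  # ~0.01, under the usual 0.05 bar
```

The takeaway mirrors the 40-account warning: if the math says you need on the order of a thousand accounts per group and you only get a few dozen signups a week, you know up front that the experiment can't give you an answer on a reasonable timeline.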
Go be in the shoes, and once you're in those shoes, the way we like to do it here is we do teardowns of our own product. We go through, focused on whatever experiment we're thinking of running, and take screenshots the whole way, saying, "This thing was confusing. I didn't understand what this was supposed to be. Nothing explained what I was supposed to do next." Put that all down and then spend 10 to 15 minutes just thinking about ideas. Write them all down, based on the teardowns you've done and the ideas you might have from stuff you saw in other products. Then look at the list again and tell yourself that you're probably not thinking big enough. It might be really tempting to make a small tweak, like changing where a button is on the dashboard, but ultimately those are not the kinds of experiments that are going to get you the real big changes and the data you need to know whether the thing worked or not. So think bigger. Go back through that list and ask, "How can we 10x this? How can we 10x this idea?" Maybe instead of moving a button in the dashboard, you remove all the content in the dashboard and put in one button, and that button is the only thing people see and the only thing you're driving them through.

Next, you need to outline what you need to do to make this happen. Think about it as a grammar school science project: you're going to use the scientific method. At Drift, for each experiment, we write a growth one-pager, which basically outlines the scientific method for that experiment. The sections are: observations, hypothesis, the experiment itself, background and context, general requirements, concepts and references, experiment size and control, and the metrics you want to track that will tell you whether it succeeded at the end of the day. So those are the things: observations, hypothesis, experiment, background and context, requirements, concepts and references, experiment size and control, and success metrics. Now you might be saying, "Whoa, that's a lot of things. How much time should I be spending on this?" Generally, putting together a one-pager should take no more than an hour, and one really important note is the hypothesis: you need to put a number in it. It's really easy to just say, "Well, if we move this button on the dashboard, more people will click it." That's not something you're going to be able to prove or disprove the way you need to. Put a number on it: "If we do this, then we will get 5% more people clicking that button."

From here, you have your one-pager. It's built, it looks great, you didn't spend too much time on it, but you got some stuff together. Send it over to the rest of the team that will be implementing it and working on it, so the designers and the engineers, and then go ahead and work to build it out. And this part is really, really tricky, and we're going to talk about it in the next episode: finding a way to create a control group. A control group is the group of people you are not introducing the change to, so that you can measure the success of the change or experiment you ran. Super, super important. Been burned on that a lot. We'll talk about it another time. And then once you build the experiment, turn it on and don't touch it.
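The episode saves control groups for the next installment, but if you want a concrete picture of "the group you don't show the change to," one common approach (a minimal sketch, not necessarily how Drift implements it) is deterministic bucketing by account ID, so each account always lands in the same group. The experiment name and IDs below are hypothetical.

```python
import hashlib

def assign_group(user_id: str, experiment: str, control_share: float = 0.5) -> str:
    """Deterministically put a user into 'control' or 'variant'.

    Hashing the experiment name together with the user ID keeps the
    assignment stable across sessions and independent across experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 2**32   # pseudo-random float in [0, 1)
    return "control" if bucket < control_share else "variant"

# Hypothetical experiment name from the 10x idea above: strip the dashboard
# down to a single call-to-action button.
print(assign_group("account_42", "dashboard-single-cta"))
```

Because the assignment is a pure function of the IDs, you can log which group each account fell into the first time it hits the experiment and compare activation rates between the two groups at the end.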
It's really tempting to keep tweaking once the experiment is live, especially if you're coming from any kind of super iterative background, maybe sales, where you change stuff on every single call, or product, where you're constantly iterating. With an experiment, set it and let it go. If you start touching it and changing it and saying, "Oh, well, maybe we could change the button color instead of just moving it," then you're introducing new variables, and that's going to make it really, really hard to know whether the experiment actually worked or not.

To bring this all home, there are a couple of really important points here. One is that you have to have a really great hypothesis. Put yourself back in your ten-year-old shoes. Remember what it was like to prove that black cloth makes something hotter, or that feeding a plant lemonade makes it more likely to grow, whatever it was you did for your science fair. I think those are some of the ones that I did. Put yourself in those shoes and have a really, really solid hypothesis. The other is to have an internal process for this. For us, it's these one-pagers, the documentation around the experiment. It's really important to have that reference point to come back to. Don't get too wrapped up in making it perfect, but have something written down that says, "Here's the thing we're testing. Here's roughly what it's going to look like. Here are the numbers we want to move. And here are the metrics we're going to measure as a result." At the end of the day, the most important point is that you're testing the thing. You're going to run an experiment. Don't get so wrapped up in the numbers; it's okay to be imprecise with this stuff, especially when you're just getting started. Over time, you'll develop more of a process and a down-pat way to build out and run experiments.

All right, thanks so much for listening. Really appreciate it. Six stars only. Seven stars. DC might be saying eight stars only these days. Who knows. So thanks again. Catch you next time.
