Video Walk-Through

Step-by-Step Instructions

Click to print worksheet.

Now that you have a clear Path to Victory, this exercise will walk you through the first of three steps in designing an experiment that will accurately test your Offer.

Problems it Solves

  • How do you design a valid experiment?
This exercise will help you understand how to design an experiment that will yield reliable data to help you make good business decisions. You'll also learn how to avoid common pitfalls that can invalidate your data.

Keys to Creating a Great Offer Experiment

1. Test for Curiosity

In an Offer Test, you're testing for curiosity.
In Offer Design, you created your Offers using your customers' words - the way they describe their problems - and reflected those problems back to them with the hint of a solution. Next, you'll validate the effectiveness of those Offers by measuring your customers' curiosity about them.

At this stage, you are less interested in how many people fully commit to an Offer (i.e. give you currency for your solution) than in how many people express interest in your Offer to solve the problem.

To do that, you're going to create a False Door test.

In a False Door test, you offer something interesting to your customer, but you put it behind a "door." Then, you measure how many times customers try to open that door.

The test of your Offer is how many people try to open the door.
A False Door test is called a "false" door, because the thing you offered isn't actually behind the door.

For example, if you offer a customer a solution to their "lawn watering" problem in an ad, when they try to "open the door" (i.e. click on the ad) you may take them to a landing page that says the "Smart Lawn Sprinkler is coming soon!" with a field for them to enter their email address to get more information.

In this case you're measuring the number of people who respond to your Offer - the number of people who click on the ad - before even starting to build the smart sprinkler. This experiment will tell you if there is enough demand for solving the problem for you to achieve your Victory.

By the way, if it feels uncomfortable offering something to your customers that doesn't exist, that's normal - and it's a good sign. I will talk about how to deal with the ethics of this situation shortly. At this point, I want to drive home that your goal for Offer Testing is to measure how many people are interested in trying to open the door.

2. What to Measure

Depending on the channel you use to reach your customers, there are different metrics for gauging their curiosity.

Curiosity Metrics

Ads

If you are doing anything ad-related, you are measuring the Click-Through Rate (CTR): the percentage of people who click on your Offer out of the total number of people who see the advertisement.
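As a quick sketch, CTR is just clicks divided by impressions. The numbers below are hypothetical, purely for illustration:

```python
def click_through_rate(clicks: int, impressions: int) -> float:
    """Percentage of people who clicked the Offer out of everyone who saw it."""
    return 100.0 * clicks / impressions

# Hypothetical example: 40 clicks on an ad shown 2,000 times.
print(click_through_rate(40, 2000))  # 2.0
```

Ad platforms report this number for you; the point is simply that your unit of success at this stage is the click, not the sale.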

Existing Apps, Websites or Blogs

The same thing goes if you're making Offers through your existing app, blog or website. If you already have an App/Website/Blog and you would like to know if customers are interested in a new feature or if they would pay to upgrade, you are going to create an Offer for a potential solution to a problem and measure how many people click on that Offer.

For example, if you wanted to add a "sharing" feature to your App, instead of spending time and resources to build out the integration with Facebook, LinkedIn, etc., and seeing if people use it, you could simply add a button that says "Share", and see how many people click on it. If enough people click the button, you will know if it's worth building.

If they come, you will build it.

Forums

Similarly, if you are presenting an Offer in a forum such as Reddit, Twitter, Google Groups, and so forth, you will measure interest by how many people click on a link in your Offer.

B2B: Cold Offers

Cold Offers, typically via email or phone call, are slightly different - you're not looking for Click Through Rates.

In the B2B scenario, you are looking for a "positive response rate" to your cold Offers. In this case, you may email the customer, presenting the problems you have heard from similar customer interviews. The email will also include a hint of the solution, and then some kind of ask: for a meeting, a phone call, etc.

In this case, you are not counting how many meetings you are able to set up: you are measuring how many people respond to your email in some positive way (i.e. a request to be taken off your mailing list would not count; an email saying "thanks, no time now but keep me updated" does count).

Again, you are not counting how many people commit or give you something back: you are only interested in how many people are curious for more information.

Why don't you care about people who commit? Your solution isn't optimized yet. You don't know the right price point, you don't have the right feature set, you haven't established the right social proof, etc. You'll need all of these in place before you're able to accurately measure the number of people who will give you currency for your solution. If you start measuring that now, and few people give you currency, you won't know which part of your solution is out of whack (e.g. the price, features, social proof, etc.).

So instead of trying to measure the effectiveness of all of the components of a sale at once, you're going to optimize each component individually. You're starting by optimizing your initial Offer, then you'll move on to your pitch, then your price, etc.

Eventually, you'll optimize your entire sales workflow, but for now, focus on positive interactions with your Offer.

In-Person Interactions

Curiosity for in-person interactions, such as at conferences, on-the-street interviews, or at meetups, can be measured through the number of people who ask questions for more information about your hint of a solution to their problem. Your metric is the percentage of people who ask questions out of the total number of people you tried to approach.

In other words, if you approach 10 people, and two of them make a convenient excuse about why they can't stick around to talk to you, you still need to count them. If out of those 10 people you tried to approach, three actually stuck around and asked you questions for more information about the Offer, your response rate would be 30%.

For example, imagine you're at a conference for people who maintain golf courses and you're talking with attendees. If you mention to 10 of them that you're building a smart sprinkler to help save water, and then you stop talking, count how many people ask follow-up questions that tell you they're curious about solving the problem.

You don't need to count the number of people who offer to meet with you later to potentially buy a sprinkler system; you're only measuring whether you're describing the problem and hinting at its solution in a way that inspires curiosity in your customers.
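The counting rule above can be sketched as a tiny function. The counts are hypothetical; the key detail is that the denominator is everyone you *tried* to approach, not just those who stayed to talk:

```python
def response_rate(curious: int, approached: int) -> float:
    """Percentage of approached people who asked follow-up questions.
    People who made a convenient excuse and left still count in the denominator."""
    return 100.0 * curious / approached

# 10 people approached (including the 2 who left early), 3 asked questions.
print(response_rate(3, 10))  # 30.0
```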


The channels above are not exhaustive; they are simply examples of ways you can measure curiosity. You are not looking to get anything from your customers: not their time, not their emails, not their money. You simply want to know if their interest is piqued. The Currency Test will come later.

3. What is Behind the False Door?

When someone does express curiosity, what is behind that "False Door" that they have just opened? Here are some tips to making the most of this type of test.
First, don't piss off potential customers.
These are people who care about the problem you are trying to solve. You like them and want to keep them. Some people have suggested putting up an Error Message or a Sold Out message when your customers try to open a False Door (e.g. someone clicks on your ad and is taken to a page that says, "Sorry, we are experiencing technical difficulties at the moment"), but I'm not a fan of this approach.

I'm a bigger fan of just being honest.

Instead of making the customer feel like they've missed out or something is broken, I prefer to tell them honestly something like:

Thank you for your interest! This product is still under development. We'd love to contact you when it's available.

Then, ask to follow up.

Those who click on your link have already shown interest, so it is OK to ask to follow up. The key here is that you won't be measuring follow-up rates. You have not yet optimized for getting a follow-up, and that is not the task of the moment. For now, it is best to appease the customers who are interested and continue to optimize your Offer before moving on.

4. Statistical Significance

In your interviews, you didn't measure statistical significance: you were collecting qualitative data which you could use to create this Offer experiment. Now that you're running an Offer Test, it's time to switch to quantitative data - with quantitative data, statistical significance matters.

Your goal is to achieve statistical significance at a 95% confidence level (more on this in the A/B Testing Exercise).

If you have a small sample size, such as through in-person interactions or cold emails, you may not be able to achieve this level of significance. However, if you've got the data, hold your standard high.
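One simple way to hold that standard is to put a 95% confidence interval around your observed rate and see whether the whole interval clears your success threshold. The sketch below uses a normal-approximation interval, which is my assumption here, not the exercise's prescribed method (the A/B Testing Exercise covers this properly), and the numbers are hypothetical:

```python
import math

def proportion_ci_95(successes: int, n: int) -> tuple[float, float]:
    """Normal-approximation 95% confidence interval for a proportion.
    Assumes a reasonably large sample (roughly n*p and n*(1-p) both >= 10)."""
    p = successes / n
    margin = 1.96 * math.sqrt(p * (1 - p) / n)
    return (max(0.0, p - margin), min(1.0, p + margin))

# Hypothetical: 40 clicks out of 2,000 impressions -> CTR interval of roughly 1.4%-2.6%.
low, high = proportion_ci_95(40, 2000)
```

If even the low end of the interval sits above the CTR you need, you have a significant positive result; if the interval straddles your threshold, you need more data or more time.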

5. Time Boxing

Keep your experiment time-bound. This will prevent you from over-investing in the experiment. I like to keep my own experiments to one to two weeks. This allows me to account for day-to-day variance while not letting the experiment go on for too long.

Depending on the experiment, if I can't get something with statistical significance within that time, I may extend it; however, it is important to recognize when something is just not working, so I can move on.

6. Done is Better than Perfect

This is your first step to putting an Offer or ad in place. You will be tempted to make the best ad or landing page with a really awesome and beautiful experience. Right now, you actually want the simplest version of your Offer to put out there. You will worry about optimizing the experience later.
At this point, your only goal is customer curiosity. Get a simple Offer out quickly. Spend hours, not days, creating it.
Once you get some data back on that Offer, you will be able to tell if it is worth your time to optimize that Offer. The truth is, while making a better user experience may increase your conversion rate, it will not do so to the extent that it will drastically change your decisions moving forward.

Create Your Offer Experiment

This is the first of three steps to creating your Offer Experiment. You will be using the same worksheet for the next three exercises.

Take out the Offer Experiment worksheet and have your Offer Design worksheet handy for reference.

Step 1

Take your Offer 1 from Step 4 of the Offer Design worksheet and write it in Step 1 of the Offer Experiment worksheet.

Step 2

Create your Time Box. Write in a Start date and an End date for your experiment. You want it to be long enough to gather enough data for statistical significance, but short enough that you won't get caught in an endless experiment. If your Offer is exciting to customers, it will be exciting now.

That's it for this exercise!

Recap

In this exercise, you took the first step toward creating your first experiment.

Most importantly, you learned the six most important things to keep in mind while creating your experiment so that you produce high-quality, valid experiments:

  1. Test for curiosity
  2. What to measure
  3. What is behind the false door
  4. Statistical significance
  5. Time boxing
  6. Done is better than perfect

What's Next

Next, you will use your Path to Victory spreadsheet to define your Success Metric Stoplight. This is what will help you decide if an experiment was a success or a failure, and how to proceed.

 

How can we help?

Have a question about Offer Experiment Design? Or did you use/teach the exercise and discover something that may help others?

Our community thrives when you share your experiences.