Optimizing Your Campaigns With A/B Split Testing

In this chapter, we will talk about how to optimize your campaigns with A/B split testing.

As you know by this point, the beauty of direct response marketing is that everything is measurable and trackable, so we can improve anything and everything over time.

A/B Split Testing

What that means is that the worst your campaign will ever perform is how it’s performing today.

 

That’s the way that I want you to look at all of your marketing.

 

The beauty is that no matter how badly a campaign is performing today, A/B split testing, when done correctly, lets you improve it over time.

 

And no matter how great your marketing campaign may be performing, we can always use A/B split testing to increase its performance over time. What counts as great performance today might, a month from now, be the worst your campaign has ever performed, because it will have gotten so much better.

 

Now, the way we improve the performance of a campaign is mathematical. It’s scientific. We don’t do anything with assumptions, guesses, or opinions. Let me say that again. There is no room in the world of direct response for assumptions, guesses, or opinions. Because we’ve got data, because we’ve got numbers, because we’ve got metrics, and because we can track everything, we can make our decisions objectively with data instead of subjectively, based on our feelings or emotions or desires or wants, for that matter.

 

That’s the beauty of direct response: because we’re tracking, measuring, and gathering data, we don’t have to worry about whether we’re making the right decision or not.

 

We don’t have to wonder whether a change we made improved performance or hurt it. We can simply look at the data and let the data tell us.

 

We can also take our ideas about what might increase performance, or what might work better, and put them to the test.

 

We don’t have to scratch our heads and wonder whether a change genuinely improved performance. We can simply look at the data.

 

The way we do that is with A/B split testing. Now, the fancy definition for A/B split testing is that it’s a randomized experiment with two variants.

 

For this, I want you to remember: two variants. This is not what’s called multivariate testing. We’re talking about two variants, an A versus a B. What we’re doing is comparing two versions of a single variable. We’re testing one thing, one element, one variable, with two variants.

 

We’re taking a headline, for example, which is a single variable. Then we’re testing two variants against each other, version A and version B of that one variable, the headline.

 

We’re comparing the response we get to variant A against the response to variant B to determine which of the two variants produces the result we’re looking for.
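To make the mechanics concrete, here’s a minimal sketch in Python of how that 50/50 split might be assigned. The function name and the hash-based bucketing are illustrative choices of mine, not any particular tool’s API; the point is simply that each visitor is randomly, but consistently, assigned to variant A or variant B.

```python
import hashlib

def assign_variant(visitor_id: str, test_name: str) -> str:
    """Deterministically bucket a visitor into variant "A" or "B".

    Hashing the visitor ID (rather than calling a random number
    generator on every request) keeps the split roughly 50/50
    across the population while guaranteeing that the same visitor
    sees the same variant on every repeat visit.
    """
    digest = hashlib.md5(f"{test_name}:{visitor_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# The same visitor always lands in the same bucket:
print(assign_variant("visitor-123", "headline-test"))
print(assign_variant("visitor-123", "headline-test"))  # identical output
```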

 

A simple graphic from the folks over at Visual Website Optimizer illustrates what A/B split testing is all about. Let’s say we’ve got a webpage, and, for argument’s sake, the red version in the graphic represents one headline and the green version represents another headline. Everything else on the page is the same, because we’re only testing a single variable. We’re running half of the traffic to each of these versions, and the split is happening randomly and live.

 

What I mean by that is that we’re not sending today’s visitors to version A and tomorrow’s visitors to version B. We’re splitting the visitors as they arrive.

 

We’re using a random live split, dividing the traffic in half, and then measuring the conversion. Remember, everything else except the two headlines is identical.

 

And then we look at the results. We changed the headline, and headline A produced a 23% conversion rate while headline B produced only an 11% conversion rate. We sent the same amount of traffic on the same days, at the same intervals, at the same times, yet one headline produced 11% and the other produced 23%.
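As a worked example, here’s the arithmetic behind those percentages, sketched in Python; the visitor and conversion counts are invented purely for illustration:

```python
# Hypothetical raw counts from an evenly split headline test.
results = {
    "A": {"visitors": 1000, "conversions": 230},
    "B": {"visitors": 1000, "conversions": 110},
}

for variant, counts in results.items():
    rate = counts["conversions"] / counts["visitors"]
    print(f"Headline {variant}: {rate:.1%} conversion rate")
# Headline A: 23.0% conversion rate
# Headline B: 11.0% conversion rate
```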

 

Headline A, the 23% version, becomes the winner, and it gets labeled as what we call the control.

 

What that means is that the winner now becomes the standard, and every future variant gets tested against it. Everything else now has to beat the control.

 

We can continue to run more tests. In this case, we would get rid of variant B and replace it with a new variant C. Then we would start a new test, splitting the traffic between version A and version C to see if we can push conversions even higher. Maybe version C produces a 26% conversion rate. If that’s the case, version A is no longer the control, and version C becomes the new control. This is a beautiful way to consistently improve the performance of a marketing campaign.
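That champion-versus-challenger cycle can be sketched in a few lines of Python. This is purely illustrative; `run_test` is a hypothetical stand-in for running a full, statistically valid A/B split test like the one described above:

```python
def champion_challenger(control, challengers, run_test):
    """Test each new variant against the current control.

    `run_test(a, b)` is assumed to run a complete A/B split test
    and return the winning variant. Whichever variant wins becomes
    the control that the next challenger has to beat.
    """
    for challenger in challengers:
        control = run_test(control, challenger)
    return control

# Hypothetical usage: headline A is today's control, B and C are new ideas.
# best = champion_challenger("headline A", ["headline B", "headline C"], run_test)
```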

 

Now, when you are getting ready to optimize or improve the campaign, you want to begin by identifying the constraint.

 

You want to look at your conversion metrics, your performance metrics that we talked about earlier on in this track. You want to look at your opt-in conversion rate. You want to look at your sales conversion rate. You want to look at your order form conversion rate. You want to look at your upsell conversion rate.

 

And you want to find the constraint: the worst-performing, underperforming page in the campaign. You might have a 50% opt-in rate, a 1% sales conversion rate, an 80% order form conversion rate, and a 20% upsell conversion rate.

 

In this scenario, the reality is this: if you’ve got a 50% opt-in conversion rate and a 1% (or, let’s call it, a 0.5%) sales conversion rate, then working on the opt-in rate, working on the lead capture page, trying to push it from 50% up to 53%, 55%, or 56%, or whatever it may be, is not working on the constraint.

 

That’s not the weak link in the chain. The weak link in this example is the sales page, the sales conversion rate. I want you to think about your campaign as a chain. Every chain has a weakest link, right? And the weakest link determines the strength of the whole chain. It doesn’t matter if one link can withstand 1,000 lb. of pressure, another 800, another 900, another 1,000, if there’s one link that can only withstand 350. Once we put more than 350 lb. of pressure on that chain, that link is going to snap.

 

What we’re talking about here is looking at each of your conversion metrics, each of your performance metrics, and asking which one is the weakest link: which underperforming step or stage of the campaign is holding everything else back.
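In code terms, finding the constraint just means comparing each stage of the funnel against a sensible benchmark and flagging the worst performer. Here’s a minimal sketch; the funnel numbers come from the example above, while the benchmarks are invented solely for illustration:

```python
# Conversion rates from the example funnel, alongside rough,
# made-up benchmarks of what each kind of page "should" do.
funnel = {
    "opt-in":     {"actual": 0.50,  "benchmark": 0.40},
    "sales":      {"actual": 0.005, "benchmark": 0.02},
    "order form": {"actual": 0.80,  "benchmark": 0.75},
    "upsell":     {"actual": 0.20,  "benchmark": 0.25},
}

def find_constraint(funnel: dict) -> str:
    """Return the stage performing worst relative to its benchmark."""
    return min(funnel, key=lambda s: funnel[s]["actual"] / funnel[s]["benchmark"])

print(find_constraint(funnel))  # -> "sales"
```

Comparing each stage against a benchmark, rather than comparing raw rates against each other, matters here: a 0.5% sales conversion rate and a 50% opt-in rate aren’t directly comparable, but each can be judged against what that kind of page should be able to do.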

 

Once you’ve identified it, in our example the sales page with its dismal conversion rate, that page becomes the constraint.

 

This is recognized as the weak link, the constraint. And then what we do is run a split test on the constraint. You focus your A/B split testing on the constraint, on the weakest link. There’s no use improving one of the stronger steps in the campaign until you’ve fixed the constraint.

 

We focus on the constraint. When it comes to starting the split testing on the constraint, you want to test the things that scream, not the things that whisper. That means you want to test the things that can double or triple conversions.

 

You’re not looking for tiny, incremental, half-a-percent increases. You’re not looking to test things like button color, background color, or font. You’re looking to test the things that scream.

 

The things that scream are things like the big marketing ideas, the headline, the offer, the lead, and the page’s format.

 

These are things that can make or break a campaign.

 

Changing the big marketing idea can double your conversions, no doubt.

 

Changing the headline, same thing.

 

Changing the offer, same thing.

 

Changing the lead, same thing.

 

These are the biggies.

 

And then, oftentimes, I’ve seen that changing the format of the page, not tweaking the layout, but trying a wildly different format, can make a tremendous difference.

 

Once you’ve identified the constraint, you know that you’re going to start with one variable or one element on the page, and then you’re going to have two variants, right?

 

You’re going to identify a single element on the page, whether it’s the headline or the offer. You’re going to test one element on the page, two variants, and you’re going to do that on the constraint, with one of the things that scream.

A few final tips related to A/B split testing. Number one: you need to run all of your tests until they are statistically valid.

 

Now, more often than not, the platform that you’re using, whether it’s a tool like Visual Website Optimizer, the split testing tool from Google, ClickFunnels, or almost any of the others, will tell you when the test is statistically valid.

 

What I mean is that you have to make sure the sample size is big enough for the statistics, the numbers, the metrics, to be meaningful.

 

If you send 20 people to a webpage and nobody opts in, for example, that doesn’t mean anything, because 20 people is not a big enough sample size to tell you anything.

 

Over the next 20, or 30, or 40, or 50 people, you might get a 50% opt-in rate. You’ve got to make sure that you are using a big enough sample size, and you’re running it long enough for you to get statistically valid data.

 

Only make decisions about what you’re going to stop, start, add, change, or keep in your campaign based on statistically valid tests. There’s nothing worse than making a decision based on a test that isn’t statistically valid: because the test wasn’t valid, you got the wrong information and took the wrong action. Make sure that you run all of your tests until they are statistically reliable.
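To make “statistically valid” concrete, here’s a minimal sketch of the kind of check those platforms run under the hood: a standard two-proportion z-test on the raw counts. This is the generic textbook formula, not any specific tool’s method, and the counts are invented for illustration:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Compare two conversion rates; return the z-score and two-sided p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)          # blended rate under "no difference"
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))        # two-sided normal tail
    return z, p_value

# 20 visitors per variant: a big gap, but not statistically valid yet.
print(two_proportion_z_test(5, 20, 2, 20))            # p is roughly 0.21
# 1,000 visitors per variant: the 23% vs. 11% result from above is decisive.
print(two_proportion_z_test(230, 1000, 110, 1000))    # p is far below 0.05
```

A common convention is to treat a result as significant once the p-value drops below 0.05. Notice how 20 visitors per variant can’t clear that bar even with a large gap in conversion rates, while 1,000 visitors per variant can.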

 

Next, don’t assume that the test results for one stage of your campaign apply to a different stage of your campaign.

 

That means don’t assume that just because a test on your upsell offer produced a bump in conversion rate, doing that very same thing on your main offer is also going to increase its conversion rate.

 

You have to recognize that different tests are appropriate for different stages. What you’re testing on a lead capture page might not produce the same bump on a sales page.

 

What produces a lift on a sales page might not produce the same lift on an upsell page. Each stage warrants its own test. Don’t just blindly apply what you found to work at one stage to another stage.

 

Make sure that you take it and test it at the other stage as well.

 

Finally, only use other marketers’ testing data, insights, and results for idea stimulation.

 

In other words, there are a lot of people out there on the web who publish their testing data and their testing results.

 

Frankly, we don’t know whether their test was done correctly, with a proper randomized split. We don’t know if they ran it to statistical reliability or validity. And even if they did conduct the test accurately, and the data is statistically valid for their particular test, different markets perform differently.

 

Different markets are made up of individuals with different demographics, different psychographics, and different levels of experience online.

 

Different markets respond differently to different marketing campaigns and different formats, different layouts, different ideas, different headlines, different offers.

 

Just because something worked well for one marketer or one entrepreneur in a different market does not mean that it will work for you.

 

That doesn’t mean that you shouldn’t pay attention. It just means that you should make a note of what it is that they tested, and then run that same test, or consider running it, in your own market to confirm whether it gives you a lift or not.

 

You want to be smart about it, and you don’t want to blindly copy or apply somebody else’s results to your marketplace because it could have a wildly different impact. There you go. Now you understand the basics of A/B split testing.