When you’re making marketing decisions every day, you’re probably not thinking about taking time out to run an experiment.

You’re too busy writing email copy, designing your latest landing page, or crafting the perfect social media update to be creating tests, optimizing treatments, or shattering null hypotheses.

A/B testing is a way of comparing two versions of something to see which works better. To run an A/B test, marketers show two versions of one piece of content to two audiences of the same size.

To determine which version is most effective, marketers compare the results to see whether there is a statistically significant difference between them. If the difference is significant at a confidence level of 95% or more, the version with the better results is the winner. Split testing your marketing assets can help you increase leads and conversions.
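Under the hood, that 95% threshold typically comes from a two-proportion z-test. Here is a minimal sketch in Python; the function and the numbers are hypothetical, and real tools layer on corrections for things like peeking and multiple comparisons:

    # A minimal sketch of the significance check behind an A/B test:
    # a two-proportion z-test. All numbers here are hypothetical.
    from math import sqrt
    from scipy.stats import norm

    def ab_test(conv_a, n_a, conv_b, n_b, confidence=0.95):
        """Return (winner, p_value) for a two-tailed two-proportion z-test."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled conversion rate
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        p_value = 2 * (1 - norm.cdf(abs(z)))                # two-tailed p-value
        if p_value <= 1 - confidence:
            return ("B" if p_b > p_a else "A"), p_value
        return None, p_value                                # no significant winner

    # Hypothetical example: 200/10,000 vs. 260/10,000 conversions
    print(ab_test(200, 10_000, 260, 10_000))   # ('B', ~0.005): B wins at 95%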

Unfortunately, this isn’t what everyone hears about A/B testing. There are many myths that prevent smart marketers from making accurate, data-driven decisions.

We’re going to dispel some of the most prevalent misconceptions about A/B testing. Let’s get started!

1. Marketers’ instincts work better than A/B testing

Even the best marketers can be wrong.

After years of experience, we generally have a solid understanding of what works to convert visitors into leads, and leads into customers — but our decisions shouldn’t only be guided by our instincts.

Split testing lets you pit different versions of your website against each other and see which one converts more visitors. Theoretically, A/B testing has the potential to increase leads by 30-40% for B2B sites and 20-25% for ecommerce sites.

The moral of this story: if you rely only on the opinion of the highest-paid person in the room, you could be missing out on potential revenue.

2. You should use A/B testing before making every single decision

While split testing can help you make marketing decisions, you don’t need to test every single one. Some changes simply aren’t worth testing.

You don’t need to run an A/B test on clickthrough rates for headlines like “The Marketer’s Guide to Pinterest” and “A Marketer’s Guide to Pinterest.”

Split tests can be effective for slight changes, such as the color of your CTA, but swapping a word like “a” for “the” won’t produce a significant change in conversion rates. If you want to test two headlines with genuinely different positioning, then run an A/B test.

3. A/B testing is not as effective as multivariate testing

A/B testing and multivariate testing are both great ways of using data to make marketing decisions, but they are used for very different purposes.

A/B testing changes one element in two or more ways to see which variation produces the best result. Multivariate testing (MVT) analyzes multiple combinations of elements and treatments at once to establish which combination is the most effective.

You would use an A/B test if, for example, you wanted to see how changing the color of your CTA affects your conversion rate while keeping everything else on the page the same: the traffic sources, the type of visitor, the form layout, and even the accompanying copy and image.

What is the impact of the color of a CTA on conversions?

You aren’t trying to see how different combinations of elements impact conversions. For example, you wouldn’t explore how the color of a call-to-action button, the number of fields in a form, and the type of image used would affect conversions.
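To see why multivariate testing demands so much more traffic, consider how quickly combinations multiply. A quick sketch (the element lists below are made up for illustration):

    # Hypothetical sketch: counting MVT combinations with itertools.
    # Three elements with a few treatments each already yields 12 variants,
    # and every variant needs its own share of traffic to reach significance.
    from itertools import product

    cta_colors = ["red", "green", "orange"]
    form_fields = [3, 5]
    images = ["product shot", "customer photo"]

    combinations = list(product(cta_colors, form_fields, images))
    print(len(combinations))    # 3 * 2 * 2 = 12 variants
    print(combinations[0])      # ('red', 3, 'product shot')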

No single test is inherently better than another; they simply serve different purposes.

4. If a treatment works for one marketer, it will work for any marketer

Although there are many A/B testing case studies that show the success of certain layouts, designs, and copy in terms of conversion rates, you should never just follow what other marketers are doing without testing it yourself first.

Every testing situation is different. The main point: you can’t simply copy another website, because what works for it may not work for yours.

Although it may not be original, looking at someone else’s marketing playbook can be a helpful starting point for your own business. For example, if you want to improve your email clickthrough rate, you might try a personalized sender name.

We conducted a test in 2011 to see if including a personal name from someone on the HubSpot marketing team in the email’s “From” field would increase the email CTR.

The control group, which saw “HubSpot” in the “From” field, had a 0.73% clickthrough rate, while the group that saw “Maggie Georgieva, HubSpot” had a 0.96% clickthrough rate. The personalized “From” field was a clear winner with 99.9% confidence.
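For intuition, a result like this can be sanity-checked with the same two-proportion z-test sketched earlier. The list size below is an assumption for illustration only, since the test’s sample sizes weren’t published:

    # Hypothetical sanity check of the email test. The CTRs come from the
    # article; the number of recipients is an assumption.
    from math import sqrt
    from scipy.stats import norm

    n = 300_000                                # assumed recipients per variant
    ctr_control, ctr_variant = 0.0073, 0.0096
    p_pool = (ctr_control + ctr_variant) / 2   # equal-sized groups
    se = sqrt(p_pool * (1 - p_pool) * 2 / n)
    z = (ctr_variant - ctr_control) / se
    confidence = 2 * norm.cdf(abs(z)) - 1      # two-tailed confidence
    print(f"z = {z:.1f}, confidence = {confidence:.4%}")
    # With samples this large, the lift clears 99.9% confidence easily.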

These results may or may not apply to your audience. Run your own A/B tests to find out which marketing tactics work best for yours.

5. You need to be a tech-savvy marketer with a large budget to do A/B testing

A/B testing doesn’t have to be expensive. If you’re on a very low budget, there are free split-testing tools available, like Google Analytics’ Content Experiments, though you’ll need to be somewhat technically inclined to use them.

Paid A/B testing tools usually cost more up front but are less technically demanding, and because they let you work more quickly, they may reduce your overhead costs.

In addition to managing technology and budget, you need some basic math to carry out a split test properly. A winning variation has to be statistically significant, which means you need to know how to interpret the results.

You can determine if something is statistically significant by using HubSpot’s free A/B testing calculator.

You will need varying levels of technological and mathematical skills depending on what resources you have available. If you do not mind working with numbers and technology, you can still A/B test without a large budget.

6. A/B testing is only for sites with a ton of traffic

You don’t need a huge audience to do A/B testing; you just need enough visitors to get reliable results.

More visitors will give you more accurate data on what works and what doesn’t, but there is no universal minimum for an A/B test. What matters is having enough people for your test to reach statistical significance.

There are plenty of free tools online to help you estimate how many visitors you need without needing a degree in statistics.
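If you would rather estimate it yourself, the standard two-proportion sample-size formula is short enough to code directly. A minimal sketch, assuming the conventional 5% significance level and 80% power:

    # Sample size needed per variant for a two-proportion A/B test.
    # Assumes a two-tailed 5% significance level and 80% power.
    from math import ceil, sqrt
    from scipy.stats import norm

    def sample_size_per_variant(p1, p2, alpha=0.05, power=0.80):
        """Visitors needed in EACH variant to detect a lift from p1 to p2."""
        z_alpha = norm.ppf(1 - alpha / 2)      # critical value, two-tailed
        z_beta = norm.ppf(power)
        p_bar = (p1 + p2) / 2
        top = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
               + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
        return ceil(top / (p1 - p2) ** 2)

    # Hypothetical example: detecting a lift from 3% to 4% conversion
    print(sample_size_per_variant(0.03, 0.04))   # about 5,300 per variant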

7. Split testing and optimization are the same thing

After a few years of relative obscurity, split testing has become one of the most commonly used optimization strategies. Its growth in popularity stems from two seductive things:

  • Extremely low barrier to entry (low cost and easy tech implementation),
  • Countless case studies depicting 300% lifts in conversions.

According to TrustRadius, nearly half of all companies surveyed plan to increase their spending on split testing in the next year.

The term’s surge in popularity has also flattened its meaning. Many marketers now treat split testing as synonymous with conversion rate optimization, but it is only one tactic within it: the goal of conversion rate optimization is to use data to improve the customer experience and win as many conversions from your website as possible.

8. You should only run iterative tests

There are other testing methodologies you can pair with split tests, but iterative testing is often held up as by far the most effective.

An iterative testing program uses the learnings from each test to inform the next. Iterative tests generally involve making small changes to a page, implementing the change that produces the greatest response, and then testing another small change.

Chris Goward, CEO of WiderFunnel, refers to iterative testing as Evolutionary Site Redesign (ESR). He has observed that successful teams focus on testing and using rather than debating and analyzing. This is a valid practice, but it shouldn’t be your only testing approach: the iterative testing process can be very effective, but it makes some unrealistic assumptions.

The first thing to keep in mind is that not every test leads to a win for your website; many tests produce no positive effect at all. If your tests show no improvement between variations, your website isn’t becoming any more successful.

Second, tests built on small changes rarely surface big wins. Most of the time, making small changes to elements produces only small changes in the overall outcome.

Sure, there are plenty of case studies where a button color change increased clicks by 400%, but those kinds of results are generally:

  • Impossible to replicate
  • An indication that something is broken in your testing process or technology

Optimizers should use a combination of innovative and iterative testing techniques. I’m a big fan of the innovation technique (a complete redesign of a page) when it makes sense.

9. You should test everything

This is one of the most damaging myths: not everything should be tested! If you take away only one point from this article, let it be this one. For every split test you run, there are an infinite number of other tests you could have run instead.

You want to test on pages that will bring in revenue and test elements that matter! There are two types of pages you should never test:

  • The broken page that just needs to be fixed.
  • A page that has no impact or an inconsequential impact on revenue.

As marketers, we can get lost in all the metrics that interest us. Metrics matter, but remember that bosses and clients are more interested in money.

10. Everyone can/should split test

If your variations can’t each reach at least 100 conversions, the test isn’t worth running. If you run it anyway, treat the data with caution, because it’s likely to be unreliable.

Since split testing and CRO aren’t the same thing, you can still optimize a low-traffic site using other verification methods, including:

Qualitative insights come from sources that don’t require statistical analysis, such as heatmaps, user surveys, and personas.

A persona is a semi-fictional character that represents your ideal customer. You usually create several personas to account for different customer types.

Real-time personalization is a method of user segmentation that serves each segment content tailored specifically to it.

A sequential test compares results from two distinct time periods. Though the method isn’t held in high regard, it’s one way for low-traffic sites to gauge what’s working. Use it sparingly, if at all.

Micro conversions are conversions that happen on earlier, lower-commitment pages, such as the homepage. When you don’t have enough deep-funnel conversion data, you can use micro conversions as indicators and split test against them to see what works best.

For example, you may not be able to test a low-traffic ecommerce site at the sales level, but you can treat higher-funnel conversions as an indication of success.

11. Test results will remain constant

A test’s results are a measure of how well it performed during a specific timeframe. There are a few things that could potentially change your conversion rates in the future, with time being the biggest factor.

A test’s results will vary with the season. A test run in November, for example, will produce different results than the same test run in March. Why?

Your visitors’ intentions are very different at these two times! Monitor your conversion rates whenever you make your winning variation live. I recommend this for two reasons:

  1. You will identify imaginary lifts
  2. You will be able to identify more optimization opportunities

An imaginary lift is a test result that doesn’t match reality. It usually traces back to a false positive: a sample size that was too small, a flaw in how the test was run, or metrics that were misread.
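You can watch an imaginary lift manufacture itself in simulation. A minimal sketch (the true rates, sample size, and trial count are all made up): when a test is underpowered, the runs that happen to reach significance report lifts far larger than the true difference.

    # Hypothetical simulation of imaginary lifts (the "winner's curse").
    # The true lift is 10%, but the underpowered tests that happen to
    # reach significance report much larger lifts.
    import random
    from math import sqrt
    from statistics import mean
    from scipy.stats import norm

    random.seed(42)
    TRUE_A, TRUE_B = 0.030, 0.033      # a real but modest 10% relative lift
    N = 1_000                          # visitors per variant: far too few
    winning_lifts = []

    for _ in range(5_000):
        conv_a = sum(random.random() < TRUE_A for _ in range(N))
        conv_b = sum(random.random() < TRUE_B for _ in range(N))
        p_a, p_b = conv_a / N, conv_b / N
        pool = (conv_a + conv_b) / (2 * N)
        se = sqrt(pool * (1 - pool) * 2 / N)
        z = (p_b - p_a) / se if se else 0.0
        if p_b > p_a and 2 * (1 - norm.cdf(z)) < 0.05:
            winning_lifts.append((p_b - p_a) / p_a)

    print(f"average 'winning' lift: {mean(winning_lifts):.0%}")
    # Prints a lift far above the true 10% (the imaginary part).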

If you catch problems early, you might just save your job. When someone has been told to expect an increase in sales, it’s frustrating for everyone when the results never appear.

When you monitor these pages, you’ll see any changes as they occur. And time isn’t the only thing that degrades conversion rates:

  • changes in traffic sources,
  • fading offer appeal,
  • design paradigm shifts.

When you check these pages regularly, you can understand why conversions are falling and launch a new campaign to improve the situation.

About the Author: Brian Richards

See Brian's Amazon Author Central profile at https://amazon.com/author/brianrichards
