How to Use A/B Testing to Send Better Email Campaigns


With so much competition in the inbox, today’s email marketing is all about relevance and connecting with your audience. Batch-and-blast emails aren’t going to make the cut—you need to be strategic and send emails that your subscribers want. How do you figure this out? You can use A/B testing to send better email campaigns.

We know that making smart decisions based on data can increase results, but sometimes it’s hard to know where to start. Every year at Litmus Live, we cover how to produce great-looking—and great-performing—emails, sharing examples of companies that use data and insights from a variety of sources to understand their audience and create engaging, unique campaigns.

At the 2014 conference, Mike Heimowitz, Atlassian’s online marketing manager, presented on using A/B testing to learn what resonates with your audience so you can continuously optimize your emails. With this data in hand, you’ll be able to produce better-performing campaigns (and, hey, maybe even make more money!).

WHAT IS A/B TESTING?

A/B testing involves comparing the results of one version of an email (the control) against another version (the test). When executed correctly, these tests give marketers concrete evidence of which tactics work with their audience and which don’t. There are countless things to test, including headlines, preheader text, From names, and the like. It’s one of the most effective (and easiest!) ways to make measurable improvements to your campaigns.

SETTING UP A TEST

When setting up an A/B test, the first step is deciding what aspect of the email you will be testing—is it the color of a button? A graphic? A subject line? Then, since testing is a continuous process, you’ll want to formulate a hypothesis for your test so it’s repeatable. For example, a hypothesis would be “If we use our company name as the From name, rather than a salesperson’s name, open rates will increase because our subscribers recognize the company name.”

Starting with a hypothesis makes the test repeatable: if the results are conclusive and your hypothesis was correct, you can run the same test again in the future (and continue to improve your emails!). Once you’ve settled on a hypothesis, choose which type of test you’d like to run. Mike covered three types of A/B tests in his presentation:

  • 50/50 test: Send version A to 50% of your audience and version B to the other 50%.
  • 25/25/50 test: Send version A to 25% of your audience and version B to the other 25%. After a certain amount of time—perhaps a couple of hours or days depending on your list size—send the winner of that test to the remaining 50% of the list.
  • Holdout test: Don’t send 10% of your list an email at all, and send version A to 45% and version B to 45%. Then, look at the conversions of your subscribers. Did those that received version A, version B, or didn’t receive an email at all convert the best? This can help show the effectiveness of email!
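
To make the three designs concrete, here is a minimal Python sketch of randomly splitting a subscriber list into the groups above. The function name, fractions, and example addresses are illustrative assumptions, not tied to any particular ESP.

```python
# A minimal sketch of randomly splitting a subscriber list for the three
# test designs above. Fractions and field names are illustrative only.
import random

def split_list(subscribers, fraction_a=0.5, fraction_b=0.5, holdout=0.0, seed=42):
    """Shuffle the list and carve it into version A, version B, an optional
    holdout group that gets no email, and a remainder for the winning send."""
    assert fraction_a + fraction_b + holdout <= 1.0 + 1e-9
    shuffled = list(subscribers)
    random.Random(seed).shuffle(shuffled)

    n = len(shuffled)
    a_end = int(n * fraction_a)
    b_end = a_end + int(n * fraction_b)
    hold_end = b_end + int(n * holdout)

    return {
        "version_a": shuffled[:a_end],
        "version_b": shuffled[a_end:b_end],
        "holdout": shuffled[b_end:hold_end],
        "remainder": shuffled[hold_end:],  # receives the winner in a 25/25/50 test
    }

subscribers = [f"subscriber{i}@example.com" for i in range(10_000)]

fifty_fifty = split_list(subscribers)                                     # 50/50 test
quarter_test = split_list(subscribers, fraction_a=0.25, fraction_b=0.25)  # 25/25/50 test
holdout_test = split_list(subscribers, fraction_a=0.45, fraction_b=0.45,
                          holdout=0.10)                                   # holdout test
```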

The next step is setting up the test in your Email Service Provider (ESP). All ESPs have different testing capabilities—some offer straightforward 50/50 split testing, others offer 25/25/50 testing, others support custom splits, and some may not have a testing platform at all. However, as Mike put it in his presentation, just as a good carpenter can’t blame their tools, a good marketer can’t blame their ESP. Even if your ESP has no testing platform, you can still set up tests. It may be a more time-consuming, manual process, but you can still split your list and send different variations of your outgoing messages.

You’ll also need to identify the data point(s) you are going to measure in the test—clicks, opens, conversions? Be sure to set your own goals, and don’t lean on industry baselines. Your audience and emails are unique, so treat them that way! If you haven’t done A/B testing before, you can use your current open, click, and conversion rates as the baseline for your test results.
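
If you want a quick way to establish that baseline, a short script over your past campaign reports is enough. This is a minimal sketch with made-up numbers; swap in the delivered, open, click, and conversion counts your own ESP reports.

```python
# A minimal sketch of turning past campaign reports into baseline rates.
# The campaign numbers below are invented for illustration.
past_campaigns = [
    {"delivered": 10_000, "opens": 2_100, "clicks": 320, "conversions": 41},
    {"delivered": 9_500, "opens": 1_980, "clicks": 305, "conversions": 38},
]

delivered = sum(c["delivered"] for c in past_campaigns)
baseline = {
    "open_rate": sum(c["opens"] for c in past_campaigns) / delivered,
    "click_rate": sum(c["clicks"] for c in past_campaigns) / delivered,
    "conversion_rate": sum(c["conversions"] for c in past_campaigns) / delivered,
}

for metric, rate in baseline.items():
    print(f"{metric}: {rate:.1%}")  # e.g. open_rate: 20.9%
```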

MEASURING THE SUCCESS OF YOUR TEST

It’s important to choose the statistical significance level for your test. For example, a statistical significance of 98% means there’s only a 2% chance that the difference you observed is the result of random chance rather than a real effect.

If the statistical significance isn’t high, then you wouldn’t want to make future decisions based on those test results. For example, if you got a 75% statistical significance for using blue buttons vs. green buttons in your emails, you’d likely want to retest this to see if you can get more conclusive results.
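
Many “A/B significance test” tools boil down to a two-proportion z-test. The sketch below shows one generic way to compute that confidence level for an open-rate test in Python; it is not the specific tool referenced in this post, and the counts in the usage example are hypothetical.

```python
# A minimal sketch of a two-sided, two-proportion z-test on open rates,
# the kind of calculation most A/B significance calculators perform.
from math import erf, sqrt

def ab_significance(opens_a, sent_a, opens_b, sent_b):
    """Return (lift of version B over version A, confidence the difference is real)."""
    rate_a, rate_b = opens_a / sent_a, opens_b / sent_b
    pooled = (opens_a + opens_b) / (sent_a + sent_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (rate_b - rate_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided p-value
    return rate_b / rate_a - 1, 1 - p_value

# Hypothetical counts, not real campaign data
lift, confidence = ab_significance(opens_a=1_000, sent_a=5_000,
                                   opens_b=1_080, sent_b=5_000)
print(f"lift: {lift:.1%}, confidence: {confidence:.1%}")  # lift: 8.0%, confidence: ~95%
```

If the resulting confidence falls short of your threshold, treat the test as inconclusive and run it again rather than acting on the result.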

Atlassian uses a 95% statistical significance threshold for its tests and relies on a handy free A/B significance test tool to figure it out. In one of his examples, Mike set up a subject line test: would putting the feature or the product first in the subject line result in a higher open rate?

After running the test, he put the results in the A/B significance test tool.

In the subject line test, version B saw a 5% increase in opens with a statistical significance of 97%. As a result, Atlassian now puts the product name first, followed by the feature, in its product email subject lines. The team has also built on this result to run further tests—such as whether a hyphen or a colon in the subject line performs better.

TESTING INSPIRATION

Need some inspiration to get your testing ideas flowing? Use these examples to spark ideas, but treat them only as motivation—your audience and emails are unique, so what worked for these companies may not work for you.

Deckers and Act-On both tested responsive designs vs. non-responsive designs on their audience and saw amazing results. Deckers saw a 10% increase in clicks from the mobile-friendly campaign, while Act-On saw a 130% increase in clicks and a 93% increase in sales-ready leads. In these examples, both companies were able to clearly recognize that their audience preferred responsive design over non-responsive and, as a result, switched their templates to reflect this.

Here at Litmus, we A/B test almost all of our major sends. While we’ve seen some significant results—removing a featured article in our Community digest increased clicks by 13%—many of our tests have been inconclusive. That can be discouraging, but it’s OK! In his presentation, Mike explained that not every test is going to lead to an increase in conversions, clicks, or the like. However, each test brings you one step closer to understanding your audience. If you’re not seeing significant results, don’t give up—keep testing until you find what resonates with your audience.

A CONTINUOUS PROCESS

There is no end game when it comes to testing—you should always be testing! Testing allows you to continuously improve your emails and provide your customers with content that matters. And the more you know about your audience, the more advanced techniques you can confidently put to use—like HTML5 video backgrounds, CSS3 animations, and typography.

LEARN MORE ABOUT A/B TESTING

You can watch the entirety of Mike’s 45-minute presentation from Litmus Live as part of a Solo or Team video package—just one of 20 talks filled with takeaways, case studies and great advice to help you make awesome emails.

We also covered A/B testing, along with four additional strategies for creating relevant, data-driven, high-performing emails, in our “Know Your Audience” webinar. Watch the recorded version for more.

Register for this year’s conference for more A/B testing knowledge.

TESTING YOUR A/B TESTS (SO META!)

When it comes to an A/B test, you have two (or maybe more!) versions of an email to preview before sending. Spend less time previewing your emails and more time A/B testing so you can continuously optimize your campaigns. With a single click, you can test your emails in over 50 desktop, webmail, and mobile inboxes.

Test before every send with Litmus. Try us today and breathe a sigh of relief before you hit “send.”

Optimize your emails →