<img height="1" width="1" style="display:none" src="https://www.facebook.com/tr?id=520757221678604&amp;ev=PageView&amp;noscript=1">
Nicole Crilley

By: Nicole Crilley on May 26th, 2022


There’s a control for that: what your testing strategy is telling you

Marketing Automation | Email Automation | A/B Testing | Marketing Planning | Split Testing


Marketing planning can be exciting. It's often the time when new ideas get thrown on the table, innovative and analytical thinking converge, and rough concepts get fully baked.

What's not fun is seeing campaigns and approaches flop, particularly when so much thought, creativity, and intention went into them.

Failure isn't always a bad thing
As marketers, we've learned that "checking ourselves" is a necessary learning and survival tool, because we only improve through self-reflection, regardless of how uncomfortable it feels.

It's a very necessary part of our craft. Unfortunately, when it comes to marketing efficacy, fear of failure often gets in the way. 

In fact, that fear can sometimes be so loud that it drowns out the notion that, behind every carefully executed plan, lies an equally important testing strategy.

Those who understand the balance between planning, testing, and analysis are the ones who tend to earn the highest praise and the strongest metrics.

So how do you create a framework that incorporates testing into your holistic strategy? 

Understanding the fundamentals of testing
First and foremost, whether you're applying testing methodologies for email marketing, online ads, or content development, it's important to have a grasp on some of its core principles.
 
A/B testing
A/B testing is the process of testing one variable at a time within a marketing campaign, such as the color of a landing page or the subject line of an email. The key differentiator here is that you're isolating a single feature within your channel.
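
To make that isolation concrete, here's a minimal sketch in Python (the field names and values are made up, not tied to any particular email platform) of a control and a variant that differ in exactly one element:

    # Two versions of the same campaign; only the subject line differs.
    control = {
        "subject": "Join our next cybersecurity webinar",
        "send_time": "2022-09-06T10:00",
        "template": "webinar_invite_v1",
    }
    variant = {**control, "subject": "Ease yourself into a stronger cybersecurity posture"}

    # Sanity check: a valid A/B test changes exactly one variable.
    changed = [field for field in control if control[field] != variant[field]]
    assert changed == ["subject"], f"More than one variable changed: {changed}"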
 
Multivariate testing
Multivariate testing lets you test combinations of variables as a set. The caveat is that you're singling out the winning combination rather than isolating the individual features of the emails. Since you're testing each email as a complete package, you're still effectively focusing on one variable at a time: the combination itself.
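
To see why multivariate tests call for larger audiences, here's a small illustrative sketch in Python (the subject lines, images, and CTAs are made up) that enumerates every combination you'd have to send as its own package:

    from itertools import product

    # Illustrative element options; each combination becomes one version of the email.
    subject_lines = ["Join our next webinar", "Strengthen your cybersecurity posture"]
    hero_images = ["speaker_photo.png", "event_banner.png"]
    ctas = ["Register now", "Save my seat"]

    combinations = list(product(subject_lines, hero_images, ctas))
    print(len(combinations))  # 8 packages, each needing its own slice of the audience
    for subject, image, cta in combinations:
        print(subject, "|", image, "|", cta)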

Since testing one variable at a time is optimal, you'll want to build a cadence into your testing program. For instance, depending on what you're looking to accomplish, your annual testing program could look something like this:

Quarter 1: Email testing

  • Audiences
  • Subject lines
  • Delivery Times

Quarter 2: Ad testing 

  • Audiences
  • Messaging
  • Retargeting

Quarter 3: Content testing

  • Audiences
  • CTA
  • Format

Quarter 4: Apply learnings to plan

  • What's working
  • What didn't work
  • What needs more testing

It's also important to follow some best practices for the most accurate insights:

  • Only test one variable (campaign change) at a time
  • Try to measure low-funnel results (as low as possible) 
  • Determine your control (original campaign) and variable (campaign you’re testing) to compare results directly 
  • Split each campaign equally and randomly between audience groups 
  • Test and run each campaign at the exact same time (unless you’re only testing time of delivery as a factor)  

To have confidence that your results reflect a real difference rather than chance (otherwise known as statistical significance), you'll want to target larger audiences and make sure you evaluate results with context.

For instance, let's say you're testing a time of delivery/deployment between two domestic campaigns with all other elements staying the same:

  • One is done in the first two weeks of September
  • The other is sent out in the two weeks at the end of December

The results of the first will likely be far better than the second simply because most people are out of the office during the winter holiday season. Right there, your results are somewhat skewed and need context.
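
Context aside, if you want to sanity-check statistical significance yourself rather than rely on your platform's reporting, a standard two-proportion z-test is one common approach. The sketch below uses plain Python with made-up send and sign-up counts:

    from math import sqrt, erf

    def two_proportion_z_test(conversions_a, sends_a, conversions_b, sends_b):
        """Approximate two-tailed p-value for the difference between two conversion rates."""
        rate_a = conversions_a / sends_a
        rate_b = conversions_b / sends_b
        pooled = (conversions_a + conversions_b) / (sends_a + sends_b)
        std_err = sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
        z = (rate_b - rate_a) / std_err
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
        return z, p_value

    # Hypothetical results: 120 sign-ups from 5,000 sends (A) vs. 160 from 5,000 sends (B).
    z, p = two_proportion_z_test(120, 5000, 160, 5000)
    print(f"z = {z:.2f}, p = {p:.3f}")  # p below 0.05 suggests the lift isn't just chance

Notice that the standard error shrinks as the send counts grow, which is exactly why larger audiences make it easier to reach a confident result.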

Framework for testing
As you integrate testing into your marketing operations and campaign calendar, there’s a simple testing framework you should follow to keep this process consistent and reliable.
 
In the end, the goal is to make sure your campaigns are yielding the results you'd like to see by optimizing individual elements (even if those elements come in sets). To illustrate the framework, we're going to use email as our example channel, although the framework isn't limited to it:
 
1—Create an email split-testing hypothesis 
Starting from the top, first set your email split testing hypothesis. To formulate this, you’ll need to observe what is currently happening and do a little bit of speculation.
 
Next, suggest how you'll improve the email campaign, why you think it will work, and how you'll know if the change works. Thresholds are super important to set here so that you know when you've hit, or missed, a significant testing mark.

All of these statements can be consolidated into a testing hypothesis statement. For example, let's say you think a campaign can generate more webinar sign-ups by changing the subject line to focus on the specific value proposition of the webinar rather than a general advertisement of the event. Your full hypothesis could be something like:

“Changing the email subject line to ‘Ease yourself into a stronger cybersecurity posture with our next webinar’ will create more webinar sign-ups because it will be clearer and easier to understand the exact benefits the event will offer.”  

This hypothesis is specific about the element you’re changing, why you think it will work, and what exactly you will measure (in terms of low-funnel conversions) to test your theory. 
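
One lightweight way to keep hypotheses, and the thresholds behind them, consistent across tests is to capture them in a fixed structure. Here's a minimal sketch in Python; the field names and the 10% lift threshold are illustrative, not a prescribed standard:

    from dataclasses import dataclass

    @dataclass
    class SplitTestHypothesis:
        element: str     # the single variable you're changing
        change: str      # what the new version will say or look like
        rationale: str   # why you expect it to work
        metric: str      # the low-funnel result you'll measure
        min_lift: float  # the threshold that counts as a meaningful win

    hypothesis = SplitTestHypothesis(
        element="subject line",
        change="Ease yourself into a stronger cybersecurity posture with our next webinar",
        rationale="A benefit-led subject makes the event's value clearer",
        metric="webinar sign-ups",
        min_lift=0.10,   # illustrative: at least a 10% lift over the control
    )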

2—Plan your tests
Once you've established your testing goal, you can now begin planning out your testing strategy. In this example, we'll use A/B email tests.
 
You'll set the stage for scope by identifying exactly what you are testing. In the context of email marketing, it's typically going to be one of the following:
  • Audience
  • Time of delivery
  • Email format
  • Subject line
  • Email image
  • Message layout
  • Content
  • Copy length
  • CTA

First, schedule out when the testing will take place and which campaign you’re testing, then establish what your A and B versions will include.

Campaign A should include the original elements while B should include the altered ones related to your hypothesis. Track which is which by writing out a description for each version. 

3—Set up controls and variables 
As you set up and run each version of your email campaign, make sure you're splitting the test evenly. The comparison should be direct: the original (the experimental control) is compared against the new version (the experimental variant) to see if your hypothesis is valid.
 
Run each version at the same time, splitting the campaign between two equal, randomly assigned audience segments and keeping everything else the same except for the email element you're testing.
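
Most email platforms handle the random split for you, but conceptually it amounts to shuffling the audience and cutting it in half. A minimal sketch in Python (the addresses are placeholders):

    import random

    def split_audience(recipients, seed=2022):
        """Shuffle the audience and return two equal, randomly assigned halves."""
        shuffled = list(recipients)
        random.Random(seed).shuffle(shuffled)  # a fixed seed keeps the split reproducible
        midpoint = len(shuffled) // 2
        return shuffled[:midpoint], shuffled[midpoint:]

    audience = [f"member{i}@example.org" for i in range(1000)]
    group_a, group_b = split_audience(audience)
    print(len(group_a), len(group_b))  # 500 500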
 
4—Review results and apply key findings
After each version of the campaign has had enough time to run, schedule time to go back into your email marketing platform and review the results.
 
Determine whether your hypothesis was correct, identify other interesting observations, and speculate on why the results came out the way they did. Evaluate whether there are any key takeaways you'd like to apply to your next A/B test.
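
In practice, this review often comes down to comparing conversion rates and checking the observed lift against the threshold you set in your hypothesis. A small sketch with made-up numbers:

    # Hypothetical results pulled from your email platform after the test window.
    control = {"sends": 5000, "signups": 120}  # version A (original subject line)
    variant = {"sends": 5000, "signups": 160}  # version B (new subject line)

    rate_a = control["signups"] / control["sends"]
    rate_b = variant["signups"] / variant["sends"]
    lift = (rate_b - rate_a) / rate_a

    print(f"Control: {rate_a:.1%}  Variant: {rate_b:.1%}  Lift: {lift:.0%}")
    threshold = 0.10  # the illustrative minimum lift from the hypothesis
    print("Hypothesis supported" if lift >= threshold else "Needs more testing or context")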

This process will be ongoing, allowing you to continuously find ways to improve and measure your email programming. All of your results will translate into findings, which will translate into takeaways, and eventually into new testable hypotheses.

This “rinse & repeat” process pairs well with behavioral data collection and analysis. Collecting this intel and refining your program is all part of a sustainable, data-backed email optimization process, and a real sign that your marketing team, and your org, are moving toward digital maturity.


 
Ready to start testing? We can help. 
Book a time with us today to learn the ins and outs of A/B testing and some of the software tools that can streamline this process.
 

About Nicole Crilley

Nicole is a digital strategist and content designer with 10 years of experience in email marketing automation, web design, marketing technology, user experience, and content production. With a versatile background in freelance, consulting, and corporate settings, Nicole specializes in identifying and implementing effective digital strategies.