(How to do it, what to test, and tools to use to improve CTR)
If you’re looking to improve click-through rates (CTR), open rates, and overall email conversions for your email marketing operations and you’re not A/B testing, you’re missing out!
Put simply, A/B tests let you compare two versions (version A and version B) of important elements of your email and determine, with statistical significance, which one performs better.
These include subject lines, calls to action, email templates, send time, body copy length, and more.
Once a sufficient sample of recipients has interacted with your email, you’ll gain insight into which version was more effective with your audience.
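As a concrete (and purely hypothetical) illustration of what “statistical significance” means here, the sketch below compares the click-through rates of two versions with a standard two-proportion z-test, using only Python’s standard library. The send and click counts are made up; by convention, a p-value below 0.05 means the difference is unlikely to be random noise.

```python
import math

def ab_significance(clicks_a, sends_a, clicks_b, sends_b):
    """Two-proportion z-test: is the CTR difference between A and B significant?"""
    p_a = clicks_a / sends_a
    p_b = clicks_b / sends_b
    # Pooled click rate under the null hypothesis (no real difference)
    p_pool = (clicks_a + clicks_b) / (sends_a + sends_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / sends_a + 1 / sends_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical numbers: version B's subject line got more clicks
p_a, p_b, z, p = ab_significance(clicks_a=120, sends_a=5000,
                                 clicks_b=165, sends_b=5000)
print(f"CTR A: {p_a:.2%}, CTR B: {p_b:.2%}, z = {z:.2f}, p = {p:.4f}")
```

With these sample numbers, the p-value comes in well under 0.05, so you could confidently call version B the winner; with smaller lists, the same percentage lift often would not clear that bar.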
A/B tests (also referred to as split tests) are an essential part of maximizing the ROI of your email marketing strategy, because they let you improve your email content with insight into your customers’ behavior and what they do and don’t like.
That kind of information is like gold when it comes to communicating with your email list, and it can help your company dramatically increase conversion rates in a short time.
In this post, we’re going to walk you through how to implement an A/B testing strategy, which A/B tests you should run, and which software tools can help you streamline your testing.
How to Implement an A/B Testing Framework
Before we get into what you should test, let’s talk about how to implement a structured testing program.
A structured program matters because there is an effectively infinite number of tests you could run at any given time, which is daunting.
Because we marketers don’t have all the time in the world to run every test we’d like, you’ll want to make sure you’re learning from the tests with the most impactful outcomes.
For this process, we recommend the DACI/RICE model. DACI stands for Driver, Approver, Contributor, and Informed. RICE stands for Reach, Impact, Confidence, and Effort. Sound confusing? Don’t worry, we’ll take you step by step through how this process should work.
Part 1: Assign Roles (DACI Method)
D (Driver): This is the owner who manages the test and coordinates the other roles’ tasks.
A (Approver): This person has the authority to approve decisions throughout the test.
C (Contributor): The contributor(s) are the folks who will aid in implementing the test and are responsible for its success. They will have deliverables assigned and expectations set to ensure the project gets executed as planned.
I (Informed): These are the folks that need to be kept in the loop as tasks are completed and progress is made throughout the test.
Part 2: Design a Testing Action Plan
Now that we know everyone’s roles, we need to design the testing process and timeline. This occurs in three phases: planning, execution, and reporting.
During the planning phase, the tests will be designed and developed by the appointed team members. This includes any MOPs, automation, design, front-end development, or analytics work. The desired outcome is that the test is ready to launch, with each individual piece verified to be functioning properly, by the end of the two-week planning period so that the team can begin collecting data. The planning phase involves the Driver to orchestrate and the Contributors to collaborate and prepare the testing pieces.
During execution, the team should launch the test and begin gathering data in your MOPs and analytics tools for later interpretation. The Driver will do a final check to make sure everything is ready to go live before launching the test, and the Contributors will collect data while testing is ongoing.
You’ll notice that planning and execution together take about a month. Therefore, at the end of each month, you can plan to review the data you’ve collected and meet to decide on next steps. This review meeting should begin with the findings and a clear determination, based on the results, of what was learned and what the proper next steps should be. If the data is inconclusive, the Approver can decide whether the test should keep running or be changed.
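One common reason results stay inconclusive is simply an undersized audience. As a rough sketch (not a substitute for a proper power analysis), the standard normal-approximation formula below estimates how many recipients each variant needs before a given lift becomes reliably detectable; it hard-codes a 95% confidence level and 80% power, and all the example rates are hypothetical.

```python
import math

def sample_size_per_variant(baseline_rate, min_lift):
    """Rough per-variant sample size to detect an absolute lift in a rate.

    Normal-approximation formula with pooled variance; z-scores are
    hard-coded for alpha = 0.05 (two-sided) and 80% power.
    """
    z_alpha = 1.96   # two-sided 95% confidence
    z_beta = 0.84    # 80% power
    p_avg = baseline_rate + min_lift / 2
    variance = 2 * p_avg * (1 - p_avg)
    n = variance * (z_alpha + z_beta) ** 2 / min_lift ** 2
    return math.ceil(n)

# e.g. a baseline CTR of 2.5%, aiming to detect a 1-point absolute lift
print(sample_size_per_variant(0.025, 0.01))
```

If the required sample is larger than the list segment you can send to in a month, that is a signal to either test a bolder change (a bigger expected lift needs fewer recipients) or let the test run across more sends before judging it.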
Part 3: How Frequently to Evaluate Testing
We recommend evaluating testing measures and reporting bi-weekly during an hour-long meeting with the following agenda:
- Review current tests and performance to date. If testing has been running for two weeks, this will be the meeting to determine whether to keep testing or change the testing parameters.
- Review new tests that are about to launch.
- Finalize details for the next set of tests to be launched and assign the DACI roles.
Part 4: How to Prioritize What Should Be Tested
Tests should be prioritized based on the impact they will have on goals and the effort it takes to implement them. Priority should go to tests that deliver the most impact for the least effort. Record the tests you want to run, and their priority order, in a spreadsheet to discuss in your bi-weekly meetings. Use the RICE method to score priority for each test:
R (Reach): On a scale of 1 to 10 (1 being low and 10 being high) how much of the database, website visitors or target audience (for advertising) will this test impact?
I (Impact): On a scale of 1 to 10 (1 being low and 10 being high) how much will this impact each person in the database, on the website, or with the target audience?
C (Confidence): On a scale of Low to High (High = 100%, Medium = 80%, Low = 50%) how confident are we in our estimate of impact and reach?
E (Effort): How many people and weeks will this take to complete?
Use this spreadsheet as an example for how to quickly prioritize the answers you come up with.
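If you’d rather score tests programmatically than by hand, the RICE formula is just (Reach × Impact × Confidence) ÷ Effort. Here’s a minimal sketch in Python; the test names and scores are purely hypothetical examples.

```python
def rice_score(reach, impact, confidence, effort):
    """RICE priority score: (Reach * Impact * Confidence) / Effort.

    reach and impact are 1-10; confidence is 1.0 (high), 0.8 (medium),
    or 0.5 (low); effort is in person-weeks.
    """
    return reach * impact * confidence / effort

# Hypothetical backlog of email tests
tests = {
    "Subject line: question vs statement": rice_score(9, 6, 1.0, 1),
    "New template redesign":               rice_score(7, 8, 0.5, 4),
    "CTA button color":                    rice_score(9, 3, 0.8, 0.5),
}
# Highest score first = run it first
for name, score in sorted(tests.items(), key=lambda kv: -kv[1]):
    print(f"{score:5.1f}  {name}")
```

Note how the redesign scores lowest despite its high impact: four person-weeks of effort and low confidence drag it down, which is exactly the trade-off the method is designed to surface.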
Important Components For Email A/B Testing
Now that you know how to implement an A/B testing framework, here are the most important components to consider when deciding what to test (and a few ideas to get your juices flowing with test ideas):
- Headlines/Subject Lines: Some things you can consider when experimenting with headlines/subject lines include: title length, bolded vs not bolded, personalized vs standard, question vs statement, proof point metric vs opinion, one tone vs another tone (funny vs racy, serious vs joking).
- Preview Text: Some things you can consider when experimenting with preview text include: using a portion of the main piece vs writing a new summary, ending preview text with a question vs a statement, and using a proof point metric vs an opinion within preview text.
- Links: Some ways to experiment with links include the number of links used and their placement (i.e. in the first paragraph vs at the bottom of the email).
- Email Copy Length: Most experts within the industry agree that marketing emails should be between 50 and 125 words. However, you should experiment with different word counts within this range, and possibly more if your audience values long form, educational content. In addition to experimenting with word count, you can also experiment with paragraph length, bullet points vs no bullet points, and bolding certain power statements within the text.
- Images: Another area for testing is your images. Experiment with and without an image. Also experiment with image size, number of images, photo vs animation vs GIF, and the reaction you want the image to elicit.
- CTA or CTA Button: Within the area of CTAs you can test button or CTA link color, call to action text, length of text, and CTA placement (top of post vs bottom of post vs multiple placements). Here are a few good examples of CTA phrases if you need some inspiration.
- Time of Day/Date: Another important element to A/B test is when you send the email. This could mean early morning vs mid-morning, or different days of the week. According to Mailchimp, the optimal send time is Thursday at 10 AM in the recipient’s own time zone. We recommend mid-week, mid-morning as the best place to start, but encourage experimenting to find the optimal time for your audience.
- Segmentation: For most businesses, your product or service is useful to more than one group of users, so you segment your messaging to make it as personalized to each group as possible. You may also want to experiment with segmenting itself. Test whether your message resonates with several different audiences. If it doesn’t seem to perform well overall, test different audiences to see where it performs best, and consider adjusting the message for the segments that perform poorly.
- Design: Also test how “designed” the email is. Try a stylized template vs a plain text email, and test having a header/footer and other design elements. Many recipients read email on mobile devices where some design elements don’t perform well, so test both and see which has a better click-through rate.
- Personalization: Another element to test is personalization. This includes where you add it in the email (subject line vs the intro vs later on in the email). But keep in mind that you’re not just limited to the person’s name, you can also include their position (if you’re writing to a large audience who shares the same job title) or other demographic elements.
- Number of Emails Sent: If this is a nurture sequence, how many emails will you send over how many days? Experiment both with cadence and with the number of emails sent, and experiment with the message and the CTA of each email to see which produces the desired result.
These tests can dramatically improve your overall CTR (click through rates) and conversion rates, but they can also throw you off track if you’re not using them strategically and really digging into the right metrics, rather than just gathering more data for data’s sake. That’s why we recommend getting crystal clear on the business outcomes you aim to achieve before you dive into the world of A/B testing.
Now that you have some direction on what kinds of tests to run, let’s talk about which tools you should use to make sure that you don’t pull your hair out while running these tests.
Using Marketing Automation Software to A/B Test
Selecting an email marketing tool that allows users to easily run A/B tests is imperative for accuracy and agility. While basic tools like Constant Contact and Mailchimp offer solid A/B testing functionality, we’re bigger fans of using a marketing automation tool instead. With marketing automation software, you can run A/B tests and see more down-funnel metrics, like landing page and form conversion rates, in real time, allowing you to adjust rapidly for maximum effectiveness.
There are so many different options on the market that do various forms of testing, and some also include other really important automation features that you might find useful for scaling your marketing efforts efficiently.
In our humble opinion, these are the top tools in the game right now that are not only easy to implement, but offer a wide range of functionalities that most businesses would find valuable.
SharpSpring is a fantastic tool if you’re looking for an affordable, all-around marketing analytics tool. While they do offer some extensive A/B testing features, they also have a large suite of functionalities that allow you to track all of your marketing analytics across your website and email marketing.
If you’re looking to maximize your A/B testing capabilities across your entire organization, their software offers a landing page builder as well. With this, you can build pages directly from their tool and then customize them with A/B tests and analytics functions that are pretty hard to beat!
HubSpot is an industry leader for all things marketing, simply because they do a great job at anticipating a marketer’s needs. Their A/B testing tools are full of useful information and functionality. Their A/B testing kit and A/B test tracking template are a great place to start if you’re just diving into the world of split testing because they’re free to use.
They also have an incredibly detailed download called “The Complete A/B Testing Kit” that guides you through the entire process and helps you determine how your company can benefit the most from your A/B tests.
Pardot is another great resource if you’re just starting out with A/B testing. Users can take advantage of an integrated email split testing function that makes it easy for you to test out different subject lines right within their email feature. Furthermore, their native integration with Salesforce makes them a win/win for companies already using Salesforce for sales enablement.
Pardot is such a robust tool already, but the fact that their A/B testing tool is built into their email functionality really streamlines the process and makes it even easier to – as they say on their website – “craft the perfect message, and back it up with data.”
Getting Started with A/B Testing
Whether your team is just starting its split testing journey or looking to revamp its strategy, knowing what to test, how to test, and which implementation tool to select can help you streamline a lot of the work. To help you get started planning and prioritizing your A/B tests, we’re happy to offer you this free template that will help you implement your own A/B testing framework.
And if you’re looking for some quick wins to supercharge your growth during an economic downturn, reach out to us and we’ll set up a time to chat about what Interrupt Media can do for you.