How to Achieve Higher Email Conversions With A/B Testing

By Mashkoor Alam

7 mins read

Insights from our webinar with Kath Pay

In our latest Growth Chat, we spoke with Kath Pay — author, speaker, and founder of Holistic Email Marketing. With more than 20 years in the email industry, Kath has shaped how modern marketers think about customer-centric email strategy and scientific experimentation. Her book became an international bestseller, and her holistic approach is now widely adopted by teams and at conferences worldwide.

This guide distills the most important lessons she shared in the session — why most A/B tests fail, what actually leads to meaningful insights, and how to use hypothesis-driven experimentation to drive conversions and generate long-term business learnings.


Why A/B testing matters more than most marketers realize

Kath has seen one consistent problem across email teams:

“We as email marketers don’t do enough testing — and when we do, we often don’t do it correctly.”

Most marketers fall into email accidentally, without formal training in scientific testing. As a result:

  • They choose tests that are too similar

  • They rely on ESP tools that simplify testing but hide the science

  • They don’t use hypotheses

  • They test the wrong metrics

  • They get inconclusive results

  • They give up and assume “email A/B testing doesn’t work”

Kath believes this is why so many teams fail to see meaningful improvements.

But when done correctly?

“Some tests deliver amazing gains. Others deliver small, incremental improvements — but those marginal gains compound over time.”

Testing is not about one big win. It’s about building a continuous feedback engine that improves the entire program.

Start with a hypothesis — not with subject lines

The first major misconception Kath addressed:

“Most failures begin because marketers don’t start with a hypothesis.”

Without a hypothesis, testing becomes random — swapping a few words, adding an emoji, reordering a subject line — producing variants that are too similar to yield real results.

What Kath actually tests

Instead of tiny tweaks, she recommends testing motivations:

  • Savings vs. benefits

  • Emotional triggers

  • Different value propositions

This is where deeper insights emerge.

“When variants are too close, you won’t reach statistical significance. When they differ enough, you not only get results — you get learnings.”

A hypothesis also requires stating why you believe one variant will win. That reason becomes the insight you carry forward, even if you’re wrong.

And that’s the part most marketers fear:

“Marketers think if their hypothesis is wrong, they’ve failed. But the whole point of testing is that you don’t know the answer.”

Choose the right success metric — or your test will mislead you

Kath has audited hundreds of brands and found the same mistake everywhere:

“Most marketers use the wrong success metric.”

Especially when testing subject lines, they almost always use open rates.
But Kath warns:

  • Opens do not equal conversions

  • High opens can still lead to low revenue

  • The top open-rate campaigns rarely match the top conversion campaigns

Her example is simple:

“We’ve seen open rates look identical across variants — but conversions doubled for one of them. If we had measured opens, we would have declared the test a failure.”

How she recommends choosing metrics

Map your success metric to the objective of the campaign:

Objective            Correct success metric
------------------   ----------------------
Drive sales          Conversions
Get downloads        Downloads
Drive site visits    Clicks
Improve engagement   Clicks or click-to-open
Improve inboxing     Opens

This often means conversion tests take longer and require larger sample sizes — but the insights are far more trustworthy.
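
How much larger? Here's a minimal sketch (plain Python, normal approximation, invented example rates; not from the webinar) of the sample sizes involved:

```python
from statistics import NormalDist

def sample_size_per_variant(p_a, p_b, alpha=0.05, power=0.8):
    """Approximate recipients needed per variant for a two-proportion
    test (normal approximation); the rates used here are illustrative."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_power = NormalDist().inv_cdf(power)
    p_bar = (p_a + p_b) / 2
    root = (
        z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
        + z_power * (p_a * (1 - p_a) + p_b * (1 - p_b)) ** 0.5
    )
    return int(root ** 2 / (p_b - p_a) ** 2) + 1

# Detecting a 2.0% -> 2.5% conversion lift vs. a 20% -> 25% open-rate lift:
print(sample_size_per_variant(0.02, 0.025))  # roughly 13,800 per variant
print(sample_size_per_variant(0.20, 0.25))   # roughly 1,100 per variant
```

Under these assumed rates, the conversion test needs roughly twelve times the audience of the open-rate test, which is exactly why patience and sample size matter.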

Test motivations, not micro-components

Kath’s holistic testing methodology focuses on understanding the why behind audience behavior.

“We’re not just testing components; we’re testing motivations.”

This means you don’t test a single subject line or one image.
You test entire narratives.

Example:
Variant A = savings
Variant B = benefits

Each variant includes:

  • A subject line supporting the theme

  • A hero image aligned with the narrative

  • An opening paragraph that reinforces the message

  • A CTA aligned with the hypothesis

  • Landing page copy supporting the same motivation

This makes the variants meaningfully different — which leads to:

  • Statistically significant results

  • Clear winners

  • Real insights you can use across channels

Despite testing multiple elements, this is still an A/B test — not a multivariate test — because only one concept is being tested: the underlying motivation.

Use A/B testing as a business insight engine

Kath believes email is uniquely powerful for experimentation:

“Email is a push channel — the audience is already there. That makes it immediate, cost-effective, and perfect for learning.”

What makes this exciting is that these insights don’t just improve email — they guide decisions across:

  • Social

  • PPC

  • Website messaging

  • Landing pages

  • Overall positioning

Kath has seen brands use email learnings to:

  • Influence home page messaging

  • Update PPC ads

  • Reshape value propositions

  • Inform product positioning

Because the email audience is the same as your broader digital audience, tests reveal real customer motivations.

A real example: why measuring conversions changed the result

Kath shared a test where two emotional approaches were compared.

Here’s what happened:

  • Open rates: No significant difference

  • Click rates: Small uplift but not statistically strong

  • Conversions:

"Variant B delivered almost double the conversions of Variant A."

If she had measured opens or clicks alone, the brand would have mistakenly declared the test inconclusive — and missed a doubling of revenue.

This reinforced her message:

“The correct metric is everything.”
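
To see how that plays out numerically, here is a quick two-proportion z-test in plain Python. The counts are invented for illustration (the webinar shared no raw numbers), but they mirror the pattern Kath described: opens flat, conversions nearly doubled.

```python
from statistics import NormalDist

def two_proportion_p_value(x_a, n_a, x_b, n_b):
    """Two-sided p-value for a pooled two-proportion z-test."""
    p_pool = (x_a + x_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (x_b / n_b - x_a / n_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical send of 10,000 recipients per variant:
print(two_proportion_p_value(2100, 10000, 2150, 10000))  # opens: p ~ 0.39, inconclusive
print(two_proportion_p_value(120, 10000, 230, 10000))    # conversions: p < 0.001, clear winner
```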

How to segment smartly — even with small lists

Kath recommends testing different segments because motivations differ:

  • New prospects may respond to savings

  • Loyal customers may respond to benefits

  • Inactive users may need different triggers entirely

But she also acknowledges the challenge:

“If you don’t have a large enough sample size, tests won’t reach significance.”

What she suggests for smaller lists

Instead of giving up:

“Run the same hypothesis test multiple times and aggregate results.”

Even though this isn't as rigorous as a single large test, it still produces insights backed by cumulative data — far better than not testing at all.
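
One simple way to aggregate is to pool the raw counts across runs and test the pooled totals. The sketch below uses invented counts and reuses two_proportion_p_value() from the earlier example; it assumes the runs tested the same variants on comparable audiences.

```python
# Hypothetical counts from running the same savings-vs-benefits test
# three times on a small list: (conversions, recipients) per run.
runs_a = [(9, 800), (12, 750), (8, 820)]
runs_b = [(15, 800), (18, 760), (14, 810)]

x_a, n_a = map(sum, zip(*runs_a))  # pooled conversions and sends for A
x_b, n_b = map(sum, zip(*runs_b))

# Reusing two_proportion_p_value() from the earlier sketch:
print(two_proportion_p_value(x_a, n_a, x_b, n_b))  # p ~ 0.04: significant pooled,
                                                   # though no single run was
```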

Best practices Kath recommends for reliable A/B tests

Kath summarized her top principles:

1. Start with a hypothesis

Ask questions. Spot anomalies. Turn them into hypotheses.

2. Identify the correct success metric

Tie it to your campaign’s true objective.

3. Record everything

Most marketers log results only in the ESP.
Kath wants you to capture all of the following (a minimal logging sketch follows the list):

  • Hypothesis

  • Success metric

  • Variants

  • Results

  • Conclusions

  • Next steps

  • Who should apply the learnings
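
As one illustration (this schema is ours, not Kath's), such a log entry could be a simple structured record whose fields mirror the checklist:

```python
from dataclasses import dataclass

@dataclass
class TestRecord:
    """Illustrative test-log entry; field names mirror the checklist above."""
    hypothesis: str
    success_metric: str
    variants: dict[str, str]
    results: dict[str, float]
    conclusion: str
    next_steps: str
    share_with: list[str]  # who should apply the learnings

log = TestRecord(
    hypothesis="Benefits-led messaging will out-convert savings-led messaging",
    success_metric="conversion rate",
    variants={"A": "savings narrative", "B": "benefits narrative"},
    results={"A": 0.012, "B": 0.023},
    conclusion="B nearly doubled conversions; benefits resonate more",
    next_steps="Re-run the same hypothesis on the inactive segment",
    share_with=["PPC", "social", "website"],
)
```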

4. Share results across teams

Testing insights help more than just email.
They help the entire business.

5. Use email as your testing hub

It’s fast, inexpensive, and reaches your real audience.

Common mistakes to avoid

Kath sees these errors repeatedly:

  • Using the wrong metric

  • Testing variants that are too similar

  • Not reaching statistical significance

  • Stopping testing too early

  • Not reporting or sharing learnings

  • Abandoning testing after a few failed attempts

Her biggest caution:

“If you’re not getting significance, it’s not that testing doesn’t work — it’s that the test isn’t designed well.”

Key takeaways

A/B testing delivers real conversion improvements only when it’s approached scientifically. Kath emphasized starting with a strong hypothesis, choosing the right success metric, and testing motivations rather than small components. Using conversions—not opens—as the primary measure reveals insights that can influence not just email, but wider business decisions.

Even small, incremental gains compound over time, and sharing learnings across teams strengthens overall marketing performance. Ultimately, consistent, hypothesis-driven experimentation is what turns email into a powerful optimization engine.

