
RCT vs. A/B Testing: A Quest to Answer "Why"

Roy Bendor Cohen
April 27, 2021

There you are. You’re standing in front of your audience of potential and existing customers, wanting nothing more than for them to take your intended action (such as requesting a demo or using the chatbot for customer support instead of calling).

How can you promote action amongst a dozen, hundreds, or thousands of different individuals? Under what condition(s) will they do what you so hope?  

If you’ve gained any advantage from collecting and using data, you’ll lean towards running tests. And that’s a solid first step. For those in business, A/B testing comes to mind. For those in academia or science, randomized controlled trials (RCTs) are a no-brainer.

A/B tests and randomized controlled trials are both used to make informed decisions, and they serve similar goals: evaluate whether or not your hypothesis is correct, use data to measure the effectiveness of interventions, and attempt to understand causal relationships. Although both can get you answers to these questions, which method to employ will ultimately depend on your primary concern and context (one of my favorite words).

Here, the intention is to outline the similarities and differences between A/B testing and RCTs so that next time you have options to test, you can make the best choice.

For Starters: Some Definitions

Randomized controlled trial (RCT)

A randomized controlled trial is a type of scientific experiment in which test subjects are randomly assigned to two or more groups (conditions). One or more groups receive the treatment or circumstance under test, whereas the other group (the control group) does not. At the end of the trial, the groups are compared to understand the effectiveness of the treatment and to identify cause-and-effect relationships.

For example, we see RCTs used all the time in clinical drug trials. Participants are split into a control group (given a placebo or no intervention) and an experimental group (given the intervention being tested). Then the results are examined to see whether the experimental group fared better or worse than those who didn’t receive treatment. If the experimental group was cured of the disease (or improved on whatever measure is used to show the drug “works”), it’s safe to say the treatment was the reason: because assignment was random and all other variables were held constant, the only systematic difference between the groups is the treatment itself, the independent variable.

RCTs are considered to be a gold standard for learning about causal relationships, as opposed to mere correlations. This is mostly because the design eliminates (or at least minimizes) bias through its randomization. By holding all else constant, this type of experiment provides a way to see how a certain variable influences an outcome.
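The comparison at the heart of an RCT can be sketched in a few lines of code. The example below is purely illustrative: the data are simulated, the treatment effect is assumed, and “outcome” stands in for whatever you actually measure (recovery score, recycling rate, and so on).

```python
# A minimal sketch of an RCT-style analysis with simulated, hypothetical data.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(seed=42)
n_subjects = 200

# Randomization: each subject has an equal chance of landing in either group,
# so pre-existing differences between subjects wash out on average.
assignment = rng.permutation(["treatment"] * (n_subjects // 2) +
                             ["control"] * (n_subjects // 2))

# Simulated outcomes: the treatment group gets a modest boost (assumed effect).
baseline = rng.normal(loc=50, scale=10, size=n_subjects)
effect = np.where(assignment == "treatment", 5.0, 0.0)
outcome = baseline + effect

treated = outcome[assignment == "treatment"]
control = outcome[assignment == "control"]

# Because assignment was random and everything else was held constant, a
# significant difference in means can be attributed to the treatment itself.
t_stat, p_value = ttest_ind(treated, control)
print(f"treatment mean = {treated.mean():.1f}, control mean = {control.mean():.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```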

A/B Testing

A/B testing (also known as bucket testing or split testing) compares two versions of a product (a webpage, email newsletter, call-to-action button, etc.) to see which performs more successfully. The two variants are presented to random samples from the same population (audience). A statistical comparison of each version’s conversion rate (open rate, click rate, etc.) then informs your decision of which version to move forward with for optimization’s sake.
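To make that “statistical comparison of the conversion rate” concrete, here is a minimal sketch of a two-proportion z-test. The visitor and conversion counts are hypothetical, chosen only for illustration.

```python
# A minimal sketch of the statistical comparison behind an A/B test:
# comparing the conversion rates of two variants with a two-proportion z-test.
from math import sqrt
from scipy.stats import norm

# Hypothetical results: visitors shown each variant and how many converted.
visitors_a, conversions_a = 5000, 260   # variant A (existing version)
visitors_b, conversions_b = 5000, 315   # variant B (challenger)

rate_a = conversions_a / visitors_a
rate_b = conversions_b / visitors_b

# Pooled conversion rate under the null hypothesis that both variants perform the same.
pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
standard_error = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))

z = (rate_b - rate_a) / standard_error
p_value = 2 * norm.sf(abs(z))   # two-sided p-value

print(f"Variant A: {rate_a:.1%}, Variant B: {rate_b:.1%}")
print(f"z = {z:.2f}, p = {p_value:.4f}")
# A small p-value suggests the difference is unlikely to be chance alone,
# but it says nothing about *why* one variant outperformed the other.
```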

In the sense of having variables, randomization, and a control group, A/B testing is like a rapid-fire type of randomized controlled trial. With the rise of machine learning and automation, A/B testing has become a staple for marketing departments and businesses looking to make speedy decisions that drive results. This is because variants can be tested simultaneously across an audience, and the data will show (sometimes in real time) which is performing best. However, the drawback is that you can’t know for certain why one option worked over another.

What’s the Big Difference?

As briefly touched on, RCTs are commonplace in academia and scientific research. To go even deeper, we must understand that RCTs use theory as a framework and approach the situation from a top-down perspective. You begin by understanding your question and hypothesizing solutions that can then be tested.

At its heart, an RCT looks to extract and fully isolate the independent variable that is responsible for the effect. It’s a way to get to the root cause, or answer the ever-important question of “why” something happens. To accomplish this, changes between conditions are kept minimal so that you can control the other variables and test only one (the independent variable).

A/B tests can be performed with many different variables being changed at the same time because you’re looking for what works best (as opposed to answering why something works). Given the potential for countless subtle differences between the existing version and its variants, you cannot say for certain what caused whatever effect takes shape. However, you can still find out which version is best for achieving your desired results.

Different Sides of the Same Coin

It’s of great value to have a strong grasp of both methods because, in application, context will often dictate which is preferable to use.

Here’s a look at how we’ve deployed RCTs and A/B testing in two separate use cases:

  • RCT - A mall was looking to encourage visitors to recycle and reduce waste. After conducting a literature review and hypothesizing that the Default Option and the Friction Cost were affecting the rate of recycling, we assigned some stores to a condition in which a bag was offered by default (less friction, opt-out) and others to a condition in which store representatives asked whether the buyer needed a bag (more friction, opt-in). We did the same for fast-food chains: in one condition plastic cups were freely available (less friction), in the other they were kept behind the counter (more friction). The findings supported our hypothesis: when you add friction and remove the default of providing plastic products, the amount of waste decreases.
  • A/B test - We worked with a client looking to optimize cold calls. By testing several potential messages and iteratively tweaking the verbiage to find the best articulation, we ended up increasing conversion rates. Although we can’t say with 100% certainty why one iteration worked better than the next, the client reaped the results they sought.

RCT or A/B: Which Letters Win?

RCTs test a hypothesis, whereas A/B tests have more to do with “rough” assumptions. While both rely on data, the methodology differs: an RCT is sniper-like in its precision, while A/B tests are bulky and tank-like in their aim.

As a behavioral scientist, I’m still biased towards using the scientific method to be able to answer “Why?”. That being said, A/B testing is useful in many instances, like when you’re looking to optimize conversion rates quickly. This type of trial-and-error can no doubt accomplish positive results.

RCTs (and well-designed scientific experiments) give you more than just a one-time answer or solution. They allow you to gain deep, valuable insight into your audience that can serve as the basis for building strong, long-term relationships (and developing solutions that continue to deliver ROI). These experiments begin with knowing what questions to ask (often based on an exhaustive literature review) and being able to test the “right” variable. In the end, the results could (and often do) end up challenging existing assumptions.

Our team at Q leverages both types of testing methods to help our clients get the results they’re looking to achieve. By meeting every challenge or question with an interdisciplinary lens, we can determine the method that will work best within the given context.

Ask us anything.
We’re in the business of finding answers!

Q@q-bt.co