Episode 120: I’m Running Experiments. Why Hasn’t My Conversion Rate Gone Up?

Announcer: [00:00:00] You're listening to Drive & Convert, a podcast about helping online brands build a better ecommerce growth engine, with Jon MacDonald and Ryan Garrow.

Ryan Garrow: All right, Jon, we know that websites and apps should always benefit from experimentation. It's been documented; you've written books that have mentioned this multiple times.

At this point, if you're 120 episodes in with us and you haven't realized this, we failed miserably. All right. So we agree that experimentation is great, but we know it doesn't always equate to increasing conversion rates. And there's some frustration there; we've had people come to both of us and say, hey, I've been doing these experiments.

Why are my conversion rates not going up? So today we get to talk about some of the why behind that, and answer some of these questions at scale, because two-to-many is much more efficient. You won't always see [00:01:00] a direct correlation across the entire website where conversion rates go up every time you run an experiment.

So question one for you today would be: Jon, does having an experimentation team guarantee better overall performance?

Jon MacDonald: Yeah, that's a great question. And I'll say this: selfishly, I just want to record this so I can send it to everybody who asks me this question, because I feel like I get it every day.

Leads come in saying, we're doing testing, it's not working, and we want to change providers. Or our clients say, hey, we have all these other metrics that look great, but conversion rate's not increasing. What's going on? So selfishly, I'd love to just be able to send them a recording. But here's the reality...

Ryan Garrow: We can pull these out and get some LinkedIn snippets, all these fun things.

Jon MacDonald: Awesome. There we go. Let's spread the good word. The reality, Ryan, is that, all else being equal, once startups begin experimentation, they really do see strong gains within the first few years, right? [00:02:00] It's well documented: they see the gains, and they continue to see that growth for years to come.

I'm not just saying this, I'm not just blowing smoke here. This is actually according to Bloomberg, right? They've done some research, and what they did is they formed what they call an experimentation index. These are firms, large companies, that are doing digital experimentation within their companies, generally high-level A/B testing, but lots of other validation techniques as well.

And they compared that to the S&P 500, okay? Now, we'll put this up on our site at thegood.com along with an article around all this, but I'll show you the chart. The growth divide is actually pretty large, and it's only getting larger over time. So, does experimentation guarantee better overall performance? It's really logical to expect that once your experimentation engine is firing on all cylinders, you have an internal team, you're [00:03:00] running experiments often, you should see a noticeable impact on the data. So a common question really is: if I run experiments, will my conversion rates go up? And you'll hear this all the time, as I mentioned. The reality is that after running thousands of experiments, we've learned that what happens after a release of what you tested is not a simple one-to-one outcome that mirrors the success of the experiment. There's a story behind why it's not so simple.

That's why I wanted to talk about this today.

Ryan Garrow: Fascinating. It's also surprising to me that people haven't gotten to the point where they understand that there are so many moving pieces on a site, that one metric doesn't work in a vacuum. Yes, you worked on your category pages.

Great. Okay. If you're smart, you're going to move up the funnel and push harder where conversion rates are lower. And so you get more traffic and your business grows, but you won't necessarily expect [00:04:00] conversion rate to rise with it. Wow. Okay. Sorry, I'm probably jumping ahead, but this is the reality of the

Jon MacDonald: conversations, right?

That you and I have every day, where folks have bought into these expectations. And this is why we've moved on from CRO to what we're calling digital experience optimization, because we're not just impacting conversion rate, right? We are impacting the entire digital experience. In fact, there's a chart, and I'll even include it in this article, that we put together.

It's a whole wheel of all the areas that experimentation can impact over time in the digital experience. And if you look at it, conversion rate is a very small sliver of that. It just is. It's one thing in a slew of metrics that provide a return on investment here. And if you just focus on conversion rate, you're probably doing yourself a disservice in terms of longevity, [00:05:00] et cetera.

There's just a laundry list of reasons why it's a bad idea.

Ryan Garrow: Okay. So why is it that we can't see that? 'Cause ideally we'd like to see that. Maybe 10 years ago, when somebody was promising results, you could see it in a one-off. But why can't we see that anymore?

Jon MacDonald: It's funny you phrased it that way. And it's funny in a good way, because if you've ever researched an experimentation partner, you've probably come across ones that will guarantee conversion rate increases. At The Good, we have a phrase for this, because we often have leads come in and say, I talked to so-and-so and they guaranteed this lift.

If you can't guarantee that, then we're not going to work with you. And the phrase that we have is simply: anyone who says they can guarantee an increase in your conversion rates is either really lucky, or they're lying. So you can choose which one: do you want to be a part of the lucky crowd, or do you want somebody who lies to you?

I don't know, but it's one of those two. Okay. And there are plenty of reasons [00:06:00] why experiment results don't map one-to-one with the real-world outcome that you would expect once you launch the variant that won in testing. So let's explore these. I'd say there are three key issues that prevent you from seeing the charts go up and to the right.

There are post-launch variables, there's segmentation, and there's the effect of false positives. So: post-launch variables, segmentation, and the effect of false positives. I want to talk about each of those three today.

Ryan Garrow: Got it. I love that. Jon simplifies things wonderfully, which I appreciate.

You're also probably way more political in your responses. If somebody guaranteed something on my side of the equation, on traffic, I'm like: if they're guaranteeing your results on the traffic you're driving on Google or Microsoft or Meta, they're idiots. I'm sorry. You can't do that without looking at the account, understanding the nuances of a company, and then guarantee [00:07:00] results. No, they're idiots.

That's me versus...

Jon MacDonald: And like you said earlier, it's one metric in a vacuum; it does not work. So yeah, we'll break these down. But yes, "lucky or lying" is as far down that non-PC path as I'll go. Yeah, we'll stay on the PC path

Ryan Garrow: with Jon, he being the nicer guy in this equation here.

Problem one with not getting a one-to-one result in your experiments would be post-launch variables. So what is it about post-launch variables that mucks up the results being extremely clear and obvious?

Jon MacDonald: Yeah. So post-launch variables really make attribution less clear, right? Metrics are influenced by much more than the website or app experience.

And I hate to say that, because the website or app experience is the only part that we can affect directly, right? Direct and indirect influences really impact metrics like your conversion rate, that's the obvious one, revenue, maybe [00:08:00] even something as broad as customer satisfaction. In fact, we've identified over 55 of these variables that contribute to swings in your KPIs.

And this is that pie chart I was talking about, that graph. There are 55 variables on there. We could probably come up with more, but honestly, after 55 we were like, I think people will get the point.

Ryan Garrow: That's a lot. That's a lot of variables.

Jon MacDonald: But there are factors like traffic quality.

If you're just not getting the right traffic, they're not going to convert. Or seasonality: maybe you sell winter coats and it's a hundred degrees out. God bless you, with global warming you might want to look at t-shirts. There are competitor promotions: maybe your biggest competitor ran a 50 percent off sale and it just took away all of your customers for that time frame.

Even the greater economy. I keep hearing right now from so many e-com brands that are really struggling, and it's not because of their [00:09:00] product or their marketing or any of these other factors. It's just that the economy has shifted. The COVID years are over, folks. People aren't going to buy only on your website anymore.

These all play a really huge role in whether or not your website visitors will convert now or in the future. As we learned during the COVID pandemic, even the largest experimentation wins may not eclipse the outside influences that can either help or hurt your conversion rates.

A really good way to emphasize this is a true story from one of our clients. We once saw a single social media intern drive so much new traffic that conversion rates fell by a whole percentage point. And this is for a very large brand, okay? The intern was like, yes, I did my job. Amazing for their portfolio, their resume, to be able to say, here are the metrics of what we did, right?

The [00:10:00] problem is, it's really bad for conversion rates, because they drove all this unqualified traffic. It was probably a great meme; they got everyone there, but those visitors weren't going to buy, right? And we couldn't control that. Once a test period is over, we generally are unable to tell whether we've improved a metric that was already rising or already in decline.

That really makes post-launch attribution fuzzy at best. And again, this goes back to: anyone who told you otherwise is either lucky or lying.
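To put rough numbers on that intern story, here's a minimal sketch of the dilution math, with hypothetical figures (the episode doesn't share the client's actual traffic): a wave of unqualified visitors can pull the blended conversion rate down a full point even while the original audience converts exactly as before.

```python
# Hypothetical numbers, for illustration only: conversion rate dilution
# when a viral post floods a site with unqualified traffic.
baseline_visits = 100_000
baseline_cr = 0.03       # qualified visitors convert at 3%

viral_visits = 50_000    # extra traffic from the viral post
viral_cr = 0.001         # almost none of it buys

conversions = baseline_visits * baseline_cr + viral_visits * viral_cr
blended_cr = conversions / (baseline_visits + viral_visits)

print(f"Before the post: {baseline_cr:.2%}")  # 3.00%
print(f"After the post:  {blended_cr:.2%}")   # ~2.03%, down ~1 point
```

The site didn't get worse here; the denominator just got bigger.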

Ryan Garrow: Dang. I'd love to be that intern that screws up all your work.

Jon MacDonald: It was good for them, I'm sure.

Ryan Garrow: But yeah, oh man, a whole percentage point. That's massive. And that's really cool, though.

It made a great problem to have. You have too much traffic, and the middle and upper funnel is buzzing about your brand. Man, that'd be great.

Jon MacDonald: Yeah, it's great, but it's not going to significantly increase revenue right away unless it's really qualified traffic. [00:11:00] It's great that the brand got out there, right?

It's great that maybe the long tail of this is more revenue. But in the immediate future, we get a call saying, hey, our conversion rate dropped a percentage point this month, what gives? You're supposed to be in charge of that for us. We start digging into it, and we're like, wow, this one post got you several million visits.

Good for them. But that's not qualified traffic.

Ryan Garrow: Got it. Okay. Then the next problem you mentioned was that test results aren't summative. Five plus five equals ten, except when you're doing an experiment. So tell us more about that. How does that work?

Jon MacDonald: That's just not how the math works here, right? There are many reasons that experiment results are not summative. Let's just focus on one of them today, which is segmentation. Many experiments only impact a segment of users. Can we agree on that? You're not going to test with every single visitor that comes in.

And [00:12:00] even if you do, you need a control group who aren't going to see the change. So at best, you have your control and your variant, and that's the easiest possible test. It never works out that way. There's always a segment, a subset, excuse me, of traffic that you're going to run a test with, right?

So a win with that segment or subset of users does not predict the same gains for the whole, right? It needs to be said. It seems obvious, but I think a lot of brands don't go in with that mindset. So as a practitioner, you're often running experiments that are only meant to impact a subset of an audience.

And the rest of the visitors won't experience that same benefit. That's just testing; that's experimentation, right? And this is true for splits by device type, by landing page, and similar types of segments. There are hundreds of ways you can segment, right? [00:13:00] And that's why we generally don't take the results of a segment and apply them to the whole, right?

So a 5 percent gain with one audience and a 5 percent gain with another does not equal a 10 percent lift overall. That's where the five plus five does not equal ten, right? Basically, test results are not summative. They don't add up in that way. And you need to have that holistic view.
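Here's a minimal sketch of the weighting behind that point, using made-up traffic shares and conversion rates: the sitewide lift is a traffic-weighted blend of the segment lifts, so two 5 percent segment wins land nowhere near 10 percent overall.

```python
# Made-up traffic mix: each tuple is (share of traffic, baseline conversion
# rate, lift won in that segment's experiment). Two segments win 5% each;
# most traffic saw neither experiment.
segments = [
    (0.20, 0.030, 0.05),  # e.g. mobile visitors in test A
    (0.10, 0.040, 0.05),  # e.g. a set of landing pages in test B
    (0.70, 0.025, 0.00),  # everyone else, untouched
]

baseline = sum(share * cr for share, cr, _ in segments)
after = sum(share * cr * (1 + lift) for share, cr, lift in segments)

print(f"Sitewide lift: {after / baseline - 1:.2%}")  # ~1.82%, not 5% + 5% = 10%
```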

Announcer: You're listening to Drive & Convert, a podcast focused on ecommerce growth.

Your hosts are Jon MacDonald, founder of The Good, a conversion rate optimization agency that works with ecommerce brands to help convert more of their visitors into buyers. And Ryan Garrow of Logical Position, a digital marketing agency offering pay-per-click management, search engine optimization, and website design services to brands of all sizes.

If you find this…

Ryan Garrow: Okay, so if I'm going to simplify it, from my brain thinking about traffic: say we're running an experiment on these [00:14:00] five product pages that we're driving traffic to, but we're also driving traffic to, let's say, 10,000 other products on the site.

Just because the experiment does lift conversion rates on those five products, 5 percent each on average, doesn't mean that conversion for the traffic on all 10,000 product pages is going to increase at the same level.

There are just so many different purchase intents depending on the product and the price point. Okay.

Jon MacDonald: Yeah, it's the same thing. It applies to product pages just as it applies to users and visitors. There are so many ways to segment the testing, and that's another way to segment. It definitely makes sense.

Ryan Garrow: Okay. Then the third piece of the issues you're simplifying is false positives. Like, you think something's going to work, and then it doesn't.

Jon MacDonald: Yeah, that's a great way to think about it. A false positive is when the experiment data indicates that the hypothesis is true, but it's actually not. So false positives may sound like an atrocious error on the part of the person writing the experiments, but [00:15:00] they're actually just par for the course.

You've just got to get used to them and understand them. That's why even rigorous and experienced experimentation teams really expect at least a one-in-four false positive rate, meaning one in four winning experiments is observed as the result of chance and not a true winner. And that goes for all of science. I think the question to be thinking about here is: does this pervasiveness of false positives mean experimentation does not work?

And I think the answer is, of course not. I'll take the 75 percent any day, but it just means we need to approach these wins with an informed skepticism, right? Don't expect a one-to-one relationship between the results that are observed during the test period and real-world performance.

I think the best way to look at this is to say: okay, I'm going to run this test, and if it wins, I expect some lift on the site. There will be gains. I don't know what that [00:16:00] gain will be yet. But we've proven it out, and I feel very comfortable saying there is a gain to be had here and not a loss.
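For the curious, here's a small back-of-the-envelope sketch (assumed rates, not figures from the episode) of how even a disciplined program can end up with roughly one in four winners being false positives: combine a standard 5 percent significance level, typical statistical power, and the fact that most ideas tested have no real effect.

```python
# Assumed rates, for illustration: why ~1 in 4 "winning" tests can be noise.
alpha = 0.05             # significance level: 5% of no-effect tests still "win"
power = 0.80             # share of real effects the test successfully detects
true_effect_rate = 0.15  # assume only 15% of ideas tested genuinely work

false_wins = (1 - true_effect_rate) * alpha  # no real effect, significant anyway
true_wins = true_effect_rate * power         # real effect, correctly detected

share_false = false_wins / (false_wins + true_wins)
print(f"Winning tests that are false positives: {share_false:.0%}")  # ~26%
```

Nudge the assumed share of genuinely good ideas down and the quarter easily becomes a third, which is why individual wins deserve that informed skepticism.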

Ryan Garrow: No, I like that.

And I am surprised that one out of four are false positives. That's just me not knowing enough about DXO. But that also tells me you're playing a numbers game, just like everything else in e-com. Instead of doing just four experiments and getting frustrated, you've got to do a thousand experiments, and then you've got 750 that actually did something and are taking your brand to a better level.

So I guess, what's your response then? Running experiments is not going to fix all your problems. It's not the one single thing, right, the silver bullet for all of these problems on your site.

Jon MacDonald: Yeah, look, it's tempting to look at fuzzy post-launch attribution and false positives and say experimentation is not really worth it.

I'm throwing all the negatives out here today, but I want to be clear: I believe that it is very much worth it. There are people much [00:17:00] smarter than myself who swear by a test-everything approach, right? Because there's simply no better or more rigorous way to quantify the impact of changes on your bottom line.

Now, I don't think a test-everything approach is a good fit for every brand, but I do think that having that mentality can be beneficial. Problems do arise when we tout experimentation as this omnipotent growth lever, the magic bullet to success, whatever you want to call it, the silver bullet, as you mentioned, right?

And that's because it really can increase your confidence in decision making. Okay, you have great data to back up a decision. That's really helpful in the boardroom and beyond, right? It can help you measure the discrete impact of good design, so you don't just have a creative director, and I don't want to pick on creative directors.

There are a lot of great ones out there, or just some creative, who says, this is beautiful, everyone loved it, right? You're able to put numbers to that. [00:18:00] And it can also settle internal debates about which direction to head, right? You're able to say, I love that your spouse thinks this is a great color, but our users, who are actually paying us every day, think this is the best color.

And here's the data behind that, right?

Ryan Garrow: That's a difficult battle to fight. Let me tell you,

Jon MacDonald: It's a difficult battle, but one I've had to fight before. Yes, I still have the battle scars, if you can't tell. But I think what experimentation won't do is compensate for all of the external forces that are going to hamper your business.

If you're looking for that confidence, that precision, experimentation is just an incredible tool, and it's going to add to your toolkit. But if you're looking for a silver bullet, I'm still looking too, Ryan. So hopefully we can find it together.

Ryan Garrow: If it exists, we will eventually find it. I just have my doubts.

Jon MacDonald: Maybe by episode 220 we'll have found it, but not by this one.

Ryan Garrow: And then we'll be on a beach sipping margaritas, and we won't tell anybody, because it would ruin our margarita [00:19:00] life. Love it. What do you say, then, when somebody is frustrated that they've been running experiments? What's your response to your clients when that's the case? Hey, we're doing all this work, we're paying you all this money, and my conversion rate is flat. That's got to be frustrating.

Jon MacDonald: I totally understand the frustration if they're looking at just conversion rate, and that's why it's really important to have this conversation up front.

I try to make sure I set those expectations appropriately with every lead that comes in and every conversation I have, because they really need to trust the process and use experimentation to its full potential, which is not just conversion rate increases. Look, one thing has been proven time and time again: experimentation, done right, is associated with increased overall performance across so many factors.

Experimentation, done right. Is associated with increased overall performance across so many factors. Experimentation can't combat outsized economic and environmental factors. It can be a [00:20:00] catalyst for better decision making and it can help assure you that your digital property is going to perform at its best, despite whatever is going on the outside and.

That's where optimization really comes in. And I think where it's really valuable.

Ryan Garrow: Yeah. And I think it becomes challenging, especially as we get into whatever this new economic period looks like, however long it lasts. Top line shrinks, which generally shrinks bottom line, which generally shrinks experimentation budgets, and then it becomes a possible downward spiral. Unfortunately, you have to have faith that if you keep pushing forward, your product and your site will continue to get better and better, and you can better fight a shrinking pie, because you can get more aggressive and capture more of it.

But what I see happen often is: things are bad, cut marketing, cut all these costs. And I'm like, there's probably some fat to be cut, but you can't cut the core things that are helping drive your business, because your [00:21:00] competitors are going to smoke you if they're still investing in experimentation, which allows them to get more aggressive on their marketing.

Jon MacDonald: Just think about, going back to one of the first points I made, the experimentation index that Bloomberg put together versus the S&P 500. That chart will show you very quickly the gains that experimentation teams are able to provide on top of a normal S&P 500 company, and that gap is only widening.

But if you aren't doing experimentation, or you cut it because you feel like, hey, I need to cut the fat, and you don't see conversion rate gains immediately, understand that there are a couple of things at play here, right? It's not the experimentation that's causing that.

It could be the outside factors. And maybe experimentation is helping out with all those other 54-plus possible [00:22:00] metrics that aren't conversion rate, and those are potentially providing value and a good return on your investment.

Ryan Garrow: Yeah, I agree with all of that. And thank you for enlightening me.

Is there anything you can do to help guide clients to have kind of a vacuum set within their data, to say: yes, overall it's flat, but if you look at this data in a vacuum, which is a very small subset, it did get better? Or does that even exist?

Jon MacDonald: This is why we came out with our five-factor scorecard.

I think we've done an episode on that in the past, but if you go to thegood.com, there's a big blue button in the top right of every page; just go click on that and you can learn about it. There are five areas that our team has determined really benefit from experimentation, and they should be the scorecard for whether you're getting a return on this investment.

And it's not about looking at just conversion rate or a specific metric. It's about looking at more [00:23:00] holistic gains. We really don't have time today to go into the entire scorecard, but I would say that is what we've done at The Good. Now, there are probably other methods out there as well, and I'm sure there are, but this is the one we've found works. These five areas really matter: the brands on that experimentation index have all done these and excelled at them.

And if you aren't doing these and aren't excelling in those five areas, then an experimentation program, optimizing your digital experience, can really help improve them. That's everything from getting buy-in from your team on, okay, we're going to continue to do optimization, or we are going to start doing it, to resourcing it correctly.

I've seen brands run a lot of experimentation but never implement anything, right? Because they can't implement, or maybe they have some custom platform where, if they implement anything, it's like a ball of [00:24:00] yarn and they're afraid they're going to pull the wrong string and knot everything up.

So there are a lot of these types of things you've got to consider. And again, if there's any takeaway from today's discussion, it's that measuring your optimization on one metric alone is unlikely to get you to want to keep doing optimization. And if you don't do optimization, you are at a severe disadvantage to all of your competitors.

So you really need to be doing optimization, and you need to be paying attention to the right way to measure that success.

Ryan Garrow: I love it. Yeah. And if you want one metric on your site to improve, I can do it within an hour by just cutting off all your non-brand traffic. Your conversion rate increases, and you did no optimization.

Jon MacDonald: See?

Ryan Garrow: And your business will go in the tank. It'll be great.

Jon MacDonald: Or, you know what? Offer everybody 99 percent off, or a hundred percent off. Just give your stuff away. Your conversion rate will improve. It's great. That's a metric in a vacuum again. There are too [00:25:00] many ways to make yourself lucky. Let's put it that way.

Ryan Garrow: Got it. Anybody promising you anything is an idiot. I'll go ahead and say it for Jon.

Jon MacDonald: Or lucky. We'll work on it, Ryan. We'll work on it for you.

Ryan Garrow: Jon, I appreciate the education. Thank you.

Announcer: Thanks for listening to Drive & Convert with Jon MacDonald and Ryan Garrow. To keep up to date with new episodes, you can subscribe at driveandconvert.com.
