
Why Your Experimentation Program is Failing to Drive Digital Transformation: Top 5 Pitfalls

October 27, 2020

For some time now, there’s been a lot of discussion in the industry about the importance of digital transformation, even more so now that COVID-19 has forced organizations to accelerate these efforts.

An important way for organizations to tackle this digital transformation is to prioritize their experimentation efforts. Leading organizations do so because they realize that optimizing the customer journey under the “new normal” requires an iterative approach so they can better adapt to changing customer expectations and behaviors over time.


However, just because an organization decides to implement an experimentation program, success isn’t guaranteed. In fact, there are a number of common pitfalls that experimentation teams fall into and that, if not remedied, will impede your organization’s digital transformation.

If your organization is struggling to realize value from experimentation as part of your digital transformation strategy, it’s worthwhile to evaluate your current testing efforts to see if any of the following five pitfalls are occurring.

  1. Lack of a data-driven approach
  2. Improper test planning
  3. No formal testing process
  4. Failure to take action
  5. Foundation has not been optimized

Top 5 Experimentation Pitfalls Holding Back Your Digital Transformation


1. Lack of a Data-Driven Approach to Experimentation

One common mistake experimentation programs make is failing to implement a meaningful testing strategy. Oftentimes, test recommendations are based on opinions about what will work rather than on the insights that come from investing the time in a data-driven approach.

Basing tests on opinions may enable your team to iterate faster and get more tests out the door. In the end, however, most of those tests won’t have an impact on your organization’s goals because they’ll likely fail to address the real points of customer friction. Moreover, such tests fail to uncover meaningful insights, which are key to accelerating your digital transformation efforts.

A data-driven approach is especially necessary for organizations that recognize that closing the gap between what they assume about customers and how customers actually behave is an important aspect of digital transformation. The pandemic has only accelerated the need to better understand customer behavior, since how customers used to behave is not necessarily indicative of how they act now or will act in the future. The only way to truly understand how to optimize the customer experience, whether under the “new normal” or not, is to let your organization’s data guide your efforts.

For organizations that do try to take a data-driven approach, another common issue is that their teams lack the skills or the experience necessary to analyze and synthesize data from various sources. Many analysts can dive into an analytics platform to review quantitative data, but getting a more complete view of your customers requires breaking down silos and analyzing customer data from multiple sources, including voice of the customer and other qualitative data.

Incorporating data from multiple sources as part of the experimentation strategy will undoubtedly lead to more meaningful insights about your customers, which will generate more valuable test recommendations.



2. Improper Test Planning

Another common pitfall occurs with test planning, or the lack thereof. Improper test planning comes in many forms, including immeasurable or vague test hypotheses.

For example, running a proper test requires the team to have a clear understanding of the purpose of the test, what changes will happen, and most importantly, what specific goals the team is looking to impact. If a team plans a test and decides they’ll figure out how to interpret performance later on, the risk of a biased interpretation of the results increases, all the more so if there’s pressure to produce a “positive” result. Alternatively, if a test has an immeasurable hypothesis, there’s little value in running it at all, since the team won’t be able to measure impact in a meaningful way. This essentially leads to testing for the sake of testing.
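
One lightweight way to guard against vague or immeasurable hypotheses is to require every test to be captured in a structured plan before launch. Below is a minimal sketch of what such a record might look like in Python; the field names and example values are illustrative assumptions, not a standard template.

```python
# A minimal sketch of a structured test plan, written down before launch.
# Field names and example values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class TestPlan:
    name: str
    hypothesis: str                   # a falsifiable statement tied to a metric
    change: str                       # what the variant actually does differently
    primary_metric: str               # the single metric the hypothesis is judged on
    minimum_detectable_effect: float  # smallest relative lift worth detecting
    max_duration_days: int            # stopping point agreed before launch

plan = TestPlan(
    name="PDP reviews placement",
    hypothesis="Showing review snippets above the fold increases add-to-cart rate",
    change="Variant surfaces star rating and top review snippet above the fold",
    primary_metric="add_to_cart_rate",
    minimum_detectable_effect=0.05,   # +5% relative lift
    max_duration_days=21,
)
```

Writing the primary metric and the stopping rule down before launch removes the temptation to pick a favorable metric after the results come in.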

Improper planning can also occur when teams fail to establish appropriate test durations, whether too short or too long. Stakeholders may say they want a test to run until it hits “stat sig,” but it’s important for the experimentation team to understand, and to be able to communicate to those stakeholders, that factors beyond duration, such as the baseline conversion rate, the traffic available, and the size of the lift worth detecting, determine when statistical significance can realistically be reached.
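
To make that conversation concrete, here is a rough sketch of the standard two-proportion sample-size calculation. The z-values assume a two-sided 5% significance level and 80% power, and the traffic figures are hypothetical; the takeaway is that the required duration falls out of baseline rate, traffic, and minimum detectable effect rather than being something you can simply extend.

```python
# Rough sketch: how long a test must run depends on more than patience.
# Standard two-proportion sample-size formula; z-values assume alpha = 0.05
# (two-sided) and 80% power. All traffic numbers are hypothetical.

def required_visitors_per_arm(baseline_rate, relative_lift,
                              z_alpha=1.96, z_power=0.84):
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_power) ** 2 * variance / (p2 - p1) ** 2

n = required_visitors_per_arm(baseline_rate=0.03, relative_lift=0.10)
daily_visitors_per_arm = 2_000  # hypothetical traffic after a 50/50 split
print(f"~{n:,.0f} visitors per arm, ~{n / daily_visitors_per_arm:.0f} days")
# With a 3% baseline and a +10% target lift, this comes to roughly 53,000
# visitors per arm; halving the detectable lift roughly quadruples that.
```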


3. Not Fully Utilizing a Formal Testing Process

Oftentimes, organizations assign the responsibility for experimentation to individuals who are multi-tasking and leading initiatives in other areas of the business, such as marketing. When experimentation isn’t prioritized, these efforts tend to suffer from a lack of organization. Specifically, these experimentation programs aren’t likely to have a structured testing process in place, and without one, the credibility of test results can be compromised.

A common way this arises is when there’s pressure to get an experiment out the door. Rather than following a thorough quality assurance (QA) process, the test is launched, only for bugs to appear that require troubleshooting post-launch. At that point, the test results have been compromised, and relying on such data to make an informed business decision is risky.


Another key aspect of the testing process that’s often missed is documentation, such as test plans and test reports. While in the short term it may be easier and quicker for the team to share results via email or to present them in a meeting, in the long term this harms efficiency.

If the team has been testing for quite a while without proper documentation, it becomes challenging to see what tests have been done previously, which increases the risk of redundancy and wasted effort. This also impacts the experimentation program’s ability to scale since the lack of documentation prevents easy knowledge sharing for new teammates.

Even if your organization has a testing process in place, what really matters is whether that process is actually followed. A common situation arises when experimentation teams are repeatedly pressured to make judgment calls before the agreed-upon test duration has elapsed. Alternatively, stakeholders may ask teams for a quick update on how a test is performing and then use that update to make business decisions before the test has time to complete. The main concern is that these decisions are being made on unreliable data.

Test results are unreliable when, for example, they haven’t accumulated sufficient conversion volume, or a variant is showing a “lift” that hasn’t reached the statistical significance threshold or comes from an underpowered test. Making business decisions based on such results is no better than making them with no data at all. If this scenario is regularly happening within your organization, it very likely explains why your experimentation efforts aren’t accelerating your digital transformation.
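
As a sketch of what “unreliable” means in practice, the snippet below runs a standard two-sided two-proportion z-test on hypothetical counts. The variant shows an apparent lift, but the p-value indicates the difference is indistinguishable from noise at these volumes.

```python
# A minimal significance check on hypothetical A/B test counts, using a
# standard two-sided two-proportion z-test.
from math import sqrt
from statistics import NormalDist

def two_proportion_test(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_b - p_a, p_value

lift, p = two_proportion_test(conv_a=180, n_a=6_000, conv_b=210, n_b=6_000)
print(f"observed lift: {lift:+.2%}, p-value: {p:.3f}")
# Prints roughly "+0.50%, p-value: 0.122": the variant *looks* better, but at
# this volume the result is not significant at the usual 0.05 threshold, and
# an underpowered non-result doesn't prove the absence of an effect either.
```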


4. Failure to or Fear of Taking Action

Even when an experimentation program has a formal process in place and takes a data-driven approach to strategy, experimentation will have no impact on digital transformation if the organization doesn’t take action. Failure to act can occur for a number of reasons. Sometimes there’s inaction because the winning experience goes against how the organization has traditionally handled the customer experience, and stakeholders are hesitant to change.

For example, an ecommerce team may be used to filling the homepage hero banner with promotions, but a recent test reveals that using this hero space for compelling brand messaging is actually more effective, improving average order value and customer loyalty. Yet, even with these data-supported insights, the team fails to act because it conflicts with what they’re used to doing: using the hero space for promotions.

The problem with this approach is that COVID-19 is forcing customer behavior to change. Organizations that are unwilling or hesitant to embrace change, especially change shown to have a positive impact on business goals, will likely see less success with their digital transformation.

Another reason why teams may fail to take action is that it’s not a top priority, especially if the team is multi-tasking and bandwidth is stretched. Oftentimes in this situation, there’s a lack of consistent support from key stakeholders. If stakeholders don’t continue to push for and champion a test-first mentality, others within the organization are less likely to buy into experimentation, including prioritizing the implementation of winning tests. In order to remedy this situation, establishing a culture of experimentation must start at the top, with key stakeholders leading the way.


5. Structural Foundation Is Not Optimized to Support Experimentation

As important as it is to have a proper strategy in place, it’s equally important to ensure there is a solid foundation to support your experimentation program. One area where organizations run into problems is with ensuring their experimentation platform is set up properly to support a data-driven approach to testing.

Over the past couple of years, browsers have made numerous updates to increase privacy restrictions on third-party cookies and on first-party cookies that are set client-side. These updates directly impact experimentation efforts. For example, if your organization is using Adobe Target but has not implemented the CNAME approach, then your Target activities are likely being compromised for Safari and Firefox users, which may skew test results and make them difficult to rely on for informed business decisions. It’s also important to note that these restrictions are constantly evolving, so your team’s approach to handling them must adapt as well.

In addition, to fully support a data-driven approach to testing, merely looking at test results within the testing platform will not suffice. To uncover meaningful insights that will drive digital transformation, the testing platform should be integrated with other data sources, for example using A4T to integrate Target with Adobe Analytics, which allows your team to conduct a deeper analysis of test results.

Another area where this issue arises is data quality, including improper goal setup or event tracking. It can’t be overstated that navigating the “new normal” will require organizations to rely extensively on their data to understand the changing customer experience, so data quality plays a significant role in both experimentation strategy and test results analysis. At the end of the day, if stakeholders lack confidence in the data due to poor quality, the result will be greater hesitation and more inaction, even when the experimentation team produces test “wins.”
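
One hedge against quietly broken tracking is a routine cross-check of conversion counts between the testing platform and the analytics tool. The sketch below assumes you can export daily counts from both systems; the variable names and the 5% tolerance are illustrative, not a prescribed threshold.

```python
# Illustrative sketch: flag days where the testing platform and the
# analytics tool disagree on conversion counts by more than a tolerance.
def flag_discrepancies(testing_counts, analytics_counts, tolerance=0.05):
    flagged = []
    for day, t in testing_counts.items():
        a = analytics_counts.get(day, 0)
        if a == 0 or abs(t - a) / a > tolerance:
            flagged.append((day, t, a))
    return flagged

testing = {"2020-10-01": 210, "2020-10-02": 195}
analytics = {"2020-10-01": 205, "2020-10-02": 140}  # tracking bug on day two
print(flag_discrepancies(testing, analytics))  # [('2020-10-02', 195, 140)]
```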

Ultimately, it’s essential that experimentation teams stay aware of these issues and actively take the steps necessary to ensure their experimentation platform and other data sources are optimized to fully support a data-driven experimentation program.


Knowing Your Team’s Capabilities and Capacity Is Key

If you find that your organization’s experimentation program is experiencing one or more of the above pitfalls, it’s important to prioritize making the necessary improvements to get your digital transformation efforts back on track.

In doing so, organizations may find that achieving a truly data-driven experimentation program requires more bandwidth than they currently have, or that, even with the bandwidth, their teams lack the experience necessary to truly support a culture of experimentation.

In such situations, your organization should consider working with a strategic partner to help drive this culture of experimentation and better support your organization to achieve your digital transformation goals.

Roopa Carpenter
About the Author

Roopa Carpenter is Vice President of Digital Experience (DX) at Blast. She leads a team of talented DX consultants responsible for helping brands better understand, optimize, and measurably impact their digital experience. With many years of experience, Roopa offers a high level of knowledge and guidance to clients across a variety of industries on testing and personalization strategy and execution, user experience research, and closing the empathy gap through Voice of Customer. Her data-driven approach focuses on impacting customer conversion and driving desired business outcomes.

Connect with Roopa on LinkedIn.