Even though companies believe in data-driven approaches, many underutilize A/B testing, running only a few tests per year. While the occasional A/B test is a start, it’s not enough to repeatedly increase revenue and gain compounded conversion lifts. To optimize continuously, organizations must prioritize and invest in a true testing program.
According to WhichTestWon, most companies dedicate only part of one staffer’s time to testing.
Time and resources are the biggest challenges standing in the way.
In a previous post, we learned from Econsultancy that companies that saw sales increases averaged 6.45 tests per month. To reach or beat that average, you must invest time and resources into testing, or find a solution that overcomes those challenges. An established program maximizes test output and delivers a higher ROI. Any time spent not testing is potentially lost revenue. Turn anecdotal testing into a real optimization program now.
Build a Testing Roadmap to Drive Continuous Testing
The first step to establishing a healthy testing cadence is to use a testing roadmap. It’s essential to build out a roadmap that collects, lists, and organizes tests and enables your team to optimize collaboratively and consistently. An ideal testing roadmap shows the program’s future and keeps everyone preparing tests during execution so experimentation doesn’t slow or stop.
A testing roadmap also shows areas of focus, such as touchpoints in the site’s flow. This ensures that tests are guided by web analytics from the moment they’re first proposed. It sets you up to produce high-impact variations and marks milestones to push your program’s reach.
To build your own testing roadmap, consider using data organization software or a templated roadmap like this Google Sheet. There are also new tools like Liftmap that provide capabilities to build collaborative CRO roadmaps. Team project management platforms like Project MonkeyWorks, Basecamp, and Asana are also capable of starting off a program. These methods ensure that tests are always being added to the roadmap while experiments are being executed.
To build a roadmap:
- Find a tool or solution that will help manage test planning and development, such as Liftmap, Project MonkeyWorks, Basecamp, Asana, or Experiment Engine
- Build out a testing roadmap that shows your program’s schedule, including milestones for testing reach
- Collaborate with your team to constantly input and develop tests while you’re executing other tests
- Create an internal system to track and manage testing, such as the Monetate example above
Experiment Engine’s customer success team collaborates with clients to analyze web data, identify test opportunities, and build a comprehensive roadmap that drives a continuous optimization program. Recognizing these touchpoints improves experiment results with higher win rates and greater lifts. The EE team builds out this roadmap so that your program has the foundation to continuously optimize.
Get Experiments Built
When you’re testing occasionally, variations are thought up and executed without a schedule. If you haven’t begun testing, you might have variations in mind, but they quickly run out. When armed with a roadmap that drives regular testing, you’ll constantly need more variations. These variations fill your testing pipeline and prevent bottlenecks in the program (more on this in the next section), so that experiments are always running.
Variations are sourced from numerous places: internally across the company, from optimization agencies, or outsourced elsewhere. Most companies rely on existing internal resources to collectively provide variations, with multiple departments submitting test proposals into the roadmap on a part-time basis. An optimization agency quickly provides proven resources but requires someone to manage the collaboration. Other companies outsource work through places like Upwork and Elance, which can provide CRO help but is unvetted and lacks continuity due to turnover. Of course, hiring dedicated in-house optimization resources is ideal if you have the time, budget, and know-how to build a team.
To source more variations:
- Tap resources across all departments for variations to test; a testing culture results in more test submissions from within the organization
- Incentivize internal variation submissions by rewarding those who provide approved proposals or winning hypotheses
- Hire dedicated optimization resources in-house to generate variations full-time
- Outsource help to create variations from freelancers on Upwork or Elance
Experiment Engine provides companies access to a network of optimizers who have experience testing for many industries. From e-commerce to content publishing, these experts are familiar with your goals and challenges. A creative brief is provided to further direct optimizers about restrictions, goals, and other details. Each submitted variation is ready to test without the need for additional resources. They’re reviewed and approved before filling your pipeline, waiting to be executed. This streamlines the process of sourcing and developing variations.
Keep Moving Through Your Testing Pipeline
Companies often stop running experiments because variations aren’t developed quickly enough. Even with a roadmap in place, you need to prepare each test up to the point that it’s waiting to be executed. A pipeline queues these variations so that experiments run one after another. An active testing pipeline ensures that you’re maximizing testing capacity by constantly executing experiments on all touchpoint opportunities.
To keep your pipeline moving, queue several experiments for each focus area so that you’re prepared to run them consecutively. Your company should also be testing every focus area simultaneously. For example, while you’re waiting for a test focused on the product listing page to complete, you should simultaneously test variations on landing pages. This approach pushes you to test iteratively at a rapid pace on multiple parts of your site.
To keep your pipeline active:
- Queue and prioritize tests so that you always have experiments ready to be run
- Run iterative tests consecutively for each touchpoint
- Simultaneously test multiple focus areas of your site to maximize optimization
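To make the queueing idea above concrete, here is a small, purely illustrative sketch: a priority queue of hypothetical tests, where the next experiment can be drawn from a touchpoint that isn’t already busy. All test names and priority scores below are made up for the example.

```python
import heapq

# Hypothetical pipeline: tests prioritized by expected impact (higher first)
# and tagged by touchpoint so multiple focus areas can run simultaneously.
pipeline = []
for priority, touchpoint, name in [
    (9, "product listing", "Simplify grid layout"),
    (7, "landing page", "Shorter headline"),
    (8, "landing page", "Social proof above fold"),
    (6, "checkout", "Trust badges near CTA"),
]:
    heapq.heappush(pipeline, (-priority, touchpoint, name))  # max-heap via negation

def next_test(exclude_touchpoints=()):
    """Pop the highest-priority test not on an already-busy touchpoint."""
    skipped, result = [], None
    while pipeline:
        item = heapq.heappop(pipeline)
        if item[1] not in exclude_touchpoints:
            result = item
            break
        skipped.append(item)
    for item in skipped:          # put any skipped tests back in the queue
        heapq.heappush(pipeline, item)
    return result

first = next_test()                                   # highest priority overall
second = next_test(exclude_touchpoints={first[1]})    # a different touchpoint
```

The point of the sketch is the workflow, not the data structure: there is always a “next” experiment ready, and an in-flight touchpoint never blocks the rest of the site from being tested.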
After sourcing ready-to-test variations from Experiment Engine’s network, approved variations will automatically fill your pipeline where you’re able to prioritize. In the pipeline, you go straight into running experiments for multiple touchpoints, so that you’re always testing to capacity. Integrating the sourcing, pipeline, and execution together speeds up the entire workflow.
Monitor Status of Live Tests
Pushing a test out is a great success, but your program should already be thinking about the next experiment. Most companies get slowed down because they wait weeks or even months between tests. Monitoring the status of tests currently running helps recognize which touchpoints are being optimized and how long tests have run. This way, you’re able to test another area of the site or queue a test in the pipeline to replace an experiment that’s about to end.
Most companies don’t use a platform that provides one convenient view of live tests with status, test duration, area of focus, and results. Instead, a dedicated resource is needed to constantly update documentation for the program. Some companies with resources will invest in building an internal tool such as this:
But not every company has the capability to build an internal tool like this.
To monitor live experiments:
- Build an internal tool that provides a transparent view of all key info on running experiments
- Constantly update the status of live tests on a shared data management platform
- Easily and quickly prepare the next cycle of testing
Experiment Engine’s platform provides a live view of all running experiments and makes all key information accessible. Each test is categorized by site touchpoint and shows its status, duration, the expert optimizer who designed it, and additional notes. Results and an actual preview of the variation are also just one click away. Customers can constantly monitor how many tests are live, revisit designed variations, and plan the next cycle. This prevents companies from losing time between experiments and missing out on potential gains.
Set Testing Goals and Track Your Program to Reach Added Value
To build a program, you’ll need to set a testing goal for the company. This pushes the program to execute more and achieve gains. See how many tests you currently run, then set weekly, monthly, quarterly, and yearly goals. Be realistic about your goals based on available resources and aim to maintain a consistent testing cadence. The macro goal of your program is to incrementally grow revenue over time. The only way to get there is by pushing execution enough to reach testing goals.
As you increase testing, you’ll want meticulous documentation of every experiment — win or lose. Not only do the results of each experiment provide insight for future variations, but the overall data about your testing velocity keeps your program healthy so that you know you’re on the right track towards goals.
Companies typically use collaboration software for documentation, such as Confluence. Google Sheets or other collaborative data entry software works, too. These are excellent ways to record the details of experiments, but tracking program stats requires additional reporting.
Document how many tests your program is able to execute each week, along with win rate and lifts. For each month and quarter, calculate averages for each metric, then see whether your program is consistently hitting its goals over time.
After you have a clear vision of how your program is running, focus on areas where bottlenecks occur. For many organizations, sourcing variations is their biggest challenge, while others face difficulty streamlining the workflow. Both could be results of the larger problem we’ve seen — a lack of prioritization for testing.
To track testing velocity:
- Document testing output, win rate, and lift data on a weekly, monthly, and quarterly basis
- Measure your results against your goals and milestones
- Assess where your program has bottlenecks that prevent you from testing faster
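As a concrete (and purely illustrative) sketch, the velocity, win rate, and average lift in the bullets above can be computed from a simple log of completed tests. The record format and field names here are hypothetical, not a prescribed schema:

```python
from statistics import mean

# Hypothetical log of completed experiments: one record per test.
tests = [
    {"week": 1, "won": True,  "lift": 0.08},
    {"week": 1, "won": False, "lift": 0.0},
    {"week": 2, "won": False, "lift": 0.0},
    {"week": 2, "won": True,  "lift": 0.12},
    {"week": 3, "won": False, "lift": 0.0},
]

weeks = {t["week"] for t in tests}
velocity = len(tests) / len(weeks)                      # tests per week
win_rate = sum(t["won"] for t in tests) / len(tests)    # share of winners
avg_lift = mean(t["lift"] for t in tests if t["won"])   # mean lift of the wins

print(f"velocity: {velocity:.1f} tests/week")
print(f"win rate: {win_rate:.0%}")
print(f"avg lift: {avg_lift:.0%}")
```

Rolling these weekly numbers up into monthly and quarterly averages is the same arithmetic over a wider window, which is what makes the goal comparison in the previous section possible.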
Experiment Engine’s platform emphasizes the importance of testing operations by constantly tracking velocity. As we’ve said, any time you’re not testing, you’re losing money. Average win rate, average lift, and testing velocity are all shown on the dashboard to push your team to continuously optimize with greater efficiency. These tools are built right into the solution. The easier it is to track and monitor these testing metrics, the easier it is to reach goals.
Provide Executive Reporting & Program Transparency
To prioritize and validate the testing program after you’ve kicked off, share reporting about its health and results with executives and department heads. They want to know that testing will lead to a return, whether that’s projected revenue or actionable insight. Providing relevant reporting up the chain helps everyone understand the value of the testing program, which keeps optimization a priority. So when it’s time for executives to invest resources and time, the program will be high on the list.
Executive reporting requires key information about program health and how it translates into company growth and added revenue. Show stats like velocity, win rate, and average lift, and compare them to industry standards. The most appealing number to executives will be projected ROI. Show the potential effect of your testing by calculating lifts toward revenue. To do this, dedicate resources to documenting test results, program data, and ROI calculations.
To provide executive reporting:
- Track program health metrics, like experiments run, win rate, and average lift (you should already be doing this)
- Use those metrics to calculate projected revenue gains on multiple touch points and overall site
- Report on the program’s ROI to executives and department heads across the company to validate optimization and encourage further investment
Experiment Engine’s customer success team provides executive reporting that covers high-level metrics, such as quarterly insights about program health and ROI. These quarterly business reviews show a customer’s activity on Experiment Engine, including number of projects, variations created, experiments run, and win rate. The team also gives projected ROI to show the incremental increase in revenue from continuous testing on multiple parts of the site. So customers who are invested in testing but don’t have time to follow day-to-day activities can still see the key metrics that grow the business significantly over time.
How A Program Incrementally Increases Revenue
If you’re running a few tests, you’re on the right track, but you’ve only scratched the surface. Prioritizing testing and turning your current testing into a continuous optimization program pays off in revenue gains. An occasional win increases your conversion rate from time to time, but a program gets wins repeatedly, setting your company up for revenue growth.
Let’s look at the projected revenue of a company that runs a few tests per year versus one with a dedicated testing program. We’ll use a hypothetical company with these numbers:
- Visitors per month: 500,000 (30% of which will be tested on: 150,000)
- Average order value: $50
- Current conversion rate: 2.5%
- Average Experiment Win Rate: 10%
- Average Gain Per Win: 10% (compounded on the most recent conversion rate)
Using these figures, the company would project this type of revenue growth over a year, depending on whether they’re running a few tests (avg. 1/month) or have a dedicated program (avg. 5/month):
Projected conversion rate by end of year:
- With a testing program: ≈4.43% (2.5% compounded by six 10% wins: 2.5% × 1.1⁶)
- With only a few tests: 2.75%
Projected revenue for the year:
- With a testing program: $3,183,750
- With only a few tests: $2,400,000
That’s $783,750 of potential revenue to gain for the year.
As you can see, maximizing your testing capacity increases the number of wins and compounds gains. When only testing occasionally, the company ran 12 tests overall and achieved just one win, since their average win rate is 10%. But with a program, they ran 60 tests and got 6 wins over the year. Testing faster produces learnings that compound gains with each win, causing conversion rate to incrementally increase. The results continue to grow as the company tests into the following years.
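The projection above can be reproduced with a short script. Two modeling assumptions are added here that the article leaves implicit: revenue counts only the tested 30% of traffic, and wins are spread evenly across the year with each lift applied at the start of its period. Under those assumptions the program scenario lands close to the figures above, while the occasional tester’s total shifts depending on when its single win happens to land.

```python
def project_year(tested_visitors_per_month, aov, base_cr,
                 tests_per_month, win_rate, gain_per_win):
    """Project 12 months of revenue from tested traffic, compounding a
    `gain_per_win` lift onto the conversion rate each time a test wins.
    Assumes wins are spread evenly across the year (a modeling choice,
    and at least one win is expected)."""
    expected_wins = int(tests_per_month * 12 * win_rate)
    # Months in which a win lands: evenly spaced, starting at month 0.
    win_months = {i * 12 // expected_wins for i in range(expected_wins)}
    cr, revenue = base_cr, 0.0
    for month in range(12):
        if month in win_months:
            cr *= 1 + gain_per_win   # compound the lift on the current rate
        revenue += tested_visitors_per_month * cr * aov
    return cr, revenue

# Dedicated program: 5 tests/month -> 6 wins -> CR ends at 2.5% * 1.1^6
program_cr, program_rev = project_year(150_000, 50, 0.025, 5, 0.10, 0.10)
# Occasional testing: 1 test/month -> 1 win over the year
few_cr, few_rev = project_year(150_000, 50, 0.025, 1, 0.10, 0.10)
print(f"program:   CR {program_cr:.2%}, revenue ${program_rev:,.0f}")
print(f"few tests: CR {few_cr:.2%}, revenue ${few_rev:,.0f}")
```

With the article’s inputs, this model ends the year at a conversion rate of 2.5% × 1.1⁶ ≈ 4.43% and roughly $3.18M for the program scenario; the occasional-testing figure matches the article’s $2.4M when the single win is assumed to land about a third of the way through the year rather than at the start.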
Above All Else, Make Continuous Optimization a Priority
Resources and time are tight at every company, but to tap into the potential revenue you’re leaving behind, it’s time to invest. A dedicated testing program empowers your organization to reach for greater conversion rates with a methodical, repeatable process. Companies must establish that continuous optimization is a real need before they can build out an optimization machine. Then, you’ll be able to test quickly enough to achieve increased revenue growth.