
This is your classic type of experiment: we take the control (the original version of your website) and test it against different versions called variations, Version A vs B, C, D, E, etc., to see which version performs best. For example:
Control: Original version (no changes)
vs
Variation A: A single change to the control; this could be an image, content, headline or layout change
Variation B: Another single change, different to the control and also different from Variation A
This type of testing is done using software such as WebTrends, which controls which variation each user is sent to, ensuring a fair experiment. The variation changes are made within the software, which overrides the original website content when a user is sent to a variation.
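The even traffic split such software performs can be sketched with deterministic hash-based bucketing, a common technique. This is only an illustrative sketch, not WebTrends' actual implementation; the experiment name, user ID and variant labels are made up:

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants: list) -> str:
    """Deterministically bucket a user into one variant.

    Hashing user_id together with the experiment name gives each
    user a stable assignment, so they always see the same variation,
    while spreading users roughly evenly across variants.
    """
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

variants = ["control", "variation_a", "variation_b"]
# The same user always lands in the same bucket on repeat visits:
print(assign_variant("user-123", "homepage-headline", variants))
```

The key design point is determinism: a returning visitor must not flip between variations mid-experiment, or the results are contaminated.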
This type of test is done using two existing pages on your website. The main difference from A/B testing is that each variation is hosted on your website under its own URL, rather than being coded in the testing software. The software then simply sends traffic to the URLs evenly, for example:
Control: yourwebsite.com/page
Variation A: yourwebsite.com/pagev2
This type of test is useful for larger wholesale changes where we want to gauge overall performance differences. It is also useful for testing changes your developers have made as part of their development roadmap, to check whether the change will be detrimental to website performance.
Instead of testing one change at a time, multivariate testing (MVT) tests multiple elements simultaneously to see which combination performs best.
For example, testing:
3 headlines × 2 images × 2 button colours = 12 total combinations.
Use case:
When you want to understand interaction effects between page elements (how changes combine).
Best suited to high-traffic websites, as the number of combinations grows quickly with each extra element.
Best when you have the data volume to support many combinations.
Example:
Optimising hero sections where headline, image, and CTA may influence each other.
Key difference from A/B:
A/B tests ideally test one single change; MVT tests many simultaneous combinations.
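The 3 × 2 × 2 example above can be enumerated directly to see why the combination count grows so fast. The element values below are illustrative:

```python
from itertools import product

# Hypothetical element variants for a hero section
headlines = ["Save big", "Shop now", "New arrivals"]
images = ["hero_a.jpg", "hero_b.jpg"]
button_colours = ["green", "orange"]

# Every combination of headline x image x button colour
combinations = list(product(headlines, images, button_colours))
print(len(combinations))  # 3 x 2 x 2 = 12
```

Each combination needs enough traffic to reach a reliable read, which is why MVT demands far more visitors than a simple A/B test.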
Multi-armed what, you say? The aptly named multi-armed bandit (MAB) experiment is used to quickly find which version of a page works best, then automatically distribute more traffic to the variation that is winning. This means the most traffic goes to the highest-performing variation.
Testing software, such as WebTrends Optimise, uses machine learning to constantly evaluate the performance of each variation within an experiment and adjusts the traffic distribution in real time according to performance. Crucially, the machine learning does not wait for statistical significance to be achieved before it starts adjusting traffic.
Use case: This type of test is designed for sale periods such as Black Friday, Boxing Day or the January sales. By deploying a MAB test on your sales pages, you can maximise ROI during the sale period, rather than waiting for a winner and only catching the last few days of the sale with the best-performing version.
MAB testing also minimises loss, as it rapidly shifts traffic away from the poorer-performing variations in a test, ensuring fewer users see the lower-performing content.
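WebTrends Optimise's exact algorithm is proprietary, but one common way a bandit shifts traffic towards winners without waiting for significance is Thompson sampling. This is a minimal sketch under that assumption; the conversion figures are illustrative:

```python
import random

def thompson_pick(stats):
    """Pick the variation with the highest sampled conversion rate.

    stats maps variation name -> (conversions, views). Each variation
    gets a Beta(1 + conversions, 1 + views - conversions) posterior;
    sampling from the posteriors naturally routes more traffic to
    variations that are performing well, while still occasionally
    exploring the weaker ones.
    """
    best, best_draw = None, -1.0
    for name, (conversions, views) in stats.items():
        draw = random.betavariate(1 + conversions, 1 + views - conversions)
        if draw > best_draw:
            best, best_draw = name, draw
    return best

# Hypothetical running totals: variation_a converts noticeably better
stats = {"control": (30, 1000), "variation_a": (55, 1000)}
picks = [thompson_pick(stats) for _ in range(1000)]
print(picks.count("variation_a"))  # most picks go to the better performer
```

Because assignment is probabilistic rather than fixed, the losing variation still receives a trickle of traffic, so the algorithm can recover if early results were misleading.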






At the sharp end of CRO and experimentation, companies adopt its methodology and thinking business-wide. Experimentation functions best when it is not just performed on the website as the last stage of the funnel; it works best when it is considered and adopted by the internal teams responsible for Marketing, Product Development, Web Development and so on.
SpaceX are the best and most recent example of how testing and experimentation can literally take you to the stars! Their objective was to get a booster rocket into space and then safely back down to earth to be reused, saving billions of dollars. Did they think they would get it right first time? No. Second time? No, and so on. Their attitude was to collect data on each “failed” attempt and learn, taking those learnings into the next iteration to try again. Yes, it cost them hundreds of thousands of dollars on each attempt, but that would be easily repaid once it succeeded, and it did.
What set them apart was their attitude of not being scared to fail; instead, they were prepared to learn and adapt every time a rocket took off.
The same can be said of apps and websites. Usually we end up with one version that most stakeholders agree on launching with, then pray to the Conversion and SEO gods that it works. Companies that don't buy into experimentation upfront and embed it in their SOPs are then reticent to test the new website against the old version, or anything else new, for a period of time for fear of “failure”. This is a very different attitude and restricts growth.
By having a centralised experimentation programme in your business, stakeholders can funnel ideas into it and have them tested as part of a web development and experimentation roadmap. This drives a culture of testing and changes mindsets towards making changes for growth, rather than letting the loudest or highest-paid voice win. No one person, no matter their experience or salary, knows how hundreds or thousands of users will react to a variation.
If you think your business could benefit from experimentation, then get in touch via the button below. We are here to consult and help map out processes and frameworks for your organisation, getting experimentation off the ground and changing mindsets for growth.




Copyright Co-Labs 2025 | Company Number 2078941