Have you ever wondered what happens to your A/B test after it's completed? There's a missing element in how A/B tests are run today: once a test ends and the winning variation becomes the default, we tend to stop monitoring its results.
By working with some of the top product managers in the industry, we've learned that many of them use Anodot not only to monitor the A/B test while it's running, but mainly after it's completed: top product managers use Anodot to monitor how the winning variation performs in production.
Here's an example: one of our customers released a feature improvement to their main product branch after running an A/B test that validated, with a high confidence level, a potential 22% improvement in results. The feature worked as expected in production for more than six months until, for an unknown reason, performance drastically deteriorated. Anodot identified the drop in performance and helped pinpoint the root cause: a second A/B test had been released to the main branch using the same API, triggering an API quota violation. The incident was identified within 12 minutes of when it started and resolved about 60 minutes later.
Whether you run A/B tests or Multi-variant tests, Anodot can be used to monitor variations during and after the test.
HOW TO SET UP A/B TEST ANALYSIS WITH ANODOT
- Create ‘Test_Name’ Dimension (Example: “Test_Name”:”Pink Buttons”)
- Create ‘Test_Group’ Dimension (Example: “Test_Group”:”Control” / “Test_Group”:”Test”)
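To make the two dimensions concrete, here is a minimal sketch of what tagged data points might look like. The payload shape (`name`, `value`, `dimensions`) is a hypothetical illustration, not Anodot's exact metric schema; the point is that every data point carries both `Test_Name` and `Test_Group`, so variations can later be grouped and compared.

```python
import json

# Hypothetical metric payloads for the "Pink Buttons" test.
# Each point is tagged with both dimensions so the control and
# test groups can be split apart in any downstream analysis.
control_point = {
    "name": "checkout.conversions",
    "value": 131,
    "dimensions": {"Test_Name": "Pink Buttons", "Test_Group": "Control"},
}
test_point = {
    "name": "checkout.conversions",
    "value": 158,
    "dimensions": {"Test_Name": "Pink Buttons", "Test_Group": "Test"},
}

print(json.dumps(control_point))
print(json.dumps(test_point))
```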
HOW TO MONITOR THE TEST
- Create a composite metric dividing the potential by the actual (use the ratioPairs function), grouped by ‘Test_Group’ and ‘Test_Name’
- Use a statistical test to explore whether the test results are statistically significant. We recommend using this free tool:
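The two steps above can be sketched in plain Python. The ratio mirrors the kind of group-by-group comparison ratioPairs produces, and the significance check uses a standard two-proportion z-test (one common choice; the free tools mentioned typically run an equivalent computation). The sample counts below are made up for illustration.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))        # two-sided p-value
    return z, p_value

# Illustrative numbers: Control 100/1000 conversions, Test 140/1000.
lift = (140 / 1000) / (100 / 1000) - 1  # ratio of the two groups' rates
z, p = two_proportion_z_test(100, 1000, 140, 1000)
print(f"lift={lift:.0%}, z={z:.2f}, p={p:.4f}")
```

With these numbers the test shows a 40% lift at p < 0.01, i.e. the difference is statistically significant at the conventional 5% level.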
HOW TO MONITOR THE WINNER OVER TIME
That’s the easy part. Once the winning variation is live in production, create an anomaly alert on the winning metric and route it to the relevant recipient and/or channel.
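For intuition, here is a deliberately simplified stand-in for what an anomaly alert does. Anodot learns seasonal baselines automatically; this toy version just flags a value that strays more than a few standard deviations from its recent history, which is enough to illustrate catching a sudden drop like the API-quota incident described earlier. All numbers are invented for the example.

```python
import statistics

def is_anomalous(history, latest, num_sigmas=3.0):
    """Flag `latest` if it deviates more than `num_sigmas` standard
    deviations from the mean of `history`. A toy rule of thumb, not
    Anodot's actual (seasonality-aware) anomaly detection."""
    mean = statistics.fmean(history)
    std = statistics.pstdev(history)
    return std > 0 and abs(latest - mean) > num_sigmas * std

# Winning variation's daily conversion rate, stable around 14%:
history = [0.141, 0.139, 0.143, 0.138, 0.142, 0.140, 0.139]

print(is_anomalous(history, 0.141))  # in-range value -> False
print(is_anomalous(history, 0.095))  # sudden drop -> True
```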