Percent Change Estimation in A/B Testing

Abstract: 

Tech companies rely heavily on A/B testing to evaluate the impact of potential changes to their products. For example, does adding a new feature to the recommender system increase user engagement?

Traditionally, A/B experiments have focused on testing whether a difference exists between the mean of a metric under treatment and its mean under control.

However, the difference between the average effects is less interpretable than the percent change between treatment and control. Percent change is scale-free, making the comparison across different experiments and different metrics more natural.
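To illustrate the scale-free property described above, the sketch below compares two hypothetical experiments on metrics with very different scales (all numbers are invented for illustration, not from the talk):

```python
# Two hypothetical experiments on metrics with different scales.
control_a, treatment_a = 2.0, 2.1      # e.g. average watch time (hours)
control_b, treatment_b = 200.0, 210.0  # e.g. average daily views

# Absolute differences are hard to compare across metrics.
diff_a = treatment_a - control_a   # ~0.1
diff_b = treatment_b - control_b   # ~10.0

# Percent change puts both effects on the same scale.
pct_a = 100 * (treatment_a - control_a) / control_a  # ~5.0 (%)
pct_b = 100 * (treatment_b - control_b) / control_b  # ~5.0 (%)
print(pct_a, pct_b)
```

Both experiments show roughly a 5% lift, even though their absolute differences differ by two orders of magnitude.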

In this talk I will present a Bayesian statistical framework for testing and estimating the percent change between treatment and control. When pre-experimental data are available, the framework leverages them to obtain more accurate point estimates and tighter credible intervals. Code implementing this framework is freely available as an R package called abpackage.
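The talk's Bayesian method and the abpackage API are not reproduced here. As a rough stand-in for the inference task it addresses, the sketch below computes a percent-change point estimate and a 95% interval via a simple nonparametric bootstrap on simulated data (all values and parameters are hypothetical):

```python
import random

random.seed(0)

# Hypothetical per-user metric values from an A/B experiment.
control = [random.gauss(10.0, 2.0) for _ in range(500)]
treatment = [random.gauss(10.5, 2.0) for _ in range(500)]

def percent_change(t, c):
    """Percent change of the treatment mean relative to the control mean."""
    mean = lambda xs: sum(xs) / len(xs)
    return 100 * (mean(t) - mean(c)) / mean(c)

# Bootstrap the sampling distribution of the percent change:
# resample both arms with replacement and recompute the statistic.
boot = []
for _ in range(2000):
    t = random.choices(treatment, k=len(treatment))
    c = random.choices(control, k=len(control))
    boot.append(percent_change(t, c))
boot.sort()

# 95% interval from the empirical 2.5% and 97.5% quantiles.
lo, hi = boot[int(0.025 * len(boot))], boot[int(0.975 * len(boot))]
print(f"point estimate: {percent_change(treatment, control):.2f}%")
print(f"95% interval: ({lo:.2f}%, {hi:.2f}%)")
```

Unlike the Bayesian framework in the talk, this sketch cannot incorporate pre-experimental data; it only conveys what "estimating the percent change with an uncertainty interval" means in practice.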

Bio: 

I am a Quantitative Analyst on the Data Science Team at YouTube. Before joining YouTube I was a PhD student in Statistics at Duke University. Prior to that I received an MS in Mathematical Engineering from Politecnico di Milano and an MS in Engineering from Ecole Centrale Paris.
