Quantifying the Value of Iterative Experimentation
Over the past decade, most technology companies and a growing number of conventional firms have adopted online experimentation (or A/B testing) into their product development process. Initially, A/B testing was deployed as a static procedure: an experiment was conducted by randomly assigning half of the users to the control (the standard offering) and the other half to the treatment (the new version). The results were then used to inform the decision of which version to release widely. More recently, as experimentation has matured, firms have developed a more dynamic approach in which a new version (the treatment) is gradually released to a growing number of units through a sequence of randomized experiments, known as iterations. In this paper, we develop a theoretical framework to quantify the value created by such dynamic or iterative experimentation. We apply our framework to seven months of LinkedIn experiments and show that iterative experimentation led to an additional 20% improvement in one of the firm's primary metrics.
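The staged rollout described above, in which a treatment is released to a growing fraction of units and each iteration can halt the release, can be illustrated with a minimal simulation. This is a hypothetical sketch, not the paper's framework: the function name, ramp schedule, and stopping rule are all illustrative assumptions.

```python
import random
import statistics

def iterative_rollout(effect, n_users=10_000, ramps=(0.01, 0.05, 0.25, 0.5),
                      z_cut=-2.0, seed=0):
    """Hypothetical staged rollout: each iteration exposes a larger fraction
    of users to the treatment, runs a randomized experiment on a fresh
    cohort, and aborts the release if the treatment looks harmful.

    `effect` is the true treatment effect on the (simulated) metric;
    all parameter names and defaults are illustrative assumptions.
    """
    rng = random.Random(seed)
    for ramp in ramps:
        n_t = int(n_users * ramp)   # treated units in this iteration
        n_c = n_t                   # equal-sized control group
        treat = [rng.gauss(effect, 1.0) for _ in range(n_t)]
        ctrl = [rng.gauss(0.0, 1.0) for _ in range(n_c)]
        # Two-sample z-statistic for the difference in means.
        se = (statistics.variance(treat) / n_t
              + statistics.variance(ctrl) / n_c) ** 0.5
        z = (statistics.mean(treat) - statistics.mean(ctrl)) / se
        if z < z_cut:               # significantly harmful: stop early
            return ("aborted", ramp)
    return ("shipped", ramps[-1])
```

The value of iteration in this toy setting is that a harmful treatment is stopped while only a small fraction of users is exposed, whereas a static 50/50 test would expose half of all users before any decision is made.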