
What is the Best Way to Make Business Strategy Decisions?
At Value Driven Analytics, we strongly recommend making data-driven decisions. A decision that worked out well at a previous company or on a past project carries no guarantee of being optimal going forward, so relying on past experience or instinct alone is risky. Don't make the same mistake as the former 17-month CEO of J.C. Penney, who famously said "We didn't test at Apple" before rolling out a pricing strategy that quickly halved the company's value. Instead, test and measure every major decision.
Some organizations pull together high-level data and trends to justify certain decisions. This gives off the "feel" of data-driven decision-making and can be effective in some cases, but it is important to keep a sharp eye on the rigor behind how the data is actually being used. High-level data pulls and trends can easily be manipulated to justify what a decision-maker was already planning to do based on past experience or instinct. As Rex Stout put it, "There are two kinds of statistics, the kind you look up and the kind you make up." Data that has been cherry-picked this way does not produce a data-driven decision, even though it may deceptively look like one because a presentation includes some numbers.
Our recommendation is to run a robust testing and measurement process for every major decision. For initiatives that would take significant effort to implement, we recommend starting with a phase 1 "feasibility test," in which the initiative is tried in just a handful of areas to determine, based mainly on qualitative feedback, whether the change seems to be working. If that goes well (or for decisions such as website changes that require little effort to implement), roll the change out further, to roughly 20% of the impacted population (the "test group"), in order to get a true statistical read on the impact. This phase 2 "statistical test" is also known as an A/B test.
In this phase, it is critical that the 20% of the population receiving the change be randomly selected. The randomization unit can be people, zip codes, cities, stores, DMAs, etc., but keep in mind that more granular randomization (person-level, for instance) provides more statistical power and therefore a more precise read on the impact of the change. Random assignment ensures that the makeup of the test group and the non-test group (the "control group") is essentially the same in aggregate in every way, except that the test group experiences the new change (the "treatment"); this is essentially a scientific experiment!
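As an illustration, here is a minimal sketch of person-level random assignment in Python. The customer table, its column names, and the 10,000-customer population are all invented for the example.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)  # fixed seed so the assignment is reproducible

# Hypothetical population: 10,000 customer IDs (a stand-in for a real customer table).
customers = pd.DataFrame({"customer_id": range(1, 10_001)})

# Randomly flag roughly 20% of customers as the test group; everyone else is control.
customers["group"] = np.where(rng.random(len(customers)) < 0.20, "test", "control")

print(customers["group"].value_counts(normalize=True))
```

Randomizing at a coarser unit (stores or DMAs, for example) works the same way; the unit IDs simply take the place of the customer IDs.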
At the end of phase 2, a statistical test such as a t-test can be conducted to compare performance (sales, for example) between the 20% test group and the remaining 80% control group to determine 1) whether the test group performed better and 2) whether the difference in performance is statistically significant. If the change passes both of these checks (and the benefit justifies any cost associated with the change), then the organization can confidently roll it out to 100% of the population with a solid estimate of the impact it will have. Repeat this process for all future decisions, and your organization will soon have a rhythm of robust testing and will only implement changes that actually lead to better results. See the video below to learn more about A/B testing.
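For concreteness, here is a minimal sketch of that phase 2 read-out using a two-sample t-test in Python. The per-customer sales figures are synthetic and purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=7)

# Synthetic per-customer sales for a 20% test group and an 80% control group.
test_sales = rng.normal(loc=105, scale=20, size=2_000)
control_sales = rng.normal(loc=100, scale=20, size=8_000)

# Welch's t-test: compares the group means without assuming equal variances.
t_stat, p_value = stats.ttest_ind(test_sales, control_sales, equal_var=False)

lift = test_sales.mean() - control_sales.mean()
print(f"Estimated lift in mean sales: {lift:.2f}, p-value: {p_value:.4f}")

# Check 1: did the test group perform better? Check 2: is the difference significant?
if lift > 0 and p_value < 0.05:
    print("Roll out: the test group beat control at the 5% significance level.")
else:
    print("Hold off: no statistically significant improvement detected.")
```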



