Growing retail sales takes intelligence. Tracking how sales perform against norms can provide critical insights, sharpen sales strategy, and help tackle problems; but setting the right norm is the real challenge. Which benchmark will provide a true point of reference and comparison for product sales?
In practice, retailers work with two common benchmarking approaches: time-based and similar-stores benchmarking. The trouble is, neither provides accurate insights. Here’s why.
Time-based benchmarking tracks sales over time and triggers alerts when sales decrease. But a decrease in sales is not, by itself, necessarily indicative of a problem. It can result from conditional changes that legitimately affect product sales, such as promotional campaigns, price changes, and seasonality. That means time-based benchmarking is likely to generate many false alarms when conditions change, especially when they change frequently. Moreover, this approach systematically overlooks situations in which sales increase yet remain below their actual potential.
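To make the failure mode concrete, here is a minimal sketch of a time-based alert rule; the window, threshold, and sales figures are all illustrative, not drawn from any real system:

```python
# Naive time-based benchmark: alert when this week's sales fall below
# a fixed fraction of the trailing average. Window and threshold are
# illustrative choices.

def time_based_alerts(weekly_sales, window=4, drop_threshold=0.8):
    """Flag weeks whose sales fall below drop_threshold * trailing mean."""
    alerts = []
    for i in range(window, len(weekly_sales)):
        baseline = sum(weekly_sales[i - window:i]) / window
        if weekly_sales[i] < drop_threshold * baseline:
            alerts.append(i)  # flagged even when the dip is just seasonality
    return alerts

# A seasonal dip (week 6) triggers a false alarm, while week 10,
# which sells above recent weeks but may still be below its true
# potential, is never flagged.
sales = [100, 102, 98, 101, 100, 99, 70, 100, 101, 103, 110]
print(time_based_alerts(sales))  # [6]
```

The rule has no way to distinguish a seasonal dip from an operational failure, which is exactly why conditional changes produce false alarms.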
To mitigate the fluctuations that make time-based benchmarking problematic, retailers tend to aggregate data to a level that is less sensitive to short-term swings, such as by region or category. It’s a tradeoff with a high cost: aggregation means losing the granular details that could provide valuable insights, such as ‘Product A in Store B is selling below its demand potential, possibly due to some operational failure.’
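A toy example of that cost, with hypothetical numbers: a severe store-level shortfall can disappear entirely in the regional total.

```python
# Illustration of the aggregation tradeoff: store-level shortfalls vanish
# when sales are rolled up to the region. All figures are hypothetical.

region_sales = {
    # (store, product): (actual_units, expected_units)
    ("Store B", "Product A"): (40, 100),   # well below potential
    ("Store C", "Product A"): (160, 100),  # over-performing
}

actual = sum(a for a, _ in region_sales.values())
expected = sum(e for _, e in region_sales.values())
print(actual, expected)  # 200 200: the regional view shows no problem

# Only the granular view reveals the issue:
for (store, product), (a, e) in region_sales.items():
    if a < 0.8 * e:
        print(f"{product} in {store} is selling below its demand potential")
```

At the regional level the over- and under-performing stores cancel out, so the very insight the retailer needs is the one aggregation erases.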
Similar-stores benchmarking triggers alerts when a decrease in sales is spotted with respect to a segment (cluster) of similar stores at a given time.
Hypothetically, if completely identical stores were exposed to exactly the same conditions, including demographics, staff, and location effects, they should, in theory, sell each and every product at about the same level. In reality, however, myriad factors impact in-store product sales, and no two stores are exactly alike. To make matters worse, information on the factors that impact local sales, such as a promotion a local competitor is running, is usually not available to the retailer and can change frequently.
That leaves only a few known and persistent factors to choose from, such as store size, format, and demographic profile. With so few factors to select on, similar-stores benchmarking is mainly valuable for aggregated sales benchmarking, which is less sensitive to dynamic conditional changes over short time frames. This is why most of these segmentations are executed only once or twice a year, overlooking the many local factors that can change in the interim.
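A minimal sketch of similar-stores benchmarking under these constraints; the attributes, sales figures, and threshold are illustrative:

```python
# Similar-stores benchmark: bucket stores by a few static attributes
# (size band, format) and compare each store's sales to its cluster's
# average. Data and threshold are illustrative.
from collections import defaultdict
from statistics import mean

stores = {
    "S1": {"size": "large", "format": "urban", "sales": 120},
    "S2": {"size": "large", "format": "urban", "sales": 118},
    "S3": {"size": "large", "format": "urban", "sales": 60},
    "S4": {"size": "small", "format": "rural", "sales": 40},
}

clusters = defaultdict(list)
for sid, s in stores.items():
    clusters[(s["size"], s["format"])].append(sid)

def cluster_alerts(threshold=0.8):
    alerts = []
    for members in clusters.values():
        if len(members) < 2:
            continue  # a singleton cluster has no peer benchmark
        avg = mean(stores[m]["sales"] for m in members)
        for m in members:
            if stores[m]["sales"] < threshold * avg:
                alerts.append(m)
    return alerts

print(cluster_alerts())  # ['S3']
```

The clusters are built once from static attributes, so anything that changes between segmentation runs, such as a local competitor's promotion, never enters the benchmark.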
A New Approach to Benchmarking
With the development of Artificial Intelligence tools, new forms of dynamic benchmarking are now available. Pattern-Based Benchmarking, for example, applies machine-learning algorithms to sales data to uncover consumer behavioral patterns across stores and products, including patterns that neither linear time-based nor similar-stores benchmarking can identify. These behavioral patterns act as dynamic, accurate benchmarks for the expected demand of a SKU at a store. Moreover, instead of a rigid store segmentation revisited only once or twice a year, these techniques can generate ‘fuzzy clusters’, in which a store is compared simultaneously to various subsets of stores for different subsets of products. Using these tools, hidden factors such as local competition, availability, and operational failures become visible and actionable for the retailer.
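To give a flavor of the fuzzy-cluster idea (this is a toy sketch, not CB4's actual algorithm): instead of assigning each store to one fixed segment, peers can be weighted by how similar their recent sales patterns are for a given product, and the benchmark becomes a similarity-weighted average. All data and the weighting function are illustrative.

```python
# Toy pattern-based benchmark: for one SKU, weight each peer store by the
# similarity of its past sales pattern (a soft, "fuzzy" cluster) and use
# the weighted average of peers' latest sales as the expected demand.
from math import exp

# Weekly sales of one SKU across stores; S3 collapses in the last week.
history = {
    "S1": [10, 12, 11, 13],
    "S2": [11, 12, 12, 13],
    "S3": [30, 31, 29, 5],
    "S4": [29, 30, 31, 33],
}

def similarity(a, b):
    """Soft weight from the distance between past patterns (all but last week)."""
    d = sum((x - y) ** 2 for x, y in zip(a[:-1], b[:-1])) ** 0.5
    return exp(-d / 10)

def expected_last_week(store):
    """Benchmark: similarity-weighted average of peers' latest sales."""
    num = den = 0.0
    for other, series in history.items():
        if other == store:
            continue
        w = similarity(history[store], series)
        num += w * series[-1]
        den += w
    return num / den

actual = history["S3"][-1]
benchmark = expected_last_week("S3")
print(actual, round(benchmark, 1))
# S3's closest peer by pattern is S4, so the benchmark sits near S4's
# latest sales and flags a drop that a yearly segmentation would miss.
```

Because the weights are recomputed from current data, the "cluster" shifts as conditions shift, which is what lets this style of benchmark stay sensitive at the store-and-SKU level.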
Contact us to learn how CB4 uses your simple point-of-sale data to provide a dynamic sales benchmark for your brick-and-mortar chain.