While demand forecasts are never perfect, they are an absolute necessity for most retailers. Good forecasting helps to ensure that retailers can supply the right product at the right time and location, maintain adequate inventory levels while avoiding stock-outs, reduce the chance of obsolete inventory, and improve price and promotion management.
Many traditional forecasting methods rely on time series analysis, using historical data and statistical models to generate forecasts. These models learn historical demand patterns and use past trends as a baseline for predicting future demand. Yet two major challenges are associated with this approach.
First, the assumption that past trends are stable and continuous is problematic, especially since the retail market is so dynamic: new products, promotions, seasonality, and other changes make it very hard to base forward-looking decisions on past behavior alone. This is why personal experience, local knowledge, and expert judgment are often used as critical inputs to override automated forecasts.
Second, there is an inherent conflict between the required granularity of the forecast and its accuracy. On one hand, for the forecast to be actionable, a high level of data granularity is required (e.g., SKU per store per day). On the other hand, such granularity often results in lower forecast accuracy. For example, it would be extremely valuable to predict the local demand for a specific product at a specific store. Such a precise prediction would enable the retailer to plan the exact amount of supply for each product at each store location. But the accuracy of that forecast would most probably be lower than that of a forecast of the product's demand across all stores. The Central Limit Theorem helps explain why: when data are aggregated, random fluctuations and extreme observations tend to cancel each other out, reducing the relative variability of the total. This well-known tradeoff between precision and accuracy (as precision increases, accuracy decreases, and vice versa) is the subject of many discussions, including one of my previous blogs, “Retail Analytics – How Granular Should You Get?”
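The effect of aggregation on forecast error can be illustrated with a small simulation. The numbers below (50 stores, a stable mean of 20 units per store per week) are hypothetical, and the forecast is deliberately naive; the point is only that the same noise that makes store-level forecasts inaccurate largely cancels out at the chain level.

```python
import random

random.seed(0)

N_STORES = 50
N_WEEKS = 200
TRUE_MEAN = 20  # hypothetical average weekly demand per store

# Simulate noisy weekly demand per store around a stable mean.
store_demand = [
    [max(0.0, random.gauss(TRUE_MEAN, 8)) for _ in range(N_WEEKS)]
    for _ in range(N_STORES)
]

def mape(actuals, forecast):
    """Mean absolute percentage error against a constant forecast."""
    return sum(abs(a - forecast) / forecast for a in actuals) / len(actuals)

# Store-level error: forecast each store's demand individually.
store_errors = [mape(weeks, TRUE_MEAN) for weeks in store_demand]
avg_store_error = sum(store_errors) / N_STORES

# Chain-level error: forecast the aggregated demand across all stores.
chain_demand = [sum(store_demand[s][w] for s in range(N_STORES))
                for w in range(N_WEEKS)]
chain_error = mape(chain_demand, TRUE_MEAN * N_STORES)

print(f"avg store-level MAPE: {avg_store_error:.1%}")
print(f"chain-level MAPE:     {chain_error:.1%}")
```

Running this shows the chain-level error is several times smaller than the average store-level error, which is exactly the accuracy gain that aggregation buys at the cost of granularity.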
To resolve the tradeoff between the need to be granular and actionable and the need to be accurate, many forecasting tools use a top-down approach. This is often achieved by first aggregating the data, either over a long period of time, across several stores, or across similar products and brands. This aggregation helps achieve the required accuracy level. Then, the aggregated forecast is broken down into its components. For example, the overall forecast for a product's demand across the entire chain is allocated to specific stores based on the percentages of past sales associated with those stores.
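A minimal sketch of this top-down allocation, with hypothetical store names and shares, looks like the following:

```python
# Hypothetical chain-level forecast for one product next week.
chain_forecast = 1_000  # units

# Historical share of chain sales per store (shares sum to 1).
past_sales_share = {
    "store_A": 0.50,
    "store_B": 0.30,
    "store_C": 0.18,
    "store_D": 0.02,
}

# Break the accurate aggregate forecast down proportionally.
store_forecasts = {
    store: round(chain_forecast * share)
    for store, share in past_sales_share.items()
}

print(store_forecasts)
# {'store_A': 500, 'store_B': 300, 'store_C': 180, 'store_D': 20}
```

The allocation step is trivially simple; the weakness, as the next paragraph explains, is the assumption that these historical shares stay stable.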
This approach can yield reasonable results if the demand is consistently spread across its components. In reality, however, this assumption is often unsound. For example, the fact that a specific store sold 2% of the total demand for a specific product on average does not guarantee that it will sell the same percentage the following week. Similar limitations apply to other forms of aggregation, such as the sales volume of products within a category or weekly demand during a season.
One way to address the limitations posed by conventional time series analysis is to apply machine learning algorithms to sales data. With this approach, one can uncover consumer behavioral patterns across stores and products, both historically and in real time, by analyzing multi-product patterns, including patterns that time-based or similar-store benchmarking cannot recognize. These consumer behavioral patterns act as dynamic and accurate benchmarks for the expected demand of a SKU at a store in any given period. Using these tools, hidden factors such as local competition, availability, and operational failures can be made visible and actionable for the retailer. One example of this approach can be found in the CB4 Analytics software solution (which will be discussed in another blog).
Demand forecasting will continue to be one of the most important elements of a retailer’s planning process. After all, without a clear projection of how much a product should sell at a specific location at a specific time, planning for demand becomes somewhat of a futile effort. The nirvana is the ability to predict demand precisely enough to fulfill local consumer preferences.
Contact us today to learn how our clients apply CB4’s patented machine-learning algorithm to their simple POS data to detect unmet demand across their entire chain.