Using Uncertainty Modeling to Better Predict Demand

by Bloomberg Stocks

In the effort to reduce waste and eliminate redundancy, many companies have exposed themselves to greater risks of supply chain disruption, despite heavy investment in data analytics around demand prediction that should, in principle, drive out uncertainty. This article argues that the failure of demand prediction models is rooted in the fact that they do not take into account how data is generated, but simply explore apparent relationships in aggregated data that has been transferred from other functions in the organization. By unpacking the aggregation through a process the authors call uncertainty modeling, data scientists can identify new parameters to plug into the prediction models, which brings more information into the predictions and makes them more accurate.

The Covid-19 pandemic has triggered widespread supply chain disruptions across the globe: chip shortages are forcing automobile and medical equipment manufacturers to cut back production, while the blockage of the Suez Canal and a shortage of shipping containers have inflated delivery lead times and shipping prices. The effects of these disruptions have been exacerbated by management practices, such as just-in-time manufacturing, that aim to reduce redundancy in operations: with the redundancies have gone the safety buffers previously available to business supply chains.

Of course, companies understood the risks involved in eliminating buffers from the supply chain, which is why they have been investing increasingly heavily in sophisticated data analytics. If they could better understand the bottlenecks in their supply chains, the thinking went, companies would, in theory, be able to operate with less redundancy without incurring extra risk. But the disruptions persist.

Our research across multiple industries, including pharma and fast-moving consumer goods, shows that this persistence has less to do with shortcomings in the software than with how it is implemented. To begin with, managers tend to ground their analysis within their own departmental units. Although sales and marketing teams can contribute important insights and data, their input often goes unsolicited by operational decision-makers.

In addition, analytical solutions narrowly focus on the firm’s own supply chain. Best practices remain case-specific, and analytics models too often remain disconnected from trends in the larger ecosystem. As the examples cited above illustrate, one seemingly local disruption can snowball worldwide.

How can firms best avoid these traps? Let’s begin by looking in more detail at what data analytics involves.

What Are Data Analytics?

Data-driven analytical methods can be categorized into three types:

Descriptive analytics.

These handle the “what happened” and “what is happening” questions and are rich in visual tools such as pie charts, scatter plots, histograms, statistical summary tables, and correlation tables. Sporting goods chain The Gamma Store, for instance, uses statistical process control charts to identify in-store customer-engagement snags.

Predictive analytics.

These use advanced statistical algorithms to forecast the future values of the variables on which decision-makers depend. They address the question of “what will happen in the future.” The predictions generated are usually based on observed historical data about how the decision variables respond to external changes (say, shifts in interest rates or the weather). Retailers like Amazon rely on predictive data about customer demand when placing orders from suppliers, while fast-moving consumer goods producers such as Procter & Gamble and Unilever have been investing in predictive analytics to better anticipate retailer demand for their products.

Prescriptive analytics.

These support decision-makers by informing them about the potential consequences of their decisions and by prescribing actionable strategies aimed at improving business performance. They are based on mathematical models that stipulate an objective function and a set of constraints to place real-world problems into an algorithmic framework. Airlines have been exploiting prescriptive analytics to dynamically optimize ticket prices over time. Logistics companies such as UPS also apply prescriptive analytics to find the most efficient delivery routes.
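
To make that structure concrete, here is a minimal, generic sketch of the prescriptive pattern just described, an objective function optimized under constraints, expressed as a small linear program. The products, coefficients, and capacities are hypothetical; this illustrates the modeling pattern, not any airline’s or UPS’s actual system.

from scipy.optimize import linprog

# Hypothetical planning problem: choose weekly production of products A and B.
# Objective: maximize profit 40*A + 30*B (linprog minimizes, so negate it).
c = [-40, -30]

# Constraints: 2A + B <= 100 machine hours, A + 2B <= 80 labor hours.
A_ub = [[2, 1], [1, 2]]
b_ub = [100, 80]

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(result.x, -result.fun)  # optimal plan (A=40, B=20) and its profit (2,200)

Swapping in real cost data and pricing or routing constraints is what turns this skeleton into the kind of prescriptive tools the examples above describe.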

Firms typically use all of these methods, and they reflect the stages of decision-making: from the analysis of a situation, to the prediction of key performance drivers, and then to the optimization analysis that results in a decision. The weak link in this sequence is prediction. It was the inability of its famed predictive data analytics to accurately forecast demand and supply that forced Amazon to destroy an estimated 130,000 unsold or returned items each week in just one of its UK warehouses.

In most cases, predictive analyses fail because of assumptions and choices made around how the analyzed data were generated. Abraham Wald’s study of post-mission aircraft in World War II provides the classic example. The research group he belonged to was trying to predict which areas of returning aircraft were most likely to be hit by enemy fire, and it recommended strengthening the areas that were struck most frequently. Wald challenged this recommendation and advised reinforcing the untouched areas instead: aircraft hit in those areas were more likely to have been shot down and so never appeared in the observed data. It was only by considering how the data were generated that military officers were able to correct the decision about which parts of the aircraft to bolster.

The solution lies in an approach to analytics known as uncertainty modeling, which explicitly addresses the question of data generation.

What Does Uncertainty Modeling Do?

Uncertainty modeling is a sophisticated statistical approach to data analytics that enables managers to identify key parameters associated with data generation in order to reduce the uncertainty around the predictive value of that data. In a business context, what you are doing is building more information about the data into a predictive model.

To understand what’s happening, imagine that you are a business-to-business firm that receives one order every three weeks from one customer for one of your products. Each order must be delivered immediately, making the demand lead time negligible. Now suppose that the customer’s first order is 500 units, and that she plans to increase that quantity by another 500 units for each new order but does not inform the company that this is her plan.

What does the company see? The customer will order 500 units in week three, 1,000 units in week six, 1,500 units in week nine, and so on. Grouping the weeks into four-week months, this generates monthly demand values of 500, 1,000, 3,500, 2,500, and 3,000 units for the first five months (the third month, weeks nine through 12, contains two orders), for an average of 2,100 units per month. But because the actual demand data deviate substantially from that average, it is a highly uncertain forecast. That uncertainty disappears entirely, however, once the company learns that the customer is systematically increasing her purchases by 500 units with each order.
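
The arithmetic is easy to verify with a few lines of code. The sketch below is a toy illustration (not the authors’ model): it reproduces the order stream, aggregates it into four-week months the way a demand fulfillment system would, and then shows that the order-level rule of adding 500 units per order pins down the next order exactly.

from statistics import mean, pstdev

# Toy order stream: one order every three weeks, growing by 500 units each time.
orders = [(week, 500 * i) for i, week in enumerate(range(3, 21, 3), start=1)]
# -> [(3, 500), (6, 1000), (9, 1500), (12, 2000), (15, 2500), (18, 3000)]

# Aggregate into four-week "months," as a demand fulfillment system would.
monthly = [0] * 5
for week, quantity in orders:
    monthly[(week - 1) // 4] += quantity
print(monthly)                         # [500, 1000, 3500, 2500, 3000]
print(mean(monthly), pstdev(monthly))  # mean 2100, with large dispersion

# With the data-generation rule recovered from order-level records,
# the next order is known exactly: the previous quantity plus 500 units.
print(orders[-1][1] + 500)             # 3500 units, no uncertainty left

The monthly totals look volatile, yet the underlying process is deterministic; the apparent volatility is purely an artifact of aggregation.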

For production managers to spot this kind of information, they need to look beyond purchase numbers. In most companies, customer order information is stored in an order management system, which tracks data such as when orders are placed, when delivery is requested, and which products are demanded in what quantities. This system is usually owned and managed by the sales department. After customer orders are fulfilled, aggregated information about completed orders is transferred to the demand fulfillment system, usually owned by production and operations, which managers in those functions then analyze to predict future demand.

The trouble is that the process of aggregation often entails a loss of information. With uncertainty modeling, however, managers can identify key parameters from the order management system and use them to restore that information to their predictive analytics.

Rescuing Information at Kordsa

Kordsa, the Turkish tire reinforcement supplier, provides a concrete example. The company receives large orders from its customers (tire manufacturers), but the number of orders in each period, as well as the quantity and delivery date of each order, is uncertain. Previously, the company simply aggregated the customer order information to calculate historical monthly demand values, which were then analyzed. As a result, the number of uncertain parameters fell from three (order count, order quantity, and delivery date) to one (monthly demand), incurring a significant loss of information.

Using uncertainty modeling, we showed Kordsa how to avoid this information loss and achieve significant improvements in key performance indicators such as inventory turnover and fulfillment rate. By applying advanced algorithms such as the fast Fourier transform, we were able to identify key customer-order parameters in the company’s CRM data and integrate them into its demand prediction model.
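
As a generic sketch of that idea (hypothetical data, not Kordsa’s actual model), the fragment below uses a fast Fourier transform to surface a customer’s dominant ordering cycle from an order-level time series; a parameter like that can then be fed into a demand prediction model alongside the aggregate history.

import numpy as np

# Hypothetical weekly order quantities: a customer orders roughly every
# three weeks, with some noise in the quantities.
rng = np.random.default_rng(0)
weeks = 102
demand = np.zeros(weeks)
demand[::3] = 1000 + rng.normal(0, 50, size=len(demand[::3]))

# FFT of the demeaned series; the dominant frequency reveals the cycle.
spectrum = np.abs(np.fft.rfft(demand - demand.mean()))
frequencies = np.fft.rfftfreq(weeks, d=1.0)          # cycles per week
dominant = frequencies[np.argmax(spectrum[1:]) + 1]  # skip the zero frequency
print(f"estimated ordering cycle: {1 / dominant:.1f} weeks")  # ~3.0 weeks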

To better leverage the power of uncertainty modeling, Kordsa has since created an advanced analytics team drawn from R&D, sales, production, planning, and IT. Team members regularly interact with different departments to better understand and identify the data and sources used in decision-making processes outside their own functions, which can then be factored into their predictive analytics.

This kind of boundary-spanning should not stop at the company’s gates. It is not only the decisions of customers and suppliers that drive demand uncertainty; decisions by players in adjacent industries producing complementary or substitute products can shift demand as well. Getting close to the data that these players generate can only help reduce uncertainty around the performance drivers you need to be able to predict.

. . .

Although manufacturers and retailers invest in data analytics to improve operational efficiency and demand fulfillment, many of the benefits of those investments go unrealized. Information gets lost as data is aggregated before being transferred across silos, which magnifies the uncertainty around predictions. By applying the math of uncertainty modeling to incorporate key information about how data is generated, data scientists can capture the effects of previously ignored parameters and significantly reduce the uncertainty surrounding demand and supply predictions.
