In the last three to five years, something has changed in conversations about supply chains and logistics. Suddenly, discussions treat big data as a driver of incredible innovation. The assumption that big data was superior to "just any data" came from evidence in the B2C market, but that did not stop bold predictions about how it would also transform supply chain operations. And so the floodgates were thrown wide open, and every supply chain software company started pitching itself as a "big data" supply chain software company.
However, dumping transactional data from the transactional system(s) into a data warehouse and using it mostly for massive reporting jobs does not meet the need for big data analysis with immediate operational application. To use big data effectively, supply chain and logistics operators need effective "big data" tools.
What has not changed in the supply chain landscape?
The number of core systems producing data valuable enough for supply chain improvement is relatively small. The volume and velocity of data have not changed significantly at either end of the supply chain, nor at the execution level.
What has changed in the supply chain landscape?
The understanding that supply chain professionals must now continually sense and respond to supply chain problems by making highly accurate business decisions "right now", not later. To do this, they need advanced analytical techniques they can apply to datasets whose volume, velocity, or variety exceeds the computing capability of their legacy IT tools.
In this blog, I deal exclusively with manufacturing industries operating in the B2B market. My next blog will examine the logistics and transportation industries.
Data, data (big and small) everywhere
Consider that on the buy side of supply chains, neither the volumes of data nor their complexity has increased in B2B markets. There are, however, exceptions. Food, beverage, and pharmaceutical producers have seen the capabilities of sensors, monitoring equipment, and tracers in the cold chain increase. Real-time supply flow optimization based on this additional data can have a significant impact on production processing and sequencing decisions, as well as on the prevention of losses due to spoilage.
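To make that last point concrete, here is a minimal sketch of how cold-chain sensor readings could feed a sequencing decision: lots whose shelf life has been eroded by temperature excursions are processed first. The lot structure, the decay rule, and all numbers are illustrative assumptions, not a reference implementation.

```python
# Hypothetical sketch: sequence perishable lots by remaining shelf life,
# estimated from cold-chain temperature readings. Names and thresholds
# are assumptions for illustration only.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Lot:
    lot_id: str
    base_shelf_life_h: float            # nominal shelf life at target temperature
    temp_readings_c: List[float]        # cold-chain sensor readings in Celsius
    target_temp_c: float = 4.0


def remaining_shelf_life(lot: Lot, decay_per_degree_h: float = 2.0) -> float:
    """Crude estimate: every degree-reading above target eats into shelf life."""
    excess = sum(max(t - lot.target_temp_c, 0.0) for t in lot.temp_readings_c)
    return max(lot.base_shelf_life_h - decay_per_degree_h * excess, 0.0)


def sequence_lots(lots: List[Lot]) -> List[Lot]:
    """Process the most at-risk lots first to reduce spoilage losses."""
    return sorted(lots, key=remaining_shelf_life)


if __name__ == "__main__":
    lots = [
        Lot("A-101", 72.0, [4.1, 4.0, 3.9]),
        Lot("A-102", 72.0, [4.0, 9.5, 8.7]),   # cold-chain excursion
        Lot("A-103", 48.0, [4.2, 4.3, 4.1]),
    ]
    for lot in sequence_lots(lots):
        print(lot.lot_id, round(remaining_shelf_life(lot), 1), "hours left")
```

A real system would replace the crude decay rule with validated shelf-life models and feed the resulting sequence into the production scheduler, but the decision logic follows the same shape.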
In the middle of the chain, within the production/service segment, machines are getting more complex, fitted with an increasing range of sensors and monitoring equipment. This significantly increases the amount of data available for building real-time systems that monitor and optimize the inventories in front of the machines. But if supply chain management concerns itself only with warehoused inventories, and not production floor inventories, then the value of this "big data" to supply chain decisions is low.
On the sell side, while collaborative interconnection to buyers' systems is now possible, the quantity of data shared between the participants has not increased much beyond the industry-specific practices of the past (e.g. automotive OEMs and their suppliers). Even though access to far larger quantities of data could be opened up, the critical indicators of value to supply chain decisions are still the same as before. There may be some value in behavioral data about consumers (the customers of your company's buyers), but unless your company is part of a fast-moving consumer goods (FMCG) value chain, analyzing the vast amount of consumer data available at the retail level would not make your B2B supply chain decisions any smarter than they are right now.
I deliberately divided the discussion into the buy side, the produce side, and the sell side to illustrate a broader point. While data quantities are generally getting bigger, supply chain "big data" is still made up of large data silos distributed among business functions and external sources, capable of being interconnected but largely not. Therefore all that "big data" talk does not deliver significant benefits to executing end-to-end supply chain decisions of the "mission critical" variety. To make insightful, optimal supply chain decisions, the optimization needs to make disparate data accessible by aggregating it into a single value chain optimization process, and then disaggregating the optimal decisions back to the functional "branches", so that any event at the end of each functional branch can always be considered in the context of one whole-chain optimum.
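The aggregate/disaggregate idea can be sketched in a few lines of code. The functional silos, the product keys, and the "optimization" (a simple bottleneck calculation) below are stand-in assumptions; a real planning engine would solve a proper optimization model, but the flow of data in and decisions out is the point being illustrated.

```python
# Minimal sketch: aggregate functional data silos into one whole-chain plan,
# then disaggregate that plan back to each functional branch.
# All names and the bottleneck "optimization" are illustrative assumptions.
from typing import Dict


def aggregate(buy_side: Dict[str, float],
              produce_side: Dict[str, float],
              sell_side: Dict[str, float]) -> Dict[str, float]:
    """Pull each functional silo into one view keyed by product."""
    plan = {}
    for product, demand in sell_side.items():
        supply = buy_side.get(product, 0.0)
        capacity = produce_side.get(product, 0.0)
        # The whole-chain optimum here is simply the binding constraint.
        plan[product] = min(supply, capacity, demand)
    return plan


def disaggregate(plan: Dict[str, float]) -> Dict[str, Dict[str, float]]:
    """Push the single whole-chain decision back out to each branch."""
    return {
        "procurement_targets": dict(plan),   # buy side: order this much material
        "production_targets": dict(plan),    # produce side: schedule this much output
        "sales_commitments": dict(plan),     # sell side: promise this much to buyers
    }


if __name__ == "__main__":
    buy = {"widget": 120.0, "gadget": 40.0}
    make = {"widget": 100.0, "gadget": 60.0}
    sell = {"widget": 90.0, "gadget": 70.0}
    whole_chain_plan = aggregate(buy, make, sell)
    for branch, targets in disaggregate(whole_chain_plan).items():
        print(branch, targets)
```

The design choice worth noting is that every branch receives targets derived from the same single plan, so a local event (a late shipment, a machine outage) is always evaluated against the whole-chain optimum rather than a local one.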
This leads me to conclude that, so far, we have not been able to properly close the gap between supply chain domain knowledge, supply chain data, and the "big data" fever. What we have improved dramatically is our capability to perform calculations on the available data that lead to optimal decisions. What took the legacy vendors of 10-15 years ago 24-48 hours, producing a precise and optimal plan, is now done in minutes. This has been accomplished not because computing power has increased, but because research in applied mathematics has progressed significantly over the same period.
All in all, even though we have developed some very practical B2B supply chain optimization applications, we accomplished this with the supply chain data available before "big data" became a "do or die" topic of conversation. But we have yet to achieve disruptive shifts for many supply chain management activities. For that, we need alignment between supply chain theory, an understanding of the value of all that additional data, and a holistic change in how companies view supply chain strategy: as enterprise strategy, not supply procurement strategy. If that alignment doesn't occur, the supply chain "big data" concept will remain just a concept.
If you found this topic interesting, leave your comment or suggestion. I look forward to hearing from you.