By Christophe Vaissade, Sales Director EMEA, Western Digital
Today’s supply chains are rapidly evolving to reflect the increased complexity of world trade – a highly competitive and increasingly volatile environment, where customer requirements can change in an instant. This makes a one-size-fits-all fulfilment model inefficient at best and, at worst, downright confusing.
Many companies are sitting on a wealth of historical supply chain data that, if utilised correctly, has the potential to power a fully-automated and efficient supply chain model. In fact, a recent survey found that supply chain executives rated advanced analytics as the second most important emerging technology.
It doesn’t come as a surprise that data analytics is the next big thing in supply chain management. The convergence of ubiquitous connectivity and compute capability is driving an exponential growth in connected devices and connected sensors. This is generating incredible volumes of data and enabling vast new types of transformative applications and business models across organizations as well as within the supply chain.
Data is captured, encrypted, processed and analysed at the edge for many reasons, and there is huge value in edge analytics, but there are also use cases that make sense for storing additional data collected at the edge. In addition to capturing this data locally as primary or backup storage, edge storage and compute devices will maximise network efficiency and enable systems to analyse the data and act on the results in real time.
However, it isn’t always obvious how to put this into action. By following a few simple steps, businesses can use machine learning and advanced analytics techniques to design a fully-automated prediction model, creating much more efficient and customer-centric supply chain processes.
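As a minimal sketch of the idea, a simple one-step forecast over historical shipment volumes illustrates what such a prediction model does. The weekly figures and the exponential-smoothing approach below are illustrative assumptions, standing in for a full machine-learning pipeline:

```python
# Illustrative sketch only: exponential smoothing stands in for a full
# machine-learning model, and the shipment volumes are made-up data.
def forecast(history, alpha=0.4):
    """One-step-ahead forecast via simple exponential smoothing."""
    level = history[0]
    for observed in history[1:]:
        # Blend the latest observation with the running estimate.
        level = alpha * observed + (1 - alpha) * level
    return level

weekly_shipments = [120, 135, 128, 150, 160, 155]  # hypothetical volumes
print(round(forecast(weekly_shipments)))
```

In practice the same shape applies: feed historical records in, get a forward estimate out, and let the model choose how much weight recent behaviour carries.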
Build an optimisation strategy
First of all, you need to ensure that you have a comprehensive understanding of business objectives, the organisation and its needs. The next step is to develop a supply chain optimisation strategy. This framework should consist of four pillars:
- Foundation – establish connected enterprise systems and big data infrastructure and data governance to support the new model
- Network Landscape – streamline the carbon footprint, rationalise the carrier base and optimise the network model
- Team Organisation – create a control tower to help with change management, develop talent and focus on culture
- Digital Solutions – implement intelligent transportation solutions, smart warehouses and utilise emerging technologies where appropriate
When putting this framework into place, it’s important to take a balanced, bi-modal approach, rather than being too rigid or too flexible. In particular, it’s beneficial to continuously identify and act on opportunities to improve these processes as you go along. As well as this, using disruptive, highly adaptive methods will ensure that failures happen quickly, speeding up the learning process. Together, these methods of ‘continuous improvement’ and ‘continuous innovation’ will enable the business to mature its technology quickly, efficiently and cost-effectively.
Gather the data
Some companies deliver tens of thousands of unit shipments every week to customers, channel partners and more – meaning there’s a lot of data to be processed. It’s important for organizations to manage a data strategy throughout the edge-to-core ecosystem, addressing the unique challenges in these environments. There are a few considerations to ensure the data can be successfully captured at the edge in these environments.
Data collection at the edge means sensors and storage are not residing in a pristine data center, but are out in the elements. That could mean collecting data for days or weeks in extreme high or low temperatures or while aboard moving platforms where they must be able to withstand unpredictable vibrations. Edge environments can be significantly harsher than traditional mobile, client or data centre environments.
In addition, for many large businesses, a data lake or centralised data warehouse is the only solution capable of handling such large volumes of data, and it should serve as a single source of truth. To build this warehouse, data should be ingested from integrated systems, from transportation management to workload management and freight forwarders.
Once the warehouse has been built, it can be used to feed directly into the data model and analytics platform. From here, the platform can be used to derive insights such as the most efficient shipping routes, and the best days to ship. With this information, adjustments can be made within the transportation management system to improve reliability, speed up transit time for inventory, and increase shipment consolidation.
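By way of illustration, the kind of insight described above can be sketched in a few lines. The shipment records, route names and ship days below are hypothetical, not real warehouse data:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical shipment records as they might be pulled from a central
# warehouse: (route, weekday shipped, transit time in days).
shipments = [
    ("AMS-LHR", "Mon", 2.1), ("AMS-LHR", "Mon", 2.4),
    ("AMS-LHR", "Thu", 3.0), ("AMS-LHR", "Thu", 3.2),
    ("AMS-FRA", "Mon", 1.2), ("AMS-FRA", "Thu", 1.1),
]

# Average transit time per (route, ship day) pair.
transit = defaultdict(list)
for route, day, days_in_transit in shipments:
    transit[(route, day)].append(days_in_transit)
averages = {key: mean(values) for key, values in transit.items()}

# For each route, the ship day with the lowest average transit time.
best_day = {}
for (route, day), avg in averages.items():
    if route not in best_day or avg < best_day[route][1]:
        best_day[route] = (day, avg)

for route, (day, avg) in sorted(best_day.items()):
    print(f"{route}: ship on {day} (avg {avg:.1f} days)")
```

A real deployment would run the same aggregation over millions of rows inside the warehouse or analytics platform, but the logic – group, average, rank – is the same.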
The future of the supply chain
By working with colleagues from other departments, adopting a fail-fast approach and drawing insights from the data, businesses can move from simply managing and connecting supply chain operations to using predictive analytics with machine learning, thereby making the supply chain much more adaptable to changing customer requirements.
Moving forward, there is potential to develop and deploy forward-looking models, which will allow businesses to compare different scenarios, understand potential outcomes, and push the most effective alternative into production. Ultimately, this will enable businesses to optimise transit times, routes, and shipping schedules down to the delivery-address level, thereby establishing a much more reliable and efficient supply chain.
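A hedged sketch of that scenario comparison, with made-up cost and transit figures and an assumed weighted-score objective:

```python
# Hypothetical sketch: compare candidate shipping scenarios and pick the
# most effective one to push to production. The scenario names, figures
# and weighted-score objective are all illustrative assumptions.
scenarios = {
    "current":      {"cost_per_unit": 4.20, "transit_days": 5.0},
    "consolidated": {"cost_per_unit": 3.60, "transit_days": 5.5},
    "air_priority": {"cost_per_unit": 6.10, "transit_days": 2.0},
}

def score(scenario, cost_weight=0.7, time_weight=0.3):
    # Lower is better; the weights encode current business priorities.
    return (cost_weight * scenario["cost_per_unit"]
            + time_weight * scenario["transit_days"])

best = min(scenarios, key=lambda name: score(scenarios[name]))
print(best)
```

Changing the weights – say, prioritising speed over cost in peak season – would push a different scenario into production, which is exactly the kind of what-if comparison such models enable.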
Whether it’s capturing and processing data in real time at the edge, or applying machine learning to larger data sets at the core, there are exciting opportunities across industries for maximising the value of IoT data. Effective data storage and data analytics are the key: they enable organisations to harness rich historical data to guide strategy, as well as data gathered from IoT-enabled devices about shipments in transit.