For expanded detail on the November 2021 Newsletter, carry on reading.
We were really interested in this article by McKinsey on going beyond Spend Analytics to Advanced Procurement Analytics and the systematic monitoring of contract performance. The article talks about the need to automate the aggregation of many sources of data, which is easy to say and really difficult to do.
As part of the DXC Analytics division, we talk to many customers who have invested in Data Lakes and Data Warehouses so that many sources of data can be stored in a central repository. They’ve also invested in teams of Data Scientists to mine the data, but the issue remains that manipulating the data into a fit state to provide the reliable management information business professionals need, when they need it, is still a largely manual, time-consuming and resource-intensive exercise.
Delivering on the vision discussed in the McKinsey article depends on Data Curation, the term used to describe managing data in a way that makes it useful to its users. Managing the complexities of the many sources and variations in procurement-related data needs a team of Data Translators, Data Analysts, Data Scientists and Data Engineers, as well as appropriate technology and data workflows specifically designed to deliver the management information procurement professionals need.
We’re seeing a growing trend of customers investing in analytics capabilities, whether that’s in BI reporting and visualisation tools such as Power BI or Tableau licences, in people with data skills, or in the foundations of a Procurement Data Lake or Data Warehouse to store procurement and spend-related data from all the various data sources. We’re also seeing an increase in data platform RFIs and RFPs where the key focus is not on the technology but on the skills and experience needed to build an effective data management solution and transfer knowledge to an internal team.
It’s widely documented that it isn’t easy to build a business case and secure budget for data and analytics projects, so it’s critical that any investment delivers a benefit to the business quickly. Starting with a clear outcome, such as being able to answer a key business question that cannot, or cannot easily, be answered today, means that senior stakeholders will be able to see a direct link between the investment and a positive result for the business. That outcome needs to be revenue growth, delivered savings, reduced risk or increased customer satisfaction if you want to secure further investment. Working with Procurement professionals for many years has given us a library of business questions we are continuously looking to answer with better and better data.
While keeping in mind your aspirational goal of a data platform that delivers high-quality, timely Procurement MI for business users to self-serve, delivering against a specific use case will focus time, energy and financial investment. You should aim to design any proof of concept or pilot project so that it can be used and built upon. It may not be ready to go into production, but the learnings from it should not be disposable.
At the heart of most procurement-related questions is the need to link the many transactional sources of data, and that depends on the master data management of supplier records and product types. This is by far the most difficult data management task because of the many variations in spelling and address details, the different methodologies used for deduplication, manual intervention, confusion between the grouping of a company and its corporate hierarchy, different SKUs and so on. Getting a really reliable result needs not only advanced data skills, but a solution that delivers continuous improvement through a finely tuned balance of human-machine teaming (we’ll explore this topic in a future newsletter).
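To make the challenge concrete, here is a minimal sketch of the kind of matching logic involved, assuming a handful of made-up supplier records, a simple normalisation step and a fuzzy similarity score. A real solution layers far more sophisticated matching rules, corporate hierarchy data and human review on top of something like this.

```python
from difflib import SequenceMatcher
import re

# Hypothetical supplier records drawn from different source systems.
SUPPLIERS = [
    {"id": "ERP-001", "name": "Acme Industrial Supplies Ltd"},
    {"id": "P2P-017", "name": "ACME INDUSTRIAL SUPPLIES LIMITED"},
    {"id": "AP-203",  "name": "Acme Ind. Supplies"},
    {"id": "ERP-044", "name": "Beta Logistics plc"},
]

LEGAL_SUFFIXES = {"ltd", "limited", "plc", "llp", "inc", "gmbh"}

def normalise(name: str) -> str:
    """Lower-case, strip punctuation and legal suffixes so that spelling variants compare cleanly."""
    tokens = re.sub(r"[^\w\s]", " ", name.lower()).split()
    return " ".join(t for t in tokens if t not in LEGAL_SUFFIXES)

def similarity(a: str, b: str) -> float:
    """Fuzzy similarity between two normalised names (0.0 to 1.0)."""
    return SequenceMatcher(None, normalise(a), normalise(b)).ratio()

def candidate_matches(records, threshold=0.8):
    """Pair up records whose names look like the same supplier.
    Borderline pairs would go to a human reviewer rather than being auto-merged."""
    pairs = []
    for i, r1 in enumerate(records):
        for r2 in records[i + 1:]:
            score = similarity(r1["name"], r2["name"])
            if score >= threshold:
                pairs.append((r1["id"], r2["id"], round(score, 2)))
    return pairs

if __name__ == "__main__":
    for a, b, score in candidate_matches(SUPPLIERS):
        print(f"{a} <-> {b}  similarity={score}")
```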
Starting with a defined set of business questions that need to be answered means that we can identify the data attributes needed to answer those specific questions. Efforts can then be focused on analysing the completeness and reliability of the data coming from internal sources. Building a clear picture of the level of trust in not only each source, but each data attribute, means that we can create a data transformation workflow that uses the data we trust the most, in preference order, and that we can identify the gaps.
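As an illustration of that trust-in-preference-order idea, the sketch below assumes three hypothetical internal sources and a made-up trust order per attribute; it builds a single record from the most trusted available value and reports the attributes that remain as gaps.

```python
# Preference order per attribute: most trusted source first (illustrative only).
TRUST_ORDER = {
    "supplier_name": ["erp", "p2p", "ap_archive"],
    "payment_terms": ["p2p", "erp", "ap_archive"],
    "contract_ref":  ["erp", "ap_archive", "p2p"],
}

def coalesce(record_by_source: dict):
    """Build a single 'golden' record and a list of attributes that no source can supply."""
    golden, gaps = {}, []
    for attribute, sources in TRUST_ORDER.items():
        for source in sources:
            value = record_by_source.get(source, {}).get(attribute)
            if value not in (None, ""):
                golden[attribute] = {"value": value, "source": source}
                break
        else:
            gaps.append(attribute)
    return golden, gaps

if __name__ == "__main__":
    example = {
        "erp": {"supplier_name": "Acme Industrial Supplies Ltd", "contract_ref": None},
        "p2p": {"supplier_name": "ACME INDUSTRIAL SUPPLIES", "payment_terms": "30 days"},
        "ap_archive": {"payment_terms": "45 days"},
    }
    golden, gaps = coalesce(example)
    print(golden)  # payment_terms taken from p2p, the more trusted source for that attribute
    print(gaps)    # ['contract_ref'] - a gap to be filled by enrichment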
When we know what the gaps are, we can design data enhancement processes that infer a result from a combination of data we already have, or we can look for external sources of data to enrich it. At this point, we’re back to needing a really good matching process to append the data to the correct records.
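The following sketch illustrates the enrichment step under the same assumptions: a hypothetical external reference set, keyed by the matched supplier identity from the earlier step, fills the gaps without overwriting the internal values we already trust.

```python
# Hypothetical external reference data, keyed by the normalised supplier name
# produced by the matching step. Fields and values are illustrative only.
EXTERNAL_REFERENCE = {
    "acme industrial supplies": {"duns": "123456789", "country": "GB",
                                 "industry_code": "46.69"},
}

def enrich(golden_record: dict, matched_key: str) -> dict:
    """Append external attributes to the golden record without overwriting trusted internal values."""
    external = EXTERNAL_REFERENCE.get(matched_key, {})
    enriched = dict(golden_record)
    for attribute, value in external.items():
        enriched.setdefault(attribute, {"value": value, "source": "external_reference"})
    return enriched

if __name__ == "__main__":
    golden = {"supplier_name": {"value": "Acme Industrial Supplies Ltd", "source": "erp"}}
    print(enrich(golden, "acme industrial supplies"))
```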
We’re working with some of our customers now to build advanced data transformation workflows that deliver high quality data, classified at the line level. We work with the customer to develop an appropriate line level taxonomy for their business, again starting with the business users and their questions in mind.
We work in partnership with the customer to determine the order of trust and precedence for the line-level classification, building layers of data workflows until we reach the point at which the internal data is no longer trusted. That’s where the classification of suppliers to vCode, as an external proxy for the spend category, is used to supplement the internal data sources, mapped to the customer’s internal procurement taxonomy. We’ll bring you some examples of this in future newsletters.
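The sketch below illustrates that layering as a simple classification waterfall; the layer functions, the item master and the vCode-to-taxonomy mapping are all hypothetical stand-ins for the workflows we design with each customer.

```python
# Illustrative mapping from vCode to an internal procurement taxonomy.
VCODE_TO_INTERNAL = {
    "V-1510": "Indirect > Facilities > Cleaning Services",
    "V-2230": "Direct > Raw Materials > Packaging",
}

def classify_from_po_line(line: dict):
    """Most trusted layer: an explicit category carried on the PO line (if present)."""
    return line.get("po_category")

def classify_from_item_master(line: dict):
    """Next layer: a category looked up from the item/SKU master (illustrative data)."""
    ITEM_MASTER = {"SKU-9001": "Direct > Raw Materials > Packaging"}
    return ITEM_MASTER.get(line.get("sku"))

def classify_from_vcode(line: dict):
    """Last resort: supplier-level vCode as an external proxy for the spend category."""
    return VCODE_TO_INTERNAL.get(line.get("supplier_vcode"))

# Layers in order of trust: internal data first, external proxy last.
LAYERS = [classify_from_po_line, classify_from_item_master, classify_from_vcode]

def classify(line: dict):
    for layer in LAYERS:
        category = layer(line)
        if category:
            return category, layer.__name__
    return "Unclassified", None

if __name__ == "__main__":
    line = {"po_category": None, "sku": None, "supplier_vcode": "V-1510"}
    print(classify(line))  # falls through the internal layers to the vCode proxy
```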
We’ve had a hugely positive response to the innovation we’re leading in this area. As part of the DXC Analytics family, we’re now collaborating with our colleagues and technology partners to create a Procurement Data Factory that will allow us to support customers who are taking the next step forward. We’re developing modules, accessed via APIs, that allow us to reuse parts of the process and build new ones into a customer’s own Procurement MI platform, delivering really good quality data that procurement professionals can access whenever they need to answer those killer questions that will give them and their organisations an unfair advantage.