To benefit from the swathes of new apps hitting the market, businesses increasingly use individual applications to support day-to-day functions. These apps collect and use key data, much of which is interconnected, and that data helps the business develop. However, as businesses grow, they often find that this data becomes harder to manage.
The problem is that conceptually connected data ends up separated and spread out across multiple apps and programs. Keeping this data dispersed across the organisation, rather than available in one core place, gives management a fragmented and disconnected view of the reality of their operations.
The importance of connected, high-quality data cannot be overstated. Reports show that 41% of surveyed companies claim that inconsistent data across diverse technologies is their biggest challenge (across CRM, Marketing, HCM, and more). It’s easy to see why that’s a concern; data is the powerhouse behind good strategic decision-making. Without strongly integrated systems, businesses also risk employees wasting hours on menial data processing tasks. Indeed, according to MIT Sloan, surveyed employees waste 50% of their time managing poor data.
Integrated data enables companies to grow and develop strategically and efficiently. Consider a real-life example: connecting an ERP system with a CRM system can give a company a holistic product matrix showing an overview of total sales by product and by client over time. This is incredibly valuable, which is why many businesses seek solutions in the form of integrations, regularly integrating their data so that all of that information is available at the point of need.
At Epicenter, we usually see three main types of data integration:
- Fully automated: Fully automated data integration means that data is ingested, synchronised and duplicated where required across an entire organisation and its constituent parts by automated software programs.
- Partially automated: In some cases, data integration is partially automated, combining an automated element with a manual one. The manual element still carries a risk of incorrect or technically unreliable data.
- Manual: This involves integrating data by hand. A person has to copy, paste and even retype the data where necessary. This usually involves Excel spreadsheets and a significant amount of time! It also carries a large element of risk in terms of the reliability of the data.
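To make the fully automated case concrete, here is a minimal sketch of what an automated sync job does: it reads records from a source system and upserts them into a target system on a schedule, with no human copying or pasting. The dictionaries standing in for the CRM and ERP APIs are hypothetical, purely for illustration.

```python
def sync(source: dict, target: dict) -> int:
    """Upsert every source record into the target; return the number changed."""
    changed = 0
    for key, record in source.items():
        if target.get(key) != record:
            target[key] = record  # create the record, or overwrite a stale copy
            changed += 1
    return changed

# Hypothetical example: the ERP holds a stale copy of a CRM customer record.
crm = {"C001": {"name": "Acme", "units": 1000}}
erp = {"C001": {"name": "Acme", "units": 100}}  # out of date
print(sync(crm, erp))          # one record corrected
print(erp["C001"]["units"])    # now matches the CRM
```

A real integration would run this on a timer or in response to change events, which is exactly what removes the manual risk described above.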
Many business teams will be all too familiar with the consequences of data that is not synchronised properly. Old and inconsistent records may cost organisations money, undermine the reliability of reporting, and in some industries even affect the safety of staff. At Epicenter, we’ve seen and solved a significant number of data-led problems caused by manual or unstable automated integrations. Here are the key issues we see when integrations are unreliable.
Incorrect Payroll Data
If a business system contains incorrectly updated data, it can be a logistical nightmare. In the HCM space, for example, incorrect payroll data leads to knock-on effects down the line. Examples of payroll data include:
- Compensation data
- Common incidentals and deductions
- Overtime pay
A lack of reliable system integrations risks this information simply being wrong, whether due to human error or poor-quality data. If it is updated incorrectly, both the employee and the HR department may have to deal with incorrect payments and inaccurate tax information.
Out of Sync Data
When you’re updating multiple platforms, all data has to be correct across all of them, or you risk it becoming out of sync. Manually updating multiple systems with, for example, key sales information can lead to mistakes in one platform that are not present in the others. If a CRM system records that employee X sold 1,000 units of a product in FY20, while the ERP system records that employee X processed 100, the data is out of sync and decisions that depend on it cannot be made. This devalues the data: it cannot be used strategically because you simply don’t know whether it’s correct.
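A simple reconciliation check catches exactly this kind of drift: compare the same totals as reported by each system and flag any keys where they disagree. The field names and figures below are illustrative, echoing the CRM/ERP example above.

```python
def find_mismatches(crm_totals: dict, erp_totals: dict) -> dict:
    """Return {key: (crm_value, erp_value)} for every key where the systems disagree."""
    keys = set(crm_totals) | set(erp_totals)
    return {
        k: (crm_totals.get(k), erp_totals.get(k))
        for k in keys
        if crm_totals.get(k) != erp_totals.get(k)
    }

# Hypothetical per-employee unit totals for FY20 from each system.
crm = {"employee_x": 1000, "employee_y": 250}
erp = {"employee_x": 100,  "employee_y": 250}
print(find_mismatches(crm, erp))  # {'employee_x': (1000, 100)}
```

Running a check like this on a schedule surfaces out-of-sync records before they feed into strategic decisions.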
Outdated Data
Unreliable system integrations can also lead to a high incidence of outdated data, because syncing and uploading of data happens less frequently.
It’s certainly true that manual syncing of data is usually less frequent than automated syncing. For example, if a dataset of new hires is only updated once a month, then for the rest of the month there is a significant risk of the data being incorrect and out of date as new hires join. That has knock-on effects for reporting, financial projections and more.
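Staleness is easy to measure once each record carries a last-synced timestamp: anything older than the expected sync interval is suspect. The record shape and 30-day threshold here are assumptions for illustration.

```python
from datetime import datetime, timedelta

def stale_records(records: list, now: datetime,
                  max_age: timedelta = timedelta(days=30)) -> list:
    """Return the ids of records not synced within max_age of now."""
    return [r["id"] for r in records if now - r["last_synced"] > max_age]

# Hypothetical new-hire dataset synced manually, roughly once a month.
now = datetime(2020, 6, 30)
hires = [
    {"id": "H1", "last_synced": datetime(2020, 6, 25)},  # fresh
    {"id": "H2", "last_synced": datetime(2020, 5, 1)},   # stale
]
print(stale_records(hires, now))  # ['H2']
```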
Duplicate Data
With unreliable integrations, you may have data arriving from different systems and feeds. This means there’s a risk that records about specific entities (e.g. a product) will be duplicated in the system, leading to incorrect data on which to base strategic decisions.
Furthermore, it’s expensive. According to marketing firm Sirius Decisions, it costs about $1 to prevent a duplicate piece of data, $10 to correct a duplicate and $100 to store a duplicate.
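Duplicates from different feeds often differ only in formatting (casing, extra whitespace), so a basic dedup pass normalises a key field before comparing. This is a minimal sketch with made-up product records; real matching usually also compares IDs, fuzzy name similarity and other fields.

```python
def normalise(name: str) -> str:
    """Collapse case and whitespace so formatting differences don't hide duplicates."""
    return " ".join(name.lower().split())

def find_duplicates(records: list) -> list:
    """Return (original, duplicate) pairs of records sharing a normalised product name."""
    seen = {}
    duplicates = []
    for rec in records:
        key = normalise(rec["product"])
        if key in seen:
            duplicates.append((seen[key], rec))
        else:
            seen[key] = rec
    return duplicates

# Hypothetical feeds delivering the same product with different formatting.
records = [
    {"id": 1, "product": "Widget Pro"},
    {"id": 2, "product": "widget  pro"},
    {"id": 3, "product": "Gadget Max"},
]
print(len(find_duplicates(records)))  # one duplicate pair found
```

Catching the pair before storage is the cheap end of the $1/$10/$100 cost curve cited above.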
Uncategorised Data
Often, companies need to categorise data to keep it in manageable groups, for example categorising sales by product or salesperson. Manual and unreliable integrations raise the risk of data going uncategorised, which means it may not be used or counted in vital analyses.
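The danger is that uncategorised rows silently drop out of totals. A categorisation step can instead surface them explicitly, as in this sketch (the sales-record shape is an assumption for illustration):

```python
from collections import defaultdict

def totals_by_category(sales: list, key: str = "product"):
    """Sum units per category and collect records missing a category."""
    totals = defaultdict(int)
    uncategorised = []
    for sale in sales:
        category = sale.get(key)
        if category:
            totals[category] += sale["units"]
        else:
            uncategorised.append(sale)  # flag rather than silently drop
    return dict(totals), uncategorised

# Hypothetical sales rows; one arrived without a product category.
sales = [
    {"product": "Widget", "units": 10},
    {"product": "Widget", "units": 5},
    {"product": None, "units": 7},
]
totals, missing = totals_by_category(sales)
print(totals)        # {'Widget': 15}
print(len(missing))  # 1 record needs attention before analysis
```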
As you can see, there are many ways that data quality can fail you. It’s important to ensure that the integration solution you choose is highly dependable and suited to your IT architecture. If you’re looking to increase data quality across your system landscape, simply click here to learn more about Epicenter and how our automated integration solutions can drive this change in your organisation.