Every company today claims to be data-driven, or at least aspires to be. Business decisions are no longer based on assumptions or predictable trends as they were in the past; solid data and analytics now power the most important business decisions.
As more companies take advantage of machine learning (ML) and artificial intelligence (AI) to make critical choices, there should be a conversation about the quality of the data they rely on: its completeness, relevance, validity, timeliness and specificity. The insights companies expect from ML- or AI-based technologies are only as good as the data used to power them. When it comes to data-driven decisions, the old adage "garbage in, garbage out" comes to mind.
Poor data quality leads to a more complex data ecosystem and, in the long run, to poor decision making. In fact, poor data quality costs organizations an average of $12.9 million per year. As the amount of data grows, so will the challenges businesses face in validating and verifying that data. To address issues of data quality and accuracy, it is important to first understand the context in which the data elements will be used, as well as the best practices that should guide these initiatives.
1. Data quality is not a one-size-fits-all effort
Data quality initiatives do not exist apart from a business driver. In other words, the required quality of data will always depend on what the business is trying to achieve with that data. The same data can affect more than one business unit, function or project in very different ways. Moreover, the list of data elements that require strict scrutiny may vary across data consumers: marketing teams, for example, need highly accurate and valid email lists, while R&D teams need quality survey-response data.
The best team to judge the quality of a data element, then, is the team closest to the data. Only they can recognize the data as it supports business processes, and only they can ultimately assess its accuracy based on what the data is used for and how.
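To make the point concrete, here is a minimal sketch of how the same data element can be "fit for purpose" for one team and not another. The records, field names and validity checks are invented for illustration; they are not from any particular system.

```python
import re

# Hypothetical records containing one "email" element consumed by two teams.
RECORDS = [
    {"email": "ada@example.com", "survey_score": 4},
    {"email": "not-an-email", "survey_score": 5},
    {"email": "grace@example.com", "survey_score": None},
]

def marketing_quality(records):
    """Marketing cares about syntactically valid, deliverable addresses."""
    valid = [r for r in records
             if re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", r["email"])]
    return len(valid) / len(records)

def research_quality(records):
    """R&D cares about complete survey responses, not the email format."""
    complete = [r for r in records if r["survey_score"] is not None]
    return len(complete) / len(records)

print(f"marketing fitness: {marketing_quality(RECORDS):.2f}")  # 0.67
print(f"research fitness:  {research_quality(RECORDS):.2f}")   # 0.67
```

Both teams see the same rows, but each scores them against its own definition of quality, which is exactly why the team closest to the data should define that check.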
2. What you don't know can hurt you
Data is an enterprise asset. However, actions speak louder than words, and everyone in the enterprise must work to keep that asset accurate. If users do not recognize the importance of data quality and governance, or do not prioritize it as they should, they will not try to prevent data problems during routine data entry, or raise their hand when they spot data problems that need to be addressed.
This can be addressed in practice by tracking data quality metrics as an operational goal, which promotes greater accountability among those working directly with the data. In addition, business leaders should champion the importance of their data quality programs and align key team members on the practical impact of poor data quality. For example, misleading insights shared in inaccurate stakeholder reports could lead to fines or penalties. Investing in better data literacy can help organizations build a culture of data quality and avoid the careless mistakes or misinformation that hurt the bottom line.
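One way to turn "data quality as an operational goal" into something trackable is to profile each field for a few standard metrics, such as completeness (share of non-null values) and validity (share of values passing a field-specific check). The sketch below is illustrative; the field name, rows and validity check are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class FieldMetrics:
    field: str
    completeness: float  # share of rows where the value is present
    validity: float      # share of present values passing the check

def profile(rows, field, is_valid):
    """Compute simple completeness/validity metrics for one field."""
    present = [r[field] for r in rows if r.get(field) is not None]
    completeness = len(present) / len(rows) if rows else 0.0
    validity = (sum(1 for v in present if is_valid(v)) / len(present)) if present else 0.0
    return FieldMetrics(field, completeness, validity)

# Hypothetical rows with one bad entry that slipped past data entry.
rows = [
    {"net_income": 120.5},
    {"net_income": None},
    {"net_income": -40.0},
    {"net_income": "n/a"},
]

m = profile(rows, "net_income", lambda v: isinstance(v, (int, float)))
print(m)  # completeness 0.75, validity ~0.67
```

Reporting numbers like these per field, per team, over time gives the accountability loop described above something concrete to act on.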
3. Do not try to boil the ocean
Fixing a long laundry list of data quality problems is not practical, nor is it an efficient use of resources. The number of active data elements in any organization is huge and growing rapidly. It is best to start by defining the organization's critical data elements (CDEs): the data elements integral to a core function of the business. CDEs are unique to each business, though net income is a common CDE for most, as it is important for informing investors and other stakeholders.
Since each company's business goals, operating model and organizational structure are different, each company's CDEs will be different. In retail, for example, CDEs may relate to design or sales, while healthcare companies will be more concerned with the quality of regulatory compliance data. Though this is not an exhaustive list, business leaders can ask the following questions to help define their unique CDEs:

- What are your critical business processes?
- What data is used in those processes?
- Are these data elements involved in regulatory reporting?
- Will these reports be audited?
- Will these data elements guide initiatives in other departments of the organization?
Identifying and improving only the most important data elements helps organizations scale their data quality efforts in a sustainable, resource-efficient way. Eventually, the organization's data quality program will reach a level of maturity at which a framework (often with some degree of automation) classifies data assets based on predefined criteria and resolves inconsistencies across the enterprise.
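The classification framework mentioned above can be sketched as a simple scoring pass: answer the questions from the checklist for each data element and flag the highest-scoring ones as critical. The element names, criteria flags and threshold below are all hypothetical.

```python
# Criteria mirror the CDE questions: core process, regulatory report,
# audit exposure, and cross-department use. Weights are assumed equal.
CRITERIA = ("in_core_process", "in_regulatory_report", "audited", "cross_department")

ELEMENTS = {
    "net_income":   {"in_core_process": True,  "in_regulatory_report": True,
                     "audited": True,  "cross_department": True},
    "office_color": {"in_core_process": False, "in_regulatory_report": False,
                     "audited": False, "cross_department": False},
}

def cde_score(flags):
    """Count how many criticality criteria a data element satisfies."""
    return sum(flags[c] for c in CRITERIA)

def critical_elements(elements, threshold=3):
    """Return elements meeting the (assumed) criticality threshold."""
    return sorted(name for name, flags in elements.items()
                  if cde_score(flags) >= threshold)

print(critical_elements(ELEMENTS))  # ['net_income']
```

In a mature program these flags would be maintained in a catalog and evaluated automatically, but the underlying idea is the same triage.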
4. More visibility = more responsibility = better data quality
Businesses derive value from knowing where their CDEs are, who is accessing them and how they are being used. In essence, there is no way for a company to identify its CDEs without proper data governance in place. Yet many companies struggle with vague or nonexistent ownership of their data stores. Defining ownership promotes a commitment to quality and usability before onboarding more data stores or sources. It is also wise for organizations to set up a data governance program in which data ownership is clearly defined and people can be held accountable. This can be as simple as a shared spreadsheet listing the owner of each set of data elements or as sophisticated as a dedicated data governance platform.
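Even the "shared spreadsheet" form of ownership tracking can be made queryable with a few lines of code. The sketch below loads a CSV mapping data elements to accountable owners; the element names, teams and email addresses are invented.

```python
import csv
import io

# A stand-in for the shared ownership spreadsheet, exported as CSV.
OWNERSHIP_CSV = """element,owner,steward_email
net_income,finance,finance-data@example.com
email_list,marketing,mkt-data@example.com
"""

def load_owners(text):
    """Map each data element to its accountable owning team."""
    return {row["element"]: row["owner"]
            for row in csv.DictReader(io.StringIO(text))}

owners = load_owners(OWNERSHIP_CSV)
print(owners["net_income"])  # finance
```

The point is not the tooling but the lookup: for any data element, someone can answer "who is accountable for this?" in one step.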
Just as organizations should model their business processes to improve accountability, they should also model their data: its structure, its pipelines and how it is transformed. Data architecture attempts to model the structure of an organization's logical and physical data assets and data management resources. Creating this kind of visibility gets at the heart of the data quality problem: without visibility into the lifecycle of data, when it is created, how it is used or transformed, and how it is output, it is impossible to guarantee true data quality.
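A toy lineage model illustrates the kind of lifecycle visibility described above: each data asset records its upstream sources and the transformation that produced it, so anyone can trace where a report's numbers came from. All asset and transformation names are made up.

```python
lineage = {}  # asset name -> (parent assets, transformation that produced it)

def register(asset, parents=(), transform="source"):
    """Record how an asset was created and what it was derived from."""
    lineage[asset] = (tuple(parents), transform)

def upstream(asset):
    """Return every asset that feeds into `asset`, transitively."""
    parents, _ = lineage.get(asset, ((), None))
    found = set(parents)
    for p in parents:
        found |= upstream(p)
    return found

register("raw_orders")
register("fx_rates")
register("orders_usd", ["raw_orders", "fx_rates"], "currency conversion")
register("revenue_report", ["orders_usd"], "monthly aggregation")

print(sorted(upstream("revenue_report")))  # ['fx_rates', 'orders_usd', 'raw_orders']
```

Real data catalogs and lineage tools build far richer graphs than this, but the core question they answer is the same: when a number looks wrong, which upstream assets and transformations could be responsible?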
5. Avoid rule overload
Even once data and analytics teams have established a framework for classifying and prioritizing CDEs, they are still left with thousands of data elements that need to be validated or remediated. Each of these data elements may require one or more business rules specific to the context in which it will be used, and those rules can only be assigned by the professionals who work with those unique data sets. Data quality teams therefore need to work closely with subject matter experts to identify the rules for each unique data element, which can be extremely time-consuming even when the elements are prioritized. This often leads to burnout and overload on data quality teams, as they are responsible for manually writing large volumes of rules for different data elements. Organizations should set realistic expectations about the workload of their data quality team members, and may consider expanding the team and/or investing in tools that leverage ML to reduce the amount of manual work in data quality tasks.
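The per-element business rules described above can be kept in a simple registry: subject matter experts contribute named checks for the elements they know, and the data quality team applies them uniformly. The rules and element names below are invented examples, not a real rule set.

```python
# Registry of business rules per data element, contributed by SMEs.
# Each rule is a (name, check) pair; checks are invented examples.
RULES = {
    "net_income": [
        ("is numeric", lambda v: isinstance(v, (int, float))),
    ],
    "email": [
        ("non-empty", lambda v: bool(v)),
        ("has @",     lambda v: isinstance(v, str) and "@" in v),
    ],
}

def validate(element, value):
    """Return the names of the rules this value fails for the element."""
    return [name for name, check in RULES.get(element, []) if not check(value)]

print(validate("email", "ada@example.com"))  # []
print(validate("email", ""))                 # ['non-empty', 'has @']
```

Even with a structure like this, someone still has to author every rule by hand, which is exactly the workload problem the section describes; ML-assisted tools aim to propose candidate rules so humans only review them.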
Data is not just the world's new oil: it is the world's new water. Organizations may have the most sophisticated infrastructure, but if the water (or data) flowing through those pipelines is not potable, it is useless. The people who need this water must have easy access to it, must know it is usable and not contaminated, and must know when supply is low; finally, the suppliers and gatekeepers must know who is accessing it. Just as access to clean drinking water helps communities in many ways, improved access to data, a mature data quality framework and a deep data quality culture can safeguard data-dependent programs and insights, helping to drive innovation and efficiency in organizations around the world.
JP Romero is the technical manager at Calypso