We need to improve our data quality, or it's going to cost us in the near future. As a new employee coming from a background that emphasizes data quality, data taxonomy, and enterprise-centered architecture, I am shocked by the state of SSA's data structure.
Data is an important enterprise asset, so its quality is critical. Disparate, redundant data is one of the primary contributors to poor data quality. An enterprise data model is essential for data quality because it exposes the data discrepancies inherent in redundant data. Existing data quality issues can be identified by "mapping" data systems to the enterprise data model, and as new data systems are built from an enterprise data model framework, many potential data quality issues will be exposed and resolved before implementation.

SSA attempts to "clean," or fix, data without this data blueprint. How would anyone know whether a fix correctly represents the real-world organization? Unfortunately, there is an unconscious consensus that if the data "looks" right, it must be right. That is one of the biggest data fallacies today and underlies many of our data challenges. Data quality statistics account only for data that is obviously or provably incorrect; as long as the data does not appear to be off, it is assumed correct.

The preference would be to design, capture, and store data according to the enterprise data blueprint, but that is rarely considered by organizations with a large, well-established legacy data infrastructure. Most feel they cannot afford to do so; in many cases, however, it may actually be less expensive in the long run, given the high cost of continual data repair as well as the cost of the distorted data itself. Every discrepancy between the organization's actual data and the enterprise data blueprint needs to be accounted for and managed. There are always a number of solutions, depending on the circumstances of each case, but most require many hours of human effort to research, analyze, and design.
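The "mapping" step described above can be sketched in code. The following is a minimal, hypothetical illustration, not SSA's actual process: all field names, types, and the mapping table are invented for the example. It shows how comparing a legacy system's schema against an enterprise data model can mechanically surface the kinds of discrepancies (type drift, redundant representation, unmapped fields) the text argues would otherwise go unnoticed.

```python
# Hypothetical sketch: mapping a legacy system's schema to an
# enterprise data model to surface discrepancies before any "fix".
# All field names and types here are illustrative assumptions.

ENTERPRISE_MODEL = {
    "person_id": "string",
    "birth_date": "date",
    "mailing_address": "string",
}

legacy_system = {
    "ssn": "string",     # same concept as person_id, different name
    "dob": "text",       # type drifted from the enterprise "date"
    "addr1": "string",
    "addr2": "string",   # one concept split across two fields
}

# A mapping table produced by human analysis, as the text describes.
mapping = {
    "ssn": "person_id",
    "dob": "birth_date",
    "addr1": "mailing_address",
    "addr2": "mailing_address",
}

def find_discrepancies(model, legacy, mapping):
    """Return descriptions of mismatches between legacy data and the model."""
    issues = []
    for legacy_field, legacy_type in legacy.items():
        target = mapping.get(legacy_field)
        if target is None:
            issues.append(f"{legacy_field}: no enterprise mapping")
        elif legacy_type != model[target]:
            issues.append(
                f"{legacy_field}: type '{legacy_type}' != "
                f"enterprise '{model[target]}'"
            )
    # Enterprise concepts represented by more than one legacy field
    # indicate the redundancy the text warns about.
    targets = list(mapping.values())
    for concept in set(targets):
        if targets.count(concept) > 1:
            issues.append(f"{concept}: represented redundantly")
    return issues

for issue in find_discrepancies(ENTERPRISE_MODEL, legacy_system, mapping):
    print(issue)
```

In this sketch the mapping table itself still comes from human analysis, which is consistent with the point that these solutions require many hours of research and design; the automation only makes the resulting discrepancies visible and countable.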