
Archive for August, 2009

by: Larry.dubov
14  Aug  2009

Quantifying Data Quality with Information Theory

Information Theory Approach to Data Quality for MDM

Introduction

Over the past decade data quality has been a major focus for data management professionals, data governance organizations, and other data quality stakeholders across the enterprise. Still, the quality of data remains low in many organizations. To a considerable extent this is caused by the lack of scientifically, or at least consistently, defined data quality metrics. Data professionals still lack a common methodology that would let them measure data quality objectively in terms of scientifically defined metrics and compare data sets across systems, departments and corporations in terms of that quality.

 

Even though many data profiling metrics exist, their use is not scientifically justified. Consequently, enterprises and their departments apply their own standards, or no standards at all.

 

As a result, regulatory agencies, executive management and data governance organizations lack a standard, objective and scientifically defined way to articulate data quality requirements and measure improvement progress. Because data quality remains elusive, the job performance of the roles responsible for it cannot be judged against consistently defined criteria, which ultimately limits progress in data quality improvement.

 

A quantitative approach to data quality, if developed and adopted by the data management community, would enable data professionals to better prioritize data quality issues and take corrective action proactively and efficiently.

 

In this article we discuss a scientific approach to data quality for MDM based on Information Theory, which appears to be a good candidate to address the problem outlined above.

 

Approaches to Data Quality

At a high level there are two well-known and broadly used approaches to data quality; typically every enterprise uses both of them to some degree.

 

The first approach is mostly application driven and is often referred to as the “fit-for-purpose” approach. Business users determine that certain application queries or reports do not return the right data. For instance, if a query that is supposed to fetch the top 10 Q2 customers misses some of the customers the business expects to see, in-depth data analysis follows. The analysis may determine that some customer records are duplicated and some transaction records have incorrect or missing transaction dates. Findings of this type trigger activities aimed at understanding the data issues and taking corrective action.

 

An advantage of this approach is that it is aligned with the tactical needs of business functions, groups and departments. A disadvantage is that it addresses data quality issues reactively, based on a business request or even complaint. Some data quality issues are not easy to discover, and business users cannot always decide which report is right and which is wrong. The organization may eventually conclude that its data is bad, yet be unable to indicate what exactly needs to be fixed, which limits IT’s ability to fix it. And when multiple lines of business and functions across the enterprise struggle with their specific data quality issues separately, it is difficult to quantify the overall state of data quality and to prioritize the problems the enterprise should address first.

 

The second approach is based on data profiling. Data profiling tools are intended to make data quality improvement more proactive and measurable. A number of profiling metrics are typically introduced to screen for missing and invalid attributes, duplicate records, duplicate values in attributes that are supposed to be unique, attribute frequencies, attribute cardinality and allowed values, standardization and validation of data formats for simple and complex attribute types, violations of referential integrity, and so on. A limitation of data profiling techniques is that additional analysis is required to understand which of the metrics matter most to the business and why, and it may not be easy to reach a definitive answer and translate it into a data quality improvement action plan. The variety of data profiling metrics is not grounded in science but rather driven by the variety of ways relational database technology can report on data quality issues.
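To make this concrete, here is a minimal Python sketch of three typical profiling checks (completeness, uniqueness of a supposedly unique key, and format validity); the records and field names such as customer_id and email are hypothetical and not tied to any particular profiling tool.

```python
import re
from collections import Counter

# Hypothetical customer records; the field names are illustrative only.
records = [
    {"customer_id": "C001", "email": "a@example.com", "state": "NJ"},
    {"customer_id": "C002", "email": None,            "state": "GA"},
    {"customer_id": "C002", "email": "c@example",     "state": "CA"},
]

def completeness(rows, field):
    """Share of rows in which the field is populated."""
    return sum(1 for r in rows if r.get(field)) / len(rows)

def uniqueness(rows, field):
    """Share of populated values that occur exactly once (a supposedly unique key)."""
    counts = Counter(r[field] for r in rows if r.get(field))
    populated = sum(counts.values())
    return sum(1 for c in counts.values() if c == 1) / max(populated, 1)

def format_validity(rows, field, pattern):
    """Share of populated values that match a simple format rule."""
    values = [r[field] for r in rows if r.get(field)]
    return sum(1 for v in values if re.fullmatch(pattern, v)) / max(len(values), 1)

print(completeness(records, "email"))                              # ≈ 0.67: one missing value
print(uniqueness(records, "customer_id"))                          # ≈ 0.33: duplicate key
print(format_validity(records, "email", r"[^@]+@[^@]+\.[^@]+"))    # 0.5: one malformed address
```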

 

Each of the two approaches has its niche and significance. When the quality of master data is in question, however, data governance organizations can consider an alternative, more strategic approach that avoids detailed analysis of business applications while providing a solid scientific foundation for its metrics.

Information Theory Approach to Data Quality for MDM  

Master data are those data which are foundational to business processes and are usually widely distributed; when well managed they contribute directly to the success of an organization, and when not well managed they pose the most risk. Customer, Patient, Citizen, Member, Client, Broker, Product, Financial Instrument and Drug are the entities most often referred to as master data entities, although each company’s selection of master entities is driven by its business and focus.

 

The primary function of a Master Data Service (MDS) is the creation of the “golden view” of the master entities. We will assume that the MDS has successfully created and maintains the “golden view” of entity F in the data hub; this “golden record” can be dynamic or persistent. A number of data sources across the enterprise hold data corresponding to domain F, including the source systems that feed the data hub and other data sources that may not be integrated with it. We define an external data set f whose data quality is to be quantified with respect to F. For the purpose of this discussion f can represent any data set, such as a single data source or multiple sources.

 

Our goal is to compare the source data set f with the entity data set F. The data quality of f will be characterized by how well it represents the benchmark entity F, defined as the “golden view” of the data in domain F. We assume here that the “golden view” was created algorithmically and then validated by data stewards.

 

In Information Theory the information quantity associated with the entity F is expressed in terms of the entropy:

                                              

                    H(F) = – ∑ Pk log Pk,                                                                                            (1)

where Pk are the probabilities of the attribute (token) values in the “golden” data set F; the index k runs over all records and all attributes of F, and the logarithm is taken to base 2.

 

H(F) represents the quantity of information in the “golden” representation of entity F.
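As a minimal Python sketch of equation (1), under the assumption used in the appendix below that the probabilities Pk are estimated per attribute from the observed value frequencies in F:

```python
from collections import Counter
from math import log2

def attribute_entropy(values):
    """H = -sum(P_k * log2 P_k) over the observed values of one attribute."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def golden_entropy(records, attributes):
    """Equation (1): sum of per-attribute entropies of the "golden" data set."""
    return sum(attribute_entropy([r[a] for r in records]) for a in attributes)

# The "golden" data set F used in the appendix below.
F = [
    {"EID": 1, "Name": "Larry", "State": "NJ"},
    {"EID": 2, "Name": "Jim",   "State": "GA"},
    {"EID": 3, "Name": "Scott", "State": "CA"},
    {"EID": 4, "Name": "Marty", "State": "CA"},
]

print(golden_entropy(F, ["Name", "State"]))   # 3.5 bits, matching the appendix
```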

 

Similarly, for the comparison data set f:

                    H(f) = – ∑ pi log pi.                                                                                            (2)

 

We will use small “p” for the probabilities associated with f while capital letter “P” is used for the probabilities characterizing the “golden” entity record.

 

The mutual entropy J(f,F), also known as mutual information, characterizes how well f represents F:

                   

J(f,F) = H(f) + H(F) – H(f,F)                                                                        (3)   

 

In (3), H(f,F) is the joint entropy of f and F. It is expressed in terms of the probabilities of combined events, e.g. the probability that the name is “Smith” in the “golden record” F while the name is “Schmidt” in the source record linked to the same entity. The behavior of J qualifies it as a good candidate for quantifying the data quality of f with respect to F. When the data quality is low, the correlation between f and F is low; in the extreme case of very low data quality, f does not correlate with F at all and the two variables are independent. Then

 

                    H(f,F) = H(f) + H(F)                                                                                      (4)   

 

and

 

                    J(f,F) = 0                                                                                                       (5)   

 

If f represents F extremely well, e.g. f = F, then H(f) = H(F) = H(f,F) and

 

                    J(f,F) = H(F)                                                                                                  (6)   

 

We define Data Quality of f with respect to F by the following equation:

 

                    DQ(f,F) = J(f,F)/H(F)                                                                                      (7)   

 

With this definition, DQ ranges from 0 to 1. A value of 0 indicates that the data quality of f is minimal and f does not represent F at all; DQ = 1 means that f represents F perfectly and its data quality with respect to F is 100%.
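The following minimal Python sketch shows one way equations (1), (2), (3) and (7) could be wired together, assuming each source record carries the EID that links it to its golden record; the record layout and helper names are illustrative assumptions rather than a prescribed implementation.

```python
from collections import Counter
from math import log2

def entropy(observations):
    """-sum(p * log2 p) over the empirical distribution of the observations."""
    counts = Counter(observations)
    n = len(observations)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def data_quality(source_records, golden_by_eid, attributes):
    """DQ(f,F) per equation (7), with entropies summed over the listed attributes.

    source_records: list of dicts, each carrying the "EID" of its golden record.
    golden_by_eid:  dict mapping EID -> golden record dict (the "golden view").
    """
    H_f = H_F = H_fF = 0.0
    for a in attributes:
        source_vals = [r[a] for r in source_records]
        golden_vals = [g[a] for g in golden_by_eid.values()]
        # Combined events: (value in the source record, value in the linked golden record).
        pairs = [(r[a], golden_by_eid[r["EID"]][a]) for r in source_records]
        H_f += entropy(source_vals)     # equation (2)
        H_F += entropy(golden_vals)     # equation (1)
        H_fF += entropy(pairs)          # joint entropy H(f,F)
    J = H_f + H_F - H_fF                # equation (3)
    return J / H_F                      # equation (7)
```

Applied to the appendix data below, a helper like this returns DQ = 1 for Scenario 1 and approximately 0.84 for Scenario 2, matching the worked examples.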

 

The approach can also be used to determine partial, attribute- or token-level data quality, which provides additional insight into what causes the most significant data quality issues.

 

Data quality improvement should be done iteratively: changes in the source data may impact the “golden record”, after which equations (1) and (7) are applied again to recalculate the data quantity and data quality characteristics.

 

Conclusion

This article offers an Information Theory based method for quantifying information assets and their data quality through equations (1) and (7). The proposed method leverages the notion of a “golden record” created and maintained in the data hub, which is used as the benchmark against which the data quality of other sources is measured.

 

Organizations can leverage this approach to augment their data governance programs for MDM. A quantitative approach to data quality will ultimately help data governance organizations develop policies based on scientifically defined data quality and quantity metrics.

 

By applying this approach consistently across a number of engagements, we will over time accumulate valuable insight into how metrics (1) and (7) behave for real-world data characteristics and scenarios, and develop good practices defining acceptable data quality thresholds. For example, a future industry policy for the P&C insurance business might require the quality of Customer data to stay above the 92% mark, a clearly articulated data governance policy based on a scientifically sound approach to data quality metrics.

 

The approach can be incorporated into future products to enable data governance and provide data governance organizations with new tooling. Data governance teams will be able to select the information sources and assets to be measured, quantify them according to (1) and (7), set target metrics for data stewards, measure progress on an ongoing basis and report on data quality improvement.

 

Even though we mainly focus on data quality, the information quantity in equation (1) also characterizes the overall significance of a corporate data set from the Information Theory perspective. In an M&A context the method can be used to measure how much additional information the joint enterprise will hold compared to the information the companies owned separately: both the information acquired through the differences in their customer bases and the increment gained from better, more precise and more useful information about existing customers.

 

Appendix: Simple Illustrative Examples

In this Appendix we will apply the theory developed above to two simple illustrative cases. We will define the “golden” data set F as follows:

 

EID | Name  | State
  1 | Larry | NJ
  2 | Jim   | GA
  3 | Scott | CA
  4 | Marty | CA

 

The probabilities of attribute values in F are:

 

Value | Probability (P) | log P | P log P
Larry | 0.25            | -2    | -0.5
Jim   | 0.25            | -2    | -0.5
Scott | 0.25            | -2    | -0.5
Marty | 0.25            | -2    | -0.5
NJ    | 0.25            | -2    | -0.5
GA    | 0.25            | -2    | -0.5
CA    | 0.5             | -1    | -0.5

Scenario 1

Dataset f is the same as the “golden” data set. Then

 

                                                f = F, H(f) = H(F) = 3.5.

 

The probability matrix for combined values:

 

Value        | Probability (P) | log P | P log P
Larry, Larry | 0.25            | -2    | -0.5
Jim, Jim     | 0.25            | -2    | -0.5
Scott, Scott | 0.25            | -2    | -0.5
Marty, Marty | 0.25            | -2    | -0.5
NJ, NJ       | 0.25            | -2    | -0.5
GA, GA       | 0.25            | -2    | -0.5
CA, CA       | 0.5             | -1    | -0.5

 

 

 

 

                                             H(f,F) = – ∑ Pk log Pk = 3.5

 

and

 

                                                         H(F) = H(f) = H(f,F) = 3.5

 

 

                                             J(f,F) = H(F) + H(f) – H(f,F) = H(F) = 3.5

 

Equation (7) yields

 

                                                           DQ = J(f,F)/H(F) = 1

 

As expected, when f = F the data quality of f is 1, or 100%.

 

Scenario 2

The “golden” data set F remains the same as in Scenario 1.

 

                                                              H(F) = 3.5

 

We change data set f by adding a new record, “Larry, CA”. We assume that the new “Larry” record represents the same individual as “Larry, NJ”, so the records “Larry, NJ” and “Larry, CA” share the same EID = 1. Data stewards have determined that “NJ” is the correct value for the State attribute. Data set f is then as follows:

 

                                               

EID | Name  | State
  1 | Larry | NJ
  2 | Jim   | GA
  3 | Scott | CA
  4 | Marty | CA
  1 | Larry | CA

 

 The probability matrix for f is:

 

 

 

 

 

 

Value | Probability (P) | log P    | P log P
Larry | 0.4             | -1.32193 | -0.528771238
Jim   | 0.2             | -2.32193 | -0.464385619
Scott | 0.2             | -2.32193 | -0.464385619
Marty | 0.2             | -2.32193 | -0.464385619
NJ    | 0.2             | -2.32193 | -0.464385619
GA    | 0.2             | -2.32193 | -0.464385619
CA    | 0.6             | -0.73697 | -0.442179356

 

 

 

 

                                             H(f) = 3.292878689

 

 

 

The probability matrix for combined values:

Value        | Probability (P) | log P    | P log P
Larry, Larry | 0.4             | -1.32193 | -0.528771238
Jim, Jim     | 0.2             | -2.32193 | -0.464385619
Scott, Scott | 0.2             | -2.32193 | -0.464385619
Marty, Marty | 0.2             | -2.32193 | -0.464385619
NJ, NJ       | 0.2             | -2.32193 | -0.464385619
GA, GA       | 0.2             | -2.32193 | -0.464385619
CA, CA       | 0.4             | -1.32193 | -0.528771238
NJ, CA       | 0.2             | -2.32193 | -0.464385619

 

 

 

 

                                             H(f,F) = 3.84385619

 

Substituting the values of H(F), H(f) and H(f,F) into equations (3) and (7), we obtain:

 

                                 J(f,F) = 3.5 + 3.292878689 – 3.84385619 = 2.949022499

 

                                  DQ = J(f,F)/H(F) = 2.949022499/3.5 = 0.842577857 or ~ 84%
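As a cross-check, the short, self-contained Python script below reproduces the Scenario 2 figures from the tables above; the data structures are an assumption for illustration, while the numbers are those of the appendix.

```python
from collections import Counter
from math import log2

def entropy(tokens):
    """Shannon entropy (base 2) of a list of observed values."""
    counts = Counter(tokens)
    n = len(tokens)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# "Golden" data set F keyed by EID, and the Scenario 2 source data set f.
F = {1: ("Larry", "NJ"), 2: ("Jim", "GA"), 3: ("Scott", "CA"), 4: ("Marty", "CA")}
f = [(1, "Larry", "NJ"), (2, "Jim", "GA"), (3, "Scott", "CA"),
     (4, "Marty", "CA"), (1, "Larry", "CA")]

# Equations (1) and (2): per-attribute entropies, summed over Name and State.
H_F = entropy([n for n, _ in F.values()]) + entropy([s for _, s in F.values()])
H_f = entropy([n for _, n, _ in f]) + entropy([s for _, _, s in f])

# Joint entropy H(f,F): combined events pair each source value with the
# value in the golden record it links to (via EID).
H_fF = entropy([(n, F[e][0]) for e, n, _ in f]) + entropy([(s, F[e][1]) for e, _, s in f])

J = H_f + H_F - H_fF     # equation (3):  J(f,F) ≈ 2.9490
DQ = J / H_F             # equation (7):  DQ ≈ 0.8426, i.e. ~84%
print(round(H_F, 4), round(H_f, 4), round(H_fF, 4), round(DQ, 4))
```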

Category: Data Quality, Enterprise Data Management, Information Development, Information Governance, Information Management, Information Value, Master Data Management
5 Comments »

by: Robert.hillard
07  Aug  2009

How humans want to find their answers

Having taken the time to get a handle on your enterprise information by roughly dividing it into the four layers I described in my last blog post, I’d like to give you an idea for a “quick win”.

Humans don’t differentiate between the things that are structured (such as the content of the corporate data warehouse) and those that are unstructured (such as PDF train timetables).  While technologists think it is astoundingly obvious that the former is instantly analysable and the latter is only to be read, that distinction is not obvious to most end-users of the data.

Consider for a moment the new class of analytical search engines such as WolframAlpha (http://www.wolframalpha.com) and compare it to a more traditional tool such as Google (http://www.google.com).  Try the query “American Airlines on-time performance” in both and compare the results.  WolframAlpha will actually try to give you the answer, whereas Google will tell you where to go and find the answer.

If you invest in understanding your content, you can do the same for your organisation, providing tools to your users which will give them the answers to their questions rather than simply telling them “the answer is out there”.

For further reading on this topic, I recommend MIKE2.0’s Enterprise Search Offering.

Category: Information Development
No Comments »
