Economic Value of Information
From MIKE2.0 Methodology
Data, information and knowledge are distributed in financial services organisations over a large number of business units, geographic locations and individuals. During the 1990s two trends were consistent amongst financial institutions. The first was the centralization of data, moving away from branch-based customer information to central information stores. The second was the generation of massive amounts of detailed data resulting from better and lower-cost information technology. This paper proposes a methodology to assign a financial value to the data asset held by organisations, identifies the return that should be achieved given the size of this intangible asset, and provides a target at which stakeholder teams should aim.
Data, information and knowledge are distributed in financial services organisations over a large number of business units, geographic locations and individuals. During the 1990s two trends were consistent amongst financial institutions. The first was the centralization of data, moving away from branch-based customer information to central information stores (Knox, 2004). The second was the generation of considerable amounts of detailed data resulting from better and lower-cost information technology (Kastenmueller, 2004).
Organizations are faced with an abundance of information sourced from numerous resources within and outside the organization. To avoid problems such as information overload, information management has become increasingly important as organizations strive to ensure their employees have access to information that enables them to perform their jobs effectively and support organizational objectives (Gregson, 1995). The importance of this issue is reflected in a recent Tech Republic report, which found that over 80 per cent of respondents believed enterprise information management for business analysis was critical or highly critical (Tech Republic, 2005).
For many organizations, data is scattered among silos, leaving valuable information out of reach of other areas of the company. To ensure data integrity and the availability of the information resource, a system of information management and governance must be in place. Information Management is defined as "a program that manages the people, processes and technology in an enterprise towards the goal of maximizing the leverage, value and investment in data and knowledge, while avoiding increased risk and cost due to data and knowledge misuse, poor handling or exposure to regulatory scrutiny" (Ladley, 2005).
Information as an asset
With information management and leverage comes the ability for knowledge workers (such as actuaries, investment bankers and analysts) to make fact-based decisions. The information that they use to make decisions and the processes that allow them to leverage that information have an intrinsic value to the organization that is seldom fully recognised.
Programs aiming to leverage information (typically under banners such as Data Warehousing or Management Information) often struggle to determine the potential return on investment. Calculations are typically based on either specific process improvements or anticipated organisational growth. This paper proposes a top-down approach as a means to identify the return that could potentially be achieved given the size of this intangible asset, and to provide a target at which stakeholder teams should aim.
See also Information Economics.
Information Management Principles
Before attempting to leverage information effectively it needs to be treated as an integrated asset. To do this, the first step is to create a set of information principles (Hillard, 2004), of which the following six are good generic examples:
Valuing of the Information and Knowledge Asset
Generally organizations have no foundation from which to identify the value of this intangible asset. This paper provides a methodology that can be refined over time to test its value and focus the enterprise on achieving an appropriate return on investment.
The starting point is the total market value of the company itself. This is an absolute in terms of the total shares and their current price. This is typically the value that the market places on the company assuming no merger and acquisition activity occurs (unless there is specific market speculation).
VOrg = Share Price x Number of Shares
The next step is the most difficult. Determine how an independent third-party (such as a potential buyer) would value the organization if it was offered without historical information about its products, customers, staff and risk. This is a difficult exercise as it must be assumed that sufficient information and knowledge is retained to meet minimum regulatory and operational requirements. However, for the purposes of our experiment we can assume that the split is possible.
The authors have found that asking this question in workshops produces radically different outcomes, with estimated drops in market value ranging from more than 50% to less than 30%. For this reason, and to avoid controversy, a starting point of 20% can be chosen for setting a high-level target. For example, if the total market value of the company is $50 billion, the value of information and knowledge can be nominally set at $10 billion. To date, the authors have yet to find any credible opposition to such a low estimate, yet the majority of participants are surprised by the implications.
VI (theoretical) = VOrg x 0.20
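The two valuation steps above can be sketched in a few lines of code. The share price, share count and 20% information share below are illustrative, taken from the worked example in the text; they are assumptions, not empirical parameters.

```python
def market_value(share_price: float, shares_outstanding: float) -> float:
    """VOrg = Share Price x Number of Shares."""
    return share_price * shares_outstanding


def theoretical_information_value(v_org: float, information_share: float = 0.20) -> float:
    """VI(theoretical) = VOrg x 0.20 -- the paper's conservative starting point."""
    return v_org * information_share


# Hypothetical company matching the $50 billion example in the text
v_org = market_value(share_price=25.0, shares_outstanding=2e9)  # $50 billion
v_info = theoretical_information_value(v_org)                   # $10 billion
print(f"VOrg = ${v_org / 1e9:.0f}bn, VI(theoretical) = ${v_info / 1e9:.0f}bn")
```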
Calculating Information Value Efficiency
Since we now have a figure for the theoretical value of information and knowledge within the organization, the next step is to measure "Information Efficiency". Information Efficiency shows how valuable the information is, based on the organization's policies, practices, technologies, measurement, compliance and people/organization in relation to information management.
Information Efficiency can be measured in a qualitative fashion by using the Information Maturity QuickScan survey instrument. IM QuickScan is a tool used to assess current and desired information-maturity Business Capabilities. The IM QuickScan survey instrument is broad in scope and is intended to assess enterprise capabilities as opposed to focusing on a single subject area.
IM QuickScan contains approximately 175 Capability Statements that are used to gain insight into the corporation’s IMM maturity level. Capability Statements are organized into Data Management Areas, which are used to group questions regarding a particular area. Questions are organized into groups to ensure that the assessment covers the full breadth of information maturity and to provide structure to the implementation roadmap.
In the IM QuickScan model, interviewees rate their current performance and target performance against these capability statements using the Information Maturity Model (IMM). The IMM was defined by the Meta Group and proposes five levels of maturity at which an organization may be assessed based on its information management practices. Other experts in the field, such as Larry English, have defined similar five-stage models. These models are meant to provide an information-oriented equivalent to the Capability Maturity Model (CMM), which is used to assess an enterprise's software development maturity level. BearingPoint took the basic capabilities and organized them into data management areas (e.g. Enterprise Awareness).
The current performance and target performance can then be measured against either benchmark performance from industry leaders or an "Optimal" rating to determine an information efficiency rating. This approach can also be used to measure levels of under-investment/over-investment in an organization\’s information management practices.
IECurr = IMQSCurr / IMQSBenchmark
Once information efficiency ratings have been determined for both current-state and future-state goals, the potential value gains can be quantified.
VI (estimated) Curr = VI (theoretical) x IECurr
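The efficiency and estimated-value formulas can be sketched as below. The IM QuickScan scores (2.5 current against a 4.0 benchmark, on the IMM's five-point scale) are hypothetical inputs chosen for illustration, not values from the paper.

```python
def information_efficiency(imqs_current: float, imqs_benchmark: float) -> float:
    """IECurr = IMQSCurr / IMQSBenchmark."""
    return imqs_current / imqs_benchmark


def estimated_information_value(vi_theoretical: float, efficiency: float) -> float:
    """VI(estimated) = VI(theoretical) x IE."""
    return vi_theoretical * efficiency


# Hypothetical maturity scores on the IMM five-point scale
ie_curr = information_efficiency(imqs_current=2.5, imqs_benchmark=4.0)
vi_est = estimated_information_value(10e9, ie_curr)  # against the $10bn example
print(f"IECurr = {ie_curr:.3f}, VI(estimated) = ${vi_est / 1e9:.2f}bn")
```

A ratio above 1.0 against the benchmark would indicate over-investment; well below 1.0, under-investment, as described for the current/target comparison above.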
Quantitative Applications to the Model
The approach put forward thus far has been largely qualitative in nature. There are, however, some quantitative measures that can be applied to the model.
Information diminishes in value as its quality and usability deteriorate over time. During an audit of access patterns by IT administrators, it was found that on 80 per cent of occasions only 20 per cent of the data was accessed. This indicates that the majority of the data residing on the system was not mission-critical (Data Mobility Group, 2004), and provides evidence that information does depreciate over time.
The information that an organization holds changes continuously. For example, it was found that customer information such as address or email changes at a rate of more than 10 per cent per month (Daley, 2003). These issues are caused by people moving locations, links being broken, modified product information, and changes in employee roles. Information depreciates for a variety of reasons and includes aspects such as being superseded by newer and more up to date information, users unaware that the information exists and also the inability to access the information due to access rights (Data Mobility Group, 2004).
Based on this research, the implication is that data deteriorates or expires over time unless it is kept up to date. Since data is the basis for information, the assumption can therefore be extrapolated to apply to the entire information and knowledge asset. A conservative estimate, without action to curb the deterioration, is that this depreciation occurs at a rate not less than 20% per annum. Therefore, in the example of a company valued at $50 billion and having an information asset worth $10 billion, the result is that without intervention at least $1 billion per annum is wiped from the value of this asset. Not all information, of course, should change over time, but some assumptions can be made for data areas such as customer data that allow these estimates to be factored into the model.
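This depreciation estimate can be made explicit as a small calculation. The `affected_fraction` parameter is an assumption introduced here, reflecting the caveat that not all information changes over time; with half of the asset subject to a 20% annual decay, the $10 billion example loses the $1 billion per annum quoted above.

```python
def annual_information_depreciation(
    vi: float,
    rate: float = 0.20,          # conservative >= 20% p.a. decay from the text
    affected_fraction: float = 0.5,  # assumption: only part of the asset decays
) -> float:
    """Value wiped from the information asset per annum without intervention."""
    return vi * affected_fraction * rate


loss = annual_information_depreciation(10e9)
print(f"Annual depreciation on a $10bn asset: ${loss / 1e9:.1f}bn")
```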
We can quantitatively measure these information assets through data profiling to confirm whether these assertions are correct. Data profiling can be done as a once-off exercise or it may involve an ongoing monitoring process that is put in place once the solution has been implemented. Data Profiling typically uses a specialised vendor tool that enables the initial establishment of standards and initial formulation of metadata. It works by parsing and analysing free-form and single-domain fields and tables, by profiling down columns, across tables and between tables. The results of Data Profiling can provide quantitative input into the Information Efficiency model, complementing the results of the IM QuickScan assessment and providing additional detail on each of the Data Subject areas that are covered at a very high level in IM QuickScan. The MIKE2 Methodology provides a recommended approach to profiling data, including process steps, estimating models and template deliverables.
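A vendor profiling tool is beyond the scope of a sketch, but the basic column-level statistics such tools produce when profiling down columns can be illustrated in plain Python. The customer records and column names below are hypothetical.

```python
def profile_column(values):
    """Basic column profile: row count, null rate and distinct non-null values."""
    total = len(values)
    nulls = sum(1 for v in values if v in (None, ""))
    distinct = len(set(values) - {None, ""})
    return {"rows": total, "null_rate": nulls / total, "distinct": distinct}


# Hypothetical customer records exhibiting typical quality problems
customers = [
    {"email": "a@example.com", "postcode": "3000"},
    {"email": "",              "postcode": "3000"},  # missing email
    {"email": "b@example.com", "postcode": None},    # missing postcode
    {"email": "b@example.com", "postcode": "2000"},  # possible duplicate
]

for col in ("email", "postcode"):
    print(col, profile_column([row[col] for row in customers]))
```

Null rates and duplicate counts like these could feed the Information Efficiency model as quantitative evidence alongside the qualitative IM QuickScan scores.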
As financial institutions shift to a customer-centric focus, the importance of data and information in an organization has become more significant (Knox, 2000). Companies need to ensure that they are able to respond to the needs of customers by delivering products and services that reflect the customer's situation. This can only be achieved by beginning with accurate information. Organizations can develop the best products and services, but if customer information is inaccurate, incomplete or, worse still, multiple versions of a particular customer's details exist, their ability to respond decreases dramatically and unnecessary rework is required.
Moreover, "information is both a requirement for and a product of virtually every function and application within the bank" (Knox, 2000,p3). Hence we see information emerging as a core strategic asset where banks must seek to reap the value from the asset by managing the information sources, monitoring information application to preserve value and safeguarding the asset (Knox, 2000). Left unattended this can cause significant problems for the organization. For large institutions especially where organizational silos exist, historical barriers have created the problems of redundant and conflicting data, fragmented silos of information, and old and incomplete data sources (Young, 2003). Therefore, when such problems exist, the ability to maintain consistent data quality across the organization is less likely without a form of information management system in place.
In these organizations history has led these silos to collect and hold information that is relevant to their business practices in their own data management systems. Problems arise when attempting to utilize the data where an activity spans functional areas. This is due to the differences in the methods and definitions used by each silo. Business needs necessitate that the information from the disparate silos be available for reuse by other areas of the organization so that the value of the asset is not diminished (Hauch et al, 2005).
These distinct systems contribute to the cost of inaccurate data, which is estimated at a global level to range from under $100 billion to over $600 billion, based on customer data alone (Daley, 2003). Imagine what the figure would be if it included other aspects of the organization's information and knowledge spectrum. Information that is poorly managed can result in a decrease in productivity, cash flow, materials, facilities and equipment for those who utilize that information. These problems flow on to the organization's customers in terms of time and money and, ultimately, dissatisfaction that drives customers away (English, 2005).
As discussed earlier, financial institutions are tackling the value of information and knowledge by creating enterprise stores. However, this enterprise capability is new in organizational terms (arguably less than fifteen years old) and has not been subject to a full economic cycle. The first time any business activity is coordinated, a substantial saving can be achieved. For example, a re-centralization of IT operations involving a consolidation of data centres yielded savings on infrastructure and other IT services and generated a cost reduction of approximately 10-15 per cent (Hoffman, 2002). These savings were achieved because centralization, or coordinated management, can elicit savings that could not be achieved had the data centres remained decentralized.
Given that centralization is about pulling together a variety of disparate components, essentially the same outcome (a 10-15% saving) that occurred for the IT re-centralization can be assumed to apply to information management. This is not an unreasonable assumption, since organizations hold such extensive amounts of inaccurate data (recall the $600bn previously discussed). The opportunities to deliver savings should lead to an outcome close to that magnitude. It must be noted, however, that these are not the only benefits that can be achieved by coordinating information; others will also exist.
A system of information management is of critical importance if organizations are to leverage the potential of their information asset. This paper proposed a methodology to test the value of the information and knowledge asset within the organization, and to focus the enterprise on achieving an appropriate rate of return. It was hypothesized that a 10% saving could be achieved through coordinated management of information, applied to an asset worth at least 20% of the market value of the organisation.
Robert Hillard
Sean McClowry