Open Framework, Information Management Strategy & Collaborative Governance | Data & Social Methodology - MIKE2.0 Methodology
by: Robert.hillard
22  Sep  2007

Information is finite

In this post, I want to remind readers that information is not abstract: it is something real and follows the laws of physics.  Information theory talks about encoding information using predictable patterns, which have to be represented by some type of device.  Such a device would typically use electrical energy in some form to represent each bit, so it is no surprise that conservation of energy laws apply: that is, creating one piece of information must destroy another.
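The idea that information is something measurable, encoded symbol by symbol in a physical device, can be made concrete with Shannon entropy, which gives the minimum average number of bits needed per symbol. A small illustrative sketch in Python (the messages are made up for illustration):

```python
import math
from collections import Counter

def shannon_entropy_bits(message: str) -> float:
    """Minimum average bits per symbol needed to encode the message."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# A predictable pattern carries little information per symbol...
print(shannon_entropy_bits("aaaaaaab"))  # low entropy, well under 1 bit
# ...while a message using 8 symbols uniformly needs 3 bits per symbol.
print(shannon_entropy_bits("abcdefgh"))
```

The point of the sketch is only that information has a definite, finite measure that some device must physically represent.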

Why does this matter?  Information is finite, and discovering one thing inevitably means that something else is either lost or in some way reduced in value.  Anyone with a physics background might think of this as an extension of the Heisenberg uncertainty principle.

From a business management perspective, it means that the enterprise customer list cannot be used in an infinite number of ways without degrading the value of the content.  While intuitively true, I argue that it is also mathematically true: applying information such as customer details also derives information about its application, and deriving that information must reduce information (significant or otherwise) elsewhere.  Usually, this reduction is significant – the process of finding out that a customer needs a new service usually reduces the confidence in earlier analysis and limits the ability to target the same customer in other ways.

Category: Information Management, Information Strategy, Information Value

by: Sean.mcclowry
15  Sep  2007

Enterprise 3.0

It took about 25 years for the ARPA initiative to evolve into the web, and about 10 years more before the advent of the techniques and technologies that make up web 2.0. Although it’s a bit early, we’ll probably start to see the momentum around web 3.0 before the end of the decade. Web 2.0 was a bottom-up movement, and Enterprise 2.0 is about making use of these capabilities. We may see Enterprise 3.0 connect with Web 3.0 earlier and even help drive the need for new technologies. Besides the typical “vice drivers”, what would drive demand for Enterprise 3.0 from the business side?

  • Fixing healthcare – an avalanche of costs is breaking current systems, whether in countries like the US or in those applying more socialized models.
  • Enabling virtual shoring – organizations will want to improve their use of offshoring, outsourcing and physically separated staff. “Virtual shoring” in virtual worlds can help provide the answer through better collaboration and greatly reduced travel costs.
  • Bank of the Future – firms and individuals will continue to strive for new ways to raise capital, manage liquidity and hedge risk. Retail consumers will want a richer, more interactive experience that simplifies their lives through the use of technology, not a branch. Institutions will want a richer technology experience too – as well as the ability to bring information from all types of sources into the decisioning process.

On the supply side, the fundamental changes will be open source, globalization and technology exponentiation factors.

In areas such as health care, we’ll see these factors work together. A virtual treatment room may be a way to reduce cost by providing remote access to care. Increasingly, individuals don’t have pensions, and they will look for innovative product options that make long-term health care affordable. Two technology areas stand out that will support these new models:

  • User Interactivity – the current interface needs a major upgrade. Visualization will become much, much richer and collaboration easier.
  • Information Development – information currency will continue to become more valuable. This will be driven by more systems and types of content, greater levels of abstraction, and the fact that decisions will increasingly be made in an automated fashion, independent of human intervention.

So what might it look like? Think of an enhanced version of Second Life combined with everything Google is doing, on steroids. Second Life for individuals to act in new virtual communities for commerce, healthcare and education; Google to link it back to the real world. If this sounds far off, it’s not. Look at what organizations such as IBM, Dell and ABN Amro are doing in the virtual world. Some of it is very early (and some would argue PR oriented) but it’s happening.

Category: Web2.0

by: Robert.hillard
14  Sep  2007

Single version of the truth

The concept of a single version of the truth has gained currency in Information Management to the point of being a mantra, and I believe it is appropriate to introduce a few words of caution.

If I can be philosophical for a moment, Information Management theory is starting to overturn many old views of the world, including the way physics describes objects.  This isn’t a long bow to draw for Information Management professionals: imagine a white statue in the park.  If you put on rose colored glasses, what color is the statue?  If you get everyone in the park to put on rose colored glasses, what color is the statue?  If you cover the statue in rose colored cellophane?  If you paint the statue with rose colored paint?

Survey any group and you are likely to get different answers to the color of the statue under the different circumstances I’ve outlined here.  If you at least said that painting the statue changed its color, then you are admitting that it is the information you receive (in this case color) that is important, rather than what might exist in any deeper layers.

Those same rose colored glasses can apply to the enterprise data warehouse.  One version of the truth that forces everyone to see the data through rose colored glasses does not make the data rose colored!  Accountants, however, have thought long and hard about this and have rules for how to clean up variances that can’t be reconciled.  The most important thing is to ensure that all observers agree, rather than merely observe the same result – and that includes reasonable outsiders, executives and analysts.

Category: Data Quality, Enterprise Data Management

by: Robert.hillard
10  Sep  2007

Teamwork avoids dangers of one-on-one emails

In the wake of a recent adverse court finding for a major media company in Australia, I was interviewed by The Age on the business dangers of email.  I argued that communication within and between companies should be embraced rather than feared, but proper governance inevitably meant that one-on-one emails were not the best way to manage this unstructured content.

You can read the full article at The Age, or listen to the podcast at the same site.

Category: Enterprise2.0, Information Strategy, Web2.0

by: Sean.mcclowry
10  Sep  2007

How Web 2.0 and Small Worlds Theory may shape data modelling

In Small Worlds Data Transformation Measures, Rob wrote about the challenges of data modelling in today’s complex, federated enterprise. This is explained through an overview of Graph Theory, which provides the foundation for the relational modelling techniques first developed by Ted Codd over 30 years ago.

Relational Theory has been one of the stalwarts of software engineering. It is governed by Codd’s rules, which have fundamentally stayed intact despite the rapid advances in other areas of software engineering – a testament to their effectiveness and simplicity.

While evolutions have taken place over time and there have been some variations to approach (e.g. dimensional modelling), the changes have built on the relational theory foundation and abided by its design principles.

But is it time for a change? Are some of the issues we are seeing today the result of the foundation starting to crumble due to complexity? Or is it that there are so many violations of Codd’s Rules? While the latter is certainly a contributing factor, it may be that relational theory is starting to wear under the weight of our modern systems infrastructures – and the issues will continue to get worse. While there does not appear to be an approach equivalent to relational theory that will address the issues we see today, we think Small Worlds Theory and Web 2.0 may provide some ideas for a new one.

Small Worlds Theory helps provide the rationale for a different approach to modelling information. It tells us that for a complex system to be manageable it must be designed as an efficient network, and that many systems (biological, social or technological) follow this approach. Although the information across organizations is highly federated, it does not inter-relate through an efficient network. Rather than building a single enterprise data model, it is the services model, which includes the modelling of “data in motion”, that should be incorporated into the comprehensive approach.

In addition to better modelling of federated data, new techniques should also bring in unstructured content. This includes information from the “informal network”, such as that developed in wikis and blogs. While there are standards to add structure to unstructured content, their uptake has been slow. People prefer a quick and easy approach to classification, especially for content that is more informal in nature.

Therefore, the approach may involve the use of categories and taxonomies to bring together collaborative forms of communication and link them to the formal network. Both Andi Rindler and Jeremy Thomas have discussed some work we are doing in this area on the MIKE2.0 project in their blog posts. We’re also starting to see the implementation of some very cool ideas for dynamically bringing together tagging concepts, such as the Tagline Generator.
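One simple way to link informal tags back to a formal taxonomy is to count how often each free-form tag co-occurs with a controlled category label, then map the tag to its most frequent category. A minimal sketch in Python – the pages, tags and category names are invented for illustration, not taken from MIKE2.0:

```python
from collections import Counter, defaultdict

# Hypothetical wiki pages: each carries informal tags and a formal category.
pages = [
    {"tags": ["etl", "quality"], "category": "Data Management"},
    {"tags": ["quality", "profiling"], "category": "Data Management"},
    {"tags": ["wiki", "blogs"], "category": "Collaboration"},
    {"tags": ["blogs", "tagging"], "category": "Collaboration"},
    {"tags": ["quality", "metadata"], "category": "Data Management"},
]

def map_tags_to_taxonomy(pages):
    """For each informal tag, pick the formal category it co-occurs with most."""
    co_occurrence = defaultdict(Counter)
    for page in pages:
        for tag in page["tags"]:
            co_occurrence[tag][page["category"]] += 1
    return {tag: counts.most_common(1)[0][0] for tag, counts in co_occurrence.items()}

print(map_tags_to_taxonomy(pages))
```

The co-occurrence counts are the "quick and easy" bridge: contributors keep tagging freely, while the taxonomy mapping is derived rather than enforced.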

In summary, while an approach based on a mathematical foundation is required to provide a solution equivalent to Codd’s, and there is a grand vision for a “semantic web”, we may chip away at the problem through a variety of techniques. Just as Search is already providing a common mechanism for data access, other techniques may help with information federation and unstructured content.

Category: Enterprise Data Management, Information Strategy

by: Sean.mcclowry
06  Sep  2007

Should you offshore Information Development?

If an organization is going to move towards a Center of Excellence model for Information Development, we are sometimes asked: “does it make sense for this to be done offshore?” This seems a logical question, as organizations are increasingly moving their delivery capabilities offshore, especially for large application development and systems integration projects.

Although we encourage organizations to think about Information Development as a competency analogous to application development, it isn’t just something you can give to a separate group – it is a cultural change that must go across the company. While expertise can certainly be brought in from the outside, it’s also a capability that must exist internally.

Offshore Information Development should incorporate the following principles:

  1. It is the governance standards, policies and processes that enable an Information Development approach. These are the same in an offshore or onshore model.
  2. An Information Development team can be physical (i.e. a dedicated team) or virtual (i.e. members have other significant roles). In most cases there is a combination of dedicated and shared resources.
  3. Any sizeable offshore team will need representation as part of the Information Development Center of Excellence.
  4. Information Development crosses business boundaries and requires participation from senior execs to line staff. Therefore, it is not a delivery capability that can be built completely offshore.
  5. The organizational model will evolve over time, and individuals in assigned roles are typically needed to drive the transition to new organizational models.

In summary, organizations should make sure they have a strong onshore capability for Information Development, even if much of their development occurs offshore. Whatever the delivery model, the key to success is Information Governance through open and common standards, architectures and policies.

Category: Information Strategy

by: Sean.mcclowry
04  Sep  2007

A next generation modelling tool for Information Management

One of the difficult aspects of Information Development is that organizations cannot “start over” – they need to fix the issues of the past. This means that the move to a new model must incorporate a significant transition from the old world.

Most organizations have a very poorly defined view of their current state information architecture: models are undefined, quality is unknown and ownership is unclear. A product that models the Enterprise Information Architecture and provides a path for transitioning to the future state would therefore be extremely valuable.

Its capabilities could be grouped into two categories:

Things that can be done today, but typically through multiple products

  • Can be used to define data models and interface schemas
  • Provides GUI-based transformation mapping such as XML schema mapping or O-R mapping
  • Is able to profile data and content to identify accuracy, consistency or integrity issues in a once-off or ongoing fashion
  • Takes the direct outputs of profiling and incorporates these into a set of transformation rules
  • Helps identify data-dependent business rules and classifies rule metadata
  • Has an import utility to bring in common standards
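The profiling capability above can be illustrated in a few lines: scan a column for completeness, uniqueness and pattern conformance, the kinds of accuracy and consistency issues the post mentions. A minimal sketch – the column data, metric names and threshold-free output format are invented for illustration:

```python
import re

def profile_column(values, pattern=r".+"):
    """Report basic quality metrics for one column of data."""
    total = len(values)
    non_null = [v for v in values if v not in (None, "")]
    matches = [v for v in non_null if re.fullmatch(pattern, v)]
    return {
        "completeness": len(non_null) / total,     # share of non-null values
        "uniqueness": len(set(non_null)) / total,  # share of distinct values
        "pattern_conformance": len(matches) / max(len(non_null), 1),
    }

# Hypothetical customer postcodes: one null and one malformed entry.
postcodes = ["3000", "2000", None, "3000", "20A0"]
print(profile_column(postcodes, pattern=r"\d{4}"))
```

In a real product these direct outputs would feed the transformation rules mentioned above; here they simply flag that 20% of values are missing and one value breaks the four-digit pattern.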

New capabilities typically not seen in products today

  • An ability to assign value to information based on its economic value within an organization
  • Provides an information requirements gathering capability, with drill-down and traceability mapping available across requirements
  • Provides a data mastering model that shows overlaps of information assets across the enterprise and rules for its propagation
  • Provides an ownership model to assign individual responsibility for different areas of the information architecture (e.g. data stewards, data owners, CIO)
  • Has a compliance feature that can be run to check adherence to regulations and recommended best practices
  • Provides a collaborative capability for users to work together for better Information Governance

In summary, this product would be like an advanced profiling tool, enterprise architecture modelling tool and planning, budgeting and forecasting tool in one. It would be a major advantage to organizations on their path to Information Development.

Today’s solutions for Active Metadata Integration and Model Driven Development seem to provide the starting point for this next generation product. Smaller software firms such as MetaMatrix have provided some visionary ideas to begin to move organizations to model driven Information Development. The bi-directional metadata repositories provided by major players such as IBM and Informatica are a big step in the right direction. There is, however, a significant opportunity for a product that can fill the gap that exists today.

Category: Enterprise Data Management, Information Strategy

by: Robert.hillard
31  Aug  2007

Where next with Information Governance?

With a maturing understanding by both business and government of the role of information in good governance has come the question “what’s next?”  The answer, of course, is to turn a reactive culture that deals with data issues into a proactive one.  In the MIKE2.0 community we talk a lot about Information Development, an approach that thinks about data as part of the business function and application rather than as an afterthought when someone starts looking for reports.

Most users of this site would be familiar with the Maturity Model, which describes the attributes of an organization that has evolved its capabilities to proactive and beyond.  The obvious question then is, “how do I organize to optimize?”  The answer is not, I believe, to adopt the organization model of an optimized organization; rather, it is to evolve to the structure of the next level up.  The Centre of Excellence articles make a variety of organization recommendations.  Do the Information Management self-assessment and then pick the model that is a stretch, but not too far from where you are now.

My observation is that there is a tight relationship between information maturity and the governance structure that is in place.  Like the chicken and the egg though, some organisations mature and hence naturally evolve to the right model.  Others take direct action and put these roles and responsibilities in place, which results in the corresponding level of maturity.  Cause and effect don’t matter, but if you take the former path it is very helpful to have the “back of the book” answers handy.

Category: Information Governance

by: Robert.hillard
22  Aug  2007

Bridging the gap between structured and unstructured data

We’re often asked to compare approaches to managing structured and unstructured data, and attempts to bridge the gap between the two.  Traditionally, the technology practitioners who worried about unstructured data have been an entirely different group from those who worried about structured data.

In fact, there are three types of data: structured, unstructured and a hybrid (records-oriented) grouping, semi-structured.  They have much in common and are all part of the enterprise information landscape.  In order to look at ways to leverage the relative strengths of the different types of data, it is important to first understand how they are used.

There are three primary applications of data within most enterprises.

The first is in support of operational processes.  In the case of structured data, these processes are usually complex from a system perspective but often quite transactional from a human perspective.  In the case of semi-structured and unstructured data, there is often less system intervention or interpretation of the data with a heavy reliance on human interpretation.

Secondly, each of the three is used for analysis.  In the case of structured, it is easy to understand how the analysis is undertaken.  With semi-structured/record data, analysis can be divided into aggregation of the structured components and a manual analysis of the free-text.  With unstructured, analysis is usually restricted to searching for like terms and manually evaluating the documents.

Finally, all three types of data are used as a reference to back-up decisions and provide an audit trail for operational processes.

MIKE2.0 recommends approaches to governance, architecture and integration which are independent of the structure of the data itself.

The majority of effort associated with all data, regardless of its form, is gaining access to it at the time when it’s needed.  In all three cases, there are processes to look up or search the data: SQL for structured data, lookups for semi-structured and tree-oriented folders for unstructured.  Increasingly, the techniques for finding all three types are converging into one set of processes called Enterprise Search.

Ironically, despite the power of search, successful implementations are really mandating the implementation of common metadata and the use of a single enterprise metadata model.  Again, MIKE2.0 takes the information architect through these requirements in a lot of detail.

In the future, organisations can expect to keep all three forms of data (structured, semi-structured records and unstructured documents) in the same repositories.  However, there is no need to wait for this future utopia to begin leveraging all three in the same applications and managing them in a common way.

Category: Information Strategy, MIKE2.0

by: Robert.hillard
15  Aug  2007

How should an executive judge the quality of data models?

Why would an executive care?  There are two main reasons why every business and technology executive should consider the quality of data modelling to be core to their success.

The first is that information is a valuable economic asset (as argued in MIKE2.0 in the article the Economic Value of Information).  Customer data, performance data, analytical information all combine to be an asset that is often worth multiples of billions of dollars.  If a company had billions of dollars worth of gold, I’d expect business executives to want to review and understand how such a valuable asset was housed!  Given that the data model is usually the main home for the information asset, the same should also be true.  The data model cannot be delegated to junior technical staff!

Increasingly there is another reason for elevating the data model: legacy information is becoming an obstacle to business transformation.  As the price of storage dropped during the 1990s, new systems also began storing ancillary data about the parties involved in each transaction, and substantially more context for the event.  Context could, for example, include the whole sales relationship tracking leading up to a transaction, or the staff contract changes that led to a salary change.  With the context as part of the legal record, there are operational, regulatory and strategic reasons requiring that any new or transforming business function do nothing to corrupt the existing detail.  The data model is the only tool we have to map new business requirements to old data.

Given the complexity of data modelling, it’s not surprising that executives have shied away from speaking to technologists about the detail of individual models.  A discussion on normalization principles would be enough to put most decision makers off!

In the Small Worlds Data Transformation Measure article, MIKE2.0 introduces a set of simple metrics to indicate whether, on average, the data models of an organization are doing a good job of managing the information asset.  Using the principle that information makes most sense in the context of the enterprise, it measures the level of connectivity and the average degree of separation across a subset or all of the data models housing the information assets.
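The two measures named here, connectivity and degree of separation, can be computed by treating a data model as a graph of entities and relationships. A minimal sketch using breadth-first search – the entity names and the exact metric definitions (average degree, mean shortest-path length) are illustrative assumptions, not the article's formal measures:

```python
from collections import deque

# Hypothetical data model: entities as nodes, relationships as undirected edges.
model = {
    "Customer": ["Account", "Address"],
    "Account": ["Customer", "Transaction"],
    "Address": ["Customer"],
    "Transaction": ["Account", "Product"],
    "Product": ["Transaction"],
}

def average_degree(graph):
    """Connectivity: average number of relationships per entity."""
    return sum(len(nbrs) for nbrs in graph.values()) / len(graph)

def average_separation(graph):
    """Degree of separation: mean shortest-path length over all entity pairs."""
    total, pairs = 0, 0
    for start in graph:
        dist = {start: 0}
        queue = deque([start])
        while queue:                       # breadth-first search from `start`
            node = queue.popleft()
            for nbr in graph[node]:
                if nbr not in dist:
                    dist[nbr] = dist[node] + 1
                    queue.append(nbr)
        for other, d in dist.items():
            if other != start:
                total += d
                pairs += 1
    return total / pairs

print(average_degree(model), average_separation(model))
```

A chain-like model like this one scores a high average separation; adding well-chosen hub entities lowers it, which is the small-worlds intuition behind the measures.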

Category: Information Management, Information Strategy, Information Value, MIKE2.0
