by: Robert.hillard
14  Sep  2007

Single version of the truth

The concept of a single version of the truth has gained such currency in Information Management that it has become a mantra, so I believe it is appropriate to introduce a few words of caution.

If I can be philosophical for a moment, Information Management theory is starting to overturn many old views of the world, including the way physics describes objects. This isn't a long bow to draw for Information Management professionals: imagine a white statue in the park. If you put on rose colored glasses, what color is the statue? If you get everyone in the park to put on rose colored glasses, what color is the statue? If you cover the statue in rose colored cellophane? If you paint the statue with rose colored paint?

Survey any group and you are likely to get different answers about the color of the statue under the different circumstances I've outlined here. If you at least said that painting the statue changed its color, then you are admitting that it is the information you receive (in this case, color) that is important, rather than what might exist in any deeper layer.

Those same rose colored glasses can apply to the enterprise data warehouse. A single version of the truth that forces everyone to see the data through rose colored glasses does not make the data rose colored! Accountants, however, have thought long and hard about this and have rules for how you can clean up variances that can't be reconciled. The most important thing is to ensure that all observers agree, rather than just observe the same result – and that includes reasonable outsiders, executives and analysts.

Category: Data Quality, Enterprise Data Management
No Comments »

by: Robert.hillard
10  Sep  2007

Teamwork avoids dangers of one-on-one emails

In the wake of a recent adverse court finding against a major media company in Australia, I was interviewed by The Age on the business dangers of email. I argued that communication within and between companies should be embraced rather than feared, but that proper governance inevitably meant one-on-one emails were not the best way to manage this unstructured content.

You can read the full article at http://www.theage.com.au/news/business/teamwork-avoids-dangers-of-oneonone-emails/2007/09/09/1189276544211.html or listen to the podcast at the same site.

Category: Enterprise2.0, Information Strategy, Web2.0
2 Comments »

by: Sean.mcclowry
10  Sep  2007

How Web 2.0 and Small Worlds Theory may shape data modelling

In Small Worlds Data Transformation Measures, Rob wrote about the challenges of data modelling in today's complex, federated enterprise. This is explained through an overview of Graph Theory, which provides the foundation for the relational modelling techniques first developed by Ted Codd over 30 years ago.

Relational Theory has been one of the stalwarts of software engineering. It is governed by Codd's rules, which have stayed fundamentally intact despite the rapid advances in other areas of software engineering – a testament to their effectiveness and simplicity.

While evolutions have taken place over time and there have been some variations in approach (e.g. dimensional modelling), the changes have built on the relational theory foundation and abided by its design principles.

But is it time for a change? Are some of the issues we are seeing today the result of the foundation starting to crumble under complexity? Or is it that there are so many violations of Codd's rules? While the latter is certainly a contributing factor, it may be that relational theory is starting to wear under the weight of our modern systems infrastructures – and the issues will continue to get worse. While there does not appear to be an equivalent to relational theory that will address the issues we see today, we think Small Worlds Theory and Web 2.0 may provide some ideas for a new approach.

Small Worlds Theory helps provide the rationale for a different approach to modelling information. It tells us that for a complex system to be manageable it must be designed as an efficient network, and that many systems (biological, social or technological) follow this approach. Although the information across organizations is highly federated, it does not inter-relate through an efficient network. Rather than building a single enterprise data model, a comprehensive approach should incorporate the services model, including the modelling of "data in motion".

In addition to better modelling of federated data, new techniques should also bring in unstructured content. This includes information from the "informal network", such as that developed in wikis and blogs. While there are standards to add structure to unstructured content, their uptake has been slow. People prefer a quick and easy approach to classification, especially for content that is more informal in nature.

Therefore, the approach may involve the use of categories and taxonomies to bring together collaborative forms of communication and link them to the formal network. Both Andi Rindler and Jeremy Thomas have discussed some work we are doing in this area on the MIKE2.0 project in their blog posts. We're also starting to see the implementation of some very cool ideas for dynamically bringing together tagging concepts, such as the Tagline Generator.
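
To make the idea concrete, here is a minimal sketch of linking informal tags to a formal taxonomy through simple synonym normalization. All of the names, mappings and tags below are hypothetical illustrations, not part of MIKE2.0 or any product:

```python
# Hypothetical sketch: map free-form "informal network" tags onto a formal
# enterprise taxonomy, keeping unmapped tags rather than discarding them.

FORMAL_TAXONOMY = {
    "customer": "Party/Customer",
    "supplier": "Party/Supplier",
    "invoice": "Finance/AccountsPayable/Invoice",
}

# Folksonomy variants as they might appear on blog posts or wiki pages.
SYNONYMS = {
    "client": "customer",
    "clients": "customer",
    "vendor": "supplier",
    "bill": "invoice",
}

def classify(tags):
    """Split tags into formal taxonomy nodes and leftover informal tags."""
    formal, informal = set(), set()
    for tag in tags:
        key = SYNONYMS.get(tag.lower(), tag.lower())
        node = FORMAL_TAXONOMY.get(key)
        if node:
            formal.add(node)
        else:
            informal.add(tag)
    return formal, informal

print(classify(["Client", "bill", "q3-review"]))
# ({'Party/Customer', 'Finance/AccountsPayable/Invoice'}, {'q3-review'})
```

The point is not the lookup itself but that the informal tags are preserved alongside the formal classification, so the two networks stay linked rather than one replacing the other.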

In summary, while an approach based on a mathematical foundation would be required to provide a solution equivalent to Codd's, and there is a grand vision for a "semantic web", we may instead chip away at the problem through a variety of techniques. Just as Search is already providing a common mechanism for data access, other techniques may help with information federation and unstructured content.

Category: Enterprise Data Management, Information Strategy
No Comments »

by: Sean.mcclowry
06  Sep  2007

Should you offshore Information Development?

If an organization is going to move towards a Center of Excellence model for Information Development, we are sometimes asked: "does it make sense for this to be done offshore?" This seems a logical question, as organizations are increasingly moving their delivery capabilities offshore, especially for large application development and systems integration projects.

Although we encourage organizations to think about Information Development as a competency analogous to application development, it isn't just something you can hand to a separate group – it is a cultural change that must extend across the company. While expertise can certainly be brought in from the outside, it's also a capability that must exist internally.

Offshore Information Development should incorporate the following principles:

  1. It is the governance standards, policies and processes that enable an Information Development approach. These are the same in an offshore or onshore model.
  2. An Information Development team can be physical (i.e. a dedicated team) or virtual (i.e. its members have other significant roles). In most cases there is a combination of dedicated and shared resources.
  3. Any sizeable offshore team will need representation within the Information Development Center of Excellence.
  4. Information Development crosses business boundaries and requires participation from senior executives to line staff. Therefore, it is not a delivery capability that can be built completely offshore.
  5. The organizational model will evolve over time, and individuals in assigned roles are typically needed to drive the transition to new organizational models.

In summary, organizations should make sure they have a strong onshore capability for Information Development, even if much of their development occurs offshore. Whatever the delivery model, the key to success is Information Governance through open and common standards, architectures and policies.

Category: Information Strategy
No Comments »

by: Sean.mcclowry
04  Sep  2007

A next generation modelling tool for Information Management

One of the difficult aspects of Information Development is that organizations cannot "start over" – they need to fix the issues of the past. This means that the move to a new model must incorporate a significant transition from the old world.

Most organizations have a very poorly defined view of their current state information architecture: models are undefined, quality is unknown and ownership is unclear. A product that models the Enterprise Information Architecture and provides a path for transitioning to the future state would therefore be extremely valuable.

Its capabilities could be grouped into two categories:

Things that can be done today, but typically through multiple products

  • Can be used to define data models and interface schemas
  • Provides GUI-based transformation mapping such as XML schema mapping or O-R mapping
  • Is able to profile data and content to identify accuracy, consistency or integrity issues in a one-off or ongoing fashion (a minimal profiling sketch follows this list)
  • Takes the direct outputs of profiling and incorporates these into a set of transformation rules
  • Helps identify data-dependent business rules and classifies rule metadata
  • Has an import utility to bring in common standards
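
As a rough illustration of the profiling capability flagged above, a single pass over a table can already surface the kinds of issues such a tool would report. The data, rules and reference standard here are all hypothetical, not taken from any particular product:

```python
# Hypothetical profiling sketch: one pass over a toy customer table,
# reporting null rates, key uniqueness and simple validity checks.
import re

customers = [
    {"id": 1, "email": "a@example.com", "country": "AU"},
    {"id": 2, "email": None,            "country": "AU"},
    {"id": 2, "email": "not-an-email",  "country": "XX"},
]
valid_countries = {"AU", "US", "GB"}          # hypothetical reference standard
email_re = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def profile(rows):
    n = len(rows)
    ids = [r["id"] for r in rows]
    return {
        "rows": n,
        "null_email_rate": sum(r["email"] is None for r in rows) / n,
        "duplicate_keys": len(ids) - len(set(ids)),   # integrity issue
        "invalid_emails": sum(r["email"] is not None
                              and not email_re.match(r["email"]) for r in rows),
        "unknown_countries": sorted({r["country"] for r in rows} - valid_countries),
    }

print(profile(customers))
# {'rows': 3, 'null_email_rate': 0.33..., 'duplicate_keys': 1,
#  'invalid_emails': 1, 'unknown_countries': ['XX']}
```

Outputs like these are exactly the "direct outputs of profiling" that the next bullet would feed into a set of transformation rules.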

New capabilities typically not seen in products today

  • An ability to assign value to information based on its economic worth within an organization
  • Provides an information requirements gathering capability, with drill-down and traceability mapping across requirements
  • Provides a data mastering model that shows overlaps of information assets across the enterprise and rules for their propagation
  • Provides an ownership model to assign individual responsibility for different areas of the information architecture (e.g. data stewards, data owners, CIO) – see the sketch after this list
  • Has a compliance feature that can be run to check adherence to regulations and recommended best practices
  • Provides a collaborative capability for users to work together on better Information Governance

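The ownership and compliance bullets could start as simply as the following sketch, which maps each area of the information architecture to accountable roles and flags the gaps. The areas, roles and names are invented for illustration:

```python
# Hypothetical ownership-model sketch: assign stewards/owners to areas of
# the architecture and run a compliance-style check for unowned areas.
ARCHITECTURE_AREAS = ["Customer", "Product", "Finance", "HR"]

ownership = {
    "Customer": {"data_steward": "j.smith", "data_owner": "Head of Sales"},
    "Product":  {"data_steward": "a.jones", "data_owner": "Head of Merchandising"},
    "Finance":  {"data_steward": "p.wong",  "data_owner": "CFO"},
}

def unowned(areas, assignments):
    """Every area needs both a steward and an owner; return the gaps."""
    required = {"data_steward", "data_owner"}
    return [a for a in areas
            if not required <= set(assignments.get(a, {}))]

print(unowned(ARCHITECTURE_AREAS, ownership))   # ['HR']
```
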
In summary, this product would be like an advanced profiling tool, enterprise architecture modelling tool and planning, budgeting and forecasting tool in one. It would be a major advantage to organizations on their path to Information Development.

Today's solutions for Active Metadata Integration and Model Driven Development seem to provide the starting point for this next generation product. Smaller software firms such as MetaMatrix have provided some visionary ideas for moving organizations toward model-driven Information Development. The bi-directional metadata repositories provided by major players such as IBM and Informatica are a big step in the right direction. There is, however, a significant opportunity for a product that can fill the gap that exists today.

Category: Enterprise Data Management, Information Strategy
2 Comments »

by: Robert.hillard
31  Aug  2007

Where next with Information Governance?

With a maturing understanding by both business and government of the role of information in good governance has come the question "what's next?" The answer, of course, is to turn a reactive culture that deals with data issues into a proactive one. In the MIKE2.0 community we talk a lot about Information Development, an approach that treats data as part of the business function and application rather than as an afterthought when someone starts looking for reports.

Most users of this site would be familiar with the Maturity Model, which describes the attributes of an organization that has evolved its capabilities to proactive and beyond. The obvious question then is, "how do I organize to optimize?" The answer is not, I believe, to adopt the organization model of an optimized organization; rather, it is to evolve to the structure of the next level up. The Centre of Excellence articles make a variety of organization recommendations. Do the Information Management self-assessment and then pick the model that is a stretch, but not too far from where you are now.

My observation is that there is a tight relationship between information maturity and the governance structure that is in place. Like the chicken and the egg, though, some organisations mature and hence naturally evolve to the right model. Others take direct action and put these roles and responsibilities in place, which results in the corresponding level of maturity. Cause and effect don't matter, but if you take the former path it is very helpful to have the "back of the book" answers handy.

Category: Information Governance
2 Comments »

by: Robert.hillard
22  Aug  2007

Bridging the gap between structured and unstructured data

We're often asked to compare approaches to managing structured and unstructured data, and attempts to bridge the gap between the two. Traditionally, technology practitioners who worried about unstructured data have been an entirely different group from those who worried about structured data.

In fact, there are three types of data: structured, unstructured and a hybrid (records-oriented) grouping of semi-structured. They have much in common and are all part of the enterprise information landscape. In order to look at ways to leverage the relative strengths of the different types of data, it is important to first understand how they are used.

There are three primary applications of data within most enterprises.

The first is in support of operational processes.  In the case of structured data, these processes are usually complex from a system perspective but often quite transactional from a human perspective.  In the case of semi-structured and unstructured data, there is often less system intervention or interpretation of the data with a heavy reliance on human interpretation.

Secondly, each of the three is used for analysis. In the case of structured data, it is easy to understand how the analysis is undertaken. With semi-structured/record data, analysis can be divided into aggregation of the structured components and a manual analysis of the free text. With unstructured data, analysis is usually restricted to searching for like terms and manually evaluating the documents.

Finally, all three types of data are used as a reference to back-up decisions and provide an audit trail for operational processes.

MIKE2.0 recommends approaches to governance, architecture and integration which are independent of the structure of the data itself.

The majority of effort associated with all data, regardless of its form, goes into gaining access to it at the time it's needed. In all three cases, there are processes to look up or search the data: SQL for structured data, key lookups for semi-structured, and tree-oriented folders for unstructured. Increasingly, the techniques for finding all three types are converging in one set of processes called Enterprise Search.

Ironically, despite the power of search, successful implementations really mandate common metadata and the use of a single enterprise metadata model. Again, MIKE2.0 takes the information architect through these requirements in a lot of detail.
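
As a rough illustration of what a single enterprise metadata model implies, one record type can describe all three forms of data so that search indexes them uniformly. The fields, URIs and owners below are hypothetical, not MIKE2.0 definitions:

```python
# Hypothetical sketch: one metadata model covering structured, semi-structured
# and unstructured assets, so Enterprise Search has a single lookup path.
from dataclasses import dataclass, field

@dataclass
class MetadataRecord:
    uri: str                      # where the asset lives
    kind: str                     # "structured" | "semi-structured" | "unstructured"
    owner: str                    # accountable data steward
    subject_tags: list = field(default_factory=list)
    retention_class: str = "default"

catalog = [
    MetadataRecord("db://crm/customers", "structured", "sales.ops", ["customer"]),
    MetadataRecord("records://hr/contracts/2007-001", "semi-structured", "hr", ["contract"]),
    MetadataRecord("file://share/board-minutes.doc", "unstructured", "cosec", ["governance"]),
]

def search(tag):
    """One lookup path regardless of the underlying structure."""
    return [r.uri for r in catalog if tag in r.subject_tags]

print(search("customer"))   # ['db://crm/customers']
```
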

In the future, organisations can expect to keep all three forms of data (structured, semi-structured records and unstructured documents) in the same repositories.  However, there is no need to wait for this future utopia to begin leveraging all three in the same applications and managing them in a common way.

Category: Information Strategy, MIKE2.0
1 Comment »

by: Robert.hillard
15  Aug  2007

How should an executive judge the quality of data models?

Why would an executive care?  There are two main reasons why every business and technology executive should consider the quality of data modelling to be core to their success.

The first is that information is a valuable economic asset (as argued in MIKE2.0 in the article on the Economic Value of Information). Customer data, performance data and analytical information all combine to form an asset that is often worth many billions of dollars. If a company had billions of dollars' worth of gold, I'd expect business executives to want to review and understand how such a valuable asset was housed! Given that the data model is usually the main home for the information asset, the same should also be true. The data model cannot be delegated to junior technical staff!

Increasingly, there is another reason for elevating the data model: legacy information is becoming an obstacle to business transformation. As the price of storage dropped during the 1990s, new systems also began storing ancillary data about the parties involved in each transaction, and substantially more context for the event. Context could, for example, include the whole sales relationship tracking leading up to a transaction, or the staff contract changes that led to a salary change. With the context as part of the legal record, there are operational, regulatory and strategic reasons requiring that any new or transforming business function do nothing to corrupt the existing detail. The data model is the only tool we have to map new business requirements to old data.

Given the complexity of data modelling, it’s not surprising that executives have shied away from speaking to technologists about the detail of individual models.  A discussion on normalization principles would be enough to put most decision makers off!

In the Small Worlds Data Transformation Measure article, MIKE2.0 introduces a set of simple metrics to indicate whether, on average, the data models of an organization are doing a good job of managing the information asset. Using the principle that information makes most sense in the context of the enterprise, it measures the average level of connectivity and degree of separation across a subset, or all, of the data models housing the information assets.
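
For readers who want to see roughly what such measures look like, here is a minimal sketch that treats entities as graph nodes and relationships as edges. The toy model is invented, and this is my reading of the two measures rather than the article's exact formulas:

```python
# Hypothetical sketch: average connectivity (mean degree) and average degrees
# of separation (mean shortest path length) over a toy entity-relationship graph.
import networkx as nx

model = nx.Graph()
model.add_edges_from([
    ("Customer", "Order"), ("Order", "OrderLine"), ("OrderLine", "Product"),
    ("Customer", "Address"), ("Product", "Supplier"),
])

avg_connectivity = sum(d for _, d in model.degree()) / model.number_of_nodes()
avg_separation = nx.average_shortest_path_length(model)  # assumes a connected model

print(f"connectivity: {avg_connectivity:.2f}, separation: {avg_separation:.2f}")
# connectivity: 1.67, separation: 2.33
```

An executive does not need the graph theory behind these numbers; a separation figure that rises over time is enough to signal that the models are drifting away from the enterprise context.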

Category: Information Management, Information Strategy, Information Value, MIKE2.0
No Comments »

by: Sean.mcclowry
15  Aug  2007

Information Governance 2.0

One of the concepts we introduce in MIKE2.0 is Networked Information Governance – essentially Information Governance combined with Enterprise 2.0. What we found when it comes to governance is the importance of the "informal network" in solving problems – the emails, hallway discussions and phone calls that take place on a daily basis or in a crisis.

When it comes to bringing the informal network together with a formal approach, technologies and techniques from Enterprise 2.0 are a great fit: search, tagging and aggregation are the keys to bridging the gap.

Can this approach be extended beyond Information Governance? We believe it can. Governance techniques can generally benefit from this approach – from a corporate board discussion to managing compliance with environmental regulations. Does it mean we remove the verbal conversations? Of course not (although we might have them recorded, tagged and indexed). Is security important? Definitely. The proposed solution isn't trivial, but the Networked Information Governance Solution Offering helps define the steps required.

Category: Enterprise2.0, Information Governance, Information Strategy, MIKE2.0
1 Comment »

by: Sean.mcclowry
11  Aug  2007

Playing Devil’s Advocate on Information Governance

I've been reading a book called Why Not? by Yale professors Barry Nalebuff and Ian Ayres, which provides "four simple tools that can help you dream up ingenious ideas for changing how we work, shop, live, and govern" – it's a highly recommended read. I was fortunate enough to attend one of Barry's lectures last week and he's already given me a few new ideas.

One idea is that of a Devil's Advocate for corporate governance. The book explains the religious origins of the term and the benefits of having a person who takes a counter-point for the sake of argument. This person is a trusted adviser with a duty to take the contrarian view, so their argument is not seen as one of dissent. It goes on to explain how this technique could be applied to Corporate Governance, where strong-arm techniques can easily overrun outside opinions.

The Devil's Advocate would be an interesting role for Information Governance – possibly an architect assigned within the Information Development Organisation. Although this approach could be applied more generally to any solution, I think it makes particular sense for decisions related to Information Governance, as:

  • Success requires concession and buy-in from multiple parties
  • Solutions are complex so multiple viewpoints are important
  • It is easy for one group to dominate, but as information flows horizontally across organizations the impact of issues can be asymmetric
  • It is easy to get "stuck" when someone brings up a counterpoint out of emotion or frustration. With a devil's advocate, alternative viewpoints are quickly put on the table (it is their duty to identify issues)

We've all seen strong-arm techniques or argumentative competitors ruin projects. When attempting to re-design an organisation to take a stronger focus on managing information, the shift is bound to run into issues. A trusted and educated voice that raises arguments without being seen as a dissident would certainly be productive as a way to identify ownership roles, responsibilities and possible solutions.

Category: Information Governance, Information Strategy
No Comments »
