
Archive for January, 2012

by: Robert.hillard
31  Jan  2012

Introduction to MIKE2.0 podcast for the InfoGov Community

Steven Adler, who leads the InfoGov Community initiative, invited me to record a podcast to introduce MIKE2.0 for their members (of which many are also users and contributors to MIKE2.0).  I thought that MIKE2.0 readers might also enjoy the podcast which provides a first principles introduction to this site.

Introduction to MIKE2.0

Category: MIKE2.0
No Comments »

by: Bsomich
28  Jan  2012

Weekly IM Update.


Check out our latest solutions and wiki articles!

MIKE2.0 has a number of open source solution and wiki offerings that provide helpful guidance for information management professionals.

MIKE2.0 Solutions

MIKE2.0 Solutions address specific information management problems:

Core Solution Offerings
Composite Solution Offerings
Business Solution Offerings
Product Solution Offerings
SAFE Architecture
Governance Model
Open Source Solution Offerings (New Articles!)

We hope you find these offerings of benefit and welcome any suggestions you may have to improve them.


MIKE2.0 Community

Popular Content

Did you know that the following wiki articles are most popular on Google? Check them out, and feel free to edit or expand them!

What is MIKE2.0?
Deliverable Templates
The 5 Phases of MIKE2.0
Overall Task List
Business Assessment Blueprint
SAFE Architecture
Information Governance Solution

Contribute to MIKE:

Start a new article, help with articles under construction or look for other ways to contribute.

Update your personal profile to advertise yourself to the community and interact with other members.

Useful Links:
Home Page
Content Model
MIKE2.0 Governance


This Week’s Blogs for Thought:

The CIO of 2020

What will the role of Chief Information Officer (CIO) look like in 2020?

The CIO role is one that really appeared during the 1990s in response to the increasing profile of Information Technology (IT) in organisations.  Previously, the person in charge of IT was usually called an Information Technology, Information Services or Computer Services Director.  The creation of the “chief” title showed the world how critical IT had become in the modern enterprise.

Read more.

Healthcare and Data Incentives

A few weeks ago, I wrote about the outsourcing of data analysis and discovery through a site called Kaggle. Today, I’d like to go deeper.

A look at the site reveals a number of fascinating data contests, including one that offers $3 million (USD) for identifying patients who will be admitted to a hospital within the next year, using historical claims data. For a look at the data, data dictionary, and the like, click here.

How, you ask, can an organization win such a large prize? It’s actually not that hard to understand.



The Last Resort: Custom Fields

For many years, I worked implementing different enterprise systems for organizations of all sizes. At some point during the project (hopefully sooner rather than later), someone would discover that the core application had no place to store a potentially key field. Against that backdrop, the team and I had a few choices.

Read more.


Category: Information Development
No Comments »

by: Phil Simon
23  Jan  2012

Healthcare and Data Incentives

A few weeks ago, I wrote about the outsourcing of data analysis and discovery through a site called Kaggle. Today, I’d like to go deeper.

A look at the site reveals a number of fascinating data contests, including one that offers $3 million (USD) for identifying patients who will be admitted to a hospital within the next year, using historical claims data. For a look at the data, data dictionary, and the like, click here.

How, you ask, can an organization win such a large prize? It’s actually not that hard to understand. From the site:

More than 71 million individuals in the United States are admitted to hospitals each year, according to the latest survey from the American Hospital Association. Studies have concluded that in 2006 well over $30 billion was spent on unnecessary hospital admissions. Is there a better way? Can we identify earlier those most at risk and ensure they get the treatment they need? The Heritage Provider Network (HPN) believes that the answer is “yes”.

Do the math. $3 million is one-hundredth of one percent of $30 billion. One could even argue that the prize for that kind of savings should be ten times higher than what is currently offered, but the current bounty is clearly not holding people back. At the time I wrote this post, 734 teams or individuals or companies had entered to win the $3 million prize.
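The arithmetic can be checked in a couple of lines (a minimal sketch; the figures are the ones quoted above):

```python
# Sanity-check: $3 million as a fraction of $30 billion in unnecessary admissions.
prize = 3_000_000
waste = 30_000_000_000

fraction = prize / waste   # proportion of the estimated waste
percent = fraction * 100   # as a percentage

# One-hundredth of one percent, as stated in the post.
print(f"${prize:,} is {percent}% of ${waste:,}")
```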

Mutually Beneficial

Why are so many people competing? To quote Gordon Gekko, “It’s all about bucks, kid.” $3 million is clearly a great deal of incentive–and that doesn’t include the invariable PR benefit of winning the prize.

In a way, the mere fact that this type of project has to be outsourced is, quite frankly, sad. Think about it. With more than $1 trillion wasted on healthcare in the United States, even moving the needle a little bit can result in massive savings. Yet, clearly something isn’t working.

Healthcare is just one of many industries that has become complacent and utterly incapable of fixing its own problems. (Of course, there are many others, as Jeff Jarvis’ wonderful book What Would Google Do? [affiliate link] points out.)

Simon Says

This is the beauty of the Internet. It has brought with it increased transparency, opportunity, and tools. No longer do people and organizations need to sit idly by on the sidelines as opportunities are squandered and poor practices are ossified. No, creative and/or frustrated folks can take their data or their causes online and circumvent traditional gatekeepers.

Now, no one is saying that developing this type of predictive algorithm is easy. It can’t be. But that’s a far cry from impossible. Perhaps the current level of waste is simply an example of a market failure.
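To see why a workable entry is a far cry from impossible, consider a deliberately naive baseline: flag any patient whose prior-year claim count exceeds a threshold. The records and threshold below are made up purely for illustration; real contest entries used far richer models trained on the actual claims data.

```python
def predict_admission(claim_count, threshold=5):
    """Predict admission next year if prior-year claims exceed a threshold."""
    return claim_count > threshold

# (claims last year, actually admitted this year) -- synthetic, illustrative data
history = [(12, True), (2, False), (8, True), (1, False), (7, False), (0, False)]

# Score the baseline against the made-up outcomes.
correct = sum(predict_admission(claims) == admitted for claims, admitted in history)
accuracy = correct / len(history)
print(f"baseline accuracy: {accuracy:.2f}")
```

Anything a serious team builds has to beat this kind of trivial rule, which is exactly what makes the contest hard but not impossible.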

In any event, the Kaggle example demonstrates how poorly many–if not most–large organizations treat the topic of information management. Maybe if organizations awarded major bonuses to individuals, teams, and departments for (one could argue) doing their jobs, they wouldn’t have to go elsewhere.

Then again, maybe more of them should.


What say you?

Category: Information Development, Information Management
1 Comment »

by: Robert.hillard
21  Jan  2012

The CIO of 2020

What will the role of Chief Information Officer (CIO) look like in 2020?

The CIO role is one that really appeared during the 1990s in response to the increasing profile of Information Technology (IT) in organisations.  Previously, the person in charge of IT was usually called an Information Technology, Information Services or Computer Services Director.  The creation of the “chief” title showed the world how critical IT had become in the modern enterprise.

The problem is that the IT world is changing in a way that we haven’t seen in over a decade.  I argue that the first decade of the century was one in which little actually changed in the use of IT in business.  When I take this position in discussions I’m often met with many who disagree.

My argument is illustrated by comparing a typical professional worker’s desk in 2000 to the same desk in 2010.  The year 2000 desk probably had a laptop and a telephone that was likely (or soon to be) based on IP telephony, making its owner highly mobile.  It is also very likely that the business applications were little different to those we know today: Microsoft Windows, an ERP such as SAP for business operations and Microsoft Office for email and documents.  Sitting beside the computer was a mobile phone.  The 2010 picture is very similar.

Interestingly, the comparison between the 1990 desk and the one at the turn of the century is much more dramatic.  The 1990 desk very likely had no computer or only a terminal; if there was a PC, it would have been deskbound.  The telephone was absolutely fixed to a point using a traditional PABX and there would not have been a mobile phone.  Any software being used would look very different to the products we commonly use today: likely bespoke, and certainly character-based rather than graphical.

This is a trend that we have seen before.  The 1970s saw massive change, while the 1980s was really a decade of consolidation.  While much happened in both the 1980s and the last decade, both were really dominated by a need to simplify and consolidate the role of IT in business.  As a result, we have been lulled into a view of the CIO which needs to focus first and foremost on the discipline of managing a complex but predictable portfolio of systems and projects.

Already in this new decade we’ve seen a move away from a single operating system (Windows) and end-user platform to a wide range of consumer-driven options ranging from tablets to mobile phones.  Similarly we’re seeing a move away from enterprise servers to a much richer suite of options utilising cloud services.  Just as importantly, we’re seeing the long-predicted “internet of things” coming to life with embedded computing in everything from trucks to shopping trolleys.

No aspect of business value chains has been untouched by these changes.  The line between consumer technology and business systems has blurred to the point where customers expect to directly access their transaction data as it appears inside back-office applications.  This level of integration challenges the role of the CIO that has evolved during the last decade as being focused on business systems.  In this new world, the CIO is responsible for technology and information that is used by the business’s external stakeholders (including customers and suppliers).  This new CIO is as responsible for earning revenue as any divisional general manager.

The challenges will be substantial.  The CIO cannot hold onto the same number of staff that has been traditionally needed to run the infrastructure of the enterprise and still expect to take on revenue earning responsibilities.  The CIO is also going to need to lead large-scale innovation across the organisation with the goal of creating new products and finding new uses for the information that is the lifeblood of the enterprise.  This new CIO role doesn’t just serve the business, it also shapes it.

If CIOs don’t take on these challenges, then technology will be distributed across business divisions and an opportunity will be lost to innovate while maintaining the discipline that modern IT has fought to implement over the last thirty years.  This is a battle that is worth having.

Category: Information Management
1 Comment »

by: Bsomich
19  Jan  2012

Profile Spotlight: Helena Hamilton

Helena Hamilton

Helena Hamilton is currently the Manager of Enterprise Information Management at Deloitte and has over 10 years of professional working experience in technology and consulting. She is an Information Management specialist with recent client experience in information governance, data quality assessment, business intelligence and enterprise reporting solutions. Her industry expertise includes the public sector, banking and financial services, pharmaceuticals and consumer business.

Connect with Helena.

Category: Member Profiles
No Comments »

by: Phil Simon
17  Jan  2012

Is Agile MDM the Solution?

Ah…the wonderful world of master data.

From my perspective, the vast majority of large organizations have not embraced MDM quickly enough. Based upon my consulting experience, the MDM penetration rate isn’t anywhere near where it should be. Overwhelmingly, the pros exceed the cons for the vast majority of mature organizations.


In her post, Master Data Management: Does an effective solution exist?, Brenda Somich writes about many of the cultural, data, process, and other complexities that plague organizations that go down the MDM road. Her excellent post sparked a maelstrom of comments exploring the issue. Among them, Tony Nudd writes:

too many projects fail because the initial business outcomes were not clearly defined in a manner which could be specifically measured. One of the biggest issues with MDM is that there is the danger of trying to boil the ocean with an enormous program of MDM work, when projects should be broken down into manageable, measurable projects which address the pressing needs of the business. Like enterprise Business Process Management, MDM should grow organically throughout the organisation starting with a manageable first steps project to get the frameworks in place and for the Centre of Excellence to be established. From that initial project a program of projects should then be formulated and prioritised to address the needs of the business. As the projects come online, so the project teams build competency, the business are educated as to what is possible and the technology is proven.

In this sense, MDM resembles other IT and IM initiatives. Beset by amorphous or overly ambitious goals, many organizations embark upon MDM with a “big bang” or “boil the ocean” approach, as Nudd points out. Those expecting MDM to immediately solve longstanding and prickly problems are probably going to be disappointed. What’s more, they then may well dangerously and prematurely dismiss MDM as an important concept.

Agile MDM

Perhaps the often unrealistic expectations of MDM projects can in part explain why many organizations are loath to even give it a try. (However, let’s not forget declining IT budgets, shrinking headcounts, and risk-averse CIOs.)

But fewer financial and human resources doesn’t necessarily have to mean that new MDM projects are completely off the table, right? In fact, because of the rising popularity of Agile software development and deployment methods, I wonder if organizations have had more success eschewing Waterfall approaches. I can’t say that I’ve personally worked on Agile MDM projects, but it seems to make a great deal of sense.

And I’m not the only person who thinks so. Information management expert Scott W. Ambler writes:

If you’re going to adopt an MDM strategy within your organization, it should at least be an agile one.  Many organizations struggle when it comes to MDM, typically because they adopt a traditional, command-and-control strategy.  Your MDM efforts can in fact be very agile and streamlined if you choose to.

Ambler echoes Nudd’s comments about the organic nature of successful IM projects.

Simon Says

Many organizations struggle with Agile methods because they are time-driven, not requirement-driven. Determine in advance if yours is ready for an Agile MDM approach before undertaking this type of project. Cultural and/or execution problems may well poison the MDM and Agile wells.

Also, understand that even a successful MDM project–however implemented–is no elixir. Those expecting MDM to completely eradicate fundamental process, people, system integration, and political issues are bound to be disappointed.


What say you?

Category: Information Development

by: Bsomich
11  Jan  2012

What’s your number?

It’s that time of year! The holidays are finally over and the joys of housekeeping and number crunching are upon us. How did your business end up in 2011? Above or below your expectations?

For us marketing folks, January is typically the time we set aside to audit our marketing efforts from the previous year in preparation for future strategies. And the one big question that should be on everyone’s mind is: how much did those leads really cost me?

The good news is, the numbers should be pretty painless to figure out. With all the wonderful analytic and marketing intelligence provided by companies such as Google, Coremetrics, Optify, Marketo, Raven, etc., there’s no shortage of data to analyze and come up with that magic number that tells you, “was this effort really worth it?” And with the rise of online and social media marketing in the past few years, the cost to acquire good leads has dropped dramatically, to the point that any company, regardless of size or budget, can easily compete in the digital marketplace.

So what’s your magic number? In order to determine this, you need to define what you consider a lead to be. Is it someone who fills out a form on your website? Is it someone who clicks on an email campaign? How do you know when your leads are qualified, unqualified, hot or cold? The answer to these questions will differ for every business, but should be defined before calculating the cost of lead acquisition. Once you’ve figured out how you define a lead, you’ll want to find out the number of total leads you acquired and break them down by marketing channel activity.

Next, you’ll want to determine the cost of your marketing channel activities. In the world of online marketing, activities will generally fall into one of 5 campaign categories: Email, Social, SEO, Paid Search or Events/Webinars. The cost of activities for each channel will generally be a combination of time plus resources.

Lastly, to determine the average cost to acquire a lead, you’ll want to divide the total cost of marketing channel activities by the total number of leads acquired from that channel. For those visual folks out there, the equation looks similar to this:

Average cost to acquire a lead = (cost of time + cost of resources) / total leads acquired

Calculate that and tada! That is your magic number (aka the average cost of lead acquisition for that channel). If you want to know the cost of lead acquisition for all channels combined or your marketing department in general, add the total cost for each channel and divide by the total number of leads you received.
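As a minimal sketch, the per-channel and blended calculations described above might look like this in Python (the channel names and dollar figures are hypothetical):

```python
# Hypothetical channels: (cost of time + resources in dollars, leads acquired)
channels = {
    "email":       (2_000, 80),
    "social":      (1_500, 50),
    "paid_search": (4_000, 100),
}

# Per-channel cost of lead acquisition.
for name, (cost, leads) in channels.items():
    print(f"{name}: ${cost / leads:.2f} per lead")

# Blended number: total cost across channels divided by total leads.
total_cost = sum(cost for cost, _ in channels.values())
total_leads = sum(leads for _, leads in channels.values())
print(f"overall: ${total_cost / total_leads:.2f} per lead")
```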

Knowing how much value you derived from your marketing efforts is paramount in developing your marketing plan. It will enable you to determine which activities generate the highest return on your effort (and help you boost the bottom line). Sure it takes some time to figure out, but hey.. they don’t call it crunching because it’s easy!

Category: Business Intelligence
No Comments »

by: Phil Simon
09  Jan  2012

2012 Trends: Data Contests

A few years ago and while its stock was still sky-high, Netflix ran an innovative contest with the intent of improving its movie recommendation algorithm. Ultimately, a small team figured out a way for the company to significantly increase the accuracy with which it gently suggests movies to its customers.

Think Wikinomics.

It turns out that these types of data analysis and improvement contests are starting to catch on. Indeed, with the rise of Big Data, cloud computing, open source software, and collaborative commerce, it has never been easier to outsource these “data science projects.”

From a recent BusinessWeek article:

In April 2010, Anthony Goldbloom, an Australian economist, [f]ounded a company called Kaggle to help businesses of any size run Netflix-style competitions. The customer supplies a data set, tells Kaggle the question it wants answered, and decides how much prize money it’s willing to put up. Kaggle shapes these inputs into a contest for the data-crunching hordes. To date, about 25,000 people—including thousands of PhDs—have flocked to Kaggle to compete in dozens of contests backed by Ford (F), Deloitte, Microsoft (MSFT), and other companies. The interest convinced investors, including PayPal co-founder Max Levchin, Google Chief Economist Hal Varian, and Web 2.0 kingpin Yuri Milner, to put $11 million into the company in November.

The potential for these types of projects is hard to overstate. Ditto the benefits.

Think about it. Organizations can publish even extremely large data sets online for the world at large. Interested groups, companies, and even individuals can use powerful tools such as Hadoop to analyze the information and provide recommendations. In the process, these insights can lead to new products and services, and dramatic enhancements in existing business processes (see Netflix).


Of course, these organizations will have to offer some type of prize or incentive. Building a better mousetrap may be exciting, but don’t expect too many people to volunteer their time without the expectation of significant reward. Remember that, of the millions of people who visit Wikipedia every day, only a very small percentage of them actually does any editing. If Wikipedia (a non-profit) offered actual remuneration, that number would be significantly higher (although the quality of its edits would probably suffer).

Consider the following examples:

  • A pharmaceutical company has a raft of data on a potentially promising new drug.
  • A manufacturing company has years of historical data on its defects.
  • A retailer is trying to understand its customer churn but can’t seem to get its arms around its data.

I could go on, but you get my drift.

Simon Says

While there will always be the need for proprietary data and attendant analysis, we may be entering an era of data democratization. Open Data is here to stay and I can certainly see the growth of marketplaces and companies like Kaggle that match data analysis firms with companies in need of that very type of expertise.

Of course, this need has always existed, but the unprecedented power of contemporary tools, technologies, methodologies, and data means that outsourced analysis and contests have never been easier. No longer do you have to look down the hall, call IT, or call in a Big Four consulting firm to understand your data–and learn from it.


What say you?


Category: Enterprise Data Management
1 Comment »

by: Bsomich
06  Jan  2012

Profile Spotlight: Charlene Dickson

Charlene Dickson

Charlene Dickson is an Information & Process Engineer interested in various information management methodologies, with a permanent consultancy career focus.  She is currently the Scrum Master for Mix Telematics in Cape Town, South Africa. Dickson has experience with varied working environments, working styles, organizational structures and management styles, which enables her to successfully execute information management projects.

Her educational and professional certification experience includes Cambridge O and A levels, MCTS & MCP SQL Server 2005 Implementation and Maintenance, and certification as a Scrum Master.  She also completed Lean Six Sigma Green Belt Certification studies in 2012 with the ASQ.

Connect with Charlene.

Category: Member Profiles
No Comments »

by: Phil Simon
03  Jan  2012

Is 2012 the year?

Over the past two years, I have tried to dispense advice on this blog about intelligent information management, MDM, data quality, technology, and the like. Today, though, I’d like to ask a series of simple but vital questions about 2012.

Is this the year that your organization finally:

  • Decides to adopt data quality initiatives? Or, even better, tries to institutionalize it?
  • Attempts to make sense of its data?
  • Tries to consolidate multiple and disparate data sources?
  • Looks at mining its unstructured data for meaning?
  • Embraces semantic technologies?
  • Gets on board with MDM?
  • Retires legacy systems?

In all likelihood, this is not the year that your organization does all of the above. Perhaps it is already doing many of these things–and doing them well. Less likely, your organization has no need for MDM, data governance, etc.

Now Is the Time

Here’s the rub: data quality is not going to decline in importance. Nor is data governance. Unstructured data isn’t going away. The need to produce an accurate list of customers, vendors, and employees (and quickly, to boot) isn’t ephemeral. In fact, in each of these cases, 2012 and beyond will only intensify the need to do data right. Period.

No more excuses.

Open source software continues to make strides–as does the cloud. (And not just cute little apps that do this, that, or the other. I’m talking about enterprise-grade software like Scala.)

While the employee time needed to tackle these endeavors should not be underestimated, a major source of resistance–out-of-pocket expenditures for expensive, on-premise solutions–is now less of a consideration.

So, we know that the costs have dropped. To make the case for significant IM improvements, I also contend that the benefits of data governance, MDM, et al. have never been higher. We continue to generate ungodly amounts of data–and in different forms, to boot. And look at the companies that manage their data exceptionally well. Do you think that Amazon, Apple, Facebook, and Google would be remotely as successful if they didn’t excel at IM?

Simon Says

If not now, then when? Next year? 2014? If your organization continues to struggle with basic data management, how much longer will it be around? Will it gradually erode into irrelevance? Will it be usurped by nimble startups or much larger companies?

I can’t think of a better time to start adopting intelligent IM practices–many of which are detailed on this very site.


What say you?


Category: Data Quality, Enterprise Data Management, Information Development
No Comments »
