Archive for June, 2011

by: Bsomich
29  Jun  2011

Profile Spotlight: Rob Armstrong



Since 1987, Rob has focused on the implementation, management, and evolution of information systems with a goal of increased business value. He has worked across the gamut of data warehousing, from database coding and architecture to application optimization and business justification.

In addition to his hands-on knowledge, Rob is a prolific writer whose white papers, magazine articles and two books have helped companies focus their energy on real opportunities while implementing complete solutions. Knowing that “winning in business” is a partnership between users, executives, and technology, Rob relates lessons learned from industries across the globe in ways that will help you understand the “other side” and appreciate your role in data warehouse, and ultimately business, success.

Connect with Rob.

Category: Information Development
No Comments »

by: Phil Simon
27  Jun  2011

Voltaire, Apple, and the Myth of Perfekt

I’ve been doing a great deal of research on Apple lately for my new book. In so doing, I’ve stumbled upon some amazing tales with lessons for information management professionals.

Andy Hertzfeld (Apple employee number 435) writes about how the company struggled to release the 1983 version of the Macintosh. Engineers scrambled to fix bugs as the deadline neared. Unable to triage every issue, Hertzfeld was given the unenviable task of recommending to Steve Jobs that the company postpone the release by a few weeks. In his words:

“No way, there’s no way we’re slipping!”, Steve responded. The room let out a collective gasp. “You guys have been working on this stuff for months now, another couple weeks isn’t going to make that much of a difference. You may as well get it over with. Just make it as good as you can. You better get back to work!”

The entire story can be read here. In a word, it’s fascinating.

Jobs was ahead of his time. He understood then–as many do now–that there’s no such thing as perfect. Yet, many folks in organizations resist change and new data management initiatives because perfection cannot be achieved. From a consultant’s perspective, this is very frustrating.

A Little Yarn

For instance, about four years ago I worked on a very contentious project migrating systems for a large hospital system. The hospital was retiring its legacy mainframe finance and payroll system. In its place would be a modern ERP application with all sorts of bells and whistles. Web-based reporting, e-mail notification, and fancy dashboards would replace paper reports.

Except that it wouldn’t.

The data from the hospital’s legacy system was a complete mess–and was in no position to be loaded into the new ERP system. Those visually appealing reports and dashboards would be useless.

Of course, this wasn’t surprising: the data had never been cleansed in 25 years. Some fancy data manipulation and several purpose-built ETL routines improved the data by orders of magnitude.

But it wasn’t perfect.

Conversion and load programs would correctly kick out errant records. Lamentably, key internal players on the project focused only on those legitimate errors, not on the tens of thousands of (now much cleaner) records that loaded successfully by virtue of meeting a complex array of business rules. A finance director in particular seemed always to ask, “How can I be sure that something else isn’t wrong?”
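
To make the mechanics concrete, here is a minimal sketch of the kind of rule-based validation a conversion program might apply. The field names and rules are hypothetical, invented for illustration; they are not the hospital’s actual logic.

```python
# Hypothetical sketch of rule-based validation during a data migration.
# Field names and business rules are invented for illustration.

def valid_pay_rate(record):
    """Pay rate must parse as a non-negative number."""
    try:
        return float(record.get("pay_rate", "0")) >= 0
    except ValueError:
        return False

BUSINESS_RULES = [
    ("missing_employee_id", lambda r: bool(r.get("employee_id"))),
    ("invalid_cost_center", lambda r: r.get("cost_center", "").isdigit()),
    ("invalid_pay_rate", valid_pay_rate),
]

def load(records):
    """Split records into those that load and those kicked out with errors."""
    loaded, kicked_out = [], []
    for record in records:
        errors = [name for name, rule in BUSINESS_RULES if not rule(record)]
        if errors:
            kicked_out.append((record, errors))
        else:
            loaded.append(record)
    return loaded, kicked_out

records = [
    {"employee_id": "1001", "cost_center": "4400", "pay_rate": "28.50"},
    {"employee_id": "", "cost_center": "44A0", "pay_rate": "31.00"},
]
loaded, kicked_out = load(records)
print(len(loaded), "loaded;", len(kicked_out), "kicked out")  # 1 loaded; 1 kicked out
```

The kicked-out list is the easy part; the discipline lies in also reporting the denominator, so that a handful of rejected records is always seen alongside the tens of thousands that loaded cleanly.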

This was precisely the wrong question to ask, but his doubts resonated with other key players. The project was a mess and exceeded its budget and deadline by ghastly amounts.

Simon Says

Expecting perfection is the acme of foolishness. Yes, diligent data management professionals and functional end-users should care about the information loaded into new systems. What’s more, it’s vastly easier to correct and edit records in Excel, Access, or any number of other tools than in a CRM, ERP, or other application.

At the same time, though, the majority of these applications provide industrial-strength tools to fix errors. Database refreshes, purge programs, and batch error-handling mechanisms collectively allow mistakes to be fixed. Don’t wait for perfection. In the words of Voltaire, “The perfect is the enemy of the good.”

Feedback

What say you?

Category: Information Value
1 Comment »

by: Robert.hillard
24  Jun  2011

The “four layer” model applied to unstructured content

In my book, Information-Driven Business, I introduce a four-layer model for information. You can also read more about this model in the MIKE2.0 article: Four Layers of Information.

The four-layer model provides a way of describing information in every organisation. The model explains how information is consumed (layer 1: metrics), navigated (layer 2: dimensional), held (layer 3: atomic) and created (layer 4: operational). Using this model helps an organisation to understand where it is overly dependent on staff or customer knowledge to manage information at any of these layers (such as summarising data for reports, or slicing it up in spreadsheets to answer questions).

Some people have commented that the descriptions I use in the book, and that are used in the MIKE2.0 article, are geared towards structured data. To help readers understand how the model applies equally to structured and unstructured data, the following definitions of each layer may help.

Layer 1: Metrics
For information to be used for management decision making, it ultimately needs to be summarised into a score or metrics against which “good” or “bad” can be defined.  This is the same regardless of whether we are talking about structured data or summarising a collection of unstructured content.  The metric for documents could be as simple as a count (for example, the number of policies) or a combination of factors such as the number of processes covered by a particular type of policy.

Layer 2: Dimensional or Navigational
While formally described as the dimensional layer, it is perhaps better described as the way that the organisation’s information can be navigated. At this layer we are talking about structuring the content in such a way that it can be found systematically (via a taxonomy). It is from here that metrics, such as a count of policies, can be derived. It is also from here that we go to find content in its general form (“get me all procedures associated with disaster recovery”). For instance, in this layer policies can be cross-referenced against each other.

Layer 3: Normalised or Atomic
In the unstructured sense it is better to use the term “atomic” for this layer, which contains the content in its original form, referenced by the event that created it rather than by a business taxonomy. This layer is often handled badly in organisations, but handling it well can be as simple as recording the time, author and organisational hierarchy. It can also be aligned to business processes. For instance, in this layer, policies and procedures should be fully formed but only associated with the scope that they are covering.

Layer 4: Operational
The fourth layer is the front line and refers to the situation and technology context in which the content is created or updated. Examples include social media, documents on network drives and email within the inboxes of the conversation participants. For instance, in this layer, policies are created (maybe in many parts) but have no context.
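
To make the four layers concrete for unstructured content, here is a minimal sketch. The document fields, taxonomy terms and metric are invented for illustration; they are not part of the model itself.

```python
# Minimal sketch of the four layers applied to unstructured content.
# Document fields and taxonomy terms are invented for illustration.

# Layer 4 (operational): content as created, in its technology context,
# with no business context yet.
documents = [
    {"title": "DR runbook", "author": "ops", "created": "2011-05-01"},
    {"title": "Backup policy", "author": "it", "created": "2011-05-03"},
    {"title": "Leave policy", "author": "hr", "created": "2011-05-07"},
]

# Layer 3 (atomic): each document recorded against the event that created
# it (time, author, organisational hierarchy), but not yet classified.
atomic = [dict(doc, id=i) for i, doc in enumerate(documents)]

# Layer 2 (dimensional/navigational): content classified against a
# taxonomy so it can be found systematically and cross-referenced.
taxonomy = {
    0: {"type": "procedure", "subject": "disaster recovery"},
    1: {"type": "policy", "subject": "disaster recovery"},
    2: {"type": "policy", "subject": "human resources"},
}

def find(type_=None, subject=None):
    """Navigate: e.g. 'get me all procedures associated with disaster recovery'."""
    return [doc for doc in atomic
            if (type_ is None or taxonomy[doc["id"]]["type"] == type_)
            and (subject is None or taxonomy[doc["id"]]["subject"] == subject)]

# Layer 1 (metrics): summarised scores derived from the layers below,
# e.g. a simple count of policies.
print(len(find(type_="policy")))  # 2
print([d["title"] for d in find(type_="procedure", subject="disaster recovery")])
```

Each layer only builds on the one beneath it: the operational documents are untouched, the atomic layer records the creating event, the navigational layer adds the taxonomy, and the metric is derived rather than stored.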

Category: Enterprise Content Management, Information Management, Information Strategy, MIKE2.0
No Comments »

by: Bsomich
23  Jun  2011

Marketing Manners & Business Intelligence: Give and you shall receive.

Business intelligence: We all strive for it. Marketing research departments are notorious for shelling out big bucks to survey companies for information on their customers, competitors and emerging products and technologies, but they often forget that this type of information can be found more easily and cheaply in their own backyard.

How? Through techniques such as content marketing: sharing content in exchange for information. In this digital age, companies are learning to package basic services and offer them through their own marketing channels and social networks in exchange for email addresses, consumer preferences and intelligence on potential competitors. The more value a visitor finds in the content, the more information he or she is willing to part with. The old adage “give and you shall receive” comes to mind. Some just call it good marketing manners.

So how is this method of collecting information better, or cheaper for companies?

Traditional marketing research firms and survey data providers have unique methodologies. Each has its own recipe for how data is collected, stored and categorized. When you purchase data from a third party, you lose visibility into where that data came from, whom it came from, how old it is, and what system was used to interpret it. In a nutshell, you really don’t know what you’re getting. It comes down to the basic principle that if you want something done right, you’ve got to do it yourself.

Sounds expensive though, doesn’t it? Not really. Think about your company’s basic services… whether you sell financial consulting, Ferraris or light bulbs, there are always “freebies” you can easily part with, package and provide to get people in the door. Instead of spending all your resources marketing your paid products, put some effort into marketing your free ones. You’ll get a much higher response rate and gain valuable information to help in future sales and marketing efforts. The trick is to make sure there’s enough value in your free content that visitors don’t leave without giving you what you need in return. The benefit is that you get to control what information gets collected and from whom. And the best part is that you spend a fraction of the resources to obtain better-quality information while also engaging with potential customers.

Sounds like a no-brainer, doesn’t it?
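
As a rough sketch of the mechanics, here is a minimal, hypothetical gated-download handler. The field names, storage and URL are placeholders invented for the example, not any particular platform’s API.

```python
# Hypothetical sketch of a gated-content exchange: the visitor trades an
# email address and a few preferences for a free download. Field names,
# storage and URLs are placeholders invented for illustration.

leads = []  # stand-in for a CRM or marketing database

def gate(download_url, form_data):
    """Release the free content only if the visitor shares enough in return."""
    required = ("email", "industry")
    if not all(form_data.get(field) for field in required):
        return None  # not enough given in return; keep the gate closed
    leads.append({
        "email": form_data["email"],
        "industry": form_data["industry"],
        "interested_in": form_data.get("interested_in", ""),
    })
    return download_url

link = gate("/whitepapers/data-quality.pdf",
            {"email": "visitor@example.com",
             "industry": "healthcare",
             "interested_in": "data governance"})
print(link, "-", len(leads), "lead(s) captured")
```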

Category: Information Development
No Comments »

by: Phil Simon
20  Jun  2011

On Gambling, Data, Google, and Ethics

“Do you have a player’s card?”, the dealer asks.

“I do not”, I reply.

And with that, my mind starts to wander to, of all things, data. I am sitting at a blackjack table in a small Las Vegas casino. It’s not one of those recent monstrosities on The Strip. This is an old-school gambling den a good fifteen miles from the action.

This little casino exudes the quaint charm of 1950s-style Vegas, from long before billion-dollar hotels and the World Series of Poker came into existence.

Yet, despite its distinctly retro feel, The Horseshoe has very much figured it out. By sheer virtue of my dealer’s initial five-word question, I know that her manager—and her manager’s manager—understands the importance of customer data. They want to collect as much of it as possible. They want to know who I am, where I’m from, what games I like to play, and my entire gambling profile.

Casinos have long been both lauded and excoriated for their ability to mine customer data. It all depends upon your point of view. I heard a few years ago that Gamblers Anonymous (GA) requested that certain casinos stop marketing to its members. Why? The casinos were just too good at luring recovering addicts back to the slots, tables, and poker rooms.

Ethics aside for a moment, I would want to run an organization this well. Why wouldn’t you want your company to be almost too good at data mining, data management, and data governance?

Curating and Managing Data

Have you ever noticed that many companies get sued over data-related matters precisely because they’re just too good at information management? (The lone exceptions are those organizations so sloppy with their data management practices that they leave themselves open to breaches and other security snafus.)

I’d rather be the hunted than the hunter. Consider Microsoft, a company that has waned in importance over the last decade. It’s still important, but it’s nowhere near what it used to be.

Case in point: The company is suing Google for the latter’s ostensible monopoly in search. It’s arguing that Google’s search dominance hurts consumers, but its primary objective is to knock Page and Brin down a peg—and carve out a chunk of the billions of dollars available from keywords associated with search. Google stores troves of customer and user data. Ballmer and company are forced to play their final card: the legal system. Historians will point out that Microsoft was trumpeting the value of free enterprise when it was the defendant in a number of anti-trust suits.

Simon Says

I’ve said it before and I’ll say it again. It’s all about the data: who has it, who wants it, whose is best, who’s using it right now—and who will use it in the future. To continue with the gambling analogy, organizations lacking coherent information management strategies leave chips on the table, fail to know the correct odds when they bet, and are likely to go bust.

Feedback

What say you?

Category: Data Quality, Information Management
No Comments »

by: Bsomich
15  Jun  2011

Profile Spotlight: Steve Sarsfield


Steve Sarsfield is a data governance expert, speaker, author and blogger.

He is a product marketing professional at a major data quality vendor and the author of the book “The Data Governance Imperative.” He has held speaking engagements at events such as the MIT Information Quality Symposium, the International Association for Information and Data Quality (IAIDQ) Symposium and the SAP CRM 2006 summit.

Specialties:

• Strategic Business Planning & Development
• Enterprise Product Expertise
• Author of the book “The Data Governance Imperative”
• Accomplished Speaker and Dale Carnegie Graduate
• Project and Budget Management

Connect with Steve.

Category: Member Profiles
No Comments »

by: Phil Simon
13  Jun  2011

Who will manage Big Data?

An interesting McKinsey Global Institute (MGI) analysis looks at the vast amount of enterprise information these days–and the challenges that organizations will face in trying to manage it. The report, Big data: The next frontier for innovation, competition, and productivity, explores things such as the state of digital data and how different domains can use large data sets to create value. Here’s an excerpt:

MGI’s analysis shows that companies and policy makers must tackle significant hurdles to fully capture big data’s potential. The United States alone faces a shortage of 140,000 to 190,000 people with analytical and managerial expertise and 1.5 million managers and analysts with the skills to understand and make decisions based on the study of big data (exhibit). Companies and policy makers must also tackle misaligned incentives around issues such as privacy and security, access to data, and technology deployment.

To read the entire report, click here (registration required).

Reactions

While the sheer numbers surprised me a little, the issues the report raised certainly did not. Industry observers have long noted the shortage of skilled people who can handle the increasing amounts of structured and unstructured data accumulated every day–a trend that shows no signs of abating. What’s more, misaligned incentives can cause many problems and exacerbate others, something that I have seen repeatedly in my years as a consultant.

To be sure, semantic technologies will help us get our arms around enormous data sets, a subject that I have written about on this site before. This is already happening to varying extents. But technology alone won’t get us into the end zone. No, we’ll need highly skilled and adaptable people–and lots of them.

One of my favorite posts on the matter, Rise of the Data Scientist, was written more than two years ago. Data scientists:

…have a combination of skills that not just makes independent work easier and quicker; it makes collaboration more exciting and opens up possibilities in what can be done. Oftentimes, visualization projects are disjointed processes and involve a lot of waiting. Maybe a statistician is waiting for data from a computer scientist; or a graphic designer is waiting for results from an analyst.

Beyond knowledge of different statistical methods and programming languages, data scientists will need soft skills. Programming a computer is hardly tantamount to dealing with people issues, especially when those darn misaligned incentives rear their ugly heads.

Simon Says

More than ever today, competitive advantages are fleeting. Never before have companies been able to adapt as quickly, as evinced by examples such as Google, Amazon, and Apple. At the same time, though, in one sense nothing has really changed in the last 20 years: the companies that sustain high levels of success attract, retain, and develop the best people.

Feedback

What say you?

Category: Enterprise Data Management, Information Development
No Comments »

by: Bsomich
09  Jun  2011

An Open Source Solution for Master Data Management

In most large organisations, managing master data is a very complex problem that technology alone will not solve.  The majority of issues are process and competency-oriented:

  • Organisations typically have complex data quality issues with master data, especially with customer and address data from legacy systems
  • There is often a high degree of overlap in master data, e.g. large organisations storing customer data across many systems in the enterprise (a toy illustration of detecting such overlaps follows this list)
  • Organisations typically lack a Data Mastering Model that defines primary masters, secondary masters and slaves of master data, which makes integration of master data complex
  • It is often difficult to come to a common agreement on domain values that are stored across a number of systems, especially product data
  • Poor information governance (stewardship, ownership, policies) around master data leads to complexity across the organisation
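
As promised above, here is a toy sketch of detecting overlapping customer master data across two systems. The record layouts and the matching rule are deliberately naive inventions for illustration; real MDM hubs use probabilistic matching, survivorship rules and a proper Data Mastering Model.

```python
# Toy sketch of detecting overlapping customer master data across two
# systems. Record layouts and the matching rule are invented for
# illustration; real MDM tools use far richer probabilistic matching.

crm = [
    {"id": "C-17", "name": "ACME Pty Ltd", "postcode": "3000"},
    {"id": "C-42", "name": "Bluegum Bank", "postcode": "2000"},
]
billing = [
    {"id": "B-08", "name": "Acme Pty. Ltd.", "postcode": "3000"},
]

def normalise(name):
    """Crude canonical form: lowercase and strip punctuation/whitespace."""
    return "".join(ch for ch in name.lower() if ch.isalnum())

def find_overlaps(primary, secondary):
    """Pair records that appear to describe the same real-world customer."""
    index = {(normalise(r["name"]), r["postcode"]): r for r in primary}
    overlaps = []
    for record in secondary:
        key = (normalise(record["name"]), record["postcode"])
        if key in index:
            overlaps.append((index[key], record))
    return overlaps

for master, duplicate in find_overlaps(crm, billing):
    print(master["id"], "<->", duplicate["id"])  # C-17 <-> B-08
```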

MDM solutions are often perceived by business and executive management as significant, costly infrastructure improvement efforts that lack well-defined, tangible business benefits.

The MIKE2.0 Information Development approach can help implement the necessary competences to address these issues in a systematic fashion. An end-to-end approach is followed that goes from business strategy to ongoing operational improvement.  MIKE2.0 contains the comprehensive set of activities and tasks that are required to see an MDM programme through from strategy to implementation.

Key aspects of MIKE2.0 that are applied include:

  • Definition of the business and technology blueprint, which defines strategic requirements and plans the implementation programme over multiple increments
  • Definition of strategic and tactical architecture frameworks, using the SAFE Architecture as a starting point
  • Documentation of strategic technology requirements for product selection
  • An organisational model to focus specifically on Information Development
  • An approach to improve Data Governance across the environment

This comprehensive approach spans people, process, organisation and technology, and is driven by an overall strategy. Please check it out when you have a moment.

Category: Master Data Management
No Comments »

by: Phil Simon
05  Jun  2011

Organizational Mind-Sets, Michael Jordan, and John McEnroe

In his excellent book, Little Bets: How Breakthrough Ideas Emerge from Small Discoveries, Peter Sims writes about psychology research performed by Dr. Carol Dweck of Stanford University. Among Dweck’s fascinating contributions, she has found that, at least at a high level, people can generally be broken into two categories. First, there are those with growth mind-sets. These folks embrace challenges, seeing them as opportunities to learn new skills and generally improve themselves.

Basketball legend Michael Jordan was perhaps the poster child for growth mind-sets. When faced with challenges, he would respond in a very constructive way. If necessary, he would retool. For example, early in his career, Jordan’s outside shooting was very mediocre, his three-point shooting lousy. Teams would routinely force Jordan to take outside shots and prevent him from getting to the rim at all costs.

It worked. Teams had at least temporarily figured out how to contain “MJ.”

So Jordan worked obsessively on his outside shooting, becoming a more than respectable threat from “behind the arc.”

Contrast Jordan with US tennis wunderkind John McEnroe, the embodiment of the fixed mind-set. Sims writes:



In other words, those with fixed mind-sets become frustrated when taken out of their comfort zones. (Note: I am a huge McEnroe fan. He was a magician at the net and can still bring it today.)

Which Mind-Set is Your Organization?

While I am no expert on Dweck’s work, she has clearly defined two extremes. That is, many people might have mind-sets more “fixed” than “growth” in orientation. Personally, I love improving, but have been known to have my McEnroe moments on and off the tennis court. Think in terms of a continuum, not a binary.

So, what does Dweck’s model have to teach us about the worlds of change and information management? It’s a little simplistic to view an organization as one monolithic entity. After all, any organization comprises many individuals, tribes, policies, politics, systems, and types of data. What’s more, different types of organizations employ different types of people in different roles. I’ve yet to come across a big company in which everyone had identical mind-sets.

Still, it’s instructive to look at organizations in this manner. Consider the following:

  • How do key people within the organization respond when confronted with unforeseen issues? Are they more Jordan or McEnroe?
  • Are certain departments more likely than others to see challenges as opportunities?
  • Which problems legitimately require a McEnroesque response? How often do you want to play that card with your vendors, partners, suppliers, and customers?

Simon Says

I don’t have the answers to these lofty questions. I am asking them because the MIKE2.0 Framework should not be viewed in a vacuum–nor should any other. Taking into account the reactions, dispositions, and general attitudes of key players is essential in considering anything remotely resembling planning. Mapping out key dates, tasks, and persons responsible sans these critical considerations is a recipe for disaster.

Feedback

What say you?

Category: Information Development, Information Management
No Comments »

by: Bsomich
01  Jun  2011

Profile Spotlight: John Morris


John Morris is a highly experienced Data Migration professional with a 25-year history in information technology.

For the last ten years he has specialised solely in delivering Data Migration projects. During that time he has worked on some of the biggest migration projects in the UK and for some of the major systems integrators.

John is the author of “Practical Data Migration”, the top-selling book on data migration, published by the BCS (British Computer Society).

John’s personal blog on the BCS website is available here: John Morris – Practical Data Migration Blog.

John is a regular speaker at conferences in the UK and abroad. He is passionate about elevating Data Migration to the position it deserves as a skill-set and marketplace in its own right.

In 2006 he co-founded Iergo, a specialist consultancy that focuses solely on Data Migration, bringing many years of experience to the problem domain.

Connect with John.

Category: Information Development
No Comments »
