
Archive for November, 2011

by: Phil Simon
27 Nov 2011

Top-Down Business Intelligence

“A good plan is like a road map: it shows the final destination and usually the best way to get there.”

–H. Stanley Judd

I recently had the opportunity to attend IBM’s Information On Demand Conference in Las Vegas, NV. At breakfast one day, Crysta Anderson of IBM, two attendees, and I talked about some BI implementations gone awry. In this post, I’ll discuss a few of the key problems that have plagued organizations that have spent millions of dollars on BI products–and still not seen the anticipated results.

A Familiar Scenario?

Recognize this? An organization (call it YYZ here) is mired in a sea of individual reports, overwhelmed end users, bad data, and the like. The CIO hears about a competitor successfully deploying a BI tool, generating meaningful KPIs and insights into the business. After lengthy vendor overview and procurement processes, YYZ makes the leap and rolls out its new BI toys. Two years later, YYZ has yet to see any of the expected benefits.

Many of you have seen this movie before.

So, you’re the CIO of YYZ and your own future doesn’t look terribly bright. What to do? Blame vendors if you want. Consultants also make for good whipping boys. Maybe some regulations stood in your way. And there’s always that recalcitrant department or individual employees who didn’t get with the program. But I have one simple question for you: Where did you start?

As we discussed at that table at the IOD conference, many organizations get off on the wrong foot. They begin their BI implementations from the bottom up. In other words, they take a look at the myriad reports, scripts, queries, and standalone databases used–and not used–and then attempt to recreate all or most of them in the new BI tool. Thousands of hours, hundreds of thousands of dollars, and months of employees’ time are spent replicating the old in the new.

The Wrong Questions

There are two major problems with the bottom-up approach. The first and most obvious is that not all reports are needed. The core–and faulty–assumption is that we will need tomorrow what we needed yesterday. As a consultant, I have personally seen many people request reports in the “new” system that they would never run. When I would ask a question about the purpose of the report or its ultimate destination, I would far too often hear crickets.

Beyond superfluous reports, there’s a deeper problem with bottom-up BI: it runs counter to the whole notion of BI in the first place.

Let me explain. At its highest level, BI–cubes, OLAP, data mining, and other sophisticated analytical techniques and tools–starts at the top. That is, the conversation typically begins with high-level questions such as:

  • Who are our best customers?
  • Where are our profits coming from?
  • Which products are selling best in each area?
  • Which salespeople seem to be struggling?

An organization cannot start a BI implementation in the weeds and expect to be successful. It needs to start with the right questions, define its terms, and clean up its data. Only then can it begin to see the fruits from the BI tree.
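
To make the contrast concrete, here is a minimal sketch in Python with pandas (the regions, products, and figures are invented) of how one of those top-down questions–which products are selling best in each area?–collapses into a single, purposeful query once terms are defined and the data is clean.

    import pandas as pd

    # Hypothetical, already-cleaned sales data; in practice this would come
    # from an agreed-upon source such as a warehouse fact table.
    sales = pd.DataFrame({
        "region":  ["East", "East", "West", "West", "West"],
        "product": ["A", "B", "A", "B", "C"],
        "revenue": [120.0, 80.0, 200.0, 50.0, 75.0],
    })

    # "Which products are selling best in each area?"
    best_sellers = (
        sales.groupby(["region", "product"], as_index=False)["revenue"]
             .sum()
             .sort_values(["region", "revenue"], ascending=[True, False])
             .groupby("region")
             .head(1)  # keep the top product per region
    )
    print(best_sellers)

The query is trivial by design: the question came first, and the report exists only to answer it. Bottom-up BI inverts that order.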

Simon Says

Fight the urge to “just do it” when deploying BI tools. Failing to reassess core business processes, KPIs and metrics, and data quality issues is a recipe for disaster. Consultants and vendors who believe that you can just figure these things out mid-stream should be ignored.

Feedback

What say you?

Category: Business Intelligence
4 Comments

by: Robert.hillard
20 Nov 2011

Embracing the unexpected

The nineteenth century belonged to the engineers.  Western society had been invigorated and changed beyond recognition by the industrial revolution through its early years, and by its close the railroads were synonymous with the building of wealth.

The nineteenth century was also the era that saw the building of modern business, with the foundations being established for many of the great companies that we know today.  The management thinkers who defined the discipline cluster around the first part of the twentieth century, and it should be no surprise that they were heavily influenced by the engineers.

Business was built around the idea of engineered processes with defined inputs and outputs.  I’ve written before about the shift from process-driven to information-driven business.  In this post, though, I am really focusing on another consequence of the engineering approach to the running of businesses: the expectation of achieving planned outcomes.

There is a lot to be said for achieving a plan.  Investors dream of certainty in their returns.  Complex businesses like to be able to align production schedules.  Staff like knowing that they have a long-term job.

When you’re building a bridge or a railroad, there is certainty in the desired outcome.  Success is measured in terms of a completed project against time and budget.

When your business has a goal of providing products or services into a market, the definition of success is much harder to nail down.  You want your product or service to be profitable, but you are usually flexible on its exact definition.  However, internal structures tend not to have this flexibility built in.  Large businesses operate by ensuring each part of the organisation delivers its component of a new project as specified by the overall design.

This sounds fine until you look at these components in more detail.  Many are fiendishly complex.  In particular, the IT component can often involve many existing and new systems which have to be interfaced in ways that were never intended when they were originally created.  Staff trained to achieve a single outcome in the market keep on testing customers until they gain (or even bludgeon) acceptance for the product or service design.

Because of the scale of these projects, failure is not an option.  The business engineering philosophy that I’ve described will push the launch through regardless of the obstacles.  However, there is a growing trend in business to try and use “big data” to run experiments and confirm that the design of a new product or service is correct before this effort is undertaken.

There is also another trend in business.  Agile.  Agile methods are characterised by an evolutionary approach to achieving system outcomes.

Individually these trends make sense.  Taken together they may actually be starting to indicate a deeper change.  In a future world we may treat business as an experiment in its own right.  We know what the outcome is that we expect, but we will push our teams to embrace issues and look for systemic obstacles to guide us in new, and potentially more profitable, directions.

When customers don’t react positively to our initial designs, rather than adjust the design to their aesthetic, business should ask whether the product is appropriate at all and consider making a radical shift even at the last minute.

When IT finds that a system change is harder than they expected, they can legitimately ask whether there is a compromise that will deliver a different answer that might be equally acceptable, or sometimes even more useful.

One of the major differences between scientists and engineers is that the former look for the unexpected in their experiments and try to focus on the underlying knowledge they can get from things not going as planned.  Perhaps twenty-first century business needs fewer people thinking like engineers trying to railroad new products and services into the market, and more who are willing to don the lab coat of a scientist and allow the complexity of modern business to flourish and support their innovation.

Category: Information Strategy
No Comments

by: Bsomich
19 Nov 2011

Weekly IM Update.

Open Source Solution Offerings

MIKE2.0 Open Source Solution Offerings are used to implement solutions to information management problems, solely through the use of Open Source technologies. The goal of MIKE2.0 is to become an organising framework for the use of Open Source in the Information Management space.

The MIKE2.0 Methodology plans to evolve to include:

  • An Open Source Maturity Model, using Technology Selection QuickScan as a starting point
  • A definition of an Open Source version of the SAFE Architecture bringing together multiple Open Source components
  • Open Sourcing of some of the MIKE2.0 tools, such as IM QuickScan
  • Assessments of Open Source Data Management projects from communities such as SourceForge and Eclipse
  • Detailed design and code Supporting Assets that are all Open Source
  • Development of new Open Source technologies in the data management space, through the end-to-end lifecycle of creating these products
  • An Open Source Collaboration Forum to harness ideas about realising the open source value proposition across industries.

We hope you find this of benefit and welcome any suggestions you may have to improve it.

Sincerely,

MIKE2.0 Community

 
Popular Content

Did you know that the following wiki articles are most popular on Google? Check them out, and feel free to edit or expand them!

What is MIKE2.0?
Deliverable Templates
The 5 Phases of MIKE2.0
Overall Task List
Business Assessment Blueprint
SAFE Architecture
Information Governance Solution

Contribute to MIKE:

Start a new article, help with articles under construction or look for other ways to contribute.

Update your personal profile to advertise yourself to the community and interact with other members.

Useful Links:
Home Page
Login
Content Model
FAQs
MIKE2.0 Governance


This Week’s Food for Thought:

What Will Motivate Mainstream IT to Adopt Big Data? 

BigData/NoSQL solutions have a chasm to cross, and it will need to happen soon. The investment in BigData certainly presupposes adoption well beyond the initial group of Silicon Valley “new economiers” like eBay, Amazon, Yahoo and Google – companies who, incidentally, wrote many of the solutions. The BigData landscape could radically change if adoption does not continue through to the automakers, big insurance companies, healthcare companies and big retailers. The investment community is counting on it.

Read more.
Information Governance: Do you have a plan?

A successful enterprise is largely dependent on how well information policies are evaluated and adapted as business priorities and market conditions evolve.

Still, many organizations do not have a formal information governance strategy in place.   An EIU research survey of senior executives from leading companies around the world found that nearly two-thirds (62%) of companies have no formal information governance program in place, a concerning trend that can leave many corporations underperforming and open to preventable risks to sensitive information.

Read more.
Too Big To Succeed

By now, we’ve all heard the term too big to fail. Billions of dollars in bail-outs were required to save enormous financial institutions that simply had to survive, even if it took massive government loans.

Think about its counterpart: Too big to succeed.

A large health care organization with which I was working (call it “Company X” here) needed me to do an application upgrade of its time and attendance system. It was a big project, but far short of a new ERP implementation. We agreed on rate, contract language, and start date. IT then sent consultants a 15-page instruction sheet just to connect to its intranet.

Let me repeat that: Fifteen pages.

Read more.

Category: Information Development
No Comments

by: Phil Simon
17 Nov 2011

On Moneyball and the Importance of Data

At the recent IBM Information On Demand Conference, keynote speaker Michael Lewis discussed some of the principles behind his best-selling book, Moneyball. In his superb and compelling text, Lewis describes how Billy Beane (general manager of the Oakland Athletics, an American baseball team), successfully used data-oriented strategies to compete against teams with payrolls two or three times as high.

To be sure, Lewis and Beane (also in attendance) were not addressing the baseball intelligentsia at the conference. (OK, maybe a few wannabes and baseball geeks.) They were talking to information management (IM) professionals from a wide array of industries. Yet, the principles in the book could not have been more apropos to the audience:

  • Information matters now more than ever.
  • Information has never been easier to obtain and manipulate.
  • Any lead or advantage gleaned from the effective use of information is fleeting. It isn’t that hard to employ a copycat strategy.
  • People often refuse to adopt information-based strategies later in life because they believe that they already know what’s best.

Lewis’ last point about change-averse baseball old-timers–and people in general–is particularly salient. For all of the pontificating I do on this site about data quality, intelligent information management, and the like, it all comes down to people. Human beings make decisions about what–and what not–to do.

Limited Means

Necessity is the mother of invention, as they say. It’s interesting to note that Beane had to rely upon unconventional means to field a competitive baseball team. In other words, he did not have the luxury of a big budget that would have allowed him to spend the big money on traditionally valued players. Instead, he had to develop and use new statistics, in many cases relying upon neglected but potentially valuable Sabermetrics. As Lewis explained at the conference and in the book, Beane had to look at the market for undervalued players and make intelligent bets. Paying $120 million to sign his best player at the time (Jason Giambi) was not an option. (Ten years ago, Giambi signed a 7-year $120-million deal with the New York Yankees.)

While the A’s have yet to win a championship on Beane’s watch, the team has been very competitive with limited means–especially in comparison to other small market clubs like the Pittsburgh Pirates. In fact, teams with far bigger payrolls have performed much worse.

Simon Says

The parallels between the A’s and most organizations could not be more striking. Few have unlimited means. Nearly all have to make tradeoffs between what is necessary and what is desirable. Learn from Lewis’ book and Beane’s approach to information management. Embrace data and new ways of looking at things. You may well be surprised by the results.

Feedback

What say you?

Category: Information Strategy, Information Value
5 Comments

by: Bsomich
10 Nov 2011

Master Data Management: Does an effective solution exist?

Although the term has only reached popularity in the last decade, Master Data Management (MDM) is really a new take on an old problem.

Managing master data such as customer, product, employee, locality and partner data has always been a major challenge – ever since organisations first tried to share or integrate data across systems. For this reason, the master data management marketplace has grown significantly in the last decade and is predicted to continue to grow aggressively over the coming years. Organisations are spending very large amounts of money on their Master Data Management programmes and they want to ensure their investment is sound.

However, in most large organisations, managing master data is a very complex problem that technology alone will not solve.  The majority of underlying issues are process and competency-oriented:

  • Organisations typically have complex data quality issues with master data, especially with customer and address data from legacy systems
  • There is often a high degree of overlap in master data, e.g. large organisations storing customer data across many systems in the enterprise
  • Organisations typically lack a Data Mastering Model that defines primary masters, secondary masters and slaves of master data, which makes the integration of master data complex (a minimal sketch of such a model follows this list)
  • It is often difficult to come to a common agreement on domain values that are stored across a number of systems, especially product data
  • Poor information governance (stewardship, ownership, policies) around master data leads to complexity across the organisation
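
As an illustration of that missing Data Mastering Model, here is a minimal sketch in Python–all system and domain names are invented–of the roles such a model assigns: one primary master per domain, optional secondary masters that may update parts of it, and slaves that only consume it.

    from dataclasses import dataclass, field

    @dataclass
    class MasteringRule:
        """Who may create, update, and merely consume a master data domain."""
        primary_master: str                                     # the single system of record
        secondary_masters: list = field(default_factory=list)  # may update subsets
        slaves: list = field(default_factory=list)              # read-only consumers

    # Hypothetical mastering model for three common domains.
    mastering_model = {
        "customer": MasteringRule("CRM", secondary_masters=["Billing"],
                                  slaves=["Data Warehouse", "Marketing"]),
        "product":  MasteringRule("PIM", slaves=["ERP", "eCommerce"]),
        "employee": MasteringRule("HRIS", slaves=["Directory", "Payroll"]),
    }

    def system_of_record(domain: str) -> str:
        """Integration logic can then ask one unambiguous question."""
        return mastering_model[domain].primary_master

    print(system_of_record("customer"))  # -> CRM

Nothing about the sketch is clever; its value is that the roles are written down once, so every integration discussion starts from the same answer.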

MDM solutions are often perceived by business and executive management as significant, costly infrastructure improvement efforts that lack well-defined, tangible business benefits. MIKE2.0’s approach to Information Development is one method that can help implement the necessary competencies to address some of these issues.

Would you recommend any other solutions, and if so, why?  What do you think are the key capabilities required for an effective MDM solution?

Category: Master Data Management
12 Comments

by: Phil Simon
07 Nov 2011

Following the Information Management Chain

Although I have been focusing on writing for the last three years, I still do my fair share of consulting. I find that my consulting experiences frequently fuel my writing, and it’s my hope that the lessons of my posts help people and organizations avoid many of the mistakes that I regrettably continue to see.

Here’s another lesson from the data management trenches.

One of my clients was making some changes to its internal processing as part of implementing a new application, and the organization brought me in to do some data validation. In the course of my work, a few unexpected problems occurred because the organization’s systems are pretty integrated. In other words, like squeezing a balloon, making a change in one part of an application correctly–but unexpectedly–caused changes in some other areas. This set off what I’ll term here an IM Chain, consisting of four parts:

  • the problem
  • the cause
  • the fix
  • the fallout

A Very Simple Model: The IM Chain

The IM Chain can be represented simply and visually as follows:

the problem → the cause → the fix → the fallout

Let’s look at the chain in some more depth. First, there’s the IM problem. Consider that:

  • The problem may not be noticed at all.
  • Some people may refuse to identify the problem–or the depth of the problem.
  • The problem may be underestimated.
  • The problem may be noticed too late.
  • The problem may be noticed by people you’d ideally like to avoid (read: regulators, attorneys, and your competition).

Then, of course, there’s the cause:

  • The cause is sometimes difficult to isolate, particularly if an end user is not knowledgeable of the other parts of the system–or other systems.
  • Different entities within and outside of the organization often disagree on the cause of the problem.
  • Some may refuse to identify the cause of the problem, even when faced with irrefutable proof.

Next up is the fix:

  • The fix could be very expensive and time- and resource-intensive.
  • The right resources might not be available to fix the problem.
  • Different entities within the organization might disagree on the “right” fix.
  • Legal or regulatory hurdles might compromise the proposed fix.
  • The fix could come too late.

Finally, there’s the fallout:

  • The fix could break other things. Can someone say Pandora’s Box?
  • The “right” fix may not be politically acceptable inside an organization.

Simon Says

In almost all instances, integrated systems and applications are far superior to their standalone equivalents. Lamentably, far too many organizations continue to use and support multiple and disparate legacy applications for one reason or another. While I’m hardly a fan of multiple systems, records, and applications, there is, oddly, often one major benefit to these types of data silos: the IM Chain may cease to exist.

Understand that, if all of your systems talk to each other (as is increasingly common via open APIs), you’re going to have to deal with the IM Chain. Ask yourself if making one change is going to precipitate others. Don’t wait until it actually does.
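
One lightweight way to ask that question systematically is to keep even a crude map of which systems feed which, and to walk it before making a change. Here is a minimal sketch in Python (the system names are invented):

    from collections import deque

    # Hypothetical map: each system -> the systems that consume its data.
    feeds = {
        "TimeAndAttendance": ["Payroll", "Data Warehouse"],
        "Payroll":           ["General Ledger"],
        "Data Warehouse":    ["BI Reports"],
        "General Ledger":    [],
        "BI Reports":        [],
    }

    def downstream_impact(system: str) -> list:
        """Breadth-first walk: everything that could feel a change to `system`."""
        seen, queue = set(), deque(feeds.get(system, []))
        while queue:
            current = queue.popleft()
            if current not in seen:
                seen.add(current)
                queue.extend(feeds.get(current, []))
        return sorted(seen)

    # Before touching the time and attendance system, list the blast radius.
    print(downstream_impact("TimeAndAttendance"))
    # -> ['BI Reports', 'Data Warehouse', 'General Ledger', 'Payroll']

The map will always be incomplete, but even an incomplete one turns “ask yourself” from a platitude into a checklist.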

Feedback

What say you?

Category: Data Quality, Enterprise Data Management, Information Management
3 Comments

by: Phil Simon
02 Nov 2011

Beyond the Data Management Basics

Amazon, Apple, Facebook, and Google

Fast Company recently ran a fantastic article on the success and futures of Amazon, Apple, Facebook, and Google. These companies do so many things really well, not the least of which is their astonishing level of data management. From the piece:

Data is like mother’s milk for [these companies]. Data not only fuels new and better advertising systems (which Google and Facebook depend on) but better insights into what you’d like to buy next (which Amazon and Apple want to know). Data also powers new inventions: Google’s voice-recognition system, its traffic maps, and its spell-checker are all based on large-scale, anonymous customer tracking. These three ideas feed one another in a continuous (and often virtuous) loop. Post-PC devices are intimately connected to individual users. Think of this: You have a family desktop computer, but you probably don’t have a family Kindle. E-books are tied to a single Amazon account and can be read by one person at a time.

In a word, wow.

Consider what Amazon, Apple, Facebook, and Google (aka the Gang of Four) do with their data in relation to the average large organization. By way of stark contrast, at a recent conference I attended, DataFlux CEO Tony Fisher described how most companies need a full two days to gather a list of their customers.

Think about that.

Two days.

When I heard that statistic, I couldn’t help but wonder about the following questions:

  • Is this list of customers ultimately accurate?
  • Why does this take so long? Why can’t someone just run a report?
  • How many organizations are trying to fix this–especially those that take two weeks or more?
  • What about other types of lists (read: products, employees, vendors, etc.)?
  • What kind of resources are involved in cobbling together these types of reports?
  • How can an organization understand its customers’ motivations, preferences, and purchasing habits when, as is too often the case, even the definition of the term customer is in dispute?
  • Most important, if the organization managed its data better and its data were more accurate, what else could it do with the time and resources currently required to “keep the lights on”?

Ah, good old opportunity cost. Think about what Amazon can do because it knows exactly who its customers are, which products they buy and when, and (increasingly) why they buy. Bezos and company waste no time and resources in being able to immediately pull accurate and comprehensive lists of who bought what and when.
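
To see why “just run a report” fails, consider a minimal sketch in Python with pandas (the systems and addresses are invented) of what happens when two systems disagree about what a customer is:

    import pandas as pd

    # Hypothetical extracts: the CRM counts anyone who registered,
    # while billing counts anyone who was ever invoiced.
    crm     = pd.DataFrame({"email": ["a@x.com", "b@x.com", "c@x.com"]})
    billing = pd.DataFrame({"email": ["B@X.COM", "c@x.com", "d@x.com"]})

    # Naive concatenation double-counts: the same person hides
    # behind inconsistent formatting and overlapping sources.
    naive = pd.concat([crm, billing])
    print(len(naive))  # 6 rows for what may be only 4 people

    # Even a crude answer requires agreeing on a matching rule first.
    merged = naive.copy()
    merged["email"] = merged["email"].str.lower().str.strip()
    print(merged["email"].nunique())  # 4 distinct customers under this rule

Multiply that matching argument across dozens of systems and millions of rows, and two days starts to look optimistic.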


Necessary and Sufficient

For good reason, the Gang of Four keeps its internal methods and systems pretty much under wraps. Even people who have written books about each company have had difficulty speaking with key internal players, as Richard Brandt (author of a forthcoming book on Amazon) recently told me.

However, this much I can write without fear of contradiction: none of them achieved its level of success by poorly managing its data. Put differently, in the Age of the Platform, excellent data management is becoming a necessary–but insufficient–condition for success.

Simon Says

This is not 1995; companies no longer buy even staple products such as Microsoft Windows or Office simply because no legitimate alternatives exist. “Have to” is increasingly being replaced with “want to.” You won’t know the difference between the two unless you know your customers.

Feedback

What say you?

Category: Information Management
2 Comments

by: Bsomich
02 Nov 2011

Profile Spotlight: Matthew Moore.


Matthew Moore

Matthew Moore has been working in the Knowledge Management field for over 10 years. He is currently a Director at Innotecture, an information management consulting firm based in Australia. Matt’s professional experience spans knowledge management, learning and development, internal communications and community development, working with organisations such as PricewaterhouseCoopers, IBM, Oracle and the Australian government.

Connect with Matt.

Category: Information Development
No Comments
