Open Framework, Information Management Strategy & Collaborative Governance | Data & Social Methodology - MIKE2.0 Methodology

Archive for August, 2012

by: Bsomich
31  Aug  2012

The Open MIKE Podcast: Episode 02

We’ve just released the second episode of our Open MIKE Podcast series!

Episode 02 features key aspects of our Information Governance solution offering and includes a “Step Up to the MIKE” call-to-action. Check it out:

Open MIKE Podcast – Episode 02 from Jim Harris on Vimeo.

We kindly invite any existing MIKE contributors to contact us if they’d like to contribute any audio or video segments for future episodes.

On Twitter? Contribute and follow the discussion via the #MIKEPodcast hashtag.

Category: Information Development
1 Comment »

by: Robert.hillard
26  Aug  2012

Just 3% smarter isn’t enough

There is a curious phenomenon in IQ tests.  Every test is designed to have an average score of 100 across the population, but for any given test the average result rises by 3% every decade.  This is called the Flynn effect, after James Flynn, who was central to the observation and documentation of the pattern.

I wonder whether there is an argument that this also relates to society’s knowledge and potentially enterprise capability.  This is a controversial position and many would point out that other causes have been put forward for the Flynn effect although none have been proven.

My argument is that it seems to take a period of time for any idea, no matter how well proven, to get absorbed by a population in an organisation.  As much as IQ tests try to find different ways to test intelligence that is independent of knowledge and experience, they require some degree of common understanding.

In the early 20th century, general relativity was a concept that was so far beyond the understanding of the broad scientific community that it is said that when Sir Arthur Eddington was asked whether it was true that only three people in the world understood the theory he replied “Who’s the third?”.

Over many decades, the concept of relativity has become part of mainstream knowledge and is intuitively understood by the majority of the population who have received at least some higher education in physics.  What changed?  When Eddington made his comment, the idea of relativity had been in existence for some years and the details have barely changed since.  Even though the information had been in existence for a long time, its explanation has improved over many decades through retelling and hence is much more readily adopted.

My question then is whether big ideas get absorbed over a period of years and change the general background knowledge of a population almost by osmosis.  The Flynn effect could, therefore, reflect the rate at which these concepts get picked up.  Perhaps the general capability of an organisation, and society more broadly, is enhanced by new ideas at a rate of roughly 3% per decade.

I have previously argued that it is very hard to make a leap in enterprise capability unless something is substantially changed.  If, however, an organisation identifies the big ideas that are central to their business strategy then they can actively manage the learning of the broader population.  The Flynn effect tells us to not underestimate how hard it is for new ideas to become part of the natural operating knowledge of a group of people.  Recognising this, repetition and the retelling of explanatory stories over months rather than years provide an opportunity to change the game.

To the leaders amongst us, the big idea may be as obvious as relativity was to Eddington – but the question is whether there are just three people in the organisation who get it or whether we are working to lift the general capability faster than Flynn’s 3% per decade.

Category: Information Development
No Comments »

by: Phil Simon
24  Aug  2012

The Power of When

One would hope that today most businesses know which of their products sell–and how much. After all, it’s 2012. Beyond that, many companies have spent a great deal of money trying to determine why people buy–and don’t buy–certain products. Now, with advances in information technology, we’re getting closer to answering the when question–as in, when customers spend money.

Not surprisingly, mega-retailer Wal-Mart is at the forefront of the all-important when question. It turns out that when people are paid is a major factor in when they buy what they buy. According to a recent Reuters article:

“The paycheck cycle remains pronounced, and there continues to be a lot of uncertainty in the global economy,” Chief Financial Officer Charles Holley said in a recorded message on Thursday.

The world’s largest retailer, viewed as a barometer of economic activity, continues to see signs customers are strapped—spending more at the beginning of the month, when they get their paychecks. [emphasis mine]

That is, budget-conscious customers may be more likely to buy a new cell phone when their bank accounts are flush with cash. By contrast, consumers may well buy diapers and food as needed. In other words, demand for “essential” products is probably more inelastic.
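The paycheck-cycle effect described above is easy to check once transactions carry a date. Here is a minimal sketch, using entirely hypothetical data and an illustrative helper (`spend_by_period` is not from any retail system), that buckets spending by category into the early and late parts of the month:

```python
from collections import defaultdict
from datetime import date

# Hypothetical transactions: (purchase date, category, amount).
transactions = [
    (date(2012, 8, 1), "electronics", 199.00),
    (date(2012, 8, 2), "electronics", 149.00),
    (date(2012, 8, 3), "groceries", 42.50),
    (date(2012, 8, 15), "groceries", 38.75),
    (date(2012, 8, 28), "groceries", 41.20),
    (date(2012, 8, 29), "electronics", 19.99),
]

def spend_by_period(rows):
    """Total spend per category, split into early (days 1-10)
    and late (days 21+) portions of the month."""
    totals = defaultdict(float)
    for when, category, amount in rows:
        if when.day <= 10:
            totals[(category, "early")] += amount
        elif when.day >= 21:
            totals[(category, "late")] += amount
    return dict(totals)

print(spend_by_period(transactions))
```

With real point-of-sale data, a skew like electronics clustering in the first ten days of the month is exactly the paycheck-cycle signal Holley describes.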

Let’s delve deeper into the power of when.

Complementary Goods

Now, to be sure, the question of when isn’t entirely new nor is it unaddressed. Basic economic theory tells us that certain products are more likely to be purchased with one another–a.k.a., complementary goods. For instance, it’s not uncommon for consumers to buy peanut butter and jelly together or salsa with their chips. There’s a very good reason that these products are often if not always stocked near each other in supermarkets.
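Complementary goods can be surfaced directly from transaction logs with simple co-occurrence counting, the core of market basket analysis. A minimal sketch, with made-up baskets:

```python
from itertools import combinations
from collections import Counter

# Hypothetical market baskets, one set of items per checkout.
baskets = [
    {"peanut butter", "jelly", "bread"},
    {"salsa", "chips"},
    {"peanut butter", "jelly"},
    {"chips", "salsa", "soda"},
    {"bread", "milk"},
]

def pair_counts(baskets):
    """Count how often each pair of items appears in the same basket."""
    counts = Counter()
    for basket in baskets:
        # Sort so ("jelly", "peanut butter") is one key, not two.
        for pair in combinations(sorted(basket), 2):
            counts[pair] += 1
    return counts

counts = pair_counts(baskets)
print(counts.most_common(3))
```

Pairs with high counts relative to their individual item frequencies are candidates for shelf placement near each other, which is exactly why peanut butter and jelly end up side by side.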

So, what is new? I’d argue that even small businesses have never before had access to such vast information and technology. Those that build the power of analytics into their cultures have the ability to gain remarkable insights into their customers’ behavior.

With tremendously powerful data and technology, new relationships can be discovered. Maybe there’s an ideal time of the month to put peanut butter on sale. Perhaps such a discount will spur sales of jelly or bread or bananas. Maybe product placement in certain areas at certain hours of the day means more units sold. The possibilities are limitless.

Simon Says

I’ll grant that most businesses know what people buy and how they pay for these products (although the explosion of mobile payment methods like Square will introduce new data, considerations, and variables). That’s table stakes these days. But how many really understand other essential data questions such as why and when?

And think about the transformative power of when for the consumer. Yes, we know about daily deal sites like Groupon. When products go viral on Twitter or other social sites, flash mobs can erupt, making weekly or monthly sales projections moot. I’ll bet that the importance of when will continue to increase.


What say you?


Category: Information Management, Information Strategy
No Comments »

by: Bsomich
22  Aug  2012

Profile Spotlight: Kevin Parker

Kevin Parker

Kevin Parker is the Principal Consultant & Practice Lead for Information Management at T. White Parker Associates, Inc.

He is a proven information and technology leader with an array of experience in enterprise information architecture. Parker brings a unique combination of technological expertise, business acumen, and strategic leadership, including:

SharePoint Architecture & Development
Enterprise Information Architecture & Management
Enterprise Web Application Architecture & Development
Enterprise Systems Architecture & Management
User Experience Design
Graphic Design, Marketing & Communication
Social Media & Enterprise 2.0
IT Strategy

Connect with Kevin.

Category: Member Profiles
No Comments »

by: Bsomich
17  Aug  2012

Debuting Today: The Open MIKE Podcast Series

The MIKE2.0 community is excited to announce the launch of our new podcast series “The Open MIKE.”  The Open MIKE Podcast is a video podcast show, hosted by Jim Harris, which discusses aspects of the MIKE2.0 framework, and features content contributed to MIKE 2.0 Wiki Articles, Blog Posts, and Discussion Forums.

View our first episode below:

And feel free to check out our community overview video for more information on how to get involved with MIKE2.0:



As always, contributions to the community are welcome and appreciated!

Category: Information Development
No Comments »

by: Phil Simon
16  Aug  2012

Big Data and the Gang of Four

At least to me, Big Data often seems like a bit of an amorphous term. Just what exactly is it, anyway?

Consider the following statistics from the Gang of Four:

  • Amazon: Excluding books, the company sells 160 million products on its website. Target sells merely 500,000. Amazon’s reported to have credit cards on file for 300 million customers. 300 million. For more Amazon stats, click here.
  • Apple: The company a few months ago passed 25 billion app downloads.
  • Facebook: 954 million registered users share more than one billion pieces of content every day.
  • Google: As of two years ago, Google handled 34,000 searches per second.

These numbers are nothing less than mind-blowing. While Facebook’s rate of growth seems to be waning, make no mistake: it’s still growing. (Decelerating growth shouldn’t be confused with decline.)

While we’re at it, let’s look at some of Twitter’s numbers. On the company’s five-year anniversary, the company posted the following numbers:

  • 3 years, 2 months and 1 day. The time it took from the first tweet to the billionth tweet.
  • 1 week. The time it now takes for users to send a billion tweets.
  • 50 million. The average number of tweets people sent per day, one year ago.
  • 140 million. The average number of tweets people sent per day, in the last month.
  • Oddly, 80 percent of all tweets involve Charlie Sheen.

OK, I’m making the last one up, but you get my drift.

A few things strike me about these numbers. First, this is a staggering amount of data. Second, all of this data is kept somewhere. To varying extents, these companies and others are turning data into information and, ultimately knowledge.

What they do with that knowledge varies, but no one can doubt the potential of so much data–even if much of it is noise. Another issue: will people continue to use ad-supported platforms? Will we become sick of having our data sold to the highest bidder? Or will private, ad-free platforms flourish?

Even if the latter is true, those private platforms will still be generating data. So, in a way, the explosion of data does not hinge upon the continued growth of open or “somewhat-open” platforms.

Simon Says

If you think that consumers are going to be generating and using less data in the upcoming years, you’re living in an alternate reality. Take steps now to ensure that your organization has the software, hardware, and human capabilities to handle vastly increasing amounts of data.


What say you?

Category: Information Management, Information Strategy
No Comments »

by: Phil Simon
08  Aug  2012

Courage, Big Data, and the Long Tail

For my money, one of the most important business books of the last decade is Chris Anderson’s The Long Tail. In short, advances in technology, the drop in the cost of storage, and the rise of bandwidth collectively mean that the traditional notion of inventory is, in many instances, dead.

Consider that physical bookstores like Barnes and Noble will not stock titles that only sell two or three copies per year. It’s just not worth their while. However, physical stores matter less and less these days. It’s not 1992 anymore. A little company called Amazon now sells oodles of books, to use the technical term. To Jeff Bezos et al., inventory is essentially unlimited because vast warehouses store less popular books that only sell a few copies per year. What’s more, the rise of e-books and print on demand (POD) only intensifies this trend. With the latter, a digital file can be turned into a book in minutes. Brass tacks: it’s never been easier to sell niche products.

Not Just Books

If you think that the long tail only applies to books, you’re way, way off. Think CDs, movies, and a bevy of other products. In their recent HBR article “Use Big Data to Find New Micromarkets,” Manish Goyal, Maryanne Q. Hancock, and Homayoun Hatami write about the increasing ability of companies to segment their customers:

Consider the case of a chemicals company. Instead of looking at current sales by region, as it had always done [emphasis mine], the company examined market share within customer industry sectors in specific U.S. counties. The micromarket analysis revealed that although the company had 20% of the overall market, it had up to 60% in some markets but as little as 10% in others, including some of the fastest-growing segments. On the basis of this analysis, the company redeployed its sales force to exploit the growth.

For instance, one sales rep had been spending more than half her time 200 miles from her home office, even though only a quarter of her region’s opportunity lay there. This was purely because sales territories had been assigned according to historical performance rather than growth prospects. Now she spends 75% of her time in an area where 75% of the opportunity exists — within 50 miles of her office. Changes like these increased the firm’s growth rate of new accounts from 15% to 25% in just one year.
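At its simplest, the micromarket analysis the article describes means computing share per (county, sector) cell instead of one blended figure. A toy illustration, with invented figures and county names chosen purely for the example:

```python
# Hypothetical company sales and total addressable market,
# keyed by (county, industry sector), in $ millions.
company_sales = {
    ("Harris, TX", "petrochem"): 60.0,
    ("Cook, IL", "coatings"): 10.0,
    ("Maricopa, AZ", "adhesives"): 25.0,
}
market_size = {
    ("Harris, TX", "petrochem"): 100.0,
    ("Cook, IL", "coatings"): 100.0,
    ("Maricopa, AZ", "adhesives"): 100.0,
}

def micromarket_share(sales, market):
    """Market share per micromarket rather than one blended number."""
    return {key: sales[key] / market[key] for key in sales}

shares = micromarket_share(company_sales, market_size)
blended = sum(company_sales.values()) / sum(market_size.values())
print(shares, blended)
```

The blended figure here is about 32%, which hides the fact that individual micromarkets range from 10% to 60% — the very spread that told the chemicals company where to redeploy its sales force.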

Recently, Jim Harris and I were talking about this subject. The promise of business intelligence has been with us for quite some time, although many organizations have for one reason or another failed to (fully) capitalize upon it. Reciting the reasons here isn’t the best use of my space, but I’d argue that in many organizations BI applications failed because people didn’t want to make ostensibly counterintuitive decisions. For instance, “the data tells me to deploy more salespeople in an area but it just doesn’t feel right.” We often ignore the data, even when the case is clear.

To quote from the HBR article, for “a micromarket strategy to work, however, management must have the courage and imagination to act on the insights revealed by this type of analysis.”

Simon Says

The benefits of the long tail, BI, and many emerging technologies have to be tempered against a slew of data, organizational, and human factors. Technology can only do so much. At a minimum, organizations with cultures that reward (or, at least, fail to punish) people who consistently ignore data fail to capitalize on lucrative opportunities. At worst, they set themselves up for massive failure and potential extinction.


What say you?

Category: Business Intelligence
No Comments »

by: Bsomich
04  Aug  2012

Profile Spotlight: Christine Connors

Christine Connors

Christine Connors is currently the Senior Consultant at Knowledgent Group. Ms. Connors has extensive experience in taxonomy, ontology and metadata design and development.

Prior to forming TrivimRLG, Ms. Connors was the global director, semantic technology solutions for Dow Jones, responsible for partnering with business champions across Dow Jones to improve digital asset management and delivery. In that position, she managed a worldwide team responsible for the development of taxonomies, ontologies and metadata used to add value to Dow Jones news and financial information products. Ms. Connors also served as business champion for the Synaptica® software application, managing a US-based team of software developers, and supported Dow Jones consulting practices worldwide, which deliver end-to-end information access solutions based on taxonomies, metadata and semantic technologies.

Prior to joining Dow Jones, Ms. Connors was a knowledge architect at Intuit, where she was responsible for introducing semantic technologies to online content management and search. Before that, she was a Metadata Architect at Raytheon Company and Cybrarian at CEOExpress Company. At Raytheon she oversaw knowledge representation and enterprise search, delivering large-scale taxonomies, metadata schema and rules-based classification to improve retrieval of internal information via a multi-vendor retrieval platform.

Ms. Connors is a certified Six Sigma Specialist, and is a member of both the American Society for Information Science and Technology (ASIS&T) and the Special Libraries Association. Christine is Organizer of the Philadelphia and Princeton Semantic Web Meetups, and Assistant Organizer of the New York Semantic Web Meetup.

Connect with Christine.

Category: Member Profiles
No Comments »

by: Phil Simon
01  Aug  2012

Rapid Deployment Data Challenges

I was recently watching Bloomberg West, my favorite tech show. Emily Chang was interviewing Bill McDermott, co-chief executive officer of SAP AG. McDermott spoke about the company’s stellar second-quarter results and growth strategy.

You can watch the interview below or click here:

During the interview, McDermott commented on SAP clients’ widespread adoption of preconfigured apps–i.e., rapid deployment (RD). I want to touch in this post upon some of the data management issues involved in these types of projects.

By way of background, at a high level RD projects involve a vendor or system integrator effectively plunking down a preconfigured application like BI or CRM. The deployment takes a fraction of the time typically involved in these often laborious projects but, importantly, clients lose the ability to customize these applications. Also note that SAP is hardly the only vendor to conceive of this concept. I’ve heard about RD for the better part of a decade from more than a few firms.

Benefits Must be Balanced with Costs

Many CIOs chomp at the bit at the very thought of being able to “bang out” new applications and functionality. This is especially true at cash-strapped organizations. To many senior executives, the tradeoffs of RD projects are more than justified.

I’m not here to argue that point, but understand a few things about RD deployments. First, RD hardly gets around the data quality issues facing legacy systems or Waterfall and Agile projects. Specifically, GIGO still applies. Don’t make the mistake of assuming that a live system or application contains accurate or complete data just because it is live.

Second, many organizations’ data is in such disrepair that a good chunk of it can’t be loaded. Period. ABCDE is not a valid zip code. $0 is not a real salary unless you’re a CEO getting millions in stock options. You get my drift. Data unable to be loaded because it conflicts with application rules will be rejected.
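Those application rules are, at bottom, just validation checks, and you can run them against your data before the load ever starts. A minimal sketch — the specific rules and field names here are illustrative, not SAP's:

```python
import re

def validate(record):
    """Return a list of rule violations; an empty list means loadable."""
    errors = []
    # US zip codes: five digits, optionally followed by -NNNN.
    if not re.fullmatch(r"\d{5}(-\d{4})?", record.get("zip", "")):
        errors.append("invalid zip code")
    # A salary of $0 (or less) is almost certainly a data error.
    if record.get("salary", 0) <= 0:
        errors.append("salary must be positive")
    return errors

good = {"zip": "19103", "salary": 55000}
bad = {"zip": "ABCDE", "salary": 0}

print(validate(good))  # clean record
print(validate(bad))   # two rule violations
```

Profiling legacy data through checks like these before an RD project starts tells you how big the cleanup job really is, rather than discovering it via rejected records mid-load.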

Third, RD projects often involve benchmarks and industry KPIs. That is, a retail organization can compare its employee turnover or sales-per-square-foot to industry averages. That’s all fine and dandy, but remember that RD eschews customizations. The way that organization XYZ calculates turnover or same-store sales may differ slightly or significantly from that of other companies, effectively rendering comparisons moot. In turn, this can quash the user adoption of the very tool that XYZ crammed in.

Simon Says: Take Vendor Promises with a Grain of Salt

I’m not against RD as a concept. Organizations that manage their data well will, all else being equal, get more out of applications than organizations lacking such discipline. Just remember that there’s no magic wand, no secret sauce.

Perhaps embarking on a data cleansing project before commencing an RD project is the way to go. Better yet, see if you can try a cloud-based version of the application (even on a limited basis) to fully appreciate life with the new application before writing a big check.


What say you?

Category: Business Intelligence, Data Quality, Information Development
No Comments »
