Open Framework, Information Management Strategy & Collaborative Governance | Data & Social Methodology - MIKE2.0 Methodology

Archive for November, 2012

by: Robert.hillard
25  Nov  2012

A prediction for the mobile era

Since the dawn of the personal computer, the hardware industry has relied on an obsolescence cycle of just a few years. Working with their partners in the software community, manufacturers created a symbiotic relationship in which each operating system and application upgrade strained the performance of existing hardware, forcing consumers and businesses to regularly upgrade their machines.

In the last decade, though, this endless cycle of upgrades has been challenged as “Moore’s Law” has finally given us computers that outstrip the upgrades of even the most aggressive of software providers.  Fortunately for the manufacturers, the battle has moved on to tablets and smartphones.

With the release of yet another wave of devices for Christmas, headlined of course by the iPhone 5, it's time to ask whether we are repeating the upgrade arms race of the PC era or whether this time is different.

The mobile era is not the same as the PC era

The first generations of mobile devices each added desperately needed features that required major changes to the machines themselves, such as better cameras, better screens and more processing power. More recently, however, unlike in the early years of the PC, software developers aren't helping the manufacturers' cause. For app developers, the main game seems to be to move functionality into the cloud and to get closer to their users online rather than relying on the hardware or operating system provider as an intermediary. Good examples include applications such as Dropbox and Spotify.

At the same time, mobile technology has inherited the Moore’s Law driven hardware of the PC era giving it a head start and the ability to rapidly out-perform the requirements of app developers.  Many users are sitting on first generation tablets and third generation phones with no need to upgrade even to run the latest software releases.  “Cool” is now more often in the features, particularly delivered through the cloud, than in the device.

Winning with cloud

Cloud has created a revolution in the capabilities that users can access and has potentially broken the endless need to upgrade devices every couple of years. However, people have been left confused. How many people actually understand how iCloud works to back up their iPhone or iPad? People are often surprised and worried when they find out how much information they are sharing about their location or online activities through cloud services.

In the device wars playing out in the tablet and smartphone space, the winners won't necessarily be those with the best hardware. Rather, they will be those able to explain the complexity of cloud services in a way that gives consumers both confidence and the ability to fully exploit features that today they are only scratching the surface of.

Imagine a world where you can easily share music across all the devices in your house, seamlessly watch the first half of a movie on your TV and finish it on the bus, and just know that all of your work documents are securely backed up. All of this is possible today, but few people actually know how to do it.

A prediction for 2013

I’ll make one prediction for the next year. There will be a major security breach in a mainstream service that will make consumers and corporates alike nervous about the proliferation of cloud services.

It’s into this breach that a trusted brand that is ready to make mobile devices, apps and cloud simple and safe can step.  If they do, then we could see them dominate this space like the “Wintel” alliance did for two decades.

Category: Web2.0
1 Comment »

by: Phil Simon
25  Nov  2012

On Palm Trees, Data Quality, and Big Data

I recently had some palm trees put into the backyard of my Nevada home. It was a downright cool experience that required industrial cranes to lift the two-ton trees over my house.

There was no damage done to my home or the driveway and nobody was injured. Everything went well. Well, almost everything. It turns out that the truck carrying these trees was delayed.

Did the drivers have a hard time loading these monstrosities? No. Was the enormous truck able to snake its way into my community? Yes. So, what was the problem?

Good old human error. When I bought the trees, my friend Jeff accompanied me. Jeff knows a thing or two about landscaping and I’m anything but a palm tree expert. I paid for the trees and gave the woman at the counter my proper address. I assumed that all was good to go.

Fast forward to tree delivery day. After a few hurried calls and general wonderment about where these things were, we identified the culprit. The saleswoman wrote down Jeff’s address on the deliver-to line, not mine. She put my address in the ‘notes’ section. For their part, the delivery guys didn’t read the notes and wound up driving 60 miles out of the way.


I’ve seen many parallels between palm trees and enterprise data in my career. I’ve had users question the accuracy of my reports. I might hear things like “There’s no way that we had that many promotions last month! Your report is wrong!”

While I’m not perfect, I would often tell the skeptical user that we should check the data in the source system. More often than not, my report was accurate but the data pulled into that report was not. Thanks to audit tables and metadata, I could typically pinpoint the time, date, and creator of the errant record.

I would then work backwards. That is, after we knew that a user made this mistake, I would ask the natural next questions:

  • What other mistakes did this user make?
  • What else do we have to clean up?
  • Is there a larger departmental or organizational training issue?
  • Couldn’t we write a business rule or audit report to prevent the recurrence of this problem?
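That last question in particular lends itself to automation. Here's a minimal sketch of such an audit report, assuming (purely for illustration) that the records live in a pandas DataFrame with invented column names like `created_by` for the audit metadata; the business rule and data are hypothetical:

```python
import pandas as pd

# Hypothetical promotion records, with audit metadata on who created each row.
records = pd.DataFrame({
    "employee_id": [101, 102, 103, 104],
    "action": ["PROMOTION", "PROMOTION", "TRANSFER", "PROMOTION"],
    "effective_date": pd.to_datetime(
        ["2012-10-01", "2012-10-15", "2012-10-20", "2011-01-01"]),
    "created_by": ["jsmith", "jsmith", "mlee", "jsmith"],
})

# Business rule: an October 2012 promotion report should contain only
# promotions effective in October 2012.
in_period = records["effective_date"].between("2012-10-01", "2012-10-31")
is_promotion = records["action"] == "PROMOTION"
suspect = records[is_promotion & ~in_period]

# Work backwards: who created the errant records, and how many each?
per_user = suspect.groupby("created_by").size()
print(suspect)
print(per_user)
```

Run on a schedule, a report like this surfaces the errant records (and their creators) before a skeptical user does.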

Simon Says

Everyone makes mistakes, and I’m certainly no exception. The larger point here is that data matters, especially the accurate kind. One of my favorite expressions is PICNIC: problem in chair, not in computer. We can do simply amazing things with Big Data, but I’ll always insist that Small Data and data quality are just as important.


What say you?

Category: Data Quality, Enterprise Data Management, Information Management

by: Bsomich
22  Nov  2012

Guiding Principles for the Open Semantic Enterprise

We’ve just released the seventh episode of our Open MIKE Podcast series!

Episode 07: “Guiding Principles for the Open Semantic Enterprise” features key aspects of the following MIKE2.0 solution offerings:

  • Semantic Enterprise Guiding Principles
  • Semantic Enterprise Composite Offering
  • Semantic Enterprise Wiki Category

Check it out:

Open MIKE Podcast – Episode 07 from Jim Harris on Vimeo.


Want to get involved? Step up to the “MIKE”

We kindly invite any existing MIKE contributors to contact us if they’d like to contribute any audio or video segments for future episodes.

On Twitter? Contribute and follow the discussion via the #MIKEPodcast hashtag.

Category: Open Source, Semantic Web
1 Comment »

by: Phil Simon
18  Nov  2012

Curiosity, Big Data, and Data Science

If you haven’t heard of data science, you will soon. As organizations realize that Big Data isn’t going away, they will finally come around. This is always the case with the technology adoption lifecycle. Yes, this may very well mean new hardware purchases and upgrades, as well as new software solutions like Hadoop and NoSQL. At some point, however, employees with new skills will have to make all of this new stuff sing and dance.

Enter the Data Scientist

Part statistician, part coder, part data modeler, and part business person, the data scientist has grown in importance since the term was introduced in 2008. But I’d argue that the single most important attribute of the data scientist is a childlike curiosity about why things happen, or don’t.

In their 2012 HBR piece “Data Scientist: The Sexiest Job of the 21st Century”, Thomas H. Davenport and D.J. Patil write about the key characteristics of these folks. From the piece:

But we would say the dominant trait among data scientists is an intense curiosity—a desire to go beneath the surface of a problem, find the questions at its heart, and distill them into a very clear set of hypotheses that can be tested. This often entails the associative thinking that characterizes the most creative scientists in any field. For example, we know of a data scientist studying a fraud problem who realized that it was analogous to a type of DNA sequencing problem. By bringing together those disparate worlds, he and his team were able to craft a solution that dramatically reduced fraud losses.

Think about what a real scientist does for a moment. Whether trying to develop a drug or cure a disease, scientists look at data, form hypotheses, test them, more often than not fail, reevaluate, and refine. Many problems remain unsolved even after years of analysis. Louis Pasteur didn’t create vaccines for rabies and anthrax over a weekend. Science is not a linear process.
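That hypothesize-test-refine loop can be sketched in a few lines of code. This is an illustrative skeleton only, with invented hypotheses and an arbitrary pass threshold, not anyone's actual workflow:

```python
# Illustrative skeleton of the hypothesize-test-refine loop.
# Data, hypotheses, and the 0.66 threshold are all invented for the example.
sales = [120, 95, 130, 80, 70, 65]  # monthly sales, trending downward

def holds(predicate, data):
    """A hypothesis 'survives' if it holds for a clear majority of observations."""
    hits = sum(1 for x in data if predicate(x))
    return hits / len(data) >= 0.66

hypotheses = {
    "sales are always above 100": lambda x: x > 100,
    "sales are above 60": lambda x: x > 60,
}

# Test every candidate; discard the failures, keep survivors for refinement.
surviving = {name for name, pred in hypotheses.items() if holds(pred, sales)}
print(surviving)
```

The point of the sketch is the shape of the loop: most candidates fail, and the survivors get refined and retested rather than treated as final answers.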

Simon Says

I’d argue that the same thing applies to data science. Detecting patterns in petabyte-scale datasets is much different from writing a simple SELECT statement or doing ETL. Big Data means plenty of iterations and failures. Understanding why sales are slipping, or customer behavior in general, is not a simple endeavor, nor a static one. That is, the factors motivating people to buy products and services will probably change over time. Paying a data scientist a king’s ransom may come with expectations of immediate, profound insights. That may well be the case, but it’s also entirely plausible that progress will take a great deal of time, especially at the beginning.

As Jill Dyche points out on HBR, there’s rarely a eureka moment. If your organizational culture does not permit failure and insists upon immediate results, maybe hiring a data scientist isn’t wise. Your organization should save some money and revisit Big Data and data science in five years. That is, if your organization is still around.


What say you?

Category: Information Strategy, Information Value
No Comments »

by: Bsomich
17  Nov  2012

5 Years and Counting! Celebrating Milestones at MIKE2.0

At MIKE2.0, we believe that information is one of the most crucial assets of a business. A successful approach for managing information can help foster meaningful, efficient, and cost-effective business and technology processes. Five years ago, we made the MIKE2.0 Methodology available to the Open Source community in an effort to expand our initiative to improve the way we manage information. Since that time, we’ve achieved some significant milestones in the IM space.

Here are just a few of our noteworthy milestones:

  • A leadership team for MIKE2.0 was formed in 2009, consisting of the founding members of the MIKE2.0 Governance Association
  • We debuted the “Open MIKE” Podcast series to highlight key features of the MIKE2.0 wiki community.
  • Our new book, Information Development, is currently being published and will soon be available to the public.
  • We now have 869 articles and 8,677 people signed up to share their knowledge and experience with the community.
  • The MIKE2.0 Methodology has been utilized in many academic and enterprise projects across the globe.
  • The open methodology concept has inspired some other projects, such as open-sustainability and openCCS.
  • The MIKE2.0 community has had nearly a million visitors since its inception.
  • Today, more than 19,000 unique visitors come to the site every month, making it one of the leading open source IM communities on the planet, if not the largest.

Read more about our history and what we’re trying to accomplish.

It’s been a fun and productive five years for our team at MIKE, and we look forward to more progress on our path to improving Information Development. We hope that you will choose to become a Contributor in this project and help make MIKE2.0 a truly collaborative and successful initiative.

Category: Information Development
No Comments »

by: Phil Simon
11  Nov  2012

Seven Deadly Sins of Information Management, Part 7: Gluttony

“Errors using inadequate data are much less than those using no data at all.”
Charles Babbage

In my last post, I discussed wrath from an IM perspective. Today, I conclude this series by looking at the data management implications of gluttony:

Gluttony, derived from the Latin gluttire, meaning to gulp down or swallow, means over-indulgence and over-consumption of food, drink, or wealth items to the point of extravagance or waste. In some Christian denominations, it is considered one of the seven deadly sins—a misplaced desire of food or its withholding from the needy.

Now, years ago, one could pretty persuasively make the argument that you could have too much data. Data storage costs used to be much higher than they are today. What’s more, without getting too techie, structured or transactional data could not be compressed nearly as much as its unstructured equivalent (this is still true today). And, against that backdrop, many organizations made horrible decisions based upon having too much data.

Count among them an ex-employer of mine. When the company implemented its ERP system, it made the truly awful decision to bifurcate its data between two different environments. Rather than splurge on decent hardware that would support keeping key enterprise data in single tables, management spent ungodly sums on other “nice-to-have” systems. The net result for less technical end users: if they wanted to report on five years’ worth of financial transactions, payroll history, or sales, they’d have to run the same report twice and manually merge the results.

Few did and, as a result, many decisions were based upon hunches or incomplete or inaccurate data.
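To make the pain concrete, here is a sketch of the manual merge those end users faced. The two environments are simulated as two DataFrames; table layout, department, and figures are all hypothetical:

```python
import pandas as pd

# Hypothetical output of the same financial report run against the two
# bifurcated environments: recent years in one, archived years in the other.
current = pd.DataFrame({"year": [2011, 2012], "dept": ["Sales", "Sales"],
                        "total": [1_200_000, 1_350_000]})
archive = pd.DataFrame({"year": [2008, 2009, 2010], "dept": ["Sales"] * 3,
                        "total": [900_000, 950_000, 1_100_000]})

# The step users had to do by hand: stack the two result sets and
# re-aggregate to get all five years in a single view.
five_years = (pd.concat([archive, current])
                .groupby("dept", as_index=False)["total"].sum())
print(five_years)
```

Trivial in a few lines here; tedious and error-prone when done by hand in spreadsheets, which is why so few users bothered.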

Gluttony and Big Data

Today, Big Data has arrived. The amount of unstructured data now exceeds that of its structured counterpart. To be sure, many old-school CIOs won’t get it. They worry about the inputs required to make Big Data hum (increased employee training, hardware upgrades, new software purchases, data transformation efforts) while the outputs are uncertain. Yes, open-source tools like Hadoop are free, but think free speech instead of free beer, as the old saying goes.

But there’s good news for those who worry that this data makes them look fat. Data storage costs have tumbled, data compression has advanced, and new tools like Hadoop, NoSQL, and columnar databases allow organizations to “pig out” on data. Big data warehouses used to contain terabytes of data; now some of the largest contain petabytes.

Simon Says

Bottom line: today we can be as gluttonous as we want with our data. Diet if you wish, but new tools and technologies allow organizations to finally harness the power of unstructured data–and lots of it.


What say you?

Category: Enterprise Data Management
No Comments »

by: Phil Simon
05  Nov  2012

Seven Deadly Sins of Information Management, Part 6: Envy

In my last post, I discussed the sin of pride and information management (IM) projects. Today, let’s talk about envy, defined as “a resentful emotion that occurs when a person lacks another’s (perceived) superior quality, achievement or possession and wishes that the other lacked it.”

I’ll start off by saying that, much like lust, envy isn’t inherently bad. Wanting to do as well as another employee, department, division, or organization can spur improvement, innovation, and better business results. Yes, I’m channeling my inner Gordon Gekko: Greed, for lack of a better word, is good.

With respect to IM, I’ve seen envy take place in two fundamental ways: intra-organizational and inter-organizational. Let’s talk about each.

Intra-Organizational Envy

This type of envy takes place when employees at the same company resent the success of their colleagues. Perhaps the marketing folks for product A just can’t do the same things with their information, technology, and systems that their counterparts representing product B can. Maybe division X launched a cloud-based CRM or wiki and this angers the employees in division Y.

At its core, intra-organizational envy stems from the inherently competitive and insecure nature of certain people. These envious folks have an axe to grind and typically have some anger issues going on.  Can someone say schadenfreude?

Inter-Organizational Envy

This type of envy takes place between employees at different companies. Let’s say that the CIO of hospital ABC sees what her counterpart at hospital XYZ has done. The latter has effectively deployed MDM, BI, or cloud-based technologies with apparent success. The ABC CIO wonders why her hospital is so ostensibly behind its competitor and neighbor.

I’ve seen situations like this over my career. In many instances, organization A will prematurely attempt to deploy more mature or Enterprise 2.0 technologies simply because other organizations have already done so, not because organization A itself is ready. During these types of ill-conceived deployments, massive corners are cut, particularly with respect to data quality and IT and data governance. The CIO of ABC will look at the outcome at XYZ (say, the deployment of a new BI tool) and want the same outcome, even though the two organizations’ challenges are unlikely to be the same in type and magnitude.

Simon Says

Envy is a tough nut to crack in large part because it’s part of our DNA. I certainly cannot dispense pithy advice to counteract thousands of years of human evolution. I will, however, say this: Recognize that envy exists and that it’s impossible to eradicate. Don’t be Pollyanna about it. Try to minimize envy within and across your organization. Deal with outwardly envious people sooner rather than later.


What say you?

Next up: gluttony.

Category: Business Intelligence, Information Development, Information Governance, Master Data Management
No Comments »

by: Bsomich
02  Nov  2012

Episode 06: Getting to Know NoSQL

We’ve just released the sixth episode of our Open MIKE Podcast series!

Episode 06: “Getting to Know NoSQL” features key aspects of the following MIKE2.0 solution offerings:

  • Big Data Solution Offering
  • Preparing for NoSQL
  • Hadoop and the Enterprise Debates
  • Big Data Definition
  • Big Sensor Data

Check it out:

Open MIKE Podcast – Episode 06 from Jim Harris on Vimeo.

Want to get involved? Step up to the “MIKE”

We kindly invite any existing MIKE contributors to contact us if they’d like to contribute any audio or video segments for future episodes.

On Twitter? Contribute and follow the discussion via the #MIKEPodcast hashtag.

Category: Information Development
1 Comment »
