Open Framework, Information Management Strategy & Collaborative Governance | Data & Social Methodology - MIKE2.0 Methodology

Archive for May, 2013

by: Ocdqblog
29  May  2013

Headaches, Data Analysis, and Negativity Bias

I have suffered from bad headaches most of my life, but over the last few years they seemed to be getting worse.  Discussing this with my doctor, he asked lots of questions: How often do you get headaches? Do they occur at the same time of day? How long do they last? Are they always severe or sometimes mild? How many doses of over-the-counter medication do you take per headache?

Since I have been a data management professional for over twenty years, I felt kind of stupid when I realized that what my doctor was asking for to aid his medical diagnosis was . . . data.

So for the next two months, in preparation for my follow-up appointment, like a good human sensor, I diligently collected data about my headaches.  For severity, I used a scale of 1 to 5, where 1 was so mild I didn’t need medication, and 5 was so severe I had to lie down for a while in a dark and quiet room.

As I collected the data, I felt certain I was building a solid case for how bad my headaches were.  I had no doubt that the data analysis would prove me right — but I couldn’t have been more wrong.

Although I remembered frequently having headaches, most of which I recalled being quite severe, the data begged to differ.  On average, I had 3 headaches per week.  Only 33% rated above a 3 on my severity scale.  Only 25% required multiple doses of over-the-counter medication.  And, despite this being my biggest previous complaint to my doctor, only 10% of my headaches lasted most of the day.
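Summarizing a log like this takes only a few lines of code; here is a minimal sketch in Python, with hypothetical records and field names (the numbers below are illustrative, not my actual data):

```python
# Hypothetical headache log over two months: (severity 1-5, doses, lasted_most_of_day)
log = [
    (2, 0, False), (4, 2, True), (1, 0, False), (3, 1, False),
    (2, 1, False), (5, 2, True), (2, 0, False), (3, 1, False),
]
weeks = 8  # roughly two months of observations

per_week = len(log) / weeks
pct_severe = 100 * sum(1 for sev, _, _ in log if sev > 3) / len(log)
pct_multi_dose = 100 * sum(1 for _, doses, _ in log if doses > 1) / len(log)
pct_all_day = 100 * sum(1 for _, _, all_day in log if all_day) / len(log)

print(f"{per_week:.1f}/week, {pct_severe:.0f}% above severity 3, "
      f"{pct_multi_dose:.0f}% multi-dose, {pct_all_day:.0f}% lasting most of the day")
```

The point is that even a crude, consistently collected log supports exactly the aggregates a doctor asks about.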

How could my memory of those two months disagree so much with the data?

In psychology, the term negativity bias is used to explain how bad evokes a stronger reaction than good in the human mind.  This psychological phenomenon causes us to pay more attention to, and give more weight to our memories of, negative rather than positive experiences.

Negativity bias made me remember the few times when I had a really bad headache, and forget the far more frequent times when I did not.  As it turned out, my doctor’s prescription of data analysis proved he didn’t need to prescribe me stronger headache medication.

Data analysis helps us evaluate business problems, but sometimes it can give us a headache when its results force us to confront a bias we have about our business problems.

Category: Information Development
1 Comment »

by: Robert.hillard
26  May  2013

Living as far from 1984 as Orwell

Over the last month I’ve been talking a lot about personally controlled records and the ownership of your own information.  For more background, see last month’s post and a discussion I took part in on ABC radio.

The strength of the response reinforces to me that this is an area that deserves greater focus.  On the one hand, we want business and government to provide us with better services and to effectively protect us from danger.  On the other, we don’t want our personal freedoms to be threatened.  The question for us to ask is whether we are at risk of giving up our personal freedom and privacy by giving away our personal information.

I couldn’t help but think about that most obvious of literary works: George Orwell’s 1984.  Like many teenagers of my generation, I read the book for the first time in 1984, right at the peak of the Cold War.  My overwhelming feeling was one of cultural arrogance: Orwell had gotten it wrong, and the story did not apply to my society, even though it probably was relevant for others.

In 2013 we are nearly as far from the year 1984 as George Orwell was when he wrote the book in 1948.  Arguably, as much has happened since 1984 as occurred between 1948 and 1984.  The book introduces many interesting ideas, including “telescreens”, “thoughtcrime” and “newspeak”.  While the forces that Orwell wrote about have not been the driver for these concepts to become reality, much of their essence may well have slipped into our society without us noticing.

The ubiquitous telescreen of the book was a frightening device that combined a television with a camera, allowing the authorities to watch what you were doing at all times.  While the technology has been around since Orwell’s day, it really wasn’t until the rise of the smartphone that constant monitoring became possible.

While we aren’t being monitored visually, we are increasingly giving away large amounts of personal information about our location.  Worse, it is starting to become a suspicious act to take ourselves off this form of tracking for a period of time.  To see how this is playing out in the courts, just look at criminal trials where the defendant is asked to justify why their phone was turned off at the time a crime took place.

In the 1984 that we all experienced, freedom of thought was entrenched through institutions such as a free press and free libraries supporting research without fear of surveillance.  By 2013, many of these institutions have either moved online entirely or are well on their way to doing so.  Far from providing the protection of a library system that ensured complete confidentiality of research topics, any government can see what interests most of its citizens choose to pursue through Wikipedia or any other research tool.

The Orwellian concept of thoughtcrime assumed that there was some sort of hint at unconventional thoughts that could be a risk to their society.  It is easy to see that today’s governments could come to the same conclusion using the tools of the internet to identify what they deem to be antisocial interests.

Finally, newspeak was the language that the shadowy rulers of 1984 were creating to dumb down the population and discourage thoughtcrime.  While it might be a stretch, it is staggering to see how the short form of modern messaging such as Twitter encourages a simplification of our language that is finding its way into the mainstream.

It is easy to write a post that claims conspiracies at every turn.  Far from arguing that there is a major government plan to undermine our freedom, I simply find it interesting that many of George Orwell’s fears are coming true.  The cause is not an oppressive government but rather an eagerness by the population as a whole to move services onto new platforms without demanding the same level of protection that the previous custodians had provided for a couple of centuries or more.

Category: Information Strategy, Web Content Management, Web2.0
5 Comments »

by: Bsomich
24  May  2013

Weekly IM Update.


Book Release Announcement: “Information Development Using MIKE2.0”

Have you heard? Our new book, “Information Development Using MIKE2.0” is now available for order.
MIKE2.0, Method for an Integrated Knowledge Environment, is an open source delivery framework for Enterprise Information Management. It provides a comprehensive methodology that can be applied across a number of different projects within the Information Management space. While initially focused around structured data, the goal of MIKE2.0 is to provide a comprehensive methodology for any type of Information Development.

The vision for Information Development and the MIKE2.0 Methodology have been available in a collaborative, online fashion since 2006, and are now made available in print publication to a wider audience, highlighting key wiki articles, blog posts, case studies and user applications of the methodology.

Authors for the book include Andreas Rindler, Sean McClowry, Robert Hillard, and Sven Mueller, with additional credit due to Deloitte, BearingPoint and over 7,000 members and key contributors of the MIKE2.0 community. The book has been published in paperback and on all major e-book platforms.

Get Involved: To get your copy of the book, visit our order page on Amazon.com. For more information on MIKE2.0 or how to get involved with our online community, please visit www.openmethodology.org.
Sincerely,

MIKE2.0 Community 

Contribute to MIKE:

Start a new article, help with articles under construction or look for other ways to contribute.

Update your personal profile to advertise yourself to the community and interact with other members.


Did You Know? All content on MIKE2.0 and any contributions you make are published under the Creative Commons license. This allows you free re-use of our content as long as you add a brief reference back to us.

 

This Week’s Blogs for Thought:

Why You Need Data Quality Standards
Jim Hornthal’s TED Book, A Haystack Full of Needles, exemplifies the weakness of using relative rankings instead of absolute standards with the humorous story of how Little Rock, Arkansas was initially rated as one of the world’s premier destinations for museums, romance, and fine dining by Triporati.com, a travel site intended to offer personalized destination recommendations powered by expert travel advice.

During alpha testing of their recommendation engine, Triporati discovered that the expert assigned to rate Little Rock had awarded the city’s best museums and restaurants with the highest score (5) because they were “the best that Little Rock has.”

No offense intended to Little Rock, Arkansas, but this is why you need data quality standards.

Read more.

Data Visualization: Making Big Data Dance

Fifteen years ago, the presentation of data typically fell under the purview of analysts and IT professionals. Quarterly or annual meetings entailed rolling data up into now quaint diagrams, graphs, and charts.

My, how times have changed. Today, data is everywhere. We have entered the era of Big Data and, as I write in Too Big to Ignore, many things are changing.
Read more.

New Article: The Simple Knowledge Organization System
The Simple Knowledge Organization System (SKOS), is a common data model for sharing and linking knowledge organization systems via the Web.
Many knowledge organization systems, such as thesauri, taxonomies, classification schemes and subject heading systems, share a similar structure, and are used in similar applications. SKOS captures much of this similarity and makes it explicit, to enable data and technology sharing across diverse applications.
Read more.
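The broader/narrower structure that SKOS captures can be sketched in plain Python (real SKOS is expressed in RDF; the toy vocabulary below is purely illustrative):

```python
# Minimal illustration of SKOS-style concepts: each concept has a preferred
# label and "broader" links, as in a thesaurus or classification scheme.
concepts = {
    "animals": {"prefLabel": "Animals", "broader": []},
    "mammals": {"prefLabel": "Mammals", "broader": ["animals"]},
    "cats":    {"prefLabel": "Cats",    "broader": ["mammals"]},
}

def ancestors(concept_id):
    """Walk the broader links up to the top of the scheme."""
    out = []
    stack = list(concepts[concept_id]["broader"])
    while stack:
        parent = stack.pop()
        out.append(parent)
        stack.extend(concepts[parent]["broader"])
    return out

print(ancestors("cats"))  # ['mammals', 'animals']
```

Because the structure is explicit rather than buried in application code, two systems that agree on this shape can exchange their vocabularies directly, which is the point of SKOS.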

 

Category: Information Development
No Comments »

by: Ocdqblog
22  May  2013

Why You Need Data Quality Standards

Jim Hornthal’s TED Book, A Haystack Full of Needles, exemplifies the weakness of using relative rankings instead of absolute standards with the humorous story of how Little Rock, Arkansas was initially rated as one of the world’s premier destinations for museums, romance, and fine dining by Triporati.com, a travel site intended to offer personalized destination recommendations powered by expert travel advice.

During alpha testing of their recommendation engine, Triporati discovered that the expert assigned to rate Little Rock had awarded the city’s best museums and restaurants with the highest score (5) because they were “the best that Little Rock has.”

No offense intended to Little Rock, Arkansas, but this is why you need data quality standards.

As Hornthal explained, “relative ranking alone is deadly.”  Triporati’s experts had to be re-trained to understand that the goal wasn’t to determine “What is the best that a city has to offer?” but rather, “How do these attributes compare to the best?”

To put their ratings into context for the experts, Triporati explained the rating system used by the recommendation engine for attractions (e.g., museums, restaurants) should represent the following:

  • 5 — People would come from all over the world to visit the attraction
  • 4 — People would come from all over the country to visit the attraction
  • 3 — People would come from all across a state or region to visit the attraction
  • 2 — People could be enticed to cross a city to visit the attraction
  • 1 — People might be enticed to cross the street to visit the attraction
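Encoded as data, the absolute standard above might look like this minimal Python sketch (the names are illustrative, not Triporati’s actual implementation):

```python
# Absolute rating standard from the Triporati example: the score encodes how
# far people would travel for the attraction, not a city-relative ranking.
SCALE = {
    5: "from all over the world",
    4: "from all over the country",
    3: "from across a state or region",
    2: "across the city",
    1: "across the street",
}

def validate_rating(score):
    """Reject anything outside the defined absolute standard."""
    if score not in SCALE:
        raise ValueError(f"rating {score!r} is not on the 1-5 standard")
    return SCALE[score]

print(validate_rating(5))  # from all over the world
```

The key design choice is that every score maps to a fixed, externally meaningful definition, so two experts rating different cities are measuring against the same yardstick.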

In the business scenario of the Triporati recommendation engine, improving data quality required the establishment of an absolute standard.  Most data quality professionals would say that this is an example of defining data quality as real-world alignment.

An alternative definition of data quality is fitness for the purpose of use, which is more akin to a relative ranking.  A common challenge is that data often has multiple business uses, each with its own fitness requirements, which is why applying data investigation to different business problems reveals different data quality tolerances that an absolute standard cannot reflect.
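That “one dataset, many tolerances” point can be sketched as per-use validation profiles; the business uses and thresholds below are hypothetical:

```python
# The same data element judged against different fitness-for-use profiles:
# each business use tolerates a different level of completeness.
profiles = {
    "billing":   0.99,  # near-perfect completeness required
    "marketing": 0.80,  # rough segmentation tolerates gaps
}

def fit_for_use(completeness, use):
    """A data set passes or fails depending on which use is asking."""
    return completeness >= profiles[use]

completeness = 0.92
for use in profiles:
    print(use, "OK" if fit_for_use(completeness, use) else "FAILS")
```

The same 92%-complete data set fails the billing profile while passing the marketing one, which is exactly why a single absolute standard cannot reflect every fitness requirement.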

Furthermore, since some fine folks in the Southern United States might argue that Little Rock, Arkansas being a premier destination for museums and fine dining is more aligned with their real world, and since fine dining that is fit for the purpose of my use would be any restaurant that serves bacon double cheeseburgers and local micro-brewed beers, there is simply no accounting for taste.

But there needs to be an accounting for data quality, which is why you need data quality standards.

Category: Data Quality, Information Development
12 Comments »

by: Phil Simon
20  May  2013

Data Visualization: Making Big Data Dance

Fifteen years ago, the presentation of data typically fell under the purview of analysts and IT professionals. Quarterly or annual meetings entailed rolling data up into now quaint diagrams, graphs, and charts.

My, how times have changed. Today, data is everywhere. We have entered the era of Big Data and, as I write in Too Big to Ignore, many things are changing.

Big Data: Enterprise Shifts

In the workplace, let’s focus on two major shifts. First, today it’s becoming incumbent upon just about every member of a team, group, department, and organization to effectively present data in a compelling manner. Hidden in the petabytes of structured and unstructured data are key consumer, employee, and organizational insights that, if unleashed, would invariably move the needle.

Second, data no longer needs to be presented on an occasional or periodic basis. Many employees now routinely look at data of all types, a trend that will only intensify in the coming years.

The proliferation of effective data visualization tools like Ease.ly and Tableau provides tremendous opportunity. (The latter just went public with the übercool stock symbol $DATA.) Sadly, though, not enough employees—and, by extension, organizations—maximize the massive opportunity presented by data visualization. Of course, notable exceptions exist, but far too many professionals ignore DV tools. The result: they fail to present data in visually compelling ways. Far too many of us rely upon old standbys: bar charts, simple graphs, and the ubiquitous Excel spreadsheet. One of the biggest challenges to date with Big Data is getting more people to actually use the data–and the tools that make that data dance.

This raises the question: why the lack of adoption? I’d posit that two factors are at play here:

  • Many end users simply don’t know that such tools exist.
  • Many who do know of these tools are unwilling to use them.

Simon Says: Make the Data Dance

Big Data in and of itself guarantees nothing. Presenting findings to senior management should involve more than poring over thousands of records. Yes, the ability to drill down is essential. But leading with a compelling visual is a strong start in gaining their attention.

Big Data is impossible to leverage with traditional tools (read: relational databases, SQL statements, Excel spreadsheets, and the like).  Fortunately, increasingly powerful tools allow us to interpret and act upon previously unimaginable amounts of data. But we have to decide to use them.

Feedback

What say you?

Category: Business Intelligence, Information Strategy
No Comments »

by: Bsomich
17  May  2013

Weekly IM Update.


Big Data Solution Offering

Have you seen the latest offering in our Composite Solutions suite?

The Big Data Solution Offering provides an approach for storing, managing and accessing data of very high volumes, variety or complexity.
Storing large volumes of data from a large variety of data sources in traditional relational data stores is cost-prohibitive, and regular data modeling approaches and statistical tools cannot handle data structures with such high complexity. This solution offering discusses new types of data management systems based on NoSQL database management systems and MapReduce as the typical programming model and access method.
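As a reminder of the programming model the offering refers to, here is a toy in-memory MapReduce word count in Python (real frameworks distribute the map, shuffle, and reduce steps across machines; this only shows the shape of the computation):

```python
from itertools import groupby

# Toy MapReduce word count: map emits (word, 1) pairs, the shuffle sorts and
# groups them by key, and reduce sums each group.
docs = ["big data big volumes", "big variety"]

mapped = [(word, 1) for doc in docs for word in doc.split()]            # map
shuffled = groupby(sorted(mapped), key=lambda kv: kv[0])                # shuffle
reduced = {word: sum(c for _, c in group) for word, group in shuffled}  # reduce

print(reduced)  # {'big': 3, 'data': 1, 'variety': 1, 'volumes': 1}
```

Because each map call and each reduce group is independent, the work parallelizes naturally, which is what makes the model suitable for very high data volumes.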
Read our Executive Summary for an overview of this solution.
We hope you find this new offering of benefit and welcome any suggestions you may have to improve it.

Sincerely,
MIKE2.0 Community

Popular Content
Did you know that the following wiki articles are most popular on Google? Check them out, and feel free to edit or expand them!
  • What is MIKE2.0?
  • Deliverable Templates
  • The 5 Phases of MIKE2.0
  • Overall Task List
  • Business Assessment Blueprint
  • SAFE Architecture
  • Information Governance Solution
Contribute to MIKE:

Start a new article, help with articles under construction or look for other ways to contribute.

Update your personal profile to advertise yourself to the community and interact with other members.


This Week’s Blogs for Thought:

Data Dictatorship versus Data Democracy

It seems rather obvious to state that choosing between a dictatorship and a democracy is such an easy choice it doesn’t even need to be discussed.  However, when it comes to data, it seems like the choice is not so obvious, since for as long as I can remember the data management industry has been infatuated with the notion of instituting some form of a Data Dictatorship.

Read more.

Netflix: Understanding the Who, How, When and Why

I’ve written before on this site on how Netflix uses data in fascinating ways. The company’s knowledge of its customers–and their viewing habits–is beyond impressive. Moreover, it’s instructive for companies attempting to navigate the era of Big Data.

Consider this Wired article explaining how Netflix operates and utilizes its data.

Read more.

Three Common Mistakes in an Agile Transformation
Time and time again I see teams making the same mistakes when attempting their agile transformation. Since one of the most important ideas in agile methods is visibility, I wanted to make some of these visible in the hopes of creating change. This is by no means an exhaustive list, just three I see a lot.

Read more.

Category: Information Development
No Comments »

by: Ocdqblog
14  May  2013

Data Dictatorship versus Data Democracy

It seems rather obvious to state that choosing between a dictatorship and a democracy is such an easy choice it doesn’t even need to be discussed.  However, when it comes to data, it seems like the choice is not so obvious, since for as long as I can remember the data management industry has been infatuated with the notion of instituting some form of a Data Dictatorship.

Providing the organization with a single system of record, a single version of the truth, a single view, a golden copy, or a consolidated repository of trusted data has long been the rallying cry and siren song of data warehousing, and more recently, of master data management.

I admit a data dictatorship has its appeal, especially since most information development concepts are easier to manage and govern when you only have to deal with one official source of enterprise data.

Of course, the reality is most organizations are a Data Democracy, which means that other data sources, both internal and external, will be used.  During a recent Twitter chat, one discussion thread noted how the democratization of data has led to the consumerization of data, which has made the silo-ification of data easier than ever.  Cloud-based services (and other consumerization of IT trends) make rolling your own data silo simple and inexpensive, at least in terms of financial cost, but arguably expensive in terms of splintering the enterprise’s data asset.

Forging one big data silo in the cloud to rule them all might soon be pitched as a new form of data dictatorship.  For this to happen, users must surrender the freedoms consumerization brought them, but history has shown reverting back to dictatorship after democracy is difficult, if not impossible.

As Winston Churchill famously said, “no one pretends that democracy is perfect or all-wise.  Indeed, it has been said that democracy is the worst form of government except all those other forms that have been tried from time to time.”

No one pretends that data democracy is perfect or all-wise.  Indeed, most data professionals would say that data democracy is the worst form of data governance except all those other forms that have been tried from time to time.  But perhaps it’s simply time to stop pursuing any form of data dictatorship.

Category: Enterprise Data Management, Information Development, Information Governance, Master Data Management
11 Comments »

by: Phil Simon
09  May  2013

Netflix: Understanding the Who, How, When, and Why

I’ve written before on this site on how Netflix uses data in fascinating ways. The company’s knowledge of its customers–and their viewing habits–is beyond impressive. Moreover, it’s instructive for companies attempting to navigate the era of Big Data.

Consider this Wired article explaining how Netflix operates and utilizes its data:

But Netflix doesn’t only know what its audience likes to watch; it also knows how viewers like to watch it—beyond taste, the company understands habits. It doesn’t just know that we like to watch Breaking Bad; it knows that we like to watch four episodes of Breaking Bad in a row instead of going to sleep. It knows, in other words, that we like to binge.

Think about the astonishing level of knowledge that Netflix has developed on its customers. Not just the what, but the how, the when and (increasingly) the why. Talk about the Holy Grail. This should be the goal of every for-profit enterprise. Period.
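Insights like bingeing fall out of patterns in raw viewing events. Here is a hypothetical sketch of detecting a back-to-back session; the log, the gap threshold, and the function names are invented for illustration, not Netflix’s actual pipeline:

```python
from datetime import datetime, timedelta

# Hypothetical viewing log for one subscriber: (show, episode, start time).
# A "binge" here means 3+ episodes started within short gaps of one another.
views = [
    ("Breaking Bad", 1, datetime(2013, 5, 20, 21, 0)),
    ("Breaking Bad", 2, datetime(2013, 5, 20, 21, 50)),
    ("Breaking Bad", 3, datetime(2013, 5, 20, 22, 40)),
    ("Breaking Bad", 4, datetime(2013, 5, 20, 23, 30)),
]

def is_binge(log, max_gap=timedelta(minutes=60), min_episodes=3):
    """Scan sorted start times for a run of closely spaced episodes."""
    starts = sorted(t for _, _, t in log)
    run = 1
    for prev, cur in zip(starts, starts[1:]):
        run = run + 1 if cur - prev <= max_gap else 1
        if run >= min_episodes:
            return True
    return False

print(is_binge(views))  # True
```

Aggregate that flag across millions of subscribers and you have the “how” and “when” described above, not just the “what”.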

Minimizing Risk

Equipped with information like this, Netflix can make more informed business decisions. Case in point: Its decision to resurrect the very popular cult classic Arrested Development. While by no means an inexpensive or risk-free venture, Netflix management is reasonably confident that its bet will pay off for one simple reason: its data supports the decision.

Note that each move Netflix makes is hardly guaranteed. No business decision even resembles complete certainty. But, by virtue of its exceptional data management, Netflix has moved the needle considerably. Its odds of success are without question much higher because it has minimized risk.

Think about the way in which far too many organizations operate these days. Forget Big Data and effectively harnessing its power. I’ve personally seen many enterprises manage their data so poorly that a comprehensive and accurate list of customers could not be produced in a reasonable period of time. Without this information, questions like how, when, and why could not be answered. The downstream effect: Decisions were based upon conjecture, rank, culture, and policy. This type of scenario is hardly ideal.

Simon Says

Rome, as they say, was not built in a day–and neither was Netflix. Rather than fret over the state of your data, take the steps now to improve your ability to analyze data in a few years.

Feedback

What say you?

Category: Information Value
1 Comment »

by: Bsomich
07  May  2013

Weekly IM Update.


What is an Open Methodology Framework?

An Open Methodology Framework is a collaborative environment for building methods to solve complex issues impacting business, technology, and society.  The best methodologies provide repeatable approaches on how to do things well based on established techniques. MIKE2.0’s Open Methodology Framework goes beyond the standards, techniques and best practices common to most methodologies with three objectives:

  • To Encourage Collaborative User Engagement
  • To Provide a Framework for Innovation
  • To Balance Release Stability with Continuous Improvement

We believe that this approach provides a successful framework for accomplishing things in a better, more collaborative fashion. What’s more, it allows for concurrent focus on both method and detailed technology artifacts. The emphasis is on emerging areas in which current methods and technologies lack maturity.

The Open Methodology Framework will be extended over time to include other projects. Another example of an open methodology is open-sustainability, which applies many of these concepts to the area of sustainable development. Suggestions for other Open Methodology projects can be initiated on this article’s talk page.

We hope you find this of benefit and welcome any suggestions you may have to improve it.

Sincerely,
MIKE2.0 Community

Popular Content
Did you know that the following wiki articles are most popular on Google? Check them out, and feel free to edit or expand them!
  • What is MIKE2.0?
  • Deliverable Templates
  • The 5 Phases of MIKE2.0
  • Overall Task List
  • Business Assessment Blueprint
  • SAFE Architecture
  • Information Governance Solution

Contribute to MIKE:

Start a new article, help with articles under construction or look for other ways to contribute.

Update your personal profile to advertise yourself to the community and interact with other members.


This Week’s Blogs for Thought:
Bigger Questions, not Bigger Data
In her book Being Wrong: Adventures in the Margin of Error, Kathryn Schulz explained “the pivotal insight of the Scientific Revolution was that the advancement of knowledge depends on current theories collapsing in the face of new insights and discoveries. In this model of progress, errors do not lead us away from the truth. Instead, they edge us incrementally toward it.”
Read more.

Healthcare, Risk Aversion, and Big Data Case Studies

Think for a minute about how much we spend on healthcare. In the United States, the numbers break down as follows:

  • Roughly $3 trillion spent annually, a number rising at 6-7% per year
  • This represents about 17% of US Gross Domestic Product (GDP)
  • Some estimates put the number wasted annually on healthcare at a mind-boggling $2 trillion
  • There’s at least $60B in Medicare fraud each year (and some estimates put that number at $250B)

For more astonishing data on healthcare, click here. The stats are frightening.
Read more.

Information Careers in 2013: The Hottest Jobs and Highest Salaries
Unemployment numbers for IT last month were stagnant, according to figures released Friday by the U.S. Bureau of Labor Statistics. The outlook for demand and salaries in specific IT careers is much rosier in the years ahead.
Read more.

 

Category: Information Development
No Comments »

by: Phil Simon
01  May  2013

Healthcare, Risk Aversion, and Big Data Case Studies

Think for a minute about how much we spend on healthcare. In the United States, the numbers break down as follows:

  • Roughly $3 trillion spent annually, a number rising at 6-7% per year
  • This represents about 17% of US Gross Domestic Product (GDP)
  • Some estimates put the number wasted annually on healthcare at a mind-boggling $2 trillion
  • There’s at least $60B in Medicare fraud each year (and some estimates put that number at $250B)

For more astonishing data on healthcare, click here. The stats are frightening. With so much waste and opportunity, it should be no surprise that quite a few software vendors are focusing on Big Data–and not just behemoths like IBM. Start-ups like Explorys, Humedica, Apixio, and scores of others have entered the space.
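A quick back-of-envelope check ties these figures together (inputs rounded from the list above, so the outputs are approximate):

```python
# Rough consistency check on the quoted healthcare figures.
spend = 3.0e12    # ~$3 trillion annual US healthcare spend
gdp_share = 0.17  # ~17% of GDP
waste = 2.0e12    # estimated annual waste

implied_gdp = spend / gdp_share
waste_share = waste / spend

print(f"implied GDP ~${implied_gdp / 1e12:.1f}T, "
      f"waste ~{waste_share:.0%} of spend")
```

The implied GDP of roughly $17–18 trillion is in the right ballpark for the US, and the waste estimate works out to about two-thirds of total spend, which is why the opportunity attracts so many vendors.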

Where’s the Data?

With so much action surrounding Big Data and healthcare, you’d think that there would be a tremendous number of examples. You’d expect there to be more statistics on how Big Data has helped organizations save lives, reduce costs, and increase revenue.

And you’d be wrong.

I’ve worked in hospitals a great deal over my career, and the term risk aversion is entirely apropos. Forget for a minute the significant difficulty in isolating cause and effect. (It’s not easy to accurately claim that deploying Hadoop throughout the organization saved 187 lives in 2012.)

Say for a minute that you’re the CIO of a healthcare organization and you make such a claim. Think about the potential ramifications from lawsuit-happy attorneys. Imagine having to respond to inquiries from lawyers about why you waited so long to deploy software that would have saved so many lives. What were you waiting for? How much will you pay my clients to drop their suit?

This isn’t to say that you can’t find data on, well, Big Data and healthcare. You can. You just have to look really hard–and you’ll more than likely be less than satisfied with the results. For example, this Humedica case study shows increased diagnosis of patients with diabetes who had fallen through the cracks.

Simon Says

Large organizations are conservative by their nature. Toss in potential lawsuits and it’s easy to understand the paucity of results-oriented Big Data healthcare case studies. What’s more, we’re still in the early innings. Expect more data on Big Data in healthcare over the coming years.

Feedback

What say you?

Category: Information Development, Information Value
No Comments »
