Open Framework, Information Management Strategy & Collaborative Governance | Data & Social Methodology - MIKE2.0 Methodology

Archive for April, 2013

by: Ocdqblog
30  Apr  2013

Bigger Questions, not Bigger Data

In her book Being Wrong: Adventures in the Margin of Error, Kathryn Schulz explained “the pivotal insight of the Scientific Revolution was that the advancement of knowledge depends on current theories collapsing in the face of new insights and discoveries.  In this model of progress, errors do not lead us away from the truth. Instead, they edge us incrementally toward it.”

In his book Ignorance: How It Drives Science, Stuart Firestein explained “questions are more relevant than answers.  Questions are bigger than answers.  One good question can give rise to several layers of answers, inspire decades-long searches for solutions, generate whole new fields of inquiry, and prompt changes in entrenched thinking.  Answers, on the other hand, often end the process.”

Unfortunately, some people seem to misunderstand the goal of big data and data science to be the pursuit to provide the answers to all of our questions.  Some go so far as to claim that eventually we will know everything, that soon we will be able to foretell the future with absolute certainty.

These were a few of the misunderstandings addressed by Andrew McAfee in his recent Harvard Business Review blog post Pundits: Stop Sounding Ignorant About Data. “I’ve been talking and hanging out with a lot of data geeks over the past months and even though they’re highly ambitious people,” McAfee concluded, “they’re very circumspect when they talk about their work.  They know that the universe is a ridiculously messy and complex place and that all we can do is chip away at its mysteries with whatever tools are available, our brains always first and foremost among them.  The geeks are excited these days because in the current era of Big Data the tools just got a whole lot better.”

“The right question asked in the right way, rather than the accumulation of more data,” Firestein concluded, “allows a field to progress.  Scientists don’t just design an experiment based on what they don’t know.  The truly successful strategy is one that provides them even a glimpse of what’s on the other side of their ignorance and an opportunity to see if they can’t get the question to be bigger.  Ignorance works as the engine of science because it is virtually unbounded, and that makes science much more expansive.”

Science has always been about bigger questions, not bigger data.

Category: Information Development
8 Comments »

by: Robert.hillard
25  Apr  2013

You should own your own data

In virtually every country there is a debate around privacy, driven most recently by the rise of Big Data, social networks and technology more generally.  Without doubt, the major technology companies recognise the value of the information they are collecting from individuals and are working very hard to minimise the impact of privacy changes by putting as much control as possible in the hands of the individual.

This debate will only intensify as the social networks push to own the identity space as well.  Just look at the number of websites that encourage you to log in with Facebook, Google or similar services.  In doing so, you make yet more of your data available to these businesses, provided in exchange for the service of having your identity conveniently confirmed.

Citizens and consumers are becoming increasingly aware of the value of their personal information and hence are looking for more privacy options.  The default position of governments has been to revisit their privacy regulations.  This approach, though, is doomed from the start as no regulation can possibly keep up with this rapidly developing information economy.

In fact, it is likely to be economic forces that provide the solution to the privacy risks of Big Data.  Both government and businesses are beginning to realise that they can make individuals much more comfortable sharing their data if, rather than providing an almost infinite (and incomprehensible) set of privacy settings, they renounce any attempt to take ownership of the data and instead borrow or lease it with the permission of the true owner – you.

This approach is referred to as personally controlled records and is made possible by the databases that support the growth of Big Data.  No longer is it necessary to extract every piece of information and replicate it many times over in order to support complex analytics.  Rather it can be packaged as a neat record, kept in the control of the individual and purged upon expiry or the revocation of the lease that has been provided.
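As a minimal sketch of the idea (the class and method names are invented for illustration, not drawn from any real personally controlled records system), the lease-and-revoke model might look like this:

```python
from datetime import datetime, timedelta

class PersonallyControlledRecord:
    """A record that stays under its owner's control: data consumers hold
    a time-limited lease, and access ends when the lease expires or the
    owner revokes it."""

    def __init__(self, owner, data):
        self.owner = owner
        self._data = data
        self._leases = {}  # consumer -> lease expiry datetime

    def grant_lease(self, consumer, days):
        """The owner lends (never transfers) the data for a fixed term."""
        self._leases[consumer] = datetime.utcnow() + timedelta(days=days)

    def revoke_lease(self, consumer):
        """The owner may withdraw access at any time."""
        self._leases.pop(consumer, None)

    def read(self, consumer):
        """A consumer may only borrow the data under a current lease."""
        expiry = self._leases.get(consumer)
        if expiry is None or expiry < datetime.utcnow():
            self._leases.pop(consumer, None)  # purge any expired lease
            raise PermissionError("no current lease for this consumer")
        return self._data
```

The key design point is that consumers call `read` each time rather than copying the record into their own stores, so expiry or revocation takes effect immediately – exactly the property that makes individuals more comfortable sharing in the first place.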

Rather than reducing the value to business and government, this approach actually opens up a huge array of new possibilities and leaves the individual in control.  The evidence is growing that when people feel confident that they can withdraw their information at any time and are not at risk of unintended consequences they are much more willing to submit their data for a wide array of purposes.

The protection of the individual in a world of Big Data is not through better privacy, but rather through clarifying who actually owns the data and ensuring that their rights are maintained.

Hear more in a Sky News interview I gave recently on trends in technology.

Category: Information Value
5 Comments »

by: Bsomich
23  Apr  2013

Weekly IM Update.


MGA Book Release Announcement: “Information Development Using MIKE2.0”

Since 2006, MIKE2.0 has acted as a free resource for information professionals by bringing together best practices in methods, techniques and benchmarking. It is now being made available in print publication to a wider audience, highlighting key wiki articles, blog posts, case studies and user applications of the methodology. With many governments and businesses thinking about how to respond to the latest round of challenges including cyber risks and privacy, the MIKE2.0 book elevates the discussion from day-to-day headlines to a strategic response.

Get Involved

Our new book has been published in paperback (available on Amazon.com and Barnes & Noble) as well as all major e-book publishing platforms. For more information on MIKE2.0 or how to get involved with the online MIKE2.0 community, please contact us.

Sincerely,

MIKE2.0 Community 

Contribute to MIKE:

Start a new article, help with articles under construction or look for other ways to contribute.

Update your personal profile to advertise yourself to the community and interact with other members.

Useful Links: Home Page Login Content Model FAQs MIKE2.0 Governance


Did You Know? All content on MIKE2.0 and any contributions you make are published under the Creative Commons license. This allows you free re-use of our content as long as you add a brief reference back to us.

 

This Week’s Blogs for Thought:

In his 1938 collection of essays World Brain, H. G. Wells explained that “it is not the amount of knowledge that makes a brain. It is not even the distribution of knowledge. It is the interconnectedness.”

This brought to my brain the traditional notion of data warehousing as the increasing accumulation of data, distributing information across the organization, and providing the knowledge necessary for business intelligence.

Read more.

Big Data and Normative Control
In 1992, sociologists Stephen Barley and Gideon Kunda wrote a classic paper that broadly defines two management schools of thought. In short, organizations are governed by one of the following two means:

  • Normative control. Culture, norms, and other soft skills dominate data, systems, and technology. In organizations ruled by normative control, decisions are made by groups, symbols, organizational rituals, shared experiences, charismatic leadership, and vision statements.
  • Rational control. Data, systems, and technology dominate culture, norms, and other soft skills. In organizations ruled by rational control, decisions are made by data, mechanisms, rules, and algorithms.

To the extent that we are living in the era of Big Data, many organizations are once again shifting. Rational control will invariably make inroads against normative control.
Read more.

Is the cloud really safe?

With the rise of Apple’s iCloud and file-sharing solutions such as Dropbox and SharePoint, it’s no surprise that SaaS and cloud computing solutions are increasing in popularity. From reduced costs in hosting and infrastructure to increased mobility, backups and accessibility, the benefits are profound. Need proof? A recent study by IDC estimates that spending on public IT cloud services will expand at a compound annual growth rate of 27.6%, from $21.5 billion in 2010 to $72.9 billion in 2015.
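Those IDC figures are internally consistent, which is easy to verify: compound annual growth rate is just the per-year growth factor implied by the start and end values. A quick check (the formula is standard; the numbers are the ones quoted above, not independently sourced):

```python
# CAGR = (end / start) ** (1 / years) - 1
start, end, years = 21.5, 72.9, 5  # $bn, 2010 -> 2015

cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")  # comes out very close to the 27.6% IDC cites
```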

Read more.

Forward this message to a friend

Questions?
If you have any questions, please email us at mike2@openmethodology.org.

Category: Information Development
No Comments »

by: Ocdqblog
22  Apr  2013

The Internet of Humans

The Internet of Things became a more frequently heard phrase over the last decade as more things embedded with radio-frequency identification (RFID) tags, or similar technology, allowed objects to be uniquely identified, inventoried, and tracked by computers.  Early adopters focused on inventory control and supply chain management, but the growing fields of application include smart meters, smart appliances, and, of course, smart phones.

The concept is referred to as the Internet of Things to differentiate its machine-generated data from data generated directly by humans typing, taking pictures, recording videos, scanning bar codes, etc.

The Internet of Things is the source of the category of big data known as sensor data, which is often the new data type that pushes organizations defining big data to start getting to know NoSQL.

In his book Too Big to Know, David Weinberger discussed another growing category of data facilitated by the Internet, namely the crowd-sourced knowledge of amateurs providing scientists with data to aid in their research.  “Science has a long tradition of embracing amateurs,” Weinberger explained.  “After all, truth is truth, no matter who utters it.”

The era of big data could be called the era of big utterance, and the Internet is the ultimate platform for crowd-sourcing the knowledge of amateurs.  Weinberger provided several examples, including websites like GalaxyZoo.org, eBird.org, and PatientsLikeMe.com.  The last of these, as Weinberger explained, “not only enables patients to share details about their treatments and responses but gathers that data, anonymizes it, and provides it to researchers, including to pharmaceutical companies.  The patients are providing highly pertinent information based on an expertise in a disease in which they have special credentials they have earned against their will.”  The intriguing term Weinberger used to describe the source of this crowd-sourced amateur knowledge was human sensors.

In our increasingly data-constructed world, where more data might soon be constructed by things than by humans, I couldn’t help but wonder whether the phrase the Internet of Humans needs to be frequently heard in the coming decades to not only differentiate machine-generated data from human-generated data, but, more importantly, to remind us that humans (amateurs and professionals alike) are a vital source of knowledge that no amount of data from any source could ever replace.

Category: Information Development
4 Comments »

by: Ocdqblog
18  Apr  2013

The Enterprise Brain

In his 1938 collection of essays World Brain, H. G. Wells explained that “it is not the amount of knowledge that makes a brain.  It is not even the distribution of knowledge.  It is the interconnectedness.”

This brought to my brain the traditional notion of data warehousing as the increasing accumulation of data, distributing information across the organization, and providing the knowledge necessary for business intelligence.

But is an enterprise data warehouse the Enterprise Brain?  Wells suggested that interconnectedness is what makes a brain.  Despite Ralph Kimball’s definition of a data warehouse being the union of its data marts, more often than not a data warehouse is a confederacy of data silos whose only real interconnectedness is being co-located on the same database server.

Looking at how our human brains work in his book Where Good Ideas Come From, Steven Johnson explained that “neurons share information by passing chemicals across the synaptic gap that connects them, but they also communicate via a more indirect channel: they synchronize their firing rates, what neuroscientists call phase-locking.  There is a kind of beautiful synchrony to phase-locking—millions of neurons pulsing in perfect rhythm.”

The phase-locking of neurons pulsing in perfect rhythm is an apt metaphor for the business intelligence provided by the structured data in a well-implemented enterprise data warehouse.

“But the brain,” Johnson continued, “also seems to require the opposite: regular periods of electrical chaos, where neurons are completely out of sync with each other.  If you follow the various frequencies of brain-wave activity with an EEG, the effect is not unlike turning the dial on an AM radio: periods of structured, rhythmic patterns, interrupted by static and noise.  The brain’s systems are tuned for noise, but only in controlled bursts.”

Scanning the radio dial for signals amidst the noise is an apt metaphor for the chaos of unstructured data in external sources (e.g., social media).  Should we bring order to chaos by adding structure (or at least better metadata) to unstructured data?  Or should we just reject the chaos of unstructured data?

Johnson recounted research performed in 2007 by Robert Thatcher, a brain scientist at the University of South Florida.  Thatcher studied the vacillation between the phase-lock (i.e., orderly) and chaos modes in the brains of dozens of children.  On average, the chaos mode lasted for 55 milliseconds, but for some children it approached 60 milliseconds.  Thatcher then compared the brain-wave scans with the children’s IQ scores, and found that every extra millisecond spent in the chaos mode added as much as 20 IQ points, whereas longer spells in the orderly mode deducted IQ points, but not as dramatically.

“Thatcher’s study,” Johnson concluded, “suggests a counterintuitive notion: the more disorganized your brain is, the smarter you are.  It’s counterintuitive in part because we tend to attribute the growing intelligence of the technology world with increasingly precise electromechanical choreography.  Thatcher and other researchers believe that the electric noise of the chaos mode allows the brain to experiment with new links between neurons that would otherwise fail to connect in more orderly settings.  The phase-lock [orderly] mode is where the brain executes an established plan or habit.  The chaos mode is where the brain assimilates new information.”

Perhaps the Enterprise Brain also requires both orderly and chaos modes, structured and unstructured data, and the interconnectedness between them, forming a digital neural network with orderly structured data firing in tandem, while the chaotic unstructured data assimilates new information.

Perhaps true business intelligence is more disorganized than we have traditionally imagined, and perhaps adding a little disorganization to your Enterprise Brain could make your organization smarter.

Category: Business Intelligence
3 Comments »

by: Phil Simon
18  Apr  2013

Big Data and Normative Control?

In 1992, sociologists Stephen Barley and Gideon Kunda wrote a classic paper that broadly defines two management schools of thought. In short, organizations are governed by one of the following two means:

  • Normative control. Culture, norms, and other soft skills dominate data, systems, and technology. In organizations ruled by normative control, decisions are made by groups, symbols, organizational rituals, shared experiences, charismatic leadership, and vision statements. 
  • Rational control. Data, systems, and technology dominate culture, norms, and other soft skills. In organizations ruled by rational control, decisions are made by data, mechanisms, rules, and algorithms.

To the extent that we are living in the era of Big Data, many organizations are once again shifting. Rational control will invariably make inroads against normative control.

Think Continuum, Not Binary

It’s important to note that these types of control should be seen as part of a continuum; they are not binaries. For instance, a rationally controlled organization may well make certain decisions in a normative manner. In most HR departments, the vast majority of decisions ultimately come down to soft skills, not data and algorithms.

The “normative vs. rational” dichotomy is useful, but think of it as a guide. That is, I for one have never seen a completely consistent organization. Even at highly rational organizations like Google, not every decision comes down to data, technology, systems, and the like. Not everything can be quantified (let alone with complete accuracy and precision). There’s never anything close to complete certainty that a new product will be successful, that a new hire will ultimately pan out, and the like. If there were, Google’s Buzz, Wave, and other products would not have failed so spectacularly.

Simon Says: Know Your Pulse

Communities like MIKE2.0 have created methodologies that enable enterprises to implement different information management (IM) solutions. Blindly adopting any methodology, however, is ill advised. It’s analogous to inviting your friends to a party at your place: you cook, but you follow a recipe without knowing the dietary restrictions of your guests. What if people are allergic to dairy or peanuts? What if someone is going no-carb with Atkins?

To maximize the chance of long-term success of any IM endeavor, know the pulse of your organization. Is the default normative, rational, or some type of hybrid? Knowing the answer to that question will go a long way towards seeing the type of results that you’d like to see.

Feedback

What say you?

Category: Information Management
No Comments »

by: Bsomich
17  Apr  2013

MGA Book Release Announcement: “Information Development Using MIKE2.0”

Big Data, and information more generally, is big business and top of mind in this era of cyber security as the vision of an integrated world moves from science fiction to a day-to-day reality. There are no trivial answers to managing this burgeoning resource even though there are a myriad of point solutions. With this in mind, MIKE2.0 Governance Association (MGA), the governing body of the open source standard for information management, announced today the release of their latest publication, Information Development Using MIKE2.0.

MIKE2.0, which stands for Method for an Integrated Knowledge Environment, is an open source delivery framework for Enterprise Information Management. It provides a comprehensive methodology (with 984 significant articles so far) that can be applied across the key domains of Information Management including Business Intelligence & Performance Management, Enterprise Data Management, Access/Search & Content Delivery, Enterprise Content Management, Information Asset Management, and Information Strategy, Architecture & Governance.

Since 2006, MIKE2.0 has acted as a free resource for information professionals by bringing together best practices in methods, techniques and benchmarking. It is now being made available in print publication to a wider audience, highlighting key wiki articles, blog posts, case studies and user applications of the methodology. With many governments and businesses thinking about how to respond to the latest round of challenges including cyber risks and privacy, the MIKE2.0 book elevates the discussion from day-to-day headlines to a strategic response.

The book has already received significant industry praise:

  • “The book is a definite must read for Information Management professionals who are looking to establish a data management competency within their organizations” – Sunil Soares (Founder and Managing Partner, Information Asset LLC)
  • “Make sure you read it before your auditors or compliance managers do so!” – Andy Mulholland (Henley Business School; formerly named one of InfoWorld USA’s top 25 most influential CTOs in the world)

Authors for the book include Andreas Rindler, Sean McClowry, Robert Hillard, and Sven Mueller, with additional credit due to Deloitte, BearingPoint and over 7,000 members and key contributors of the MIKE2.0 community.

 

Get Involved

The book has been published in paperback (available on Amazon.com and Barnes & Noble) as well as all major e-book publishing platforms. For more information on MIKE2.0 or how to get involved with the online MIKE2.0 community, please contact us.

Category: Information Development
1 Comment »

by: Bsomich
13  Apr  2013

Weekly IM Update.


Did You Know?

MIKE’s Integrated Content Repository brings together the open assets from the MIKE2.0 Methodology, shared assets available on the internet and internally held assets. The Integrated Content Repository is a virtual hub of assets that can be used by an Information Management community, some of which are publicly available and some of which are held internally.

Any organisation can follow the same approach and integrate their internally held assets to the open standard provided by MIKE2.0 in order to:

  • Build community
  • Create a common standard for Information Development
  • Share leading intellectual property
  • Promote a comprehensive and compelling set of offerings
  • Collaborate with the business units to integrate messaging and coordinate sales activities
  • Reduce costs through reuse and improve quality through known assets

The Integrated Content Repository is a true Enterprise 2.0 solution: it makes use of the collaborative, user-driven content built using Web 2.0 techniques and technologies on the MIKE2.0 site and incorporates it internally into the enterprise. The approach followed to build this repository is referred to as a mashup.

Feel free to try it out when you have a moment – we’re always open to new content ideas.

Sincerely,

MIKE2.0 Community 


This Week’s Blogs:

Data Visualizations: This Isn’t 1996

During the summer of 1996, I interned at a high-tech company near Boston, MA. I played with data for most of the day, occasionally sneaking in looks at these cool new things called web sites. Rather than presenting my data in basic spreadsheets, I would throw it into a graph or chart. In Microsoft Excel, that was easy enough to do.

Fast forward 17 years and the state of data visualization is orders of magnitude more advanced. Yet, many presentations still contain slides like this one.
Read more.

Overcoming Information Overload
Information is a key asset for every organization, yet due to the rise of technology, Web 2.0 and a general overabundance of raw data, many businesses are not equipped to make sense of it all.

How can managers overcome an age of “information overload” and begin to concentrate on the data that is most meaningful for the business?  Based on your experience, do you have any tips to share?

Read more.

New Survey Finds Collaborative Technologies Can Improve Corporate Performance

A new McKinsey Quarterly report, The rise of the networked enterprise: Web 2.0 finds its payday, released just this week is proving to be a treasure chest of information on the value of collaborative technologies. This new release of a series of similar studies from McKinsey over the years comes out of a survey of over 3000 executives across a range of regions, industries and functional areas. It provides detailed information on business value and measurable benefits in multiple venues of collaboration: between employees, with customers, and with business partners. It also examines the link—and finds great correlations—between implementing collaborative technologies and corporate performance.

Read more.

Category: Information Development
No Comments »

by: Phil Simon
09  Apr  2013

Data Visualizations: This Isn’t 1996

During the summer of 1996, I interned at a high-tech company near Boston, MA. I played with data for most of the day, occasionally sneaking in looks at these cool new things called web sites. Rather than presenting my data in basic spreadsheets, I would throw it into a graph or chart. In Microsoft Excel, that was easy enough to do.

Fast forward 17 years and the state of data visualization is orders of magnitude more advanced. Yet, many presentations still contain downright ugly slides like this:

In a word, yuck.

The vast majority of the time, spreadsheets make for truly awful slides. What’s more, as I write in Too Big to Ignore, today there are so many neat ways to visualize data. Forget basic pie charts and bar graphs. They’re better than the slide above, but we’re not limited to Excel when telling our stories. It’s not 1996 anymore.

I recently started playing with Easel.ly, a drag-and-drop tool that allows users to create stunning, visually compelling infographics. You can see from the objects above that customization is very WYSIWYG. See below:

easelly-infographics

Why should we spend the time sexifying our data? Many reasons come to mind. For one, infographics let you tell a much better story with data. And let’s not forget our ever-declining attention spans. When we see hard-to-read (let alone hard-to-understand) slides like the spreadsheet above, how many of us just tune out? I’ll cop to it. We’re carrying around mini-computers in the form of tablets and smart phones. It’s not difficult to sneak a peek at our e-mail or text someone if a slide bores or confuses us, especially at a conference.

Bottom line: it’s just plain lazy to present data like this – whether or not that data is structured. Lamentably, though, that doesn’t stop many presenters from making this mistake. The result: people look down at their devices, not up at the speaker. Is that really what you want?

Simon Says

Some people say that you shouldn’t include data in presentations, period. They cite horrible slides and tables like the one above.

While I concur about ugly slides, I couldn’t disagree with the “no data in presentations” argument more. We live in an era of Big Data. If you want to make an argument to do–or not do–something, data can certainly help, with one major caveat: that data needs to be presented in a compelling fashion. Confusing your attendees or colleagues with impenetrable spreadsheets and tables is unlikely to achieve the desired results.

Feedback

What say you?

Category: Information Value
No Comments »

by: Phil Simon
01  Apr  2013

What the Data Doesn’t Appear to Tell Us–But Actually Does

While researching Too Big to Ignore, I discovered that Big Data is already having a major impact on a wide swath of industries and functions. It’s interesting to note, though, that what the data doesn’t appear to say can actually say quite a bit.

In How Big Data Is Changing the Whole Equation for Business, Stephen Rosenbush and Michael Totty write about the impact that Big Data is having on the world of business. From the article:

Big Data is also changing hiring. Take Catalyst IT Services, a Baltimore-based technology outsourcing company that assembles teams for programming jobs. This year, the company will screen more than 10,000 candidates. Not only is traditional recruiting too slow and cumbersome, the company says, but also the subjective choices of hiring managers too often result in new employees who aren’t the best fit.

“You need to be able to build models that help you take that subjective view away,” says Michael Rosenbaum, Catalyst’s founder and chief executive.

So, Catalyst asks candidates to fill out an online assessment—a move that many companies are making these days, most famously Google. Catalyst uses it to collect thousands of bits of information about each applicant; in fact, it gets more data from how they answer than what they answer.

For instance, the assessment might give a problem requiring calculus to an applicant who isn’t expected to know it. How the applicant reacts—laboring over an answer; answering quickly and then returning later; or skipping it entirely—provides lots of data about how someone will deal with challenges.

Someone who labors over a difficult question might fit an assignment that requires a methodical approach to problem solving, while an applicant who takes a more aggressive approach might be better in another setting.
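A toy sketch of how such behavioral signals might be encoded (the signal names and thresholds are invented for illustration; the article does not describe Catalyst’s actual model):

```python
def problem_solving_style(seconds_spent, skipped, revisited):
    """Map raw assessment telemetry for one hard question
    to a coarse problem-solving style label."""
    if skipped and not revisited:
        return "avoids unfamiliar problems"
    if revisited:
        return "quick first pass, then returns"  # aggressive, iterative
    if seconds_spent > 300:
        return "methodical"                      # labors over the answer
    return "fast and decisive"

# Three applicants facing the same unexpected calculus question:
print(problem_solving_style(420, skipped=False, revisited=False))  # methodical
print(problem_solving_style(45, skipped=True, revisited=True))     # quick first pass, then returns
print(problem_solving_style(0, skipped=True, revisited=False))     # avoids unfamiliar problems
```

The point is not the specific labels but that the *metadata* of an answer (time, skips, revisits) is itself data – thousands of such bits per applicant is what turns an assessment into a behavioral model.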

Think about the ramifications of things like this.

New Questions and Answers

No longer will we be required to assess people (say, students, customers, or potential employees) by a simple score. How long they took to answer questions or make purchases (or not) can serve as a very effective gauge of future performance–and areas in which improvement is needed.

Sal Khan of Khan Academy has made the same point about assessing student performance. Through data, we will be able to determine which parts of a subject give a student difficulty. For instance, no longer will we be confined to statements like “Jill struggles with geometry.” Rather, we will be able to ascertain which parts of geometry give Jill the most trouble. Maybe she just doesn’t get along with cotangents and cosines.
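As a minimal sketch (the data and threshold are invented, not Khan Academy’s actual analytics), simple per-subtopic aggregation is enough to move from “Jill struggles with geometry” to something actionable:

```python
from collections import defaultdict

# Each record: (student, subtopic, answered_correctly)
responses = [
    ("Jill", "triangles", True),
    ("Jill", "cotangents", False),
    ("Jill", "cotangents", False),
    ("Jill", "cosines", False),
    ("Jill", "triangles", True),
]

def trouble_spots(responses, student, threshold=0.5):
    """Return the subtopics where the student's success rate
    falls below the given threshold."""
    tally = defaultdict(lambda: [0, 0])  # subtopic -> [correct, total]
    for who, topic, ok in responses:
        if who == student:
            tally[topic][0] += ok
            tally[topic][1] += 1
    return sorted(t for t, (c, n) in tally.items() if c / n < threshold)

print(trouble_spots(responses, "Jill"))  # ['cosines', 'cotangents']
```

The same aggregation works for customers or job applicants – the score stops being a single number and becomes a profile.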

Simon Says

It is incumbent upon everyone in business (from CXOs to entry-level employees) to realize the vast potential of the Big Data opportunity. New ways of thinking and addressing problems have arrived; they’re not just for early adopters.

Realize that information holds a tremendous amount of value – and not just the traditional kind. Start asking yourselves what we know, what we don’t, and how we can bridge that gap. Short answer: through data and technology. There’s gold in the streets for those who understand the enormous potential of Big Data.

Feedback

What say you?

Category: Information Management, Information Value
No Comments »
