Open Framework, Information Management Strategy & Collaborative Governance | Data & Social Methodology - MIKE2.0 Methodology
by: Alandduncan
13  May  2014

Is Your Data Quality Boring?

https://www.youtube.com/watch?v=vEJy-xtHfHY

Is this the kind of response you get when you mention to people that you work in Data Quality?!

Let’s be honest here. Data Quality is good and worthy, but it can be a pretty dull affair at times. Information Management is something that “just happens”, and folks would rather not know the ins-and-outs of how the monthly Management Pack gets created.

Yet I’ll bet that they’ll be right on your case when the numbers are “wrong”.

Right?

So here’s an idea. The next time you want to engage someone in a discussion about data quality, don’t start by discussing data quality. Don’t mention the processes of profiling, validating or cleansing data. Don’t talk about integration, storage or reporting. And don’t even think about metadata, lineage or auditability. Yaaaaaaaaawn!!!!

Instead of concentrating on telling people about the practitioner processes (which of course are vital, and fascinating no doubt if you happen to be a practitioner), think about engaging in a manner that is relevant to the business community, using language and examples that are business-oriented. Make it fun!

Once you’ve got the discussion flowing in terms of the impacts, challenges and inhibitors that get in the way of successful business operations, then you can start to drill into the underlying data issues and their root causes. More often than not, a data quality issue is symptomatic of a business process failure rather than being an end in itself. By fixing the process problem, the business user gains a benefit, and the data is enhanced as a by-product. Everyone wins (and you didn’t even have to mention the dreaded DQ phrase!)

Data Quality is a human thing – that’s why it’s hard. As practitioners, we need to be communicators. Lead the thinking, identify the impact and deliver the value.

Now, that’s interesting!

https://www.youtube.com/watch?v=5sGjYgDXEWo

Category: Business Intelligence, Data Quality, Enterprise Data Management, Information Governance, Information Management, Information Strategy, Information Value, Master Data Management, Metadata
No Comments »

by: Alandduncan
12  May  2014

The Information Management Tube Map

Just recently, Gary Allemann posted a guest article on Nicola Askham’s blog, which made an analogy between Data Governance and the London Tube map. (Nicola is also on Twitter. See also Gary Allemann’s blog, Data Quality Matters.)

Up until now, I’ve always struggled to think of a way to represent all of the different aspects of Information Management/Data Governance; the environment is multi-faceted, with the interconnections between the component capabilities being complex and not hierarchical. I’ve sometimes alluded to there being a network of relationships between elements, but this has been a fairly abstract concept that I’ve never been able to adequately illustrate.

And in a moment of perspiration, I came up with this…

http://informationaction.blogspot.com.au/2014/03/the-information-management-tube-map.html

I’ll be developing this further as I go but in the meantime, please let me know what you think.

(NOTE: following on from Seth Godin’s plea for more sharing of ideas, I am publishing the Information Management Tube Map under the Creative Commons Attribution-ShareAlike 4.0 International license. Please credit me where you use the concept, and I would appreciate it if you could reference back to me with any changes, suggestions or feedback. Thanks in advance.)

Category: Business Intelligence, Data Quality, Enterprise Data Management, Information Development, Information Governance, Information Management, Information Strategy, Information Value, Master Data Management, Metadata
No Comments »

by: Bsomich
10  May  2014

MIKE2.0 Community Update

Missed what’s been happening in the data management community? Check out our bi-weekly update:

 

Did You Know? 

MIKE’s Integrated Content Repository brings together the open assets from the MIKE2.0 Methodology, shared assets available on the internet and internally held assets. The Integrated Content Repository is a virtual hub of assets that can be used by an Information Management community, some of which are publicly available and some of which are held internally.

Any organisation can follow the same approach and integrate their internally held assets to the open standard provided by MIKE2.0 in order to:

  • Build community
  • Create a common standard for Information Development
  • Share leading intellectual property
  • Promote a comprehensive and compelling set of offerings
  • Collaborate with the business units to integrate messaging and coordinate sales activities
  • Reduce costs through reuse and improve quality through known assets

The Integrated Content Repository is a true Enterprise 2.0 solution: it makes use of the collaborative, user-driven content built using Web 2.0 techniques and technologies on the MIKE2.0 site and incorporates it internally into the enterprise. The approach followed to build this repository is referred to as a mashup.

Feel free to try it out when you have a moment; we’re always open to new content ideas.

Sincerely,

MIKE2.0 Community  

Contribute to MIKE:

Start a new article, help with articles under construction or look for other ways to contribute.

Update your personal profile to advertise yourself to the community and interact with other members.

Useful Links:
Home Page
Login
Content Model
FAQs
MIKE2.0 Governance


 

Did You Know?
All content on MIKE2.0 and any contributions you make are published under the Creative Commons license. This allows you free re-use of our content as long as you add a brief reference back to us.

 

This Week’s Blogs for Thought:

An Open Source Solution for Better Performance Management

Today, many organizations face increased scrutiny and higher overall performance expectations from internal and external stakeholders. Both business and public sector leaders must provide greater external and internal transparency around their activities, ensure accounting data stands up to compliance challenges, and extract return and competitive advantage from their customer, operational and performance information.

Read more.

Finding Ugly Data with Pretty Pictures

While data visualization often produces pretty pictures, as Phil Simon explained in his new book The Visual Organization: Data Visualization, Big Data, and the Quest for Better Decisions, “data visualization should not be confused with art. Clarity, utility, and user-friendliness are paramount to any design aesthetic.” Bad data visualizations are even worse than bad art since, as Simon says, “they confuse people more than they convey information.”

Read more.
Is the cloud really safe? 

With the rise of Apple’s iCloud and file-sharing solutions such as Dropbox and SharePoint, it’s no surprise that SaaS and cloud computing solutions are increasing in popularity. From reduced costs in hosting and infrastructure to increased mobility, backups and accessibility, the benefits are profound. Need proof? A recent study by IDC estimates that spending on public IT cloud services will expand at a compound annual growth rate of 27.6%, from $21.5 billion in 2010 to $72.9 billion in 2015.

Read more.

 

Forward this message to a friend

 

Category: Information Development
No Comments »

by: Bsomich
01  May  2014

An Open Source Solution for Better Performance Management

Today, many organizations face increased scrutiny and higher overall performance expectations from internal and external stakeholders. Both business and public sector leaders must provide greater external and internal transparency around their activities, ensure accounting data stands up to compliance challenges, and extract return and competitive advantage from their customer, operational and performance information. Managers, investors and regulators have a new perspective on performance and compliance, which may include:

  • No tolerance of any gap in financial controls
  • Increased personal accountability for the accuracy of results
  • Focus on fair representation and performance over accounting standards
  • Increased need to understand the drivers of both financial and operational results

Senior executives across industry agree that achieving growth and performance objectives requires improved analytics, defining and implementing a process to track, monitor and measure innovation and performance, and making available more timely, accurate information for decision-making. MIKE2.0’s open source Enterprise Performance Management offering can help organizations achieve these critical objectives.

Learn more about this solution.

[Figure: MIKE2.0 Enterprise Performance Management (EPM) Framework]

Category: Information Development
No Comments »

by: Ocdqblog
30  Apr  2014

Finding Ugly Data with Pretty Pictures

While data visualization often produces pretty pictures, as Phil Simon explained in his new book The Visual Organization: Data Visualization, Big Data, and the Quest for Better Decisions, “data visualization should not be confused with art. Clarity, utility, and user-friendliness are paramount to any design aesthetic.” Bad data visualizations are even worse than bad art since, as Simon says, “they confuse people more than they convey information.”

Simon explained how data scientist Melinda Thielbar recommends using data visualization to help an analyst communicate with a nontechnical audience, as well as help the data communicate with the analyst.

“Visualization is a great way to let the data tell a story,” Thielbar explained. “It’s also a great way for analysts to fool themselves into believing the story they want to believe.” This is why she recommends developing the visualizations at the beginning of the analysis to allow the visualizations that really illustrate the story behind the data to stand out, a process she calls “building windows into the data.” When you look through a window, you may not like what you see.

“Data visualizations may include bad, suspect, duplicate, or incomplete data,” Simon explained. This can be a good thing, however, since data visualizations “can help users identify fishy information and purify data faster than manual hunting and pecking. Data quality is a continuum, not a binary. Use data visualization to improve data quality.” Even when you are looking at what appears to be the pretty end of the continuum, Simon cautioned that “just because data is visualized doesn’t necessarily mean that it is accurate, complete, or indicative of the right course of action.”

Especially when dealing with the volume aspect of big data, data visualization can help find outliers faster. While detailed analysis is needed to determine whether an outlier is a business insight or a data quality issue, data visualization can help you shake those needles out of the haystack and into a clear field of vision.
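To make this concrete, here is a minimal sketch (my own illustration, not an example from Simon’s book) of how a quick chart plus a rule-of-thumb filter can surface outlier candidates for review; the file name and column names are hypothetical.

    import pandas as pd
    import matplotlib.pyplot as plt

    # Hypothetical orders extract; "amount" is the column we want to sanity-check.
    df = pd.read_csv("orders.csv")

    # A simple box plot makes extreme values visible at a glance; whether a given
    # point is a business insight or a data quality defect still needs follow-up.
    df.boxplot(column="amount", by="order_channel")
    plt.title("Order amount by channel")
    plt.suptitle("")  # drop the automatic grouped title pandas adds
    plt.show()

    # Flag candidates for manual review using the usual 1.5 * IQR rule of thumb.
    q1, q3 = df["amount"].quantile([0.25, 0.75])
    iqr = q3 - q1
    outliers = df[(df["amount"] < q1 - 1.5 * iqr) | (df["amount"] > q3 + 1.5 * iqr)]
    print(outliers.head())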

Among its many other uses, which Simon illustrates well in his book, finding ugly data with pretty pictures is one way data visualization can be used for improving data quality.

Category: Data Quality
No Comments »

by: Bsomich
26  Apr  2014

Community Update.

 


New to MIKE2.0? Here’s a Structural Overview…

If you’re not already familiar, here is an intro to the structure of the MIKE2.0 methodology and associated content:

  • A New Model for the Enterprise provides an introductory rationale for MIKE2.0
  • What is MIKE2.0? is a good basic introduction to the methodology, with some of the major diagrams and schematics
  • Introduction to MIKE2.0 is a category of other introductory articles
  • Mike 2.0 How To provides a listing of basic articles on how to work with and understand the MIKE2.0 system and methodology
  • Alternative Release Methodologies describes current thinking about how the basic structure of MIKE2.0 can itself be modified and evolve. The site presently follows a hierarchical model with governance for major changes, though branching and other models could be contemplated.

We hope you find this of benefit and welcome any suggestions you may have to improve it.

Sincerely,

MIKE2.0 Community

New! Popular Content

Did you know that the following wiki articles are the most popular on Google? Check them out, and feel free to edit or expand them!

What is MIKE2.0?
Deliverable Templates
The 5 Phases of MIKE2.0
Overall Task List
Business Assessment Blueprint
SAFE Architecture
Information Governance Solution

Contribute to MIKE:

Start a new article, help with articles under construction or look for other ways to contribute.

Update your personal profile to advertise yourself to the community and interact with other members.

Useful Links:
Home Page
Login
Content Model
FAQs
MIKE2.0 Governance


 

 

This Week’s Blogs for Thought:

Outsourcing our Memory to the Cloud

On the recent Stuff to Blow Your Mind podcast episode Outsourcing Memory, hosts Julie Douglas and Robert Lamb discussed how, from remembering phone numbers to relying on spellcheckers, we’re allocating our cognitive processes to the cloud. “Have you ever tried to recall an actual phone number stored in your cellphone, say of a close friend or relative, and been unable to do so?” Douglas asked. She remarked how that question would have been ridiculous ten years ago, but nowadays most of us would have to admit that the answer is yes.

Read more.

How CIOs Can Discuss the Contribution of IT

Just how productive are Chief Information Officers or the technology that they manage?  With technology portfolios becoming increasingly complex it is harder than ever to measure productivity.  Yet boards and investors want to know that the capital they have tied-up in the information technology of the enterprise is achieving the best possible return.
For CIOs, talking about value improves the conversation with executive colleagues.  Taking them aside to talk about the success of a project is, even for the most strategic initiatives, usually seen as a tactical discussion.  Changing the topic to increasing customer value or staff productivity through a return on technology capital is a much more strategic contribution.

Read more.
Let the Computers Calculate and the Humans Cogitate

Many organizations are wrapping their enterprise brain around the challenges of business intelligence, looking for the best ways to analyze, present, and deliver information to business users.  More organizations are choosing to do so by pushing business decisions down in order to build a bottom-up foundation.

However, one question coming up more frequently in the era of big data is what should be the division of labor between computers and humans?
Read more.

 

 

Forward to a Friend!

Know someone who might be interested in joining the MIKE2.0 Community? Forward this message to a friend.

Questions?

If you have any questions, please email us at mike2@openmethodology.org.

 

 

 

Category: Information Development
No Comments »

by: Ocdqblog
22  Apr  2014

Outsourcing our Memory to the Cloud

On the recent Stuff to Blow Your Mind podcast episode Outsourcing Memory, hosts Julie Douglas and Robert Lamb discussed how, from remembering phone numbers to relying on spellcheckers, we’re allocating our cognitive processes to the cloud.

“Have you ever tried to recall an actual phone number stored in your cellphone, say of a close friend or relative, and been unable to do so?” Douglas asked. She remarked how that question would have been ridiculous ten years ago, but nowadays most of us would have to admit that the answer is yes. Remembering phone numbers is just one example of how we are outsourcing our memory. Another is spelling.

“Sometimes I find myself intentionally misspelling a word to make sure the application I am using is running a spellchecker,” Lamb remarked. Once confirmed, he writes without worrying about misspellings since the spellchecker will catch them. I have to admit that I do the same thing. In fact, while writing this paragraph I misspelled several words without worry since they were automatically caught by those all-too-familiar red-dotted underlines. (Don’t forget, however, that spellcheckers don’t check for contextual accuracy.)

Transactive Memory and Collaborative Remembering

Douglas referenced the psychological concept of transactive memory, where groups collectively store and retrieve knowledge. This provides members with more and better knowledge than any individual could build on their own. Lamb referenced cognitive experimental research on collaborative remembering. This allows a group to recall information that its individual members had forgotten.

The memory management model of what we now call the cloud is transactive memory and collaborative remembering on a massive scale. It has pervaded most aspects of our personal and professional lives. Douglas and Lamb contemplated both its positive and negative aspects. Many of the latter resonated with points I made in my previous post about Automation and the Danger of Lost Knowledge.

Free Your Mind

In a sense, outsourcing our memory to the cloud frees up our minds. It is reminiscent of Albert Einstein remarking that he didn’t need to remember basic mathematical equations since he could just look them up in a book when he needed them. Nowadays he would just look them up on Google or Wikipedia (or MIKE2.0 if, for example, he needed a formula for calculating the economic value of information). Not bothering to remember basic mathematical equations freed up Einstein’s mind for his thought experiments, allowing him to contemplate groundbreaking ideas like the theory of relativity.

Forgetting how to Remember

I can’t help but wonder what our memory will be like ten years from now after we have outsourced even more of it to the cloud. Today, we don’t have to remember phone numbers or how to spell. Ten years from now, we might not have to remember names or how to count.

Wearable technology, like Google Glass or Narrative Clip, will allow us to have an artificial photographic memory. Lifelogging will allow us to record our own digital autobiography. “We have all forgot more than we remember,” Thomas Fuller wrote in the 18th century. If before the end of the 21st century we don’t have to remember anything, perhaps we will start forgetting how to remember.

I guess we will just have to hope that a few trustworthy people remember how to keep the cloud working.

Category: Information Development
No Comments »

by: Robert.hillard
20  Apr  2014

How CIOs can discuss the contribution of IT

Just how productive are Chief Information Officers or the technology that they manage?  With technology portfolios becoming increasingly complex it is harder than ever to measure productivity.  Yet boards and investors want to know that the capital they have tied-up in the information technology of the enterprise is achieving the best possible return.

For CIOs, talking about value improves the conversation with executive colleagues.  Taking them aside to talk about the success of a project is, even for the most strategic initiatives, usually seen as a tactical discussion.  Changing the topic to increasing customer value or staff productivity through a return on technology capital is a much more strategic contribution.

What is the return on an IT system?

There are all sorts of productivity measures that can be applied to individual systems, but they are usually based on the efficiency of existing processes, which leads to behaviours that reduce flexibility. The future of business and government depends on speed of response to change, not how efficiently they deal with a static present.

Businesses invest in information systems to have the right information at the right time to support decisions or processes.  Information that is used is productive while information that is collected, but poorly applied, is wasted or unproductive.

However, to work out what proportion of information is being used there needs to be a way to quantify it.

How much information is contained in the systems?

There is a formal way to measure the quantity of information.  I introduce this extensively in chapter 6 of Information-Driven Business.

The best way to understand “quantity” in terms of information is to count the number of artefacts rather than the number of bits or bytes required to store them. The best accepted approach to describing this quantity is called “information entropy” which, confusingly, uses a “bit” as its unit of measure; it is a count of the potential permutations that the system can represent.

A system that holds 65,536 names has just 16 “bits” of unique information (log₂ 65,536 = 16). That might sound strange given that the data storage of 65,536 names might use on the order of 6 MB.

To understand why there are only 16 bits of unique information in a list of 65,536 names, consider whether the business uses the spelling of the names or whether any additional insight is being gained from the data that is stored.
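As a rough illustration (mine, not taken from the book), the entropy calculation can be sketched in a few lines of Python; the helper name information_entropy is hypothetical.

    import math
    from collections import Counter

    def information_entropy(values):
        """Shannon entropy, in bits, of a column of values."""
        counts = Counter(values)
        total = sum(counts.values())
        return -sum((n / total) * math.log2(n / total) for n in counts.values())

    # 65,536 distinct, equally likely "names" carry log2(65,536) = 16 bits,
    # matching the example above, however many megabytes they take to store.
    names = ["name_%d" % i for i in range(65_536)]
    print(information_entropy(names))  # -> 16.0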

How much of that information is actually used?

Knowing how much information there is in a system opens up the opportunity to find out how much information is being productively used. The amount of information being used to drive customer or management choices is perhaps best described as “decision entropy”. The decision entropy is either equal to or less than the total information entropy.
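One simple way to express that relationship (again, my sketch rather than a formula from the book) is as a utilisation ratio: the bits of information that demonstrably feed decisions divided by the total bits held. It reuses the hypothetical information_entropy helper above.

    def information_utilisation(all_values, used_values):
        # Rough proxy: bits of information that demonstrably feed decisions,
        # divided by the total bits held. When used_values is drawn from
        # all_values, the ratio stays between 0 and 1.
        total_bits = information_entropy(all_values)
        used_bits = information_entropy(used_values)
        return used_bits / total_bits if total_bits else 0.0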

An organisation using 100% of its available information is incredibly lean and nimble. It has removed much of the complexity that stymies its competitors (see Value of decommissioning legacy systems).

Of course, no organisation productively uses all of the information that it holds. Knowing that holding unproductive information comes at a cost to the organisation, the CIO can have an engaging conversation with fellow executives about extracting more value from existing systems without changing business processes.

When looking at how business reports are really used, and how many reports lie unread on management desks, there is a lot of low hanging fruit to be picked just by improving the way existing business intelligence is used.

Similarly, customer systems seldom maximise their use of hints based on existing information to guide buyers to close the best available offer.  A few digital enhancements at the front line can bring to the surface a vast array of otherwise unused information.

Changing the conversation

Globally, CIOs are finding themselves pushed down a rung in the organisational ladder.  This is happening at the very same time that technology is moving from the back office to become a central part of the revenue story through digital disruption.

CIOs are not automatically entitled to be at the executive table.  They have to earn the right by contributing to earnings and business outcomes.  One of the best discussions for a CIO to focus on is increasing productivity of the capital tied-up in the investments that have already been made in the systems that support staff and customers.

Category: Information Development, Information Strategy, Information Value
No Comments »

by: Bsomich
15  Apr  2014

Community Update

Missed what happened in the MIKE2.0 Community? Here’s a quick recap:


Business Drivers for Better Metadata Management

There are a number of Business Drivers for Better Metadata Management that have caused metadata management to grow in importance over the past few years at most major organisations. These organisations are focused on more than just a data dictionary across their information – they are building comprehensive solutions for managing business and technical metadata.

Our wiki article on the subject explores many factors contributing to the growth of metadata and offers guidance on how to manage it better.

Feel free to check it out when you have a moment.

Sincerely,

MIKE2.0 Community

Contribute to MIKE:

Start a new article, help with articles under construction or look for other ways to contribute.

Update your personal profile to advertise yourself to the community and interact with other members.

Useful Links:
Home Page
Login
Content Model
FAQs
MIKE2.0 Governance


Did You Know?
All content on MIKE2.0 and any contributions you make are published under the Creative Commons license. This allows you free re-use of our content as long as you add a brief reference back to us.

 

This Week’s Blogs for Thought:

Departing Employees: How well are you closing the information loop? 

Every organization, regardless of size, understands the importance of good on-boarding procedures for incoming employees and new hires. We have a slew of welcome packets, orientations, procedures and trainings to ensure we’re providing our incoming talent with the right tools to be successful in their new roles. But how often are we taking the same care to off-board our departing employees when sensitive company information and intellectual property are at stake?

Read more. 

On Sharing Data

While security and privacy issues prevent sensitive data from being shared (e.g., customer data containing personal financial information or patient data containing personal health information), do you have access to data that would be more valuable if you shared it with the rest of your organization—or perhaps the rest of the world?

Read more.

 

Evernote’s Three Laws of Data Protection

“It’s all about bucks, kid. The rest is conversation.” –Michael Douglas as Gordon Gekko, Wall Street (1987)

Sporting more than 60 million users, Evernote is one of the most popular productivity apps out there these days. You may in fact use the app to store audio notes, video, pics, websites, and perform a whole host of other tasks.

Read more.

  Forward this message to a friend

 

 

Category: Information Development
No Comments »

by: Bsomich
11  Apr  2014

Departing Employees: How well are you closing the information loop?

Every organization, regardless of size, understands the importance of good on-boarding procedures for incoming employees and new hires. We have a slew of welcome packets, orientations, procedures and trainings to ensure we’re providing our incoming talent with the right tools to be successful in their new roles. But how often are we taking the same care to off-board our departing employees when sensitive company information and intellectual property are at stake?

This topic has really hit home for our organization this year, as we began to develop procedures for operations that hadn’t been clearly defined or documented, impacting critical departments such as HR, IT and Finance. Developing SOPs for these “gray areas” of the business has uncovered some interesting gaps that had not previously been closed. In the end, we discovered that while we had very clear instructions for bringing new talent into the company, we had no formal process for those who left, and most off-boarding activities were being carried out on an ad hoc basis.

Lesson learned: Regardless of company size or resources, taking the right approach to off-boarding can save a giant headache when it comes to information security. It should be a preventive measure, not a reactive process.

As a baseline, organizations should give careful thought to the following information access points (a rough scripted sketch follows the list):

  • Email
  • Phone Directories
  • Documents/File Sharing Systems
  • CRM/Mailing Lists
  • Company Intranets
  • Website or FTPs
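A minimal sketch of what a scripted off-boarding checklist could look like is below; every callable here is a placeholder for whatever your directory, file-sharing or CRM systems actually expose, and the structure itself is only illustrative.

    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class OffboardingStep:
        access_point: str
        revoke: Callable[[str], bool]  # returns True once access is confirmed removed

    def offboard(employee_id: str, steps: List[OffboardingStep]) -> Dict[str, bool]:
        # Run every step and report which access points are confirmed closed.
        return {step.access_point: step.revoke(employee_id) for step in steps}

    # Example wiring (all callables are stand-ins for real system APIs):
    # steps = [
    #     OffboardingStep("Email", disable_mailbox),
    #     OffboardingStep("Phone directory", remove_directory_entry),
    #     OffboardingStep("File sharing", revoke_share_permissions),
    #     OffboardingStep("CRM/mailing lists", remove_crm_account),
    #     OffboardingStep("Intranet", disable_intranet_login),
    #     OffboardingStep("Website/FTP", rotate_ftp_credentials),
    # ]
    # print(offboard("e12345", steps))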

 

How well is your team closing the gap with respect to these information access points? Does your HR department communicate off-boarding needs to IT, and do employees sign an NDA upon hire? How are you ensuring your intellectual property and other critical enterprise information is being safeguarded from departing talent?

 

Category: Information Development
No Comments »
