
Posts Tagged ‘Information Development’

by: Ocdqblog
29 Jul 2014

Do you know good information when you hear it?

When big data is discussed, we rightfully hear a lot of concerns about signal-to-noise ratio. Amidst the rapidly expanding galaxy of information surrounding us every day, most of what we hear sounds like meaningless background static. We are so overloaded, yet underwhelmed, by information that perhaps we are becoming tone-deaf to signal, leaving us hearing only noise.

Before big data burst onto the scene, bursting our eardrums with its unstructured cacophony, most of us believed we had data quality standards and therefore we knew good information when we heard it. Just as most of us believe we would know great music when we hear it.

In 2007, Joshua Bell, a world-class violinist and recipient of that year’s Avery Fisher Prize, an award given to American musicians for outstanding achievement in classical music, participated in an experiment for The Washington Post columnist Gene Weingarten, whose article about the experiment won the 2008 Pulitzer Prize for feature writing.

During the experiment, Bell posed as a street performer, dressed in jeans and a baseball cap, and played violin for tips in a Washington, D.C. metro station during one Friday morning rush hour. For 45 minutes he performed six classical masterpieces, some of the most elegant music ever written, on one of the most valuable violins ever made.

Weingarten described it as “an experiment in context, perception, and priorities—as well as an unblinking assessment of public taste: In a banal setting at an inconvenient time, would beauty transcend? Each passerby had a quick choice to make, one familiar to commuters in any urban area where the occasional street performer is part of the cityscape: Do you stop and listen? Do you hurry past with a blend of guilt and irritation, aware of your cupidity but annoyed by the unbidden demand on your time and your wallet? Do you throw in a buck, just to be polite? Does your decision change if he’s really bad? What if he’s really good? Do you have time for beauty?”

As Bell performed, over 1,000 commuters passed by him. Most barely noticed him at all. Very few stopped briefly to listen to him play. Even fewer were impressed enough to toss a little money into his violin case. Although three days earlier he had performed the same set in front of a sold-out crowd at Boston Symphony Hall, where ticket prices for good seats started at $100 each, at the metro station Bell earned only $32.17 in tips.

“If a great musician plays great music but no one hears,” Weingarten pondered, “was he really any good?”

That metro station during rush hour is an apt metaphor for the daily experiment in context, perception, and priorities facing information development in the big data era. In a fast-paced business world overcrowded with fellow travelers and abuzz with so much background noise, if great information is playing but no one hears, is it really any good?

Do you know good information when you hear it? How do you hear it amidst the noise surrounding it?

 

Category: Data Quality, Information Development

by: Ocdqblog
11 Oct 2013

The Quality of Things

I have previously blogged about how more data might soon be generated by things than by humans, using the phrase the Internet of Humans not only to differentiate human-generated data from the machine-generated data of the Internet of Things, but also as a reminder that humans are a vital source of knowledge that no amount of data from any source could ever replace.

The Internet of Things is the source of the category of big data known as sensor data. It is often the new data type you come across while defining big data, it is a common reason to start getting to know NoSQL, and it differentiates machine-generated data from the data generated directly by humans typing, taking pictures, recording videos, scanning bar codes, and so on.

One often-alleged advantage of machine-generated data is its inherently higher data quality, owing to humans being removed from the equation. In this blog post, let’s briefly explore that claim.

Is it hot in here or is it just me?

Let’s use the example of accurately recording the temperature in a room, something that a thing, namely a thermostat, has been helping us do long before the Internet of Anything ever existed.

(Please note, for the purposes of keeping this discussion simple, let’s ignore the fact that the only place where room temperature matches the thermostat is at the thermostat — as well as how that can be compensated for with a network of sensors placed throughout the room).

A thermostat only displays the current temperature, so if I want to keep a log of temperature changes over time, I could do this by writing my manual readings down in a notebook.  Here is where the fallible human, in this case me, enters the equation and could cause a data quality issue.

For example, the temperature in the room is 71 degrees Fahrenheit, but I incorrectly write it down as 77 degrees (or perhaps my sloppy handwriting caused even me to later misinterpret the 1 as 7).  Let’s also say that I wanted to log the room temperature every hour.  This would require me to physically be present to read the thermostat every hour on the hour.  Obviously, I might not be quite that punctual and I could easily forget to take a reading (or several), thereby preventing it from being recorded.

The Quality of Things

Alternatively, an innovative new thermostat that wirelessly transmits the current temperature, at precisely defined time intervals, to an Internet-based application certainly eliminates both of the human errors in my example.

However, there are other ways a data quality issue could occur. For example, the thermostat may not be displaying the temperature accurately because it is not calibrated correctly. Furthermore, a power loss, a mechanical failure, a lost Internet connection, or a crash of the Internet-based application could prevent the temperature readings from being recorded.
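To make that concrete, here is a minimal sketch in Python of the kind of audit you would still want to run over a machine-generated temperature log. Everything in it (the sample readings, the thresholds, the function name) is my own illustrative invention, not part of any real thermostat’s API:

```python
from datetime import datetime, timedelta

# Hypothetical machine-generated log of (timestamp, degrees Fahrenheit).
# The 11:00 reading is missing, as might happen after a lost connection,
# and the 12:00 value is implausible for a room, as might happen with a
# miscalibrated or faulty sensor.
readings = [
    (datetime(2013, 10, 11, 9, 0), 71.2),
    (datetime(2013, 10, 11, 10, 0), 71.4),
    (datetime(2013, 10, 11, 12, 0), 140.3),
]

EXPECTED_INTERVAL = timedelta(hours=1)
PLAUSIBLE_RANGE = (40.0, 100.0)  # sanity bounds for an indoor room

def audit(log):
    """Flag gaps and implausible values in a sensor log."""
    issues = []
    for (prev_time, _), (curr_time, _) in zip(log, log[1:]):
        if curr_time - prev_time > EXPECTED_INTERVAL:
            issues.append(f"gap: no reading between {prev_time} and {curr_time}")
    low, high = PLAUSIBLE_RANGE
    for time, temp in log:
        if not low <= temp <= high:
            issues.append(f"implausible value {temp}F at {time}: check calibration")
    return issues

for issue in audit(readings):
    print(issue)
```

Neither check requires a human to read the thermostat; both exist because the machine can fail in ways of its own.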

The point I am trying to raise is that although the quality of things entered by things has some advantages over the quality of things entered by humans, nothing could be further from the truth than the claim that machine-generated data is immune to data quality issues.

Category: Data Quality, Information Development

by: Robert.hillard
19 Feb 2012

Technology gardening

There is little that is guaranteed to soothe the stressed mind as much as a well-structured garden.  It brings together order and nature in a magic combination.  From a distance, the garden follows a clear plan that has probably been laid out by a landscape architect.  Up close, each garden bed logically leads to the next.  A good garden tells a story, describes its own purpose and combines aesthetics with function.

While the information technology profession often uses the building metaphor for its projects, perhaps gardens might be more appropriate. Where a building architect operates to a clear project plan with a beginning, middle and end, the landscape architect may have a vision for an end state, but it will be many years before it can be fully realised, with a long evolution along the way. Before the landscape architect’s vision is realised, the goals of the garden are almost certain to have changed in some way, and the gardeners charged with planting, pruning and maintaining the garden will morph the plans as they learn more about what thrives in different parts of the garden and observe the preferences of the garden’s users (the public or the individual owners).

Perhaps the most frustrated members of the technology team are the architects. They set standards and try to impose discipline across the enterprise. They often feel like they have made a breakthrough when everyone agrees to very sensible principles at the start of a project, such as adopting just one set of business intelligence tools, adhering to integration standards or consolidating all online activity through a single platform. Unfortunately, real-world complexity conspires to cause the project team to rapidly break these agreements.

There is no doubt that information technology is the foundation of modern business.  With something so critical, compromising quality should not be an option.  It might be expected that senior executives would be prepared to invest and plan ahead.  Similarly, project managers are engaged with a goal in mind and yet are often forced to abandon the plans laid out by the very same information technology architects that they themselves engaged.

Increasingly, organisations are looking to adopt more evolutionary approaches to technology projects. These methods typically encourage a high-level outline of the project’s goals, followed by experiments that test different approaches in a series of structured mini-projects or “sprints”. Perhaps adopting the gardening metaphor will lead information technology strategists to evolve the entire technology landscape across the enterprise, leveraging the techniques of agile methods combined with the principles of developing a good garden.

An enterprise approach to technology which adopts landscape gardening principles might think of vendor choices not in terms of a single standard but rather what will suit the needs of one area while retaining the aesthetic or integration needs of the whole landscape.  Decisions about system priorities might be expected to change over time as the business moves through its natural cycles, just as gardeners change their focus as the environment moves through good and bad seasons.

More than anything else, though, an organisation adopting the garden metaphor would embrace rather than fear user empowerment. These organisations would seek to plant many technology seeds to find out what grows best, and then be willing to prune or remove some of the newly grown systems to keep the overall landscape in line with the vision. In this model, users focused on their own little patch will create fertile or barren ground without putting the enterprise goals at risk. Not everyone needs to understand the garden as a whole in order to be able to meaningfully contribute to an individual plant or tree.

Category: Enterprise2.0

by: Robert.hillard
18 Dec 2011

Value of decommissioning legacy systems

Most organisations reward their project managers for delivering scope within a given timeframe and a specified budget. While scope is usually measured in terms of user functions, most projects also include the decommissioning of legacy systems. Unfortunately, it is the decommissioning step that is most often compromised in the final stages of a project.

I’ve previously written about the need to measure complexity (see CIOs need to measure the right things).  One of the pieces of feedback I have received from a number of CIOs over the past few months has been that it is very hard to get a business case for decommissioning over the line from a financial perspective.  What’s more, even when it is included in the business case for a new system, it is very hard to avoid it being removed during the inevitable scope and budget management that most major projects go through.

One approach to estimating the benefit of decommissioning is to list the activities that will be simpler as a result of removing the system. These can include reduced duplication of user effort, reduced operational management costs and, most importantly, a reduction in the effort to implement new systems. The problem is that the last of these is the most valuable but is very hard to estimate deterministically. Worse, by the time you do know, it is likely to be too late to actually perform the decommissioning. For that reason, it is better to take a modelling approach across the portfolio than to try to prove the cost savings using a list of known projects.

The complexity that legacy systems introduce to new system development is largely proportional to the cost of developing information interfaces to those systems. Because the number of interfaces grows rapidly with the number of systems, the complexity they introduce is modelled here as a logarithmic function, as shown below.

[Figure 1: the complexity factor as a logarithmic function of the number of legacy systems]

Any model is only going to provide a basis for estimating, but I outline here a simple and effective approach.

Start by defining c as the investment per new system and n as the number of system builds expected over five years. The investment cost for a domain is therefore c × n. For this example, assume c is $2M and n is 3, giving an investment of $6M.

However, the legacy systems (l) add complexity at a rate that increases most rapidly at first before trailing off (logarithmically). The complexity factor (f) depends on the ratio of the cost of developing a new system (c) to the average interface cost (i):

f = log_(c/i)(l + 1)

For this example, assume l is 2 and i is $200K, so the base of the logarithm is c/i = 10:

f = log_10(3) ≈ 0.477

The complexity factor can then be applied to the original investment:

c × n × (f + 1)

In this example, the five-year saving from decommissioning the two legacy systems (c × n × f = $6M × 0.477) would be of the order of $2.9M. It is important to note that efficiencies in interfacing provide a similar benefit: as the cost of interfacing drops, the base of the logarithm increases and the complexity factor naturally decreases. Even arguing that the ratio needs to be adjusted doesn’t substantially affect the complexity factor.
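For readers who want to test the numbers, here is a small Python sketch of the model as I read it (the function is my own restatement of the formulas above, not code from the post):

```python
import math

def decommissioning_saving(c, n, l, i):
    """Estimated five-year saving from removing l legacy systems.

    c: investment per new system
    n: number of system builds expected over five years
    l: number of legacy systems adding interface complexity
    i: average cost of an interface to a legacy system
    """
    f = math.log(l + 1, c / i)  # complexity factor: log base c/i of (l + 1)
    # Total investment with legacy complexity is c * n * (f + 1);
    # the saving from decommissioning is the complexity component, c * n * f.
    return c * n * f

# The worked example above: c = $2M, n = 3, l = 2, i = $200K.
saving = decommissioning_saving(c=2_000_000, n=3, l=2, i=200_000)
print(f"~${saving / 1e6:.1f}M")  # prints ~$2.9M
```

Varying i in the sketch also illustrates the interfacing point: halving the interface cost doubles the base of the logarithm and shrinks the complexity factor.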

While this is only one method of identifying a savings trend, it provides a good starting point for more detailed modelling of benefits. At the very least it provides the basis for an argument that the value of decommissioning legacy systems shouldn’t be ignored simply because the exact benefits cannot be immediately identified.

Category: Information Governance

by: Sean.mcclowry
15 Jul 2009

Reporting from the future

Jason Kolb provides a great post on predictive analytics. It’s powerful stuff, but technology is values-neutral and can cause issues just as it can help solve them. To see analytics applied for good, check out this presentation by Hans Rosling.

As with any complex system, it’s easy to get things wrong, and you need to be really smart to really screw up. But investment banks won’t stop trying to predict the future, and people won’t stop playing on the edge. The predictive analytics genie is definitely out of the bottle; the goal is to make sure these predictions come with some controls, along with an understanding of data lineage and of the regulations around risk. That’s one of the reasons why I think Information Development is so important.

Category: Business Intelligence, Information Development

by: Sean.mcclowry
10 Nov 2008

Managing Your Business Data

I had the pleasure of meeting Maria Villar at IBM’s Information on Demand Conference, where she presented on her new book, Managing Your Business Data. It’s a great book for someone looking to make the “case for change”: treating information as an asset across the enterprise, building a culture of better information management and establishing roles and responsibilities for better data management across the organization. There’s a wealth of information in there, and it’s something a business leader can understand just as well as a technologist. I particularly liked the blending of the conceptual (a Maslow’s hierarchy of needs for data management) with the pragmatic (lots of case studies).

Maria tells me the book will be on Amazon soon but in the meantime you can order direct from the publisher at the link above.

Category: Information Governance, Information Strategy

by: Sean.mcclowry
30 Sep 2008

Many eyes for information management

IBM’s Many Eyes product is pretty cool: it’s a way to visualize information, and it comes with a number of tools you can use for textual and more structured information.

Here’s Many Eyes run against a Wikipedia article that describes MIKE2.0:

This is just one view (a wordle, or word cloud), but Many Eyes gives you many ways to visualize the data.
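If you are curious what a wordle is actually built on, it is little more than a table of word frequencies. Here is a minimal Python sketch of that underlying step (the sample text and stop-word list are mine, purely for illustration):

```python
import re
from collections import Counter

# Illustrative stand-in for the text of a wiki article.
text = """MIKE2.0 is an open source methodology for information
management, built around the collaborative, wiki-based development
of information management best practices."""

STOPWORDS = {"is", "an", "for", "a", "the", "of", "and", "around"}

# Lowercase, tokenize, and drop common words that would dominate the cloud.
words = re.findall(r"[a-z0-9.]+", text.lower())
counts = Counter(w for w in words if w not in STOPWORDS)

# A tag-cloud renderer draws the most frequent words largest.
for word, count in counts.most_common(10):
    print(f"{word}: {count}")
```

The visualization layer then just maps those counts to font sizes.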

And to see something really interesting, check it out against some of the latest political dialogue.

But Many Eyes is far from a toy. There’s some real power in being able to connect semantics to analytics. My favorite example comes from another technology, Gapminder: http://www.ted.com/index.php/talks/hans_rosling_shows_the_best_stats_you_ve_ever_seen.html

Have fun with Many Eyes for now, but watch this space – I think we’ll see it grow a lot more!

Category: Web2.0

by: Sean.mcclowry
13 May 2008

Agile Information Development

My experience is that many big, complex organizations need an approach based on Information Management Transformation: a radical change to how they manage their information, referred to in MIKE2.0 as Information Development. But transformation is hard, and there is a need to show value quickly. I’ve seen agile development techniques work for software development; can they work for Information Development?

The idea behind Agile Information Development is to provide an approach for delivering Information Management engagements more quickly using the MIKE2.0 Methodology. Agile development processes can be difficult to apply to information management engagements because of the complexity of historical issues. Agile Information Development makes use of the techniques in XBR, Continuous Implementation and Continuous Improvement, and accelerates them further. These techniques span strategy through to implementation.

To find out more, go to the Agile Information Development Solution Offering. It’s in the early stages, so please jump in and help out!

Category: Information Strategy

by: Andreas.rindler
24 Apr 2008

MIKE2.0 Facelift

As you will have noticed (unless you are a new user), MIKE2.0 just received a major facelift. We have spent the last six weeks introducing new, exciting functionality and a major improvement in the look and feel of the site. Here is a summary of the major changes:

  • New integrated skin across the wiki, blogs and social bookmarking, enabling common navigation and common search

[Screenshot: MIKE2.0 integrated skin navigation]

  • Single sign-on between your wiki account and the bookmarking service, so there is no longer any need to log in twice
  • Improved navigation menu on the MIKE2.0 wiki, giving you easy access to the key pages of the site

[Screenshot: wiki navigation menu]

  • New rating of and commenting on bookmarks, to bring out the bookmarks most valued by the IM community

[Screenshot: bookmark rating and commenting]

  • Social networking where you can share your social profile and connect with other practitioners and build the IM community
  • Upgrade of blogs to WordPress 2.5 with improved blog post management functionality

[Screenshot: WordPress 2.5 editor]

You will have also seen a new “Partners” feature box on the bottom left. We want to thank our supporters and would like to give them an opportunity to feature their site. If you are interested in becoming a supporter for MIKE2.0, please get in touch with the MIKE2.0 Leadership Team.

What’s in the works? We want to provide single sign-on with the group blog so that you can comment without having to enter your name/email address every time. We are also looking to improve how we showcase some of our key contributors. And we would like to improve office integration and WYSIWYG editing functionality.

In the meantime, enjoy the new site, tell your colleagues about it or blog about it.

I also want to say a special thank you to the people who made this major facelift happen: Alex Papadopoulos, Aran Dunkley, Jarrod Poynton, Pete Dakin and Sean McClowry. Thanks for your hard work.

Yours,

Andreas Rindler

Solution Architect for MIKE2.0 Collaboration Platform

Category: MIKE2.0, omCollab, Site Announcements

by: Sean.mcclowry
20 Jan 2008

Sustainable Development requires Information Development

Climate change is increasingly a mainstream issue. There is growing momentum to address it at a number of levels: in the corporate sector, with private citizens and within government.

There is, however, a real danger that the trend towards carbon neutrality becomes analogous to poor systems design. As described by the United Nations, climate change is just one of the 36 areas of Sustainable Development. An Information Development approach can help:

  • Understand the impacts of decisions and how they affect other areas of Sustainability (e.g. how a programme that helps reduce a retailer’s carbon footprint by stopping imports from Africa impacts other areas).

  • Collaboratively develop standards and share lessons learned. This will be a major benefit, as the transformation organizations must go through is so significant. This isn’t necessarily about “trade secrets”, but about an open and transparent forum in which to share information.

  • Provide a collaborative Business Intelligence platform for making objective decisions. Decisions can be based on historical evidence and make use of historical models. Policy analysts, researchers and the public can share different forms of information as part of the process.

  • Enable Information Sharing across different organisations, involving operational and analytical information. Enablers include open and common standards, search and collaboration. Some information can be anonymised, while other content can be seen by both parties.

  • Measure progress based on standard metrics. We need standards because when we don’t account for what we produce, we may get unexpected issues. Information Quality issues are analogous to the pollutants we see from poor sustainability design.

The Information Development approach to Sustainable Development can be applied to design the interaction points between the different areas. It also means the ability to make fact-based decisions, share information between systems and provide easy access to information from complex, federated sources.

Category: Sustainable Development
