by: Robert.hillard
26  Jul  2014

Your insight might protect your job

Technology can make us lazy. In the 1970s and 80s we worried that the calculator would rob kids of insight into the mathematics they were learning. There has long been evidence that writing long-hand and reading from paper are far better vehicles for absorbing knowledge than typing and reading from a screen. Now we need to wonder whether that ultimate pinnacle of humanity’s knowledge, the internet, is actually a negative for businesses and government.

The internet has made a world of experience available to anyone who is willing to spend a few minutes seeking out the connections.  Increasingly we are using big data analytics to pull this knowledge together in an automated way.  Either way, the summed mass of human knowledge often appears to speak as one voice rather than the cacophony that you might expect of a crowd.

Is the crowd killing brilliance?

The crowd quickly sorts out the right answer from the wrong when there is a clear point of reference.  The crowd is really good at responding to even complex questions.  The more black or white the answer is, the better the crowd is at coming to a conclusion.  Even creative services, such as website design, are still problems with a right or wrong answer (even if there is more than one) and are well suited to crowd sourcing.

As the interpretation of the question or weighting of the answer becomes more subjective, it becomes harder to discern the direction that the crowd is pointing with certainty.  The lone voice with a dissenting, but insightful, opinion can be shouted down by the mob.

The power of the internet to answer questions is being used to test new business ideas just as quickly as to find out the population of Nicaragua. Everything from credit cards to consumer devices is being iteratively crowdsourced and crowd-tested to great effect. Rather than losing months to focus groups, product design and marketing, smart companies are asking their customers what they want, getting them involved in building it and then getting early adopters to provide almost instant feedback.

However, the positive can quickly turn negative.  The crowd comments early and often.  The consensus usually reinforces the dominant view.  Like a bad reality show, great ideas are voted off before they have a chance to prove themselves.  If the idea is too left-field and doesn’t fit a known need, the crowd often doesn’t understand the opportunity.

Automating the crowd

In the 1960s and 1970s, many scientists argued that an artificial brain would display true intelligence before the end of the twentieth century. Research efforts largely ground to a halt as approach after approach turned out to be a dead-end.

Many now argue that twenty-first century analytics is bridging the gap. By understanding what the crowd has said and finding the response to millions, hundreds of millions and even billions of similar scenarios, the machine is able to provide a sensible response. This approach even shows promise of meeting the famous Turing test.

While many argue that big data analytics is the foundation of artificial intelligence, it isn’t providing the basis of brilliant or creative insight.  IBM’s Watson might be able to perform amazing feats in games of Jeopardy but the machine is still only regurgitating the wisdom of the crowd in the form of millions of answers that have been accumulated on the internet.

No amount of the crowd or analytics can yet make a major creative leap.  This is arguably the boundary of analytics in the search for artificial intelligence.

Digital Disruption could take out white collar jobs

For the first time, digital disruption, using big data analytics, is putting white collar jobs at the same risk of automation that blue collar workers have had to navigate over the last fifty years. Previously we assumed process automation would solve everything, but our organisations have become far too complex.

Business process management or automation has reached a natural limit in taking out clerical workers. As processes have become more complex, and their number of interactions has grown exponentially, it has become normal for the majority of instances to display some sort of exception. Employees have gone from running processes to handling exceptions. The change in job function has largely masked the loss of traditional clerical work since the start of the mass rollout of business IT.

Most of this exception handling, though, requires insight but no intuitive leap.  When asked, employees will tell you that their skill is to know how to connect the dots in a standard way to every unique circumstance.

Within organisations, email and, increasingly, social platforms have been the tools of choice for collaboration and crowdsourcing solutions to individual process exceptions.  Just as big data analytics is automating the hunt for answers on the internet, it is now starting to offer the promise of the same automation within the enterprise.

In the near future, applications driven by big data analytics will allow computers to move from automating processes to also handling any exceptions in a way that will feel almost human to customers of everything from bank mortgages to electric utilities.

Where to next for the jobs?

Just as many white collar jobs moved from running processes in the 70s and 80s to handling their exceptions in the 90s and the new millennium, those same roles now need to move again and find something new.

At the same time, the businesses they work for are being disrupted by the same digital forces and are looking for new sources of revenue.

These two drivers may come together to create an opportunity for those who have spent their time handling exceptions in customer or internal processes. The future lies in spotting business opportunities through intuitive insights and creative leaps, and turning them into product or service inventions, rather than seeking permission from a crowd that will force a return to the conservative norm.

Perhaps this is why design thinking and similar creative approaches to business have suddenly joined the mainstream.

Category: Information Development, Information Strategy, Web2.0

by: RickDelgado
26  Jul  2014

How Machine Learning is Improving Computer Security

If there’s one thing that keeps business leaders awake at night, it’s worries over data security. Nowadays, every company, no matter its size, uses technology in its operations, whether it’s using cloud systems for email, massive server rooms for handling online transactions, or simply allowing employees to access company information on their smartphones. One misstep could lead to data loss or even data theft, which could end up costing the company serious money. Even mega-corporations like Target aren’t immune to this unfortunate trend. Businesses are looking for ways to make their information more secure, and to do that, many security systems are turning to big data, or more specifically to machine learning, as a way to prevent and combat threats.

 

When you get right down to it, computer security is all about being able to analyze the data. A company’s security is largely dependent on the amount of data analysis it is capable of, along with the quality of that data. A company that can collect a lot of data at once but doesn’t have the means to analyze it properly for threats won’t get very far. The same goes for a business with excellent analytic tools but without the resources to gather and store that information. This matters because without a lot of data, machine learning simply can’t be as effective.

 

For those who aren’t familiar with machine learning, it essentially means a system that is capable of learning from data. The system is given a task, and its algorithm can constantly get better at that task, performing it more efficiently and perhaps even finding new ways to do it. The more data a machine learning system has to work with, the better it will be at its assigned duties. In the case of cyber security, a machine learning system is able to sort through vast sets of big data in order to identify complex signals that it has deemed particularly damaging or threatening.

 

The machine learning approach has a major advantage over the more traditional way of threat detection. With the traditional approach, systems had to look for signatures that had already been determined to be threats. Once one of these signatures was identified within a network, the system would have to either stop the threat from further infiltration or eliminate it. This method has some rather obvious weaknesses, the main one being its non-predictive nature. A threat that doesn’t fit an existing signature would likely not be identified, opening up the network to an attack. In essence, companies and organizations would always be behind prospective attackers looking to steal valuable data. Machine learning is able to address this major weakness. By looking through data for patterns and signals, machine learning is much more capable of predicting future attacks and preventing them, letting the system stay one step ahead of those who intend to do harm. By keeping a database of all known malware, machine learning can root out problems before they happen, which for obvious reasons can be of great value to businesses.
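
To make the contrast concrete, here is a minimal sketch of the traditional signature-based approach (the payloads and hashes are invented for illustration): a file is flagged only if its fingerprint is already on a blacklist, which is exactly why a never-before-seen threat slips through.

```python
# A minimal sketch of traditional signature-based detection (sample payloads
# are invented for the example). Detection is a simple lookup against known
# fingerprints, so anything not already catalogued goes unnoticed.
import hashlib

known_payload = b"previously catalogued malware payload"
signature_blacklist = {hashlib.sha256(known_payload).hexdigest()}

def is_known_threat(file_bytes: bytes) -> bool:
    """Flag a file only if its hash matches a previously recorded signature."""
    return hashlib.sha256(file_bytes).hexdigest() in signature_blacklist

print(is_known_threat(known_payload))                 # True: signature already known
print(is_known_threat(b"brand new malware variant"))  # False: non-predictive by design
```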

 

There are plenty of security tools available for organizations that want to employ machine learning as a defensive measure. With these tools, systems are able to detect aberrant behavior, or actions that fall outside the norm, which can trigger an alert sent to security teams. As machine learning security systems constantly improve, they can narrow down the alerts even further so teams aren’t subject to waves of alerts each and every day. One such machine learning tool, Fortscale, is able to separate abnormal events and place them in a special inbox, allowing IT security personnel to take a look at the problem and address it as needed.
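
By contrast, a learned detector does not need to have seen a specific threat before. Below is a minimal sketch of the idea using scikit-learn’s IsolationForest as an unsupervised anomaly detector; the feature set, thresholds and review queue are assumptions for illustration, not a description of Fortscale or any specific product.

```python
# A minimal sketch (assumed features, not any vendor's product): train an
# unsupervised anomaly detector on "normal" activity, then route anything it
# flags as aberrant into a review queue for the security team.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical features per login event: [hour_of_day, megabytes_downloaded, failed_logins]
normal_activity = np.column_stack([
    rng.normal(13, 2, 1000),    # logins cluster around business hours
    rng.normal(50, 15, 1000),   # typical download volume
    rng.poisson(0.2, 1000),     # occasional failed login
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_activity)

# New events to score: one ordinary, one clearly aberrant (3 a.m., huge download, many failures)
new_events = np.array([
    [14.0, 55.0, 0.0],
    [3.0, 900.0, 12.0],
])

review_queue = []
for event, label in zip(new_events, detector.predict(new_events)):
    if label == -1:  # -1 means the event falls outside learned normal behaviour
        review_queue.append(event)

print(f"{len(review_queue)} event(s) routed to the security review queue")
```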

 

Machine learning is already a common part of security measures. Spam mail filters are one simple example, but other systems exist too, such as antivirus software, intrusion detection programs, and even credit card fraud detection. There is much progress still to be made, but machine learning is at the forefront of raising computer security to a new level. With machine learning properly deployed, business leaders can rest easier knowing their data is more secure.
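
Spam filtering, the example just mentioned, is easy to see in miniature. Here is a minimal sketch, assuming a tiny invented training set, that builds a bag-of-words model and a naive Bayes classifier with scikit-learn; a real filter learns from far larger volumes of labelled mail.

```python
# A minimal sketch of a learned spam filter: a bag-of-words model plus a
# naive Bayes classifier. The tiny training set here is invented for the example.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train_messages = [
    "win a free prize now, click here",
    "limited offer, claim your reward today",
    "meeting moved to 3pm, agenda attached",
    "please review the quarterly report draft",
]
train_labels = ["spam", "spam", "ham", "ham"]

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(train_messages)  # word counts per message

classifier = MultinomialNB()
classifier.fit(features, train_labels)

incoming = ["claim your free reward", "agenda for tomorrow's review meeting"]
for message, label in zip(incoming, classifier.predict(vectorizer.transform(incoming))):
    print(f"{label:>4}: {message}")
```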

Category: Information Development

by: RickDelgado
22  Jul  2014

The Growing Concern of Privacy in the Cloud

Cloud computing is no longer something reserved for high-tech companies or large corporations. If you have a smartphone or use a tablet, you are likely part of the cloud. So what is cloud computing? While there’s no single definition for it, it essentially means sharing or storing information on remote servers that can be accessed over the internet. If you use email, your emails are stored on the cloud. If you’ve ever saved a document on Google Drive, that’s all saved on the cloud. Apps are also downloaded off of the cloud. Cloud computing is certainly convenient, but it also opens the door to some pressing concerns about privacy, especially when it comes to sensitive data. It’s a concern many cloud providers will need to address, and they need to do it soon.

 

The issue of privacy has been in the headlines in recent weeks. Back in June, the U.S. Supreme Court ruled that law enforcement could not search people’s cell phones without first obtaining a warrant. The side supporting warrantless searches contended that police didn’t need a judge’s permission to search what’s in a suspect’s pockets, which includes that person’s phone. But the opposing side, which the Supreme Court agreed with, said the issue wasn’t the cell phone itself but rather the data contained on that cell phone. Data stored in the cloud was deemed to be private and protected, though further challenges will likely come in future cases. The main point is that any line distinguishing physical possessions from those stored electronically is vanishing quickly, if it hasn’t vanished already.

 

Many businesses have turned to cloud computing as a way to optimize their productivity. That usually requires important and vital information be stored on the cloud by way of cloud providers. That practice leads to some interesting philosophical questions about who actually owns that data. If a cloud provider stores it on their own servers, is it theirs, or does it still belong to the company or individual that hired them? So far, there are no clear answers to the questions that have arisen from this complex issue, but it’s a major reason why some businesses have been reluctant to adopt cloud computing. Data is an essential ingredient of any business, and compromising it in any way could be disastrous.

 

When using a cloud provider, businesses will often have to sign some sort of confidentiality agreement. Protecting the client’s privacy then becomes part of the provider’s responsibility, but the protections offered can vary greatly depending on a number of factors. For example, even if something is stored on the cloud, it still must be stored on an actual physical server, and that server must be located somewhere. If that server is located in a different country, the data becomes subject to the laws of that country. That means the privacy protections offered can depend on the privacy laws of the country in question. A report presented at the World Privacy Forum describes other adverse consequences as well: data stored with a third party may lose some legal protections, and those weaker protections may mean easier access by governments or private litigants. These are all concerns that need to be considered when businesses choose to go with a cloud provider for their data needs.

 

Privacy in the cloud is also a major concern in education. With lots of data on students to handle, many schools and districts are turning to third party cloud providers. In fact, one study puts the number of districts that rely on cloud services at 95%. But with this massive growth also come worries over student privacy. Many times, parents aren’t even informed that their children’s information is being stored in the cloud. The same study found that only one-fourth of districts tell parents they’re using the cloud. Even more alarming is how districts often surrender control of student information when using a provider. Fewer than 7% of provider contracts actually restrict the selling or marketing of student data, and most contracts make no mention of parental notice or consent. The same study recommends greater transparency between districts and cloud providers, and that districts establish new policies clearly outlining how cloud services will be used.

 


 

All is not bleak, however, with the world of privacy and cloud computing. Cloud providers that don’t have adequate privacy protection will likely go out of business quickly, so most will look to prop up their reputations by making privacy a top priority. There are also other ways to protect a company’s data through authentication techniques and authorization programs. Concerns over privacy shouldn’t dissuade companies from taking advantage of the cloud, but they should be prepared to address the issue and proceed with caution.

 

Category: Information Development

by: Ocdqblog
19  Jul  2014

Micro-Contributions form a Collaboration Macro

Collaboration is often cited as a key success factor in many enterprise information management initiatives, such as metadata management, data quality improvement, master data management, and information governance. Yet it’s often difficult to engage individual contributors in these efforts because everyone is busy and time is a zero-sum game. However, a successful collaboration needn’t require a major time commitment from all contributors.

While a small core group of people must be assigned as full-time contributors to enterprise information management initiatives, success hinges on a large extended group of people making what Clive Thompson calls micro-contributions. In his book Smarter Than You Think: How Technology is Changing Our Minds for the Better, he explained that “though each micro-contribution is a small grain of sand, when you get thousands or millions you quickly build a beach. Micro-contributions also diversify the knowledge pool. If anyone who’s interested can briefly help out, almost everyone does, and soon the project is tapping into broad expertise.”

Wikipedia is a great example since anyone can click on the edit tab of an article and become a contributor. “The most common edit on Wikipedia,” Thompson explained, “is someone changing a word or phrase: a teensy contribution, truly a grain of sand. Yet Wikipedia also relies on a small core of heavily involved contributors. Indeed, if you look at the number of really active contributors, the ones who make more than a hundred edits a month, there are not quite thirty-five hundred. If you drill down to the really committed folks—the administrators who deal with vandalism, among other things—there are only six or seven hundred active ones. Wikipedia contributions form a classic long-tail distribution, with a small passionate bunch at one end, followed by a line of grain-of-sand contributors that fades off over the horizon. These hardcore and lightweight contributors form a symbiotic whole. Without the micro-contributors, Wikipedia wouldn’t have grown as quickly, and it would have a much more narrow knowledge base.”

MIKE2.0 is another great example since it’s a collaborative community of information management professionals contributing their knowledge and experience. While MIKE2.0 has a small group of core contributors, micro-contributions improve the breadth and depth of its open source delivery framework for enterprise information management.

The business, data, and technical knowledge about the end-to-end process of how information is being developed and used within your organization is not known by any one individual. It is spread throughout your enterprise. A collaborative effort is needed to make sure that important details are not missed—details that determine the success or failure of your enterprise information management initiative. Therefore, be sure to tap into the distributed knowledge of your enterprise by enabling and encouraging micro-contributions. Micro-contributions form a collaboration macro. Just as a computer macro is comprised of a set of instructions that are used to collectively perform a particular task, think of collaboration as a macro that is comprised of a set of micro-contributions that collectively manage your enterprise information.

 

Category: Information Development

by: Bsomich
19  Jul  2014

Community Update.

Missed what’s been happening in the MIKE2.0 community? Check out our bi-weekly update:

 

 

Data Governance: How competent is your organization?

One of the key concepts of the MIKE2.0 Methodology is that of an Organisational Model for Information Development. This is an organisation that provides a dedicated competency for improving how information is accessed, shared, stored and integrated across the environment.

Organisational models need to be adapted as the organisation moves up the five Maturity Levels for the Information Development competencies described below:

Level 1 Data Governance Organisation – Aware

  • An Aware Data Governance Organisation knows that it has issues around Data Governance but is doing little to respond to them. Awareness has typically come as the result of major Data Governance-related issues that have occurred. An organisation may also be at the Aware state if it is going through the process of moving to a state where it can effectively address issues, but is only in the early stages of the programme.
Level 2 Data Governance Organisation – Reactive
  • A Reactive Data Governance Organisation is able to address some of its issues, but not until some time after they have occurred. The organisation is not able to address root causes or predict when issues are likely to occur. “Heroes” are often needed to address complex data quality issues, and the impact of fixes made on a system-by-system level is often poorly understood.
Level 3 Data Governance Organisation – Proactive
  • A Proactive Data Governance Organisation can stop issues before they occur, as it is empowered to address root-cause problems. At this level, the organisation also conducts ongoing monitoring of data quality so that issues that do occur can be resolved quickly.
Level 4 Data Governance Organisation – Managed
Level 5 Data Governance Organisation – Optimal

The MIKE2.0 Solution for the Centre of Excellence provides an overall approach to improving Data Governance through a Centre of Excellence delivery model for Infrastructure Development and Information Development. We recommend this approach as the most efficient and effective model for building this common set of capabilities across the enterprise environment.

Feel free to check it out when you have a moment and offer any suggestions you may have to improve it.

Sincerely,

MIKE2.0 Community

Contribute to MIKE:

Start a new article, help with articles under construction or look for other ways to contribute.

Update your personal profile to advertise yourself to the community and interact with other members.

Useful Links:
Home Page
Login
Content Model
FAQs
MIKE2.0 Governance


Did You Know?
All content on MIKE2.0 and any contributions you make are published under the Creative Commons license. This allows you free re-use of our content as long as you add a brief reference back to us.

 

This Week’s Blogs for Thought:

Big Data 101

Big data. What exactly is it?

Big data has been hitting the headlines in 2014 more so than any other year. For some people that’s no surprise, and they have a grasp on what big data really is. For others, the two words “big” and “data” don’t conjure up a meaning of any kind; instead they stir confusion and, many times, misunderstanding.

Read more.

Dinosaurs, Geologists, and the IT-Business Divide

“Like it or not, we live in a tech world, from Apple to Hadoop to Zip files. You can’t ignore the fact that technology touches every facet of our lives. Better to get everything you can, leveraging every byte and every ounce of knowledge IT can bring.”

So write Thomas C. Redman and Bill Sweeney on HBR. Of course, they’re absolutely right. But the gap between what organizations can accomplish with technology and what they actually accomplish remains considerable.

Read more.

Data Quality Profiling Considerations

Data profiling is an excellent diagnostic method for gaining additional understanding of the data. Profiling the source data helps inform both business requirements definition and detailed solution designs for data-related projects, as well as enabling data issues to be managed ahead of project implementation.

Read more.

 

  


 

Category: Information Development

by: Alandduncan
19  Jul  2014

Data Quality Profiling Considerations

Data profiling is an excellent diagnostic method for gaining additional understanding of the data. Profiling the source data helps inform both business requirements definition and detailed solution designs for data-related projects, as well as enabling data issues to be managed ahead of project implementation.

Profiling of a data set should be measured with reference to an agreed set of Data Quality Dimensions (e.g. per those proposed in the recent DAMA white paper).

Profiling may be required at several levels:

• Simple profiling within a single table (e.g. Primary Key constraint violations)
• Medium complexity profiling across two or more interdependent tables (e.g. Foreign Key violations)
• Complex profiling across two or more data sets, with applied business logic (e.g. reconciliation checks)

Note that field-by-field analysis is required to truly understand the data gaps.
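
To make the first two levels and the field-by-field point concrete, here is a minimal sketch using pandas (the table and column names are invented for the example): it profiles completeness per field, checks a primary key constraint within a single table, and checks a foreign key relationship across two tables. The same counts can serve as the baseline measures discussed below.

```python
# A minimal profiling sketch (invented table/column names) covering:
#   - field-by-field completeness
#   - simple profiling within a single table (primary key uniqueness)
#   - medium-complexity profiling across two tables (foreign key violations)
import pandas as pd

customers = pd.DataFrame({
    "customer_id": [1, 2, 2, 4],          # duplicate key: a PK violation
    "email": ["a@x.com", None, "c@x.com", "d@x.com"],
})
orders = pd.DataFrame({
    "order_id": [10, 11, 12],
    "customer_id": [1, 2, 99],            # 99 has no matching customer: an FK violation
})

# Field-by-field completeness: proportion of missing values per column
print("Null ratio per field:\n", customers.isna().mean(), sep="")

# Primary key check: customer_id should uniquely identify each row
pk_violations = customers[customers.duplicated(subset="customer_id", keep=False)]
print(f"Primary key violations: {len(pk_violations)} row(s)")

# Foreign key check: every order must reference an existing customer
fk_violations = orders[~orders["customer_id"].isin(customers["customer_id"])]
print(f"Foreign key violations: {len(fk_violations)} row(s)")
```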

Any data profiling analysis must not only identify the issues and underlying root causes, but must also identify the business impact of the data quality problem (measured by its effect on effectiveness, efficiency and risk). This will help identify the value in remediating the data – great for your data quality Business Case. Root cause analysis also helps identify any process outliers and drives out requirements for remedial action on managing any identified exceptions.

Be sure to profile your data and take baseline measures before applying any remedial actions – this will enable you to measure the impact of any changes.

I strongly recommend that Data Quality Profiling and root-cause analysis be undertaken as an initiation activity in all data warehouse, master data and application migration projects.

Category: Business Intelligence, Data Quality, Enterprise Data Management, Information Development, Information Governance, Information Management, Information Strategy, Information Value, Metadata

by: Gil Allouche
18  Jul  2014

Big Data 101

Big data.

What exactly is it?

Big data has been hitting the headlines in 2014 more so than any other year. For some people that’s no surprise, and they have a grasp on what big data really is. For others, the two words “big” and “data” don’t conjure up a meaning of any kind; instead they stir confusion and, many times, misunderstanding.

There’s a reason big data keeps hitting the headlines — it’s making a tremendous impact on our society. And as it expands to the cloud via Amazon Elastic MapReduce and others, its adoption will only increase. Because of that, it’s important that an understanding of what it is, why it’s important and how it can be used be fostered among tech professionals, business professionals and the general public to ensure its continued success well into the future.

WHAT IS BIG DATA?
To the non-tech person, as with any tech term or product, the official lingo used to describe big data sounds like a jumble of random, unintelligible words. Users like to throw around terms like nodes, data sets, NoSQL, Hadoop, Hive, MapReduce, query and a host of others. To most people, however, these words don’t mean anything.

Put simply, big data is the overarching term for the process of gathering, storing and analyzing extremely large amounts of data (instead of megabyte and gigabyte, we’re talking petabyte (the “byte” after terabyte) and potentially exabyte).

What kind of data is it? It can span the spectrum from internet browsing information, social media posts and mobile app usage to sales data, income reports and the millions of movements tracked during a basketball game. Any and all data can make up big data. Really, it’s whatever data a company or enterprise decides to gather.

Using software like Hadoop, Hive, Pig, Presto or others, the data is gathered in many different ways and stored either on large servers on the company premises or in the cloud, an increasingly popular option, through a big-data-in-the-cloud provider. The data is then analyzed. Best of all, much of the gathering and analytics is now done in real time. It’s completely changing the business world.
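
To give a feel for the programming model underneath tools like Hadoop, here is a minimal word-count sketch in the Hadoop Streaming style, where a mapper emits key/value pairs and a reducer aggregates them. The file name and the local pipeline in the comment are illustrative only; real jobs run distributed across a cluster, often via higher-level tools like Hive or Pig.

```python
# A minimal sketch of the MapReduce model used by Hadoop Streaming: the mapper
# emits (word, 1) pairs, the framework sorts them by key, and the reducer sums
# the counts. Simulate locally (hypothetical file name) with:
#   cat input.txt | python wordcount.py map | sort | python wordcount.py reduce
import sys


def mapper():
    for line in sys.stdin:
        for word in line.strip().lower().split():
            print(f"{word}\t1")


def reducer():
    current_word, count = None, 0
    for line in sys.stdin:
        word, value = line.rstrip("\n").split("\t")
        if word != current_word:
            if current_word is not None:
                print(f"{current_word}\t{count}")
            current_word, count = word, 0
        count += int(value)
    if current_word is not None:
        print(f"{current_word}\t{count}")


if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()
```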

WHY IS BIG DATA IMPORTANT?
Once you know what big data is, it becomes easier to understand why it’s so important. The changes that it can bring about are truly astounding.

Big data is important because with modern technology like the internet, social media and mobile apps on smart phones and tablets, there is more information available and more information being created than ever before. However, without big data technology there would be no way to gather, store or analyze all this information and what could be used would be slow and inefficient. Big data is taking this jumble of information and turning it into a gold mine — a place where companies can go to find solutions to some of their most difficult questions.

Big data not only handles large amounts of information, but it handles it quickly and efficiently. It’s drastically reducing the downtime companies encountered in the past. Changes can be implemented quickly and monitored in real-time. Emergencies can be handled in a matter of minutes instead of days or weeks.

With big data, companies are finding new avenues to market and sell their products and services.

For the consumer, big data means better products, lower costs and quicker service.

Big data makes every business or enterprise that adopts it better. That being said, big data won’t solve every problem. Certainly a company still needs exceptional leadership, dedicated employees and valuable products to succeed. However, when you take a successful business, or a startup that’s on the right path, and combine it with big data, you’re going to see tremendous results.

HOW CAN YOU TAKE ADVANTAGE OF BIG DATA?
Begin using big data now. The sooner you start, the better off you’ll be. It’s not just a hype word or some novel technology anymore. Today it’s a necessity.

That being said, don’t be naive! You need to understand what big data is, what it can do and how it works before you implement it. It has to have a purpose or you’re just wasting your company’s time and money. Look at how it can help your business, because it certainly can, and then go and implement it.

Make sure to understand the options available. Do you want a big data system (known as infrastructure in the industry) on-site or do you want to go through cloud computing? There are pros and cons to each, but increasingly companies are finding decreased risk, greater flexibility and lower costs with big data in the cloud. How much storage will you need and what growth do you expect in the future? What kind of capital do you have available to invest currently?

Asking the right questions will go a long way to ensure successful adoption of big data technology.

Finally, make big data a company priority. Invest in leadership that supports it. Invest in employees who grasp it and are enthusiastic about it. Get the technology and stay on top.

There’s no substitute for the dynamic change that big data will bring to your business.

Category: Enterprise Data Management

by: Bsomich
12  Jul  2014

Community Update.

Missed what’s been happening in the MIKE2.0 community? Check out our biweekly update for the latest blog posts, wiki articles and information management resources:

 


Business Drivers for Better Metadata Management

There are a number of Business Drivers for Better Metadata Management that have caused metadata management to grow in importance over the past few years at most major organisations. These organisations are focused on more than just a data dictionary across their information – they are building comprehensive solutions for managing business and technical metadata.

Our wiki article on the subject explores the many factors contributing to the growth of metadata and offers guidance on how to better manage it.

Feel free to check it out when you have a moment.

Sincerely,

MIKE2.0 Community


 

This Week’s Blogs for Thought:

Information Requirements Gathering: The One Question You Must Never Ask 

Over the years, I’ve tended to find that asking any individual or group the question “What data/information do you want?” gets one of two responses:

“I don’t know.” Or;
“I don’t know what you mean by that.”

End of discussion, meeting over, pack up, go home, nobody is any the wiser. Result? IT makes up the requirements based on what they think the business should want, the business gets all huffy because IT doesn’t understand what they need, and general disappointment and resentment ensues. Clearly for Information Management & Business Intelligence solutions, this is not a good thing.

Read more.

Overcoming Outcome Bias

What is more important, the process or its outcome? Information management processes, like those described by the MIKE2.0 Methodology, drive the daily operations of an organization’s business functions as well as support the tactical and strategic decision-making processes of its business leaders. However, an organization’s success or failure is usually measured by the outcomes produced by those processes.

As Duncan Watts explained in his book Everything Is Obvious: How Common Sense Fails Us, “rather than the evaluation of the outcome being determined by the quality of the process that led to it, it is the observed nature of the outcome that determines how we evaluate the process.” This is known as outcome bias.

Read more.

Evernote’s Three Laws of Data Protection

“It’s all about bucks, kid. The rest is conversation.” –Michael Douglas as Gordon Gekko, Wall Street (1987)

Sporting more than 60 million users, Evernote is one of the most popular productivity apps out there these days. You may in fact use the app to store audio notes, video, pics, websites, and perform a whole host of other tasks.

Read more.


 

 

Category: Information Development

by: Alandduncan
01  Jul  2014

Information Requirements Gathering: The One Question You Must Never Ask!

Over the years, I’ve tended to find that asking any individual or group the question “What data/information do you want?” gets one of two responses:

“I don’t know.” Or;

“I don’t know what you mean by that.”

End of discussion, meeting over, pack up, go home, nobody is any the wiser. Result? IT makes up the requirements based on what they think the business should want, the business gets all huffy because IT doesn’t understand what they need, and general disappointment and resentment ensues.

Clearly for Information Management & Business Intelligence solutions, this is not a good thing.

So I’ve stopped asking the question. Instead, when doing requirements gathering for an information project, I go through a workshop process with the following outline agenda:

Context setting: Why information management / Business Intelligence / Analytics / Data Governance* is generally perceived to be a “good thing”. This is essentially a very quick précis of the BI project mandate, and should aim at putting people at ease by answering the question “What exactly are we all doing here?”

(*Delete as appropriate).

Business Function & Process discovery: What do people do in their jobs – functions & tasks? If you can get them to explain why they do those things – i.e. to what end purpose or outcome – so much the better (though this can be a stretch for many.)

Challenges: What problems or issues do they currently face in their endeavours? What prevents them from succeeding in their jobs? What would they do differently if they had the opportunity to do so?

Opportunities: What is currently good? What existing capabilities (systems, processes, resources) are in place that could be developed further or re-used/re-purposed to help achieve the desired outcomes?

Desired Actions: What should happen next?

As a consultant, I see it as part of my role to inject ideas into the workshop dialogue too, using a couple of question forms specifically designed to provoke a response:

“What would happen if…X”

“Have you thought about…Y”

“Why do you do/want…Z”.

Notice that as the workshop discussion proceeds, the participants will naturally start to explore aspects that relate to later parts of the agenda – this is entirely ok. The agenda is there to provide a framework for the discussion, not a constraint. We want people to open up and spill their guts, not clam up. (Although beware of the “rambler” who just won’t shut up but never gets to the point…)

Notice also that not once have we actively explored the “D” or “I” words. That’s because as you explore the agenda, any information requirements will either naturally fall out of the discussion as it proceeds, or else you can infer the information requirements based on the other aspects of the discussion.

As the workshop attendees explore the different aspects of the session, you will find that the discussion will touch upon a number of different themes, which you can categorise and capture on the fly (I tend to do this on sheets of butcher’s paper tacked to the walls, so that the findings are shared and visible to all participants). Comments will typically fall into the following broad categories:

* Functions: Things that people do as part of doing business.
* Stakeholders: people who are involved (including helpful people elsewhere in the organisation – follow up with them!)
* Inhibitors: Things that currently prevent progress (these either become immediate scope-change items if they are show-stoppers for the current initiative, or else they form additional future project opportunities to raise with management)
* Enablers: Resources to make use of (e.g. data sets that another team hold, which aren’t currently shared)
* Constraints: “non-negotiable” aspects that must be taken into account. (Note: I tend to find that all constraints are actually negotiable and can be overcome if there is enough desire, money and political will.)
* Considerations: Things to be aware of that may have an influence somewhere along the line.
* Source systems: places where data comes from
* Information requirements: Outputs that people want

Here’s a (semi) fictitious example:

e.g. ADD: “What does your team do?”

Workshop Victim Participant #1: “Well, we’re trying to reconcile the customer account balances with the individual transactions.”

ADD: “And why do you want to do that?”

Workshop Victim Participant #2: “We think there’s a discrepancy in the warehouse stock balances, compared with what’s been shipped to customers. The sales guys keep their own database of customer contracts and orders and Jim’s already given us a dump of the data, while finance run the accounts receivable process. But Sally the Accounts Clerk doesn’t let the numbers out under any circumstances, so basically we’re screwed.”

Functions: Sales Processing, Contract Management, Order Fulfilment, Stock Management, Accounts Receivable.
Stakeholders: Warehouse team, Sales team (Jim), Finance team.
Inhibitors: Finance don’t collaborate.
Enablers: Jim is helpful.
Source Systems: Stock System, Customer Database, Order Management, Finance System.
Information Requirements: Orders (Quantity & Price by Customer, by Salesman, by Stock Item), Dispatches (Quantity & Price by Customer, by Salesman, by Warehouse Clerk, by Stock Item), Financial Transactions (Value by Customer, by Order Ref)

You will also probably end up with the attendees identifying a number of immediate self-assigned actions arising from the discussion – good ideas that either haven’t occurred to them before or have sat on the “To-Do” list. That’s your workshop “value add” right there….

e.g.
Workshop Victim Participant #1: “I could go and speak to the Financial Controller about getting access to the finance data. He’s more amenable to working together than Sally, who just does what she’s told.”

Happy information requirements gathering!

Category: Business Intelligence, Data Quality, Enterprise Data Management, Information Development, Information Governance, Information Management, Information Strategy, Information Value, Master Data Management, Metadata

by: Ocdqblog
29  Jun  2014

Overcoming Outcome Bias

What is more important, the process or its outcome? Information management processes, like those described by the MIKE2.0 Methodology, drive the daily operations of an organization’s business functions as well as support the tactical and strategic decision-making processes of its business leaders. However, an organization’s success or failure is usually measured by the outcomes produced by those processes.

As Duncan Watts explained in his book Everything Is Obvious: How Common Sense Fails Us, “rather than the evaluation of the outcome being determined by the quality of the process that led to it, it is the observed nature of the outcome that determines how we evaluate the process.” This is known as outcome bias.

While an organization is enjoying positive outcomes, such as exceeding its revenue goals for the current fiscal period, outcome bias bathes its processes in a rose-colored glow. Information management processes must be providing high-quality data to decision-making processes, which business leaders are using to make good decisions. However, when an organization is suffering from negative outcomes, such as a regulatory compliance failure, outcome bias blames broken information management processes and poor data quality for leading to bad decision-making.

“Judging the merit of a decision can never be done simply by looking at the outcome,” explained Jeffrey Ma in his book The House Advantage: Playing the Odds to Win Big In Business. “A poor result does not necessarily mean a poor decision. Likewise a good result does not necessarily mean a good decision.”

“We are prone to blame decision makers for good decisions that worked out badly and to give them too little credit for successful moves that appear obvious after the fact,” explained Daniel Kahneman in his book Thinking, Fast and Slow.

While risk mitigation is an oft-cited business justification for investing in information management, Kahneman also noted how outcome bias can “bring undeserved rewards to irresponsible risk seekers, such as a general or an entrepreneur who took a crazy gamble and won. Leaders who have been lucky are never punished for having taken too much risk. Instead, they are believed to have had the flair and foresight to anticipate success. A few lucky gambles can crown a reckless leader with a halo of prescience and boldness.”

Outcome bias triggers overreactions to both success and failure. Organizations that try to reverse engineer a single, successful outcome into a formal, repeatable process often fail, much to their surprise. Organizations also tend to abandon a new process immediately if its first outcome is a failure. “Over time,” Ma explained, “if one makes good, quality decisions, one will generally receive better outcomes, but it takes a large sample set to prove this.”

Your organization needs solid processes governing how information is created, managed, presented, and used in decision-making. Your organization also needs to guard against outcomes biasing your evaluation of those processes.

In order to overcome outcome bias, Watts recommended we “bear in mind that a good plan can fail while a bad plan can succeed—just by random chance—and therefore judge the plan on its own merits as well as the known outcome.”

 

Category: Data Quality, Information Development
