Open Framework, Information Management Strategy & Collaborative Governance | Data & Social Methodology - MIKE2.0 Methodology

Archive for June, 2014

by: Ocdqblog
29  Jun  2014

Overcoming Outcome Bias

What is more important, the process or its outcome? Information management processes, like those described by the MIKE2.0 Methodology, drive the daily operations of an organization’s business functions as well as support the tactical and strategic decision-making processes of its business leaders. However, an organization’s success or failure is usually measured by the outcomes produced by those processes.

As Duncan Watts explained in his book Everything Is Obvious: How Common Sense Fails Us, “rather than the evaluation of the outcome being determined by the quality of the process that led to it, it is the observed nature of the outcome that determines how we evaluate the process.” This is known as outcome bias.

While an organization is enjoying positive outcomes, such as exceeding its revenue goals for the current fiscal period, outcome bias bathes processes in a rose-colored glow. Information management processes must be providing high-quality data to decision-making processes, which business leaders are using to make good decisions. However, when an organization is suffering from negative outcomes, such as a regulatory compliance failure, outcome bias blames the failure on broken information management processes and poor data quality leading to bad decision-making.

“Judging the merit of a decision can never be done simply by looking at the outcome,” explained Jeffrey Ma in his book The House Advantage: Playing the Odds to Win Big In Business. “A poor result does not necessarily mean a poor decision. Likewise a good result does not necessarily mean a good decision.”

“We are prone to blame decision makers for good decisions that worked out badly and to give them too little credit for successful moves that appear obvious after the fact,” explained Daniel Kahneman in his book Thinking, Fast and Slow.

While risk mitigation is an oft-cited business justification for investing in information management, Kahneman also noted how outcome bias can “bring undeserved rewards to irresponsible risk seekers, such as a general or an entrepreneur who took a crazy gamble and won. Leaders who have been lucky are never punished for having taken too much risk. Instead, they are believed to have had the flair and foresight to anticipate success. A few lucky gambles can crown a reckless leader with a halo of prescience and boldness.”

Outcome bias triggers overreactions to both success and failure. Organizations that try to reverse engineer a single, successful outcome into a formal, repeatable process often fail, much to their surprise. Organizations also tend to abandon a new process immediately if its first outcome is a failure. “Over time,” Ma explained, “if one makes good, quality decisions, one will generally receive better outcomes, but it takes a large sample set to prove this.”
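
To make Ma’s point about sample size concrete, here is a minimal simulation (my own illustration, not taken from any of the books quoted): a decision process with a genuine 55% edge can look unimpressive over a handful of trials, and only a large sample reveals its quality.

```python
import random

def simulate(edge, trials, seed=1):
    """Count good outcomes for a decision process with a fixed per-decision edge."""
    rng = random.Random(seed)
    return sum(1 for _ in range(trials) if rng.random() < edge)

# A process that produces a good outcome 55% of the time can easily lose
# several early decisions; its quality only becomes clear at scale.
for trials in (5, 50, 5000):
    wins = simulate(edge=0.55, trials=trials)
    print(f"{trials:>5} decisions -> {wins / trials:.0%} good outcomes")
```

Judging the process after five decisions is exactly the trap outcome bias sets; judging it after five thousand is far safer.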

Your organization needs solid processes governing how information is created, managed, presented, and used in decision-making. Your organization also needs to guard against outcomes biasing your evaluation of those processes.

In order to overcome outcome bias, Watts recommended we “bear in mind that a good plan can fail while a bad plan can succeed—just by random chance—and therefore judge the plan on its own merits as well as the known outcome.”

 

Category: Data Quality, Information Development

by: RickDelgado
25  Jun  2014

Cloud Computing Trends to Watch for in 2014 and Beyond

Businesses tapping into the potential of cloud computing now make up the vast majority of enterprises out there. If anything, it’s those companies disregarding the cloud that have fallen way behind the rest of the pack. According to the most recent State of the Cloud Survey from RightScale, 87% of organizations are using the public cloud. Needless to say, businesses have figured out just how advantageous it is to make use of cloud computing, but now that it’s mainstream, experts are trying to predict what’s next for the incredibly useful technology. Here’s a look at some of the latest cloud computing trends for 2014 and what’s to come in the near future.

1. IT Costs Reduced

There are a multitude of reasons companies have gotten involved with cloud computing. One of the main reasons is to reduce operational costs. This has proven true so far, but as more companies move to the cloud, those savings will only increase. In particular, organizations can expect to see a major reduction in IT costs. Adrian McDonald, president of EMEA at EMC, says the unit cost of IT could decrease by more than 38%. This development could allow for newer, more creative services to come from the IT department.

2. More Innovations

Speaking of innovations, the proliferation of cloud computing is helping business leaders use it for more creative solutions. At first, many felt cloud computing would allow companies to run their business in mostly the same way, only with a different delivery model. But with cloud computing becoming more common, companies are finding ways to obtain new insights into new processes, effectively changing the way they were doing business before.

3. Engaging With Customers

To grow a business, one must attract new customers and hold onto those that are already loyal. Customer engagement is extremely important, and cloud computing is helping companies find new ways to do just that. By powering systems of engagement, cloud computing can optimize how businesses interact with customers. This is done with database technologies along with collecting and analyzing big data, which is used to create new methods of reaching out to customers. With cloud computing’s easy scalability, this level of engagement is within the grasp of every enterprise no matter the size.

4. More Media

Another trend to watch out for is the increased use of media among businesses, even if they aren’t media companies. Werner Vogels, the vice president and CTO of Amazon, says that cloud computing is giving businesses media capabilities that they simply didn’t have before. Companies can now offer daily, fresh media content to customers, which can serve as another avenue for revenue and retention.

5. Expansion of BYOD

Bring Your Own Device (BYOD) policies are already pretty popular with companies around the world. With cloud computing reaching a new high point, expect BYOD to expand even faster. With so many wireless devices in employees’ hands, companies need the cloud to store and access valuable company data. IT personnel are also finding ways to use cloud services through mobile device management, mainly to organize and keep track of each worker’s activities.

6. More Hybrid Cloud

Whereas before there was a lot of debate over whether public or private cloud should be used by a company, it has now become clear that businesses are choosing to use hybrid clouds. The same RightScale cloud survey mentioned before shows that 74% of organizations have already developed a hybrid cloud strategy, with more than half of them already using it. Hybrid clouds combine private cloud security with the power and scalability of public clouds, basically giving companies the advantages of both. It also allows IT to come up with customized solutions while maintaining a secure infrastructure.

These are just a few of the trends that are happening as cloud computing expands. Its growth has been staggering, fueling greater innovation in companies as they look to save on operational costs. As more and more businesses get used to what the cloud has to offer and how to take full advantage of its benefits, we can expect even greater developments in the near future. For now, the technology will continue to be a valuable asset to every organization that makes the most of it.

Category: Business Intelligence

by: Gil Allouche
24  Jun  2014

Tips for Creating a Big Data Infrastructure

Big data is making an impact in nearly every industry across the globe. It’s fundamentally changing the way companies do business and how consumers find and buy products and services.

With the large amount of change that is occurring so rapidly in the big data industry, companies are feeling the pressure to act quickly to stay ahead of the competition. The sooner companies can begin implementing big data, the better, but only if the company is ready. Big data should propel companies forward, not hold them back. A focus for companies as they begin exploring big data implementation is establishing a successful infrastructure. After all, the infrastructure is where everything begins with big data. If it’s implemented correctly, a premier infrastructure will ensure that big data serves its primary purpose — increasing business success and consumer satisfaction.

Here are four things companies can do to establish a successful infrastructure.

1. Establish a Purpose
Why do you want to use big data and what are you going to use it for? In order for a company to establish the right kind of infrastructure, it needs to know how much and what kind of data it’s going to be handling, along with how that data is going to help the company succeed. From a workforce standpoint, companies need to understand the staffing needs that come with any type of big data infrastructure and make sure they have the manpower to meet those needs. Last, the business needs to understand the financial implications of establishing a successful infrastructure. There will need to be a significant investment of money if it’s going to succeed. However, that investment will pay great dividends in the future, whereas if companies take shortcuts to reduce expenditures, they will pay significantly more later and hinder their ability to innovate now and in the future.

2. Secure the Data
Securing data is the most important thing a company can do. An enormous advantage for companies with their own infrastructure is the inherent data security that comes with it. Unlike with big data in the cloud, companies control all aspects of the data. However, companies still have work to do, and if the company lacks expertise in this area, a big data platform in the cloud can actually be the more secure route. When evaluating a cloud computing provider, it’s important to establish and understand who can access the data, when they can access it and what happens with the data when it’s accessed. Appropriate and simple measures will ensure your data remains safe.

3. Make it Scalable
Your company is going to evolve, so make sure your infrastructure can handle change. In the recently published eBook “Running a Big Data Infrastructure: Five Areas That Need Your Attention,” this ability to change is called “elasticity.” Your infrastructure has to be flexible enough to move through the ebbs and flows of your business — the good times and the bad. This type of flexibility doesn’t necessarily require more and more servers to handle the excess information. It requires great “elasticity” in the existing technology, allowing it to effectively handle more or less data than originally intended.

4. Monitor the Data
The ebook also points out the importance of constant monitoring. If your company is going to install big data infrastructure, then it signifies that big data will play a big part in the future of your business. Because of that, it’s important that your infrastructure is up to speed, all the time. Delays of any sort in the infrastructure can prove extremely costly to your business. As with any new technology, there are going to be glitches and bugs in the system. Without consistent, constant monitoring, those could have an enormous impact on your company. With monitoring, however, they become small inconveniences instead of big problems.

Implementing big data infrastructure can be one of the best moves for your business if it’s done correctly. It allows companies to access and analyze troves of information that can improve every aspect of a company. Companies will succeed by understanding why they want to implement big data and how it’s going to be used, and then by focusing on making it secure, flexible and monitored.

Category: Enterprise Data Management

by: Robert.hillard
22  Jun  2014

The Quantum Computer dream could be killed by Information Management

For years now the physics community has been taking the leap into computer science through the pursuit of the quantum computer.  As weird as the concepts underpinning the idea of such a device are, even weirder is the threat that this machine of the future could pose to business and government today.

There are many excellent primers on quantum computing but in summary physicists hope to be able to use the concept of superposition to allow one quantum computer bit (called a “qubit”) to carry the value of both zero and one at the same time and also to interact with other qubits which also have two simultaneous values.
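
A rough way to picture this is that the state of a register of n qubits is described by 2^n amplitudes, one for every possible bit pattern. The sketch below is a classical simulation for illustration only (real hardware does not store this vector explicitly); it builds a three-qubit register in a uniform superposition:

```python
import numpy as np

n = 3                                    # qubits in the register
dim = 2 ** n                             # an n-qubit state has 2**n amplitudes
state = np.full(dim, 1 / np.sqrt(dim))   # uniform superposition over all bit patterns

for basis, amplitude in enumerate(state):
    print(f"|{basis:0{n}b}>  amplitude {amplitude:.3f}  probability {abs(amplitude) ** 2:.3f}")
```

Each of the eight bit patterns carries equal weight, which is the sense in which the register holds every combination "at the same time" until it is measured.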

The hope is that a quantum computer would come up with answers to useful questions in far fewer processing steps than a conventional computer, as many different combinations would be evaluated at the same time.  Algorithms that use this approach are generally in the category of solution finding (best paths, factors and other similar complex problems).

As exciting as the concept of a quantum computer sounds, one of the applications of this approach would be a direct threat to many aspects of modern society.  Shor’s algorithm provides an approach to integer factorisation using a quantum computer which is like a passkey to the encryption used across our digital world.

The cryptography techniques that dominate the internet are based on the principle that it is computationally infeasible to find the factors of a large number.  However, Shor’s algorithm provides an approach that would crack the code if a quantum computer could actually be built.
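
Some rough arithmetic shows the gap. For a 2048-bit modulus of the kind used in RSA, naive trial division would need on the order of 2^1024 operations, while Shor’s algorithm on an ideal quantum computer would need a number of steps that grows only polynomially with the key length. The figures below are my own order-of-magnitude illustrations, not precise operation counts:

```python
import math

bits = 2048                  # size of a typical RSA modulus
N = 2 ** bits                # stand-in for a worst-case modulus of that size

# Naive trial division needs on the order of sqrt(N) operations.
trial_division_ops = math.isqrt(N)
print(f"classical trial division: ~2^{trial_division_ops.bit_length() - 1} operations")

# Shor's algorithm scales roughly with the cube of the bit length.
print(f"ideal quantum computer:   ~{bits ** 3:,} elementary steps (order of magnitude)")
```

The best known classical attacks are far better than trial division but still wildly impractical at this key size, which is exactly why a working quantum computer would be so disruptive.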

Does it matter today?

We’re familiar with businesses of today being disrupted by new technology tomorrow.  But just as weird as the concept of quantum superposition is the possibility that the computing of tomorrow could disrupt the business of today!

We are passing vast quantities of data across the internet.  Much of it is confidential and encrypted.  Messages that we are confident will remain between the sender and receiver.  These include payments, conversations and, through the use of virtual private networks, much of the internal content of both companies and government.

It is possible that parties hoping to crack this content in the future are taking the opportunity to store it today.  Due to the architecture of the internet, there is little to stop anyone from intercepting much of this data and storing it without anyone having any hint of its capture.

In the event that a quantum computer capable of running Shor’s algorithm is built, the first thought will need to be to ask what content could have been intercepted and what secrets might be open to being exposed.  The extent of the exposure could be so much greater than might appear at first glance.

How likely is a quantum computer to be built?

There is one commercially available device marketed as a quantum computer, called the D-Wave (from D-Wave Systems).  Sceptics, however, have published doubts that it is really operating based on the principles of Quantum Computing.  Even more importantly, there is no suggestion that it is capable of running Shor’s algorithm or that it is a universal quantum computer.

There is a great deal of evidence that the principles of quantum computing are consistent with the laws of physics as they have been uncovered over the past century.  At the same time as physics is branching into computing, the information theory branch of computing is expanding into physics.  Many recent developments in physics are borrowing directly from the information discipline.

It is possible, though, that information theory as applied to information management problems could provide confidence that a universal quantum computer is not going to be built.

Information entropy

Information entropy was initially constructed by Claude Shannon to provide a tool for quantifying information.  While the principles were deliberately analogous to thermal entropy, it has subsequently become clear that the information associated with particles is as important as the particles themselves.  Chapter 6 of my book, Information-Driven Business, explains these principles in detail.
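
A minimal sketch of Shannon’s measure (my own illustration, not taken from the book): the entropy of a set of values, in bits, is highest when the values are evenly spread and lowest when they are nearly all the same.

```python
import math
from collections import Counter

def shannon_entropy(values):
    """Shannon entropy, in bits, of the observed distribution of values."""
    counts = Counter(values)
    total = len(values)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(shannon_entropy("AAAAAAAB"))   # ~0.54 bits: little variety, little information
print(shannon_entropy("ABCDEFGH"))   # 3.00 bits: eight equally likely symbols
```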

It turns out that systems can be modelled on information or thermal entropy interchangeably.  As a result, a quantum computer that needs to obey the rules of information theory also needs to obey the laws of thermal entropy.

The first law of thermodynamics was first written by Rudolf Clausius in 1850 as: “In all cases in which work is produced by the agency of heat, a quantity of heat is consumed which is proportional to the work done; and conversely, by the expenditure of an equal quantity of work an equal quantity of heat is produced”.

Rewording over time has added sophistication but, fundamentally, the law is a restatement of the conservation of energy.  Any given system cannot increase the quantity of energy or, as a consequence of the connection between thermal and information entropy, the information that it contains.

Any computing device, regardless of whether it is classical or quantum in nature, consumes energy based on the amount of information being derived, as determined by the information entropy of the device.  While it is entirely possible that massive quantities of information could be processed in parallel, there is no escaping this constraint: a quantum computer truly delivering that level of computing would require the same order of energy as the thousands or even millions of classical computers needed to deliver the same result.
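
One way to put rough numbers on that energy requirement is Landauer’s principle, which sets a floor of k_B * T * ln(2) joules for each bit of information irreversibly processed. This framing is my own back-of-the-envelope illustration rather than an argument made in the post, so treat the figures purely as a lower bound:

```python
import math

k_B = 1.380649e-23                       # Boltzmann constant, J/K
T = 300.0                                # room temperature, K
joules_per_bit = k_B * T * math.log(2)   # Landauer floor per irreversible bit operation

# Energy floor for work equivalent to 2**128 bit operations, roughly the scale
# of a brute-force search that quantum parallelism is hoped to shortcut.
ops = 2 ** 128
print(f"{joules_per_bit:.2e} J per bit")
print(f"{joules_per_bit * ops:.2e} J for 2^128 bit operations")
```

At room temperature that works out to something on the order of 10^18 joules, roughly the annual output of dozens of gigawatt-scale power stations, which gives a feel for why the energy bill may not simply disappear.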

I anticipate that developers of quantum computers will either find that the quantity of energy required to process is prohibitive or that their qubits will constantly frustrate their every effort to maintain coherence for long enough to complete useful algorithms.

Could I be wrong?

Definitely!  In a future post I propose to create a scorecard tracking the predictions I’ve made over the years.

However, anyone who claims to really understand quantum mechanics is lying.  Faced with the unbelievably complex wave functions required for quantum mechanics, which seem to defy any real-world understanding, physicist David Mermin famously advised his colleagues to just “Shut up and calculate!”

Because of the impact of a future quantum computer on today’s business, the question is far from academic and deserves almost as much investment as the exploration of these quantum phenomena does in its own right.

At the same time, the investments in quantum computing are far from wasted.  Even if no universal quantum computer is possible, the specialised devices that are likely to follow the D-Wave machine are going to prove extremely useful in their own right.

Ultimately, the convergence of physics and computer science can only benefit both fields as well as the business and government organisations that depend on both.

Category: Information Management, Information Strategy

by: Bsomich
21  Jun  2014

Community Update.

 

 

Have you seen our Open MIKE Series? 

The Open MIKE Podcast is a video podcast show which discusses aspects of the MIKE2.0 framework, and features content contributed to MIKE 2.0 Wiki Articles, Blog Posts, and Discussion Forums.


For more information on MIKE2.0 or how to get involved with our online community, please visit www.openmethodology.org.

Sincerely,

MIKE2.0 Community  

Contribute to MIKE:

Start a new article, help with articles under construction or look for other ways to contribute.

Update your personal profile to advertise yourself to the community and interact with other members.

Useful Links:
Home Page
Login
Content Model
FAQs
MIKE2.0 Governance


 

Did You Know?
All content on MIKE2.0 and any contributions you make are published under the Creative Commons license. This allows you free re-use of our content as long as you add a brief reference back to us.

 

This Week’s Blogs for Thought:

Open and Secure Personal Data

In his book Open Data Now: The Secret to Hot Startups, Smart Investing, Savvy Marketing, and Fast Innovation, Joel Gurin explained a type of Open Data called Smart Disclosure, which was defined as “the timely release of complex information and data in standardized, machine-readable formats in ways that enable consumers to make informed decisions.”
As Gurin explained, “Smart Disclosure combines government data, company information about products and services, and data about an individual’s own needs to help consumers make personalized decisions. Since few people are database experts, most will use this Open Data through an intermediary—a choice engine that integrates the data and helps people filter it by what’s important to them, much the way travel sites do for airline and hotel booking. These choice engines can tailor the options to fit an individual’s circumstances, budget, and priorities.”

Read more.

Careers in Technology: Is there a future?

Is there a future for careers in Information Technology?  Globally, professional societies such as the British Computer Society and the Australian Computer Society have long argued that practitioners need to be professionals.  However, there is a counter-argument that technology is an enabler for all professions and is more generally a capability of many rather than a profession of the few.

Read more.

An Open Source Solution for Better Performance Management 

Today, many organizations are facing increased scrutiny and a higher level of overall performance expectation from internal and external stakeholders. Both business and public sector leaders must provide greater external and internal transparency to their activities, ensure accounting data faces up to compliance challenges, and extract the return and competitive advantage out of their customer, operational and performance information. Managers, investors and regulators have a new perspective on performance and compliance.

Read more.

Forward to a Friend!

Know someone who might be interested in joining the Mike2.0 Community? Forward this message to a friend

Questions?

If you have any questions, please email us at mike2@openmethodology.org.

 

 



Category: Information Development

by: Ocdqblog
17  Jun  2014

Open and Secure Personal Data

In his book Open Data Now: The Secret to Hot Startups, Smart Investing, Savvy Marketing, and Fast Innovation, Joel Gurin explained a type of Open Data called Smart Disclosure, which was defined as “the timely release of complex information and data in standardized, machine-readable formats in ways that enable consumers to make informed decisions.”

As Gurin explained, “Smart Disclosure combines government data, company information about products and services, and data about an individual’s own needs to help consumers make personalized decisions. Since few people are database experts, most will use this Open Data through an intermediary—a choice engine that integrates the data and helps people filter it by what’s important to them, much the way travel sites do for airline and hotel booking. These choice engines can tailor the options to fit an individual’s circumstances, budget, and priorities.”

Remember (if you are old enough) what it was like to make travel arrangements before websites like Expedia, Orbitz, Travelocity, Priceline, and Kayak existed, and you can imagine the immense consumer-driven business potential for applying Smart Disclosure and choice engines to every type of consumer decision.

“Smart Disclosure works best,” Gurin explained, “when it brings together data about the services a company offers with data about the individual consumer. Smart Disclosure includes giving consumers data about themselves—such as their medical records, cellphone charges, or patterns of energy use—so they can choose the products and services uniquely suited to their needs. This is Open Data in a special sense: it’s open only to the individual whom the data is about and has to be released to each person under secure conditions by the company or government agency that holds the data. It’s essential that these organizations take special care to be sure the data is not seen by anyone else. Many people may balk at the idea of having their personal data released in a digital form. But if the data is kept private and secure, giving personal data back to individuals is one of the most powerful aspects of Smart Disclosure.”

Although it sounds like a paradox, the best way to secure our personal data may be to make it open. Currently most of our own personal data is closed—especially to us, which is the real paradox.

Some of our personal data is claimed as proprietary information by the companies we do business with. Data about our health is cloaked by government regulations intended to protect it, but which mostly protect doctors from getting sued while giving medical service providers and health insurance companies more access to our medical history than we have.

If all of our personal data was open to us, and we controlled the authorization of secure access to it, our personal data would be both open and secure. This would simultaneously protect our privacy and improve our choice as consumers.

 

Category: Information Development

by: Bsomich
07  Jun  2014

Community Update.

Missed what’s been happening in the MIKE2.0 community? Check out our bi-weekly update:

 


Available for Order: Information Development Using MIKE2.0

Have you heard? Our new book, “Information Development Using MIKE2.0” is available for order.

The vision for Information Development and the MIKE2.0 Methodology have been available in a collaborative, online fashion since 2006, and are now made available in print publication to a wider audience, highlighting key wiki articles, blog posts, case studies and user applications of the methodology.

Authors for the book include Andreas Rindler, Sean McClowry, Robert Hillard, and Sven Mueller, with additional credit due to Deloitte, BearingPoint and over 7,000 members and key contributors of the MIKE2.0 community. The book has been published in paperback as well as on all major e-book publishing platforms.

Get Involved:

To get your copy of the book, visit our order page on Amazon.com. For more information on MIKE2.0 or how to get involved with our online community, please visit www.openmethodology.org.

Sincerely,

MIKE2.0 Community  

Contribute to MIKE:

Start a new article, help with articles under construction or look for other ways to contribute.

Update your personal profile to advertise yourself to the community and interact with other members.

Useful Links:
Home Page
Login
Content Model
FAQs
MIKE2.0 Governance


 

Did You Know?
All content on MIKE2.0 and any contributions you make are published under the Creative Commons license. This allows you free re-use of our content as long as you add a brief reference back to us.

 

This Week’s Blogs for Thought:

Open and Big Data

In his book Open Data Now: The Secret to Hot Startups, Smart Investing, Savvy Marketing, and Fast Innovation, Joel Gurin explained that Open Data and Big Data are related but very different.

While various definitions exist, Gurin noted that “all definitions of Open Data include two basic features: the data must be publicly available for anyone to use, and it must be licensed in a way that allows for its reuse.”

Read more.

Careers in Technology: Is there a future?

Is there a future for careers in Information Technology?  Globally, professional societies such as the British Computer Society and the Australian Computer Society have long argued that practitioners need to be professionals.  However, there is a counter-argument that technology is an enabler for all professions and is more generally a capability of many rather than a profession of the few.

Read more.

Data Quality Profiling: Do you trust in the dark arts? 

Why estimating Data Quality profiling doesn’t have to be guess-work. 

Data Management lore would have us believe that estimating the amount of work involved in Data Quality analysis is a bit of a “Dark Art,” and to get a close enough approximation for quoting purposes requires much scrying, haruspicy and wet-finger-waving, as well as plenty of general wailing and gnashing of teeth. (Those of you with a background in Project Management could probably argue that any type of work estimation is just as problematic, and that in any event work will expand to more than fill the time available).

Read more.

Forward to a Friend!

Know someone who might be interested in joining the Mike2.0 Community? Forward this message to a friend

 

Category: Information Development
