
Archive for the ‘Information Development’ Category

by: RickDelgado
22  Jul  2014

The Growing Concern of Privacy in the Cloud

Cloud computing is no longer something reserved for high-tech companies or large corporations. If you have a smartphone or use a tablet, you are likely part of the cloud. So what is cloud computing? While there’s no single definition for it, it essentially means sharing or storing information on remote servers that can be accessed over the internet. If you use email, your emails are stored on the cloud. If you’ve ever saved a document on Google Drive, that’s saved on the cloud. Apps are also downloaded from the cloud. Cloud computing is certainly convenient, but it also opens the door to some pressing concerns about privacy, especially when it comes to sensitive data. It’s a concern many cloud providers will need to address, and they need to do it soon.

 

The issue of privacy has been in the headlines in recent weeks. Back in June, the U.S. Supreme Court ruled that law enforcement could not search people’s cell phones without first obtaining a warrant. The side supporting warrantless searches contended that police didn’t need a judge’s permission to search what’s in a suspect’s pockets, including that person’s phone. But the opposing side, with which the Supreme Court agreed, said the issue wasn’t the cell phone itself but rather the data contained on it. Data stored in the cloud was deemed to be private and protected, though further challenges are likely to come in future cases. The main point is that the line distinguishing physical possessions from those stored electronically is vanishing quickly, if it hasn’t vanished already.

 

Many businesses have turned to cloud computing as a way to optimize their productivity. That usually requires that vital business information be stored in the cloud with cloud providers. The practice raises some interesting philosophical questions about who actually owns that data. If a cloud provider stores it on its own servers, is it theirs, or does it still belong to the company or individual that hired them? So far, there are no clear answers to the questions that have arisen from this complex issue, but it’s a major reason why some businesses have been reluctant to adopt cloud computing. Data is an essential ingredient of any business, and compromising it in any way could be disastrous.

 

When using a cloud provider, businesses will often have to sign some sort of confidentiality agreement. Protecting the client’s privacy then becomes part of the provider’s responsibility, but the protections offered can vary greatly depending on a number of factors. For example, even if something is stored in the cloud, it still sits on an actual physical server, and that server must be located somewhere. If that server is located in a different country, the data becomes subject to the laws of that country, which means the privacy protections offered can depend on that country’s privacy laws. A report presented at the World Privacy Forum describes other adverse consequences as well: data stored with a third party may lose some legal protections, and those weaker protections can mean easier access for governments or private litigants. These are all concerns that businesses need to weigh when choosing a cloud provider for their data needs.

 

Privacy in the cloud is also a major concern in education. With large amounts of student data to handle, many schools and districts are turning to third-party cloud providers. In fact, one study puts the share of districts that rely on cloud services at 95%. But with this massive growth also come worries over student privacy. Many times, parents aren’t even informed that their children’s information is being stored in the cloud; the same study found that only one-fourth of districts tell parents they’re using the cloud. Even more alarming is how often districts surrender control of student information when using a provider. Fewer than 7% of provider contracts actually restrict the selling or marketing of student data, and most contracts make no mention of parental notice or consent. The study recommends greater transparency from districts and cloud providers, and that districts establish new policies clearly outlining how cloud services will be used.

 

(Tweet This: About 95% of school districts rely on third party #cloud services but only 25% of districts tell the parents. #privacy)

 

All is not bleak, however, in the world of privacy and cloud computing. Cloud providers that don’t have adequate privacy protection will likely go out of business quickly, so most will look to shore up their reputations by making privacy a top priority. There are also other ways to protect a company’s data, such as authentication and authorization controls. Concerns over privacy shouldn’t dissuade companies from taking advantage of the cloud, but they should be prepared to address the issue and proceed with caution.
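As a rough illustration of the kind of safeguard that keeps sensitive data private even if a provider’s own protections fall short, the sketch below encrypts a file locally before it is ever uploaded, so the provider only stores ciphertext. This is a complementary technique to the authentication and authorization controls mentioned above; the `cryptography` package, the file name, and the key handling are assumptions made purely for the example.

```python
# Illustrative only: encrypt company data on-premises before uploading it,
# so the cloud provider never holds readable plaintext.
# Assumes the third-party `cryptography` package is installed.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # keep this key in-house, never with the provider
cipher = Fernet(key)

with open("customer_records.csv", "rb") as source:
    ciphertext = cipher.encrypt(source.read())

# `ciphertext` is what gets uploaded via whatever API the provider offers.
# Only the key holder can recover the original records:
plaintext = cipher.decrypt(ciphertext)
```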

 

Category: Information Development

by: Ocdqblog
19  Jul  2014

Micro-Contributions form a Collaboration Macro

Collaboration is often cited as a key success factor in many enterprise information management initiatives, such as metadata management, data quality improvement, master data management, and information governance. Yet it’s often difficult to engage individual contributors in these efforts because everyone is busy and time is a zero-sum game. However, a successful collaboration needn’t require a major time commitment from all contributors.

While a small core group of people must be assigned as full-time contributors to enterprise information management initiatives, success hinges on a large extended group of people making what Clive Thompson calls micro-contributions. In his book Smarter Than You Think: How Technology is Changing Our Minds for the Better, he explained that “though each micro-contribution is a small grain of sand, when you get thousands or millions you quickly build a beach. Micro-contributions also diversify the knowledge pool. If anyone who’s interested can briefly help out, almost everyone does, and soon the project is tapping into broad expertise.”

Wikipedia is a great example since anyone can click on the edit tab of an article and become a contributor. “The most common edit on Wikipedia,” Thompson explained, “is someone changing a word or phrase: a teensy contribution, truly a grain of sand. Yet Wikipedia also relies on a small core of heavily involved contributors. Indeed, if you look at the number of really active contributors, the ones who make more than a hundred edits a month, there are not quite thirty-five hundred. If you drill down to the really committed folks—the administrators who deal with vandalism, among other things—there are only six or seven hundred active ones. Wikipedia contributions form a classic long-tail distribution, with a small passionate bunch at one end, followed by a line of grain-of-sand contributors that fades off over the horizon. These hardcore and lightweight contributors form a symbiotic whole. Without the micro-contributors, Wikipedia wouldn’t have grown as quickly, and it would have a much more narrow knowledge base.”

MIKE2.0 is another great example since it’s a collaborative community of information management professionals contributing their knowledge and experience. While MIKE2.0 has a small group of core contributors, micro-contributions improve the breadth and depth of its open source delivery framework for enterprise information management.

The business, data, and technical knowledge about the end-to-end process of how information is being developed and used within your organization is not known by any one individual. It is spread throughout your enterprise. A collaborative effort is needed to make sure that important details are not missed—details that determine the success or failure of your enterprise information management initiative. Therefore, be sure to tap into the distributed knowledge of your enterprise by enabling and encouraging micro-contributions. Micro-contributions form a collaboration macro. Just as a computer macro is comprised of a set of instructions that are used to collectively perform a particular task, think of collaboration as a macro that is comprised of a set of micro-contributions that collectively manage your enterprise information.
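To make the analogy concrete, here is a deliberately simple sketch in which each small function is a micro-contribution and the “macro” is nothing more than running them together to complete a task no single step accomplishes on its own. All of the step names and glossary content are invented for illustration.

```python
# Illustration of the analogy only; the steps are invented examples of
# micro-contributions to a shared business glossary.
def fix_typo(glossary):
    glossary["customer"] = glossary["customer"].strip()
    return glossary

def add_definition(glossary):
    glossary.setdefault("order", "A confirmed request for goods or services.")
    return glossary

def record_owner(glossary):
    glossary["owner"] = "sales operations"
    return glossary

def run_macro(steps, state):
    """The macro: many small contributions, applied together, perform the task."""
    for step in steps:
        state = step(state)
    return state

glossary = {"customer": "  A party that buys goods or services.  "}
print(run_macro([fix_typo, add_definition, record_owner], glossary))
```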

 

Category: Information Development

by: Bsomich
19  Jul  2014

Community Update.

Missed what’s been happening in the MIKE2.0 community? Check out our bi-weekly update:

 

 

Data Governance: How competent is your organization?

One of the key concepts of the MIKE2.0 Methodology is that of an Organisational Model for Information Development. This is an organisation that provides a dedicated competency for improving how information is accessed, shared, stored and integrated across the environment.

Organisational models need to be adapted as the organisation moves up the five Maturity Levels for Information Development competencies described below:

Level 1 Data Governance Organisation – Aware

  • An Aware Data Governance Organisation knows that it has issues around Data Governance but is doing little to respond to them. Awareness has typically come as the result of major Data Governance-related issues that have occurred. An organisation may also be at the Aware state if it is moving towards a state where it can effectively address issues, but is only in the early stages of the programme.
Level 2 Data Governance Organisation – Reactive
  • A Reactive Data Governance Organisation is able to address some of its issues, but not until some time after they have occurred. The organisation is not able to address root causes or predict when issues are likely to occur. “Heroes” are often needed to address complex data quality issues, and the impact of fixes made on a system-by-system basis is often poorly understood.
Level 3 Data Governance Organisation – Proactive
  • A Proactive Data Governance Organisation can stop issues before they occur, as it is empowered to address root-cause problems. At this level, the organisation also conducts ongoing monitoring of data quality so that issues that do occur can be resolved quickly.
Level 4 Data Governance Organisation – Managed
Level 5 Data Governance Organisation – Optimal

The MIKE2.0 Solution for the Centre of Excellence provides an overall approach to improving Data Governance through a Centre of Excellence delivery model for Infrastructure Development and Information Development. We recommend this approach as the most efficient and effective model for building this common set of capabilities across the enterprise environment.

Feel free to check it out when you have a moment and offer any suggestions you may have to improve it.

Sincerely,

MIKE2.0 Community


Did You Know?
All content on MIKE2.0 and any contributions you make are published under the Creative Commons license. This allows you free re-use of our content as long as you add a brief reference back to us.

 

This Week’s Blogs for Thought:

Big Data 101

Big data. What exactly is it?

Big data has been hitting the headlines in 2014 more than in any other year. For some people that’s no surprise, and they have a grasp on what big data really is. For others, the words “big” and “data” don’t conjure up a clear meaning of any kind; instead they stir confusion and, many times, misunderstanding.

Read more.

Dinosaurs, Geologists, and the IT-Business Divide

“Like it or not, we live in a tech world, from Apple to Hadoop to Zip files. You can’t ignore the fact that technology touches every facet of our lives. Better to get everything you can, leveraging every byte and every ounce of knowledge IT can bring.”

So write Thomas C. Redman and Bill Sweeney on HBR. Of course, they’re absolutely right. But the gap between what organizations could accomplish with technology and what they actually accomplish remains considerable.

Read more.

Data Quality Profiling Considerations

Data profiling is an excellent diagnostic method for gaining additional understanding of the data. Profiling the source data helps inform both business requirements definition and detailed solution designs for data-related projects, as well as enabling data issues to be managed ahead of project implementation.

Read more.

 

  


 

Category: Information Development

by: Alandduncan
19  Jul  2014

Data Quality Profiling Considerations

Data profiling is an excellent diagnostic method for gaining additional understanding of the data. Profiling the source data helps inform both business requirements definition and detailed solution designs for data-related projects, as well as enabling data issues to be managed ahead of project implementation.

Profiling of a data set should be measured with reference to an agreed set of Data Quality Dimensions (e.g. those proposed in the recent DAMA white paper).

Profiling may be required at several levels:

• Simple profiling within a single table (e.g. Primary Key constraint violations)
• Medium complexity profiling across two or more interdependent tables (e.g. Foreign Key violations)
• Complex profiling across two or more data sets, with applied business logic (e.g. reconciliation checks)

Note that field-by-field analysis is required to truly understand the data gaps.
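As an illustration of what these profiling levels can look like in practice, the sketch below uses pandas on two hypothetical tables (`customers` and `orders`). The file names, column names, and control total are assumptions made for the example, and equivalent checks could just as easily be written in SQL or a dedicated profiling tool.

```python
# Hypothetical profiling sketch: customers.csv and orders.csv stand in for
# real source extracts; column names and the ledger figure are invented.
import pandas as pd

customers = pd.read_csv("customers.csv")
orders = pd.read_csv("orders.csv")

# Simple profiling within a single table: duplicate primary keys and
# field-by-field completeness.
pk_violations = customers[customers.duplicated("customer_id", keep=False)]
null_rates = customers.isna().mean()

# Medium complexity across interdependent tables: orders whose customer_id
# has no matching customer record (foreign key violations).
fk_violations = orders[~orders["customer_id"].isin(customers["customer_id"])]

# Complex profiling with applied business logic: reconcile order values
# against an agreed ledger control total.
ledger_total = 1_250_000.00
reconciliation_gap = orders["order_value"].sum() - ledger_total

# Baseline measures captured before any remediation, so the impact of
# later fixes can be demonstrated.
baseline = {
    "pk_violations": len(pk_violations),
    "fk_violations": len(fk_violations),
    "reconciliation_gap": round(float(reconciliation_gap), 2),
    "worst_null_rate": float(null_rates.max()),
}
print(baseline)
```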

Any data profiling analysis must not only identify the issues and their underlying root causes, but must also identify the business impact of the data quality problem (measured in terms of effectiveness, efficiency, and risk). This will help identify the value in remediating the data – great for your data quality Business Case. Root cause analysis also helps identify any process outliers and drives out requirements for remedial action on managing any identified exceptions.

Be sure to profile your data and take baseline measures before applying any remedial actions – this will enable you to measure the impact of any changes.

I strongly recommend that Data Quality Profiling and root-cause analysis be undertaken as an initiation activity in all data warehouse, master data and application migration project phases.

Category: Business Intelligence, Data Quality, Enterprise Data Management, Information Development, Information Governance, Information Management, Information Strategy, Information Value, Metadata

by: Bsomich
12  Jul  2014

Community Update.

Missed what’s been happening in the MIKE2.0 community? Check out our biweekly update for the latest blog posts, wiki articles and information management resources:

 


Business Drivers for Better Metadata Management

There are a number of Business Drivers for Better Metadata Management that have caused metadata management to grow in importance at most major organisations over the past few years. These organisations are focused on more than just a data dictionary across their information – they are building comprehensive solutions for managing business and technical metadata.

Our wiki article on the subject explores many of the factors contributing to the growth of metadata and offers guidance on how to manage it better.

Feel free to check it out when you have a moment.

Sincerely,

MIKE2.0 Community



 

This Week’s Blogs for Thought:

Information Requirements Gathering: The One Question You Must Never Ask 

Over the years, I’ve tended to find that asking any individual or group the question “What data/information do you want?” gets one of two responses:

“I don’t know.” Or;
“I don’t know what you mean by that.”

End of discussion, meeting over, pack up go home, nobody is any the wiser. Result? IT makes up the requirements based on what they think the business should want, the business gets all huffy because IT doesn’t understand what they need, and general disappointment and resentment ensues. Clearly for Information Management & Business Intelligence solutions, this is not a good thing.

Read more.

Overcoming Outcome Bias

What is more important, the process or its outcome? Information management processes, like those described by the MIKE2.0 Methodology, drive the daily operations of an organization’s business functions as well as support the tactical and strategic decision-making processes of its business leaders. However, an organization’s success or failure is usually measured by the outcomes produced by those processes.

As Duncan Watts explained in his book Everything Is Obvious: How Common Sense Fails Us, “rather than the evaluation of the outcome being determined by the quality of the process that led to it, it is the observed nature of the outcome that determines how we evaluate the process.” This is known as outcome bias.

Read more.

Evernote’s Three Laws of Data Protection

“It’s all about bucks, kid. The rest is conversation.” –Michael Douglas as Gordon Gekko, Wall Street (1987). Sporting more than 60 million users, Evernote is one of the most popular productivity apps out there these days. You may in fact use the app to store audio notes, video, pics, and websites, and to perform a whole host of other tasks. Read more.


 

 

Category: Information Development

by: Alandduncan
01  Jul  2014

Information Requirements Gathering: The One Question You Must Never Ask!

Over the years, I’ve tended to find that asking any individual or group the question “What data/information do you want?” gets one of two responses:

“I don’t know.” Or;

“I don’t know what you mean by that.”

End of discussion, meeting over, pack up go home, nobody is any the wiser. Result? IT makes up the requirements based on what they think the business should want, the business gets all huffy because IT doesn’t understand what they need, and general disappointment and resentment ensues.

Clearly for Information Management & Business Intelligence solutions, this is not a good thing.

So I’ve stopped asking the question. Instead, when doing requirements gathering for an information project, I go through a workshop process built around the following outline agenda:

Context setting: Why information management / Business Intelligence / Analytics / Data Governance* is generally perceived to be a “good thing”. This is essentially a very quick précis of the BI project mandate, and should aim at putting people at ease by answering the question “What exactly are we all doing here?”

(*Delete as appropriate).

Business Function & Process discovery: What do people do in their jobs – functions & tasks? If you can get them to explain why they do those things – i.e. to what end purpose or outcome – so much the better (though this can be a stretch for many.)

Challenges: what problems or issues do they currently face in their endeavours? What prevents them from succeeding in their jobs? What would they do differently if they had the opportunity to do so?

Opportunities: What is currently good? What existing capabilities (systems, processes, resources) are in place that could be developed further or re-used/re-purposed to help achieve the desired outcomes?

Desired Actions: What should happen next?

As a consultant, I see it as part of my role to inject ideas into the workshop dialogue too, using a couple of question forms specifically designed to provoke a response:

“What would happen if…X”

“Have you thought about…Y”

“Why do you do/want…Z”.

Notice that as the workshop discussion proceeds, the participants will naturally start to explore aspects that relate to later parts of the agenda – this is entirely ok. The agenda is there to provide a framework for the discussion, not a constraint. We want people to open up and spill their guts, not clam up. (Although beware of the “rambler” who just won’t shut up but never gets to the point…)

Notice also that not once have we actively explored the “D” or “I” words. That’s because as you explore the agenda, any information requirements will either naturally fall out of the discussion as it proceeds, or else you can infer the information requirements based on the other aspects of the discussion.

As the workshop attendees explore the different aspects of the session, you will find that the discussion will touch upon a number of different themes, which you can categorise and capture on the fly (I tend to do this on sheets of butcher’s paper tacked to the walls, so that the findings are shared and visible to all participants). Comments will typically fall into the following broad categories:

* Functions: Things that people do as part of doing business.
* Stakeholders: people who are involved (including helpful people elsewhere in the organisation – follow up with them!)
* Inhibitors: Things that currently prevent progress (these either become immediate scope-change items if they are show-stoppers for the current initiative, or else they form additional future project opportunities to raise with management)
* Enablers: Resources to make use of (e.g. data sets that another team hold, which aren’t currently shared)
* Constraints: “non-negotiable” aspects that must be taken into account. (Note: I tend to find that all constraints are actually negotiable and can be overcome if there is enough desire, money and political will.)
* Considerations: Things to be aware of that may have an influence somewhere along the line.
* Source systems: places where data comes from
* Information requirements: Outputs that people want

Here’s a (semi) fictitious example:

e.g. ADD: “What does your team do?”

Workshop Victim Participant #1: “Well, we’re trying to reconcile the customer account balances with the individual transactions.”

ADD: And why do you want to do that?

Workshop Victim Participant #2: “We think there’s a discrepancy in the warehouse stock balances, compared with what’s been shipped to customers. The sales guys keep their own database of customer contracts and orders, and Jim’s already given us a dump of the data, while finance run the accounts receivable process. But Sally the Accounts Clerk doesn’t let the numbers out under any circumstances, so basically we’re screwed.”

Functions: Sales Processing, Contract Management, Order Fulfilment, Stock Management, Accounts Receivable.
Stakeholders: Warehouse team, Sales team (Jim), Finance team.
Inhibitors: Finance don’t collaborate.
Enablers: Jim is helpful.
Source Systems: Stock System, Customer Database, Order Management, Finance System.
Information Requirements: Orders (Quantity & Price by Customer, by Salesman, by Stock Item), Dispatches (Quantity & Price by Customer, by Salesman, by Warehouse Clerk, by Stock Item), Financial Transactions (Value by Customer, by Order Ref)
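One simple way to keep those captured themes traceable afterwards is to record them in a structured form as you transcribe the butcher’s paper. The sketch below is purely illustrative and simply reuses the fictitious findings above; none of it is part of the workshop method itself.

```python
# Illustrative capture of the workshop findings above in a structured form,
# so each information requirement stays traceable to its source discussion.
workshop_findings = {
    "functions": ["Sales Processing", "Contract Management", "Order Fulfilment",
                  "Stock Management", "Accounts Receivable"],
    "stakeholders": ["Warehouse team", "Sales team (Jim)", "Finance team"],
    "inhibitors": ["Finance don't collaborate"],
    "enablers": ["Jim is helpful"],
    "source_systems": ["Stock System", "Customer Database",
                       "Order Management", "Finance System"],
    "information_requirements": [
        "Orders: quantity & price by customer, salesman, stock item",
        "Dispatches: quantity & price by customer, salesman, warehouse clerk, stock item",
        "Financial transactions: value by customer, by order ref",
    ],
}

# Quick completeness check: any category with nothing captured yet is a
# prompt for follow-up questions in the next session.
gaps = [theme for theme, items in workshop_findings.items() if not items]
print(gaps if gaps else "Every theme has at least one entry.")
```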

You will also probably end up with the attendees identifying a number of immediate self-assigned actions arising from the discussion – good ideas that either haven’t occurred to them before or have sat on the “To-Do” list. That’s your workshop “value add” right there….

e.g.
Workshop Victim Participant #1: “I could go and speak to the Financial Controller about getting access to the finance data. He’s more amenable to working together than Sally, who just does what she’s told.”

Happy information requirements gathering!

Category: Business Intelligence, Data Quality, Enterprise Data Management, Information Development, Information Governance, Information Management, Information Strategy, Information Value, Master Data Management, Metadata

by: Ocdqblog
29  Jun  2014

Overcoming Outcome Bias

What is more important, the process or its outcome? Information management processes, like those described by the MIKE2.0 Methodology, drive the daily operations of an organization’s business functions as well as support the tactical and strategic decision-making processes of its business leaders. However, an organization’s success or failure is usually measured by the outcomes produced by those processes.

As Duncan Watts explained in his book Everything Is Obvious: How Common Sense Fails Us, “rather than the evaluation of the outcome being determined by the quality of the process that led to it, it is the observed nature of the outcome that determines how we evaluate the process.” This is known as outcome bias.

While an organization is enjoying positive outcomes, such as exceeding its revenue goals for the current fiscal period, outcome bias bathes its processes in a rose-colored glow. Information management processes must be providing high-quality data to decision-making processes, which business leaders are using to make good decisions. However, when an organization is suffering from negative outcomes, such as a regulatory compliance failure, outcome bias blames it on broken information management processes and poor data quality that lead to bad decision-making.

“Judging the merit of a decision can never be done simply by looking at the outcome,” explained Jeffrey Ma in his book The House Advantage: Playing the Odds to Win Big In Business. “A poor result does not necessarily mean a poor decision. Likewise a good result does not necessarily mean a good decision.”

“We are prone to blame decision makers for good decisions that worked out badly and to give them too little credit for successful moves that appear obvious after the fact,” explained Daniel Kahneman in his book Thinking, Fast and Slow.

While risk mitigation is an oft-cited business justification for investing in information management, Kahneman also noted how outcome bias can “bring undeserved rewards to irresponsible risk seekers, such as a general or an entrepreneur who took a crazy gamble and won. Leaders who have been lucky are never punished for having taken too much risk. Instead, they are believed to have had the flair and foresight to anticipate success. A few lucky gambles can crown a reckless leader with a halo of prescience and boldness.”

Outcome bias triggers overreactions to both success and failure. Organizations that try to reverse engineer a single, successful outcome into a formal, repeatable process often fail, much to their surprise. Organizations also tend to abandon a new process immediately if its first outcome is a failure. “Over time,” Ma explained, “if one makes good, quality decisions, one will generally receive better outcomes, but it takes a large sample set to prove this.”

Your organization needs solid processes governing how information is created, managed, presented, and used in decision-making. Your organization also needs to guard against outcomes biasing your evaluation of those processes.

In order to overcome outcome bias, Watts recommended we “bear in mind that a good plan can fail while a bad plan can succeed—just by random chance—and therefore judge the plan on its own merits as well as the known outcome.”

 

Category: Data Quality, Information Development

by: Bsomich
21  Jun  2014

Community Update.

 

 

Have you seen our Open MIKE Series? 

The Open MIKE Podcast is a video podcast show which discusses aspects of the MIKE2.0 framework, and features content contributed to MIKE 2.0 Wiki Articles, Blog Posts, and Discussion Forums.

You can scroll through the Open MIKE Podcast episodes below:

For more information on MIKE2.0 or how to get involved with our online community, please visit www.openmethodology.org.

Sincerely,

MIKE2.0 Community  


 


 

This Week’s Blogs for Thought:

Open and Secure Personal Data

In his book Open Data Now: The Secret to Hot Startups, Smart Investing, Savvy Marketing, and Fast Innovation, Joel Gurin explained a type of Open Data called Smart Disclosure, which was defined as “the timely release of complex information and data in standardized, machine-readable formats in ways that enable consumers to make informed decisions.”

As Gurin explained, “Smart Disclosure combines government data, company information about products and services, and data about an individual’s own needs to help consumers make personalized decisions. Since few people are database experts, most will use this Open Data through an intermediary—a choice engine that integrates the data and helps people filter it by what’s important to them, much the way travel sites do for airline and hotel booking. These choice engines can tailor the options to fit an individual’s circumstances, budget, and priorities.”

Read more.

Careers in Technology: Is there a future?

Is there a future for careers in Information Technology?  Globally, professional societies such as the British Computer Society and the Australian Computer Society have long argued that practitioners need to be professionals.  However, there is a counter-argument that technology is an enabler for all professions and is more generally a capability of many rather than a profession of the few.

Read more.

An Open Source Solution for Better Performance Management 

Today, many organizations are facing increased scrutiny and a higher level of overall performance expectation from internal and external stakeholders. Both business and public sector leaders must provide greater external and internal transparency to their activities, ensure accounting data faces up to compliance challenges, and extract the return and competitive advantage out of their customer, operational and performance information. Managers, investors and regulators have a new perspective on performance and compliance.

Read more.


Questions?

If you have any questions, please email us at mike2@openmethodology.org.

 

 



Category: Information Development

by: Ocdqblog
17  Jun  2014

Open and Secure Personal Data

In his book Open Data Now: The Secret to Hot Startups, Smart Investing, Savvy Marketing, and Fast Innovation, Joel Gurin explained a type of Open Data called Smart Disclosure, which was defined as “the timely release of complex information and data in standardized, machine-readable formats in ways that enable consumers to make informed decisions.”

As Gurin explained, “Smart Disclosure combines government data, company information about products and services, and data about an individual’s own needs to help consumers make personalized decisions. Since few people are database experts, most will use this Open Data through an intermediary—a choice engine that integrates the data and helps people filter it by what’s important to them, much the way travel sites do for airline and hotel booking. These choice engines can tailor the options to fit an individual’s circumstances, budget, and priorities.”

Remember (if you are old enough) what it was like to make travel arrangements before websites like Expedia, Orbitz, Travelocity, Priceline, and Kayak existed, and you can imagine the immense consumer-driven business potential for applying Smart Disclosure and choice engines to every type of consumer decision.
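A choice engine of this kind is conceptually simple: combine open data about the products on offer with the individual’s own usage data and rank the options. The sketch below is a toy example with invented mobile plan data and usage figures, not a description of how any real travel or comparison site works.

```python
# Toy choice engine: open data about plans plus the consumer's own usage
# (released back to them under Smart Disclosure) yields a personalised ranking.
# All figures are invented for illustration.
plans = [
    {"name": "Plan A", "monthly_fee": 40, "included_gb": 5},
    {"name": "Plan B", "monthly_fee": 55, "included_gb": 20},
    {"name": "Plan C", "monthly_fee": 70, "included_gb": 50},
]
my_monthly_usage_gb = 18   # the individual's own data

def estimated_cost(plan, usage_gb, overage_per_gb=10):
    """Monthly fee plus charges for usage beyond the plan's allowance."""
    overage = max(0, usage_gb - plan["included_gb"]) * overage_per_gb
    return plan["monthly_fee"] + overage

for plan in sorted(plans, key=lambda p: estimated_cost(p, my_monthly_usage_gb)):
    print(plan["name"], estimated_cost(plan, my_monthly_usage_gb))
```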

“Smart Disclosure works best,” Gurin explained, “when it brings together data about the services a company offers with data about the individual consumer. Smart Disclosure includes giving consumers data about themselves—such as their medical records, cellphone charges, or patterns of energy use—so they can choose the products and services uniquely suited to their needs. This is Open Data in a special sense: it’s open only to the individual whom the data is about and has to be released to each person under secure conditions by the company or government agency that holds the data. It’s essential that these organizations take special care to be sure the data is not seen by anyone else. Many people may balk at the idea of having their personal data released in a digital form. But if the data is kept private and secure, giving personal data back to individuals is one of the most powerful aspects of Smart Disclosure.”

Although it sounds like a paradox, the best way to secure our personal data may be to make it open. Currently most of our own personal data is closed—especially to us, which is the real paradox.

Some of our personal data is claimed as proprietary information by the companies we do business with. Data about our health is cloaked by government regulations intended to protect it, but which mostly protect doctors from getting sued while giving medical service providers and health insurance companies more access to our medical history than we have.

If all of our personal data was open to us, and we controlled the authorization of secure access to it, our personal data would be both open and secure. This would simultaneously protect our privacy and improve our choice as consumers.

 

Category: Information Development

by: Bsomich
07  Jun  2014

Community Update.

Missed what’s been happening in the MIKE2.0 community? Check out our bi-weekly update:

 


Available for Order: Information Development Using MIKE2.0

Have you heard? Our new book, “Information Development Using MIKE2.0” is available for order.

The vision for Information Development and the MIKE2.0 Methodology have been available in a collaborative, online fashion since 2006, and are now made available in print publication to a wider audience, highlighting key wiki articles, blog posts, case studies and user applications of the methodology.

Authors of the book include Andreas Rindler, Sean McClowry, Robert Hillard, and Sven Mueller, with additional credit due to Deloitte, BearingPoint and over 7,000 members and key contributors of the MIKE2.0 community. The book has been published in paperback as well as on all major e-book platforms.

Get Involved:

To get your copy of the book, visit our order page on Amazon.com. For more information on MIKE2.0 or how to get involved with our online community, please visit www.openmethodology.org.

Sincerely,

MIKE2.0 Community  


 


 

This Week’s Blogs for Thought:

Open and Big Data

In his book Open Data Now: The Secret to Hot Startups, Smart Investing, Savvy Marketing, and Fast Innovation, Joel Gurin explained that Open Data and Big Data are related but very different.

While various definitions exist, Gurin noted that “all definitions of Open Data include two basic features: the data must be publicly available for anyone to use, and it must be licensed in a way that allows for its reuse.”

Read more.

Careers in Technology: Is there a future?

Is there a future for careers in Information Technology?  Globally, professional societies such as the British Computer Society and the Australian Computer Society have long argued that practitioners need to be professionals.  However, there is a counter-argument that technology is an enabler for all professions and is more generally a capability of many rather than a profession of the few.

Read more.

Data Quality Profiling: Do you trust in the dark arts? 

Why estimating Data Quality profiling doesn’t have to be guess-work. 

Data Management lore would have us believe that estimating the amount of work involved in Data Quality analysis is a bit of a “Dark Art,” and to get a close enough approximation for quoting purposes requires much scrying, haruspicy and wet-finger-waving, as well as plenty of general wailing and gnashing of teeth. (Those of you with a background in Project Management could probably argue that any type of work estimation is just as problematic, and that in any event work will expand to more than fill the time available.)

Read more.


 

Category: Information Development
