
Archive for December, 2013

by: Ocdqblog
30  Dec  2013

Scanning the 2014 Technology Horizon

In 1928, the physicist Paul Dirac, while attempting to describe the electron in quantum mechanical terms, posited the theoretical existence of the positron, a particle with all the electron’s properties but of opposite charge.  In 1932, the experiments of physicist Carl David Anderson confirmed the positron’s existence, a discovery for which he was awarded the 1936 Nobel Prize in Physics.

“If you had asked Dirac or Anderson what the possible applications of their studies were,” Stuart Firestein wrote in his 2012 book Ignorance: How It Drives Science, “they would surely have said their research was aimed simply at understanding the fundamental nature of matter and energy in the universe and that applications were unlikely.”

Nonetheless, 40 years later a practical application of the positron became a part of one of the most important diagnostic and research instruments in modern medicine when, in the late 1970s, biophysicists and engineers developed the first positron emission tomography (PET) scanner.

“Of course, a great deal of additional research went into this as well,” Firestein explained, “but only part of it was directed specifically at making this machine.  Methods of tomography, an imaging technique, some new chemistry to prepare solutions that would produce positrons, and advances in computer technology and programming—all of these led in the most indirect and fundamentally unpredictable ways to the PET scanner at your local hospital.  The point is that this purpose could never have been imagined even by as clever a fellow as Paul Dirac.”

This story came to mind since it’s that time of year when we try to predict what will happen next year.

“We make prediction more difficult because our immediate tendency is to imagine the new thing doing an old job better,” explained Kevin Kelly in his 2010 book What Technology Wants.  This is why the first cars were called horseless carriages and the first cellphones were called wireless telephones.  But as cars advanced we imagined more than transportation without horses, and as cellphones advanced we imagined more than making phone calls without wires.  The latest generation of cellphones is now called smartphones, and cellphone technology has become part of a mobile computing platform.

IDC predicts 2014 will accelerate the IT transition to the emerging platform (what they call the 3rd Platform) for growth and innovation built on the technology pillars of mobile computing, cloud services, big data analytics, and social networking.  IDC predicts the 3rd Platform will continue to expand beyond smartphones, tablets, and PCs to the Internet of Things.

Among its 2014 predictions, Gartner included the Internet of Everything, explaining how the Internet is expanding beyond PCs and mobile devices into enterprise assets such as field equipment, and consumer items such as cars and televisions.  According to Gartner, the combination of data streams and services created by digitizing everything creates four basic usage models (Manage, Monetize, Operate, Extend) that can be applied to any of the four internets (People, Things, Information, Places).

These and other predictions for the new year point toward a convergence of emerging technologies, their continued disruption of longstanding business models, and the new business opportunities that they will create.  While this is undoubtedly true, it’s also true that, much like the indirect and unpredictable paths that led to the PET scanner, emerging technologies will follow indirect and unpredictable paths to applications as far beyond our current imagination as a practical application of a positron was beyond the imagination of Dirac and Anderson.

“The predictability of most new things is very low,” Kelly cautioned.  “William Sturgeon, the discoverer of electromagnetism, did not predict electric motors.  Philo Farnsworth did not imagine the television culture that would burst forth from his cathode-ray tube.  Advertisers at the beginning of the last century pitched the telephone as if it was simply a more convenient telegraph.”

“Technologies shift as they thrive,” Kelly concluded.  “They are remade as they are used.  They unleash second- and third-order consequences as they disseminate.  And almost always, they bring completely unpredicted effects as they near ubiquity.”

It’s easy to predict that mobile, cloud, social, and big data analytical technologies will near ubiquity in 2014.  However, the effects of their ubiquity may be fundamentally unpredictable.  One unpredicted effect that we all became painfully aware of in 2013 was the surveillance culture that burst forth from our self-surrendered privacy, which now hangs the Data of Damocles wirelessly over our heads.

Of course, not all of the unpredictable effects will be negative.  Much like the positive charge of the positron powering the positive effect that the PET scanner has had on healthcare, we should charge positively into the new year.  Here’s to hoping that 2014 is a happy and healthy new year for us all.

Category: Information Development

by: Bsomich
28  Dec  2013

Weekly IM Update.


Guiding Principles for the Open Semantic Enterprise

We’ve recently released the seventh episode of our Open MIKE Podcast series.

Episode 07: “Guiding Principles for the Open Semantic Enterprise” features key aspects of the following MIKE2.0 solution offerings:

Semantic Enterprise Guiding Principles: openmethodology.org/wiki/Guiding_Principles_for_the_Open_Semantic_Enterprise

Semantic Enterprise Composite Offering: openmethodology.org/wiki/Semantic_Enterprise_Composite_Offering

Semantic Enterprise Wiki Category: openmethodology.org/wiki/Category:Semantic_Enterprise

The Open MIKE Podcast is a video podcast show, hosted by Jim Harris, which discusses aspects of the MIKE2.0 framework, and features content contributed to MIKE 2.0 Wiki Articles, Blog Posts, and Discussion Forums.

We kindly invite any existing MIKE contributors to contact us if they’d like to contribute any audio or video segments for future episodes.

Check it out, and feel free to view our community overview video for more information on how to get involved with MIKE2.0.

On Twitter? Contribute and follow the discussion via the #MIKEPodcast hashtag.

Sincerely,

MIKE2.0 Community

New! Popular Content

Did you know that the following wiki articles are most popular on Google? Check them out, and feel free to edit or expand them!

What is MIKE2.0?
Deliverable Templates
The 5 Phases of MIKE2.0
Overall Task List
Business Assessment Blueprint
SAFE Architecture
Information Governance Solution

Contribute to MIKE:

Start a new article, help with articles under construction or look for other ways to contribute.

Update your personal profile to advertise yourself to the community and interact with other members.

Useful Links:
Home Page
Login
Content Model
FAQs
MIKE2.0 Governance


This Week’s Blogs for Thought:

RDF PROVenance Ontology

This week features a special three-part focus on the RDF PROVenance Ontology, a Recommendation from the World Wide Web Consortium that is expected to become a dominant interchange standard for enterprise metadata in the immediate future, replacing the Dublin Core. Following an introduction, reviews and commentary on the ontology’s RDF classes and RDF properties should stimulate lively discussion of the report’s conclusions.

Read more.

Has your data quality been naughty or nice?

Whether you celebrate it as a holy day, holiday, or just a day off, Christmas is one of the most anticipated days of the year.  ‘Tis the season for traditions, including seasonal blog posts, such as The Twelve (Data) Days of Christmas by Alan D. Duncan, A Christmas Data Carol by Nicola Askham, and an exposé OMG: Santa is Fake by North Pole investigative blogger Henrik Liliendahl Sørensen.

Read more.

The 12 (Data) Days of Christmas 

For my first blog post on MIKE2, and in the spirit of the season, here’s a light-hearted take on some of the challenges you might be facing if you’re dealing with increased volumes of data in the run up to Christmas… On the first day of Christmas, my true love sent to me a ragged hierarchy. On the second day of Christmas, my true love sent to me two empty files and a ragged hierarchy. On the third day of Christmas, my true love sent to me three null values, two missing files and a ragged hierarchy. Read more.


Category: Information Development

by: John McClure
27  Dec  2013

Semantic Relations

The previous article looked at some of the hierarchical properties in the notational syntax proposed by the W3C’s RDF PROVenance Ontology. This entry now reviews its class hierarchy.

The PROVenance Ontology’s base model is composed of three classes, Entity-Agent-Activity, which echoes W. McCarthy’s Resource-Actor-Event (REA) model for financial accounting transactions. This is pretty nice, since the REA model has a solid pedigree: it is foundational to OASIS, OMG and UN electronic commerce standards and is now headed towards becoming an ISO specification (one which may soon include long-overdue accounting of economic transaction externalities – hooray!).
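
For a concrete sense of how small this base model is, here is a minimal sketch in Python using rdflib (the ex: identifiers are purely hypothetical examples, not part of the ontology) that declares one instance of each class and connects them with core PROV-O properties:
----------------------------------------------------
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, XSD

PROV = Namespace("http://www.w3.org/ns/prov#")
EX = Namespace("http://example.org/")  # hypothetical identifiers for illustration only

g = Graph()
g.bind("prov", PROV)
g.bind("ex", EX)

# The three base classes: Entity, Agent, Activity (cf. REA above)
g.add((EX.report, RDF.type, PROV.Entity))
g.add((EX.analyst, RDF.type, PROV.Agent))
g.add((EX.compile, RDF.type, PROV.Activity))

# Core relations among the three
g.add((EX.report, PROV.wasGeneratedBy, EX.compile))      # Entity <- Activity
g.add((EX.report, PROV.wasAttributedTo, EX.analyst))     # Entity <- Agent
g.add((EX.compile, PROV.wasAssociatedWith, EX.analyst))  # Activity <- Agent
g.add((EX.compile, PROV.endedAtTime,
       Literal("2013-12-27T12:00:00", datatype=XSD.dateTime)))

print(g.serialize(format="turtle"))
----------------------------------------------------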

In a linguistic framework, these base models might both be viewed as subsets of a higher-order model in which the essential journalistic questions (who, what, when, which, why, how, and how much) are root concepts.  The PROVenance Ontology’s second tier of classes does, however, include the prov:Location element, “an identifiable geographic place (ISO 19112), but it can also be a non-geographic place.”  So although the PROVenance Ontology doesn’t yet establish a foundation mature enough for modelling arbitrary information, it is well on its way.

Unfortunately, a capability to specify provenance for any (text or resource) attribute value — arguably a fundamental requirement — currently seems to be missing (or at least is hard to detect).  One might think that a prov:Statement element, a subclass of rdf:Statement, would be easy to annotate with references to a prov:Entity or prov:Bundle containing details about provenance-significant events.  But with the present PROV ontology there is no prov:Statement or prov:Provenance class, which makes the standard more difficult to grasp and implement for operations of this nature.
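
To illustrate the kind of statement-level annotation this paragraph argues is missing, here is a hedged sketch (again rdflib, with hypothetical ex: names) that falls back on plain RDF reification to hang provenance off a single attribute value; nothing here is prescribed by the Recommendation itself:
----------------------------------------------------
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF

PROV = Namespace("http://www.w3.org/ns/prov#")
EX = Namespace("http://example.org/")  # hypothetical identifiers

g = Graph()
g.bind("prov", PROV)
g.bind("ex", EX)

# The attribute value whose source we want to record: ex:acme ex:revenue 1200000
g.add((EX.acme, EX.revenue, Literal(1200000)))

# A reified copy of that triple, pointed at a prov:Bundle that holds
# the provenance-significant events for the value.
g.add((EX.stmt1, RDF.type, RDF.Statement))
g.add((EX.stmt1, RDF.subject, EX.acme))
g.add((EX.stmt1, RDF.predicate, EX.revenue))
g.add((EX.stmt1, RDF.object, Literal(1200000)))
g.add((EX.stmt1, PROV.wasDerivedFrom, EX.annualReportBundle))
g.add((EX.annualReportBundle, RDF.type, PROV.Bundle))

# Note: PROV-O formally expects the subject of prov:wasDerivedFrom to be a
# prov:Entity, which is exactly the restriction discussed in the text above.
----------------------------------------------------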

This has come about perhaps because, as the W3C’s Steering Group noted, there’s a functional ambiguity between present metadata and historical provenance data and a resource’s own functional data, particularly for generation- and maintenance-related activities. Functional elements in this context are specific, hierarchically organized sets of formal (and informal) agents, actions, dates, resources and usages — these “influence” to some degree the identity of a resource. The result is a set of event-type classes that can equally be used in non-provenance contexts.

Perhaps an area needing more work is the extent to which the PROVenance Ontology leverages the Simple Knowledge Organization System (SKOS), and for that matter the RDF Data Cube Ontology (Qb), both important recent members of the RDF family of W3 Recommendations. In the latter case, Linked Open Data woven into the interchange format for provenance information might increase the number of tools and the uptake of the ontology in the years ahead. Given the PROV Ontology’s focus on the use case of scientific data, there is a golden opportunity to show provenance markup interacting with Data Cube concepts, from qb:Dataset through qb:Observation. In a similar manner, it would be useful to state a relationship between skos:Concept and prov:Entity. As well, an alternative design of the PROV Ontology could functionally rely on a taxonomy of activities rather than a set of classes. This approach would result in a smaller and more appealingly under-specified ontology. Not specifying properties that semantically distinguish one activity class from another suggests that a taxonomy would be just as effective.
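
To make the suggested SKOS and Data Cube touchpoints concrete, here is a small hedged sketch (hypothetical ex: identifiers; the dual typing shown is one possible modelling choice, not something the Recommendation prescribes) that treats a qb:Observation and a skos:Concept as provenance-aware entities:
----------------------------------------------------
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

PROV = Namespace("http://www.w3.org/ns/prov#")
SKOS = Namespace("http://www.w3.org/2004/02/skos/core#")
QB = Namespace("http://purl.org/linked-data/cube#")
EX = Namespace("http://example.org/")  # hypothetical identifiers

g = Graph()
for prefix, ns in [("prov", PROV), ("skos", SKOS), ("qb", QB), ("ex", EX)]:
    g.bind(prefix, ns)

# A Data Cube observation whose lineage is stated with PROV terms
g.add((EX.obs42, RDF.type, QB.Observation))
g.add((EX.obs42, RDF.type, PROV.Entity))
g.add((EX.obs42, PROV.wasGeneratedBy, EX.labRun7))
g.add((EX.labRun7, RDF.type, PROV.Activity))

# A SKOS concept likewise typed as an entity so its derivation can be recorded
g.add((EX.unemploymentRate, RDF.type, SKOS.Concept))
g.add((EX.unemploymentRate, RDF.type, PROV.Entity))
g.add((EX.unemploymentRate, PROV.wasDerivedFrom, EX.iloDefinition))
----------------------------------------------------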

Capturing the “influences” [1] upon a resource, a segue to ‘trust’ processes, is handled in the PROV ontology by three classes: prov:AgentInfluence, prov:ActivityInfluence and prov:EntityInfluence. Another approach might define less what an “influential resource” is and instead use a class like prov:Influential that follows a facet-oriented design pattern. In this way, the facet can be applied to anything as a categorical tag, permitting influential relationships with things otherwise not influential in their own right. Currently the Influence base class has three properties, only one of which properly relates to an “Influence”.
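
For readers unfamiliar with how these influence classes surface in data, here is a minimal sketch of PROV-O’s qualified-influence pattern (hypothetical ex: names again), which reifies a simple attribution as a prov:Attribution, one of the prov:AgentInfluence subclasses mentioned above, so the influence can carry extra detail such as a role:
----------------------------------------------------
from rdflib import Graph, Namespace, BNode
from rdflib.namespace import RDF

PROV = Namespace("http://www.w3.org/ns/prov#")
EX = Namespace("http://example.org/")  # hypothetical identifiers

g = Graph()
g.bind("prov", PROV)
g.bind("ex", EX)

# Simple (unqualified) form: the dataset was attributed to an agent
g.add((EX.dataset, PROV.wasAttributedTo, EX.curator))

# Qualified form: the same relationship reified as a prov:Attribution
# (a subclass of prov:AgentInfluence) so it can carry a role
attribution = BNode()
g.add((EX.dataset, PROV.qualifiedAttribution, attribution))
g.add((attribution, RDF.type, PROV.Attribution))
g.add((attribution, PROV.agent, EX.curator))
g.add((attribution, PROV.hadRole, EX.dataCurator))
----------------------------------------------------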

In summary, the class structure presented by the PROVenance Ontology remains to be tested in practice. The simplicity of the Dublin Core “provenance model” has not yet been replicated by the PROVenance Ontology (in fact, it can’t be considered ‘usable’ in any sense until the Dublin Core ontology has been mapped). The premise of RDF, to be able to say anything about anything, seems problematic for provenance statements: one can only make provenance statements about instances of prov:Entity. Should this restriction be re-examined, semantic enterprises may more fruitfully detect, exchange, and exploit provenance metadata.

[1] “Provenance of a resource is a record that describes entities and processes involved in producing and delivering or otherwise influencing that resource. Provenance provides a critical foundation for assessing authenticity, enabling trust, and allowing reproducibility. Provenance assertions are a form of contextual metadata and can themselves become important records with their own provenance” -[Provenance WG]

Category: Information Development

by: John McClure
23  Dec  2013

Semantic Notations

In a previous entry about the World Wide Web Consortium (W3C)’s RDF PROVenance Ontology, I mentioned that it includes a notation aimed at human consumption — wow, I thought, that’s a completely new architectural thrust by the W3C. Until now the W3C has published only notations aimed at computer consumption. Now it is going to be promoting a “notation aimed at human consumption”! So here are examples of what’s being proposed.
----------------------------------------------------
[1] entity(e1)
[2] activity(a2, 2011-11-16T16:00:00, 2011-11-16T16:00:01)
[3] wasDerivedFrom(e2, e1, a, g2, u1)
----------------------------------------------------

[1] declares that an entity named “e1” exists. This could presumably have been “book(e1)”, so any subclass of prov:Entity can be referenced instead. Note: The prov:entity property and the prov:Entity class are top concepts in the PROV ontology.
[2] should be read as “activity a2, which occurred between 2011-11-16T16:00:00 and 2011-11-16T16:00:01”. An ‘activity’ is a sub-property of prov:influence, just as an ‘Activity’ is a sub-class of prov:Influence, both top concepts in the PROV ontology.
[3] additional details are shown for a “wasDerivedFrom” event (a sub-property of prov:wasInfluencedBy, a top concept of the PROV ontology); to wit, that the activity a, the generation g2, and the usage u1 resources are each related to the “wasDerivedFrom” relation.
----------------------------------------------------

The W3 syntax above is a giant step towards establishing standard mechanisms for semantic notations. I’m sure, though, that this approach doesn’t yet qualify as an efficient user-level syntax for semantic notation. First, I’d note that the vocabulary of properties essentially mirrors the vocabulary of classes — from a KISS view this ontology design pattern imposes a useless burden on a user: learning two vocabularies of similar and hierarchical names, one for classes and one for properties. Secondly, camel-case conventions are surely not amenable to most users (really, who wants to promote poor spelling?). Thirdly, attribute values are not labelled classically (“type:name”) — treating resource names opaquely wastes an obvious opportunity for clarifications and discoveries of subjects’ own patterns of information, as well as incidental annotations not made elsewhere. Finally, a small point: commas are not infrequently found in names of resources, causing problems in this context.

Another approach is to move around the strings in the notations above to achieve a more natural reading for average English speakers. Using a consistent framework of verbs and prepositions for properties named verb:preposition, an approach introduced in an earlier entry, yields an intuitively more interesting syntax with possibilities for future expansion.
----------------------------------------------------
[1] has:this(Entity:e1)
[2] has:this(Activity:a2; Timestamp:2011-11-16T16:00:00; Timestamp:2011-11-16T16:00:01)
[3] was:from(Source:e2; Event:e1; Activity:a; Generation:g2; Usage:u1)
[4] was:from(Source:e1; Source:e2; Source:e3)
----------------------------------------------------

[1] declares that an annotation for a specific page (a subject) has a certain entity named e1, which may or may not exist (that is, be de-referenceable). e1 is qualified as of type “Entity”.
[2] By convention the first value in a set of values provided to a “property function” is the target of the namespaced relation “has:this” with the subject resource being the resource which is being semantically annotated. Each attribute value associated with the relation is qualified by the type of value that it is.
[3] The property wasDerivedFrom is here a relation with a target resource that is to be interpreted as a “Source” kind-of-thing, i.e., “role”. This relation shows four related (perhaps influential) resources.
[4] This is a list of attribute values acceptable in this notation, overloading the ‘was:from’ predicate function for a less tiresome syntax.
----------------------------------------------------

The chief advantages of this approach to semantic notations, in comparison with the current W3C recommendation, are first that it eliminates the need for dual (dueling) hierarchies by its adoption of a fixed number of prepositional properties. Second, it is formally more complete in a lexical sense, yet ripe with opportunities for improved semantic markup, in part because of its requirement to type strings. Lastly, it is intuitively clearer to an average user, perhaps leading to a more conducive process for semantic annotations.

Category: Data Quality, Enterprise Content Management, Enterprise Data Management, Information Development, Information Governance, Information Management, Information Strategy, Semantic Web, Sustainable Development

by: Ocdqblog
23  Dec  2013

Has your Data Quality been Naughty or Nice?

Whether you celebrate it as a holy day, holiday, or just a day off, Christmas is one of the most anticipated days of the year.  ‘Tis the season for traditions, including seasonal blog posts, such as The Twelve (Data) Days of Christmas by Alan D. Duncan, A Christmas Data Carol by Nicola Askham, and an exposé OMG: Santa is Fake by North Pole investigative blogger Henrik Liliendahl Sørensen.

Another Christmas tradition is the exchange of gifts, preceded by a long shopping season, during which in the United States the average person spends $750.  The gifts are often wrapped and placed under a tree, concealing their contents until the paper, ribbons, and bows, a presentation which may have taken a long time to prepare, are ripped off in seconds in an exciting rush to see what’s inside.  At that point, expectation meets reality and one of two things happens.  Either the gift recipient is thrilled to see a gift they asked for, or they are so disappointed that they can’t possibly imagine the misunderstanding that led the gift-giver to believe that this so-called gift is what they actually wanted.

This, like most things, reminds me of data quality.  So much time, effort, and money is expended creating data-driven deliverables that carry a high expectation of quality for which users believe they have clearly communicated their requirements.  Everyone expects the quality of their data to be nice and not naughty.  In fact, some people refuse to believe that their data could ever be naughty—until poor-quality data hits them like a stocking full of coal.  To paraphrase a line from one of my favorite Christmas songs Grandma Got Run Over by a Reindeer: “you can say that there’s no such thing as naughty data, but as for me and my fellow data quality professionals, we believe.”

Complicating matters even further, user perspectives on what qualifies as nice data can also vary greatly.  As a seasonal example of the impact of differing perspectives, consider how even though when many of us say It’s Beginning to Look a Lot Like Christmas we imagine a Winter Wonderland, in the lands down under dreaming of a White Christmas is just a dream since in the Southern Hemisphere Christmastime arrives in the middle of summertime.

Whether you believe Santa Claus Is Coming to Town or a data steward is running a SQL Santa Clause, thereby making a list and checking it twice to find out if your data quality has been naughty or nice, I hope you Have Yourself a Merry Little Christmas and may all your data quality be nice.

Category: Data Quality

by: Alandduncan
22  Dec  2013

The Twelve (Data) Days of Christmas

For my first blog post on MIKE2, and in the spirit of the season, here’s a light-hearted take on some of the challenges you might be facing if you’re dealing with increased volumes of data in the run up to Christmas…

On the first day of Christmas, my true love sent to me a ragged hierarchy.

On the second day of Christmas, my true love sent to me two empty files and a ragged hierarchy.

On the third day of Christmas, my true love sent to me three null values, two missing files and a ragged hierarchy.

On the fourth day of Christmas, my true love sent to me four audit logs, three null values, two missing files and a ragged hierarchy.

On the fifth day of Christmas, my true love sent to me five golden records! Four audit logs, three null values, two missing files and a ragged hierarchy.

On the sixth day of Christmas, my true love sent to me six late arrivals, five golden records! Four audit logs, three null values, two missing files and a ragged hierarchy.

On the seventh day of Christmas, my true love sent to me seven free-text columns, six late arrivals, five golden records! Four audit logs, three null values, two missing files and a ragged hierarchy.

On the eighth day of Christmas, my true love sent to me eight mis-spelled surnames, seven free-text columns, six late arrivals, five golden records! Four audit logs, three null values, two missing files and a ragged hierarchy.

On the ninth day of Christmas, my true love sent to me nine duplications, eight mis-spelled surnames, seven free-text columns, six late arrivals, five golden records! Four audit logs, three null values, two missing files and a ragged hierarchy.

On the tenth day of Christmas, my true love sent to me ten coding errors, nine duplications, eight mis-spelled surnames, seven free-text columns, six late arrivals, five golden records! Four audit logs, three null values, two missing files and a ragged hierarchy.

On the eleventh day of Christmas, my true love sent to me eleven double meanings, ten coding errors, nine duplications, eight mis-spelled surnames, seven free-text columns, six late arrivals, five golden records! Four audit logs, three null values, two missing files and a ragged hierarchy.

On the twelfth day of Christmas, my true love sent to me twelve domain outliers, eleven double meanings, ten coding errors, nine duplications, eight mis-spelled surnames, seven free-text columns, six late arrivals, five golden records! Four audit logs, three null values, two missing files and a ragged hierarchy.

Seasons gratings!
ADD

Category: Information Development

by: John McClure
21  Dec  2013

The RDF PROVenance Ontology

"At the toolbar (menu, whatever) associated with a document there is a button marked "Oh, yeah?". You press it when you lose that feeling of trust. It says to the Web, 'so how do I know I can trust this information?'. The software then goes directly or indirectly back to metainformation about the document, which suggests a number of reasons.” [[Tim Berners Lee, 1997]]

“The problem is – and this is true of books and every other medium – we don’t know whether the information we find [on the Web] is accurate or not. We don’t necessarily know what its provenance is.” – Vint Cerf

The World Wide Web Consortium (W3C) hit another home run when the RDF PROVenance Ontology officially became a member of the Resource Description Framework family last May. This timely publication proposes a data model well-suited to its task: representing provenance metadata about any resource. Provenance data for a thing relates directly to its chain of ownership, its development or treatment as a managed resource, and its intended uses and audiences. Provenance data is a central requirement for the trust-ranking processes that often occur against digital resources sourced from outside an organization.

The PROV Ontology is bound to have important impacts on existing provenance models in the field, including Google’s Open Provenance Model Vocabulary; DERI’s X-Prov and W3P vocabularies; the open-source SWAN Provenance, Authoring and Versioning Ontology and Provenance Vocabulary; Inference Web’s Proof Markup Language-2 Ontology; the W3C’s now outdated RDF Datastore Schema; among others. As a practical matter, the PROV Ontology is already the underlying model for the bio-informatics industry as implemented at Oxford University, a prominent thought-leader in the RDF community.

At the core of the PROV Ontology is a conceptual data model whose semantics are instantiated by serializations including RDF and XML, plus a notation aimed at human consumption. These serializations are used by implementations to interchange provenance data. To help developers and users create valid provenance, a set of constraints is defined, useful for the creation of provenance validators. Finally, to further support the interchange of provenance, additional definitions are provided for protocols to locate, access, and connect multiple provenance descriptions and, most importantly, for how to interoperate with the two widely used Dublin Core metadata vocabularies.
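
To make the Dublin Core interoperability point concrete, here is a minimal hedged sketch in Python using rdflib (the ex: identifiers are hypothetical, and the pairings shown are an illustrative correspondence rather than the normative PROV-DC mapping) expressing the same lineage with both vocabularies:
----------------------------------------------------
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

PROV = Namespace("http://www.w3.org/ns/prov#")
DCT = Namespace("http://purl.org/dc/terms/")
EX = Namespace("http://example.org/")  # hypothetical identifiers

g = Graph()
g.bind("prov", PROV)
g.bind("dct", DCT)
g.bind("ex", EX)

g.add((EX.article, RDF.type, PROV.Entity))

# Attribution: Dublin Core term alongside its rough PROV counterpart
g.add((EX.article, DCT.creator, EX.author))
g.add((EX.article, PROV.wasAttributedTo, EX.author))

# Derivation: Dublin Core source alongside its rough PROV counterpart
g.add((EX.article, DCT.source, EX.preprint))
g.add((EX.article, PROV.wasDerivedFrom, EX.preprint))

print(g.serialize(format="turtle"))
----------------------------------------------------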

The PROV Ontology is also slightly ambitious, despite the perils of over-specification. It aims to provide a model not just for discrete data points and relations applicable to any managed resource, but also for describing in depth the processes relevant to its development as a concept. This is reasonable in many contexts — such as a scholarly article, to capture its bibliography — but it seems odd in the context of non-media resources such as Persons. For instance, it might be odd to think of a notation of one’s parents as within the scope of “provenance data”. The danger of over-specification is palpable in the face of grand claims that, for instance, scientific publications will be describable by the PROV Ontology to an extent that reveals “How new results were obtained: from assumptions to conclusions and everything in between” [W3 Working Group Presentation].

Recommendations. Enterprises and organizations should immediately adopt the RDF PROVenance Ontology in their semantic applications. At a minimum this ontology should be deeply incorporated within the fundamentals of any enterprise-wide models now driving semantic applications, and it should be a point of priority among key decision-makers. Based upon my review and use in my clients’ applications, this ontology is surely of a quality and scope that will drive a massive amount of web traffic in the not-too-distant future. A market for user-facing ‘trust’ tools based on this ontology should begin to appear soon, which can stimulate the evolution of one’s semantic applications.

As for timing, the best strategy is to internally incubate the present ontology, with plans to then fully adopt the second Candidate Recommendation. This gives the standardization process for this Recommendation a chance to achieve a better level of maturity and completeness.

Category: Data Quality, Enterprise Data Management, Information Development, Information Governance, Information Management, Information Strategy, Information Value, Master Data Management, Metadata, Semantic Web, Web Content Management

by: Bsomich
21  Dec  2013

Weekly IM Update.

Missed what happened in the MIKE2.0 Community this week? Check out our weekly update:

 

 

The Five Phases of MIKE2.0

In order to realize results more quickly, the MIKE2.0 Methodology has abandoned the traditional linear or waterfall approach to systems development. In its stead, MIKE2.0 has adopted an iterative, agile approach called continuous implementation. This approach divides the development and rollout of an entire system into a series of implementation cycles. These cycles identify and prioritize the portions of the system that can be constructed and rolled out before the entire system is complete. Each cycle also includes:

  • A feedback step to evaluate and prioritize the implementation results
  • Strategy changes
  • Improvement requests for future implementation cycles.

Following this approach, there are five phases to the MIKE2.0 Methodology:

Feel free to check them out when you have a moment.

  Sincerely,

MIKE2.0 Community

Contribute to MIKE:

Start a new article, help with articles under construction or look for other ways to contribute.

Update your personal profile to advertise yourself to the community and interact with other members.

Useful Links:
Home Page
Login
Content Model
FAQs
MIKE2.0 Governance


Did You Know?
All content on MIKE2.0 and any contributions you make are published under the Creative Commons license. This allows you free re-use of our content as long as you add a brief reference back to us.

This Week’s Food for Thought:

Turning decision making into a game

Organisations are more complex today than ever before, largely because of the ability that technology brings to support scale, centralisation and enterprise-wide integration.  One of the unpleasant side effects of this complexity is that it can take too long to get decisions made. Read more.

Making Sense of Microsemantics

Like many, I’m one who’s been around since the cinder block days, once entranced by shiny Tektronix tubes stationed nearby a dusty card sorter. After years using languages as varied as Assembler through Scheme, I’ve come to believe the shift these represented, from procedural to declarative, has greatly improved the flexibility of the software organizations produce. Read more.

Does your company need data visualization apps?

Few learned folks dispute the fact that the era of Big Data has arrived. Debate terms if you like, but most of us are bombarded with information these days. The question is turning to, How do we attempt to understand all of this data?

Read more.

 


 

Category: Information Development

by: Robert.hillard
17  Dec  2013

Turning decision making into a game

Organisations are more complex today than ever before, largely because of the ability that technology brings to support scale, centralisation and enterprise-wide integration.  One of the unpleasant side effects of this complexity is that it can take too long to get decisions made.

With seemingly endless amounts of information available to management, the temptation to constantly look for additional data to support any decision is too great for most executives.  This is even without the added fear of making the wrong decision and hence trying to avoid any decision at all.

While having the right information to support a choice is good, in a world of Big Data where there are almost endless angles that can be taken, it is very hard to rule a line under the data and say “enough is enough”.

Anyone who has ever studied statistics or science would know that the interpretation of results is something that needs to be based on criteria that have been agreed before the data is collected.  Imagine if the veracity of a drug could be defined after the tests had been conducted; inevitably the result would be open to subjective interpretation.

Gamification

The application of game mechanics to business is increasingly popular with products in the market supporting business activities such as sales, training and back-office processing.

Decision making, and the associated request for supporting data, is another opportunity to apply game mechanics.  Perhaps a good game metaphor to use is volleyball.

As most readers will know, volleyball allows a team to hit a ball around their own side of the court in order to set it up for the best possible return to the opposition.  However, each team is only allowed to have three hits before returning it over the net, focusing the team on getting it quickly to the best possible position.

Management decision making should be the same.  The team should agree up-front what the best target position would be to get the decision “ball” to and make sure that everyone is agreed on the best techniques to get the decision there.  The team should also agree on a reasonable maximum number of “hits” or queries to be allowed before a final “spike” or decision is made.

Application

That might mean that for job interviews there will be no more than three interviews.  For an investment case there will be no more than two meetings and three queries for new data.  For a credit application there will be no more than two queries for additional paperwork.

The most important aspect of improving the speed of decision making is the setting of these rules before the data for the decision is received.  It is too tempting once three interviews with a job candidate have been completed to think that “just one more” will answer all the open questions.  There will always be more data that could make an investment case appear more watertight.  It is always easier to ask another question than to provide an outright yes or no to a credit application or interview candidate.

But simply setting rules doesn’t leverage the power of gamification.  There needs to be a spirit of a shared goal.  Everyone on the decision side of the court needs to be conscious that decision volleyball means that each time they are “hitting” the decision to the next person they are counting down to one final “spike” of a yes or no answer.  The team are all scored regardless of whether they are the first to hit the “decision ball” or the last to make the “decision spike”.

Shared story

Games, and management, are about more than a simple score.  They are also about shared goals and an overarching narrative. The storyline needs to be compelling and keep participants engaged in between periods of intense action.

For management decisions, it is important that the context has an engaging goal that can be grasped by all.  In the game of decision volleyball, this should include the challenge and narrative of the agile organisation.  The objective is not just to make the right decision, but also along the way to set things up for the final decision maker to achieve the “spike” with the best chance of a decision which is both decisive and right.

The game of decision volleyball also has the opportunity to bring the information providers, such as the business intelligence team, into the story as well.  Rather than simply providing data upon request, without any context, they should be engaged in understanding the narrative that the information sits in and how it will set the game up for that decisive “spike”.

Category: Business Intelligence

by: Phil Simon
17  Dec  2013

Does Your Company Need Data Visualization Apps?

Few learned folks dispute the fact that the era of Big Data has arrived. Debate terms if you like, but most of us are bombarded with information these days. The question is turning to, How do we attempt to understand all of this data?

With the proliferation of low-cost and free dataviz tools, isn’t it fair to ask about the need for standalone data visualization applications? It’s not as if it takes hundreds of thousands of dollars to deploy these tools. It’s not 1998 anymore.

Bill Shander recently asked the $60,000 dataviz question on the Harvard Business Review blog. Shander writes:

Data visualization can produce big benefits, some of which are subtle yet powerful. One of the biggest benefits is personalization–e.g., enabling potential customers to estimate the value of a complex solution to their challenges.

It’s an interesting post and a timely one for me, what with The Visual Organization coming out soon. In short, I agree. I doubt that a barber or salon owner would need data visualization tools. If you’re reading this blog, however, I doubt that you work in either business.

Dataviz: Cheaper and Easier than Ever

Shander correctly points out that cost can get out of hand. Fair enough. At the same time, though, let’s not forget the advent of open-source and cloud-based tools that have dramatically changed the equation. You can be up and running with Tableau in a few minutes. D3 is completely free, although learning it takes some time.

Beyond cost, ease of use is orders of magnitude better than, say, 1997. I remember those days. Proper programmers needed to code. People in different lines of business needed to rely almost exclusively on IT for these types of things.

My, how times have changed. Drag-and-drop functionality is almost a sine qua non. Few vendors hawk their wares as “only for developers.” It’s become much easier to add disparate data sources, although power users can still do a great deal more than the average functional (read: non-technical) user.

Simon Says

In the future, more and more companies will realize that Excel is not enough. Big Data is here to stay, and dataviz helps us make sense of it–and ask better questions. Those intent on trying to cram square pegs into round holes will forgo tremendous opportunities.

Feedback

What say you?

If you’d like to watch the trailer to The Visual Organization, see below.

Category: Information Management, Information Value
