
Archive for September, 2013

by: Ocdqblog
29  Sep  2013

The Baseball Strike Zone and Data Quality

In Major League Baseball (MLB), a strike is a ball thrown by the pitcher that the batter either hits into foul territory (i.e., out of bounds), swings at but misses, or doesn’t swing at but is called a strike by the umpire.  The last scenario is where the strike zone is used, which is currently officially defined by MLB as:

“That area over home plate the upper limit of which is a horizontal line at the midpoint between the top of the shoulders and the top of the uniform pants, and the lower level is a line at the hollow beneath the knee cap.  The strike zone shall be determined from the batter’s stance as the batter is prepared to swing at a pitched ball.”

In data quality parlance, we could say the baseball strike zone is a mixture of real-world alignment and fitness for the purpose of use: while the dimensions of home plate never change (it is always 17 inches wide), the dimensions of each batter do (e.g., shorter players have a smaller strike zone).

As the historical timeline of the strike zone shows, there have been 13 official changes to its rules since 1876.  Davey Johnson once summed up all the strike zone rule changes in a single statement: “It’s always been the job of the batter and pitcher to recognize the strike zone for that particular night, whether it is high or wide, and adjust accordingly.  It’s been like that for like two hundred years.”

Three Strike Zones

As Ted Williams explained, “the batter has three strike zones: his own, the opposing pitcher’s, and the umpire’s.  The umpire’s zone is defined by the rule book, but it’s also more importantly defined by the way the umpire works.  A good umpire is consistent so you can learn his strike zone.  The batter has a strike zone in which he considers the pitch the right one to hit.  The pitchers have zones where they are most effective.  Once you know the pitcher and his zone you can get set for a particular pitch.”

The words of the Splendid Splinter splendidly remind you that when defining data quality standards you need to know the reference points from which you are approaching those standards.  His emphasis on consistency, not accuracy, being the trademark of a good umpire is also a classic corollary to data quality.  While accuracy is often a hotly debated topic, consistency is easier to get consensus on.  And when data is consistently represented the same way, it’s easier to correct when it’s inaccurate.

Umpiring the Umpires

Of course, not only do you need to define the dimensions of data quality (of which consistency and accuracy are only two of many), you also need to monitor and measure data quality for ongoing improvement.

In 2002, MLB started privately measuring the accuracy of each umpire's strike zone with monitoring technology, which used a series of cameras to record the location of each pitch as it crossed the official strike zone and compare that position with how the umpire called the pitch (e.g., an umpire not calling a borderline pitch a strike when the monitoring system showed that it was).
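Stripped of the baseball trappings, that kind of monitoring is simply a comparison of recorded calls against a trusted measurement. Here is a minimal sketch of the idea in Python; the zone boundaries, pitch coordinates, and field names are all invented for illustration:

```python
# A toy version of the umpire-monitoring idea: compare each recorded call
# against a measured reference and report an accuracy rate. The zone
# boundaries, pitch coordinates, and field names are invented for illustration.
STRIKE_ZONE = {"left": -0.83, "right": 0.83, "bottom": 1.5, "top": 3.5}  # feet

pitches = [
    {"x": 0.10, "z": 2.8, "called": "strike"},
    {"x": 0.95, "z": 2.2, "called": "strike"},  # outside the zone, called a strike
    {"x": -0.50, "z": 1.6, "called": "ball"},   # inside the zone, called a ball
]

def in_zone(p):
    return (STRIKE_ZONE["left"] <= p["x"] <= STRIKE_ZONE["right"]
            and STRIKE_ZONE["bottom"] <= p["z"] <= STRIKE_ZONE["top"])

correct = sum((p["called"] == "strike") == in_zone(p) for p in pitches)
print(f"Call accuracy: {correct / len(pitches):.0%}")  # 33% for this toy sample
```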

The Strike Zone Strikes Back

MLB’s monitoring grades its umpires on accuracy and determines which umpires receive post season assignments (i.e., umpiring playoff games).  John Smoltz said this monitoring killed the game because umpires became so grade-conscious that they were unwilling to call close pitches a strike.

While continuously monitoring your data quality won't kill your organization (in fact, the opposite is closer to the truth), it is possible to monitor data quality a little too continuously, calling too many strikes against your business users as they step up to the plate every day and use data as best they can to keep the organization in the business equivalent of the playoff race.

Without question, you need a strike zone—data quality standards—so everyone knows the rules of the data game.  Just don’t let it strike back against your organization’s business goals.  After all, baseball is not about balls and strikes.  Baseball is about wins and losses—just like every business is.

Category: Data Quality

by: Bsomich
28  Sep  2013

Weekly IM Update


What is an Open Methodology Framework?

An Open Methodology Framework is a collaborative environment for building methods to solve complex issues impacting business, technology, and society.  The best methodologies provide repeatable approaches on how to do things well based on established techniques. MIKE2.0's Open Methodology Framework goes beyond the standards, techniques and best practices common to most methodologies with three objectives:

  • To Encourage Collaborative User Engagement
  • To Provide a Framework for Innovation
  • To Balance Release Stability with Continuous Improvement

We believe that this approach provides a successful framework for accomplishing things in a better and more collaborative fashion. What's more, this approach allows for concurrent focus on both method and detailed technology artifacts. The emphasis is on emerging areas in which current methods and technologies lack maturity.

The Open Methodology Framework will be extended over time to include other projects. Another example of an open methodology is open-sustainability, which applies many of these concepts to the area of sustainable development. Suggestions for other Open Methodology projects can be initiated on this article's talk page.

We hope you find this of benefit and welcome any suggestions you may have to improve it.

Sincerely,

MIKE2.0 Community

Popular Content

Did you know that the following wiki articles are most popular on Google? Check them out, and feel free to edit or expand them!

What is MIKE2.0?
Deliverable Templates
The 5 Phases of MIKE2.0
Overall Task List
Business Assessment Blueprint
SAFE Architecture
Information Governance Solution

Contribute to MIKE:

Start a new article, help with articles under construction or look for other ways to contribute.

Update your personal profile to advertise yourself to the community and interact with other members.

Useful Links:
Home Page
Login
Content Model
FAQs
MIKE2.0 Governance


This Week’s Blogs for Thought:

The Gordian Knot of Big Data Integration

According to legend, the Gordian Knot was an intricate knot binding the ox-cart of the peasant-turned-king Gordias of Phrygia (located in the Balkan Peninsula of Southeast Europe) to a pole within the temple of Zeus in Gordium.  It was prophesied that whoever loosed the knot would become ruler of all Asia.

In 333 BC, Alexander the Great sliced the knot in half with a stroke of his sword (the so-called “Alexandrian solution”) and that night there was a violent thunderstorm, which Alexander took as a sign that Zeus was pleased and would grant Alexander many victories.  He then proceeded to conquer Asia, and much of the then known world, apparently fulfilling the prophecy.

Read more.

What can we learn from the failure of NFC?

For years now the industry has confidently predicted that Near Field Communication (NFC) would be the vehicle by which we would all abandon our leather wallets and embrace our smartphones for electronic payments. It hasn't happened, and I don't believe NFC is the solution. Here's what we can learn from the failure of NFC to take off. As enticing as the idea of using our phones for payments is, there are two problems that the focus on NFC fails to solve.

Read more.

Through a PRISM, Darkly

By now, most people have heard about the NSA surveillance program known as PRISM which, according to a recent AP story, is about an even bigger data seizure. Without question, big data has both positive and negative aspects, but these recent revelations are casting more light on the darker side of big data, especially its considerable data privacy implications.

A few weeks ago, Robert Hillard blogged about how we are now living as far from 1984 as George Orwell was when he wrote his dystopian novel 1984 in 1948. Sales of the novel have recently spiked, since apparently Big Brother is what many people see when they look through a PRISM, darkly.

Read more.

Category: Information Development

by: Ocdqblog
23  Sep  2013

The Gordian Knot of Big Data Integration

According to legend, the Gordian Knot was an intricate knot binding the ox-cart of the peasant-turned-king Gordias of Phrygia (located in the Balkan Peninsula of Southeast Europe) to a pole within the temple of Zeus in Gordium.  It was prophesied that whoever loosed the knot would become ruler of all Asia.

In 333 BC, Alexander the Great sliced the knot in half with a stroke of his sword (the so-called “Alexandrian solution”) and that night there was a violent thunderstorm, which Alexander took as a sign that Zeus was pleased and would grant Alexander many victories.  He then proceeded to conquer Asia, and much of the then known world, apparently fulfilling the prophecy.

Nowadays, this legend is often used as a metaphor for an intractable problem (i.e., disentangling an impossible knot) solved by “thinking outside the box” or cheating (i.e., “cutting the Gordian knot”).

NoSQL is Not an Alexandrian Solution

In the era of Big Data, organizations are amassing their own Gordian Knot, an impossibly tangled knot of data from a vast multitude of sources, both internal and external.  It has even been prophesied that those capable of setting loose the potential of big data will become rulers of the business world.

The intricate structure—or more often un-structure—of Big Data’s Gordian Knot is so complex that many organizations are turning to Hadoop and other NoSQL options to provide an Alexandrian solution, i.e., a shortcut through the data complexity that will reveal the business insights within it.

Although Hadoop, to use the most common NoSQL option as an example, is great at adding data to the organization’s ox-cart without forcing data to first be modeled, then transformed, and then loaded, the processing efficiency gained does not integrate that data—it just keeps growing the Gordian Knot.

It is the interconnectedness, not the accumulation, of data that makes it truly valuable to the organization.  Despite Ralph Kimball's definition of a data warehouse being the union of its data marts, more often than not a data warehouse is a confederacy of data silos whose only real interconnectedness is being co-located on the same database server.  And just because a bunch of disparate data is stored within the Hadoop Distributed File System (HDFS) doesn't mean that it's automatically integrated.

So, yes, it’s great that you can pull in, for example, social data about what your customers are saying about your products on Twitter and Facebook, but if that data is antisocial (i.e., not integrated) with the data about your customers and products in MDM and the data warehouse, then what use is it?
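To make the point concrete, here is a minimal sketch of the explicit matching step that co-location alone never provides; the customer and tweet records, column names, and the use of pandas are illustrative assumptions, not part of any particular platform:

```python
import pandas as pd

# Made-up master data and social data; in practice these might sit side by side
# in HDFS without sharing any usable key.
customers = pd.DataFrame({
    "customer_id": [101, 102],
    "name": ["Acme Corp", "Globex"],
    "twitter_handle": ["@acmecorp", "@globex"],
})

tweets = pd.DataFrame({
    "handle": ["@acmecorp", "@initech", "@globex"],
    "text": ["Love the new widget!", "Meh.", "Shipping was slow."],
})

# The integration step: an explicit join on a curated key (here, the Twitter
# handle captured in the customer master). Tweets with no match stay "antisocial".
joined = tweets.merge(customers, left_on="handle", right_on="twitter_handle", how="left")
print(joined[["handle", "text", "customer_id", "name"]])
```

The join only works because someone curated a shared key (the Twitter handle in the customer master); without that integration work, the social data just sits there, growing the knot.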

Big Data Integration

As David Linthicum recently blogged, "many believe that data integration is a problem solved long ago, and that it's pretty much an automatic part of the fabric of IT these days.  Nothing could be further from the truth.  While data integration is much easier than it was 10 years ago, it's still a complex task.  The bottom line is that our data environments are becoming more complex and distributed, and this trend will continue for at least the next 10 years.  Enterprises will continue to see the need and importance of data integration solutions."

Unintegrated data is of little use to your organization.  Don’t let unintegrated big data grow into your organization’s Gordian Knot of Big Data Integration.

Category: Information Governance, Information Management

by: Robert.hillard
21  Sep  2013

NFC is failing retail but drugs could come to the rescue

For years now the industry has confidently predicted that Near Field Communication (NFC) would be the vehicle by which we would all abandon our leather wallets and embrace our smartphones for electronic payments.

It hasn't happened, and I don't believe NFC is the solution.  Here's what we can learn from the failure of NFC to take off.  As enticing as the idea of using our phones for payments is, there are two problems that the focus on NFC fails to solve.

The first is the barrier of adoption.  Until consumers can be confident that all of their purchases can be made with their phone, they don’t dare leave their wallets behind.  Similarly, until retailers believe that enough of their consumers aren’t carrying conventional wallets they won’t invest sufficiently to encourage take-up.

The second, and potentially more serious, issue is that the ergonomics are all wrong.  NFC payments have been designed to mimic the way retail has been done for centuries, with the consumer picking up products from shelves and taking them to a payment terminal when they've made their selection.

Designers of the next generation of payment solutions need to step away from facilitating the exchange of currency between consumer and retailer and think about the whole transaction.  The highest priority for both shopper and shopkeeper is to rid the store of lines at the checkout and ideally eliminate the final step altogether.  If they can do away with shoplifting at the same time then all the better!

While the future of NFC is uncertain, the one technology that we know will have a place in years to come is location services.  The ability to know where smartphones, or even wearable machines, are is fundamental to a whole range of new services.  Increasingly, knowing that a customer is in a store is enough to establish the beginning of a transaction.  It is likely that the early adopters will be in areas such as food service where cafes will enter a virtual contract with their regular customers.

For electronic wallet adoption to really take off, the supermarkets and department stores need a way to pair the goods with the customer who is picking them up.  Ideally a contract will be established when a customer walks out of the store with the product, perhaps making shoplifting a thing of the past!

However, anyone familiar with this problem will immediately identify that product identification is the real hurdle in this compelling picture.  While Radio-Frequency Identification (RFID) is the most obvious approach, the cost of adding a tag to each product on the shelf is simply too great.

Until someone solves the problem of pairing the product with the customer, there is little to make smartphone-based wallets really compelling.  However, the advantage of eliminating the checkout and reducing shrinkage is so large that there is a ready market that will encourage rapid innovation.

The surprising innovators to watch are the drug companies, who are keen to find sensors to attach to their medicines to track drug absorption and improve inventory management.  If they can make technology that is simple and cheap enough to add to a tablet, it is likely that it can be printed onto the packaging of consumer products as simply as a barcode is today.

Category: Information Development

by: Phil Simon
20  Sep  2013

What Does Google Tell Us about the Popularity of Dataviz?

Is contemporary dataviz really new?

Some would argue no. After all, many of the same reporting, business intelligence, and analytics applications also provide at least rudimentary levels of data visualization, and have for some time now. Yes, there are "pure" dataviz tools like Tableau, but clear lines of demarcation among these terms just do not exist; in fact, the lines between them blur considerably.

But I would argue that modern-day dataviz really is new. This raises a natural question: How is contemporary dataviz fundamentally different from KPIs, dashboards, and other reporting tools?

In short, dataviz is about data exploration and discovery, not traditional reporting. To me, those tried-and-true terms always implied that the organization, department, group, or individual employee knew exactly what to ask and measure. Examples included:

  • How many sales per square foot are we seeing?
  • What’s employee turnover?
  • What’s our return on assets?

These are still important questions, even in an era of Big Data. But contemporary dataviz is less, well, certain. There's a genuine curiosity at play when you don't know exactly what you're looking for, much less what you'll find.

Ask Google

In keeping with the data discovery theme of this post, why not try to answer my question about dataviz using dataviz? While it's only a proxy, I find Google Trends to be a very useful tool for answering questions about what's popular or new, where, when, and how things are changing. For instance, consider the searches taking place on "data visualization" over the past four years throughout the world:

[Google Trends chart: worldwide search interest in "data visualization" over the past four years]
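If you want to reproduce this kind of exploration yourself, one low-tech route is to export the interest-over-time series from Google Trends as a CSV and plot it; the file name and column names below are placeholders for whatever the export actually contains:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Assumes you exported the interest-over-time series from Google Trends as a
# CSV; the file name and column names are placeholders, not a fixed format.
trends = pd.read_csv("data-visualization-trends.csv", parse_dates=["Week"])

trends.plot(x="Week", y="data visualization", legend=False)
plt.title('Google search interest in "data visualization"')
plt.ylabel("Relative search interest (0-100)")
plt.show()
```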
Since I live in the US, I was curious about how my home country broke down. In other words, is dataviz more popular in different parts of the country? With Google Trends, that’s not hard to see:

[Google Trends map: search interest in "data visualization" by US region]

Note here that new and popular are not necessarily one and the same. Again, this was meant to serve as a proxy–and to illustrate the fact that dataviz doesn’t necessarily lead to a particular next step. I was exploring the data and, if I really wanted, I could keep going.

Simon Says

Data discovery doesn’t necessarily lead to a logical outcome–and that’s fine.

Feedback

What say you?

Category: Information Development

by: Ocdqblog
12  Sep  2013

Big Data is the Library of Babel

Richard Ordowich, commenting on my Hail to the Chiefs post, remarked how "most organizations need to improve their data literacy.  Many problems stem from inadequate data definitions, multiple interpretations and understanding about the meanings of data.  Skills in semantics, taxonomy and ontology as well as information management are required.  These are skills that typically reside in librarians but not CDOs.  Perhaps hiring librarians would be better than hiring a CDO."

I responded that maybe not even librarians can save us by citing The Library of Babel, a short story by Argentine author and librarian Jorge Luis Borges, which is about, as James Gleick explained in his book The Information: A History, A Theory, A Flood, “the mythical library that contains all books, in all languages, books of apology and prophecy, the gospel and the commentary upon that gospel and the commentary upon the commentary upon the gospel, the minutely detailed history of the future, the interpolations of all books in all other books, the faithful catalogue of the library and the innumerable false catalogues.  This library (which others call the universe) enshrines all the information.  Yet no knowledge can be discovered there, precisely because all knowledge is there, shelved side by side with all falsehood.  In the mirrored galleries, on the countless shelves, can be found everything and nothing.  There can be no more perfect case of information glut.”

More than a century before the rise of cloud computing and the mobile devices connected to it, the imagination of Charles Babbage foresaw another library of Babel, one where “the air itself is one vast library, on whose pages are forever written all that man has ever said or woman whispered.”  In a world where word of mouth has become word of data, sometimes causing panic about who may be listening, Babbage’s vision of a permanent record of every human utterance seems eerily prescient.

Of the cloud, Gleick wrote about how “all that information—all that information capacity—looms over us, not quite visible, not quite tangible, but awfully real; amorphous, spectral; hovering nearby, yet not situated in any one place.  Heaven must once have felt this way to the faithful.  People talk about shifting their lives to the cloud—their informational lives, at least.  You may store photographs in the cloud; Google is putting all the world’s books into the cloud; e-mail passes to and from the cloud and never really leaves the cloud.  All traditional ideas of privacy, based on doors and locks, physical remoteness and invisibility, are upended in the cloud.”

“The information produced and consumed by humankind used to vanish,” Gleick concluded, “that was the norm, the default.  The sights, the sounds, the songs, the spoken word just melted away.  Marks on stone, parchment, and paper were the special case.  It did not occur to Sophocles’ audiences that it would be sad for his plays to be lost; they enjoyed the show.  Now expectations have inverted.  Everything may be recorded and preserved, at least potentially: every musical performance; every crime in a shop, elevator, or city street; every volcano or tsunami on the remotest shore; every card played or piece moved in an online game; every rugby scrum and cricket match.  Having a camera at hand is normal, not exceptional; something like 500 billion images were captured in 2010.  YouTube was streaming more than a billion videos a day.  Most of this is haphazard and unorganized.”

The Library of Babel is no longer fiction.  Big Data is the Library of Babel.

Category: Data Quality, Information Development

by: John McClure
10  Sep  2013

Prepositional Ontologies

Lately I've been crafting an ontology that uses English language prepositions in a manner that seems natural and economical while staying semantically distinguishable. For reference's sake, let's call this ontology "Grover", after the hilarious skits Grover staged on Sesame Street to teach children the nuanced semantic differences among prepositions like "at", "by", "on" and all the others (just 100) that glue our nouns together into discrete concepts upon which children sometimes, but not always, willingly act.

I began this effort as I noticed that ontologies were being published that named many of their properties the same as their classes. For instance, the Data Cube Vocabulary I've blogged about has a class named "Dataset" and a property named "dataset". The property "dataset" names, predictably, the "Dataset" class as its range, a pattern that provides little to no "sidecar" information when a concept is labelled with subject, predicate and object. To me, this pattern is rankly duplicative.

A better approach to naming properties, the glue between our common and proper nouns, flows foremost from the requirement that all instances of rdf:Resource are named as type:name; the effects of this requirement on an ontology can be surprisingly profound. To begin with, the above pattern would be aliased to, for instance, is:within, more accurately stating that a given Observation "is:within" a given Dataset. But note that with a property named "dataset" there's little chance to ever exploit its inverse, that is, the mirror property that flows from a dataset to its set of observations.

With Grover, however, the inverse of "is:within" is "has:within", perhaps unfortunately, because it's such a simple statement of fact. Consequently a dataset can be queried to determine its set of observations, a seemingly costless improvement to the Data Cube Vocabulary. But it's costless only to the extent that the resource naming pattern mentioned above is followed:

Statement part    Data Cube resource name    Grover resource name
Subject           xyz                        Observation:xyz
Predicate         qub:dataset                is:within
Object            abc                        Dataset:abc

The type:name convention is key to maintaining the only advantage that conventional noun-oriented property names enjoy, namely, ensuring the type of the object of a given property is of one (or more) certain classes, as ontology applications try to keep the ‘range’ of a property consistent throughout its knowledgebase operations.

But Grover's prepositions suggest the Resource Description Framework needs to provide a new mechanism for specifying pair-wise combinations of domains and ranges which are legal as classes for Subject and Object resources in triples involving any given RDF Property. Otherwise there's no other evident means to specify the formalities of the implied property/class relationships that exist in a Grover universe.

The type:name convention works because queries can just as easily be written to test (using a LIKE operator) for certain prefixes on a resource's name, e.g., for Dataset:*. However, note the other use of the colon in names, namely for prepositional properties such as is:within. These prefixes are actually XML namespace prefixes, with the idea that verb-based semantic namespace identifiers present several valuable advantages for a highly practical ontology.
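Here is a rough sketch of the pattern using rdflib; the namespaces and URIs are made up for illustration, and SPARQL's STRSTARTS plays the role of the LIKE operator mentioned above:

```python
from rdflib import Graph, Namespace

# Hypothetical namespaces for the prepositional "is:" and "has:" properties;
# the URIs are invented for illustration, not published vocabularies.
IS = Namespace("http://example.org/is/")
HAS = Namespace("http://example.org/has/")
EX = Namespace("http://example.org/resource/")

g = Graph()
obs = EX["Observation:xyz"]  # resources follow the type:name convention
ds = EX["Dataset:abc"]

g.add((obs, IS.within, ds))   # the Observation is:within the Dataset
g.add((ds, HAS.within, obs))  # the inverse: the Dataset has:within the Observation

# The LIKE-style prefix test from the post, written as a SPARQL STRSTARTS filter.
query = """
SELECT ?s WHERE {
  ?s ?p ?o .
  FILTER STRSTARTS(STR(?s), "http://example.org/resource/Dataset:")
}
"""
for row in g.query(query):
    print(row.s)  # http://example.org/resource/Dataset:abc
```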

Ciao until next month!

Category: Information Development

by: Phil Simon
09  Sep  2013

On Corkscrews, Cocaine, and Duplicate Data

In 2006, I was working on a large and particularly crazy ERP implementation in Philadelphia, PA. I had plenty to do with HR and payroll data conversions, but people there knew that I was pretty adept at data manipulation and ETL. It wasn’t totally surprising, then, that someone one day tapped on my shoulder.

The PM asked me to visit the folks in procurement. Turns out that the product list was a complete mess. Of the nearly 60,000 "different" items listed at this hospital, there were maybe 15,000 unique ones. Different product codes and descriptions had spiraled out of control over the previous twenty years. Could I work my magic and fix them?

SELECT DISTINCT

Ah, if only I were that good. Forget the fact that I'm not a doctor. I know the difference between a syringe and a lab coat, but I could not possibly tell you the specific differences among different types of medical supplies. I could run queries that counted the number of times that cocaine appeared in the list, but I couldn't tell you whether type A was different from type B. (And I wasn't willing to sample either to find out.)

After I looked at the data, I told the PM that it was just too messy. No number of SQL statements would fix the problem. Someone was going to have to roll up his or her sleeves and decide on the master list. Ideally, that person would have years of direct experience working with these supplies.

He was none too pleased. Maybe I wasn’t as good as my reputation. Dupes ran rampant at that hospital, and not just with respect to medical supplies. I should just be able to dedupe immediately, according to the PM.
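For what it's worth, here is a minimal sketch of the kind of near-duplicate flagging that a SELECT DISTINCT can never do; the product descriptions and the similarity threshold are made up, and the output is a review list for a subject-matter expert rather than an automatic fix:

```python
from difflib import SequenceMatcher

# Made-up product descriptions of the kind SELECT DISTINCT treats as unique.
products = [
    "SYRINGE 10ML LUER LOCK",
    "Syringe, 10 ml, luer-lock",
    "SYRINGE 10CC LL",
    "LAB COAT LARGE WHITE",
]

def similar(a: str, b: str, threshold: float = 0.6) -> bool:
    """Crude textual similarity; real cleanup still needs a domain expert."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

# Flag candidate duplicates for a human to review, rather than auto-merging.
candidates = [
    (a, b)
    for i, a in enumerate(products)
    for b in products[i + 1:]
    if similar(a, b)
]
print(candidates)
```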

Amazon Gets It


Contrast that experience with my recent order of an OXO Good Grips Winged Corkscrew, Black from Amazon. In a rare PICNIC (problem in chair, not in computer), I submitted the same order twice.

So, did Amazon accept my duplicate order?

No, the site made me confirm that I wanted to order a second wine opener.

Simon Says: Build in Dupe Protection

You can’t prevent user errors. You can, however, build in system safeguards and soft edits to minimize duplicate records. Do it.
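As a sketch of what such a soft edit might look like, here is a toy duplicate-order guard; the order store, time window, and function names are hypothetical, not how Amazon actually does it:

```python
from datetime import datetime, timedelta

# Hypothetical in-memory order history keyed by (customer_id, sku); a real
# system would use its own persistence layer. This just sketches the idea.
recent_orders = {}

def place_order(customer_id, sku, confirmed=False):
    key = (customer_id, sku)
    last = recent_orders.get(key)
    if last and datetime.now() - last < timedelta(hours=24) and not confirmed:
        # Don't block the order outright; ask the user to confirm the dupe.
        return "You recently ordered this item. Confirm you want another?"
    recent_orders[key] = datetime.now()
    return "Order placed."

print(place_order("phil", "oxo-winged-corkscrew"))                  # Order placed.
print(place_order("phil", "oxo-winged-corkscrew"))                  # asks to confirm
print(place_order("phil", "oxo-winged-corkscrew", confirmed=True))  # Order placed.
```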

Feedback

What say you?

Category: Information Management, Information Value

by: John McClure
07  Sep  2013

Data Cubes and LOD Exchanges

Recently the W3C issued a Candidate Recommendation that caught my eye: the Data Cube Vocabulary, which claims to be both very general and useful for data sets such as survey data, spreadsheets and OLAP. The Vocabulary is based on SDMX, an ISO standard for exchanging and sharing statistical data and metadata among organizations, so there's a strong international impetus towards consensus about this important piece of Internet 3.0.

Linked Open Data (LOD) is, as the W3C says, "an approach to publishing data on the web, enabling datasets to be linked together through references to common concepts." This approach is based on the W3C's Resource Description Framework (RDF), of course. So the foundational ontology to actually implement this quite worthwhile but grandiose LOD vision is undoubtedly to be this Data Cube Vocabulary, which very handily enables the exchange of semantically annotated HTML tables. Note that relational government data tables can now be published as "LOD data cubes" which are shareable with a public adhering to this new world-standard ontology.

But as the key logical layer in this fast-coming semantic web, Data Cubes may very well affect the manner in which an enterprise ontology is designed. Start with the fact that Data Cubes are themselves built upon several more basic RDF-compliant ontologies.

The Data Cube Vocabulary says that every Data Cube is a Dataset that has a Dataset Definition (more a "one-off ontology" specification). Any dataset can have many Slices of a metaphorical, multi-dimensional "pie" of the dataset. Within the dataset itself and within each slice are unbounded masses of Observations; each observation has values not only for the measured property itself but also for any number of applicable key values. That's all there is to it, right?

Think of an HTML table. A “data cube” is a “table” element, whose columns are “slices” and whose rows are “keys”. “Observations” are the values of the data cells. This is clear but now the fun starts with identifying the TEXTUAL VALUES that are within the data cells of a given table.
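As a rough sketch of that analogy in RDF terms, using rdflib: qb:DataSet, qb:Observation, and qb:dataSet come from the Data Cube namespace, while the dataset, dimension, and measure URIs below are invented for illustration:

```python
from rdflib import Graph, Literal, Namespace, RDF

QB = Namespace("http://purl.org/linked-data/cube#")
EX = Namespace("http://example.org/")  # made-up dataset, dimension and measure URIs

g = Graph()

# The "table": a qb:DataSet.
g.add((EX.employmentFigures, RDF.type, QB.DataSet))

# One "cell": a qb:Observation tied to its dataset, with a key (the row
# dimension) and a measured value. Property names ex:region and ex:headcount
# are illustrative, not part of the vocabulary.
obs = EX.obs1
g.add((obs, RDF.type, QB.Observation))
g.add((obs, QB.dataSet, EX.employmentFigures))
g.add((obs, EX.region, Literal("Northeast")))
g.add((obs, EX.headcount, Literal(1520)))

print(g.serialize(format="turtle"))
```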

Here is where "Dataset Descriptions" come in; these are associated with the HTML table element of an LOD dataset. They describe all possible dataset dimension keys and the different kinds of properties that can be named in an Observation. Text attributes, measures, dimensions, and coded properties are all provided, and all are sub-properties of rdf:Property.

This is why a Dataset Description is a “one-off ontology”, because it defines only text and numeric properties and, importantly, no classes of functional things. So with perfect pitch, the Data Cube Vocabulary virtually requires Enterprise Ontologies to ground their property hierarchy with the measure, dimension, code, and text attribute properties.

Data Cubes define just a handful of classes like "Dataset", "Slice", "SliceKey" and "Observation". How are these four classes best inserted into an enterprise's ontology class hierarchy? "Observation" is easy: it should be the base class of all observable properties, that is, all and only textual properties. "SliceKey" is a Role that an observable property can play. A "Slice" is basically an annotated rdf:Bag, mediated by skos:Collection at times.

A "Dataset" is a hazy term applicable to anything classifiable as a data object or a data structure; that is, "a set of data" is merely an aggregate collection of data items, just as a data object or data structure is. Accordingly, a Data Cube "dataset" class might be placed at or near the root of a class hierarchy, but it's clearer to establish it as a subclass of an html:table class.

There’s more to this topic saved for future entries — all those claimed dependencies need to be examined.

Category: Business Intelligence, Information Development, Semantic Web

by: Bsomich
07  Sep  2013

Weekly IM Update.


Available on Archive: The Open MIKE Podcast Series

The Open MIKE Podcast is a video podcast show, hosted by Jim Harris, which discusses aspects of the MIKE2.0 framework, and features content contributed to MIKE 2.0 Wiki Articles, Blog Posts, and Discussion Forums.

You can scroll through all 12 of the Open MIKE Podcast episodes on the archive.

Feel free to check them out when you have a moment.

Sincerely,

MIKE2.0 Community

Contribute to MIKE:

Start a new article, help with articles under construction or look for other ways to contribute.

Update your personal profile to advertise yourself to the community and interact with other members.

Useful Links:
Home Page
Login
Content Model
FAQs
MIKE2.0 Governance


Did You Know?
All content on MIKE2.0 and any contributions you make are published under the Creative Commons license. This allows you free re-use of our content as long as you add a brief reference back to us.


This Week’s Blogs for Thought:

Hail to the Chiefs

The MIKE2.0 wiki defines the Chief Data Officer (CDO) as the executive leader for data management activities, playing a key role in driving data strategy, architecture, and governance.

“Making the most of a company’s data requires oversight and evangelism at the highest levels of management,” Anthony Goldbloom and Merav Bloch explained in their Harvard Business Review blog post Your C-Suite Needs a Chief Data Officer.

Goldbloom and Bloch describe the CDO as being responsible for identifying how data can be used to support the company’s most important priorities, making sure the company is collecting the right data, and ensuring the company is wired to make data-driven decisions.

Check it out.

Can Big Data Save CMOs?

Executive turnover has always fascinated me, especially as of late. HP’s CEO Leo Apotheker had a very short run and Yahoo! has been a veritable merry-go-round over the last five years. Beyond the CEO level, though, many executive tenures resemble those of Spinal Tap drummers. For instance, CMOs have notoriously short lifespans. While the average tenure of a CMO has increased from 23.6 to 43 months since 2004, it’s still not really a long-term position. And I wonder if Big Data can change that.

Read more.

The Downside of DataViz

ITWorld recently ran a great article on the perils of data visualization. The piece covers a number of companies, including Carwoo, a startup that aims to make car buying easier. The company has been using dataviz tool Chartio for a few months. From the article:

Read more.

Category: Information Development
