
Posts Tagged ‘analytics’

by: Ocdqblog
29  Jul  2014

Do you know good information when you hear it?

When big data is discussed, we rightfully hear a lot of concerns about signal-to-noise ratio. Amidst the rapidly expanding galaxy of information surrounding us every day, most of what we hear sounds like meaningless background static. We are so overloaded, yet underwhelmed, by information that perhaps we are becoming tone-deaf to the signal, leaving us hearing only noise.

Before big data burst onto the scene, bursting our eardrums with its unstructured cacophony, most of us believed we had data quality standards and therefore we knew good information when we heard it. Just as most of us believe we would know great music when we hear it.

In 2007, Joshua Bell, a world-class violinist and recipient of that year’s Avery Fisher Prize, an award given to American musicians for outstanding achievement in classical music, participated in an experiment for The Washington Post columnist Gene Weingarten, whose article about the experiment won the 2008 Pulitzer Prize for feature writing.

During the experiment, Bell posed as a street performer, dressed in jeans and a baseball cap, and played violin for tips in a Washington, D.C. metro station during one Friday morning rush hour. For 45 minutes he performed six classical masterpieces, some of the most elegant music ever written, on one of the most valuable violins ever made.

Weingarten described it as “an experiment in context, perception, and priorities—as well as an unblinking assessment of public taste: In a banal setting at an inconvenient time, would beauty transcend? Each passerby had a quick choice to make, one familiar to commuters in any urban area where the occasional street performer is part of the cityscape: Do you stop and listen? Do you hurry past with a blend of guilt and irritation, aware of your cupidity but annoyed by the unbidden demand on your time and your wallet? Do you throw in a buck, just to be polite? Does your decision change if he’s really bad? What if he’s really good? Do you have time for beauty?”

As Bell performed, over 1,000 commuters passed by him. Most barely noticed him at all. Very few stopped briefly to listen to him play. Even fewer were impressed enough to toss a little money into his violin case. Although three days earlier he had performed the same set in front of a sold out crowd at Boston Symphony Hall where ticket prices for good seats started at $100 each, at the metro station Bell earned only $32.17 in tips.

“If a great musician plays great music but no one hears,” Weingarten pondered, “was he really any good?”

That metro station during rush hour is an apt metaphor for the daily experiment in context, perception, and priorities facing information development in the big data era. In a fast-paced business world overcrowded with fellow travelers and abuzz with so much background noise, if great information is playing but no one hears, is it really any good?

Do you know good information when you hear it? How do you hear it amidst the noise surrounding it?

 

Category: Data Quality, Information Development

by: Robert.hillard
26  Jul  2014

Your insight might protect your job

Technology can make us lazy.  In the 1970s and 80s we worried that the calculator would rob kids of insight into the mathematics they were learning.  There has long been evidence that writing long-hand and reading from paper are far better vehicles for absorbing knowledge than typing and reading from a screen.  Now we need to wonder whether that ultimate pinnacle of humanity’s knowledge, the internet, is actually a negative for businesses and government.

The internet has made a world of experience available to anyone who is willing to spend a few minutes seeking out the connections.  Increasingly we are using big data analytics to pull this knowledge together in an automated way.  Either way, the summed mass of human knowledge often appears to speak as one voice rather than the cacophony that you might expect of a crowd.

Is the crowd killing brilliance?

The crowd quickly sorts out the right answer from the wrong when there is a clear point of reference.  The crowd is really good at responding to even complex questions.  The more black or white the answer is, the better the crowd is at coming to a conclusion.  Even creative services, such as website design, are still problems with a right or wrong answer (even if there is more than one) and are well suited to crowdsourcing.

As the interpretation of the question or weighting of the answer becomes more subjective, it becomes harder to discern the direction that the crowd is pointing with certainty.  The lone voice with a dissenting, but insightful, opinion can be shouted down by the mob.

The power of the internet to answer questions is being used to test new business ideas just as quickly as to find out the population of Nicaragua.  Everything from credit cards to consumer devices is being iteratively crowdsourced and crowd-tested to great effect.  Rather than losing months to focus groups, product design and marketing, smart companies are asking their customers what they want, getting them involved in building it and then getting early adopters to provide almost instant feedback.

However, the positive can quickly turn negative.  The crowd comments early and often.  The consensus usually reinforces the dominant view.  Like a bad reality show, great ideas are voted off before they have a chance to prove themselves.  If the idea is too left-field and doesn’t fit a known need, the crowd often doesn’t understand the opportunity.

Automating the crowd

In the 1960s and 1970s, many scientists argued that an artificial brain would display true intelligence before the end of the twentieth century.  Research efforts largely ground to a halt as approach after approach turned out to be a dead-end.

Many now argue that twenty-first century analytics is bridging the gap.  By understanding what the crowd has said and finding the responses to millions, hundreds of millions and even billions of similar scenarios, the machine is able to provide a sensible response.  This approach even shows promise of passing the famous Turing test.

While many argue that big data analytics is the foundation of artificial intelligence, it isn’t providing the basis of brilliant or creative insight.  IBM’s Watson might be able to perform amazing feats in games of Jeopardy but the machine is still only regurgitating the wisdom of the crowd in the form of millions of answers that have been accumulated on the internet.
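
To make that point concrete, here is a toy sketch of answering by retrieval. It is entirely invented for illustration and is not how Watson or any production system actually works: given a new question, find the most similar question the crowd has already answered and simply return that stored answer.

```python
from difflib import SequenceMatcher

# A miniature, invented "crowd memory" of questions the internet has already answered.
crowd_answers = {
    "what is the population of nicaragua": "Roughly six million people.",
    "who painted the mona lisa": "Leonardo da Vinci.",
    "what is the capital of australia": "Canberra.",
}

def answer(question):
    """Return the stored answer to the most similar previously seen question."""
    best_match = max(
        crowd_answers,
        key=lambda known: SequenceMatcher(None, question.lower(), known).ratio(),
    )
    return crowd_answers[best_match]

# The response looks sensible, but it is only a lookup over what the crowd already said.
print(answer("Population of Nicaragua?"))
```

Scaled up to millions of stored answers and far fuzzier matching, this is still retrieval of past responses, not a creative leap.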

No amount of the crowd or analytics can yet make a major creative leap.  This is arguably the boundary of analytics in the search for artificial intelligence.

Digital Disruption could take out white collar jobs

For the first time, digital disruption, using big data analytics, is putting white collar jobs at the same risk of automation that blue collar workers have had to navigate over the last fifty years.  Previously we assumed process automation would solve everything, but our organisations have become far too complex.

Business process management or automation has reached a natural limit in taking out clerical workers.  As processes have become more complex, and their number of interactions has grown exponentially, it has become normal for the majority of instances to display some sort of exception.  Employees have gone from running processes to handling exceptions.  The change in job function has largely masked the loss of traditional clerical work since the start of the mass rollout of business IT.

Most of this exception handling, though, requires insight but no intuitive leap.  When asked, employees will tell you that their skill is to know how to connect the dots in a standard way to every unique circumstance.

Within organisations, email and, increasingly, social platforms have been the tools of choice for collaboration and crowdsourcing solutions to individual process exceptions.  Just as big data analytics is automating the hunt for answers on the internet, it is now starting to offer the promise of the same automation within the enterprise.

In the near future, applications driven by big data analytics will allow computers to move from automating processes to also handling any exceptions in a way that will feel almost human to customers of everything from bank mortgages to electric utilities.

Where to next for the jobs?

Just as many white collar jobs moved from running processes in the 70s and 80s to handling their exceptions in the 90s and new millennium, these same jobs now need to move on to something new.

At the same time, the businesses they work for are being disrupted by the same digital forces and are looking for new sources of revenue.

These two drivers may come together to offer an opportunity for those who have spent their time handling exceptions for customer or internal processes.  The future lies in spotting opportunities in the business through intuitive insights and creative leaps, and turning them into product or service inventions, rather than seeking permission from a crowd that will force a return to the conservative norm.

Perhaps this is why design thinking and similar creative approaches to business have suddenly joined the mainstream.

Category: Information Development, Information Strategy, Web2.0

by: Ocdqblog
25  Feb  2014

Big Data Strategy: Tag, Cleanse, Analyze

Variety is the characteristic of big data that holds the most potential for exploitation, Edd Dumbill explained in his Forbes article Big Data Variety means that Metadata Matters. “The notion of variety in data encompasses the idea of using multiple sources of data to help understand a problem. Even the smallest business has multiple data sources they can benefit from combining. Straightforward access to a broad variety of data is a key part of a platform for driving innovation and efficiency.”

But the ability to take advantage of variety, Dumbill explained, is hampered by the fact that most “data systems are geared up to expect clean, tabular data of the sort that flows into relational database systems and data warehouses. Handling diverse and messy data requires a lot of cleanup and preparation. Four years into the era of data scientists, most practitioners report that their primary occupation is still obtaining and cleaning data sets. This forms 80% of the work required before the much-publicized investigational skill of the data scientist can be put to use.”

Which raises the question Mary Shacklett asked in her TechRepublic article Data quality: The ugly duckling of big data? “While it seems straightforward to just pull data from source systems,” Shacklett explained, “when all of this multifarious data is amalgamated into vast numbers of records needed for analytics, this is where the dirt really shows.” But somewhat paradoxically, “cleaning data can be hard to justify for ROI, because you have yet to see what clean data is going to deliver to your analytics and what the analytics will deliver to your business.”

However, Dumbill explained, “to focus on the problems of cleaning data is to ignore the primary problem. A chief obstacle for many business and research endeavors is simply locating, identifying, and understanding data sources in the first place, either internal or external to an organization.”

This is where metadata comes into play, providing a much needed context for interpreting data and helping avoid semantic inconsistencies that can stymie our understanding of data. While good metadata has always been a necessity, big data needs even better metadata. “The documentation and description of datasets with metadata,” Dumbill explained, “enhances the discoverability and usability of data both for current and future applications, as well as forming a platform for the vital function of tracking data provenance.”

“The practices and tools of big data and data science do not stand alone in the data ecosystem,” Dumbill concluded. “The output of one step of data processing necessarily becomes the input of the next.” When approaching big data, the focus on analytics, as well as concerns about data quality, not only causes confusion about the order of those steps, but also overlooks the important role that metadata plays in the data ecosystem.

By enhancing the discoverability of data, metadata essentially replaces hide-and-seek with tag. As we prepare for a particular analysis, metadata enables us to locate and tag the data most likely to prove useful. After we tag which data we need, we can then cleanse that data to remove any intolerable defects before we begin our analysis. These three steps—tag, cleanse, analyze—form the basic framework of a big data strategy.
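
As a rough illustration of those three steps, here is a minimal sketch; the catalog, dataset names, tags, and columns below are invented for the example and are not taken from Dumbill’s article.

```python
import pandas as pd

# Hypothetical metadata catalog: each entry pairs a dataset with the
# descriptive tags that make it discoverable for a given analysis.
catalog = [
    {
        "name": "sales_2013",
        "tags": {"sales", "revenue", "2013"},
        "data": pd.DataFrame({
            "region":  ["East", "East", "West", "West", "West"],
            "revenue": ["1200", "1200", "950", None, "780"],   # messy on purpose
        }),
    },
    {
        "name": "web_logs_2013",
        "tags": {"clickstream", "web", "2013"},
        "data": pd.DataFrame({"page": ["/home", "/buy"], "hits": [503, 97]}),
    },
]

def tag(catalog, wanted):
    """Step 1 -- Tag: use the metadata to locate the datasets relevant to this analysis."""
    return [entry for entry in catalog if wanted <= entry["tags"]]

def cleanse(df):
    """Step 2 -- Cleanse: remove only the intolerable defects before analysis."""
    df = df.drop_duplicates().copy()
    df["revenue"] = pd.to_numeric(df["revenue"], errors="coerce")
    return df.dropna(subset=["revenue"])   # a row is useless here without a revenue figure

def analyze(df):
    """Step 3 -- Analyze: the investigational work the first two steps enable."""
    return df.groupby("region")["revenue"].sum().sort_values(ascending=False)

# Tag, cleanse, analyze -- in that order.
for entry in tag(catalog, {"sales", "2013"}):
    print(entry["name"])
    print(analyze(cleanse(entry["data"])))
```

The point of the sketch is the order: the metadata catalog does the locating, cleansing touches only the data the analysis actually needs, and the analysis comes last.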

It all begins with metadata management. As Dumbill said, “it’s not glamorous, but it’s powerful.”

Category: Data Quality, Information Development

by: Ocdqblog
14  Aug  2013

Let the Computers Calculate and the Humans Cogitate

Many organizations are wrapping their enterprise brain around the challenges of business intelligence, looking for the best ways to analyze, present, and deliver information to business users.  More organizations are choosing to do so by pushing business decisions down in order to build a bottom-up foundation.

However, one question is coming up more frequently in the era of big data: what should the division of labor between computers and humans be?

In his book Emergence: The Connected Lives of Ants, Brains, Cities, and Software, Steven Johnson discussed how the neurons in our human brains are only capable of two hundred calculations per second, whereas the processors in computers can perform millions of calculations per second.

This is why we should let the computers do the heavy lifting for anything that requires math skills, especially the statistical heavy lifting required by big data analytics.  “But unlike most computers,” Johnson explained, “the brain is a massively parallel system, with 100 billion neurons all working away at the same time.  That parallelism allows the brain to perform amazing feats of pattern recognition, feats that continue to confound computers—such as remembering faces or creating metaphors.”

As the futurist Ray Kurzweil has written, “humans are far more skilled at recognizing patterns than in thinking through logical combinations, so we rely on this aptitude for almost all of our mental processes. Indeed, pattern recognition comprises the bulk of our neural circuitry.  These faculties make up for the extremely slow speed of human neurons.”

“Genuinely cognizant machines,” Johnson explained, “are still on the distant technological horizon, and there’s plenty of reason to suspect they may never arrive.  But the problem with the debate over machine learning and intelligence is that it has too readily been divided between the mindless software of today and the sentient code of the near future.”

But even if increasingly more intelligent machines “never become self-aware in any way that resembles human self-awareness, that doesn’t mean they aren’t capable of learning.  An adaptive information network capable of complex pattern recognition could prove to be one of the most important inventions in all of human history.  Who cares if it never actually learns how to think for itself?”

Business intelligence in the era of big data and beyond will best be served if we let both the computers and the humans play to their strengths.  Let’s let the computers calculate and the humans cogitate.
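
A minimal sketch of that division of labor, with the data and the threshold invented for illustration: the computer does the statistical heavy lifting of summarizing a couple of million rows and scoring every store against the overall distribution, and a human decides what the short list of flagged stores actually means.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)

# Invented example data: two million daily revenue figures across 500 stores.
sales = pd.DataFrame({
    "store": rng.integers(1, 501, size=2_000_000),
    "revenue": rng.normal(10_000, 1_500, size=2_000_000),
})
# Inject one deliberately unusual store so there is something for a human to look at.
sales.loc[sales["store"] == 42, "revenue"] *= 1.5

# The computer calculates: aggregate millions of rows and score every store
# against the overall distribution in well under a second.
per_store = sales.groupby("store")["revenue"].mean()
z_scores = (per_store - per_store.mean()) / per_store.std()
flagged = per_store[z_scores.abs() > 3]

# The human cogitates: the flagged list is short enough to reason about.
# Is store 42 a data error, a local promotion, or a genuine trend?
print(flagged)
```

The aggregation is exactly the kind of work Johnson’s numbers say silicon is built for; deciding what the anomaly means is the pattern recognition and metaphor-making that still belongs to people.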

Category: Business Intelligence

by: Phil Simon
01  Mar  2012

The Data War Room

The 1993 documentary The War Room tells the story of the 1992 US presidential campaign from a behind-the-scenes perspective. The film shows first-hand how Bill Clinton’s campaign team responded to different crises, including allegations of marital infidelity. While a bit dated today, it’s nonetheless a fascinating look at “rapid response” politics just as technology was starting to change traditional political media.

Today, we’re starting to see organizations set up their own data war rooms for essentially the same reasons: to respond to different crises and opportunities. InformationWeek editor Chris Murphy writes about one such company in “Why P&G CIO Is Quadrupling Analytics Expertise”:

[Procter & Gamble CIO Filippo] Passerini is investing in analytics expertise because the model for using data to run a company is changing. The old IT model was to figure out which reports people wanted, capture the data, and deliver it to the key people weeks or days after the fact. “That model is an obsolete model,” he says.

Murphy hits the nail on the head in this article. Now, let’s delve a bit deeper into the need for a new model.

The Need for a New Model

There are at least three factors driving the need for a new information management (IM) model in many organizations. First, let’s look at IT track records. How many organizations invested heavily in the late 1990s and early 2000s in expensive, on-premise ERP, CRM, and BI applications–only to have these investments ultimately disappoint the vast majority of stakeholders? Now, on-premise isn’t the only option. Big Data and cloud computing are gaining traction in many organizations.

Next up: time to respond. Beyond the poor track record of many traditional IT investments, we live in different times relative to even ten years ago. Things happen so much faster today. Why? The usual suspects are the explosion of mobility, broadband, tablets, and social media. Ten years ago, the old, reactive, requirements-driven IM model might have made sense. Today, however, that model is becoming increasingly difficult to justify. For instance, a social media mention might cause a run on products. By the time proper requirements have been gathered, the crisis has probably worsened. An opportunity has probably been squandered.

Third, data analysis and manipulation tools have become much more user-friendly. Long gone are the days in which people needed a computer science or programming background to play with data. Of course, data modeling, data warehousing, and other heavy lifting necessitate more technical skills and backgrounds. But the business layperson, equipped with the right tools and a modicum of training, can easily investigate and drill down on issues related to employees, consumers, sales, and the like.
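
As a rough sketch of what that looks like in practice (the table and column names here are invented, not from the article), a few lines in a tool like pandas let a trained business user pivot revenue by region and product, then drill into the underlying rows, without ever writing a report specification for IT:

```python
import pandas as pd

# An invented sample of the kind of sales extract a business user might pull.
sales = pd.DataFrame({
    "region":  ["East", "East", "West", "West", "West", "South"],
    "product": ["A", "B", "A", "A", "B", "B"],
    "units":   [120, 80, 200, 150, 90, 60],
    "revenue": [2400, 2000, 4000, 3000, 2250, 1500],
})

# One line to pivot revenue by region and product -- the basic drill-down.
summary = sales.pivot_table(
    index="region", columns="product", values="revenue", aggfunc="sum", fill_value=0
)
print(summary)

# Drill further: which rows sell below the average price per unit?
sales["unit_price"] = sales["revenue"] / sales["units"]
print(sales[sales["unit_price"] < sales["unit_price"].mean()])
```

That is the self-service option in the second bullet below: the user asks the next question the moment the previous answer raises it.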

Against this new backdrop, which of the following makes more sense?

  • IT analysts spending the next six weeks or months interacting with users and building reports?
  • Skilled users creating their own reports, creating and interpreting their own analytics, and making business decisions with minimal IT involvement (aka, self service)?

Simon Says

Building a data war room is no elixir. You still have to employ people with the skills to manage your organization’s data–and hold people accountable for their decisions. Further, rapid response means making decisions without all of the pertinent information. If your organization crucifies those who make logical leaps of faith (but ultimately turn out to be “wrong” in their interpretation of the data), it’s unlikely that this new model will take hold.

Feedback

What say you?

 

Category: Business Intelligence, Information Management
