
Archive for June, 2012

by: Robert.hillard
26 Jun 2012

Customers smile at the edges

One of the best ways of understanding your success is by measuring how happy your customers are.  To make sense of their response you obviously need to segment customers.  In this post I propose segmenting customers according to the complexity of the products and services that they choose to use.

More complex customers typically have a range of relationships with you.  In telecommunications this might mean having some mobile phones, land lines and business accounts.  In banking this could be deposits, loans, business partnerships and superannuation.  For insurance this could include a portfolio of personal and business policies.  Even government is part of the picture, with their most complex stakeholders needing a whole range of services which typically span agencies.

In general, complexity is a good proxy for addressable value.  Of course, addressable value does not automatically convert into real revenue given that so many companies compete for the same premium customers.  These same customers also tend to be the most adept at negotiating advantageous financial terms.

What we find is that the simplest customers, usually those with just one product or service, are the easiest to satisfy.  Particularly with the rise of digital capabilities, they can often be offered a fully integrated approach to managing their needs.  Because those needs are simple, they are often the pilot group for the latest service approach, be it online or through other automated channels, and so they regularly receive disproportionate attention from web and digital teams.

Of course, this simple customer only has a finite addressable value.  Worse, because they are the entry point into the organisation, many have been offered attractive “new customer” pricing, which erodes their value even further.

Ironically, like the simplest customers, we also find that the most complex customers are usually pretty satisfied with the service that they are offered.  These premium customers are seen as having the capacity to provide a substantial return.  They are worth investing in, and generally know it, so demand a high degree of servicing.  This high touch means that any complexity in the organisation and across the products that they hold is handled on their behalf.  With such support, it is little wonder that they are well satisfied.

It is also worth noting that their satisfaction is largely guaranteed by the fact that these highly capable customers won’t hesitate to leave when they are dissatisfied.  As a result, those that stay are generally staying for a reason.

[Figure: Customers smile at the edges]

Stuck in the middle

It is the middle group who can be most interesting.  They have enough complexity in their product set that they miss out on the simplicity of service that the first group benefit from.  They don’t, however, have enough complexity or value to warrant the high touch of the most complex customers.  They are truly stuck in the middle.

This middle group of customers are often the ones who are most dissatisfied with the service that they receive.  Because of their numbers and the range of products that they could have, there are too many permutations to provide a simple intuitive digital pathway.  Often their products span organisational boundaries (both their own and yours).

Dealing effectively with the middle really means finding a way to service them systematically, ideally through digital channels.  The middle gains the most from any simplification and can be the most rewarding because everyone struggles with providing them with consistent, quality service at an affordable price.  The key to their satisfaction is providing consistent, understandable information and access to their portfolio across digital, call centre, in person and third-party channels.

Although the middle requires more servicing than your simplest customers, they are usually much more profitable, and there can be far more of them than there will ever be of your premium customers.  In addition, the middle ground is often less fought over, which makes it more profitable still.  From the customer’s perspective there is enough complexity for you to offer differentiated value, yet too little competition for them to feel they are being meaningfully wooed by your competitors.

Standing in the way

Standing in the way of harnessing the middle is the complexity of the organisation.  Providing quality service is often hampered by internal organisational boundaries, overly simplistic assumptions around master data and the number of legacy products and systems (see my previous post: Value of decommissioning legacy systems).

For instance, it might make perfect sense to the company to differentiate between the services provided to business and personal customers.  To the customer, this distinction is frustrating when they are using products and services sponsored by a business (either their own or their employer) in combination with their own direct relationship.  Ultimately this is a master data problem, with too much reliance placed on financial accountability and not enough on the user.
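As a rough sketch of the kind of master data structure that avoids this trap (the entity and role names below are illustrative assumptions, not anything prescribed by MIKE2.0), the payer and the user can be modelled as separate relationships to the same product:

```python
from dataclasses import dataclass

@dataclass
class Party:
    """A person or organisation known to the enterprise."""
    party_id: str
    name: str

@dataclass
class Product:
    product_id: str
    description: str

@dataclass
class Relationship:
    """Links a party to a product in a named role, so that financial
    accountability and day-to-day use are recorded separately."""
    party: Party
    product: Product
    role: str  # e.g. "financially_accountable" or "user"

# One product, two relationships: the employer pays, the employee uses.
employer = Party("P-001", "Acme Pty Ltd")
employee = Party("P-002", "Jo Citizen")
mobile = Product("PR-100", "Business mobile plan")

relationships = [
    Relationship(employer, mobile, "financially_accountable"),
    Relationship(employee, mobile, "user"),
]

# A service view keyed on the user rather than the payer:
jo_portfolio = [r.product.description for r in relationships
                if r.party.party_id == "P-002" and r.role == "user"]
print(jo_portfolio)  # ['Business mobile plan']
```

Keyed this way, the same product can appear in the employee’s personal portfolio and in the employer’s account view, without forcing the customer through the business/personal divide.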

In most industries, there is the opportunity to differentiate your service to this group by making it incredibly easy for them to manage their products and services with you.  To do this, don’t try to have all of the answers.  Use crowdsourcing to find answers to individual questions and even to design new solutions.  Most importantly, analyse the results to improve the services that the company provides, or even hand them off to third parties.

In the future, servicing this large number of customers may look more like an “App Store” with solutions from others that you have sanctioned.  You may even provide a budget for customers to download from this “App Store”, letting them drive a much more agile set of solutions than you could ever hope to design, let alone develop.

A vision for the future

Your customer of the future might be referred to a particular product through a social network.  Even after looking at an overview on a website and exploring it in context using augmented reality (using a digital device to combine visuals and location with external information such as real estate value, insurance premiums or network coverage), the potential customer might still need to talk to someone.  The call centre and the potential customer would both have the full suite of information in front of them at the same time and, if useful, live streaming from the digital device, allowing the call centre operator to close the transaction with a minimum of fuss.

As additional information comes to hand, before or after completing the transaction, it will be easy to add it to the portfolio even if it changes details such as the customer’s employer or the basis on which the product or service is being paid for.

Seamless integration is the key to satisfying the middle group of customers.  Such integration is needed in the backend systems (which must communicate using a lingua franca), the information layer (all channels need to be able to access the same information in real time) and the front end (digital, call centres and third parties servicing customers either on your behalf or in partnership with you).
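As one minimal sketch of what that lingua franca might look like (the message shape and field names are assumptions for illustration, not a published standard), every channel could read and write a single canonical portfolio document:

```python
import json

# A hypothetical canonical "portfolio" message. Each channel (digital,
# call centre, third party) consumes this one shape rather than each
# backend system's native record format.
portfolio_message = {
    "customer_id": "C-0042",
    "as_of": "2012-06-26T09:00:00Z",
    "products": [
        {"product_id": "PR-100", "type": "mobile", "status": "active"},
        {"product_id": "PR-210", "type": "deposit", "status": "active"},
    ],
}

# Serialised once, rendered anywhere: the website, the call centre
# screen and a partner's app all see the same portfolio in real time.
wire = json.dumps(portfolio_message)
print(json.loads(wire)["products"][0]["type"])  # -> mobile
```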

Achieving this vision is a journey of steps.

The first step requires you to review the use of master data to determine if it is structurally rich enough to describe how your customers want to navigate their products and services.

The second step is to review how well your digital channels leverage this master data strategy.  Ensure it is codified so that future changes to your master data (including the addition of new products and services) and navigational needs (such as bridging social networks, employers and other relationships) flow naturally through to the experience your customers have.

Finally, consider opening the door to collaborating with others to provide the best possible service to your customers.  Collaboration might bring together different business units, other customers, third parties acting on your behalf, and joint venture partnerships.

 

Category: Master Data Management

by: Phil Simon
24 Jun 2012

The Semantic Web Inches Closer

I’ve written before on this site about the vast implications of the forthcoming semantic web. In short, it will be a game-changer–but it certainly won’t happen anytime soon. Every day, though, I hear about organizations taking one more step in that direction. Case in point: A few days ago, Harvard announced that it was “making public the information on more than 12 million books, videos, audio recordings, images, manuscripts, maps, and more things inside its 73 libraries.” From the piece:

Harvard can’t put the actual content of much of this material online, owing to intellectual property laws, but this so-called metadata of things like titles, publication or recording dates, book sizes or descriptions of what is in videos is also considered highly valuable. Frequently descriptors of things like audio recordings are more valuable for search engines than the material itself. Search engines frequently rely on metadata over content, particularly when it cannot easily be scanned and understood.

This might not seem like a terribly big deal to the average person. Five years ago, I wouldn’t have given this announcement much thought. But think for a moment about the ramifications of such a move. After all, Harvard is a prominent institution and others will no doubt follow its lead here. More metadata from schools, publishers, record companies, music labels, and businesses means that the web will become smarter–much smarter. Search will continue to evolve in ways that relatively few of us appreciate or think about.
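To make the idea concrete, a catalogue record might look something like the sketch below (a Dublin Core-flavoured illustration; the article does not spell out Harvard’s actual schema):

```python
# A hypothetical catalogue record. Even with no access to the book
# itself, these descriptors give a search engine plenty to index,
# rank, and relate to other items.
record = {
    "title": "Moby-Dick; or, The Whale",
    "creator": "Melville, Herman",
    "type": "text",
    "date": "1851",
    "format": "xxiii, 635 p. ; 20 cm",  # physical description, incl. book size
    "subjects": ["Whaling -- Fiction", "Sea stories"],
    "language": "en",
}
```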

Understanding Why

And let’s not forget about data mining and business intelligence. Forget about knowing more about who buys which books, although this is of enormous importance. (Ask Jeff Bezos.) Think about knowing why these books or CDs or movies sell–or, perhaps more important, don’t sell. Consider the following questions and answers:

  • Are historical novels too long for the “average” reader? We’ll come closer to knowing because metadata includes page and word counts (a sketch of this kind of analysis follows this list).
  • Which book designs result in more conversions? Are there specific fonts that readers find more appealing than others?
  • Are certain keywords registering more with a niche group of readers? We’ll know because tools will allow us to perform content and sentiment analysis.
  • Which authors’ books resonate with which readers? Executives at companies like Amazon and Apple must be frothing at the mouth here.
  • Which customers considered buying a book but ultimately did not? Why did they opt not to click the buy button?

I could go on but you get my drift. Metadata and the semantic web collectively mean that no longer will we have to look at a single book sale as a discrete event. We’ll be able to know so much more about who buys what and why. Ditto cars, MP3s, jelly beans, DVDs, and just about any other product out there.

Simon Says

In the next ten years, we still may not be able to answer every commerce-related question–or any question in its entirety. However, a more semantic web means that a significant portion of the mystery behind the purchase will be revealed. Every day, we get a little closer to a better, more semantic web.

Feedback

What say you?

Category: Metadata, Semantic Web

by: Bsomich
23 Jun 2012

Weekly IM Update.

 
 

Did You Know? 

MIKE’s Integrated Content Repository brings together the open assets from the MIKE2.0 Methodology, shared assets available on the internet and internally held assets. The Integrated Content Repository is a virtual hub of assets that can be used by an Information Management community, some of which are publicly available and some of which are held internally.

Any organisation can follow the same approach and integrate their internally held assets with the open standard provided by MIKE2.0 in order to:

  • Build community
  • Create a common standard for Information Development
  • Share leading intellectual property
  • Promote a comprehensive and compelling set of offerings
  • Collaborate with the business units to integrate messaging and coordinate sales activities
  • Reduce costs through reuse and improve quality through known assets

The Integrated Content Repository is a true Enterprise 2.0 solution: it makes use of the collaborative, user-driven content built using Web 2.0 techniques and technologies on the MIKE2.0 site and incorporates it internally into the enterprise. The approach followed to build this repository is referred to as a mashup.

Feel free to try it out when you have a moment; we’re always open to new content ideas.

Sincerely,

MIKE2.0 Community  

 
Contribute to MIKE:

Start a new article, help with articles under construction or look for other ways to contribute.

Update your personal profile to advertise yourself to the community and interact with other members.

Useful Links:
Home Page
Login
Content Model
FAQs
MIKE2.0 Governance


 

Did You Know?
All content on MIKE2.0 and any contributions you make are published under the Creative Commons license. This allows you free re-use of our content as long as you add a brief reference back to us.

 

This Week’s Blogs:

Big Data: Is it really that different? 

The interesting thing about new data sources and streams is that some of the old tools just don’t cut it. Consider the relational database. Data from CRM or ERP applications fits nicely into transactional tables linked by primary and foreign keys. On many levels, however, social media is far from CRM and ERP–and this has profound ramifications for data management.

Read more.
 

 Overcoming Information Overload

Information is a key asset for every organization, yet due to the rise of technology, Web 2.0 and a general overabundance of raw data, many businesses are not equipped to make sense of it all.

How can managers overcome an age of “information overload” and begin to concentrate on the data that is most meaningful for the business?  Based on your experience, do you have any tips to share?

Read more.
  
 

   New Survey Finds Collaborative Technologies Can Improve Corporate Performance

A new McKinsey Quarterly report, The rise of the networked enterprise: Web 2.0 finds its payday, released just this week, is proving to be a treasure chest of information on the value of collaborative technologies. The latest in a series of similar McKinsey studies over the years, it draws on a survey of over 3,000 executives across a range of regions, industries and functional areas. It provides detailed information on business value and measurable benefits in multiple venues of collaboration: between employees, with customers, and with business partners. It also examines the link—and finds strong correlations—between implementing collaborative technologies and corporate performance.

Read more.

 
Forward to a Friend!

Know someone who might be interested in joining the MIKE2.0 Community? Forward this to a friend.

Questions?

If you have any questions, please email us at mike2@openmethodology.org.

 

 

 

Category: Information Development

by: Phil Simon
17 Jun 2012

Big Data: Is it Really That Different?

Nate Silver is a really smart guy. Not even 40, he’s already developed tools to analyze statistics ultimately bought by Major League Baseball. He writes about politics and stats for the New York Times. And, at the Data Science Summit here in Las Vegas, I saw him speak about Big Data. (You can watch the video of that talk here.)

DSS was all about Big Data, a topic with which I’m not terribly familiar. (It is a fairly new term, after all.) I suspected that Silver would be speaking over my head. But a funny thing happened when I watched him speak: I was able to follow most of what he was saying.

By way of background, I’m a numbers guy and have been for a very long time. Whether in sports or work, I like statistics. I like data. I remember most of the material from my different probability and stat courses in college. I understand terms like p values, statistical significance, and confidence intervals. This isn’t that difficult to understand; it’s not like I’m a full-time poet or ballet dancer.

Wide, Not Long

The interesting thing about new data sources and streams is that some of the old tools just don’t cut it. Consider the relational database. Data from CRM or ERP applications fits nicely into transactional tables linked by primary and foreign keys. On many levels, however, social media is far from CRM and ERP–and this has profound ramifications for data management. Case in point: Twitter’s backend runs on Scala-based services, not on a traditional relational database. Why? To make a long story short, the type of data generated and accessed by Twitter just can’t be served fast enough by a traditional RDBMS architecture. This type of data is wide, not long. Think columns, not rows.
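A schematic of the contrast (an illustration of the two shapes, not Twitter’s actual storage layout):

```python
# "Long" and relational: one row per transaction with a fixed set of
# columns -- the classic CRM/ERP shape.
orders = [
    # (order_id, customer_id, date, amount)
    ("o1", "cust42", "2012-06-17", 19.99),
    ("o2", "cust42", "2012-06-18", 5.00),
]

# "Wide": one record per entity with a sparse, open-ended set of
# columns created on demand -- closer to how social data accumulates,
# where new attributes appear constantly.
user = {
    "user:42": {
        "handle": "philsimon",
        "follower:1001": 1,
        "follower:2002": 1,
        "tweet:2012-06-17T07:31": "At the Data Science Summit...",
    }
}
```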

To this end, Big Data requires some different tools and Scala is just one of many. Throw Hadoop in there as well. But our need to use “some different tools” does not mean that tried and true statistical principles fall by the wayside. On the contrary, old stalwarts like Bayes’ Theorem are still as relevant as ever, as Silver pointed out during his speech.
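For the record, the stalwart in question: Bayes’ Theorem, P(A|B) = P(B|A) × P(A) / P(B), which updates a prior belief with new evidence. A tiny worked sketch with invented numbers:

```python
# Invented example: how likely is a user to churn, given that they
# filed a support ticket this week?
p_churn = 0.02               # prior: 2% of users churn
p_ticket_given_churn = 0.30  # 30% of churners filed a ticket
p_ticket = 0.05              # 5% of all users filed a ticket

# Bayes' Theorem: P(churn|ticket) = P(ticket|churn) * P(churn) / P(ticket)
p_churn_given_ticket = p_ticket_given_churn * p_churn / p_ticket
print(f"P(churn | ticket) = {p_churn_given_ticket:.0%}")  # 12%
```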

Simon Says: The Best of Both Worlds

In an era of Big Data, there will be winners and losers. Organizations that see success will be the ones that combine the best of the old with the best of the new. Retrofitting unstructured or semistructured data into RDBMSs won’t cut it, nor will ignoring still-relevant statistical techniques. Use the best of the old and the best of the new to succeed in the Big Data world.

Feedback

What say you?

Category: Information Management

by: Phil Simon
08 Jun 2012

Big Data and A-B Testing

Test Everything. That’s the title of a recent Wired magazine article on the merits and usage of A/B testing. The piece is nothing less than fascinating and I encourage you to check it out. In this post, I’d like to chime in with my own thoughts on the subject.

Benefits

Perhaps first and foremost, A/B testing allows you to generate your own data. That is, organizations can be proactive with regard to data management. This is in stark contrast to the practices of far too many companies that rely almost exclusively on much more reactive data management. A/B testing does not eliminate the need for judgment. There’s still some art to go with that science. But, without question, A/B testing allows data management professionals to increase the mix of science in that recipe. From the aforementioned Wired piece:

“It [A/B testing] is your favorite copyeditor,” says IGN co-founder Peer Schneider. “You can’t have an argument with an A/B testing tool like Optimizely, when it shows that more people are reading your content because of the change. There’s no arguing back. Whereas when your copyeditor says it, he’s wrong, right?” This comment stings retroactively, as forty-eight hours later I would cost his company umpteen clicks with my misguided “improvement.”

Think about the power of data. In my experience, data naysayers often discount data because, in large part, they’re not “numbers people.” While some people will always find reasons to discredit what they don’t understand, A/B testing can provide some pretty strong ammunition against skeptics. After all, what happens when website layout A, book cover A, or product description page A shows twice the level of engagement of its alternatives? Even a skeptic will have to admit defeat.

Limitations

Of course, A/B testing is hardly a panacea. The basic laws of statistics still apply, not the least of which is the notion of statistical significance. A site that gets 50,000 unique hits per day can reasonably chop its audience in two and, in the end, feel confident that any results are genuine. Without getting all “statsy”, there’s not much of a chance of either a Type I or a Type II error with such sample sizes. Now, if a site gets 50 unique visitors per day, the chances of seeing a false positive, or of failing to detect a legitimate cause-and-effect relationship, are considerably higher.
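As a back-of-the-envelope illustration, here is a sketch using a standard two-proportion z-test (the traffic and conversion numbers are invented):

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates.
    Returns the z statistic and an approximate p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# 50,000 daily visitors split in half: a 4.0% vs 4.4% conversion
# difference is comfortably detectable (p is roughly 0.03)...
print(two_proportion_z_test(1000, 25_000, 1100, 25_000))

# ...but with 50 visitors split in half, even a 4% vs 8% gap is
# indistinguishable from noise (p is roughly 0.55).
print(two_proportion_z_test(1, 25, 2, 25))
```

Beyond statistics, though, there’s a stylistic or design issue with A/B testing. Consider the famous quote by Steve Jobs: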

“It’s really hard to design products by focus groups. A lot of times, people don’t know what they want until you show it to them.” – BusinessWeek, May 25, 1998

Do you really want to crowdsource everything? There’s something to be said for the vision of an individual, small team, or small company. Giving everyone a vote may well drive a product to mediocrity. That is, in an attempt to please everyone, you’ll please no one.

Simon Says

Whether A/B testing is right for your organization hinges upon a bevy of factors. Timing, culture, and sample sizes are just a few things to consider. If you go down this road, though, don’t stop just because you don’t like what the data are telling you.

Feedback

What say you?

Category: Business Intelligence, Information Development

by: Phil Simon
01 Jun 2012

Mobile BI Mistakes: Know Your Organization

In my first book, Why New Systems Fail, I write about the perils of IT projects. Clocking in at over 350 pages, it’s certainly not a short book. In a nutshell, projects fail because of people, organizations, communications, and culture more than purely technological issues.

I began writing that book in mid-2008, long before apps and mobile BI had reached critical mass. While I know that technologies always change, when I leaf through my first text, I find that the book’s lessons are still very relevant today.

For instance, consider mobile BI. In an Information Management piece on the ten most common mobile BI mistakes, Lalitha Chikkatur writes:

  1. Assuming mobile BI implementation is a project, like a traditional BI implementation
  2. Underestimating mobile BI security concerns
  3. Rolling out mobile BI for all users
  4. Believing that return on mobile BI investment cannot be derived
  5. Implementing mobile BI only for operational data
  6. Assuming that mobile BI is appropriate for all kinds of data
  7. Designing mobile BI similar to traditional BI design
  8. Assuming that BI is the only data source for mobile BI
  9. Believing mobile BI implementation is a one-time activity
  10. Claiming that any device is good for the mobile BI app

It’s an interesting list and I encourage you to check out the entire post.

The Good, the Bad, and the Ugly

I’d argue that you could pretty much substitute ‘mobile BI’ for just about any contemporary enterprise technology, but let’s talk for a minute about the actual devices upon which mobile applications run.

There have always been laggards with any new technology, and mobile BI is no exception to that rule. Think about the potential success of mobile BI in a typical large healthcare organization vs. a typical retail outlet.

Healthcare

In a very real way, mobile BI is quite similar to electronic medical records (EMRs). The technology behind EMRs has existed for quite some time, yet its low penetration rate is often criticized, especially in the United States. Why? The reasons are multifaceted, but user adoption is certainly at or near the top of the list. (For more on this, check out this HIMSS white paper.)

Generally speaking, many senior doctors have (at least in their view) been doing just fine for decades without having to mess around with smartphones, tablets, and other doohickeys. Old school doctors rely exclusively upon paper charts and pens. Technology just gets in the way and many of them just plain don’t get it.

Does this mean that the successful deployment of mobile BI is impossible at a hospital? Of course not. Just understand that many users will be detractors and naysayers, not advocates.

Retail

Retail, on the other hand, is quite a different animal on many levels (read: margins, employee turnover, elasticity of demand for the product, tax implications). Here, though, let’s focus on the end user.

Go into just about any retail store and I’ll bet you a steak vs. Saab that most of its employees aren’t much older than 30. Translation: they grew up on computers and the Internet. As such, they’re all too willing to experiment with tablets, apps, and mobile BI. The fear factor is gone and that fundamental willingness to experiment cannot be overstated.

Simon Says

Technology progresses faster than most users can embrace it–or, at least, want to embrace it. Like any technology, mobile BI can only be successful if your employees allow it to be.

Feedback

What say you?

Category: Business Intelligence
