
Archive for March, 2012

by: Bsomich
29  Mar  2012

Data-Driven IT Transformation: 3 Factors Impacting the Success of Your Program

The “IT Transformation” of an organization from its legacy environment to the next generation of technology is one of the most complex and expensive changes an organization can undergo. Just building momentum to begin the program can be extremely difficult, as organizations typically undergo months (maybe years) of planning and false starts. When implementation finally begins, the team often focuses on the perceived pressure point – configuring the new off-the-shelf applications to meet the functional capabilities desired by the business.

The realization that eventually comes (often too late) is that the problem isn’t whether the new applications can provide the required functionality. These programs are starting in the wrong place and aren’t dealing with the real problems:

  • How to improve and optimize business processes
  • How to manage information across the enterprise
  • How to safely migrate from the legacy to the contemporary environment
  • How to deliver on a transition strategy that provides incremental functionality while mitigating risk and staying within budget
  • How to define an improvement strategy for your people, processes, and organization as well as the technology

Of all these factors, how information is managed is the most crucial to the success of the program. Unlike other areas (application functionality, integration, process, presentation), there is no choice with data but to fix the issues of the past – there is no opportunity to “start over.” For a complex, federated organization, the problem is compounded by the high degree of decentralization of systems and processes.

The MIKE2.0 Leadership Team believes meaningful, cost-effective business and technology change can be achieved much more effectively by taking an Information Development approach, and our methodology can be an integral part of a successful transformation. Our Executive Overview on Data Driven IT Transformation provides an introductory presentation on this approach.

3 Key Choices Impacting Your IT Transformation Program

We believe there are three strategic choices that will affect how you define your transformation strategy:

Will your development streams be application-driven or business-driven?

A business-driven development approach means that applications are not the only priority; equal focus must be given to information management and integration. Taking this approach will affect not only the technology solution, but also how the organization is run and how programs are typically funded. Moving to a business-driven approach means that application implementation at the project level no longer solely drives the architecture and the Technology Backplane; the integration layer and the data layer are funded and implemented at the enterprise level. Some organizations are better suited to an application-driven approach, as a business-driven approach also brings more inherent complexity across a federated environment. This choice is fundamental to an organization’s resulting technology requirements and must therefore be weighed very carefully.

Will your approach to integration use a common, standards-based approach or will it be driven by applications?

Integration is an inherently complex activity – not only in terms of technology but also in terms of organizational behavior and processes (it requires a lot of people to work together). Integration teams are typically assigned the most difficult tasks on the project, addressing limitations in legacy applications through the perceived newest and most flexible technology solutions. Given how rapidly these technologies have evolved, it is not surprising that much of the legacy integration environment is a mess – a “spider web” of integration touch-points between systems.

Contemporary integration technologies such as EAI were supposed to fix this mess – and help break you out of the spider web. Certainly, off-the-shelf integration technologies provide a capability to do new development more quickly, but many of the environments are perhaps more complex (and less reusable) than before.

The lack of standards is what makes integration difficult. Without proper standards, new technologies just make the spider web bigger (and more expensive) – automating the spider web through application-driven integration.

Implementing a Service-Oriented Architecture through open and common standards is the key to breaking out of the spider web. Just as the advent of standards in other areas of IT has greatly simplified database design, network communications, and inter-office collaboration, standards are the key to simplifying integration.

Will you make the management of information a critical focus of your organization?

We believe that the Information Development approach is an Evolutionary Path that will be delivered over time and will be complemented by emerging standards in business and technology.

The MIKE2.0 Leadership Team believes this is the “path to the future” for many organizations. The changing landscape in business will require companies to adapt their current approach to meet the challenging and complex environment ahead. The real winners will be those that can manage information to serve their customers better, and use information and improved processes to take cost out of their business.

Category: Information Strategy
No Comments »

by: Phil Simon
20  Mar  2012

On Control, Structure, and Strategy

In “Can You Use Big Data? The Litmus Test”, Venkatesh Rao writes about the impact of Big Data on corporate strategy and structure. Rao quotes Alfred Chandler’s famous line, “structure follows strategy.” He goes on to claim that, “when the expressivity of a technology domain lags the creativity of the strategic thinking, strategy gets structurally constrained by technology.”

It’s an interesting article and, while reading it, I couldn’t help but think of some thought-provoking questions around implementing new technologies. That is, today’s post isn’t about Big Data per se. It’s about the different things to consider when deploying any new information management (IM) application.

Two Questions

Let’s first look at the converse of Rao’s claim. Specifically, doesn’t the opposite tend to happen (read: technology is constrained by strategy)? How many organizations do not embrace powerful technologies like Big Data because they don’t fit within their overall strategies?

For instance, Microsoft could have very easily embraced cloud computing much earlier than it did. Why did it drag its feet and allow other companies to beat it to the punch? Did it not have the financial means? Of course not. I would argue that this was all about strategy. For years, Microsoft had monopolized the desktop market and many on-premise applications like Windows and Office.

To that end, cloud computing represented a threat to Microsoft’s multi-billion dollar revenue stream. Along with open source software and the rise of mobility, in 2007 one could start to imagine a world in which Microsoft would be less relevant than it was in 2005. The move towards cloud computing would have happened with or without Microsoft’s blessing, and no doubt many within the company thought it wise to maximize revenues while it could. (This isn’t inherently good or bad. It just supports the notion that strategy constrains technology as well.)

The Control Factor

Next up, what about control? What role does control play in structure and strategy? How many organizations and their employees have historically resisted implementing new technologies because key players simply refused to relinquish control over key data, processes, and outcomes? In my experience, quite a few.

I think here about my days working on large ERP projects. In the mid-2000s, employee and vendor self-service became more feasible but many organizations hemmed and hawed. They chose not to  deploy these time-saving and ultimately more efficient applications because internal resistance proved far too great to overcome. In the end, quite a few directors and middle managers did not want to cede control of “their” business processes and ownership of “their” data because it would make them less, well, essential.

Simon Says: It’s Never Just about One Thing

Strategy, culture, structure, and myriad other factors all play significant roles in any organization’s decision to deploy–or not to deploy–any given technology. In an ideal organization, all of these essential components support each other. That is, a solid strategy is buttressed by a healthy and change-tolerant structure and organizational culture. One without the other two is unlikely to result in the effective implementation of any technology, whether it’s mature or cutting-edge.

Feedback

What say you?

Category: Information Strategy, Open Source
No Comments »

by: Robert.hillard
18  Mar  2012

It’s time for a new definition of big data

Two words seemingly on every technologist’s lips are “big data”.  The Wikipedia definition for big data is: “In information technology, big data consists of datasets that grow so large that they become awkward to work with using on-hand database management tools”.  This approach to describing the term constrains the discussion of big data to scale and fails to realise the key difference between regular data and big data.  The blog posts and books which cover the topic seem to converge on the same approach to defining big data and describe the challenges with extracting value from this resource in terms of its size.

Big data can really be very small and not all large datasets are big!  It’s time to find a new definition for big data.

Big data that is very small

Modern machines such as cars, trains, power stations and planes all have increasing numbers of sensors constantly collecting masses of data.  It is common to talk of having thousands or even hundreds of thousands of sensors all collecting information about the performance and activities of a machine.

Imagine a plane on a regular one hour flight with a hundred thousand sensors covering everything from the speed of air over every part of the airframe through to the amount of carbon dioxide in each section of the cabin.  Each sensor is effectively an independent device with its own physical characteristics.  The real interest is usually in combinations of sensor readings (such as carbon dioxide combined with cabin temperature and the speed of air combined with air pressure).  With so many sensors the combinations are incredibly complex and vary with the error tolerance and characteristics of individual devices.

The data streaming from a hundred thousand sensors on an aircraft is big data.  However, the size of the dataset is not as large as might be expected.  Even a hundred thousand sensors, each producing an eight-byte reading every second, would produce less than 3GB of data in an hour of flying (100,000 sensors x 60 minutes x 60 seconds x 8 bytes).  This amount of data would fit comfortably on a modest memory stick!
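
The arithmetic is easy to check. Here is a minimal sketch in Python, using the same sensor count, reading size, and flight duration assumed above:

    # Back-of-the-envelope check of the sensor data volume quoted above.
    sensors = 100_000          # independent sensors on the aircraft
    reading_bytes = 8          # one eight-byte reading per sensor, per second
    flight_seconds = 60 * 60   # a one-hour flight

    total_bytes = sensors * reading_bytes * flight_seconds
    print(f"{total_bytes / 1e9:.2f} GB per hour of flying")  # prints 2.88 GB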

Large datasets that aren’t big

We are increasingly seeing systems that generate very large quantities of very simple data.  For instance, media streaming is generating very large volumes with increasing amounts of structured metadata.  Similarly, telecommunications companies have to track vast volumes of calls and internet connections.

Even if these two activities are combined, and petabytes of data is produced, the content is extremely structured.  As search engines, such as Google, and relational databases have shown, datasets can be parsed extremely quickly if the content is well structured.  Even though this data is large, it isn’t “big” in the same way as the data coming from the machine sensors in the earlier example.

Defining big data

If size isn’t what matters, then what makes big data big?  The answer is in the number of independent data sources, each with the potential to interact.  Big data doesn’t lend itself well to being tamed by standard data management techniques simply because of its inconsistent and unpredictable combinations.

Another attribute of big data is its tendency to be hard to delete, making privacy a common concern.  Imagine trying to purge all of the data associated with an individual car driver from toll road data.  The sensors counting the number of cars would no longer balance with the individual billing records which, in turn, wouldn’t match payments received by the company.

Perhaps a good definition of big data describes “big” in terms of the number of possible permutations of sources, which makes useful querying difficult (like the sensors in an aircraft), and of complex interrelationships, which make purging difficult (as in the toll road example).

Big then refers to big complexity rather than big volume.  Of course, valuable and complex datasets of this sort naturally tend to grow rapidly and so big data quickly becomes truly massive.
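
As a rough illustration of complexity outpacing volume, consider how quickly the number of potentially interacting sensor combinations grows in the aircraft example; the figures below assume the same hundred thousand sensors:

    # Rough illustration: the number of sensor combinations grows far
    # faster than the raw data volume ever could.
    from math import comb

    sensors = 100_000
    print(f"pairs of sensors:   {comb(sensors, 2):,}")  # ~5.0 billion combinations
    print(f"triples of sensors: {comb(sensors, 3):,}")  # ~1.7e14 combinations

Roughly five billion pairwise combinations dwarf the 2.88GB of raw readings, which is exactly the sense in which this small dataset is big.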

Category: Information Strategy
11 Comments »

by: Phil Simon
13  Mar  2012

On Data Migration Estimates

The MIKE 2.0 framework provides specific tools to promote effective data migration. Today I’d like to discuss the paramount importance of the initial estimate.

First up, let me state the obvious: Not all data migrations are created equal. While particulars vary, they can be safely categorized into three big buckets: light, medium, and heavy. Which one most aptly describes your organization? This simple question is often the most important one on any information management (IM) project. What’s more, it’s often answered incorrectly by the following folks:

  • clients eager to minimize the budgets of their projects
  • software vendors eager to book a sale, especially before the end of a quarter or fiscal year
  • consulting firms, afraid of being too difficult for fear of not winning the business

In my experience, far too many senior executives severely underestimate the amount of time, money, and effort involved in moving from one or more systems to one or more different ones. It’s hard to overstate the importance of being honest with yourself before an IM project commences.

For instance, let’s say that the picture below accurately represents what you believe to be the state of your organization’s systems and data–and their ultimate destination (see #7):

On the basis of the above diagram, you and your consulting partners estimate that it will take about three months to extract, cleanse, transfer, and load your data into the new target systems. Upon commencing the project, however, you discover that your organization actually supports more systems, applications, and standalone databases than previously known. (Don’t believe that this happens? Trust me. I’ve seen this more than a few times in my days as an IT consultant.)

As a result of these discoveries, the new architecture looks more like this:

To boot, that three-month estimate is now laughably low. The fallout begins. Consulting firms press for change orders and for more money. Clients resist this–and start complaining about the customer focus of their new partner. Internally, fingers are pointed at each other and obscenities are exchanged. People resign or are fired.
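
A crude, purely hypothetical sketch shows why the blow-out is so severe: if effort scales even roughly with the number of source-to-target mappings that must be analyzed, cleansed, and tested (a simplifying assumption for illustration, not a MIKE2.0 estimating formula), then tripling the number of discovered source systems triples the timeline:

    # Hypothetical illustration only: the system counts and the rate of half
    # a month of elapsed work per source-to-target mapping are invented here.
    def estimate_months(source_systems, target_systems, months_per_mapping=0.5):
        return source_systems * target_systems * months_per_mapping

    print(estimate_months(3, 2))   # 3.0 months – the original plan
    print(estimate_months(9, 2))   # 9.0 months – after the "discoveries"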

Simon Says: Truth in Advertising

Few things are more important than understanding the amount of time, money, and effort required to successfully migrate data. If in doubt, overestimate. It’s always easier to come in under a higher budget than over a lower one.

If you suspect that your organization falls into the medium scenario, don’t be afraid to call a spade a spade. Representing your company as light when it’s really medium (or medium when it’s really heavy) is a recipe for disaster.

Finally, realize that a good-faith estimate is just that. Anyone who believes that he or she can predict with absolute certainty how many hours or how much money is required on an IM project of any size is delusional. Take estimates with a grain of salt. Problems always emerge. Just be prepared when they do.

Feedback

What say you?

Category: Information Management
2 Comments »

by: Phil Simon
08  Mar  2012

Publishing and Big Data

As the author of four books, I pay close attention to the publishing world, especially with respect to its use of emerging technologies. Brass tacks: I often wonder whether traditional publishers have truly embraced the Information Age. While I understand the resistance, can’t data analysis and business intelligence help publishing houses improve their batting averages (read: select more successful books, reach their readers better, and avoid expensive mishaps)?

These are just some of the issues broached at O’Reilly’s Tools of Change for Publishing conference. This annual event bills itself as the place in which “the publishing and tech industries converge, as practitioners and executives from both camps share what they’ve learned from their successes and failures, explore ideas, and join together to navigate publishing’s ongoing transformation.” From a recent article reflecting upon TOC 2012:

If one thing was clear from this year’s TOC it’s that the publishing business is finally getting serious about data and analytics. This can mean looking at the granularity of day-to-day marketing strategies — such as when to send out a tweet to get maximum re-tweets on your social network (there’s an app for that), making quantitative assessments of the number of books a “library power patron” will buy based on their reading habits (one bought for every two borrowed), or the likelihood of a German to feel bad about downloading a pirated e-book from BitTorrent (not so much).

Years ago, most publishers selected books based exclusively on the recommendations of acquisition editors. AEs have been the gatekeepers, those coveted folks who somehow knew which books would be successful.

Except many of them didn’t.

Bad Batting Averages

In fact, the number of misses by big publishers is pretty astounding. Stephen King received hundreds of scathing rejection letters before he proved himself a book-selling machine. Publishers passed on initial manuscripts of Chicken Soup for the Soul, a franchise that has reached tens of millions of people. John Grisham self-published his first book.

I could go on, but you get my point: relying upon hunches and intuition isn’t exactly a recipe for successful decisions, and book sales are no exception to this rule. Slowly, publishers are recognizing this fact and embracing analytics and Big Data. They have to; their margins are being squeezed and they have no choice but to adapt or die. While developing the perfect equation to predict book sales may be impossible (there are always Black Swans), no doubt publishers can benefit from a more information-driven approach to managing their business. After all, it worked for the Oakland A’s, right?

Simon Says: It’s Not Just About Previous Book Sales

Individual judgment will always matter in evaluating any business opportunity. No one is saying that machines, data, and algorithms need to completely supplant the need for human intervention. The data may tell us what, but it may not tell us why. Plus, there are always times in which it makes sense to bet big the other way–to ignore the data. Twenty years ago, an author who sold 20,000 copies of a book could be expected to sell more or less the same number of the next one. These days, however, publishers are using new and often fuzzier metrics like an author’s (or prospective author’s):

  • Twitter followers
  • site’s Google PageRank or Alexa ranking
  • RSS subscribers
  • size of mailing list
  • number of Facebook fans
  • Klout score
  • and others

Feedback

What say you?

 

Category: Business Intelligence
No Comments »

by: Bsomich
08  Mar  2012

Is your dashboard working?

It’s no wonder they call it a dashboard… Like your car, your business could scarcely move forward without it. A well designed intelligence dashboard provides a snapshot of the overall health of your operations and gives immediate insight into areas that need improvement or tuning. In short, it’s key to keeping your finger on the pulse of your business.

At a minimum it helps:

1. Provide needed performance metrics and data for decision making.

2. Save time by showing only key reports.

3. Communicate trends with team members and management.

Most BI analytics and reporting programs (Google, Salesforce, Radian6, and Raven, to name a few) offer this feature, but how well are we using it? To get the most out of your intelligence software, your analytics dashboard must (at a minimum) be up-to-date, designed with your bottom line in mind, and easily shareable and accessible.

It sounds like a no-brainer, but meeting these requirements is often easier said than done. Most organizations struggle with data quality and input delays. Does your sales or HR department update their progress on a daily or monthly basis? Are all the necessary fields being populated? If the information is not timely or correct, your dashboard and decision making will suffer as a result.

If data quality isn’t your problem, what about the design? When it comes to the layout and contents of your dashboard, you could easily be making a crucial mistake. Are you focusing on the correct reports that  impact the bottom line for your department? Is unnecessary information deterring viewers from seeing the real picture? Even the best dashboards need fine tuning as goals and business needs change.

What about accessibility? Possibly the most crucial requirement for a well-performing intelligence dashboard is the ability for team members to easily access and share information in real time. Setting edit and access levels that allow the right people to pull reports and make changes sounds elementary, but it is often overlooked when designing your system.

What factors do you think contribute to a well-oiled intelligence dashboard, and how well is yours working?

Category: Business Intelligence
2 Comments »

by: Bsomich
03  Mar  2012

Weekly IM Update.

 
 

Did You Know? 

MIKE’s Integrated Content Repository brings together the open assets from the MIKE2.0 Methodology, shared assets available on the internet and internally held assets. The Integrated Content Repository is a virtual hub of assets that can be used by an Information Management community, some of which are publicly available and some of which are held internally.

Any organisation can follow the same approach and integrate their internally held assets with the open standard provided by MIKE2.0 in order to:

  • Build community
  • Create a common standard for Information Development
  • Share leading intellectual property
  • Promote a comprehensive and compelling set of offerings
  • Collaborate with the business units to integrate messaging and coordinate sales activities
  • Reduce costs through reuse and improve quality through known assets

The Integrated Content Repository is a true Enterprise 2.0 solution: it makes use of the collaborative, user-driven content built using Web 2.0 techniques and technologies on the MIKE2.0 site and incorporates it internally into the enterprise. The approach followed to build this repository is referred to as a mashup.

Feel free to try it out when you have a moment – we’re always open to new content ideas.

Sincerely,

MIKE2.0 Community  

 
Contribute to MIKE:

Start a new article, help with articles under construction or look for other ways to contribute.

Update your personal profile to advertise yourself to the community and interact with other members.

Useful Links:
Home Page
Login
Content Model
FAQs
MIKE2.0 Governance


 

Did You Know?
All content on MIKE2.0 and any contributions you make are published under the Creative Commons license. This allows you free re-use of our content as long as you add a brief reference back to us.

 

This Week’s Blogs:

The Data War Room

The 1993 documentary The War Room tells the story of the 1992 US presidential campaign from a behind-the-scenes perspective. The film shows first-hand how Bill Clinton’s campaign team responded to different crises, including allegations of marital infidelity. While a bit dated today, it’s nonetheless a fascinating look into “rapid response” politics just when technology was starting to change traditional political media.

Today, we’re starting to see organizations set up their own data war rooms for essentially the same reasons: to respond to different crises and opportunities.

Read more.

 Overcoming Information Overload

Information is a key asset for every organization, yet due to the rise of technology, web 2.0 and a general overabundance of raw data, many businesses are not equipped to make sense of it all.

How can managers overcome an age of “information overload” and begin to concentrate on the data that is most meaningful for the business?  Based on your experience, do you have any tips to share?

Read more.   

   New Survey Finds Collaborative Technologies Can Improve Corporate Performance

A new McKinsey Quarterly report, The rise of the networked enterprise: Web 2.0 finds its payday, released just this week, is proving to be a treasure chest of information on the value of collaborative technologies. The latest in a series of similar studies from McKinsey over the years, it comes out of a survey of over 3,000 executives across a range of regions, industries and functional areas. It provides detailed information on business value and measurable benefits in multiple venues of collaboration: between employees, with customers, and with business partners. It also examines the link—and finds strong correlations—between implementing collaborative technologies and corporate performance.

Read more.

 
Forward to a Friend!

Know someone who might be interested in joining the Mike2.0 Community? Forward this to a friend

Questions?

If you have any questions, please email us at mike2@openmethodology.org.

 

 

 


Category: Information Development
No Comments »

by: Phil Simon
01  Mar  2012

The Data War Room

The 1993 documentary The War Room tells the story of the 1992 US presidential campaign from a behind-the-scenes perspective. The film shows first-hand how Bill Clinton’s campaign team responded to different crises, including allegations of marital infidelity. While a bit dated today, it’s nonetheless a fascinating look into “rapid response” politics just when technology was starting to change traditional political media.

Today, we’re starting to see organizations set up their own data war rooms for essentially the same reasons: to respond to different crises and opportunities. Information Week editor Chris Murphy writes about one such company in “Why P&G CIO Is Quadrupling Analytics Expertise”:

[Procter & Gamble CIO Filippo] Passerini is investing in analytics expertise because the model for using data to run a company is changing. The old IT model was to figure out which reports people wanted, capture the data, and deliver it to the key people weeks or days after the fact. “That model is an obsolete model,” he says.

Murphy hits the nail on the head in this article. Now, let’s delve a bit deeper into the need for a new model.

The Need for a New Model

There are at least three factors driving the need for a new information management (IM) model in many organizations. First, let’s look at IT track records. How many organizations invested heavily in the late 1990s and early 2000s in expensive, on-premise ERP, CRM, and BI applications–only to have these investments ultimately disappoint the vast majority of stakeholders? Now, on-premise isn’t the only option. Big Data and cloud computing are gaining traction in many organizations.

Next up: time to respond. Beyond the poor track record of many traditional IT investments, we live in different times relative to even ten years ago. Things happen so much faster today. Why? The usual suspects are the explosion of mobility, broadband, tablets, and social media. Ten years ago, the old, reactive requirement-driven IM model might have made sense. Today, however, that model is becoming increasingly difficult to justify. For instance, a social media mention might cause a run on products. By the time proper requirements have been gathered, the crisis has probably escalated. An opportunity has probably been squandered.

Third, data analysis and manipulation tools have become much more user-friendly. Long gone are the days in which people needed a computer science or programming background to play with data. Of course, data modeling, data warehousing, and other heavy lifting necessitate more technical skills and backgrounds. But the business layperson, equipped with the right tools and a modicum of training, can easily investigate and drill down on issues related to employees, consumers, sales, and the like.
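
As a simplified sketch of that kind of self-service drill-down (the sales.csv file and its region, product, and revenue columns are hypothetical stand-ins for an export from whatever system the business user already has access to):

    # Minimal self-service drill-down sketch; the file and columns are
    # hypothetical examples, not tied to any particular BI tool.
    import pandas as pd

    sales = pd.read_csv("sales.csv")

    # Which regions are driving revenue, and what sells in the weakest one?
    by_region = sales.groupby("region")["revenue"].sum().sort_values()
    weakest = by_region.index[0]
    top_products = (sales[sales["region"] == weakest]
                    .groupby("product")["revenue"].sum()
                    .sort_values(ascending=False))

    print(by_region)
    print(top_products.head(10))

No requirements document, no six-week report queue.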

Against this new backdrop, which of the following makes more sense?

  • IT analysts spending the next six weeks or months interacting with users and building reports?
  • Skilled users creating their own reports, creating and interpreting their own analytics, and making business decisions with minimal IT involvement (aka, self service)?

Simon Says

Building a data war room is no elixir. You still have to employ people with the skills to manage your organization’s data–and hold people accountable for their decisions. Further, rapid response means making decisions without all of the pertinent information. If your organization crucifies those who make logical leaps of faith (but ultimately turn out to be “wrong” in their interpretation of the data), it’s unlikely that this new model will take hold.

Feedback

What say you?

 

Category: Business Intelligence, Information Management
1 Comment »
