
Archive for December, 2011

by: Wmcknight
29  Dec  2011

Handling Criticism the Michelangelo Way

I had a chance to visit Florence, Italy earlier this month and visited the Galleria dell’Accademia Museum, the home of Michelangelo’s David. The presentation of David was captivating and awe-inspiring. The famous sculpture contained such incredible detail and every chisel and angle contributed to the exact message that the artist wanted to convey. It just worked on so many levels.

As I sat there, I remembered some of the backstory of David. The piece of marble was deemed of very high quality and for a long time awaited its artist and its ultimate use. Eventually, the job landed with Michelangelo and the target was determined to be a young, naked David from the Bible about to go into battle.

Michelangelo preferred to do his work in private and even shielded himself during David to avoid any would-be onlookers. Then one day well into the final product, along came Piero Soderini, an official of some sort who was a sponsor of the work. Soderini, the story goes, commented that the “nose was too thick.” We’d like to think Michelangelo would know better than Soderini about this and that the nose was not really “too thick.” However, it put Michelangelo in a dilemma.

He situated Soderini in a position where he could not see what he was doing very well and proceeded to clank the chisel a few times, although not on the statue, creating the requisite dust. He then asked Soderini how it looked. Soderini is reported to have said that those touches that he asked for really “put life into” David.

The usually feisty Michelangelo had to summon more than his artistic talent to create the perfect David. He had to keep Soderini from ruining perfection! Yet the input of Soderini, as the sponsor and payer, required a response. Michelangelo handled this masterfully as well. How many of us respond equally masterfully when bosses, sponsors and stakeholders provide input to our work? I’ve seen heels get dug in over the smallest critique. I’ve seen project teams go to enormous lengths to “educate” stakeholders into their way of doing things when all the stakeholder was looking for was a little respect and maybe a fingerprint or two on the production.

Project stakeholders are our lifeblood. Sometimes we can pacify the input. Was Soderini worse off for having his input not really accounted for in David? Not at all. And Michelangelo told no one until Soderini was well out of power and it would not embarrass him.

As Steve Jobs once famously said, “real artists ship.” Michelangelo shipped perfection and he shipped a happy sponsor!

Category: Information Development

by: Phil Simon
28  Dec  2011

Data Batting Averages


For many reasons, I have done a great deal of research about companies such as Amazon, Apple, Facebook, Twitter, and Google over the past year. You could say many things about these companies. First and foremost, they sport high data batting averages (DBAs). By DBAs, I mean that these companies’ records on their users, employees, customers, and vendors are exceptionally accurate.
Of course, this raises the obvious question: why?

A few reasons come to mind. Let’s explore them.

Self-Service

First up, the Gang of Four allows users and customers to maintain their own information. Consider Amazon for a moment. Its customers make any and all changes to their credit card numbers, mailing addresses, and communication preferences. That’s a given. But Vendor Central, Seller Central, and Author Central each allow affected parties to submit pertinent updates as needed. So, let’s say that I want to sell a copy of The World is Flat by Thomas Friedman. No problem. No one from Amazon needs to approve it.

Enforcement

Self-service is all fine and dandy, but surely mistakes are made. No one bats 1.000, right? Honest errors aside, there are some unscrupulous folks out there. For instance, a clown recently claimed that he wrote The Age of the Platform—and submitted a separate and fake listing to Amazon, including the cover of my actual book.

(I’m actually honored. The same thing happened to Seth Godin.)

While Amazon didn’t catch this, the author (in this case, yours truly) did. After a bit of bouncing around, I emailed copyright@amazon.com and, after a few days, Amazon removed the fraudulent listing. (One small step for man…) Evidently, the company’s systemic checks and balances aren’t foolproof. At least the company provides a mechanism to correct this oversight. The result: Amazon is today a tiny bit more accurate because I noticed this issue and took the time and effort to resolve it. Now, Amazon and I can make more money.

A Recognition of the Cardinal Importance of Accurate Data

The above example demonstrates that Amazon gets it: data matters. Fixable errors should be, well, fixed.

And soon.

Now, let’s turn to Facebook. The company takes steps to ensure that, to paraphrase the famous Dennis Green post-game rant, “you are who it thinks you are.” That is, Facebook is all about authenticity among its users. While it doesn’t ask for proof of identification upon joining, try signing up as Barack Obama or Bruce Willis.

Go ahead. I’ll wait.

You see? You can’t do this—even if your name is Barack or Bruce. Of course, there’s an appeals process, but those with celebrity names have to endure an additional step. Annoying to these namesakes? Perhaps, but in the end it prevents at least 50 apocryphal accounts for every one “false positive.”

And Facebook is hardly alone. Twitter does the same thing with verified accounts, a service that it introduced a while back, although I’m sure that there are at least tens of thousands of people on Twitter posing as other people.

Simon Says

The companies that manage data exceptionally well today aren’t complacent. Even “hitting” .990 means potentially tens or hundreds of thousands of errors. While perfection may never be attainable, the Facebooks and Googles of the world are constantly tweaking their systems and processes in the hope of getting better and better. This is an important lesson for every organization.
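
To put a rough number on that claim (the figures below are purely illustrative and not from this post), even a very high batting average leaves a lot of bad records at scale:

```python
records = 50_000_000   # hypothetical record count, for illustration only
accuracy = 0.990       # a .990 "data batting average"
errors = records * (1 - accuracy)
print(f"{errors:,.0f} potentially bad records")   # -> 500,000
```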

Feedback

What say you?

 

 

Category: Information Development

by: Bsomich
22  Dec  2011

Weekly IM Update.

 
 

What Can a SAFE Architecture Do for You?  
SAFE (Strategic Architecture for the Federated Enterprise) provides the technology solution framework for MIKE2.0. The SAFE Architecture transcends applications, data, and infrastructure. Moreover, it was designed specifically to accommodate the inherent complexities of a highly federated organisation. SAFE covers a broad range of capabilities across these layers.

MIKE2.0 uses an architecture-driven approach. Doing so allows one to move easily from a conceptual vision of the architecture to a set of strategic vendor products. All of this is accomplished before moving into a solution architecture and design for each increment. Many of the tasks of the Overall Implementation Guide are architecture-specific, and all MIKE2.0 Solutions reference aspects of the SAFE Architecture as part of their approach to implementation.

Browse the complete inventory of SAFE Architecture assets.

Feel free to check them out when you have a moment.

Sincerely,

MIKE2.0 Community

 
Popular Content

Did you know that the following wiki articles are most popular on Google? Check them out, and feel free to edit or expand them!

What is MIKE2.0?
Deliverable Templates
The 5 Phases of MIKE2.0
Overall Task List
Business Assessment Blueprint
SAFE Architecture
Information Governance Solution

Contribute to MIKE:

Start a new article, help with articles under construction or look for other ways to contribute.

Update your personal profile to advertise yourself to the community and interact with other members.

Useful Links:
Home Page
Login
Content Model
FAQs
MIKE2.0 Governance


This Week’s Food for Thought:

On the Government, Data, Cooperation, Crisis and Opportunity

In government circles, Open Data is getting its fair share of attention these days. At least in the US, a willingness to share data, applications, systems, and resources hasn’t exactly been a hallmark of many government agencies—and I strongly suspect that we are hardly alone here.

In this post, I’d like to discuss arguably the major reason that the adoption of Open Data and other collaborative information management (IM) projects has perhaps been slower than necessary.

Read more.

The Value of Decommissioning Legacy Systems

Most organisations reward their project managers for achieving scope within a given timeframe and for a specified budget. While scope is usually measured in terms of user functions, most projects also include the decommissioning of legacy systems. Unfortunately, it is the decommissioning step that is most often compromised in the final stages of any project.

Read more.

How to Build an Analytical Culture

The key to building an analytical culture is to make analytics easy to use, easy to understand, and accessible. The easiest way to do this is to develop an analytical portal. The purpose of the analytical portal is to allow everyone in your organization to know, examine, and use the numbers without having to contact the accounting and information technology departments every time a report or a set of numbers is needed. Your analytical portal should give people access to five business intelligence tools.

Read more.

 

Category: Information Development

by: Phil Simon
19  Dec  2011

On the Government, Data, Cooperation, Crisis, and Opportunity

In government circles, Open Data is getting its fair share of attention these days. At least in the US, a willingness to share data, applications, systems, and resources hasn’t exactly been a hallmark of many government agencies—and I strongly suspect that we are hardly alone here.

In this post, I’d like to discuss arguably the major reason that the adoption of Open Data and other collaborative information management (IM) projects has perhaps been slower than necessary.

An Example

After one of my recent talks on The Age of the Platform at the Government Mobility Forum, a woman (call her Marlene here) from the Department of Veterans Affairs (DVA) approached me. We exchanged pleasantries and she started telling me about some of the challenges that she faced in her job. Marlene expressed frustration about how, at present, each and every member of the US military unnecessarily had at least two distinct records in two different systems. Think about it: once you enter the military, you will be a veteran—even if you’re dishonorably discharged after only one day.

Marlene didn’t have time to delve too deeply into some of the particulars and I’d be lying if I claimed to have done a great deal of work with the Army, Navy, Air Force, and Marines. Still, my extensive experience with duplicate student, customer, employee, and vendor records was entirely apropos.

Now, hold your fire here. No one is suggesting that justifiably sensitive medical and military information be made publicly available. I can think of a score of reasons not to do so. But shouldn’t government agencies dealing with essentially the same population (albeit at different points in time) at least attempt to play nice? I’d bet you a good bit of money that managing all military personnel on the same system probably makes economic sense—especially in an age of budget cuts. Of course, a big system integration project like this might not be possible, at least in the short term.

Barring that, why not implement some type of MDM solution to better manage the population in each system? Why not use it to maintain a master record for each member of the military? When a sergeant retires and becomes an official veteran, all of his accurate and complete information would seamlessly transfer into the DVA’s system.
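
To make the idea concrete, here is a minimal sketch of what a master-record merge might look like. The field names, the matching key, and the survivorship rule ("newest value wins") are my own hypothetical choices, not anything the DVA or an MDM vendor actually uses; a production MDM hub would add fuzzy matching, stewardship workflows, and much stricter handling of sensitive identifiers.

```python
# Hypothetical sketch: consolidating an active-duty record and a veterans-system
# record for the same person into a single master ("golden") record.
from datetime import date

def build_master_record(active_duty_rec, veteran_rec):
    """Merge two records for the same person into one master record.

    Assumed survivorship rule: prefer the most recently updated value for each
    field; never drop a field that only one side holds.
    """
    # Both records are assumed to carry the same (hypothetical) matching key.
    assert active_duty_rec["service_no"] == veteran_rec["service_no"]
    master = {}
    for field in sorted(set(active_duty_rec) | set(veteran_rec)):
        a, v = active_duty_rec.get(field), veteran_rec.get(field)
        if a is None:
            master[field] = v
        elif v is None:
            master[field] = a
        else:
            # Both systems have a value: keep the one updated more recently.
            newer = (active_duty_rec
                     if active_duty_rec["last_updated"] >= veteran_rec["last_updated"]
                     else veteran_rec)
            master[field] = newer[field]
    return master

# Example: the same sergeant appears in both systems with slightly different data.
active_duty = {"service_no": "A1234", "name": "J. Smith", "address": "Fort Hood, TX",
               "last_updated": date(2011, 11, 30)}
veteran = {"service_no": "A1234", "name": "John Smith", "address": None,
           "disability_rating": None, "last_updated": date(2011, 6, 1)}

print(build_master_record(active_duty, veteran))
```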

Simon Says

To me, scenarios like this are the very definition of low hanging fruit. They hardly qualify as rocket science. How many hours and how much money are wasted on duplicate data entry, resolving discrepancies between different data sets and systems, and dealing with justifiably angry veterans whose data are in disarray?

The Chinese have a saying: In crisis, there is opportunity. If there’s any benefit to the current and future budget crises (at least in the United States), perhaps it is that reduced headcounts and funds will finally force far too many unwilling folks to cooperate on matters of national importance.

And that certainly includes data.

Feedback

What say you?

 

Category: Information Management, Master Data Management

by: Robert.hillard
18  Dec  2011

Value of decommissioning legacy systems

Most organisations reward their project managers for achieving scope within a given timeframe and for a specified budget. While scope is usually measured in terms of user functions, most projects also include the decommissioning of legacy systems. Unfortunately, it is the decommissioning step that is most often compromised in the final stages of any project.

I’ve previously written about the need to measure complexity (see CIOs need to measure the right things).  One of the pieces of feedback I have received from a number of CIOs over the past few months has been that it is very hard to get a business case for decommissioning over the line from a financial perspective.  What’s more, even when it is included in the business case for a new system, it is very hard to avoid it being removed during the inevitable scope and budget management that most major projects go through.

One approach to estimating the benefit of decommissioning is to list out the activities that will be simpler as a result of removing the system. These can include duplicated user effort, reduced operational management costs and, most importantly, a reduction in the effort to implement new systems. The problem is that the last of these is the most valuable but is also very hard to estimate deterministically. Worse, by the time you do know, it is likely to be too late to actually perform the decommissioning. For that reason, it is better to take a modelling approach across the portfolio rather than try to prove the cost savings using a list of known projects.

The complexity that legacy systems introduce to new system development is largely proportional to the cost of developing information interfaces to those systems. Because the number of interfaces grows much faster than the number of systems themselves, the complexity they introduce is modelled here as a logarithmic function, as shown below.

Figure 1

Any model is only going to provide a basis for estimating, but I outline here a simple and effective approach.

Start by defining c as the investment per new system and n as the number of system builds expected over five years. The investment cost for a domain is therefore c × n. For this example, assume c is $2M and n is 3, giving an investment of $6M.

However, the number of legacy systems (l) adds complexity at a rate that increases rapidly at first before trailing off (that is, logarithmically). The complexity factor (f) depends on the ratio of the cost of software development (c) to the average interface cost (i):

f = log_{c/i}(l + 1)

For this example, assume l is 2 and i is $200K:

f = log_10(3) ≈ 0.48

The complexity factor can then be applied to the original investment:

c × n × (f + 1)

In this example, the five-year saving from decommissioning the two legacy systems would be of the order of $2.9M. It is important to note that efficiencies in interfacing provide a similar benefit: as the cost of interfacing drops, the logarithm base increases and the complexity factor naturally decreases. Even arguing that the ratio needs to be adjusted doesn’t substantially affect the complexity factor.
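
For readers who want to plug in their own numbers, here is a small sketch that simply restates the arithmetic above; the figures ($2M per build, three builds over five years, two legacy systems, $200K per interface) are just the example values from this post, not benchmarks.

```python
import math

def decommissioning_saving(c, n, l, i):
    """Estimate the five-year saving from removing l legacy systems.

    c: investment per new system build
    n: number of new system builds expected over five years
    l: number of legacy systems that would otherwise need interfacing
    i: average cost of building an interface to a legacy system
    Complexity factor: f = log base (c/i) of (l + 1).
    """
    f = math.log(l + 1, c / i)
    investment = c * n                        # cost of the builds without legacy drag
    total_with_complexity = investment * (1 + f)
    return f, total_with_complexity - investment

f, saving = decommissioning_saving(c=2_000_000, n=3, l=2, i=200_000)
print(f"complexity factor f = {f:.2f}")        # ~0.48
print(f"estimated saving    = ${saving:,.0f}") # ~$2.9M
```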

While this is only one method of identifying a savings trend, it provides a good starting point for more detailed modelling of benefits. At the very least, it provides the basis for an argument that the value of decommissioning legacy systems shouldn’t be ignored simply because the exact benefits cannot be immediately identified.

Category: Information Governance

by: Wmcknight
17  Dec  2011

How to Share Bad Project News

In the spirit of the holiday season, I just wanted to bring some levity to the blog. I’ll leave it to you as the reader to discern if there is any applicable value to you in how you communicate.

Email to the Divisional Vice President

Subject: Weekly Status Report on Project Thames

Alan,

This was not a good week for the project.

23 of our nodes failed and we learned that there are limitations to how well the 3-node failover process works. We’re still trying to figure out what we’re missing. Once we do, we’ll be in a better position to know when we can recover our data. Hopefully it doesn’t happen again. We also learned that the patch we were expecting from the vendor has been backburnered to 2013.

The server also crashed.

We ended up on the Security department’s radar this week and they’re testing their new processes with us. They ran a penetration test on our software and it turns out anyone can inject SQL and retrieve our customer list – but only on odd days. Our plan is to shut down the system on odd days and display an “Under Construction” page on even days. I’m sure the users will accommodate.
We’ve had a data breach as well. Personal identification information associated with 250,000 customers has been removed. And I’ll be darned if we didn’t delete the backup we now need. All is not lost, as we’ve had some great learning over the matter.

So much for low turnover. It all hit this week with 3 of our best people resigning. They’re going to start a competing company. I’m sorry now that I forgot to have them sign that non-compete. As you recall, we were busy that day, what with the CEO’s video state of the union and the festive lunch menu. And I’ve heard we have a discrimination lawsuit coming.

All that free code we committed to has turned out to be challenging. Nobody can figure it out. We’ll need a million dollars to fix the hole with real enterprise software. And we’ll try to keep the selection and procurement cycle down to 6 months.

We also had a well-meaning programmer turn on the SMTP relay option while fooling around. This caused 100,000 spam messages to be sent using our server as a relay.

We also failed to uncomment the predicate on the DELETE statements in our code. I can’t believe this wasn’t caught in unit test. Anyway, the code was executed. In production. Sorry.

I’m also sorry about this, but we have been working off the wrong specifications. I lost track! We’ve been doing Release 9 instead of 8. No wonder it’s been so difficult with all of those prerequisite tasks from Release 8 not done. Well, now we know and we’ll just start over. It’s not the first time a company has wasted a lot of money. I saw your boss and mentioned this to him. His face turned red, but it was an understanding shade of red.

Our public cloud provider went down for a few days, but I calculated that if it stays up nonstop through March 13, 2156, it will still meet its service level of 99.95% up time.

Just kidding. But we are 2 days behind with the data model.

Category: Information Development

by: Bsomich
15  Dec  2011

Information Security Concerns: Establishing Ground Rules for Mobile Devices

From iPhones to Androids and Galaxy Tabs to iPads, the ability to access work remotely, anywhere and anytime, is now within nearly every employee’s reach. However, as the enterprise market for mobile devices continues to grow at an astounding pace, many organizations are left wondering whether their data warehouses and the sensitive information assets contained within them are now at risk.

With new mobile devices and platforms launching nearly every month, enterprise IT departments have been given entirely new challenges on how to best support employees while protecting data assets.

What are some of the ground rules your organization has established to protect mobile users from information security breaches?

Category: Information Management, Information Strategy

by: Phil Simon
12  Dec  2011

The Last Resort: Custom Fields

For many years, I worked on implementing different enterprise systems for organizations of all sizes. At some point during a project (hopefully sooner rather than later), someone would discover that the core application had no place to store a potentially key field. Against that backdrop, the team and I had a few choices:

  • customize the system
  • store the data in a standalone spreadsheet or database
  • add and use a custom field

While a custom field was rarely a cure-all, it often solved our problem. At the very least, it was usually the lesser of the three evils mentioned above. A custom field would not tax the IT department nearly as much as tweaking the code, and functional end users enjoyed being able to see that information in the native system–and report off of it.

In this post, I’ll review some best practices associated with custom fields.

Tip 1: If necessary, make it required.

Some systems I’ve seen lacked fields for specific information related to employees or vendors. Perhaps the company had to track whether an employee received a laptop or if a vendor required particularly unusual payment terms. (I’m using fairly generic examples here). In that case, adding a custom field made all of the sense in the world.

While optional fields can be beneficial, understand that there are perils associated with not requiring users to enter data for them. In the examples above, requiring a “Yes/No” response theoretically guarantees that someone selects one of the two in that field. (Of course, it might not be the right entry, but that’s a totally different discussion). Note that you might not want to require a field that only applies to two percent of the population, lest you face mass disaffection from your users–and a good number of errors.

Tip 2: Lock it down.

With custom fields, the single biggest mistake I’ve seen organizations make is to give employees the ability to add their own values. Imagine a drop-down list for employee laptops with the following choices:

  • YES
  • NO
  • ACER
  • DELL
  • DELL INSPIRON
  • IBM
  • IBM PC
  • MAC
  • DLL (intentionally misspelled)

Lists like this can explode in no time.

Agree on a predefined set of choices and restrict end users from adding their own, unless you trust them and they have been trained. Remember GIGO. Anyone reporting only on “YES” will not get accurate results–and incorrect business decisions will follow.
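
As a rough illustration of what "locking it down" can mean in practice, the sketch below validates a laptop custom field against a fixed list of allowed values before a record is saved. The field name and the two allowed values are hypothetical; in most packaged applications this would be configured as a lookup or drop-down rather than written as code.

```python
# Hypothetical sketch: validate a "laptop issued" custom field against a
# locked-down list of allowed values before the record is saved.
ALLOWED_LAPTOP_VALUES = {"YES", "NO"}

class InvalidCustomFieldValue(ValueError):
    pass

def validate_laptop_field(raw_value):
    """Normalise the entry and reject anything outside the allowed list."""
    value = raw_value.strip().upper()
    if value not in ALLOWED_LAPTOP_VALUES:
        raise InvalidCustomFieldValue(
            f"'{raw_value}' is not one of {sorted(ALLOWED_LAPTOP_VALUES)}")
    return value

print(validate_laptop_field("yes"))          # -> "YES"
try:
    validate_laptop_field("DELL INSPIRON")   # free text is rejected
except InvalidCustomFieldValue as exc:
    print(f"rejected: {exc}")
```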

Tip 3: Audit.

Creating a custom field by itself means very little. Regularly running audit reports can often nip a big data problem in the bud. In the example above, the purchase of more expensive Mac computers might drive higher procurement costs, although I would hope that most accounting departments wouldn’t need to rely upon a custom field.

But perhaps Macs need a software update and the organization needs to quickly amass a list of those with Apple computers. The possibilities are limitless.
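
A periodic audit can be as simple as counting every distinct value that has crept into the field and flagging the strays. A minimal sketch, assuming the records have been exported to a CSV file with a LAPTOP column (both the file name and the column are hypothetical):

```python
import csv
from collections import Counter

EXPECTED = {"YES", "NO"}

def audit_laptop_field(csv_path):
    """Count every distinct value in the LAPTOP custom field and flag strays."""
    counts = Counter()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row.get("LAPTOP", "").strip().upper()] += 1
    for value, count in counts.most_common():
        flag = "" if value in EXPECTED else "  <-- unexpected, needs cleanup"
        print(f"{value or '(blank)'}: {count}{flag}")

# audit_laptop_field("employees_export.csv")  # hypothetical export file
```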

Tip 4: Don’t overdo it.

Creating necessary custom fields can certainly bail organizations out of some pretty big problems, but it’s important not to rely on them too much. In other words, if your application requires 100 custom fields, maybe it’s time to look at a new system altogether–either enterprise-wide or best-of-breed. Custom fields are typically a last resort. Odds are that systems that rely upon them to a large extent are not terribly robust.

Simon Says

Follow these rules for creating custom fields and you should be able to get more out of them.

Feedback

What say you?

 

Category: Information Management

by: Bsomich
09  Dec  2011

Weekly IM Update.

 
 

Composite Solutions for the Enterprise

Our Composite Solutions Offering Group brings together Core Solution Offerings across Offering Groups. Composite Solutions are next-generation offerings that provide advanced information-centric capabilities. Each offering uses advanced aspects of the SAFE Architecture, supports sophisticated organisational structures and helps enable new business models. This offering group comprises a number of Composite Solutions.

Feel free to check them out when you have a moment.

  Sincerely,

MIKE2.0 Community

 
Contribute to MIKE:

Start a new article, help with articles under construction or look for other ways to contribute.

Update your personal profile to advertise yourself to the community and interact with other members.

Useful Links:
Home Page
Login
Content Model
FAQs
MIKE2.0 Governance


Did You Know?
All content on MIKE2.0 and any contributions you make are published under the Creative Commons license. This allows you free re-use of our content as long as you add a brief reference back to us.

This Week’s Food for Thought:

On AOL and Small Data

You may laugh at this statistic:

As of the end of September, AOL still had 3.5 million subscribers to its dialup Internet access service. A recent BusinessInsider article points out that:

According to AOL’s earnings release, the “average paid tenure” of its subscribers was about 10.6 years in Q3, up from about 9.4 years last year. (Of course, some of AOL’s existing access subscribers might not even realize they’re still paying for it.)

But for those who don’t have access to broadband, don’t want it, or don’t need it, AOL is still better than no Internet access.

Read more.

An Open Source Approach to Data Warehousing

When it comes to providing a better approach to Data Warehousing and Business Intelligence (BI), it is often important to start with definitions. This is because there are varying definitions of the technologies and techniques that are required for a contemporary BI environment.

Read more.

  

The ROI of Enterprise 2.0 and Value of Intangible Assets

I want to dispel a myth that I keep hearing, and that myth is about the ROI of emergent collaboration technologies. We keep hearing about the challenges with measuring and showing ROI and how that is a sticking point for executives and decision makers at companies. The problem with this is that there will never be an ROI from an emergent collaboration technology precisely because technology is just that…technology.

Read more.

Category: Information Development

by: Phil Simon
05  Dec  2011

On AOL and Small Data

You may laugh at this statistic:

As of the end of September, AOL still had 3.5 million subscribers to its dialup Internet access service. A recent BusinessInsider article points out that:

According to AOL’s earnings release, the “average paid tenure” of its subscribers was about 10.6 years in Q3, up from about 9.4 years last year. (Of course, some of AOL’s existing access subscribers might not even realize they’re still paying for it.)

But for those who don’t have access to broadband, don’t want it, or don’t need it, AOL is still better than no Internet access.

I read this article and, once I got past the fact that more than 3 million people still hear those annoying dial-up sounds, couldn’t help but wonder about the inherent data limits of using such a dated technology. Note that I am well aware that AOL has offered broadband services for years. Yet the company’s dial-up customers represent the very definition of Small Data.

Now, 3.5 million people can generate an awful lot of data, but dial-up is not making a comeback anytime soon. It’s hard for me to envision a future in which 52k/s Internet connection speeds rule the world. (Can you imagine a 15-year-old kid waiting two minutes for a page to load?) And AOL head Tim Armstrong agrees, as evinced by his hyperlocal news strategy. Armstrong believes that the future of the Internet is all about content. (Shhh…don’t tell anyone.)

A Tale of Two Data Approaches

When you’re connected to the Internet at a fraction of the speed of the rest of the world, you’re going to generate a fraction as much data as those flying around at 20 Mb/s. Pages take longer to render, transactions don’t process as quickly, and people become frustrated and give up doing whatever they were doing. And you’ll lose customers–and all of the data that goes with them. The best that you can hope for is (relatively) Small Data.

Contrast Small Data with Big Data. The latter is so, well, big that it cannot be handled with traditional data management tools. Hence the need for tools such as Scala and Hadoop, the latter being a project aimed at developing open-source software for reliable, scalable, distributed computing. Now, Big Data can be a big challenge, but it’s kind of like having to pay “too much” in taxes because you make a great deal of money. Would you rather not have to pay any taxes because you are impoverished?
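
For readers unfamiliar with the model behind Hadoop, here is a toy, pure-Python rendering of the map and reduce steps applied to word counting. It only illustrates the pattern; Hadoop’s real value is running the same idea, fault-tolerantly, across many machines, and its actual API is not shown here.

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit (word, 1) pairs from every document."""
    for doc in documents:
        for word in doc.lower().split():
            yield word, 1

def reduce_phase(pairs):
    """Reduce: sum the counts per word (the grouping stands in for the shuffle)."""
    grouped = defaultdict(int)
    for word, count in pairs:
        grouped[word] += count
    return dict(grouped)

docs = ["big data is big", "small data is still data"]
print(reduce_phase(map_phase(docs)))
# {'big': 2, 'data': 3, 'is': 2, 'small': 1, 'still': 1}
```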

Simon Says

AOL’s problems are well beyond the scope of an individual post, but suffice it to say for now that any organization that embraces Small Data faces considerable limits on its growth, revenue, and ultimately profits. The larger point is that, if you’re a small company with a fairly conservative reach and ambition, then Small Data may well be the way to go.

However, if you’re a burgeoning enterprise that plans on growing in all sorts of unexpected directions, get away from the Small Data mind-set. Of course, this isn’t just about adopting Big Data tools like Hadoop. Rather, in all likelihood, your products, services, and overall business strategy will require more powerful tools than relational databases. If you’re thinking big, don’t let Small Data constrain you.

Feedback

What say you?

Category: Information Development
