
Archive for November, 2010

by: Bsomich
29  Nov  2010

Weekly IM Update.


Complimentary Tools and Techniques

Our tools and techniques papers describe methods that can be used to speed the implementation process. They provide recommendations for solving a number of business problems for which information management is critical to success.

MIKE2.0 tools and techniques papers include:  

Feel free to check them out when you have a moment; your comments and contributions are much appreciated!

MIKE2.0 Community

 

 
Contribute to MIKE:

Start a new article, help with articles under construction or look for other ways to contribute.

Update your personal profile to advertise yourself to the community and interact with other members.

Useful Links:
Home Page
Login
Content Model
FAQs
MIKE2.0 Governance


Did You Know?
All content on MIKE2.0 and any contributions you make are published under the Creative Commons license. This allows you free re-use of our content as long as you add a brief reference back to us.

This Week’s Food for Thought:

Agile BI (What it is, Why it is)

“I told the project development team what I wanted, and all they ever do is hide for six months, never to be heard from, and then ultimately they don’t deliver what I wanted! Their process doesn’t work!”

Ever heard this statement from customers? Most of us are familiar not only with this statement but with a series of similar reactions. Reactions like these forced BI evangelists to think about the real problem with using the waterfall model for BI projects.

This problem led to a new model for BI projects, called Agile BI.

Read complete post.

  Do You Know Your Data? 

Midway through Michael Lewis’ excellent book The Big Short, the author makes an astonishing assertion: many financial institutions didn’t understand the complex derivatives and products that they were selling. And here’s the really scary part: He was absolutely right. Lewis shows that once-storied banks such as Merrill Lynch, Morgan Stanley, and Deutsche Bank could not even answer basic questions about what specific bonds comprised their CDOs, credit default swaps (CDSs), and other opaque financial instruments.

While most organizations don’t deal in such complexity, the book makes me wonder.

Read complete post.   

Building an Effective Data Warehouse   

Within the past 20 years, companies have accumulated multitudes of data from their operations: data that gives them insight into process efficiencies, consumer preferences, product effectiveness, and more. According to popular belief, this information doubles every 18 months. The problem is not a lack of abundance; it is that the available data is not readily usable for strategic decision making. It needs to be compiled, harvested and manipulated, and it must be reportable at a minute’s notice for a manager to make an informed decision. The question for many IT departments is: how do we do this?

 
Forward to a Friend!

Know someone who might be interested in joining the MIKE2.0 Community? Forward this to a friend.

Category: Information Development
No Comments »

by: Phil Simon
29  Nov  2010

ETL Checkpoints

ETL tools can be extremely involved, especially with complex data sets. At one time or another, many data management professionals have built tools that have done the following:

  • Taken data from multiple places.
  • Transformed it (often significantly) into formats that other systems can accept.
  • Loaded said data into new systems.

In this post, I discuss how to add some basic checkpoints into tools to prevent things from breaking bad.

The Case for Checkpoints

Often, consultants like me are brought into organizations that need urgent data-related problems solved. Rather than gather requirements and figure everything out first, the client usually wants to just start building. Rarely will people listen when consultants advocate taking a step back before beginning our development efforts in earnest. While this is a bit of a generalization, few non-technical folks understand:

  • the building blocks required to create effective ETL tools
  • the need to know what you need to do–before you actually have to do it
  • the amount of rework required should developers have an incomplete or inaccurate understanding of what the client needs done

Clients with a need to get something done immediately don’t want to wade through requirements; they want action–and now. The consultant who doth protest too much runs the risk of irritating his or her clients, not to mention being replaced. While you’ll never hear me argue against understanding as much as possible before creating an ETL tool, I have ways to placate demanding clients while concurrently minimizing rework.

Enter the Checkpoint

Checkpoints are simply invaluable tools for preventing things from getting particularly messy. Even simple SQL SELECT statements identifying potentially errant records can be enormously useful. For example, on my current assignment, I need to manipulate a large number of financial transactions from disparate systems. Ultimately, these transactions need to precisely balance against each other. Should one transaction be missing or inaccurate, things can go awry. I might need to review the thirty or so queries that transform the data, looking for an error on my end. This can be time-consuming and even futile.

Enter the checkpoint. Before the client or I even run the balancing routine, my ETL tool spits out a number of audits that identify major issues before anything else happens. These include:

  • Missing currencies
  • Missing customer accounts
  • Null values
  • Duplicate records

While the absence of results on these audits guarantees nothing, both the client and I know not to proceed if we’re not ready. Consider starting a round of golf only to realize on the third hole that you forgot extra balls, your pitching wedge, and drinking water. You’re probably not going to have a great round.
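As a concrete illustration, here is a minimal sketch of the kind of audit a checkpoint step might run before the balancing routine. The staging table and column names (stg_transactions, account_id, currency_code) are made up for the example, and it uses an in-memory SQLite database purely so the sketch is self-contained; in practice the same SELECT statements would run against the real staging area.

    import sqlite3

    # Hypothetical staging table; in a real ETL tool this already exists in the landing area.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE stg_transactions (
            txn_id        INTEGER,
            account_id    TEXT,
            currency_code TEXT,
            amount        REAL
        );
        INSERT INTO stg_transactions VALUES
            (1, 'A-100', 'USD', 250.00),
            (2, 'A-101', NULL,  -75.50),   -- missing currency
            (3, NULL,    'EUR', 310.25),   -- missing customer account
            (3, NULL,    'EUR', 310.25);   -- duplicate record
    """)

    # Each checkpoint is a simple SELECT that should return zero rows before we proceed.
    checkpoints = {
        "Missing currencies":
            "SELECT * FROM stg_transactions WHERE currency_code IS NULL",
        "Missing customer accounts":
            "SELECT * FROM stg_transactions WHERE account_id IS NULL",
        "Null amounts":
            "SELECT * FROM stg_transactions WHERE amount IS NULL",
        "Duplicate records":
            """SELECT txn_id, COUNT(*) AS copies
               FROM stg_transactions
               GROUP BY txn_id, account_id, currency_code, amount
               HAVING COUNT(*) > 1""",
    }

    for name, sql in checkpoints.items():
        rows = conn.execute(sql).fetchall()
        status = "OK" if not rows else f"{len(rows)} issue(s)"
        print(f"{name}: {status}")

If any of these audits returns rows, we know not to run the balancing routine until the offending records are explained or corrected.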

Simon Says

Sure, agile methods are valuable. However, one of the chief limitations of iterative development is that you may well be building something incorrectly or sub-optimally. While checkpoints offer no guarantee, at least they can stop the bleeding before you waste a great deal of time analyzing problems that don’t exist. Use them liberally; should they produce no errors, you can always ignore them, armed with increased confidence that you’re on the right track.

Feedback

What say you?

Category: Information Management
4 Comments »

by: Usmanahmad
25  Nov  2010

Agile BI (What it is, Why it is)

“I told the project development team what I wanted, and all they ever do is hide for six months, never to be heard from, and then ultimately they don’t deliver what I wanted! Their process doesn’t work!”

Ever heard this statement from customers? Most of us are familiar not only with this statement but with a series of similar reactions. Reactions like these forced BI evangelists to think about the real problem with using the waterfall model for BI projects.

This problem led to a new model for BI projects, called Agile BI.

Agile is all about evolution. With an agile mode of work, projects follow an iterative development model rather than the traditional waterfall model. Agile BI is gaining popularity in the market because it simplifies and accelerates the development of business intelligence applications. It is an evolution of traditional BI development methods that continues to evolve as best practices and project experiences accumulate.

Since Agile can mean different things in the overall context of BI, I think it is worthwhile to take a ‘first principles’ approach. Agility in BI has to be evaluated in the context of the Agile Manifesto (http://agilemanifesto.org), which describes the four values and twelve principles of the agile way of doing things.

To me, the following core tenets of agile methodology make it very relevant for BI systems:

1) Shared vision among teams
2) Frequent releases that make business sense
3) Relentlessly manage scope
4) Creating a multi-release framework with a master plan
5) Accommodating changes gracefully

The following are a few characteristics of Agile BI:

Iterative, Incremental, Evolutionary – First and foremost Agile Business Intelligence is an iterative, incremental, and evolutionary style of development. We work in short iterations that are generally 1-3 weeks long, and never more than 4 weeks. We build the system in small increments or “chunks” of user-valued functionality. And we evolve the working system by adapting to frequent user feedback.

Feature Driven Development – The goal of each development iteration is the production of user-valued features. What users of BI systems care about is the presentation of, and access to, information that either helps them solve a business problem or make better business decisions. Every iteration must produce at least one new user-valued feature, even though user features are just the tip of the architectural iceberg that is a BI system.

Production Quality – Each newly developed feature must be fully tested and debugged during the development iteration. Agile development is not about building hollow prototypes; it is about incrementally evolving to the right solution with the best architectural underpinnings. We do this by integrating testing and QA early and continuously into the development process.

Barely Sufficient Processes – Traditional styles of BI development are rife with a high degree of ceremony. I’ve worked on many projects that involved elaborate stage gate meetings between stages of development, such as the transition from requirements analysis to design. These gates are almost always accompanied by a formal document that must be “signed off” as part of the gating process. In spite of this ceremony, many BI projects struggle or founder. Agile BI emphasizes a sufficient amount of ceremony to meet the practical needs of the project (and future generations) but nothing more.

Automation, Automation, Automation – The only way to be truly agile is to automate as many routine processes as possible. Agile BI teams seek to automate any process that is done more than once. The more you can automate, the more you can focus on developing user features. Test automation is perhaps the most critical form of automation. If you must test your features and system manually, then guess how often you’re likely to rerun your tests? Test automation enables you to frequently re-validate that everything is still working as expected.
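As a rough sketch of what an automated, re-runnable data test can look like, the example below asserts a couple of basic business rules against a hypothetical fact table. The table name (fact_sales), the region codes, and the rules themselves are assumptions made for illustration, not part of any specific BI platform or of the methodology described here.

    import sqlite3

    # Hypothetical warehouse table used only for this example.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE fact_sales (sale_id INTEGER PRIMARY KEY, region TEXT, amount REAL);
        INSERT INTO fact_sales VALUES (1, 'EMEA', 120.0), (2, 'APAC', 80.5), (3, 'EMEA', 99.9);
    """)

    def test_no_negative_amounts():
        # Assumed business rule: loaded sales amounts should never be negative.
        bad = conn.execute("SELECT COUNT(*) FROM fact_sales WHERE amount < 0").fetchone()[0]
        assert bad == 0, f"{bad} rows with negative amounts"

    def test_regions_are_known():
        # Every row should map to a recognised region code.
        known = {"EMEA", "APAC", "AMER"}
        regions = {r[0] for r in conn.execute("SELECT DISTINCT region FROM fact_sales")}
        assert regions <= known, f"Unexpected regions: {regions - known}"

    if __name__ == "__main__":
        test_no_negative_amounts()
        test_regions_are_known()
        print("All data checks passed.")

Because the checks are plain code, they can be re-run after every iteration (or every load) at essentially no cost, which is exactly what makes frequent re-validation practical.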

Collaboration – Too often in traditional projects the development team solely bears the burden of ensuring that timelines are met, complete scope is delivered, budgets are managed, and quality is ensured. Agile business intelligence acknowledges that there is a broader project community that shares responsibility for project success. The project community includes the sub-communities of users, business owners, stakeholders, executive sponsors, technical experts, project managers, etc. Frequent collaboration between the technical and user communities is critical to success.

Self-Organizing / Self-Managing Teams – Hire the best people, give them the tools and support they need, then stand aside and allow them to be successful. This is a key shift in agile project management style compared to traditional project management: the agile project manager’s role is to enable the team to work their magic and to facilitate a high degree of collaboration with users and other members of the project community.

I have found the agile framework especially useful in the BI context, where development (enhancements to the BI landscape) and support (sustaining the existing BI infrastructure) happen concurrently.

BI professionals have long been advocates of 90-day increments: Get the results to the users quickly in order to continue building momentum and interest in ongoing project funding. However, incremental isn’t agile no matter how short the increments or how fast the iterations. The agile difference is in participation and interaction–not just building things quickly, but building the right things quickly. The participative model of agile brings with it the promise of breaking away from the “just another report” syndrome that plagues virtually every BI team.

In my view, Agile BI is clearly here to stay and succeed, since most traditional, non-agile BI platforms and applications will be increasingly challenged to keep pace with a world moving at lightning speed.

References:

http://theagilist.com/2010/03/03/here%e2%80%99s-what-agile-bi-is/#more-89 (Agile BI characteristics in more detail on Ken Collier’s blog)

Category: Business Intelligence
3 Comments »

by: Bsomich
25  Nov  2010

Building an Effective Data Warehouse

Within the past 20 years, companies have accumulated multitudes of data from their operations: data that gives them insight into process efficiencies, consumer preferences, product effectiveness, and more. According to popular belief, this information doubles every 18 months. The problem is not a lack of abundance; it is that the available data is not readily usable for strategic decision making. It needs to be compiled, harvested and manipulated, and it must be reportable at a minute’s notice for a manager to make an informed decision. The question for many IT departments is: how do we do this?

What are some techniques that you’ve used in the past that have enabled your company to build and maintain an effective data warehouse?  What are some obstacles we should avoid?

Category: Information Development
No Comments »

by: Bsomich
24  Nov  2010

Profile Spotlight: Bryn Davies

 

Bryn Davies

Bryn Davies has over 23 years of broad industry experience in IT and in particular Data Management. In addition to designing, building and managing one of the first data warehouses in SA, he spent 14 years with Sybase South Africa as Technical Director and later Regional Manager for the Cape Town operation. During this time, he was involved in all aspects of Business Intelligence, data management, enterprise application and data integration, and data and information quality.

He was also product and solutions manager for leading data quality software solutions, and managed numerous consulting exercises at SA companies for data quality, data integration and BI.

Subsequently Bryn co-founded InfoBlueprint and he is currently Managing Director of the company, which is dedicated to providing leading professional services in the fields of Information Management and Data Quality. He has presented and authored a number of articles on the subject of data quality.

Connect with Bryn.

Category: Member Profiles
No Comments »

by: Phil Simon
22  Nov  2010

Do You Know Your Data?

Midway through Michael Lewis’ excellent book The Big Short, the author makes an astonishing assertion: many financial institutions didn’t understand the complex derivatives and products that they were selling. And here’s the really scary part: He was absolutely right. Lewis shows that once-storied banks such as Merrill Lynch, Morgan Stanley, and Deutsche Bank could not even answer basic questions about what specific bonds comprised their CDOs, credit default swaps (CDSs), and other opaque financial instruments.

While most organizations don’t deal in such complexity, the book makes me wonder:

  • To what extent do organizations lack a fundamental understanding of their data?
  • What are the potential legal and financial effects caused by such ignorance?
  • How can organizations understand their data better–and mitigate risks in the process?

These are questions worth considering over the course of a number of posts.

Dealing with the Suboptimal

I often work with organizations in the midst of some type of data migration project. Unearthing data from a variety of sources typically surfaces some inconsistencies, to put it mildly. For example, on my current project, I have been surprised to learn that a mid-sized bank has no fewer than nine operational systems. There are several financial systems, additional trading systems, and some old GL legacy systems that have never been properly integrated with the others. And this is just on the domestic side. On the foreign side: Don’t ask.

As expected, extracts from these systems often produce inconsistent results. Now, because I have only been on this project for three weeks, this is news to me. My de facto boss has been with the company for a very long time and, as a result, can spot these issues in seconds. Institutional knowledge has its benefits.

However, what if he were to leave–by his choice or otherwise? Who would be able to explain to me–or another consultant–all of the intricacies and exceptions to relatively arcane rules? I’m honestly not sure. At least there’s a comprehensive data dictionary and an ERD, right? Think again.

While my boss would agree that better documentation is necessary, it’s certainly not imperative right now. He’s being pressured to provide a consolidated look at the data, and that’s why I’m there–not to pine for an idyllic state.

Simon Says

Not every organization or industry has the ability to stall the world credit markets and cause the worst economic crisis in 70 years. Let’s hope that we never have to experience anything like what we have seen over the last three years. While “one system” may be impossible at large organizations for one reason or another, consider the dangers of not having one version of the truth, or at least a complete version, even if it needs to be derived from multiple sources. We’ve recently seen what can happen when it is missing.

Feedback

What say you?

Category: Information Management
3 Comments »

by: Bsomich
22  Nov  2010

Weekly IM Update.

 
 

What Can a SAFE Architecture Do for You?

SAFE (Strategic Architecture for the Federated Enterprise) provides the technology solution framework for MIKE2.0. The SAFE_Architecture transcends applications, data, and infrastructure. Moreover, it was designed specifically to accommodate the inherent complexities of a highly federated organisation. SAFE covers a number of capability areas.

MIKE2.0 uses an architecture-driven approach. Doing so allows one to move easily from a conceptual vision of the architecture to a set of strategic vendor products. All of this is accomplished before moving into a solution architecture and design for each increment. Many of the tasks of the Overall Implementation Guide are architecture-specific, and all MIKE2.0 Solutions reference aspects of the SAFE_Architecture as part of their approach to implementation.

Browse the complete inventory of SAFE Architecture assets.

Feel free to check them out when you have a moment.

Sincerely,
MIKE2.0 Community

 

 

 
Contribute to MIKE:

Start a new article, help with articles under construction or look for other ways to contribute.

Update your personal profile to advertise yourself to the community and interact with other members.

Useful Links:
Home Page
Login
Content Model
FAQs
MIKE2.0 Governance


Did You Know?
All content on MIKE2.0 and any contributions you make are published under the Creative Commons license. This allows you free re-use of our content as long as you add a brief reference back to us.

This Week’s Food for Thought:

Spinal Tap and The Art of Data Management

Here’s a dirty little secret about data management: it’s about art as well as science. In this post, I discuss how many people often mistakenly focus on the “science” of things while minimizing the art piece.

Considerations

Any good developer knows that there are many ways to skin a cat. Even for something requiring a high degree of precision, there are often many options. Of course, some are better than others. When I develop data management tools, I often have a number of different alternatives with respect to moving and manipulating data, including the use of:

  • Temp tables
  • Queries
  • Batch processes
  • Export/import routines

While the MIKE2.0 framework provides for extensive best practices at a general level, there is a blizzard of individual decisions to make.

Read complete post.

 How do you define project success (or failure)?  

I recently read a post by Robert Diana claiming that processes do not fail, people fail. It made me think… we define project success or failure by measurable factors such as budget, time and quality of results. Rarely do we ever include people in the mix. “The project went over budget because Manager XYZ hired too many staff” or “We succeeded because the team showed up on time every day.” What such statements essentially say is that people either followed or didn’t follow a plan or process correctly.

Read complete post.   

ECM3 meets MIKE2.0  

ECM3 (ecm3.org) has been without a doubt the most successful maturity model for ECM (Enterprise Content Management – aka Document Management), with downloads of the model recently passing the 5,000 mark. So how to top success with more success? We have decided to merge efforts with MIKE2.0, the de-facto maturity model for structured data. Our hope is that by adding our work in unstructured data to MIKE2.0, we can spread the love even further and help raise the profile and importance of ECM.

Read complete post.

 

Category: Information Development
No Comments »

by: Robert.hillard
21  Nov  2010

Reclaim email as a business tool

Any serious business discussion about information must include email.  Like it, or loathe it, email is a major part of every knowledge worker’s life.  Unfortunately many staff have grown to hate its intrusion into their personal time, the fragmentation of their work and the expectation of a rapid reply to important messages.

The result has been that many people argue that email should be phased out and replaced by the next generation of social networking and collaboration tools within the enterprise. To some extent this is happening, with collaboration and business messaging tools continuing to gain in popularity. However, email still remains the most popular way for most people within business to share information.

There are some things that we can do and in this post I suggest two quick actions that can change the email culture.

First, create the concept of “email bankruptcy”.  The term has been around for a while, but it is time to give it some formality.  Many staff report that exiting the company they work for, and the resultant clearing of their email, is a tremendous relief.  In effect we’ve created a reward for resignation, which is usually the exact opposite of the behaviour we want to encourage.

A potential solution is to allow staff to declare themselves “email bankrupts”.  The act of doing so will result in a declaration, through a message to all who have sent an email outstanding in their inbox, that nothing prior to the given date will be read or actioned.  The bankrupt then has a clean inbox and a fresh start.

Declaring bankruptcy should have some consequences, but they must not be too serious (name and shame would normally suffice).  In addition, like a financial bankrupt, they should be given some assistance to help them avoid the situation in the future.

Second, encourage staff (starting with yourself) to batch email sends.  Email was created based on the analogy of paper memos.  Those memos went through an internal or external mail system (“snail mail”) that caused a natural lag in the communication.  People typically looked at their incoming mail in the morning when they came to work.  If there was a backlog of mail they took it home in their briefcase to read and reply – but the sending was done the next day.

There is nothing wrong with doing work out-of-hours.  What is a problem is that the resulting messages appear in our colleague’s inboxes within moments of us sending them, creating a reminder that they should perhaps be working as well.  Worse, the near instant nature of email encourages responses that are rapid rather than considered – leading to many people working through something that in the past required just one person to do it properly.

The solution is to batch email in the same way that paper memos were in the past.  Email clients typically allow you to select a “delay” option.  For instance, in Outlook, go to the options tab and select “delay delivery”.  Set the delay to the next business day when working after hours and to a time several hours hence when responding to an email during business hours.
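For readers who like to see the rule spelled out, here is a small sketch of the batching logic just described, written as a standalone function. The cut-off hours and the three-hour in-hours delay are assumptions you would tune to your own working day, and actually applying the delay is left to whatever mail client or API you use.

    from datetime import datetime, timedelta

    BUSINESS_START = 8   # assumed start of the business day (hour, 24h clock)
    BUSINESS_END = 18    # assumed end of the business day
    IN_HOURS_DELAY = 3   # hold replies written during business hours for a few hours

    def delayed_send_time(now: datetime) -> datetime:
        """Return the time an email written at `now` should actually be sent."""
        if now.weekday() < 5 and BUSINESS_START <= now.hour < BUSINESS_END:
            # During business hours: hold the message for a few hours.
            return now + timedelta(hours=IN_HOURS_DELAY)
        # Out of hours or on a weekend: hold it until the start of the next business day.
        next_day = now + timedelta(days=1)
        while next_day.weekday() >= 5:          # skip Saturday and Sunday
            next_day += timedelta(days=1)
        return next_day.replace(hour=BUSINESS_START, minute=0, second=0, microsecond=0)

    # Example: a reply written late on a Friday evening goes out Monday morning.
    print(delayed_send_time(datetime(2010, 11, 19, 22, 30)))  # 2010-11-22 08:00:00

The point is simply that the send time is computed from the batching rule rather than from the moment you happen to hit send.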

The result of this batching is that you still get the sense of being in control of your inbox without the depressing reality of a flood of replies coming in as fast as you deal with them.  As people get used to you working this way they will consider their reply carefully so as to maximise the value in the information that you return, knowing that they can’t create a dynamic conversation.

Email is a powerful tool and to reject it outright because of its shortcomings would be a mistake.  We must, however, all work to make it much more effective.

Category: Enterprise Content Management, Information Governance, Information Management
2 Comments »

by: Realstorygroup
20  Nov  2010

ECM3 meets MIKE2.0

ECM3 (ecm3.org) has been without a doubt the most successful maturity model for ECM (Enterprise Content Management – aka Document Management), with downloads of the model recently passing the 5,000 mark. So how to top success with more success? Well, we have decided to merge efforts with MIKE2.0, the de-facto maturity model for structured data. Our hope is that by adding our work in unstructured data to MIKE2.0, we can spread the love even further and help raise the profile and importance of ECM.

Enterprises face ever-increasing volumes of content. The practice of Enterprise Content Management (ECM) attempts to address key concerns such as content storage; effective classification and retrieval; archiving and disposition policies; mitigating legal and compliance risk; reducing paper usage; and more.

However, enterprises looking to execute on ECM strategies face myriad human, organizational, and technology challenges. As a practical matter, enterprises cannot deal with all of these challenges concurrently. Therefore, to achieve business benefits from ECM, enterprises need to work step-by-step, following a roadmap to organize their efforts and hold the attention of program stakeholders.

The ECM Maturity Model (ECM3) attempts to provide a structured framework for building such a roadmap, in the context of an overall strategy. The framework suggests graded levels of capabilities — ranging from rudimentary information collection and basic control through increasingly sophisticated levels of management and integration — finally resulting in a mature state of continuous experimentation and improvement.

Level 1: Unmanaged
Level 2: Incipient
Level 3: Formative
Level 4: Operational
Level 5: Pro-Active

Like all maturity models, it is partly descriptive and partly prescriptive. You can apply the model to audit, assess, and explain your current state, as well as to inform a roadmap for maturing your enterprise capabilities. It can help you understand where you are over- and under-investing in one dimension or another (e.g., overspending on technology and under-investing in content analysis), so you can re-balance your portfolio of capabilities. The model can also facilitate developing a common vocabulary and shared vision among ECM project stakeholders. And it is our fervent hope that the ECM3 model work we started will be continued, expanded upon, and will itself mature within the MIKE2.0 community.
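Purely as an illustration of that kind of gap analysis, the short sketch below compares assumed maturity scores per dimension against a target level and flags where a programme is ahead of or behind its goal. The dimension names and scores are invented for the example and are not the official ECM3 dimensions.

    # Hypothetical self-assessment scores (1 = Unmanaged ... 5 = Pro-Active) per dimension.
    scores = {
        "Technology": 4,
        "Content analysis": 2,
        "Process": 3,
        "People / expertise": 2,
    }

    target = 3  # the maturity level the programme is aiming for next

    for dimension, level in sorted(scores.items(), key=lambda kv: kv[1]):
        if level < target:
            print(f"{dimension}: level {level}, under-invested relative to target {target}")
        elif level > target:
            print(f"{dimension}: level {level}, ahead of target; consider re-balancing")
        else:
            print(f"{dimension}: level {level}, on target")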

Category: Enterprise Content Management, Enterprise Search, Information Management, Web Content Management
3 Comments »

by: Bsomich
18  Nov  2010

How do you define project success (or failure)?

I recently read a post by Robert Diana claiming that processes do not fail, people fail. It made me think… we define project success or failure by measurable factors such as budget, time and quality of results. Rarely do we ever include people in the mix. “The project went over budget because Manager XYZ hired too many staff” or “We succeeded because the team showed up on time every day.” What such statements essentially say is that people either followed or didn’t follow a plan or process correctly.

These statements may have bearing on the outcome of a project, but do they really sum it up? I come from the belief that people try their best NOT to fail, and that projects end up succeeding or failing based on how well they were planned, what resources were allocated and how well the strategy was built. Rarely is it ever one thing; it is usually a combination of many factors. You can blame failure on mistakes or credit success to a good work ethic, but at the end of the day, if a process is faulty, it won’t be followed correctly, regardless of who was assigned to follow it.

What do you think?   Is it people, processes or both?

Category: Information Development
2 Comments »
