Open Framework, Information Management Strategy & Collaborative Governance | Data & Social Methodology - MIKE2.0 Methodology

Posts Tagged ‘Agile’

by: Phil Simon
17 Sep 2012

The 70-30 Rule

Up until four years ago, I made all of my money on large-scale IT projects. Many of these informed my first book, Why New Systems Fail. Since that time, I gradually made the break into speaking, writing, publishing, and website design.

As a result, my P&L statement looks nothing like it did back in 2008. Yet, in a way, absolutely nothing has changed. For years, this dude has abided by the 70-30 rule. That is, whether it’s a multi-million dollar enterprise system or open-source software like WordPress, I consider myself 70 percent functional. Note that the 70 percent rule applies to both Waterfall and Agile projects.

This means that I learn how the front-end of a software application works, because that’s where most users spend their time. It doesn’t matter if you’re paying vendors or employees, entering sales, or writing blog posts. Most people spend the vast majority of their hours in an application as functional users, not technical ones.

The Importance of the Other 30 Percent

Yet, at least for me, the other 30 percent is key. At times, functional users will have technical questions about how an application works, where its data is stored, how to change or customize something, and the like. The purely functional consultant or web designer can only get you so far. I’ve never met anyone completely satisfied with a vanilla application after using it for any substantial length of time.

I can tell you that forcing myself to be 30 percent technical has actually increased the depth of my “other” 70 percent. In other words, if you understand a good bit of the technical side, then you will deepen your appreciation and knowledge of the functional side.

And I’m hardly the only one who feels this way. I asked my friend, entrepreneur and WordPress designer Todd Hamilton about this.

There are quite a few steps in knowledge from simple end user to developer. To do either job really well, a person needs an overall knowledge framework of the system she is working with, from both perspectives. For the WordPress simple end user, this requires understanding several things: 1) a bit about what a database is; 2) that the WordPress admin is for putting stuff in the database; 3) that the WordPress theme displays that data.

For the WordPress developer, that means understanding: 1) how users intend to interact with the data; 2) what makes sense to them; 3) what is too complex. A good dynamic is where both the developer and the end user share that knowledge framework and then the focus becomes execution instead of achieving understanding.

Hamilton’s exactly right here and his WordPress lessons can be extrapolated to other applications.

Simon Says: Find the Hybrids

When hiring a consultant for any type of technology project, ideally check his or her technical credentials. A purely functional consultant probably isn’t up to speed on the application’s data model–something that might have enormous implications down the road.

Ditto on the technical end. Someone without a lick of functional or application knowledge isn’t going to appreciate the legitimate challenges of most of the audience for that very application.


What say you?

Category: Information Management

by: Phil Simon
15 May 2012

The MDM Myth

MDM is exclusively for large, complex organizations strewn across the globe. MDM projects require large budgets and extensive timelines and organizations will only see benefits years down the road.

At least that’s the pervasive MDM myth. But is it true?

In a recent Informatica post, Jakki Geiger writes about the common traits shared among organizations using MDM applications and technologies. They include:

1) Company size is $500M+

2) Experiencing rapid company growth, likely due to mergers and acquisitions

3) Facing a compelling event such as a new regulation (examples include WEEE in Manufacturing, Basel III in Financial Services, and the Sunshine Act in Pharmaceuticals) or a change in business dynamics that results in a drop in revenue or profit, triggering a renewed focus on efficiency and cost savings

For the entire list, check out the post.

While reading the list, I was struck by how many of these traits are specific to big companies. That is, someone in a mid-market or mid-sized organization might not appreciate the need for MDM. That doesn’t mean, though, that such a need doesn’t exist.

Size Matters

Without a doubt, not every organization needs to embrace MDM. For instance, the small businesses profiled in The New Small (my third book) run far too few systems to make MDM truly necessary. Indeed, there’s nary a mention of MDM in that book. Growing companies in multiple locations and/or countries, however, are a vastly different story. While there’s no magic revenue, employee, or customer number that makes MDM viable, at some point many organizations cross the MDM chasm–the point at which MDM just plain makes sense. Beyond size, when many people think of MDM projects, they conjure up images of 18-month marathons with uncertain payoffs at the end.

Simon Says

Today, is MDM still primarily just a big company thing? Sure, and that won’t change tomorrow or next week. Increasingly, though, many are coming to understand that MDM certainly can benefit growing companies (read: those not quite at the half-billion mark).

Sure, most small companies and even mid-sized organizations haven’t jumped on the MDM train just yet. In my view, though, that’s starting to change, especially with the increasing array of available open-source MDM alternatives. Talend’s Open Studio for MDM is just one of many legitimate options for organizations loath to drop hundreds of thousands of dollars (or more) on a traditional software license.

For those worried about long, drawn-out implementations with no end game, fear not. I find the increasing adoption of Agile methods (even in the MDM world) to be a generally positive development.


What say you?

Category: Master Data Management

by: Robert Hillard
20 Nov 2011

Embracing the unexpected

The nineteenth century belonged to the engineers. Western society had been invigorated and changed beyond recognition by the industrial revolution through its early years, and by its close the railroads were synonymous with the building of wealth.

The nineteenth century was the era that saw the building of modern business, with the foundations being established for many of the great companies that we know today. The management thinkers who defined the discipline cluster around the first part of the twentieth century, and it should be no surprise that they were heavily influenced by the engineers.

Business was built around the idea of engineered processes with defined inputs and outputs. I’ve written before about the shift from process-driven to information-driven business. In this post, though, I am focusing on another consequence of the engineering approach to running businesses: the expectation of achieving planned outcomes.

There is a lot to be said for achieving a plan.  Investors dream of certainty in their returns.  Complex businesses like to be able to align production schedules.  Staff like knowing that they have a long-term job.

When you’re building a bridge or a railroad, there is certainty in the desired outcome.  Success is measured in terms of a completed project against time and budget.

When your business has a goal of providing products or services into a market, the definition of success is much harder to nail down.  You want your product or service to be profitable, but you are usually flexible on its exact definition.  However, internal structures tend not to have this flexibility built in.  Large businesses operate by ensuring each part of the organisation delivers their component of a new project as specified by the overall design.

This sounds fine until you look at these components in more detail.  Many are fiendishly complex.  In particular the IT can often involve many existing and new systems which have to be interfaced in ways that were never intended when they were originally created.  Staff trained to achieve a single outcome in the market keep on testing customers until they gain (or even bludgeon) acceptance for the product or service design.

Because of the scale of these projects, failure is not an option.  The business engineering philosophy that I’ve described will push the launch through regardless of the obstacles.  However, there is a growing trend in business to try and use “big data” to run experiments and confirm that the design of a new product or service is correct before this effort is undertaken.

There is also another trend in business.  Agile.  Agile methods are characterised by an evolutionary approach to achieving system outcomes.

Individually these trends make sense.  Taken together they may actually be starting to indicate a deeper change.  In a future world we may treat business as an experiment in its own right.  We know what the outcome is that we expect, but we will push our teams to embrace issues and look for systemic obstacles to guide us in new, and potentially more profitable, directions.

When customers don’t react positively to our initial designs, rather than adjust the design to their aesthetic, business should ask whether the product is appropriate at all and consider making a radical shift even at the last minute.

When IT finds that a system change is harder than they expected, they can legitimately ask whether there is a compromise that will deliver a different answer that might be equally acceptable, or sometimes even more useful.

One of the major differences between scientists and engineers is that the former look for the unexpected in their experiments and try to focus on the underlying knowledge they can get from things not going as planned. Perhaps twenty-first century business needs fewer people thinking like engineers trying to railroad new products and services into the market, and more who are willing to don the lab coat of a scientist and allow the complexity of modern business to flourish and support their innovation.

Category: Information Strategy

by: Phil Simon
20 Oct 2011

On Steve Jobs, Agile, and Meeting IM Needs

In an interview in Inc. Magazine, Steve Jobs once said, “You can’t just ask customers what they want and then try to give that to them. By the time you get it built, they’ll want something new.”

Jobs’ wisdom and creative genius are the subject of many books–and doubtless many more to come. In this post, I’d like to discuss whether his statement above applies to the world of information management. In other words, can organizations respond to line of business (LOB) information management needs in time?

It’s a big question for a blog post, but I’m feeling ambitious today.

In the 1980s, the answer was generally yes. Business needs were fairly static (at least relative to today). Organizations by and large used the Waterfall method of deploying software, meticulously gathering requirements and implementing solutions in a very sequential manner. Protracted and often contentious projects ensued, but organizations could wait three years or more for a new application to go live. It was a different and much slower time.

We’re Not in Kansas Anymore

Fast forward to the 2010s and things could not be any more different. Ask most CIOs if three years is an acceptable amount of time to deploy a new system or technology. The answer is almost always no. Yes, major renovations and fundamental changes in architecture (read: embracing the cloud) still take time, but organizations simply can’t wait as long as they did in the 1980s because the world has changed so drastically. Things happen now at light speed.

As an aside, actor George Clooney recently remarked on Charlie Rose that movie reviews via tweets and Facebook updates are now available immediately, not on the following Monday. As a result, the grace period of a weekend or more previously afforded bad movies has been eliminated. If a movie bombs, the world will know by Friday night.


So, what’s an organization to do amidst such rapid technological change? Today’s business needs may not match requirements gathered six months or a year ago, much less longer ago than that.

Enter increasingly popular Agile methods. Rather than waiting for the big bang, features are implemented as they become ready, often while others are still in progress. Robert Stroud recently and succinctly defined the difference between Agile and Waterfall methodologies as follows: “Waterfall is requirements-bound and Agile is time-bound.”

Lest you think that Agile methods are the sole purview of smaller, more nimble companies, larger organizations are getting on the Agile train. No doubt that the web makes this easier, as IT departments no longer have to install partially ready applications on thousands of workstations (a laborious process). Rather, features are instantly deployed via the web and users can access the latest bells and whistles seamlessly through their browsers.

Simon Says

So, let’s return to the initial query in this post: Can organizations respond to line of business (LOB) information management needs in time?

The simple answer is it depends. To be sure, Agile is no panacea. Bad data, misunderstood requirements, and cultural issues can derail even the most promising projects.

Still, the pros of Agile outweigh its cons. Organizations that cling to antiquated deployment methods will clearly not be able to meet LOB information management needs in any reasonable time. The web, open source, and the cloud all give organizations the ability to reduce implementation times. It’s a new world, and Waterfall is often poorly suited for it.


What say you?

Category: Information Development, Information Management

by: Phil Simon
29 Nov 2010

ETL Checkpoints

ETL tools can be extremely involved, especially with complex data sets. At one time or another, many data management professionals have built tools that have done the following:

  • Taken data from multiple places.
  • Transformed it (often significantly) into formats that other systems can accept.
  • Loaded said data into new systems.
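Sketched in miniature, those three steps might look like this in Python (the field names and the SQLite target are hypothetical, chosen only for illustration):

```python
import csv
import sqlite3

def extract(paths):
    """Step 1: take data from multiple places (CSV extracts here)."""
    for path in paths:
        with open(path, newline="") as f:
            yield from csv.DictReader(f)

def transform(row):
    """Step 2: reshape a source row into the target format."""
    return (
        row["account"].strip().upper(),   # normalize account codes
        round(float(row["amount"]), 2),   # coerce amounts to two decimals
        row.get("currency") or "USD",     # default a missing currency
    )

def load(rows, conn):
    """Step 3: load said data into the new system (a SQLite table here)."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS transactions "
        "(account TEXT, amount REAL, currency TEXT)"
    )
    conn.executemany("INSERT INTO transactions VALUES (?, ?, ?)", rows)
    conn.commit()
```

Real ETL tools are vastly more involved, of course, but even this skeleton separates the three concerns so each can be tested and audited independently.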

In this post, I discuss how to add some basic checkpoints into tools to prevent things from breaking bad.

The Case for Checkpoints

Often, consultants like me are brought into organizations to solve urgent data-related problems. Rather than gather requirements and figure everything out, the client usually wants to just start building. Rarely will people listen when consultants advocate taking a step back before beginning development efforts in earnest. While this is a bit of a generalization, few non-technical folks understand:

  • the building blocks required to create effective ETL tools
  • the need to know what you need to do–before you actually have to do it
  • the amount of rework required should developers have an incomplete or inaccurate understanding of what the client needs done

Clients with a need to get something done immediately don’t want to wade through requirements; they want action–and now. The consultant who doth protest too much runs the risk of irritating his/her clients, not to mention being replaced. While you’ll never hear me argue against understanding as much as possible before creating an ETL tool, I have ways to placate demanding clients while concurrently minimizing rework.

Enter the Checkpoint

Checkpoints are simply invaluable tools for preventing things from getting particularly messy. Even simple SQL SELECT statements identifying potentially errant records can be enormously useful. For example, on my current assignment, I need to manipulate a large number of financial transactions from disparate systems. Ultimately, these transactions need to precisely balance against each other. Should one transaction be missing or inaccurate, things can go awry. I might need to review the thirty or so queries that transform the data, looking for an error on my end. This can be time-consuming and even futile.

Enter the checkpoint. Before the client or I even run the balancing routine, my ETL tool spits out a number of audits that identify major issues before anything else happens. These include:

  • Missing currencies
  • Missing customer accounts
  • Null values
  • Duplicate records
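Most of these audits can be written as the kind of simple SELECT statements mentioned above. Here is a sketch using Python’s built-in sqlite3; the table and column names are hypothetical, not taken from any real engagement:

```python
import sqlite3

# Hypothetical staging table; names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE transactions "
    "(txn_id TEXT, account TEXT, currency TEXT, amount REAL)"
)
conn.executemany(
    "INSERT INTO transactions VALUES (?, ?, ?, ?)",
    [
        ("T1", "A100", "USD", 50.0),
        ("T2", "A100", None, 75.0),   # missing currency
        ("T3", None, "EUR", -125.0),  # missing customer account
        ("T3", None, "EUR", -125.0),  # duplicate record
    ],
)

# Checkpoint audits: each query should return zero rows before we proceed.
audits = {
    "missing currencies":
        "SELECT txn_id FROM transactions WHERE currency IS NULL",
    "missing customer accounts":
        "SELECT txn_id FROM transactions WHERE account IS NULL",
    "duplicate records":
        "SELECT txn_id FROM transactions GROUP BY txn_id HAVING COUNT(*) > 1",
}

failures = {name: conn.execute(sql).fetchall() for name, sql in audits.items()}
for name, rows in failures.items():
    if rows:
        print(f"CHECKPOINT FAILED: {name} ({len(rows)} rows)")
```

The convention is simple: every audit must come back empty before anyone bothers running the balancing routine.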

While the absence of results on these audits guarantees nothing, both the client and I know not to proceed if we’re not ready. Consider starting a round of golf only to realize on the third hole that you forgot extra balls, your pitching wedge, and drinking water. You’re probably not going to have a great round.

Simon Says

Sure, agile methods are valuable. However, one of the chief limitations of iterative development is that you may well be building something incorrectly or sub-optimally. While checkpoints offer no guarantee, at least they can stop the bleeding before you waste a great deal of time analyzing problems that don’t exist. Use them liberally; should they produce no errors, you can always ignore them, armed with increased confidence that you’re on the right track.


What say you?

Category: Information Management

by: Phil Simon
01 Nov 2010

Agile Challenges and The Great Divide

Agile software development is a funny thing. Most of what I know about formal Agile methodologies comes from my friend Roland Cuellar, who wrote the chapter on Agile in my second book. While I’m relatively new to formal Agile processes, I have in fact been using them informally for the vast majority of my career. In this post, I’ll be discussing some of the limitations of Agile within the context of data management.

On my current assignment, my client is a large financial institution in the midst of some M&A activity. I am working closely with a director named Tommy to build an ETL tool. (This is not his real name; I just happen to be listening to Styx right now.) Tommy is getting pulled in a bunch of different directions and, using more tact than I have, is trying to successfully navigate his organization’s political waters. As soon as we make the changes to file formats for his internal clients (read: very senior folks), he comes back almost as quickly with more changes. Some are minor; many are not.

The Limitations of Agile

The benefits of Agile–and there are many–are not unequivocal. While my little ETL tool is nowhere near as complex as many enterprise systems, building anything in an iterative process introduces risks such as rework and related time and expense overruns.

Now, I wouldn’t consider myself a hard-core software developer. I can conceptualize as well as the next guy, but sometimes I just have to see the aftereffects of a change. No one can think of everything conceptually. While versioning can partially control rework, I’d be lying if I claimed that all of my work is linear. At least my client understands this.

Implementing an ostensibly simple change request in a production environment can have one or more of the following ramifications:

  • Break something else down the line.
  • Corrupt or change data.
  • Introduce inefficiencies–e.g., batch processes that involve superfluous tables and views which, had the end result been known, could easily have been avoided.
  • Lead to excessive dependence on a resource with intimate knowledge of the data and development process. As a result, it might be difficult for an organization to rid itself of an expensive external resource.

What’s more, all of these changes may not be immediately evident. Dealing with tens of thousands of records (or more) means that it’s easy to miss a few. For example, because of some data manipulation, I had to derive a number of fields in a translate table. It would have been easy for me to forget that those fields needed to be updated in several related tables. Fortunately, I’m pretty diligent. Because of the complexity of my task, however, it would have been understandable for me to have missed a few of these interdependencies.

The Great Divide

Agile challenges are exacerbated by the IT-business disconnect, something most experienced data management professionals have seen over their careers. “The line” wants what it wants when it wants it, irrespective of the development and data ramifications of its requests. Most business folks couldn’t care less about data models, referential integrity, and normalized tables. Changes often come from the top down, and IT just needs to make them happen. This breeds frustration among IT folks, put in the unfortunate position of “just doing their job” if they successfully make the changes and “screwing things up” if they don’t.

Simon Says

Bottom line: Agile isn’t a panacea to The Great Divide between IT and the business. It may be better for many projects than a Waterfall methodology and, to be sure, not understanding fundamental business requirements will pose major challenges for any software development process. Ensure that IT is at the table during major development efforts, especially if business end users are unfamiliar with software, technical, and data issues.


What say you?

Category: Information Management
