Posts Tagged ‘ERP’
IT project failures continue to haunt us. Consider the recent BusinessWeek article on Healthcare.gov. It’s a fascinating read, as it demonstrates the massive problems that plagued the project. In short, it was an unmitigated disaster. From the piece:
Put charitably, the rollout of healthcare.gov has been a mess. Millward Brown Digital, a consulting firm, reports that a mere 1 percent of the 3.7 million people who tried to register on the federal exchange in the first week actually managed to enroll. Even if the problems are fixed, the debacle makes clear that it’s time for the government to change the way it ships code—namely, by embracing the approach to software development that has revolutionized the technology industry.
You could write a book about the failure of a project so large, so expensive (nearly $400 million), and so important. Even as of this writing, site performance is terrible.
I certainly don’t have any inside information about how the project was (mis)managed. I can’t imagine, however, that incentives were properly aligned. With more than 50 different tech vendors involved, it’s highly probable that most if not all parties behaved in their rational, economic self-interest. Think Freakonomics.
I am reminded of an ERP project on which I worked about five years ago. The CIO routinely ignored the advice of consultants that the system was nowhere near ready to go live. I found out near the end of my contentious assignment that the entire executive team was receiving massive bonuses based upon going live at the end of the year. This no doubt colored the CIO’s perception of show-stopping issues.
Think about it. When a $100,000 bonus is on the line, how can you not minimize the impact of employee and vendor data issues? More to the point, how can one truly be objective when that type of carrot is at stake?
I like to think that I have my own moral compass and that, if given the chance, I put the needs of the many above the needs of the few, to paraphrase Mr. Spock. Silly is the organization, however, that ignores the impact of financial incentives on IT projects.
Structure your compensation based upon long-term organizational goals, not short-term ones. While no guarantee, you’ll increase the chances of successful IT projects.
What say you?
We’re hearing a great deal these days about Big Data and related terms, one of which is Big Data analytics. There are many definitions of this term; here’s one as good as any:
Big Data analytics is the process of examining large amounts of data of a variety of types (Big Data) to uncover hidden patterns, unknown correlations, and other useful information. Such information can provide competitive advantages over rival organizations and result in business benefits, such as more effective marketing and increased revenue.
You’ll get no argument from me on the importance of defining key terms, be it Big Data, analytics, platforms, etc. Many blown IT projects or corporate initiatives can trace their failures to people not being on the same page from day one.
And this is why I’m a bit skeptical of the term Big Data analytics. Is the focus on Big Data? Analytics? Both?
Where’s the Focus?
I’d actually argue that it should be neither. That is, “BDA” is just a means to normal business ends. To me, the entire point of capturing, storing, and analyzing any data (Big or Small) is to move the needle. Period. Or, if you like, picture a simple chain: data feeds analysis, analysis informs decisions, and decisions drive business results.
How many of us take the chain to the end? Or do things stop prematurely? I worry that the focus on either analytics or Big Data is misplaced. They are all merely means to the traditional business ends: increasing sales, decreasing expenses, etc.
I’ve written thousands of reports in my consulting career and, lamentably, far too many of my clients would want the report for the sake of wanting the report. I can recall several occasions in which I’ve stumped my clients by asking a simple question like, “What do you do with this information?”
Simon Says: Don’t Forget the Endgame
I have no doubt that the analytics available from unstructured data can augment our understanding of customers, users, employees, and just about everyone else. At the same time, though, data for the sake of data is meaningless. Consider two organizations, A and B. The former effectively utilizes Small Data and routinely makes decisions based on analysis, tested hypotheses, and fact. The latter doesn’t touch the vast troves of data at its disposal–both big and small.
All else equal, I’ll bet on Organization A any day of the week and twice on Sunday.
What say you?
In his excellent series on data profiling, Jim Harris details a number of basic steps that organizations can take to assess the completeness and validity of their data, a process called data profiling. Among the many lessons that Harris imparts is the need to assess key data according to the following attributes:
- NULL – count of the number of records with a NULL value
- Missing – count of the number of records with a missing value (i.e., non-NULL absence of data, e.g., character spaces)
- Actual – count of the number of records with an actual value (i.e., non-NULL and non-Missing)
- Completeness – percentage calculated as Actual divided by the total number of records
- Cardinality – count of the number of distinct actual values
- Uniqueness – percentage calculated as Cardinality divided by the total number of records
- Distinctness – percentage calculated as Cardinality divided by Actual
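The metrics above are straightforward to compute. Here’s a minimal sketch in Python, with a column modeled as a plain list and `None` standing in for NULL (the function name and sample data are illustrative, not from Harris’s series):

```python
def profile_column(values):
    """Compute basic data-profiling metrics for one column of data.

    NULL is modeled as Python None; "Missing" means a non-NULL value
    that is effectively blank (an empty string or character spaces).
    """
    total = len(values)
    nulls = sum(1 for v in values if v is None)
    missing = sum(1 for v in values if v is not None and str(v).strip() == "")
    actual_values = [v for v in values if v is not None and str(v).strip() != ""]
    actual = len(actual_values)
    cardinality = len(set(actual_values))  # distinct actual values
    return {
        "NULL": nulls,
        "Missing": missing,
        "Actual": actual,
        "Completeness": actual / total if total else 0.0,
        "Cardinality": cardinality,
        "Uniqueness": cardinality / total if total else 0.0,
        "Distinctness": cardinality / actual if actual else 0.0,
    }

# Example: profile a column of vendor names.
print(profile_column(["Acme", "Acme", "  ", None, "Globex"]))
```

Running a function like this against every key column before a conversion quickly shows which data elements need attention.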
Admittedly, few companies do comprehensive diagnoses of key data elements prior to undertaking a massive system integration project.
A few years ago, I worked on a project for a company implementing a new ERP system (call it XYZ, Inc. here). XYZ was no exception to this rule: with regard to employee, customer, and vendor information, it did not profile its data prior to commencing the project. Instead, it opted for a reactive approach. The predictable result: its data quality suffered during and after the system activation.
Simon Says: Data Profiling Is No Luxury
Do not begin a major system endeavor with both a tight timeline and the assumption that all data can be cleansed during the project. Better yet, make data cleanup its own effort before beginning the project in earnest.
It’s a myth to think of data profiling as a luxury. Think of it as an investment. The time and money spent profiling data will pay off in spades, forcing the organization to decide which data elements are essential and which are not. If there’s any part of a project that lends itself to milestone consulting, it’s data profiling. Consultants can identify data-oriented issues but should not be expected to resolve them.
What say you?
In my last post, I discussed the sin of pride and information management (IM) projects. Today, let’s talk about envy, defined as “a resentful emotion that occurs when a person lacks another’s (perceived) superior quality, achievement or possession and wishes that the other lacked it.”
I’ll start off by saying that, much like lust, envy isn’t inherently bad. Wanting to do as well as another employee, department, division, or organization can spur improvement, innovation, and better business results. Yes, I’m channeling my inner Gordon Gekko: Greed, for lack of a better word, is good.
With respect to IM, I’ve seen envy take place in two fundamental ways: intra-organizational and inter-organizational. Let’s talk about each.
Intra-Organizational Envy

This type of envy takes place when employees at the same company resent the success of their colleagues. Perhaps the marketing folks for product A just can’t do the same things with their information, technology, and systems that their counterparts representing product B can. Maybe division X launched a cloud-based CRM or wiki and this angers the employees in division Y.
At its core, intra-organizational envy stems from the inherently competitive and insecure nature of certain people. These envious folks have an axe to grind and typically have some anger issues going on. Can someone say schadenfreude?
Inter-Organizational Envy

This type of envy takes place between employees at different companies. Let’s say that the CIO of hospital ABC sees what her counterpart at hospital XYZ has done. The latter has effectively deployed MDM, BI, or cloud-based technologies with apparent success. The ABC CIO wonders why her hospital is so ostensibly behind its competitor and neighbor.
I’ve seen situations like this over my career. In many instances, organization A will prematurely attempt to deploy more mature or Enterprise 2.0 technologies simply because other organizations have already done so–not because organization A itself is ready. During these types of ill-conceived deployments, massive corners are cut, particularly with respect to data quality and IT and data governance. The CIO of ABC will look at the outcome of XYZ (say, the deployment of a new BI tool) and want the same outcome, even though the two organizations’ challenges are unlikely to be the same in type and magnitude.
Envy is a tough nut to crack in large part because it’s part of our DNA. I certainly cannot dispense pithy advice to counteract thousands of years of human evolution. I will, however, say this: Recognize that envy exists and that it’s impossible to eradicate. Don’t be Pollyanna about it. Try to minimize envy within and across your organization. Deal with outwardly envious people sooner rather than later.
What say you?
Next up: gluttony.
In part four of this series, I discussed lust from an IM standpoint. Now it’s time for pride, defined as
an inwardly directed emotion that carries two common meanings. With a negative connotation, pride refers to an inflated sense of one’s personal status or accomplishments, often used synonymously with hubris. With a positive connotation, pride refers to a satisfied sense of attachment toward one’s own or another’s choices and actions, or toward a whole group of people, and is a product of praise, independent self-reflection, or a fulfilled feeling of belonging.
Let’s apply the concept of pride to this blog.
The Downside of Pride
All too often in the corporate and IM worlds, pride rears its ugly head into things–often with disastrous results. For instance, at one organization at which I worked, a senior IT director and 20-year veteran (Al) was quite proud of the systems he and his team had built. “His” systems handled the organization’s payment of employee bonuses and the distribution of stock options.
To be sure, this wasn’t “rocket surgery”, but doing this across more than 60 countries wasn’t as easy as it might sound. Al was so proud of his systems that he fought efforts to deploy an enterprise-wide solution for years, erroneously telling his colleagues that COTS applications couldn’t do what his programs could.
For years, Al was able to postpone the organization’s implementation of PeopleSoft. When he couldn’t delay it any longer, Al fought successfully to have the ERP highly customized to “integrate”–and I use that term very loosely–with his systems. The end result: an absolute mess of data coupled with a spaghetti architecture.
A Different Example
Of course, pride need not hamper organizations. Contrast Al with a manager I met while consulting at a VOIP company. Steve was in fact proud of the customer applications that he had built. Without question, they had served a purpose as his company more than tripled in size. As business continued to grow, it was becoming more and more apparent that, on many levels, his creations were reaching their limits. I was nervous speaking with Steve: I knew that I needed to have a difficult conversation with him.
To his credit, Steve didn’t let pride get in his way. He knew that those applications could only do so much, and that at some point soon he’d need to scrap them and start anew. Whether he built new systems or implemented existing ones, Steve knew that his company would be better off for it.
So, what separated Steve and Al? After all, both were proud of what they had accomplished. First, Steve was much younger than Al. Oftentimes people near the end of their careers are concerned about their legacies; they want to leave their mark on their organizations. Beyond age, though, there’s something in the DNA of people. Some folks can be simultaneously proud of what they’ve done–and still realize the limitations of their accomplishments.
What say you?
In my last post, I talked about greed as it relates to IM projects. Long story short, for different reasons, people actively refuse to share information, train employees, or generally cooperate with others.
Today’s topic: sloth, defined by Wikipedia as:
…spiritual or emotional apathy, neglecting what God has spoken, and being physically and emotionally inactive. Sloth can also indicate a wasting due to lack of use, concerning a person, place, thing, skill, or intangible ideal that would require maintenance, refinement, or support to continue to exist.
To be sure, on information management (IM) projects, the ultimate effects of sloth often resemble those of greed–i.e., work just doesn’t get done in a timely manner, if at all. Alternatively, work is just sloppy. However, the motivations behind sloth and greed are typically quite different.
Greed carries a certain defiance and even anger. For instance, consider Barry, an employee who isn’t happy that his job is changing. No one asked him what he thought. Maybe he has to learn a new skill or application. Either passively or actively, Barry expresses this anger in the workplace. Take away the change in Barry’s job and he would not have been a problem.
By way of contrast, sloth lacks the same type of precursor. When sloth manifests itself, an employee doesn’t necessarily feel aggrieved. Nothing is changing with Lillian’s job and she’s actually pretty happy. Maybe her boss asked her to look into Big Data. However, for whatever reason, she just doesn’t feel like it. She’d rather play Angry Birds while no one is looking.
Now, sloth should not be confused with an employee’s conflicting and diverging priorities. For instance, on ERP projects throughout my career, I would need to meet with the Director of Finance or the Payroll Manager for different reasons. The organization was deploying a new CRM or ERP system and my job involved activating that new system. (Of course, I couldn’t do it alone.) Unfortunately, I would often have trouble scheduling time with individual clients because they often had to deal with emergencies. By definition, these issues trumped any “future plans” that I had to discuss with them. Consequently, my meetings were sometimes canceled.
This isn’t sloth; this is just reality. A problem with testing or training in a new system always loses to an immediate organizational crisis. Consultants need to get used to this. It’s an occupational hazard.
Sloth is often a function of knowledge, curiosity, and personality. Consider the following problem: similar but not identical customer or employee data from two different spreadsheets has to be married–say, 2,000 records.
Sure, there are people who believe that this has to be a manual exercise. Because of this, they just don’t feel like doing this type of monotonous work. But plenty of people are naturally curious; they know that there just has to be a better way. Adventurous and inquisitive folks are rarely lazy. They either know about Excel’s VLOOKUP function or will search the web or ask others for a better way to marry data. JOIN statements come to mind.
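The spreadsheet problem above is exactly what a join solves. Here’s a minimal VLOOKUP/JOIN-style sketch in Python; the record sets, field names, and shared key (`employee_id`) are all illustrative assumptions:

```python
# Two record sets that share a key, as if loaded from two spreadsheets.
hr_records = [
    {"employee_id": 101, "name": "A. Smith", "department": "Finance"},
    {"employee_id": 102, "name": "B. Jones", "department": "Payroll"},
]
it_records = [
    {"employee_id": 101, "email": "asmith@example.com"},
    {"employee_id": 103, "email": "cdoe@example.com"},
]

# Index one side by the key, then look up each record from the other side --
# the same idea as Excel's VLOOKUP or a SQL LEFT JOIN.
by_id = {rec["employee_id"]: rec for rec in it_records}
merged = [{**rec, **by_id.get(rec["employee_id"], {})} for rec in hr_records]

for row in merged:
    print(row)
```

Scaled up to 2,000 records, the same few lines replace hours of error-prone copying and pasting.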
Understanding sloth is the first step in preventing or minimizing it. Ignore it at your own peril.
What say you?
Next up: Pride
In my last post, I discussed the impact of wrath on IM projects. Today’s topic is the second of the deadly sins: greed.
Note that greed and sloth (to be discussed in a future post) are very different sins.
Now, let’s start off by getting our terms straight. By greed, I’m talking about the need for certain employees, groups, and departments to hoard data that ought to be shared throughout the organization. These folks are keeping for themselves what others want and/or need. For instance, consider Steve, a mid-level employee at XYZ who keeps key sales or customer data in a spreadsheet or a standalone database.
Or consider ABC, a company that implemented a new system that, for different reasons, was never populated with legacy data. Barbara in Payroll holds key payroll information and will not willingly provide it to Mark in Accounting.
To be sure, organizational greed is hardly confined to data. I’ve seen many employees over the years refuse to train other employees or lift a finger to help a new hire or perceived enemy. Maybe they refuse to meet with consultants hired by senior management to re-engineer a process.
Understanding the Nature of Greed
I could go on with examples but you get my drift. Almost always, greed emanates from some fundamental insecurity within the offending employee. What’s more, at the risk of getting myself in a whole heap of trouble, I’ve found that more senior employees are more likely to be greedy. Now, this is a broad generalization and certainly does not apply across the board. I’ve seen exceptions to this general rule: young employees who wouldn’t share information and near-retirement-aged folks more than happy to show others what they know.
As employees become less secure about their jobs and themselves, they naturally start to think about the future–their futures. It’s just human nature. Many people understandably don’t want to be looking for jobs today. (This feeling increases as we age, what with many familial and personal responsibilities.) We realize that the grass is not always greener. For some of us, this manifests itself in a tendency to attempt to protect our jobs, departments, budgets, fiefdoms, and headcounts–at least until the perceived threat diminishes.
But there’s a critical and countervailing force at play for the greedy: Information wants to be free. As open-source software, open APIs, and open data sources continue to sprout, people are becoming less and less tolerant of employee bottlenecks. Those who refuse to play ball may be able to temporarily stall large-scale information management projects, but eventually, by hook or by crook, those dams always break.
I wish that I had a simple solution for resolving employee-related greed issues. I don’t. Many tomes have been written about managing difficult employees. At a high level, organizations can use two well-worn tools: the carrot and the stick. Consider rewarding employees who share information and knowledge while concurrently punishing those who don’t.
What say you?
Next up: sloth.
The 1993 documentary The War Room tells the story of the 1992 US presidential campaign from a behind-the-scenes perspective. The film shows first-hand how Bill Clinton’s campaign team responded to different crises, including allegations of marital infidelity. While a bit dated today, it’s nonetheless a fascinating look into “rapid response” politics just when technology was starting to change traditional political media.
Today, we’re starting to see organizations set up their own data war rooms for essentially the same reasons: to respond to different crises and opportunities. Information Week editor Chris Murphy writes about one such company in “Why P&G CIO Is Quadrupling Analytics Expertise”:
[Procter & Gamble CIO Filippo] Passerini is investing in analytics expertise because the model for using data to run a company is changing. The old IT model was to figure out which reports people wanted, capture the data, and deliver it to the key people weeks or days after the fact. “That model is an obsolete model,” he says.
Murphy hits the nail on the head in this article. Now, let’s delve a bit deeper into the need for a new model.
The Need for a New Model
There are at least three factors driving the need for a new information management (IM) model in many organizations. First, let’s look at IT track records. How many organizations invested heavily in the late 1990s and early 2000s on expensive, on-premise ERP, CRM, and BI applications–only to have these investments ultimately disappoint the vast majority of stakeholders? Now, on-premise isn’t the only option. Big Data and cloud computing are gaining traction in many organizations.
Next up: time to respond. Beyond the poor track record of many traditional IT investments, we live in different times relative to even ten years ago. Things happen so much faster today. Why? The usual suspects are the explosion of mobility, broadband, tablets, and social media. Ten years ago, the old, reactive, requirement-driven IM model might have made sense. Today, however, that model is becoming increasingly difficult to justify. For instance, a social media mention might cause a run on products. By the time that proper requirements have been gathered, a crisis has probably worsened. An opportunity has probably been squandered.
Third, data analysis and manipulation tools have become much more user-friendly. Long gone are the days in which people needed a computer science or programming background to play with data. Of course, data modeling, data warehousing, and other heavy lifting necessitate more technical skills and backgrounds. But the business layperson, equipped with the right tools and a modicum of training, can easily investigate and drill down on issues related to employees, consumers, sales, and the like.
Against this new backdrop, which of the following makes more sense?
- IT analysts spending the next six weeks or months interacting with users and building reports?
- Skilled users creating their own reports, creating and interpreting their own analytics, and making business decisions with minimal IT involvement (aka self-service)?
Building a data war room is no elixir. You still have to employ people with the skills to manage your organization’s data–and hold people accountable for their decisions. Further, rapid response means making decisions without all of the pertinent information. If your organization crucifies those who make logical leaps of faith (but ultimately turn out to be “wrong” in their interpretation of the data), it’s unlikely that this new model will take hold.
What say you?
For many years, I worked implementing different enterprise systems for organizations of all sizes. At some point during the project (hopefully sooner rather than later), someone would discover that the core application had no place to store a potentially key field. Against that backdrop, the team and I had a few choices:
- customize the system
- store the data in a standalone spreadsheet or database
- add and use a custom field
While a custom field was no elixir, it often solved our problem. At the very least, it was usually the lesser of the three evils mentioned above. A custom field would not tax the IT department nearly as much as tweaking the code, and functional end users enjoyed being able to see that information in the native system–and report off of it.
In this post, I’ll review some best practices associated with custom fields.
Tip 1: If necessary, make it required.
Some systems I’ve seen lacked fields for specific information related to employees or vendors. Perhaps the company had to track whether an employee received a laptop or if a vendor required particularly unusual payment terms. (I’m using fairly generic examples here.) In that case, adding a custom field made all the sense in the world.
While optional fields can be beneficial, understand that there are perils associated with not requiring users to enter data for them. In the examples above, requiring a “Yes/No” response theoretically guarantees that someone selects one of the two in that field. (Of course, it might not be the right entry, but that’s a totally different discussion). Note that you might not want to require a field that only applies to two percent of the population, lest you face mass disaffection from your users–and a good number of errors.
Tip 2: Lock it down.
With custom fields, the single biggest mistake I’ve seen organizations make is to give employees the ability to add their own values. Imagine a drop-down list for employee laptops with the following choices:
- DELL INSPIRON
- IBM PC
- DLL (intentionally misspelled)
Lists like this can explode in no time.
Agree on a predefined set of choices and restrict end users from adding their own, unless you trust them and they have been trained. Remember GIGO. Anyone reporting on “DELL INSPIRON” alone will miss the mistyped “DLL” records–and incorrect business decisions will result.
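One way to lock a field down is simple server-side validation against the predefined list. A minimal sketch in Python, where the allowed values and the function name are my own illustrative assumptions:

```python
# The agreed-upon set of choices for the custom field.
ALLOWED_LAPTOP_MODELS = {"DELL INSPIRON", "IBM PC", "APPLE MACBOOK"}

def validate_laptop_model(value: str) -> str:
    """Reject free-text entries; only predefined choices are accepted."""
    normalized = value.strip().upper()  # tolerate case and whitespace slips
    if normalized not in ALLOWED_LAPTOP_MODELS:
        raise ValueError(f"Invalid laptop model: {value!r}")
    return normalized

print(validate_laptop_model("dell inspiron"))
```

With a gate like this in place, the mistyped “DLL” entry never makes it into the system in the first place.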
Tip 3: Audit.
Creating a custom field by itself means very little. Regularly running audit reports can often nip a big data problem in the bud. In the example above, the purchase of more expensive Mac computers might drive higher procurement costs, although I would hope that most accounting departments wouldn’t need to rely upon a custom field.
But perhaps Macs need a software update and the organization needs to quickly amass a list of those with Apple computers. The possibilities are limitless.
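An audit can be as simple as counting the values in the field and flagging anything outside the agreed-upon list. A sketch in Python; the records, field names, and allowed values are illustrative:

```python
from collections import Counter

# Sample records with a custom "laptop" field, as if pulled from the system.
records = [
    {"employee": "A", "laptop": "DELL INSPIRON"},
    {"employee": "B", "laptop": "DLL"},  # a typo the audit should surface
    {"employee": "C", "laptop": "APPLE MACBOOK"},
]
ALLOWED = {"DELL INSPIRON", "IBM PC", "APPLE MACBOOK"}

# Count each value, then flag anything outside the predefined set.
counts = Counter(rec["laptop"] for rec in records)
suspect = {value: n for value, n in counts.items() if value not in ALLOWED}

print("Value counts:", dict(counts))
print("Values needing cleanup:", suspect)
```

Run on a schedule, a report like this nips bad entries in the bud before they pile up.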
Tip 4: Don’t overdo it.
Creating necessary custom fields can certainly bail organizations out of some pretty big problems, but it’s important not to rely on them too much. In other words, if your application requires 100 custom fields, maybe it’s time to look at a new system altogether–either enterprise-wide or best-of-breed. Custom fields are typically the places of last resort. Odds are that systems that rely upon them to a large extent are not terribly robust.
Follow these rules for creating custom fields and you should be able to get more out of them.
What say you?
Business leaders often criticize IT for its inability to get with the times. Do the following questions sound familiar?
- Why haven’t we embraced the cloud?
- What’s our open source strategy?
- What are we doing with mobility?
Well, in this post, it’s time to put these criticisms into context–and partially let IT off the hook.
Learning from the Music Industry
In a recent piece for The Wall Street Journal, Don Tapscott writes about business models that have yet to adapt to the digital age. The author or co-author of many popular books, including Wikinomics, Tapscott knows what he’s talking about.
In the article, Tapscott covers a number of industries, including music. He writes:
Instead of clinging to late-20th-century distribution technologies, like the digital disk and the downloaded file, the music business should move into the 21st century with a revamped business model that converts music from a product to a service.
All music labels and performers should put their music into a commons in the cloud. Instead of purchasing tunes, listeners would pay a small fee–say $4 per month–for access to all the songs in the world. Recordings would be streamed to them via the Internet to any appliance of their choosing–such as their laptop, mobile device, car, or home stereo. Artists would be compensated based on how many times their music had been streamed.
While the particulars above apply to the music industry, that’s hardly the only business struggling with this brave new world. Many industries have yet to get their arms around entirely new economic realities. For starters, I’d put publishing firmly in that category.
So, let’s say that you ran IT for Sony Music or Random House. That’s right: You’re the CIO. Would it be fair if your CEO complained about not embracing Enterprise 2.0?
Understanding the Brave New World
The last five years have seen dramatic shifts in the world of technology. Many consumers have become de facto producers. Erstwhile products have been turned into services–and some physical products have morphed into digital ones. For this, we can blame or credit the usual suspects:
- the rise in broadband penetration
- the decline in the price of storage
- the explosion of mobility and apps
- the ubiquity of the Internet
- the growth of the social web
- and others
Of course, there are those in more mature industries and organizations that wish that these technology “improvements” would just stop. From an IT perspective, thousands of organizations in the late 1990s and early 2000s spent millions of dollars configuring their ERP and CRM systems to work a certain way. When it comes to today’s web-centric world, few CIOs are ready for the challenges associated with transforming their enterprises, especially when you consider the following:
- Most CIOs have to accomplish these goals with fewer and fewer financial and human resources.
- Technology changes faster than ever.
- Regulatory requirements are anything but laissez faire.
And it is here where IT often gets a bad rap. How can IT (as a department and as individuals) be expected to embrace entirely new ways of doing things when an organization’s business model is antiquated–or is in serious need of repair? For instance, moving to a SaaS-based set of applications can be hard to justify when the organization does business like it’s 1990.
None of this exculpates IT departments and CIOs that intentionally drag their feet. When the business makes a clear decision to get with the times and adopt more modern methods, IT has to quickly follow. By the same token, however, IT is by definition a support arm of the organization.
If your business is behind the times, don’t expect IT to be ahead of them.
What say you?