Archive for the ‘Information Management’ Category
In my last post, I discussed the impact of wrath on IM projects. Today’s topic is the second of the deadly sins: greed.
Note that greed and sloth (to be discussed in a future post) are very different sins.
Now, let’s start off by getting our terms straight. By greed, I’m talking about the tendency of certain employees, groups, and departments to hoard data that ought to be shared throughout the organization. These folks are keeping for themselves what others want and/or need. For instance, consider Steve, a mid-level employee at XYZ who keeps key sales or customer data in a spreadsheet or a standalone database.
Or consider ABC, a company that implemented a new system that, for various reasons, was never populated with legacy data. Barbara in Payroll holds key payroll information and will not willingly provide it to Mark in Accounting.
To be sure, organizational greed is hardly confined to data. I’ve seen many employees over the years refuse to train other employees or lift a finger to help a new hire or perceived enemy. Maybe they refuse to meet with consultants hired by senior management to re-engineer a process.
Understanding the Nature of Greed
I could go on with examples but you get my drift. Almost always, greed emanates from some fundamental insecurity within the offending employee. What’s more, at the risk of getting myself in a whole heap of trouble, I’ve found that more senior employees are more likely to be greedy. Now, this is a broad generalization and certainly does not apply across the board. I’ve seen exceptions to this general rule: young employees who wouldn’t share information and near-retirement-aged folks more than happy to show others what they know.
As employees become less secure about their jobs and themselves, they naturally start to think about the future–their futures. It’s just human nature. Many people understandably don’t want to be looking for jobs today. (This feeling increases as we age, what with many familial and personal responsibilities.) We realize that the grass is not always greener. For some of us, this manifests itself in a tendency to attempt to protect our jobs, departments, budgets, fiefdoms, and headcounts–at least until the perceived threat diminishes.
But there’s a critical and countervailing force at play for the greedy: information wants to be free. As open-source software, open APIs, and open data sources continue to sprout, people are becoming less and less tolerant of employee bottlenecks. Those who refuse to play ball may be able to temporarily stall large-scale information management projects, but eventually, by hook or by crook, those dams always break.
I wish that I had a simple solution for resolving employee-related greed issues. I don’t. Many tomes have been written about managing difficult employees. At a high level, organizations can use two well-worn tools: the carrot and the stick. Consider rewarding employees who share information and knowledge while concurrently punishing those who don’t.
What say you?
Next up: sloth.
Most of us have heard of the seven deadly sins: wrath, greed, sloth, pride, lust, envy, and gluttony. Inspired by Simon Laham’s book The Science of Sin, I’m kicking off a seven-part series in which I look at these sins in the context of information management (IM).
Today’s installment: wrath. Think of wrath not as the final sin in the chilling movie Se7en, but as the equivalent of anger. As any psychologist can tell you, anger manifests itself in one of two forms: passive and aggressive.
Let’s cover each from an IM perspective.
In order for just about any large-scale IM project to have a remote chance of being successful, people need to work together. To be sure, collaboration is essential (although I’d argue that it’s a necessary but insufficient condition for success). Yet, for whatever reason, often individuals have axes to grind and won’t work with consultants, vendors, colleagues, and even senior leadership. Rather than outwardly defying others, these folks vacillate. They make excuses. They ensure that other, more important (at least, in their view) priorities take precedence. Or perhaps they’ll do nothing. They’ll ignore an email or not return a phone call.
Without question, this is the more common of the two forms of anger. Now, let’s move to the counterpart of passive anger.
Aggressiveness and outright defiance are much, much less common on IM projects. Rarely will an employee be so recalcitrant that he will flat-out refuse to do something, raise his voice, or physically threaten another person. The reasons are obvious. While employment laws vary considerably by country, in many parts of the world you have no right to a job. For instance, in the United States, at-will employment is the norm with two important exceptions:
- employment contracts with clauses outlawing certain types of behavior
- certain unionized environments (both public and private sectors)
Translation: those who behave in a manner not conducive to workplace tranquility can be terminated.
Yes, intraorganizational aggression tends not to take place very often. That’s not to say, however, that interorganizational aggression rarely happens. On the contrary, many organizations have lamentably bad relationships with some of their suppliers, customers, vendors, and other third parties. Many times, the very source of this conflict is (you guessed it) data.
For instance, organizations implementing new systems often need to receive special attention from insurance and financial institutions as they test new interfaces. Fair enough, right? The problem: those third parties often have to service thousands of other equally important clients, making it nearly impossible to devote exclusive resources to the organization replacing its legacy systems. The end result is often yelling and screaming.
I’d argue that aggressive anger is actually better for IM projects for one simple reason: you know where the aggrieved party actually stands. With passive anger, you have to guess if John or Jane is really overburdened with other work or is just angry about the tasks asked of him/her.
What say you?
Over the last few months I’ve had the opportunity to collaborate with a range of my Deloitte colleagues on a report into the “digital disruption” of business. The result brings together the impacts of digital technologies with the explosion of information and the enabling capabilities of the cloud. For me, the most important aspect of the collaboration has been the separation of digital from its purely technical connotations. While focused on Australia, the report’s findings are largely applicable to all geographies.
Hear the word “digital” and your mind races to the latest internet or mobile device. Yet over the past forty years, many new technologies have been introduced that caused disruption and fit this definition of digital.
Some examples include the wide introduction, from the 1970s, of the “digital computer”, a term which no longer needs the digital prefix. Similarly, the digital mobile phone replaced its analogue equivalent in the 1990s, introducing security and a raft of new features including SMS. (Who recalls the number of scandals caused when radio hams listened in on analogue calls made by politicians?) Digital communications over the internet are simply another example of various analogue predecessors being digitised.
Digital business separates its constituent parts to create independent data and processes which can then be assembled rapidly in a huge number of new and innovative ways. Airlines are a good example. Not that many years ago, the process of ticketing through to boarding a flight was analogue, meaning that each step led to the next and could not be separated. Today purchasing, ticketing and boarding a flight are completely independent and can each use completely different processes and digital technology without impacting each other. Passenger handling for airlines is now a digital business.
What this means is that third parties or competing internal systems can work on an isolated part of the business and find new ways of adding value. For retailers this means that the pictures and information supporting products are independent of the website that presents them and certainly the payment processes that facilitate customer transactions. A digital retailer has little trouble sharing information with new logistics, payment and mobile providers to quickly develop more efficient or new routes to market.
If you don’t think that this is going to impact you in the short-term, then the report’s analysis should convince you otherwise. 32% of the Australian economy (and by extrapolation the economies of other countries) will experience a major disruption in 0 to 3 years and a further 33% will be similarly impacted in 3 to 5 years. Impact is one thing, but this report leaves you with a sense of just how much work there is still to be done to respond to the digital challenge.
Up until four years ago, I made all of my money on large-scale IT projects. Many of these informed my first book, Why New Systems Fail. Since that time, I gradually made the break into speaking, writing, publishing, and website design.
As a result, my P&L statement looks nothing like it did back in 2008. Yet, in a way, absolutely nothing has changed. For years, this dude has abided by the 70-30 rule. That is, whether it’s a multi-million dollar enterprise system or open-source software like WordPress, I consider myself 70 percent functional. Note that the 70 percent rule applies to both Waterfall and Agile projects.
This means that I learn how the front end of a software application works because that’s where most users spend their time. It doesn’t matter if you’re paying vendors or employees, entering sales, or writing blog posts. Most people spend the vast majority of their hours in an application as functional users, not technical ones.
The Importance of the Other 30 Percent
Yet, at least for me, the other 30 percent is key. At times, functional users will have technical questions about how an application works, where its data is stored, how to change or customize something, and the like. The purely functional consultant or web designer can only get you so far. I’ve never met anyone completely satisfied with a vanilla application after using it for any substantial length of time.
I can tell you that forcing myself to be 30 percent technical has actually increased the depth of my “other” 70 percent. In other words, if you understand a good bit of the technical side, then you will augment your appreciation and knowledge of the functional side.
And I’m hardly the only one who feels this way. I asked my friend, entrepreneur and WordPress designer Todd Hamilton about this.
There are quite a few steps in knowledge from simple end user to developer. To do either job really well, a person needs an overall knowledge framework of the system she is working with, from both perspectives. For the WordPress simple end user, this requires understanding several things: 1) a bit about what a database is; 2) that the WordPress admin is for putting stuff in the database; 3) that the WordPress theme displays that data.
For the WordPress developer, that means understanding: 1) how users intend to interact with the data; 2) what makes sense to them; 3) what is too complex. A good dynamic is where both the developer and the end user share that knowledge framework and then the focus becomes execution instead of achieving understanding.
Hamilton’s exactly right here and his WordPress lessons can be extrapolated to other applications.
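Hamilton’s three-part framework for the end user can be made concrete with a toy sketch. Everything below is a drastically simplified, hypothetical stand-in for WordPress’s wp_posts table (the real thing runs on MySQL with far more columns), but it shows the admin-writes/theme-reads split he describes:

```python
import sqlite3

# A toy, hypothetical version of WordPress's wp_posts table in SQLite.
# Real WordPress uses MySQL and a much richer schema; this is only a
# sketch of the "admin puts stuff in the database, theme reads it back"
# mental model.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE wp_posts (
        ID INTEGER PRIMARY KEY,
        post_title TEXT,
        post_content TEXT,
        post_status TEXT
    )
""")

# The admin screens are, in effect, inserting rows into the database...
conn.execute(
    "INSERT INTO wp_posts (post_title, post_content, post_status) "
    "VALUES (?, ?, ?)",
    ("Hello World", "My first post.", "publish"),
)

# ...and the theme is, in effect, reading those rows back for display.
for title, content in conn.execute(
    "SELECT post_title, post_content FROM wp_posts "
    "WHERE post_status = 'publish'"
):
    print(f"<h1>{title}</h1>\n<p>{content}</p>")
```

A functional user only ever sees the admin screen and the rendered page; the 30-percent-technical consultant also knows roughly what sits in between.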
Simon Says: Find the Hybrids
When hiring a consultant for any type of technology project, ideally check his or her technical credentials. A purely functional consultant probably isn’t up to speed on the application’s data model–something that might have enormous implications down the road.
Ditto on the technical end. Someone without a lick of functional or application knowledge isn’t going to appreciate the legitimate challenges of most of the audience for that very application.
What say you?
An oft-used metaphor in the project management arena is that of an airplane. Put quite simply, enterprise systems (read: ERP, CRM, and others) are supposed to be implemented and activated in a way that allows for a smooth landing. See below:
At least, that’s the theory. Lamentably, because of a cauldron of reasons, delays and cost overruns force most organizations to attempt to “land their planes” as follows:
The result (more often than not): a crash.
The Tension between Short-Term and Long-Term
As I know all too well and have seen far too often, many short-term sacrifices are made to make a go-live date (hence the term package slam). Band-Aids are liberally employed. Data is bastardized. Issues are put on the back burner to be addressed in the future.
Unfortunately, however, those issues are all too often left unaddressed, and they only worsen over time. Overworked and stressed-out employees often view data considerations as pesky annoyances. (This problem is particularly pronounced in organizations that have yet to recognize the importance of data governance.) Day-to-day realities trump intelligent long-term information management. The future never comes and organizations have to scramble to respond to an internal crisis, legal challenge, or regulatory inquiry. Consultants are called in. Meetings are held. Blame is cast. Consultants are fired. Software vendors are sued.
Simon Says: Don’t Attempt the High-Risk Landing
I completely understand the pressure for an organization to activate a new system–even if prematurely. (The same holds true for upgrades and enhancements.) Resist the temptation.
The drawbacks with the high-risk landing approach are two-fold. First, there are the immediate consequences. These include incorrectly paid vendors and employees, manufacturing gaffes, and the like. These are messy enough.
Long-term, however, the consequences are often more severe. As we learned once again with Knight Capital, some bells simply cannot be unrung; some errors just can’t be fixed. The very confidence that employees, partners, investors, and creditors have in your organization may erode because they don’t think that the organization can effectively carry out its operations.
Don’t make that mistake.
What say you?
There’s very little doubt that we are generating more and more data every day. How much? Well, let me use images rather than words.
The following infographic comes from Fliptop:
In a word, wow.
While we know that there’s a boatload of data out there, we are much less certain about who owns this data. That’s why this TechCrunch article on the resurrection of now-defunct content sharing site Digg is so interesting.
To make a long story short, on August 31, 2012, Digg “launched the Digg Archive, a tool to help users of the old Digg (before July 2012) retrieve a history of their Diggs, Submissions, Saved Articles, and Comments.” Digg had help from Kippt and Pinboard. From the post:
We believe that people own the data they create, so while we work to determine if and how this data makes its way into the new Digg, we wanted to provide a way for users to access their history. It took some digging through the old infrastructure, but the complete Digg Archive is now live.
To be sure, Digg is hardly the only company or organization to feel this way. Google’s Data Liberation Project also comes to mind. In short, the DLP enables users to move their data in and out of Google products.
It’s hard to discuss data ownership today without addressing the corresponding issues of privacy and security–and this leads us right to the elephant in the room: Facebook. Facebook doesn’t exactly make it easy for users to remove their data from the site.
Surprised? Don’t be. As I write in The Age of the Platform, companies like Amazon, Apple, Facebook, and Google try to make their platforms as sticky as possible. And I can think of few stickier ploys than “locking” user data into a site. For that very reason, many people won’t move off of Microsoft Outlook into a cloud-based alternative. There’s no native “transfer to Gmail” button in any version of Outlook I’ve seen. And, again, this is by design.
Well, the forces of data freedom aren’t standing still. Computational knowledge engine Wolfram Alpha is at least letting users see the specific data Facebook keeps on them. Note that Facebook can at any point turn off the API that Wolfram Alpha uses to pull this information. Translation: the forces of data liberation are not without their obstacles and opponents.
Stay tuned. The Data Liberation Wars are just heating up. I firmly expect Twitter, Google, Amazon, and other platform-based companies to face this growing issue in the near- and long-term.
What say you?
One would hope that today most businesses know which of their products sell–and how much. After all, it’s 2012. Beyond that, many companies have spent a great deal of money trying to determine why people buy–and don’t buy–certain products. Now, with advances in information technology, we’re getting closer to answering the when question–as in, when customers spend money.
Not surprisingly, it turns out that mega-retailer Wal-Mart is at the forefront of the all-important when question. It turns out that when people are paid is a major factor in when they buy what they buy. According to a recent Reuters article:
“The paycheck cycle remains pronounced, and there continues to be a lot of uncertainty in the global economy,” Chief Financial Officer Charles Holley said in a recorded message on Thursday.
The world’s largest retailer, viewed as a barometer of economic activity, continues to see signs customers are strapped—spending more at the beginning of the month, when they get their paychecks. [emphasis mine]
That is, budget-conscious customers may be more likely to buy a new cell phone when their bank accounts are flush with cash. By contrast, consumers may well buy diapers and food as needed. In other words, demand for “essential” products is probably more inelastic.
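Once transactions carry dates, the paycheck-cycle effect is straightforward to look for. Here is a minimal sketch with invented numbers, bucketing spend into early month (right after a typical payday) versus the rest of the month:

```python
from collections import defaultdict

# Invented transactions, purely for illustration: each record is
# (day_of_month, category, amount).
transactions = [
    (1, "electronics", 299.00), (2, "electronics", 149.00),
    (3, "groceries", 42.50), (12, "groceries", 38.00),
    (16, "electronics", 89.00), (22, "groceries", 41.25),
    (28, "groceries", 39.75),
]

# Bucket spending by paycheck cycle: early month (days 1-7) vs. the rest.
spend = defaultdict(float)
for day, category, amount in transactions:
    period = "early" if day <= 7 else "late"
    spend[(period, category)] += amount

# In this made-up data, discretionary purchases cluster early in the
# month while essentials are spread throughout.
for key in sorted(spend):
    print(key, round(spend[key], 2))
```

With real point-of-sale data the buckets would come from actual pay dates rather than a fixed day-of-month cutoff, but the shape of the analysis is the same.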
Let’s delve deeper into the power of when.
Now, to be sure, the question of when isn’t entirely new nor is it unaddressed. Basic economic theory tells us that certain products are more likely to be purchased with one another–a.k.a., complementary goods. For instance, it’s not uncommon for consumers to buy peanut butter and jelly together or salsa with their chips. There’s a very good reason that these products are often if not always stocked near each other in supermarkets.
So, what is new? I’d argue that even small businesses have never before had access to such vast information and technology. Those that build an embrace of analytics into their cultures have the ability to gain remarkable insights into their customers’ behavior.
With tremendously powerful data and technology, new relationships can be discovered. Maybe there’s an ideal time of the month to put peanut butter on sale. Perhaps such a discount will spur sales of jelly or bread or bananas. Maybe product placement in certain areas at certain hours of the day means more units sold. The possibilities are limitless.
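Discovering relationships like peanut butter and jelly is classic market-basket analysis. Here is a hedged sketch on invented baskets: a lift greater than 1 means two products are bought together more often than chance alone would predict.

```python
from collections import Counter
from itertools import combinations

# Invented shopping baskets, purely for illustration.
baskets = [
    {"peanut butter", "jelly", "bread"},
    {"peanut butter", "jelly"},
    {"salsa", "chips"},
    {"chips", "salsa", "soda"},
    {"bread", "milk"},
    {"milk", "soda"},
]

n = len(baskets)
item_counts = Counter(item for b in baskets for item in b)
pair_counts = Counter(
    pair for b in baskets for pair in combinations(sorted(b), 2)
)

def lift(a, b):
    """P(a and b in the same basket) / (P(a) * P(b))."""
    pair = tuple(sorted((a, b)))
    p_pair = pair_counts[pair] / n
    return p_pair / ((item_counts[a] / n) * (item_counts[b] / n))

# Complementary goods show up with lift well above 1.
print(f"peanut butter & jelly lift: {lift('peanut butter', 'jelly'):.2f}")
```

Real retailers run this at vastly larger scale (and with time-of-day and promotion variables layered in), but the underlying idea is this simple.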
I’ll grant that most businesses know what people buy and how they pay for these products (although the explosion of mobile payment methods like Square will introduce new data, considerations, and variables). That’s table stakes these days. But how many really understand other essential data questions such as why and when?
And think about the transformative power of when for the consumer. Yes, we know about daily deal sites like Groupon. When products go viral (on Twitter or on sites like thefancy.com), flash mobs can erupt, making weekly or monthly sales projections moot. I’ll bet that the importance of when will continue to increase.
What say you?
At least to me, Big Data sometimes seems like a bit of an amorphous term. Just what exactly is it, anyway?
Consider the following statistics from the Gang of Four:
- Amazon: Excluding books, the company sells 160 million products on its website. Target sells merely 500,000. Amazon’s reported to have credit cards on file for 300 million customers. 300 million. For more Amazon stats, click here.
- Apple: The company a few months ago passed 25 billion app downloads.
- Facebook: 954 million registered users share more than one billion pieces of content every day.
- Google: As of two years ago, Google handled 34,000 searches per second.
These numbers are nothing less than mind-blowing. While Facebook’s rate of growth seems to be waning, make no mistake: it’s still growing. (Deceleration of growth shouldn’t be confused with decline.)
While we’re at it, let’s look at some of Twitter’s numbers. On the company’s five-year anniversary, the company posted the following numbers:
- 3 years, 2 months and 1 day. The time it took from the first tweet to the billionth tweet.
- 1 week. The time it now takes for users to send a billion tweets.
- 50 million. The average number of tweets people sent per day, one year ago.
- 140 million. The average number of tweets people sent per day, in the last month.
- Oddly, 80 percent of all tweets involve Charlie Sheen.
OK, I’m making the last one up, but you get my drift.
A few things strike me about these numbers. First, this is a staggering amount of data. Second, all of this data is kept somewhere. To varying extents, these companies and others are turning data into information and, ultimately, knowledge.
What they do with that knowledge varies, but no one can doubt the potential of so much data–even if much of it is noise. Another issue: will people continue to use ad-supported platforms? Will we become sick of having our data sold to the highest bidder? Or, will private, ad-free platforms like app.net flourish?
Even if the latter is true, those private platforms will still be generating data. So, in a way, the explosion of data does not hinge upon the continued growth of open or “somewhat-open” platforms.
If you think that consumers are going to be generating and using less data in the upcoming years, you’re living in an alternate reality. Take steps now to ensure that your organization has the software, hardware, and human capabilities to handle vastly increasing amounts of data.
What say you?
Whether or not you’re a fan of big government, if you read this blog then you’re probably at least open to the idea of Big Data. And, when it comes to Big Data, it’s hard to envision any organization with more data at its disposal than the US federal government.
Lamentably, and for a variety of reasons well beyond the scope of any individual post, the US government is (putting it very politely) still muddling through Big Data. In fact, it is doing a mere fraction of what it could with so much potentially valuable data. A recent report, titled “The Big Data Gap” (registration required):
…found that as agencies look to leverage big data, the technology and applications needed to successfully leverage big data are still emerging. Sixty percent of civilian agencies and 42 percent of Department of Defense/intelligence agencies say they are just now learning about big data and how it can work for their agency. Federal IT professionals say improving overall agency efficiency is the top advantage of big data (59 percent) followed by improving speed and accuracy of decisions (51 percent) and the ability to forecast (30 percent).
With so many other pressing priorities, perhaps it’s understandable that the US federal government isn’t exactly leading the way when it comes to IT and cutting-edge data management. (My hunch is that the US isn’t alone here.) Yet, at some point doesn’t embracing Big Data have to become a priority? At what point do the excuses start to evaporate?
Technologist and pundit Tim O’Reilly has spoken and written extensively about the need for government to become a platform. (His eponymous publishing company recently released a large tome, Open Government, expanding on that very notion.)
I can only imagine the innovation that will invariably come from government employees, external developers, and everyday citizens once government embraces Big Data. Minimizing the number of superfluous, redundant, and antiquated data sources will spur a raft of applications and services. Perhaps this will finally allow the public sector to do more with less. Maybe the potential of so much technological advancement is just waiting to be unleashed.
The embrace of Big Data could happen in one of two ways: bottom-up or top-down. I personally think that a senior mandate requiring the use of Big Data is unlikely and not even necessary. Rather, once one employee, one agency, or one department does something innovative and flat-out cool with Big Data, others will want to emulate that success. It’s my hope that then the dominoes will start to fall. Ultimately, employees, agencies, and departments not using Big Data will be the exception, not the rule.
What say you?
Facebook has fallen on hard financial times lately. That 100-billion-dollar valuation now seems wildly optimistic, to put it mildly. To be sure, Facebook is hardly the only company to slide after its IPO–and it won’t be the last. But is there something more dangerous going on at the world’s largest social network–something that may threaten its very existence?
The Data Problem?
As James Ball writes in The Guardian about the larger problem facing Mark Zuckerberg’s company:
While Google’s revenues are growing – not a bad feat in the current economy – the huge amounts of extra data it’s accumulating aren’t improving its actual ads: the money the company gets for each advert is actually falling. If more data doesn’t make these companies more cash, the rationale falls away. Google’s adverts make it a huge amount of money, and will continue to do so, but there’s no evidence that more user data is making those adverts more effective at generating profit than they already were.
A large chunk of Facebook’s business model is based on the “more data is better than less data” assumption. In theory, this will bring in advertisers and, ultimately, profits. (Parenthetically, this is precisely why Facebook scares the hell out of Google.)
But The Guardian piece raises the following important and fundamental questions about the value of data:
- Does data eventually reach a point of diminishing returns?
- Is more data always better than less data?
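On the first question, classical statistics offers a hint of an answer. A toy model (illustrative only, and nothing to do with Facebook’s actual economics): the standard error of a sample mean falls like sigma over the square root of n, so each additional chunk of data buys a smaller improvement than the last.

```python
import math

# Toy numbers, purely illustrative: the standard error of a sample mean
# is sigma / sqrt(n), so each order-of-magnitude increase in data buys
# a smaller absolute improvement than the previous one.
sigma = 5.0
sizes = [100, 1_000, 10_000, 100_000]
errors = [sigma / math.sqrt(n) for n in sizes]

for n, err, prev in zip(sizes[1:], errors[1:], errors[:-1]):
    print(f"n={n:>7,}: error={err:.4f}, marginal improvement={prev - err:.4f}")
```

More data still helps here, just less and less at the margin. Whether ad targeting follows a similar curve is exactly the open question The Guardian raises.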
We’ll probably find out over the next few quarters or years if Facebook is able to monetize what is perhaps the largest trove of data in the history of the world. What’s more, if Facebook can’t do it, will any organization be able to make sense–and, more important, money–from vast amounts of user-generated information?
Your Organization is Not Facebook
Now, don’t for one minute dismiss the need for (and value of) Big Data, sentiment analysis, semantic technologies, and other modern data management techniques. Facebook’s struggles hardly prove that there is no legitimate value to be gleaned from such things. Let’s say, for the sake of argument, that Facebook can’t justify such a lofty valuation (now or in the future). This in no way means that its data is worthless. It just might not be worth as much as some think. In other words, the argument here centers around how much that data is worth–not whether the data is worth anything.
It’s my firm belief that the vast majority of organizations need to manage their existing data better. I can think of few that wouldn’t benefit from increasing the types and amount of data they manage. If there is such a thing as diminishing returns to the value of data, Facebook is much, much closer to realizing it than your organization is. Even if Facebook’s stock plummets to zero (and Larry Page bought drinks for everyone in Silicon Valley), it behooves organizations to embrace the “more is better” data theory. Structured, unstructured, and semi-structured data are extremely valuable assets, not liabilities.
Much like any company, the big question for Facebook is not what kind of data it has. Rather, it’s “What can it do with that information?”
What say you?