Archive for October, 2011
I was watching Bloomberg West the other day when John Battelle appeared on my screen. For those of you who don’t know, Battelle wears a number of impressive hats. When not writing, he chairs Federated Media Publishing. He is also a visiting professor of journalism at the University of California, Berkeley. In short, he knows what he’s talking about.
Battelle was discussing the evolution of all things technology and, in particular, the movement away from the PC to mobile devices. He also mentioned something called The Data Frame, the theme from the forthcoming Web 2.0 summit. Battelle explains what he means:
For 2011, our theme is “The Data Frame” – focusing on the impact of data in today’s networked economy. We live in a world clothed in data, and as we interact with it, we create more – data is not only the web’s core resource, it is at once both renewable and boundless. [Emphasis mine.]
Consumers now create and consume extraordinary amounts of data. Hundreds of millions of mobile phones weave infinite tapestries of data, in real time. Each purchase, search, status update, and check-in layers our world with more of it. How our industries respond to this opportunity will define not only success and failure in the networked economy, but also the future texture of our culture. And as we’re already seeing, these interactions raise complicated questions of consumer privacy, corporate trust, and our governments’ approach to balancing the two.
Is Battelle ultimately right? I tend to think so. But, beyond that, as I listened to Battelle and researched his notion of The Data Frame, one thing struck me:
Most organizations are under- or unprepared for it.
Now, there are two parts to this fundamental lack of preparation, both of which I’ll discuss in this post.
Particularly in large, conservative organizations, far too many people don’t think of things in terms of data and information–and this is the most significant problem. Decision makers fail to realize that everything is data. Decisions are often made by gut feel, despite the fact that decision analysis tools have existed for years to assist people in making superior choices. How many people do you know with access to sophisticated BI applications who continue to rely upon Microsoft Excel?
Lamentably, many organizations have yet to get their arms around the web and its implications. Legacy systems still abound and rare is the organization that has completely embraced Enterprise 2.0 and its components, including–and arguably most important–cloud computing.
The bottom line is that not enough people think in terms of data, a limitation that invariably influences the choice of which technologies are–and are not–deployed within organizations. While many in old-school enterprises debate what to do and how to do it, the chasm between them and companies that do get it (read: Amazon, Apple, Facebook, and Google) widens. The latter companies are so valuable and admired today because they are building and deploying sticky, integrated, and data-gathering planks and platforms. They’re not just trying to “get” the web. They did that a long time ago.
The Explosion of Mobility
The web has been here in full force for nearly two decades, but enterprise mobility is a much more recent development. While a few companies have experimented with internal Apple-like app stores, these are the exceptions that prove the rule. For this reason, consumers and consumer-based companies–not enterprise IT departments–are leading the current technology revolution. This is in stark contrast to what I call Enterprise 1.0 in The Next Wave of Technologies. In the 1990s, people walked into the office to use the most powerful technology. These days, however, the opposite is often true: many people have more powerful devices on their hips than on their desktops.
Once again, it’s all about the people. It is incumbent upon the powers-that-be to fundamentally alter their mindsets. Data need not be an “icky” problem to manage. On the contrary, it represents myriad opportunities to recognize and harvest. Once change- and risk-averse executives realize this, they can implement the apps, data models, and the like necessary to survive in our dynamic world.
What say you?
It’s no surprise that SaaS and cloud computing solutions are increasing in popularity. From reduced costs in hosting and infrastructure to increased mobility and accessibility, the benefits are profound. Need proof? A recent study by IDC estimates that spending on public IT cloud services will expand at a compound annual growth rate of 27.6%, from $21.5 billion in 2010 to $72.9 billion in 2015. From Google to Salesforce to Amazon to Microsoft… the cloud is getting fuller and heavier.
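Those IDC figures are easy to sanity-check: the compound annual growth rate is just the total growth ratio raised to the power of one over the number of years. A quick sketch using the numbers from the study (the calculation yields roughly 27.7%, matching IDC’s stated 27.6% after rounding):

```python
# Sanity-check IDC's projection: $21.5B (2010) growing to $72.9B (2015).
start, end, years = 21.5, 72.9, 5
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # Implied CAGR: 27.7%
```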
While the popularity and acceptance of cloud computing has certainly taken off, new questions are being asked regarding the security of third-party data hosting. Has the centralization of IT into a few cloud computing platforms made it easier for the “bad guys” to focus their efforts? With valuable information from all departments ranging from marketing to accounting being openly shared with another organization, are your customer preferences, past orders, mailing lists, HR and financials at an increased risk for cyber threat?
Despite this concern, an alarming study by the Ponemon Institute, which surveyed over 900 IT executives across the world, found that about half of IT organizations worldwide said that no one in their organization evaluates cloud computing providers for security. Worse yet, about half said they were fairly sure that no one in their organizations knew about every cloud computing service on which end users in their company were storing data.
You don’t have to be a weather forecaster to predict this impending thunderstorm.
Can you really rely on the cloud to keep your enterprise information safe? What steps has your organization taken to evaluate SaaS or cloud service providers for security?
When the telephone was invented the first networks were private, typically between rich individuals and their industrial interests. Pretty quickly, though, people wanted to be able to talk to each other without the need to worry about multiple networks and, of course, we regard this as a basic requirement of any telephony solution today.
Similarly, when mobile networks were popularised, there were many attempts to increase subscriber numbers on individual networks by offering incentives to have family and friends connect through the same provider. Those days are ending as people take advantage of ongoing commoditisation and the real cost of mobile phone calls becomes less significant.
Social networks have offered something truly new: the ability to communicate information seamlessly between groups and individuals. MySpace, Facebook and LinkedIn have captured our imaginations as they’ve become platforms for sharing messages and rich media, and even for sophisticated games, collaboration tools, real-time messaging, and voice or video calls. The products are increasingly becoming part of mainstream communications and look more like networks than online tools. As such, they are facing the same stability, security and connectivity pressures that other commercial communications providers have to deal with.
If history is any guide, it is not the platform that makes them compelling, it is the ability to connect to others. It is the network that attracts as much as any individual features. Therefore it is worth asking: how long will the public accept private or isolated social networks? How long before they demand that they be interconnected?
The alliance between Facebook and Skype is interesting because it completes the picture of the social network as a communications solution. It is also one of the first examples of a social network (Facebook) allowing another network (Skype) to manage “friends” across network boundaries. In the next few years there will be a push to make this the norm rather than the exception. You will be able to “friend” someone’s Facebook profile from your LinkedIn account, contact a Google+ user directly from Skype and so on. This is no different to making a call across phone networks. If Facebook wants its use of a calling platform to be anything more than a toy, then this is inevitable, and any networks resisting will be abandoned by their members.
Such a move, though, is a threat to the value of social networks. Exclusivity and barriers ensure that the content generated remains exclusive to the network. Facebook and LinkedIn run the risk of losing some of their inherent value when user information is open across networks. There is also the potential for smaller players, such as Plaxo and MySpace, to get a second bite at popularity when the size of the network is less of an impediment (think about the role of niche phone companies today).
Reluctance by members to have their content completely tied up on a platform is already something that both Google and Facebook have tried to assuage by providing services that allow users to download all of their data themselves, through Google Takeout and Facebook’s user data download feature (under account settings). Given that this does nothing more than provide a local backup, it will not quell the demands to open the networks up.
Investors increasingly realise that the value of a business is in the information that it holds and the level of exclusivity of that content. Think of the data that Amazon holds about the buying habits of their customers or the music preferences that Apple tracks through iTunes. As telephone networks have become increasingly commoditised, much of their value and premium has diminished and so it could be with the social networks unless they can find a way to allow their networks to open up without losing the premium and exclusive information that they track about you and me.
In an interview in Inc. Magazine, Steve Jobs once said, “You can’t just ask customers what they want and then try to give that to them. By the time you get it built, they’ll want something new.”
Jobs’ wisdom and creative genius are the subject of many books–and doubtless will be of many more. In this post, I’d like to discuss whether his statement above applies to the world of information management. In other words, can organizations respond to line of business (LOB) information management needs in time?
It’s a big question for a blog post, but I’m feeling ambitious today.
In the 1980s, the answer was generally yes. Business needs were fairly static (at least relative to today). Organizations by and large used the Waterfall method of deploying software, meticulously gathering requirements and implementing solutions in a very sequential manner. Protracted and often contentious projects often ensued, but organizations could often wait three years or more for a new application to go live. It was a different and much slower time.
We’re Not in Kansas Anymore
Fast forward to the 2010s and things could not be any more different. Ask most CIOs if three years is an acceptable amount of time to deploy a new system or technology. The answer is almost always no. Yes, major renovations and fundamental changes in architecture (read: embracing the cloud) still take time, but organizations simply can’t wait as long as they did in the 1980s because the world has changed so drastically. Things happen now at light speed.
As an aside, actor George Clooney recently remarked on Charlie Rose that movies reviews via tweets and Facebook updates are now available immediately, not on the following Monday. As a result, the grace period previously afforded bad movies of a weekend or more has been eliminated. If a movie bombs, the world will know by Friday night.
So, what’s an organization to do amidst such rapid technological change? Business needs today may no longer match requirements gathered six months or a year ago, much less longer ago than that.
Enter increasingly popular Agile methods. Rather than waiting for the big bang, features are implemented as they are ready, often while others are clearly not. Robert Stroud recently and succinctly defined the difference between Agile and Waterfall methodologies as follows: “Waterfall is requirements-bound and Agile is time-bound.”
Lest you think that Agile methods are the sole purview of smaller, more nimble companies, larger organizations are getting on the Agile train. No doubt that the web makes this easier, as IT departments no longer have to install partially ready applications on thousands of workstations (a laborious process). Rather, features are instantly deployed via the web and users can access the latest bells and whistles seamlessly through their browsers.
So, let’s return to the initial query in this post: Can organizations respond to line of business (LOB) information management needs in time?
The simple answer is it depends. To be sure, Agile is no panacea. Bad data, misunderstood requirements, and cultural issues can derail even the most promising projects.
Still, the pros of Agile outweigh its cons. Organizations that cling to antiquated deployment methods will clearly not be able to meet LOB information management needs in any reasonable time. The web, open source, and the cloud all give organizations the ability to reduce product implementation times. It’s a new world, and Waterfall is often poorly suited for it.
What say you?
A strong Information Governance program can facilitate improvement in how information is managed across the organization, from employee skill sets to policies, procedures, processes, organizational structures, and technology. While most organizations are aware of the need to govern their enterprise information, we believe that organizations have traditionally not focused sufficiently on this area. Part of the trouble is that there is no set method to calculate the value of their information and communicate this value to management.
The difficulty with an intangible asset, such as information, is that it’s nearly impossible to put a dollar amount on it. For this reason, Information or Data Governance programs aiming to leverage information often struggle to determine the potential return on investment of their efforts. Their value calculations are typically based on either specific process improvements or anticipated organisation growth, and as a result, can be invalid or miscalculated.
In his post, “What is the Economic Value of Information When Building the MDM Business Case?”, Lawrence Dubov suggests a top-down approach to calculating the Economic Value (EV) of information. This approach involves defining a relationship between the market value of the enterprise as a whole and the value of its information (an approach which is also supported by the MIKE2.0 methodology).
The equation looks like this:
EV of Information = P(Information) × Market Capitalization
where P(Information) is the proportion of the enterprise’s market value attributable to its information.
A weakness of this method is that it may provide only a high-level estimate versus the more traditional ROI and NPV methods.
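As a sketch, Dubov’s top-down formula is trivial to compute once you have an estimate of P(Information); arriving at that estimate is the hard part. The numbers below are purely illustrative assumptions, not figures from Dubov’s post:

```python
# Hedged sketch of the top-down Economic Value calculation.
# p_information -- the fraction of enterprise market value attributable to
# its information -- is a hypothetical input; estimating it is the hard part.
def economic_value_of_information(p_information: float, market_cap: float) -> float:
    """EV of Information = P(Information) * Market Capitalization."""
    if not 0.0 <= p_information <= 1.0:
        raise ValueError("p_information must be a fraction between 0 and 1")
    return p_information * market_cap

# Illustrative only: a $10B company whose information is judged to drive
# 20% of its market value.
ev = economic_value_of_information(0.20, 10_000_000_000)
print(f"EV of Information: ${ev:,.0f}")  # EV of Information: $2,000,000,000
```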
Have you tried this approach to calculate the value of enterprise information? In what instance does it make sense to use this method over another? Please share with us as we continue to expand this offering for our community members.
I recently had the pleasure of speaking on a panel about information management at DataFlux IDEAS with some of the most prominent experts in the field. Among the participants was my friend David Loshin, one of the most astute observers out there.
David made the point that many of the information management technologies available to organizations today have been for quite some time. Only now are many getting around to actually using them–and reaping their benefits. Case in point: cloud computing. Now that it’s become more acceptable, commonplace, and (arguably most important) cheaper, clouds are taking off.
Loshin correctly pointed out that many “new” solutions are, in fact, merely rebranded versions or products introduced years ago. Marketers typically don’t sell technology products and services with claims that they are the same as they were five or ten years ago–hence the need to try and spruce up mature offerings with fancier names. (Of course, some company’s products truly are new, such as Microsoft’s Azure. Steve Ballmer’s company wasn’t exactly quick to embrace the cloud.)
The implications of cloud computing for data management are profound. For instance, organizations will be able to get out of the hosting business and run most of their operations in third-party data centers. In Total Recall, a book that I’m currently reading, co-authors Gordon Bell and Jim Gemmell predict that, by 2020, very few organizations will actually handle their own data and applications.
Crossing the People Chasm
So, the technology has been ready for years, yet relatively few large organizations have made the jump. The obvious question is why? In my forthcoming book, The Age of the Platform, I quote cloud computing expert Amy Wohl:
It’s the change in attitude that goes with the change in architecture that requires a new mind-set. That’s where established firms need to take time to see whether they are simply going to consider clouds as an additional offering or whether they are going to substantially remake their businesses to more closely model what the web-centric companies are doing.
In other words, it’s all about the people making the decisions about the technology. It’s only indirectly about the technology itself.
As companies like Amazon, Apple, Facebook, and Google (aka The Gang of Four) lead the way, doing simply astonishing things with their data, other companies will be forced to ask how. Why can Amazon effectively cross-sell to its customers while so many organizations struggle to compile a master list of their own? As the gap between progressive and reactive organizations grows, the latter will be forced to adopt newer technologies–or perish for not doing so.
The days of complacency, inefficiency, and guaranteed results are coming to an end. What are you doing about it?
What say you?
One of the key concepts of the MIKE2.0 Methodology is that of an Organisational Model for Information Development. This is an organisation that provides a dedicated competency for improving how information is accessed, shared, stored and integrated across the environment.
Organisational models need to be adapted as the organisation moves up the five Maturity Levels of the Information Development competencies described below:
Level 1 Data Governance Organisation – Aware
- An Aware Data Governance Organisation knows that it has issues around Data Governance but is doing little to respond to them. Awareness has typically come as the result of major Data Governance-related issues that have occurred. An organisation may also be at the Aware state if it is going through the process of moving to a state where it can effectively address issues, but is only in the early stages of the programme.
Level 2 Data Governance Organisation – Reactive
- A Reactive Data Governance Organisation is able to address some of its issues, but not until some time after they have occurred. The organisation is not able to address root causes or predict when they are likely to occur. “Heroes” are often needed to address complex data quality issues and the impact of fixes done on a system-by-system level are often poorly understood.
Level 3 Data Governance Organisation – Proactive
- A Proactive Data Governance Organisation can stop issues before they occur, as it is empowered to address root-cause problems. At this level, the organisation also conducts ongoing monitoring of data quality so that issues that do occur can be resolved quickly.
Level 4 Data Governance Organisation – Managed
Level 5 Data Governance Organisation – Optimal
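The ongoing data-quality monitoring that distinguishes a Level 3 (Proactive) organisation can be as simple as automated rule checks that flag offending records before they cause downstream damage. A minimal sketch, with hypothetical field names and rules:

```python
# Minimal sketch of ongoing data-quality monitoring: each rule flags the
# records that violate it, so problems surface before downstream impact.
def check_records(records, rules):
    """Return {rule_name: [offending records]} for each rule that fails."""
    failures = {}
    for name, predicate in rules.items():
        bad = [r for r in records if not predicate(r)]
        if bad:
            failures[name] = bad
    return failures

# Hypothetical customer records and quality rules.
records = [
    {"id": 1, "email": "a@example.com", "country": "US"},
    {"id": 2, "email": "", "country": "US"},
]
rules = {
    "email_present": lambda r: bool(r.get("email")),
    "country_present": lambda r: bool(r.get("country")),
}
print(check_records(records, rules))  # flags record 2 for its missing email
```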
The MIKE2.0 Solution for the Centre of Excellence provides an overall approach to improving Data Governance through a Centre of Excellence delivery model for Infrastructure Development and Information Development. We recommend this approach as the most efficient and effective model for building this common set of capabilities across the enterprise environment.
Feel free to check it out when you have a moment and offer any suggestions you may have to improve it.
I recently had the opportunity to visit four small businesses in a consulting capacity. Along with two other experts, I met some pretty dynamic little companies across the United States. These other experts provided consulting around different types of marketing, customer communications, and product positioning. When it was my turn to engage these small businesses’ owners, I talked about website design, technology, and data.
No surprise here. That’s my bread and butter.
Here’s another non-surprise: Many small business owners–and, by extension, employees at small businesses–do not think in terms of information management. That is, things like managing customer data are typically very informal processes done in a mostly disorganized manner.
Of course, there are some pretty big problems with this. Perhaps the biggest is scale. As these small businesses and their client bases begin to grow, their current manual tools prove far less useful than they had been. For instance, while it may be easy to remember which of your 50 prospective customers responded to an offer in your monthly newsletter, it’s much harder to do the same with 500 or 5,000 prospects. Over the course of these consulting sessions, I suggested that these businesses strongly consider adopting–and utilizing–customer relationship management (CRM) applications.
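The scale argument is easy to see: once customer data lives in a structured store, finding the prospects who responded is the same query whether there are 50 records or 5,000. A toy sketch with hypothetical fields:

```python
# With customer data in a structured store, querying responses scales
# effortlessly; the field names here are hypothetical.
prospects = [
    {"name": "Acme Co", "responded_to_newsletter": True},
    {"name": "Widgets Inc", "responded_to_newsletter": False},
    {"name": "Beta LLC", "responded_to_newsletter": True},
]

responders = [p["name"] for p in prospects if p["responded_to_newsletter"]]
print(responders)  # ['Acme Co', 'Beta LLC']
```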
Parallels for Large Enterprises
Remember that small businesses are the antithesis of large enterprises: the former don’t have nearly as many departments, resources, employees, and the like. As a result, they may be loath to institutionalize CRM or another data-driven application or process because of the perceived IT resources required.
This is a misperception because, as Brenda Somich points out, hosted CRM is growing. The explosion of the cloud and legitimate alternatives to traditionally on-premise applications mean that small businesses can do a great deal more with fewer resources. They need not break the bank to run a true CRM application–and reap the rewards of doing so.
But there’s a larger point in this post. You may think that small businesses and many proper enterprises have little in common.
And you would often be wrong.
Lamentably, many big companies either intentionally ignore or never get around to developing a future state vision for information management. As for those that have a strategy in place, their daily actions often belie this vision.
As your organization grows, stop and ask yourself:
- Are projects, applications, data, and information being managed in completely random ways?
- Are individual actions coordinated?
- Is everyone on the same page? If not, why not?
- And how can the different parts of the organization work in concert to maximize compliance, governance, efficiency, and performance?
In other words, ask yourself if it’s time to implement an information management strategy.
With respect to information management, the goal of any organization of any size is not merely to develop a strategy. Rather, it should be to realize the benefits of that strategy. Alternatively stated, a strategy in and of itself means nothing. Enforcement and diagnosis of that strategy are critical if it is going to actually work.
What say you?
A Structural Overview of MIKE2.0
If you’re not already familiar, here is an intro to the structure of the MIKE2.0 methodology and associated content:
- A New Model for the Enterprise provides an introductory rationale for MIKE2.0
- What is MIKE2.0? is a good basic intro to the methodology with some of the major diagrams and schematics
- Introduction to MIKE2.0 is a category of other introductory articles
- MIKE2.0 How To provides a listing of basic articles on how to work with and understand the MIKE2.0 system and methodology.
- Alternative Release Methodologies describes current thinking about how the basic structure of MIKE2.0 can itself be modified and evolve. The site presently follows a hierarchical model with governance for major changes, though branching and other models could be contemplated.
We hope you find this of benefit and welcome any suggestions you may have to improve it.
This Week’s Blogs for Thought:
Experimentation, Open Source and Big Data
An interesting article in BusinessWeek on Big Data recently caught my eye. The article mentions different applications that allow organizations to make sense of the vast–and exponentially increasing–amount of unstructured data out there. From the piece:
“When the amount of data in the world increases at an exponential rate, analyzing that data and producing intelligence from it becomes very important,” says Anand Rajaraman, senior vice-president of global e-commerce at Wal-Mart and head of @WalmartLabs, the retailer’s division charged with improving its use of the Web.
Today, more than ever, intelligent businesses are trying to make sense of millions of tweets, blog posts, comments, reviews, and other forms of unstructured data. The obvious question becomes, “How?”
Paying for Value Rather Than Activity
Since the 1980s the costs associated with functions that are shared have been increasingly allocated to business units in such a way as to drive accountability.
For information technology this was relatively easy in the late 1980s as the majority of costs were associated with the expense of infrastructure or processing. Typically the cost of the mainframe was allocated based on usage. Through the 1990s, costs moved increasingly to a project focus with a model that encouraged good project governance and the allocation of infrastructure based on functions delivered.
Communication, Not Tech, is #1 Job for CIOs
Whenever enterprises are asked about the greatest challenges facing IT, they invariably reply that it’s ensuring technology helps drive and change the business, not just polish existing processes to greater efficiency.