Open Framework, Information Management Strategy & Collaborative Governance | Data & Social Methodology - MIKE2.0 Methodology

Posts Tagged ‘Cloud computing’

by: Gil Allouche
24  Jun  2014

Tips for Creating a Big Data Infrastructure

Big data is making an impact in nearly every industry across the globe. It’s fundamentally changing the way companies do business and how consumers find and buy products and services.

With the large amount of change that is occurring so rapidly in the big data industry, companies are feeling the pressure to act quickly to stay ahead of the competition. The sooner companies can begin implementing big data, the better, but only if the company is ready. Big data should propel companies forward, not hold them back. A focus for companies as they begin exploring big data implementation is establishing a successful infrastructure. After all, the infrastructure is where everything begins with big data. If it’s implemented correctly, a premier infrastructure will ensure that big data serves its primary purpose — increasing business success and consumer satisfaction.

Here are four things companies can do to establish a successful infrastructure.

1. Establish a Purpose
Why do you want to use big data, and what are you going to use it for? For a company to establish the right kind of infrastructure, it needs to know how much and what kind of data it will be handling, along with how that data is going to help the company succeed. From a workforce standpoint, companies need to understand the staffing needs that come with any type of big data infrastructure and make sure they have the manpower to meet those needs. Last, the business needs to understand the financial implications of establishing a successful infrastructure. A significant investment of money will be required if it’s going to succeed. That investment will pay great dividends, whereas companies that take shortcuts to reduce expenditures will pay significantly more later, hindering their ability to innovate now and in the future.

2. Secure the Data
Securing data is the most important thing a company can do. An enormous advantage for companies with their own infrastructure is the inherent data security that comes with it. Unlike big data in the cloud, companies control all aspects of data. However, companies still have work to do, and if the company lacks expertise in this area, a big data platform in the cloud can actually be the more secure route. When evaluating a cloud computing provider, it’s important to establish and understand who can access the data, when they can access it and what happens with the data when it’s accessed. Appropriate and simple measures will ensure your data remains safe.
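To make the “who, when, and what happens” questions concrete, here is a minimal, hypothetical sketch in Python of the kind of access audit a company might keep; the roles and allow-list are invented for illustration:

```python
from datetime import datetime, timezone

ALLOWED_ROLES = {"analyst", "dba"}  # hypothetical roles permitted to read the data

def audit_access(user, role, action, log):
    """Record who accessed the data, when, and what happened,
    denying any role outside the allow-list."""
    granted = role in ALLOWED_ROLES
    log.append({
        "user": user,
        "role": role,
        "action": action,
        "time": datetime.now(timezone.utc).isoformat(),
        "granted": granted,
    })
    return granted

access_log = []
print(audit_access("kim", "analyst", "read", access_log))  # True
print(audit_access("bob", "intern", "read", access_log))   # False
```

Simple measures like an allow-list plus an audit trail are the kind of appropriate, simple controls described above.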

3. Make it Scalable
Your company is going to evolve, so make sure your infrastructure can handle change. The recently published eBook “Running a Big Data Infrastructure: Five Areas That Need Your Attention” calls this ability to change “elasticity.” Your infrastructure has to be flexible enough to move through the ebbs and flows of your business — the good times and the bad. This type of flexibility doesn’t necessarily require more and more servers to handle the excess information. It requires “elasticity” in the existing technology, allowing it to effectively handle more or less data than originally intended.
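As a rough illustration (my own sketch, not from the eBook), an elastic policy grows and shrinks capacity with demand instead of provisioning for the peak:

```python
def target_nodes(current_nodes, avg_utilization, low=0.30, high=0.75):
    """Toy autoscaling policy: add roughly 50% capacity when utilization
    runs hot, shed one node when demand ebbs, and never drop below one."""
    if avg_utilization > high:
        return current_nodes + max(1, current_nodes // 2)
    if avg_utilization < low and current_nodes > 1:
        return current_nodes - 1
    return current_nodes

print(target_nodes(4, 0.90))  # busy period: grows from 4 to 6 nodes
print(target_nodes(6, 0.20))  # quiet period: shrinks from 6 to 5 nodes
```

The thresholds are invented; the point is that the same cluster handles more or less data than originally intended without a permanent fleet of extra servers.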

4. Monitor the data
The eBook also points out the importance of constant monitoring. If your company is going to install a big data infrastructure, big data will presumably play a big part in the future of your business. Because of that, it’s important that your infrastructure is up to speed all the time. Delays of any sort in the infrastructure can prove extremely costly to your business. As with any new technology, there are going to be glitches and bugs in the system. Without consistent, constant monitoring, those could have an enormous impact on your company. With monitoring, however, they become small inconveniences instead of big problems.
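In practice, “constant monitoring” can start as simply as timing a periodic probe of the infrastructure and flagging slow responses; a hypothetical sketch (the threshold is an invented example):

```python
import time

def check_latency(probe, threshold_ms=500):
    """Time one probe of the infrastructure (e.g. a test query)
    and flag it as unhealthy if the response exceeds the threshold."""
    start = time.monotonic()
    probe()
    elapsed_ms = (time.monotonic() - start) * 1000
    return {"elapsed_ms": elapsed_ms, "healthy": elapsed_ms <= threshold_ms}

result = check_latency(lambda: None)  # a no-op probe responds instantly
print(result["healthy"])  # True
```

Run on a schedule, a check like this surfaces a slow response as an alert long before it becomes a costly delay.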

Implementing big data infrastructure can be one of the best moves for your business if it’s done correctly. It allows companies to access and analyze troves of information that can improve every aspect of a company. Companies will succeed by understanding why they want to implement big data, how it’s going to be used and then focus on making it secure, flexible and monitored.

Category: Enterprise Data Management

by: Ocdqblog
22  Apr  2014

Outsourcing our Memory to the Cloud

On the recent Stuff to Blow Your Mind podcast episode “Outsourcing Memory,” hosts Julie Douglas and Robert Lamb discussed how, from remembering phone numbers to relying on spellcheckers, we’re allocating our cognitive processes to the cloud.

“Have you ever tried to recall an actual phone number stored in your cellphone, say of a close friend or relative, and been unable to do so?” Douglas asked. She remarked how that question would have been ridiculous ten years ago, but nowadays most of us would have to admit that the answer is yes. Remembering phone numbers is just one example of how we are outsourcing our memory. Another is spelling. “Sometimes I find myself intentionally misspelling a word to make sure the application I am using is running a spellchecker,” Lamb remarked. Once confirmed, he writes without worrying about misspellings since the spellchecker will catch them. I have to admit that I do the same thing. In fact, while writing this paragraph I misspelled several words without worry since they were automatically caught by those all-too-familiar red-dotted underlines. (Don’t forget, however, that spellcheckers don’t check for contextual accuracy.)

Transactive Memory and Collaborative Remembering

Douglas referenced the psychological concept of transactive memory, where groups collectively store and retrieve knowledge. This provides members with more and better knowledge than any individual could build on their own. Lamb referenced cognitive experimental research on collaborative remembering. This allows a group to recall information that its individual members had forgotten.

The memory management model of what we now call the cloud is transactive memory and collaborative remembering on a massive scale. It has pervaded most aspects of our personal and professional lives. Douglas and Lamb contemplated both its positive and negative aspects. Many of the latter resonated with points I made in my previous post about Automation and the Danger of Lost Knowledge.

Free Your Mind

In a sense, outsourcing our memory to the cloud frees up our minds. It is reminiscent of Albert Einstein remarking that he didn’t need to remember basic mathematical equations since he could just look them up in a book when he needed them. Nowadays he would just look them up on Google or Wikipedia (or MIKE2.0 if, for example, he needed a formula for calculating the economic value of information). Not bothering to remember basic mathematical equations freed up Einstein’s mind for his thought experiments, allowing him to contemplate groundbreaking ideas like the theory of relativity.

Forgetting how to Remember

I can’t help but wonder what our memory will be like ten years from now after we have outsourced even more of it to the cloud. Today, we don’t have to remember phone numbers or how to spell. Ten years from now, we might not have to remember names or how to count.

Wearable technology, like Google Glass or Narrative Clip, will allow us to have an artificial photographic memory. Lifelogging will allow us to record our own digital autobiography. “We have all forgot more than we remember,” Thomas Fuller wrote in the 18th century. If before the end of the 21st century we don’t have to remember anything, perhaps we will start forgetting how to remember.

I guess we will just have to hope that a few trustworthy people remember how to keep the cloud working.

Category: Information Development

by: Ocdqblog
30  Dec  2013

Scanning the 2014 Technology Horizon

In 1928, the physicist Paul Dirac, while attempting to describe the electron in quantum mechanical terms, posited the theoretical existence of the positron, a particle with all the electron’s properties but of opposite charge.  In 1932, the experiments of physicist Carl David Anderson confirmed the positron’s existence, a discovery for which he was awarded the 1936 Nobel Prize in Physics.

“If you had asked Dirac or Anderson what the possible applications of their studies were,” Stuart Firestein wrote in his 2012 book Ignorance: How It Drives Science, “they would surely have said their research was aimed simply at understanding the fundamental nature of matter and energy in the universe and that applications were unlikely.”

Nonetheless, 40 years later a practical application of the positron became a part of one of the most important diagnostic and research instruments in modern medicine when, in the late 1970s, biophysicists and engineers developed the first positron emission tomography (PET) scanner.

“Of course, a great deal of additional research went into this as well,” Firestein explained, “but only part of it was directed specifically at making this machine.  Methods of tomography, an imaging technique, some new chemistry to prepare solutions that would produce positrons, and advances in computer technology and programming—all of these led in the most indirect and fundamentally unpredictable ways to the PET scanner at your local hospital.  The point is that this purpose could never have been imagined even by as clever a fellow as Paul Dirac.”

This story came to mind since it’s that time of year when we try to predict what will happen next year.

“We make prediction more difficult because our immediate tendency is to imagine the new thing doing an old job better,” explained Kevin Kelly in his 2010 book What Technology Wants.  Which is why the first cars were called horseless carriages and the first cellphones were called wireless telephones.  But as cars advanced we imagined more than transportation without horses, and as cellphones advanced we imagined more than making phone calls without wires.  The latest generation of cellphones are now called smartphones and cellphone technology has become a part of a mobile computing platform.

IDC predicts 2014 will accelerate the IT transition to the emerging platform (what they call the 3rd Platform) for growth and innovation built on the technology pillars of mobile computing, cloud services, big data analytics, and social networking.  IDC predicts the 3rd Platform will continue to expand beyond smartphones, tablets, and PCs to the Internet of Things.

Among its 2014 predictions, Gartner included the Internet of Everything, explaining how the Internet is expanding beyond PCs and mobile devices into enterprise assets such as field equipment, and consumer items such as cars and televisions.  According to Gartner, the combination of data streams and services created by digitizing everything creates four basic usage models (Manage, Monetize, Operate, Extend) that can be applied to any of the four internets (People, Things, Information, Places).

These and other predictions for the new year point toward a convergence of emerging technologies, their continued disruption of longstanding business models, and the new business opportunities that they will create.  While this is undoubtedly true, it’s also true that, much like the indirect and unpredictable paths that led to the PET scanner, emerging technologies will follow indirect and unpredictable paths to applications as far beyond our current imagination as a practical application of a positron was beyond the imagination of Dirac and Anderson.

“The predictability of most new things is very low,” Kelly cautioned.  “William Sturgeon, the discoverer of electromagnetism, did not predict electric motors.  Philo Farnsworth did not imagine the television culture that would burst forth from his cathode-ray tube.  Advertisers at the beginning of the last century pitched the telephone as if it was simply a more convenient telegraph.”

“Technologies shift as they thrive,” Kelly concluded.  “They are remade as they are used.  They unleash second- and third-order consequences as they disseminate.  And almost always, they bring completely unpredicted effects as they near ubiquity.”

It’s easy to predict that mobile, cloud, social, and big data analytical technologies will near ubiquity in 2014.  However, the effects of their ubiquity may be fundamentally unpredictable.  One unpredicted effect that we all became painfully aware of in 2013 was the surveillance culture that burst forth from our self-surrendered privacy, which now hangs the Data of Damocles wirelessly over our heads.

Of course, not all of the unpredictable effects will be negative.  Much like the positive charge of the positron powering the positive effect that the PET scanner has had on healthcare, we should charge positively into the new year.  Here’s to hoping that 2014 is a happy and healthy new year for us all.

Category: Information Development

by: Phil Simon
05  Nov  2012

Seven Deadly Sins of Information Management, Part 6: Envy

In my last post, I discussed the sin of pride and information management (IM) projects. Today, let’s talk about envy, defined as “a resentful emotion that occurs when a person lacks another’s (perceived) superior quality, achievement or possession and wishes that the other lacked it.”

I’ll start off by saying that, much like lust, envy isn’t inherently bad. Wanting to do as well as another employee, department, division, or organization can spur improvement, innovation, and better business results. Yes, I’m channeling my inner Gordon Gekko: Greed, for lack of a better word, is good.

With respect to IM, I’ve seen envy take place in two fundamental ways: intra-organizational and inter-organizational. Let’s talk about each.

Intra-Organizational Envy

This type of envy takes place when employees at the same company resent the success of their colleagues. Perhaps the marketing folks for product A just can’t do the same things with their information, technology, and systems that their counterparts representing product B can. Maybe division X launched a cloud-based CRM or wiki and this angers the employees in division Y.

At its core, intra-organizational envy stems from the inherently competitive and insecure nature of certain people. These envious folks have an axe to grind and typically have some anger issues going on.  Can someone say schadenfreude?

Inter-Organizational Envy

This type of envy takes place between employees at different companies. Let’s say that the CIO of hospital ABC sees what her counterpart at hospital XYZ has done. The latter has effectively deployed MDM, BI, or cloud-based technologies with apparent success. The ABC CIO wonders why her hospital is so ostensibly behind its competitor and neighbor.

I’ve seen situations like this over my career. In many instances, organization A will prematurely attempt to deploy more mature or Enterprise 2.0 technologies simply because other organizations  have already done so–not because organization A itself is ready. During these types of ill-conceived deployments, massive corners are cut, particularly with respect to data quality and IT and data governance. The CIO of ABC will look at the outcome of XYZ (say, the deployment of a new BI tool) and want the same outcome, even though the two organizations’ challenges are unlikely to be the same in type and magnitude.

Simon Says

Envy is a tough nut to crack in large part because it’s part of our DNA. I certainly cannot dispense pithy advice to counteract thousands of years of human evolution. I will, however, say this: Recognize that envy exists and that it’s impossible to eradicate. Don’t be Pollyanna about it. Try to minimize envy within and across your organization. Deal with outwardly envious people sooner rather than later.


What say you?

Next up: gluttony.

Category: Business Intelligence, Information Development, Information Governance, Master Data Management

by: Phil Simon
20  Mar  2012

On Control, Structure, and Strategy

In “Can You Use Big Data? The Litmus Test“, Venkatesh Rao writes about the impact of Big Data on corporate strategy and structure. Rao quotes Alfred Chandler’s famous line, “structure follows strategy.” He goes on to claim that, “when the expressivity of a technology domain lags the creativity of the strategic thinking, strategy gets structurally constrained by technology.”

It’s an interesting article and, while reading it, I couldn’t help but think of some thought-provoking questions around implementing new technologies. That is, today’s post isn’t about Big Data per se. It’s about the different things to consider when deploying any new information management (IM) application.

Two Questions

Let’s first look at the converse of Rao’s claim. Specifically, doesn’t the opposite tend to happen (read: technology is constrained by strategy)? How many organizations do not embrace powerful technologies like Big Data because they don’t fit within their overall strategies?

For instance, Microsoft could have very easily embraced cloud computing much earlier than it did. Why did it drag its feet and allow other companies to beat it to the punch? Did it not have the financial means? Of course not. I would argue that this was all about strategy. Microsoft had for years monopolized the desktop market with on-premise applications like Windows and Office.

To that end, cloud computing represented a threat to Microsoft’s multi-billion dollar revenue stream. Along with open source software and the rise of mobility, in 2007 one could start to imagine a world in which Microsoft would be less relevant than it was in 2005. The move towards cloud computing would have happened with or without Microsoft’s blessing, and no doubt many within the company thought it wise to maximize revenues while it could. (This isn’t inherently good or bad. It just supports the notion that strategy constrains technology as well.)

The Control Factor

Next up, what about control? What role does control play in structure and strategy? How many organizations and their employees have historically resisted implementing new technologies because key players simply refused to relinquish control over key data, processes, and outcomes? In my experience, quite a few.

I think here about my days working on large ERP projects. In the mid-2000s, employee and vendor self-service became more feasible but many organizations hemmed and hawed. They chose not to  deploy these time-saving and ultimately more efficient applications because internal resistance proved far too great to overcome. In the end, quite a few directors and middle managers did not want to cede control of “their” business processes and ownership of “their” data because it would make them less, well, essential.

Simon Says: It’s Never Just about One Thing

Strategy, culture, structure, and myriad other factors all play significant roles in any organization’s decision to deploy–or not to deploy–any given technology. In an ideal organization, all of these essential components support each other. That is, a solid strategy is buttressed by a healthy and change-tolerant structure and organizational culture. One without the other two is unlikely to result in the effective implementation of any technology, whether it’s mature or cutting-edge.


What say you?

Category: Information Strategy, Open Source

by: Phil Simon
09  Jan  2012

2012 Trends: Data Contests

A few years ago and while its stock was still sky-high, Netflix ran an innovative contest with the intent of improving its movie recommendation algorithm. Ultimately, a small team figured out a way for the company to significantly increase the accuracy with which it gently suggests movies to its customers.

Think Wikinomics.

It turns out that these types of data analysis and improvement contests are starting to catch on. Indeed, with the rise of Big Data, cloud computing, open source software, and collaborative commerce, it has never been easier to outsource these “data science projects.”

From a recent BusinessWeek article:

In April 2010, Anthony Goldbloom, an Australian economist, [f]ounded a company called Kaggle to help businesses of any size run Netflix-style competitions. The customer supplies a data set, tells Kaggle the question it wants answered, and decides how much prize money it’s willing to put up. Kaggle shapes these inputs into a contest for the data-crunching hordes. To date, about 25,000 people—including thousands of PhDs—have flocked to Kaggle to compete in dozens of contests backed by Ford (F), Deloitte, Microsoft (MSFT), and other companies. The interest convinced investors, including PayPal co-founder Max Levchin, Google Chief Economist Hal Varian, and Web 2.0 kingpin Yuri Milner, to put $11 million into the company in November.

The potential for these types of projects is hard to overstate. Ditto the benefits.

Think about it. Organizations can publish even extremely large data sets online for the world at large. Interested groups, companies, and even individuals can use powerful tools such as Hadoop to analyze the information and provide recommendations. In the process, these insights can lead to developing new products and services and dramatic enhancements in existing business processes (see Netflix).
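To illustrate the flavor of that analysis, the MapReduce pattern behind tools like Hadoop can be sketched in plain Python; the records and field names below are invented, standing in for a retailer’s published churn data:

```python
from collections import defaultdict

def map_phase(records):
    """Map step: emit a (region, 1) pair for each churned customer."""
    for rec in records:
        if rec["churned"]:
            yield rec["region"], 1

def reduce_phase(pairs):
    """Reduce step: sum the emitted counts per key."""
    totals = defaultdict(int)
    for key, count in pairs:
        totals[key] += count
    return dict(totals)

# Invented sample of a published data set:
records = [
    {"region": "west", "churned": True},
    {"region": "east", "churned": False},
    {"region": "west", "churned": True},
]
print(reduce_phase(map_phase(records)))  # {'west': 2}
```

Hadoop runs these same two steps in parallel across a cluster, which is what makes extremely large data sets tractable for outside analysts.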


Of course, these organizations will have to offer some type of prize or incentive. Building a better mousetrap may be exciting, but don’t expect too many people to volunteer their time without the expectation of significant reward. Remember that, of the millions of people who visit Wikipedia every day, only a very small percentage of them actually does any editing. If Wikipedia (a non-profit) offered actual remuneration, that number would be significantly higher (although the quality of its edits would probably suffer).

Consider the following examples:

  • A pharmaceutical company has a raft of data on a new and potentially promising drug.
  • A manufacturing company has years of historical data on its defects.
  • A retailer is trying to understand its customer churn but can’t seem to get its arms around its data.

I could go on, but you get my drift.

Simon Says

While there will always be the need for proprietary data and attendant analysis, we may be entering an era of data democratization. Open Data is here to stay and I can certainly see the growth of marketplaces and companies like Kaggle that match data analysis firms with companies in need of that very type of expertise.

Of course, this need has always existed, but the unprecedented power of contemporary tools, technologies, methodologies, and data means that outsourced analysis and contests have never been easier. No longer do you have to look down the hall, call IT, or call in a Big Four consulting firm to understand your data–and learn from it.


What say you?


Category: Enterprise Data Management

by: Phil Simon
28  Oct  2011

The Data Frame

I was watching Bloomberg West the other day when John Battelle appeared on my screen. For those of you who don’t know, Battelle wears a number of impressive hats. When not writing, he chairs Federated Media Publishing. He is also a visiting professor of journalism at the University of California, Berkeley. In short, he knows what he’s talking about.

Battelle was discussing the evolution of all things technology and, in particular, the movement away from the PC to mobile devices. He also mentioned something called The Data Frame, the theme from the forthcoming Web 2.0 summit. Battelle explains what he means:

For 2011, our theme is “The Data Frame” – focusing on the impact of data in today’s networked economy. We live in a world clothed in data, and as we interact with it, we create more – data is not only the web’s core resource, it is at once both renewable and boundless. [Emphasis mine.]

Consumers now create and consume extraordinary amounts of data. Hundreds of millions of mobile phones weave infinite tapestries of data, in real time. Each purchase, search, status update, and check-in layers our world with more of it. How our industries respond to this opportunity will define not only success and failure in the networked economy, but also the future texture of our culture. And as we’re already seeing, these interactions raise complicated questions of consumer privacy, corporate trust, and our governments’ approach to balancing the two.

Is Battelle ultimately right? I tend to think so. But, beyond that, as I listened to Battelle and researched his notion of The Data Frame, one thing struck me:

Most organizations are under- or unprepared for it.

Now, there are two parts to this fundamental lack of preparation, both of which I’ll discuss in this post.


Particularly in large, conservative organizations, far too many people don’t think of things in terms of data and information–and this is the most significant problem. Decision makers fail to realize that everything is data. Decisions are often made by gut feel, despite the fact that decision analysis tools have existed for years to assist people in making superior choices. How many people do you know with access to sophisticated BI applications who continue to rely upon Microsoft Excel?

Lamentably, many organizations have yet to get their arms around the web and its implications. Legacy systems still abound and rare is the organization that has completely embraced Enterprise 2.0 and its components, including–and arguably most important–cloud computing.

The bottom line is that not enough people think in terms of data, a limitation that invariably influences the choice of which technologies are–and are not–deployed within organizations. While many in old-school enterprises debate what to do and how to do it, the chasm between them and companies that do get it (read: Amazon, Apple, Facebook, and Google) widens. The latter companies are so valuable and admired today because they are building and deploying sticky, integrated, and data-gathering planks and platforms. They’re not just trying to “get” the web. They did that a long time ago.

The Explosion of Mobility

The web has been here in full force for nearly two decades, but enterprise mobility is a much more recent arrival. While a few companies have experimented with internal Apple-like App Stores, these are the exceptions that prove the rule. For this reason, consumers and consumer-based companies–not enterprise IT departments–are leading the current technology revolution. This is in stark contrast to what I call Enterprise 1.0 in The Next Wave of Technologies. In the 1990s, people walked into the office to use the most powerful technology. These days, however, the opposite is often true: many people have more powerful devices on their hips than on their desktops.

Simon Says

Once again, it’s all about the people. It is incumbent upon the powers-that-be to fundamentally alter their mindsets. Data need not be an “icky” problem to manage. On the contrary, it represents myriad opportunities to recognize and harvest. Once change- and risk-averse executives realize this, they can implement the apps, data models, and the like necessary to survive in our dynamic world.


What say you?


Category: Business Intelligence, Information Development

by: Phil Simon
13  Oct  2011

On Clouds, Data, and Technology Maturity

I recently had the pleasure of speaking on a panel about information management at DataFlux IDEAS with some of the most prominent experts in the field. Among the participants was my friend David Loshin, one of the most astute observers out there.

David made the point that many of the information management technologies available to organizations today have been around for quite some time. Only now are many organizations getting around to actually using them–and reaping their benefits. Case in point: cloud computing. Now that it’s become more acceptable, commonplace, and (arguably most important) cheaper, clouds are taking off.

Loshin correctly pointed out that many “new” solutions are, in fact, merely rebranded versions of products introduced years ago. Marketers typically don’t sell technology products and services with claims that they are the same as they were five or ten years ago–hence the need to try and spruce up mature offerings with fancier names. (Of course, some companies’ products truly are new, such as Microsoft’s Azure. Steve Ballmer’s company wasn’t exactly quick to embrace the cloud.)

The implications of cloud computing for data management are profound. For instance, organizations will be able to get out of the hosting business and run most of their operations out of third-party data centers. In Total Recall, a book that I’m currently reading, co-authors Gordon Bell and Jim Gemmell predict that, by 2020, very few organizations will actually handle their own data and applications.

Crossing the People Chasm

So, the technology has been ready for years, yet relatively few large organizations have made the jump. The obvious question is why? In my forthcoming book, The Age of the Platform, I quote cloud computing expert Amy Wohl:

It’s the change in attitude that goes with the change in architecture that requires a new mind-set.  That’s where established firms need to take time to see whether they are simply going to consider clouds as an additional offering or whether they are going to substantially remake their businesses to more closely model what the web-centric companies are doing.

In other words, it’s all about the people making the decisions about the technology. It’s only indirectly about the technology itself.

Simon Says

As companies like Amazon, Apple, Facebook, and Google (aka The Gang of Four) lead the way, doing simply astonishing things with their data, other companies will be forced to look at how they do it. Why can Amazon effectively cross-sell to its customers while so many organizations struggle to compile a master list of their own customers? As the gap between progressive and reactive organizations grows, the latter will be forced to adopt newer technologies–or perish for not doing so.

The days of complacency, inefficiency, and guaranteed results are coming to an end. What are you doing about it?


What say you?


Category: Information Development

by: Phil Simon
08  Aug  2011

In Defense of IT

Business leaders often criticize IT for its inability to get with the times. Do the following questions sound familiar?

  • Why haven’t we embraced the cloud?
  • What’s our open source strategy?
  • What are we doing with mobility?

Well, in this post, it’s time to put these criticisms into context–and partially let IT off of the hook.

Learning from the Music Industry

In a recent piece for The Wall Street Journal, Don Tapscott writes about business models that have yet to adapt to the digital age. The author or co-author of many popular books, including Wikinomics, Tapscott knows what he’s talking about.

In the article, Tapscott covers a number of industries, including music. He writes:

Instead of clinging to late-20th-century distribution technologies, like the digital disk and the downloaded file, the music business should move into the 21st century with a revamped business model that converts music from a product to a service.

All music labels and performers should put their music into a commons in the cloud. Instead of purchasing tunes, listeners would pay a small fee–say $4 per month–for access to all the songs in the world. Recordings would be streamed to them via the Internet to any appliance of their choosing–such as their laptop, mobile device, car, or home stereo. Artists would be compensated based on how many times their music had been streamed.
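One simple reading of the per-stream compensation Tapscott describes is a pro-rata split of the pooled subscription fees; the numbers here are invented and the real accounting would surely be more involved:

```python
def payouts(monthly_fee, subscribers, stream_counts):
    """Split the pooled subscription revenue among artists in
    proportion to how many times each was streamed."""
    pool = monthly_fee * subscribers
    total_streams = sum(stream_counts.values())
    return {artist: pool * n / total_streams
            for artist, n in stream_counts.items()}

# 1,000 subscribers at $4/month, two hypothetical artists:
print(payouts(4, 1000, {"artist_a": 75, "artist_b": 25}))
# {'artist_a': 3000.0, 'artist_b': 1000.0}
```

The mechanics are trivial; the hard part, as Tapscott suggests, is the business-model shift from selling products to metering a service.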

While the particulars above apply to the music industry, that’s hardly the only business struggling with this brave new world. Many industries have yet to get their arms around entirely new economic realities. For starters, I’d put publishing firmly in that category.

So, let’s say that you ran IT for Sony Music or Random House. That’s right: you’re the CIO. Would it be fair for your CEO to complain that you hadn’t embraced Enterprise 2.0?

Understanding the Brave New World

The last five years have seen dramatic shifts in the world of technology. Many consumers have become de facto producers. Erstwhile products have been turned into services–and some physical products have morphed into digital ones. For this, we can blame or credit the usual suspects:

  • the rise in broadband penetration
  • the decline in the price of storage
  • the explosion of mobility and apps
  • the ubiquity of the Internet
  • the growth of the social web
  • and others

Of course, there are those in more mature industries and organizations that wish that these technology “improvements” would just stop. From an IT perspective, thousands of organizations in the late 1990s and early 2000s spent millions of dollars configuring their ERP and CRM systems to work a certain way. When it comes to today’s web-centric world, few CIOs are ready for the challenges associated with transforming their enterprises, especially when you consider the following:

  • Most CIOs have to accomplish these goals with fewer and fewer financial and human resources.
  • Technology changes faster than ever.
  • Regulatory requirements are anything but laissez faire.

And it is here where IT often gets a bad rap. How can IT (as a department and as individuals) be expected to embrace entirely new ways of doing things when an organization’s business model is antiquated–or is in serious need of repair? For instance, moving to a SaaS-based set of applications can be hard to justify when the organization does business like it’s 1990.

Simon Says

No one is excusing IT departments and CIOs that intentionally drag their feet. When the business makes a clear decision to get with the times and adopt more modern methods, IT has to quickly follow. By the same token, however, IT is by definition a support arm of the organization.

If your business is behind the times, don’t expect IT to be ahead of them.



What say you?

Category: Enterprise2.0, Web2.0
1 Comment »

by: Phil Simon
02  Aug  2011

Data Liberation: The Case For and Against

I’ve been doing a great deal of research on Facebook, one of the main topics of my fourth book. Mark Zuckerberg’s company now sports more than 750 million users worldwide. Facebook’s walled garden intentionally blocks access from certain sites, chief among them Google. That is, you can’t Google Facebook.

It’s also interesting to note that Facebook tries to stop people from exporting contact data from it. Case in point: the company recently nixed yet another data transfer tool. According to this CNET article:

Open-Xchange’s tool for helping people reconstruct their Facebook contact list on Google+ has fallen victim to Facebook’s revocation of its privileges.

Open-Xchange, a maker of open-source e-mail and collaboration software, last week launched a tool that used the company’s Social OX technology to help people assemble a list of their friends. It used connections to a combination of services such as LinkedIn and e-mail accounts to create a single “magic address book.”

The tool didn’t actually copy e-mail addresses from Facebook–only first and last names. It then matched those names to other e-mail records in the user’s accounts. But Facebook disabled the API (application programming interface) key that the software used to read the names, Open-Xchange Chief Executive Rafael Laguna said.
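For what it’s worth, the matching step the article describes–pairing bare first/last names with e-mail records pulled from the user’s other accounts–might look something like this hypothetical Python sketch. The function and data shapes are my own illustration, not Open-Xchange’s actual code:

```python
# Hypothetical sketch: given (first, last) names read from one service,
# find matching contacts (with e-mail addresses) in the user's other
# accounts. Matching is case-insensitive on the full name.
def match_contacts(names, email_records):
    # email_records: dicts like {"first": ..., "last": ..., "email": ...}
    index = {(r["first"].lower(), r["last"].lower()): r["email"]
             for r in email_records}
    return {f"{first} {last}": index.get((first.lower(), last.lower()))
            for first, last in names}

names = [("Jane", "Doe"), ("John", "Roe")]
records = [{"first": "jane", "last": "doe", "email": "jane@example.com"}]
match_contacts(names, records)
# Jane Doe resolves to her address; John Roe has no match and maps to None.
```

Note how little Facebook data this approach actually needs: just names, with the e-mail addresses supplied by services the user already controls–which is what made the API revocation notable.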

It’s hard to fault Facebook here. After all, its data is the source of all its value. Running a site with nearly 1 billion users can’t be cheap.

By way of contrast, a few Google employees launched a Data Liberation Project in 2007, a project that my friend Jim Harris recently mentioned to me. You can read the FAQ for yourself, but suffice it to say that there’s a philosophical chasm between the two companies here. For instance, it’s not hard to export your contacts out of Gmail and import them into a third-party app, database, or separate website.

Thoughts for Enterprises

So, should an organization allow its employees, users, and customers to easily get data out of its systems? I have mixed feelings. Let’s focus for a minute on employees.

On one hand, democratization of data can be overwhelmingly positive. In theory, end users can improve their data, adding fields and records that a centralized IT department would not think of. After all, each line of business (LOB) should know what it needs better than IT does, right? What’s more, LOBs interact with employees, customers, vendors, and other partners far more frequently.

On the other hand, liberation can be a dangerous thing. Once data is exported into standalone databases, spreadsheets, and applications, all hell can break loose. Master records can quickly spiral out of control. Employee and vendor records may soon contain conflicting, inaccurate, or incomplete information. To boot, it can take a great deal of time to reconcile any differences. User A may mark a vendor contact as John Smith while User B marks that same contact as Steven Johnson. This can lead to thorny issues such as:

  • Which one is right?
  • Was the invoice paid once, twice, or not at all?
  • Where do we turn to resolve the conflict?
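The John Smith/Steven Johnson scenario above can at least be detected mechanically. Here is a minimal, hypothetical Python sketch–the function, users, and field names are illustrative, not from any real MDM tool:

```python
# Hypothetical check: compare copies of the same vendor record that were
# exported into different users' spreadsheets or databases, and report
# every field on which the copies disagree.
def find_conflicts(copies):
    # copies: {user: {field: value}} for one vendor's record
    conflicts = {}
    fields = {f for rec in copies.values() for f in rec}
    for field in fields:
        values = {user: rec.get(field) for user, rec in copies.items()}
        if len(set(values.values())) > 1:
            conflicts[field] = values
    return conflicts

find_conflicts({
    "User A": {"contact": "John Smith"},
    "User B": {"contact": "Steven Johnson"},
})
# Flags the "contact" field, because the two users disagree on its value.
```

Detecting the conflict is the easy part; deciding which value is right–and who owns that decision–is exactly the governance problem liberation creates.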

Simon Says

Understand that data liberation has its pros and cons. Consider factors like the maturity of the end users, the number of systems affected by liberation, and the type of data being liberated before you let the genie out of the bottle. Also, don’t make the mistake of letting everyone have access to everything. Make decisions carefully based on business need.

The data liberation bell can’t easily be unrung.



What say you?

Category: Enterprise Data Management
1 Comment »
