Posts Tagged ‘cloud’
The debate over Net Neutrality is far from over. While the recent ruling by the FCC to classify broadband internet as a public utility may have changed the argument, debate will undoubtedly continue. The effects the decision has on the web will likely not be felt, let alone understood, for many years to come, but that hasn’t stopped speculation over what a neutral internet will actually look like and how companies and internet service providers (ISPs) will be impacted. At the same time, the future of cloud computing has become a hot topic as experts debate whether Net Neutrality will be a boost to cloud providers or whether the overall effect will be negative. Looking at the current evidence and what many providers, companies, and experts are saying, the only thing that’s clear is that few people can agree on what Net Neutrality will mean for the cloud and all the advantages of cloud computing.
The basic idea of Net Neutrality is, in the simplest of terms, to treat all internet traffic the same. Whether from a small niche social site or a major online retail hub, content would be delivered equally. This sounds perfectly reasonable on the surface, but critics of the Net Neutrality concept say all websites simply aren’t equal. Sites like Netflix and YouTube (mainly video streaming sites) eat up large amounts of bandwidth when compared to the rest of the internet, and as streaming sites grow in popularity, they keep eating up more and more web resources. The theory goes that ISPs would provide internet “fast lanes” to those sites willing to pay the fee, giving them more bandwidth in comparison to other sites, which would be stuck in “slow lanes.” It’s this idea that proponents of Net Neutrality want to guard against, and it’s one of the biggest points of contention in the debate.
Obviously, this is a simplified view of Net Neutrality, but it’s a good background when looking at the effect the new ruling could have on cloud computing. First, let’s take a look at how cloud providers may be affected without a neutral internet. Supporters of Net Neutrality say a “fast lane” solution would represent an artificial competitive advantage for those sites with the resources to pay for it. That could mean a lack of innovation on the part of cloud vendors as they spend added funds to get their data moved more quickly while getting a leg up on their competition. A non-neutral internet may also slow cloud adoption among smaller businesses. If a cloud software provider has to pay more for fast lanes, those costs can easily be passed on to the consumer, which would raise the barrier to cloud use. The result may be declining cloud adoption rates or, at the least, degraded performance of cloud-based software.
On the other side of the coin, critics of Net Neutrality say the effect of the policy will end up damaging cloud computing providers. They’re quick to point out that innovation on the internet has been rampant without new government regulations, and that ISPs could easily develop other innovative solutions besides the “fast lane” approach Net Neutrality supporters are so afraid of. Government rules can also be complicated and, in the case of highly technical fields, would need to be constantly updated as new technology is developed. This may give larger companies and cloud providers an advantage over their competition since they would have the resources to devote to lobbyists and bigger legal budgets to dedicate to understanding new rules. There’s also the concern over getting the government involved in the control of pricing and profits in the first place. Needless to say, many aren’t comfortable with giving that level of control to a large bureaucracy and would rather let market freedom take hold.
Some may say that with the new FCC ruling, these arguments don’t apply anymore, but changes and legal challenges will likely keep this debate lively for the foreseeable future. Will Net Neutrality lead to government meddling in cloud provider pricing and contracts? Will a lack of Net Neutrality slow down cloud adoption and give too much power to ISPs? Unfortunately, there’s no way of knowing the far-reaching consequences of the decision on the cloud computing landscape. It could end up having very little impact in the long run, but for now, it appears Net Neutrality will become a reality. Whether that’s a good or bad thing for the cloud remains to be seen.
I think that Zach Nelson (NetSuite’s CEO) was wrong when he said that “cloud is the last computing architecture”, but I also believe that his quote is a healthy challenge to take computing and business architectures to a new level.
Nelson went on to say “I don’t think there is anything after it (cloud). What can possibly be after being able to access all of your data any time, anywhere and on any device? There is nothing.” His comments are available in full from an interview with the Australian Financial Review.
Aside from cloud, our industry has had a range of architectures over the decades including client/server, service oriented architecture (SOA) and thin client. Arguably, design patterns such as 4GLs (fourth-generation languages) and object-oriented programming are also architectures in their own right.
I think that we can predict attributes of the next architecture by looking at some of the challenges our technologies face today. Some of these problems have defined solutions while others will require inventions that aren’t yet on the table.
Near the top of any list is security. Our Internet-centred technology is increasingly exposed to viruses, hackers and government eavesdroppers.
One reason that so much of our technology is vulnerable is that most operating systems share code libraries between applications. The most vulnerable application can leave the door open for malicious code to compromise everything running on the same machine. This is part of the reason that the Apple ecosystem has been less prone to viruses than the Windows platform. Learning from this, it is likely that the inefficiency of duplicating code will be a small price to pay for siloing applications from each other to reduce the risk of cross-infection.
At the same time, our “public key” encryption is regarded by many as being at risk from increasing computing power for brute-force code cracking, and even from the potential of future quantum computers.
Because there is no mathematical proof that encryption based on the factoring of large numbers won’t be cracked in the future, it can be argued to be unsuitable for a range of purposes such as voting. A future architecture might consider more robust approaches such as the physical sharing of ciphers.
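To illustrate the factoring assumption, here is a toy sketch: the textbook RSA modulus 3233 (= 61 × 53) falls to simple trial division instantly, and the only thing protecting real systems is that their moduli are hundreds of digits longer — a gap that faster computers and quantum algorithms may erode. The numbers below are illustrative only.

```python
# Toy "attack" on an RSA-style modulus by trial division. Real moduli
# are 2048+ bits; security rests solely on the cost of this step.
def factor(n: int) -> tuple[int, int]:
    p = 2
    while p * p <= n:
        if n % p == 0:
            return p, n // p
        p += 1
    raise ValueError("n is prime")

n = 3233                    # textbook RSA modulus: 61 * 53
print(factor(n))            # → (53, 61)
```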
Societies around the world are struggling with defining what law enforcement and security agencies should and shouldn’t have access to. There is even a growing debate about who owns data. As different jurisdictions converge on “right to be forgotten” legislation, and societies agree on what back-door keys to give to various agencies, future architectures will be key to simplifying the management of the agreed approaches.
The answers will be part of future architectures with clearer tracking of metadata (and indeed definitions of what metadata means). At the same time, codification of access to security agencies will hopefully allow users to agree with governments about what can and can’t be intercepted. Don’t expect this to be an easy public debate as it has to navigate the minefield of national boundaries.
Another issue that is topical to application designers is network latency. Despite huge progress in broadband across the globe, network speeds are not increasing at the same rate as other aspects of computing such as storage, memory or processing speeds. What’s more, we are far closer to the fundamental limits of physics (the speed of light) when managing transmission between servers around the world. Even the most efficient link between New York and London would mean a round trip for an instruction and response of about 0.04 seconds at the theoretical maximum of the speed of light (with no latency from routers and a perfectly straight network path). In computing terms, 0.04 seconds is the pace of a snail!
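That back-of-the-envelope figure can be checked in a few lines of Python (the great-circle distance from New York to London is roughly 5,570 km; the exact figure depends on the endpoints chosen):

```python
# Lower bound on New York–London round-trip latency, limited only by
# the speed of light (no routers, perfectly straight path).
SPEED_OF_LIGHT_KM_S = 299_792   # km/s in a vacuum
NY_LONDON_KM = 5_570            # approximate great-circle distance

round_trip_s = 2 * NY_LONDON_KM / SPEED_OF_LIGHT_KM_S
print(f"Theoretical minimum round trip: {round_trip_s:.3f} s")  # → 0.037 s
```

In real fibre the figure is worse still — light travels about a third slower in glass than in a vacuum — which is part of why moving content closer to users matters so much.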
The architectural solution has already started to appear with increasing enthusiasm for caching and on-device processing. Mobile apps are a manifestation of this phenomenon which is also sometimes called edge computing.
The cloud provides the means to efficiently synchronise devices, with the benefits of computing on devices across the network with cheap and powerful processing and masses of data right on-hand. What many people don’t realise is that Internet Service Providers (ISPs) are already doing this by caching popular YouTube and other content which is why it seems like some videos take forever while others play almost instantly.
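The caching idea can be sketched in a few lines; `fetch_from_origin` here is a hypothetical stand-in for a slow request to a distant server:

```python
# Toy edge cache: serve popular content locally, fall back to the origin.
def fetch_from_origin(url: str) -> str:
    # Stand-in for a slow request to a faraway server.
    return f"content of {url}"

cache: dict[str, str] = {}

def get(url: str) -> str:
    if url not in cache:                  # cache miss: pay the latency once
        cache[url] = fetch_from_origin(url)
    return cache[url]                     # cache hit: served from the edge

get("https://example.com/video")   # slow first request fills the cache
get("https://example.com/video")   # served instantly, no trip to the origin
```

This is, in miniature, why a popular video plays instantly from the ISP’s cache while an obscure one has to make the full journey.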
Trying to define the true nature of a person is almost a metaphysical question. Am I an individual, spouse, family member or business? Am I paying a bill on my own behalf or doing the banking for my business partners? Any future architecture will build on today’s approaches and understanding of user identity.
Regardless of whether the answer will be biometrics, social media or two-factor authentication, it is likely that future architectures will make it easier to manage identity. The one thing that we know is that people hate username/password management, and a distributed approach with ongoing verification of identity is more likely to gain acceptance (see Login with social media).
Governments want to own this space but there is no evidence that there is a natural right for any one trusted organisation. Perhaps Estonia’s model of a digital economy and e-residency might provide a clue. Alternatively, identity could become as highly regulated as national passports with laws preventing individuals from holding more than one credential without explicit permission.
Internet of Things
The Internet of Things brings a new set of challenges with an explosion of the number of connected devices around the world. For anyone who is passionate about technology, it has been frustrating that useful interfaces have been so slow to emerge. We were capable of connecting baby monitors, light switches and home security for years before they were readily available and even today they are still clumsy.
Arguably, the greatest factor in the lag between the technological possibility and market availability has been the challenge of interoperability and the assumption that we need standards. There is a growing belief that market forces are more effective than standards (see The evolution of information standards). Regardless, the evolution of new architectures is essential to enabling the Internet of Things marathon.
Complexity and Robustness
Our technology has become increasingly complex. Over many years we have built layers upon layers of legacy hidden under facades of modern interfaces. Not only is this making IT orders of magnitude more expensive, it is also making it far harder to create bulletproof solutions.
Applications that lack robustness are frustrating when they stop you from watching your favourite TV programme, but they could be fatal when combined with the Internet of Things and increasingly autonomous machines such as trains, planes and automobiles.
There is an increasing awareness that the solution is to evolve to simplicity. Future architectures are going to reward simplicity through streamlined integration of services to create applications across platforms and vendors.
Bringing it all together
The next architecture won’t be a silver bullet to all of the issues I’ve raised in this post, but to be compelling it will need to provide a platform to tackle problems that have appeared intractable to-date.
Perhaps, though, the greatest accelerator of every generation of architecture is a cool name (such as “cloud”, “thin client”, “service oriented architecture” or “client/server”). Maybe ambitious architects should start with a great label and work back to try and tackle some of the challenges that technologists face today.
In organisations around the world employees are accidentally merging their personal and professional cloud applications with dire results. Some of the issues include the routing of sensitive text messages to family members and the replication of confidential documents onto the servers of competitors.
Our personal and work lives are merging. When done well, this can be a huge boost to our productivity and personal satisfaction. The rise of the smart mobile device has been a major part of this merge, driving Chief Information Officers towards various flavours of Bring Your Own Device (BYOD) policies.
With the advent of a myriad of cloud services delivered over the internet, our individual cloud architectures are becoming far more complex. The increasing trend to move beyond BYOD to Bring Your Own Application (BYOA) means that the way individuals configure their own personal technology has the potential to directly impact their employer.
Bringing applications and data to work
Many of our assumptions about workplace technology are based on the world of the past where knowledge workers were rare, staff committed to one employer at a time and IT was focused on automating back office processes.
In this environment, one employee could be swapped out for another easily and all were prepared to conform to a defined way of working. Our offices were perhaps best compared to Henry Ford’s production line of the early twentieth century where productivity required conformity.
It’s no wonder that IT has put such a high value on a Standard Operating Environment.
However, new styles of working have taken hold (see Your insight might protect your job). Businesses and government are employing more experts on a shared basis. An increasing proportion of the workforce is best described as being made up of “knowledge workers”, and a sizeable number of these people are choosing to work in new ways, including spreading their week over several organisations. Even full-time staff find that their employers have entered into complex joint ventures, meaning that the definition of the enterprise is constantly shifting.
Personal productivity is complex and highly dependent on the individual. The approach that works for one person is complete anathema to another. Telling people to work in a standard way is like putting their productivity in a straitjacket. Employees should be allowed to use their favourite tools to schedule their time, author documents, create presentations and take notes.
Not only is the workplace moving away from lowest-common-denominator software, but the increasing integration between mobile, tablet and personal computers means that the boundaries between them are becoming less clear. It all adds up to the end of the Standard Operating Environment.
CIOs are right to worry that BYOD and BYOA will result in their data being spread out over an unmanageable number of systems and platforms. It is no longer workable to simply demand that all information be kept in a single source of truth physical database or repository. I’ve previously argued that the foundation of a proper strategy is to separate the information from the business logic or processes that create it (see Your data is in the cloud when…).
Some of the simplest approaches that CIOs have taken to manage their BYOD environment have involved virtualisation solutions where a virtual desktop with the Standard Operating Environment is run over a client which is available on many devices.
While this is progress, it barely touches the productivity question. Workers can choose the form factor of the device that suits them, but they are still being forced into the straitjacket of lowest-common-denominator business applications.
The vendors are going to continue to provide better solutions which put the data in a standard form while allowing access to many (even competing) applications. It’s not just about one copy of the data in a database, but rather about allowing controlled replication, with digital watermarks that track the movement of this data and support loss prevention.
While CIOs may see many downsides, the upsides go beyond the productivity of individual workers.
For example, organisations constantly struggle with managing their staff master data, but in a world of personal social media employees will effectively identify themselves (see Login with social media).
Managing software licenses, even in the most efficient organisation, is still an imperfect science at best with little motivation for users to optimise their portfolio. When employees can bring their own cloud subscriptions to work, with an agreed allowance paid to them, the choices that they make are so much more tailored to their actual needs today rather than what they might need in months or even years to come.
Organisations that have grappled with provisioning new PCs are seeing the advantages of the consumer app stores with employees self-administering deployment between devices. Cloud and hardware providers are increasingly recognising the complex nature of families and are enabling security and deployment of content between members. The good news is that even the simplest family structure is more complicated than almost any enterprise organisation chart!
We see a hint of bad architecture when staff misconfigure their iPhones and end up with their (potentially sensitive) text messages being shared with their spouse or wider family, or when contractors use their personal cloud drive while working across more than one organisation. Even worse is when it goes really wrong and a ransomware breach on a personal computer infects all of these shared resources!
The breadth of services that the personal cloud covers is constantly growing. For many, it already includes their telecommunications, voicemail, data storage, diary, expense management, timesheets, authoring of office documents, messaging (email and texts), professional library, project management, diagram tools and analytics. Architecture is even beginning to matter in social media with the convergence of the tools most of us use (see The future of social networks).
Some employers fear the trend of cloud, BYOD and BYOA will lead to the loss of their organisation’s IP. The smart enterprise, however, is realising that well-managed cloud architectures and appropriate taxonomies can help rather than hinder employees to know what’s theirs and what’s yours.
In the near future you may even start choosing staff based on the quality of their personal cloud architecture!
On the recent Stuff to Blow Your Mind podcast episode Outsourcing Memory, hosts Julie Douglas and Robert Lamb discussed how, from remembering phone numbers to relying on spellcheckers, we’re allocating our cognitive processes to the cloud.
“Have you ever tried to recall an actual phone number stored in your cellphone, say of a close friend or relative, and been unable to do so?” Douglas asked. She remarked how that question would have been ridiculous ten years ago, but nowadays most of us would have to admit that the answer is yes. Remembering phone numbers is just one example of how we are outsourcing our memory. Another is spelling. “Sometimes I find myself intentionally misspelling a word to make sure the application I am using is running a spellchecker,” Lamb remarked. Once confirmed, he writes without worrying about misspellings since the spellchecker will catch them. I have to admit that I do the same thing. In fact, while writing this paragraph I misspelled several words without worry since they were automatically caught by those all-too-familiar red-dotted underlines. (Don’t forget, however, that spellcheckers don’t check for contextual accuracy.)
Transactive Memory and Collaborative Remembering
Douglas referenced the psychological concept of transactive memory, where groups collectively store and retrieve knowledge. This provides members with more and better knowledge than any individual could build on their own. Lamb referenced cognitive experimental research on collaborative remembering. This allows a group to recall information that its individual members had forgotten.
The memory management model of what we now call the cloud is transactive memory and collaborative remembering on a massive scale. It has pervaded most aspects of our personal and professional lives. Douglas and Lamb contemplated both its positive and negative aspects. Many of the latter resonated with points I made in my previous post about Automation and the Danger of Lost Knowledge.
Free Your Mind
In a sense, outsourcing our memory to the cloud frees up our minds. It is reminiscent of Albert Einstein remarking that he didn’t need to remember basic mathematical equations since he could just look them up in a book when he needed them. Nowadays he would just look them up on Google or Wikipedia (or MIKE2.0 if, for example, he needed a formula for calculating the economic value of information). Not bothering to remember basic mathematical equations freed up Einstein’s mind for his thought experiments, allowing him to contemplate groundbreaking ideas like the theory of relativity.
Forgetting how to Remember
I can’t help but wonder what our memory will be like ten years from now after we have outsourced even more of it to the cloud. Today, we don’t have to remember phone numbers or how to spell. Ten years from now, we might not have to remember names or how to count.
Wearable technology, like Google Glass or Narrative Clip, will allow us to have an artificial photographic memory. Lifelogging will allow us to record our own digital autobiography. “We have all forgot more than we remember,” Thomas Fuller wrote in the 18th century. If before the end of the 21st century we don’t have to remember anything, perhaps we will start forgetting how to remember.
I guess we will just have to hope that a few trustworthy people remember how to keep the cloud working.
It’s fashionable to be able to claim that you’ve moved everything from your email to your enterprise applications “into the cloud”. But what about your data? Just because information is stored over the Internet, it shouldn’t necessarily qualify as being “in the cloud”.
New cloud solutions are appearing at an incredible rate. From productivity to consumer applications the innovations are staggering. However, there is a world of difference between the ways that the data is being managed.
The best services treat the application separately from the data that supports it and make the content easily available from outside the application. Unfortunately, there is still a long way to go before the aspirations of information-driven businesses can be met by the majority of cloud services, as they continue to lock away the content and keep the underlying models close to their chest.
An illustrative example of best practice is a simple drawing solution, Draw.io. Draw.io is a serious threat to products that support the development of diagrams of many kinds. It avoids any ownership of the diagrams by reading and saving XML, leaving it to the user to decide where to put the content, while making it particularly easy to integrate with your Google Drive or Dropbox account and keeping the content both in the cloud and under your control. This separation is becoming much more common, with cloud providers likely to bifurcate between the application and data layers.
You can see Draw.io in action as part of a new solution for entity-relationship diagrams in the tools section of www.infodrivenbusiness.com.
Offering increasing sophistication in data storage are the fully integrated solutions such as Workday, Salesforce.com and the cloud offerings of the traditional enterprise software companies such as Oracle and SAP. These vendors are realising that they need to work seamlessly with other enterprise solutions either directly or through third-party integration tools.
Also important to watch are the offerings from Microsoft, Apple and Google which provide application logic as well as facilitating third-party access to cloud storage, but lead you strongly (and sometimes exclusively) towards their own products.
There are five rules I propose for putting data in the cloud:
1. Everyone should be able to collaborate on the content at the same time
To be in the cloud, it isn’t enough to back-up the data on your hard disk drive to an Internet server. While obvious, this is a challenge to solutions that claim to offer cloud but have simply moved existing file and database storage to a remote location. Many cloud providers are now offering APIs to make it easy for application developers to offer solutions with collaboration built-in.
2. Data and logic are separated
Just like the rise of client/server architectures in the 1990s, cloud solutions are increasingly separating the tiers of their architecture. This is where published models and the ability to store content in any location is a real advantage. Ideally the data can be moved as needed providing an added degree of flexibility and the ability to comply with different jurisdictional requirements.
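A minimal sketch of what rule 2 looks like in code, with illustrative class names: the business logic depends only on an abstract store, so the same logic can run against local disk, Google Drive or any other backend.

```python
# Illustrative only: application logic depends on an abstract store,
# not on any particular cloud vendor's storage.
from abc import ABC, abstractmethod

class DocumentStore(ABC):
    @abstractmethod
    def save(self, name: str, content: str) -> None: ...
    @abstractmethod
    def load(self, name: str) -> str: ...

class InMemoryStore(DocumentStore):   # could equally wrap Drive, Dropbox...
    def __init__(self) -> None:
        self._docs: dict[str, str] = {}
    def save(self, name: str, content: str) -> None:
        self._docs[name] = content
    def load(self, name: str) -> str:
        return self._docs[name]

def title_case_document(store: DocumentStore, name: str) -> None:
    """Business logic that works unchanged with any storage backend."""
    store.save(name, store.load(name).title())

store = InMemoryStore()
store.save("diagram.xml", "entity relationship diagram")
title_case_document(store, "diagram.xml")
print(store.load("diagram.xml"))   # → Entity Relationship Diagram
```

Swapping the backend means writing a new `DocumentStore`, not rewriting the logic — which is exactly the flexibility that published models and location-independent storage buy.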
3. The data is available to other applications regardless of vendor
Applications shouldn’t be a black box. The trend towards separating the data from the business logic leads inexorably towards open access to the data by different cloud services. Market forces are also leading towards open APIs and even published models.
4. The data is secure
The content not only needs to be secure, but it also needs to be seen to be secure. Ideally it is only visible to the owner of the content and not the cloud application or storage vendor. This is where those vendors offering solutions that separate application logic and storage have an advantage given that much of the security is in the control of the buyer of the service.
5. The data remains yours
I’ve written about data ownership before (see You should own your own data). This is just as important regardless of whether the cloud solution is supporting a consumer, a business or government.
Richard Ordowich, commenting on my Hail to the Chiefs post, remarked how “most organizations need to improve their data literacy. Many problems stem from inadequate data definitions, multiple interpretations and understanding about the meanings of data. Skills in semantics, taxonomy and ontology as well as information management are required. These are skills that typically reside in librarians but not CDOs. Perhaps hiring librarians would be better than hiring a CDO.”
I responded that maybe not even librarians can save us by citing The Library of Babel, a short story by Argentine author and librarian Jorge Luis Borges, which is about, as James Gleick explained in his book The Information: A History, A Theory, A Flood, “the mythical library that contains all books, in all languages, books of apology and prophecy, the gospel and the commentary upon that gospel and the commentary upon the commentary upon the gospel, the minutely detailed history of the future, the interpolations of all books in all other books, the faithful catalogue of the library and the innumerable false catalogues. This library (which others call the universe) enshrines all the information. Yet no knowledge can be discovered there, precisely because all knowledge is there, shelved side by side with all falsehood. In the mirrored galleries, on the countless shelves, can be found everything and nothing. There can be no more perfect case of information glut.”
More than a century before the rise of cloud computing and the mobile devices connected to it, the imagination of Charles Babbage foresaw another library of Babel, one where “the air itself is one vast library, on whose pages are forever written all that man has ever said or woman whispered.” In a world where word of mouth has become word of data, sometimes causing panic about who may be listening, Babbage’s vision of a permanent record of every human utterance seems eerily prescient.
Of the cloud, Gleick wrote about how “all that information—all that information capacity—looms over us, not quite visible, not quite tangible, but awfully real; amorphous, spectral; hovering nearby, yet not situated in any one place. Heaven must once have felt this way to the faithful. People talk about shifting their lives to the cloud—their informational lives, at least. You may store photographs in the cloud; Google is putting all the world’s books into the cloud; e-mail passes to and from the cloud and never really leaves the cloud. All traditional ideas of privacy, based on doors and locks, physical remoteness and invisibility, are upended in the cloud.”
“The information produced and consumed by humankind used to vanish,” Gleick concluded, “that was the norm, the default. The sights, the sounds, the songs, the spoken word just melted away. Marks on stone, parchment, and paper were the special case. It did not occur to Sophocles’ audiences that it would be sad for his plays to be lost; they enjoyed the show. Now expectations have inverted. Everything may be recorded and preserved, at least potentially: every musical performance; every crime in a shop, elevator, or city street; every volcano or tsunami on the remotest shore; every card played or piece moved in an online game; every rugby scrum and cricket match. Having a camera at hand is normal, not exceptional; something like 500 billion images were captured in 2010. YouTube was streaming more than a billion videos a day. Most of this is haphazard and unorganized.”
The Library of Babel is no longer fiction. Big Data is the Library of Babel.
History is replete with examples where ideas are launched with great fanfare and yet fail while subsequent iterations of very similar ideas are hugely successful. The difference between a failed good idea and a success is often only a matter of a few subtle differences.
It can be argued that at any time there are just a few key trends which define technology support for business transformation and innovation in the industry as a whole. In the past we have dealt with network trends that have resulted in better communications, infrastructure trends which have delivered standardisation and technology management trends that have given us a focus on project outcomes.
In each case, the wave that saw these ideas come through was preceded by iterations of similar concepts that simply failed to gain popular support.
The three big trends that we are dealing with in Information Technology at the moment are: digital disruption, cloud and big data. All three tend to get discussed in isolation but are, in fact, closely related. As much as anything, all three reflect the maturing of IT to join other forms of technology where subtle ergonomics, form and design are more important than a simple inventory of features and functions.
Digital is a business trend
Digital technologies are those that contain discrete (discontinuous) elements or values. Contrary to popular misconception, digital does not mean online activities. The opposite of digital is analogue, where elements are not independent, such as business processes that cannot be isolated.
Digital business isolates individual business functions and their supply through business services, often through B2C or B2B online channels. Once functions are isolated they can be recombined in new and innovative ways to create disruption. Digital disruption. Examples of this disruption include the recombination of supply chains, retail, financial products, telecommunications and even quite traditional manufacturing and mining activities.
The principles and technologies of digital business are not new; the industry has developed service oriented architecture (SOA) approaches over many years with exactly the same goal. What has changed this time is subtle. Rather than relying on standards for interfacing, digital approaches rely on market forces to ensure that individual components are able to effectively exchange information. Once again, market self-interest is trumping central planning!
Cloud as a power station
While the move towards cloud is complex, it is certainly not new. Bureau computing through the 1980s (and even earlier) was just as much a cloud capability as today’s services. With the success of the Internet people have been envisioning a move away from owning infrastructure to combining services online for over a decade.
When people talk about cloud, they often refer to the utility model, the argument being that the move is inevitable. After all, no-one generates their own electricity, right? In fact, the utility industry knows full well that electricity production is coming full circle, with first solar and now increasingly gas-fired generation moving into the home. Similarly, many proponents of cloud argue that it is the continuation of the trend towards thin client applications, but software architects are very aware that the rise of mobile apps has seen the return of the thick client with a vengeance!
Centralisation versus decentralisation and thick versus thin are simply part of the cyclical nature of IT. Cloud is much more important: it is the vehicle via which discrete services can be combined and delivered. Whether it is the embedding of payment services in ecommerce solutions, CRM for call centres or the provision of photo storage for home users, the really interesting trend is how these services can be combined in new and commercially viable ways.
Big data is an expression of freedom from technology
Big data leverages the digital breadcrumbs from all of the new services that cloud and digital together enable, and builds on the freedom that organisations are gaining by not having to own or build everything themselves.
When businesses have to own everything, they naturally try to define it in enormous detail. In enterprises that are tightly interconnected (or naturally analogue), enterprise consistency and standards are paramount – hence the focus on master data. As elements of the business become discrete (digital), they can be allowed a level of independence, and their data is driven by an internal and external market rather than by standards. The result is big data (see my previous post, It’s time for a new definition of big data), which is as different from structured data as databases are from unstructured content.
Big data existed in the past, but without digital and cloud, enterprises were reluctant to set it free for fear of losing control of already complex business processes.
User experience is everything
None of digital, cloud or big data is new. What we are seeing is that these technologies are supporting each other in a way that makes them practical, and the market is learning to use them in a way that fits with day-to-day business and consumer activities.
Digital business makes sense in a world that is used to sharing big data across organisational boundaries. Cloud is important in a digital world that is not afraid of sharing or migrating whole business functions across boundaries and borders. Big data’s digital bread crumbs rely on the cloud to free-up the organisation from having to maintain control of all information definitions.
Above all else, these trends are being picked up because they are manifested in solutions that are easy to understand and use. Without app stores, payment services, CRMs and analytics tools that reflect the way people actually want to structure their businesses, none of these trends would be taking hold.
A visit to the AIIM Roadshow 2009 in London, UK, helped me get a quick update on the state of the ECM industry. Doug Miles gave the keynote presentation with results from a recent AIIM survey:
- Focus for investment in 2009 will be on Document Management, portals, Enterprise 2.0, Electronic Records Management and Search
- Document capture, MFPs, OCR, digital mailroom are falling behind in importance
- Software expenditure (i.e. license fees) will be up, while services, including training (which will harm associations like AIIM), are stagnating
- Most organisations achieve the expected return on investment (ROI) with their ECM investment, in particular with hard $ ROI; interestingly enough, most overachieve their soft $ ROI – this makes ECM look like a pretty low-risk investment compared to some other IT initiatives.
- 43% of organisations achieve a payback within 12 months on their document capture and digitisation projects alone!
Doug moved on to discuss Microsoft’s offering for ECM, Microsoft Office SharePoint Server (MOSS):
- 12% of respondents stated that MOSS is their chosen ECM suite (enterprise wide)
- Collaboration is stated as the top reason to implement MOSS
- Enterprise 2.0 and Search, though, are only 8th and 9th on the priority list, with ~25% of respondents working on them
- 5% of respondents stated that MOSS is compatible with other ECM solutions, and 22% that it works in parallel to them
We also got a bit of visionary (wishful?!) thinking:
- Focus for ECM should be on creating an “ECM Central”, a central capability as opposed to single purpose solutions
- Drive ECM towards Service Oriented Architecture (SOA) via single sign-on (SSO), open source and open standards
- Provide enterprise search for retrieval across all repositories and systems
- And consider managing assets “in place” as opposed to endless migration efforts from one system to the next
- Interestingly enough, 35% of respondents want to migrate to a single system
- 34% will provide linkage between systems via portals
- and only 9% are considering or implementing enterprise search
We also heard a bit about Software as a Service (SaaS) and Cloud computing:
- 20% of respondents plan or use Cloud or SaaS for document management
- Top concerns are around security and integration with other systems
Well, who is gonna survive the industry consolidation? IBM, Oracle and Microsoft are battling for the next wave. OpenText remains the single independent specialist ECM provider…