Archive for July, 2012
Everyone is talking about cloud computing, but most of the debate misses the point. Cloud computing isn’t really about computers at all; it is about business services delivered in new ways. Much of the time it is about combining offers in the market.
As with many new things, there is confusion over the terminology with Infrastructure as a Service (IaaS), Software as a Service (SaaS) and Business as a Service (BaaS) all overlapping with more general concepts of cloud. While the more effective use of computing resources is a feature of cloud computing, it is not the most valuable.
As business gets more complex, it is harder and harder for organisations to maintain every capability internally. Even the deployment of packaged software requires both IT and business project staff to have a good understanding of the business problem. This is the reason why cloud solutions for specialised problems like staff expense management have become so popular so quickly. Another reason why these have formed the first generation of deployed solutions is that they are easy to segment from other operations of the organisation.
Cloud-delivered computing services, often provided on a pay-per-use basis, also offer an alternative to the buy-or-lease options traditionally evaluated when acquiring computing resources. On-demand, pay-per-use services also offer a way of temporarily delivering information technology in support of merger, acquisition and separation activities.
Where consumers have had access to cloud-based services that simplify their lives by synchronising data, integrating their online purchases or bringing together their files, they have grabbed them enthusiastically. Presented with a simple way to manage their telecommunications or banking services, they typically embrace it quickly.
In the near future, the cloud will make a range of payment solutions possible, possibly rendering mobile payments (through Near Field Communications or NFC) redundant before they’ve even hit the market. What will make these new services interesting is that they will be independent of merchant or customer technology and rather combine location services with other aspects of the shopping experience.
One of the most common concerns that organisations raise about cloud computing is the potential for data to become fragmented, ungoverned or worse, exposed to foreign parties. All of these concerns are valid, however cloud also offers the opportunity to provide a much tighter framework to protect and govern that very same information. By assigning and contracting responsibilities, a proper governance structure can actually create greater accountability and remove many risks from within an institution.
The only certainty is uncertainty. The competitive market for customers means that everyone is looking for an edge, something to bundle or use to add value, be it through loyalty points or one-off rewards. With cloud, the third party becomes a service which participates fully in the client experience, but leaves the client relationship intact.
Without cloud, a retailer, financial institution or utility looking to white label a specialised product has to re-host the content on its own website and either hand over the client to the third party or develop a full application on its own servers. With cloud, organisations can offer third-party products delivered through the cloud as a seamless part of their own service – not just the website, but also the retail, digital and call centre channels.
It is likely that banks will partner with retailers to provide an integrated online experience. Why would a customer want to go all the way through to an online store if all they want to do is repeat a purchase they made in a previous month? Their credit card statement on their internet banking portal is probably the first place they’d like to go to repeat the purchase. Done properly, this will be a true cloud service with a seamless set of rich shopping applications embedded.
One of the best examples of this would be cosmetics, which are often purchased without change over many years. Partnering with retailers or even wholesalers, banks can offer a re-order application within their banking environment without ever needing to develop retailing expertise within their IT or business teams.
While change is a challenge to incumbents, cloud computing provides an exciting opportunity to create an agile organisation and to launch products in response to new entrants almost as quickly as they can.
Whether or not you’re a fan of big government, if you read this blog then you’re probably at least open to the idea of Big Data. And, when it comes to Big Data, it’s hard to envision any organization with more data at its disposal than the US federal government.
Lamentably, and for a variety of reasons well beyond the scope of any individual post, the US government is (putting it very politely) still muddling through Big Data. In fact, it is doing a mere fraction of what it could with so much potentially valuable data. A recent report, titled “The Big Data Gap” (registration required):
…found that as agencies look to leverage big data, the technology and applications needed to successfully leverage big data are still emerging. Sixty percent of civilian agencies and 42 percent of Department of Defense/intelligence agencies say they are just now learning about big data and how it can work for their agency. Federal IT professionals say improving overall agency efficiency is the top advantage of big data (59 percent) followed by improving speed and accuracy of decisions (51 percent) and the ability to forecast (30 percent).
With so many other pressing priorities, perhaps it’s understandable that the US federal government isn’t exactly leading the way when it comes to IT and cutting-edge data management. (My hunch is that the US isn’t alone here.) Yet, at some point doesn’t embracing Big Data have to become a priority? At what point do the excuses start to evaporate?
Technologist and pundit Tim O’Reilly has spoken and written extensively about the need for government to become a platform. (His eponymous publishing company recently released a large tome, Open Government, expanding on that very notion.)
I can only imagine the innovation that will invariably come from government employees, external developers, and everyday citizens once government embraces Big Data. Minimizing the number of superfluous, redundant, and antiquated data sources will spur a raft of applications and services. Perhaps this will finally allow the public sector to do more with less. Maybe the potential of so much technological advancement is just waiting to be unleashed.
Big Data could happen in one of two ways: bottom-up or top-down. I personally think that a senior mandate requiring the use of Big Data is unlikely, and not even necessary. Rather, once one employee, one agency, or one department does something innovative and flat-out cool with Big Data, others will want to emulate that success. It’s my hope that the dominoes will then start to fall. Ultimately, employees, agencies, and departments not using Big Data will be the exception, not the rule.
What say you?
The NBA season is finally, mercifully, officially over, although as a Warriors fan, it was over long ago. Congratulations to the Miami Heat on winning the championship.
I couldn’t help but think of the many parallels between the operation that is the NBA and how companies out there are engaging consultants. Here are 10 observations.
1. Although there is some clustering, the NBA All-Star teams were stocked with players from 20 or so teams. Kobe Bryant, Dwight Howard, LeBron James, Chris Paul and Kevin Durant all play for different teams. If a consultancy puts forward its team as the all-league all-star team, with no deficiencies whatsoever, that is a red flag. All teams have them. Both sides should understand this and strive for a best fit, given the realities that talent gets spread around naturally.
2. Consulting teams need a winning formula. Do they know what it is? Will that work in your environment? For the Heat, it was LeBron and Wade. For the Lakers, it was Kobe, Bynum and a solid supporting cast. For the Magic, it was Howard and defense first. Oklahoma City went deeper with a solid rotation. Other teams put all shooters on the floor or focus on defense.
3. I did not notice an NBA team, in an effort to save money, put the cheapest, most inexperienced player it could find on the court this season. Heck, there are people who would pay for the glory of playing. No, I think every team tried its best to win as many games as possible. If the consulting team you are presented with consists of 3 solid players, with the rest to be named later, make sure the consultancy is not filling out the roster with the cheapest players it can find. Of course, that is misguided on their part as well, but sometimes you need to save the consultancies from doing the wrong thing for both of you.
4. Scores and game clocks are not kept in the referee’s head. The referee does not suddenly blow the whistle and say “game over, Suns win 104-99, goodbye.” The time and the score are kept on large scoreboards for all to see throughout the game. Do you have a scoreboard? Does your consultancy? It is important to know how much progress is being made throughout the game.
5. Beyond the starting 5, NBA benches are filled with world-class athletes, many of whom get as much or more playing time than starters. What is your consultancy’s bench? I’m not referring, necessarily, to their employees not on billing, but to their contingency plan in case of injury, sudden and unexpected poor performance, or a player leaving in the middle of the game. Is the consultancy plugged into the culture of the discipline they are engaged in? Do they have a warm network? Do they scout?
6. NBA teams come to expect certain things from the places they play – things like fans, referees, locker rooms, food, transportation, hoops, lights, a marked court and basketballs to play with. What is your consulting team expecting from you? Software? Hardware? Requirements? Access to certain individuals? Physical space? The ability to network their laptops? It would be a drag to see the game try to start without a basketball or to have the lights go out in the 3rd quarter. Clear up expectations ahead of time with your consultancy.
7. When the Pistons show up to the American Airlines Arena in Miami, they expect the Heat to come out of the dressing room to play against. Imagine their surprise should the Warriors come out! Or they have to play against 6 players on the court. Now, they have game-planned for one team (5 players at a time) and get to play an entirely different team. This bit of surprise will not help the Pistons be successful that night. Is there information the consultancy is not asking for that they should be in order to know what they are up against?
8. Sure, playing basketball is fun. However, it’s also work. Players dive after loose balls, flying into the stands if necessary, and are expected to go all out with little consequence to their body. They need to be skilled at avoiding injury, but cannot play overly concerned with it. There are many moments in a consulting project where it’s less fun and more work. Are you hiring a consultancy that is prepared for the potential hard work ahead?
9. NBA teams shoot about 80 field goals per game, hitting less than half. Actually, only a handful of players in the league hit over 50 percent of their field goals. However, you can’t score or win if you don’t shoot. The Harlem Globetrotters are entertaining when they go into their circle and keep passing the ball, but you don’t see that in a real game. Is your consultancy willing to shoot, and are you willing to let them, even though half of the shots aren’t going in, or is the consultancy interested in making entertaining passes, perhaps back to you?
10. Finally, experience counts. At the NBA draft recently, I was reminded by the announcers that some of the second round picks would not even make the NBA. Only 60 players are drafted each year, all with eye-popping highlights from college and European leagues, and some won’t make it?! That’s how tough it is. Is your consultancy getting tough about their talent?
A Structural Overview of MIKE 2.0
If you’re not already familiar, here is an intro to the structure of the MIKE2.0 methodology and associated content:
- A New Model for the Enterprise provides an introductory rationale for MIKE2.0
- What is MIKE2.0? is a good basic intro to the methodology with some of the major diagrams and schematics
- Introduction to MIKE2.0 is a category of other introductory articles
- MIKE2.0 How To provides a listing of basic articles on how to work with and understand the MIKE2.0 system and methodology
- Alternative Release Methodologies describes current thinking about how the basic structure of MIKE2.0 can itself be modified and evolve. The site presently follows a hierarchical model with governance for major changes, though branching and other models could be contemplated.
We hope you find this of benefit and welcome any suggestions you may have to improve it.
This Week’s Blogs for Thought:
Publishing, Big Data and the Product Launch Reversal
The old publishing model can be summed up in three words: print then sell. It worked for centuries but, over the last several years, has started to crumble.
A recent Wall Street Journal piece entitled “Your E-Book Is Reading You” sheds light on the seismic shift taking place right now in the publishing industry–especially ebooks.
Data Comes Alive!
When most people think of data, images of complex Microsoft workbooks and spreadsheets come to mind. Tables with rows and columns of structured data like dates, stock prices, sales, home sales, and invoices. Historically, many analysts and execs alike have had to think about data in this rather pedestrian way. To some extent, BI projects started in the mid to late 1990s changed that, although many organizations never “got around” to them. Excel was the killer app for this type of thing: simple, relatively powerful, and good enough.
GigaOM Structure: Musings from the Front Lines
I attended GigaOM Structure June 20-21 in San Francisco. Cirro, a company I have been advising that provides the ability to access any data, on any platform, without the complexity of application integration, launched at this event. GigaOM Structure brings together the leaders innovating, shaping and defining the ongoing evolution in the technology industry – cloud computing. There was also a strong nod given to big data at GigaOM Structure, demonstrating the added capabilities that the cloud has brought to big data. Although the talks were not educational sessions, but rather 20-40 minute interviews and panels with some of the industry pioneers, it was still quite educational.
Facebook has fallen on hard financial times lately. That 100-billion-dollar valuation now seems wildly optimistic, to put it mildly. To be sure, Facebook is hardly the only company to slide after its IPO–and it won’t be the last. But is there something more dangerous going on at the world’s largest social network–something that may threaten its very existence?
The Data Problem?
As James Ball writes in The Guardian about the larger problem facing Mark Zuckerberg’s company:
While Google’s revenues are growing – not a bad feat in the current economy – the huge amounts of extra data it’s accumulating aren’t improving its actual ads: the money the company gets for each advert is actually falling. If more data doesn’t make these companies more cash, the rationale falls away. Google’s adverts make it a huge amount of money, and will continue to do so, but there’s no evidence that more user data is making those adverts more effective at generating profit than they already were.
A large chunk of Facebook’s business model is based on the “more data is better than less data” assumption. In theory, this will bring in advertisers and, ultimately, profits. (Parenthetically, this is precisely why Facebook scares the hell out of Google.)
But The Guardian piece raises the following important and fundamental questions about the value of data:
- Does data eventually reach a point of diminishing returns?
- Is more data always better than less data?
We’ll probably find out over the next few quarters or years if Facebook is able to monetize what is perhaps the largest trove of data in the history of the world. What’s more, if Facebook can’t do it, will any organization be able to make sense–and, more important, money–from vast amounts of user-generated information?
Your Organization is Not Facebook
Now, don’t for one minute dismiss the need for (and value of) Big Data, sentiment analysis, semantic technologies, and other modern data management techniques. Facebook’s struggles hardly prove that there is no legitimate value to be gleaned from such things. Let’s say, for the sake of argument, that Facebook can’t justify such a lofty valuation (now or in the future). This in no way means that its data is worthless. It just might not be worth as much as some think. In other words, the argument here centers around how much that data is worth–not whether the data is worth anything.
It’s my firm belief that the vast majority of organizations need to manage their existing data better. I can think of few that wouldn’t also benefit from increasing the types and amount of data they manage. If there is such a thing as diminishing returns to the value of data, Facebook is much, much closer to realizing it than your organization is. Even if Facebook’s stock plummeted to zero (and Larry Page bought drinks for everyone in Silicon Valley), it would still behoove organizations to embrace the “more is better” data theory. Structured, unstructured, and semi-structured data are extremely valuable assets, not liabilities.
Much like any company, the big question for Facebook is not what kind of data it has. Rather, it’s “What can it do with that information?”
What say you?
The old publishing model can be summed up in three words: print then sell. It worked for centuries but, over the last several years, has started to crumble.
A recent Wall Street Journal piece entitled “Your E-Book Is Reading You” sheds light on the seismic shift taking place right now in the publishing industry–especially ebooks. From it:
Publishing has lagged far behind the rest of the entertainment industry when it comes to measuring consumers’ tastes and habits. TV producers relentlessly test new shows through focus groups; movie studios run films through a battery of tests and retool them based on viewers’ reactions. But in publishing, reader satisfaction has largely been gauged by sales data and reviews—metrics that offer a postmortem measure of success but can’t shape or predict a hit. That’s beginning to change as publishers and booksellers start to embrace big data, and more tech companies turn their sights on publishing.
Sound familiar? At least from this observer’s perspective, the publishing industry is hardly alone in being late to the Big Data dance. Now that ebooks have entered the zeitgeist, publishers are finally starting to realize that they can benefit from the data gleaned from readers. In reality, this is no revelation at all: Customer data matters! Same old, same old, right?
Sell Then Print
Actually, Big Data (along with other things) is enabling a fundamental shift in commerce. As the semantic web inches closer, organizations will have an increased ability to test new products and enhancements to existing products before launching them. Throw in funding platforms like Kickstarter, IndieGoGo, and a bevy of others, which effectively allow people to make a widget only after they have sold a certain number of them. And then there’s A/B testing, a topic that I discussed on this site recently. Collectively, we will be able to make better decisions.
Does this take the guesswork entirely out of product design, marketing, and R&D? Of course not. But we’re increasingly seeing more data-oriented and analytical approaches applied to traditionally “warmer and fuzzier” areas of business.
Think that an enhancement will be popular? Test it. Will a new product be embraced in a particular country? Test it. In terms of publishing, the new model is evolving to sell then print–a complete reversal of the old way of doing things. It doesn’t hurt that technology has kept up. In this case, print on demand makes all of this possible.
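To make the “test it” idea concrete, here is a minimal sketch of the statistics behind a simple A/B test: comparing the conversion rate of an existing page against an enhanced one with a two-proportion z-test. The function name and the sample numbers are illustrative assumptions, not from the original post.

```python
import math

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: does variant B convert at a different
    rate than variant A?  Returns (z statistic, observed lift)."""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return z, p_b - p_a

# Hypothetical numbers: 200/4000 sign-ups on the old page
# vs 260/4000 on the enhanced page
z, lift = ab_test_z(200, 4000, 260, 4000)
print(round(z, 2), round(lift, 3))  # |z| > 1.96 is significant at the 5% level
```

With these made-up numbers the test yields z ≈ 2.88, so the enhancement’s lift would be statistically significant; with a smaller sample the same lift might not be, which is exactly why you test rather than guess.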
Simon Says: Let’s Look at the Data
Of course, not everything can be tested in a practical way. For instance, drug companies spend roughly $800 million (USD) to take a drug to market. In this case, testing isn’t financially feasible.
Still, many other formerly qualitative questions today lend themselves to quantitative analysis. Does this type of employee tend to do well in this type of environment? Sure, we can guess. But the better answer is “Let’s look at the data.”
What say you?
When most people think of data, images of complex Microsoft workbooks and spreadsheets come to mind. Tables with rows and columns of structured data like dates, stock prices, sales, home sales, and invoices.
Historically, many analysts and execs alike have had to think about data in this rather pedestrian way. To some extent, BI projects started in the mid to late 1990s changed that, although many organizations never “got around” to them. Excel was the killer app for this type of thing: simple, relatively powerful, and good enough.
These days, however, data visualization tools like Tableau and others allow users at all levels within an organization to think of data in a fundamentally different way. To paraphrase from the classic Peter Frampton album, data is starting to come alive.
Stories Over Spreadsheets
Are we talking about the death of the spreadsheet? Of course not. I just don’t see that happening anytime soon. However, no longer is Excel with attendant charts and pivot tables the sole means by which to present data, particularly to decision makers.
In the words of Kris Hammond, CTO of Narrative Science, a joint research project of the Northwestern University Schools of Engineering and Journalism: “For some people, a spreadsheet is a great device. For most people, not so much so. The story. The paragraph. The report. The prediction. The advisory. Those are much more powerful objects in our world, and they’re what we’re used to.”
No argument here, but simple Excel charts can’t possibly do justice to certain types of data. Look at the following figure:
One could make the argument that this is the equivalent of data art.
Get out of the “data is boring” mind-set. It doesn’t have to be. SaaS-based and open-source tools allow even cash-strapped organizations to make data interactive, informative, and, dare I say, exciting. Forget new colors, fonts, or superficial treatments. More than ever, it’s easy to make your data tell a story, to learn new things from visualized data that would otherwise be lost in plain-Jane columns and rows.
Without question, data can be turned into information and, ultimately, knowledge. Old-school employees and execs need to realize that most decisions today should be based on solid data, but the presentation of that data need not be boring.
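The Narrative Science quote above points at the simplest version of this idea: turning a row of numbers into a sentence a decision maker will actually read. A minimal, purely illustrative sketch (the function and figures are hypothetical, not anything Narrative Science ships):

```python
def narrate_sales(quarter, sales, prior_sales):
    """Render one row of a sales table as a plain-English sentence,
    instead of handing the reader the raw numbers."""
    change = (sales - prior_sales) / prior_sales * 100
    direction = "up" if change >= 0 else "down"
    return (f"{quarter} sales were ${sales:,.0f}, "
            f"{direction} {abs(change):.1f}% from the prior quarter.")

print(narrate_sales("Q2", 1_250_000, 1_100_000))
# → Q2 sales were $1,250,000, up 13.6% from the prior quarter.
```

Real products layer trend detection, ranking of what is newsworthy, and richer language on top, but the design principle is the same: the story, not the spreadsheet, is the deliverable.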
What say you?