Archive for the ‘Business Intelligence’ Category
I was recently watching Bloomberg West, my favorite tech show. Emily Chang was interviewing Bill McDermott, co-chief executive officer of SAP AG. McDermott spoke about the company’s stellar second-quarter results and growth strategy.
During the interview, McDermott commented on SAP clients' widespread adoption of preconfigured apps, i.e., rapid deployment (RD). In this post, I want to touch on some of the data management issues involved in these types of projects.
By way of background, at a high level RD projects involve a vendor or system integrator effectively plunking down a preconfigured application like BI or CRM. The deployment takes a fraction of the time these often laborious projects typically require but, importantly, clients lose the ability to customize the applications. Also note that SAP is hardly the only vendor to conceive of this concept. I've heard about RD for the better part of a decade from more than a few firms.
Benefits Must be Balanced with Costs
Many CIOs chomp at the bit at the very thought of being able to “bang out” new applications and functionality. This is especially true at cash-strapped organizations. To many senior executives, the tradeoffs of RD projects are more than justified.
I’m not here to argue that point, but understand a few things about RD deployments. First, RD hardly gets around the data quality issues facing legacy systems or Waterfall and Agile projects. Specifically, GIGO still applies. Don’t make the mistake of assuming that a live system or application contains accurate or complete data just because it is live.
Second, many organizations’ data is in such disrepair that a good chunk of it can’t be loaded. Period. ABCDE is not a valid zip code. $0 is not a real salary unless you’re a CEO getting millions in stock options. You get my drift. Data unable to be loaded because it conflicts with application rules will be rejected.
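Those application-side checks are easy to picture in code. Here's a minimal sketch in Python; the field names and the two rules (a well-formed US ZIP code, a positive salary) are hypothetical illustrations, not any vendor's actual load logic:

```python
import re

# Hypothetical validation rules of the kind a preconfigured
# application might enforce at load time.
ZIP_RE = re.compile(r"^\d{5}(-\d{4})?$")  # US ZIP or ZIP+4

def validate_record(record):
    """Return a list of rule violations; an empty list means the record loads."""
    errors = []
    if not ZIP_RE.match(record.get("zip", "")):
        errors.append("invalid zip code")     # 'ABCDE' fails here
    if record.get("salary", 0) <= 0:
        errors.append("non-positive salary")  # $0 fails here
    return errors

def partition(records):
    """Split records into those the application loads and those it rejects."""
    loaded, rejected = [], []
    for rec in records:
        (rejected if validate_record(rec) else loaded).append(rec)
    return loaded, rejected
```

Run against a typical legacy extract, the rejected pile is often uncomfortably large, which is exactly the point: the RD tool won't fix that data for you.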
Third, RD projects often involve benchmarks and industry KPIs. That is, a retail organization can compare its employee turnover or sales-per-square-foot to industry averages. That’s all fine and dandy, but remember that RD eschews customizations. The way that organization XYZ calculates turnover or same-store sales may differ slightly or significantly from that of other companies, effectively rendering comparisons moot. In turn, this can quash the user adoption of the very tool that XYZ crammed in.
Simon Says: Take Vendor Promises with a Grain of Salt
I’m not against RD as a concept. Organizations that manage their data well will, all else being equal, get more out of applications than organizations lacking such discipline. Just remember that there’s no magic wand, no secret sauce.
Perhaps embarking on a data cleansing project before commencing an RD project is the way to go. Better yet, see if you can try a cloud-based version of the application (even on a limited basis) to fully appreciate life with the new application before writing a big check.
What say you?
Test Everything. That's the title of a recent Wired magazine article on the merits and usage of A/B testing. The piece is nothing less than fascinating and I encourage you to check it out. In this post, I'd like to chime in with my own thoughts on the subject.
Perhaps first and foremost, A/B testing allows you to generate your own data. That is, organizations can be proactive with regard to data management. This is in stark contrast to the practices of far too many companies that rely almost exclusively on much more reactive data management. A/B testing does not eliminate the need for judgment. There's still some art to go with that science. But, without question, A/B testing allows data management professionals to increase the mix of science in that recipe. From the aforementioned Wired piece:
“It [A/B testing] is your favorite copyeditor,” says IGN co-founder Peer Schneider. “You can’t have an argument with an A/B testing tool like Optimizely, when it shows that more people are reading your content because of the change. There’s no arguing back. Whereas when your copyeditor says it, he’s wrong, right?” This comment stings retroactively, as forty-eight hours later I would cost his company umpteen clicks with my misguided “improvement.”
Think about the power of data. In my experience, data naysayers often discount data because, in large part, they're "not numbers people." While some people will always find reasons to discredit that which they don't understand, A/B testing can provide some pretty strong ammunition against skeptics. After all, what happens when website layout A, book cover A, or product description page A shows twice the level of engagement as its alternatives? Even a skeptic will have to admit defeat.
Of course, A/B testing is hardly a panacea. The basic laws of statistics still apply, not the least of which is the notion of statistical significance. A site that gets 50,000 unique hits per day can reasonably split its audience in two and, in the end, feel confident that any results are genuine. Without getting all "statsy", there's not much of a chance of either Type I or Type II errors with such sample sizes. Now, if a site gets 50 unique visitors per day, the chances of seeing a "false positive" or of failing to see a legitimate cause-and-effect relationship are considerably higher. Beyond statistics, though, there's a stylistic or design issue with A/B testing. Consider the famous quote by Steve Jobs:
“It’s really hard to design products by focus groups. A lot of times, people don’t know what they want until you show it to them.” – BusinessWeek, May 25 1998
Do you really want to crowdsource everything? There’s something to be said for the vision of an individual, small team, or small company. Giving everyone a vote may well drive a product to mediocrity. That is, in an attempt to please everyone, you’ll please no one.
Whether A/B testing is right for your organization hinges upon a bevy of factors. Timing, culture, and sample sizes are just a few things to consider. If you go down this road, though, don’t stop just because you don’t like what the data are telling you.
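To make the sample-size point concrete, here is a minimal two-proportion z-test in Python (standard library only, normal approximation). The traffic and conversion numbers are illustrative; a real experiment would more likely lean on a testing tool or a stats library:

```python
from math import sqrt, erf

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B experiment.

    Returns the z statistic and a two-sided p-value based on the
    normal approximation to the binomial.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return z, p_value

# 25,000 visitors per variation: a 3.0% vs. 3.4% conversion lift is real.
z_big, p_big = ab_significance(750, 25000, 850, 25000)

# 25 visitors per variation: even a doubled rate (4% vs. 8%) is
# indistinguishable from noise.
z_small, p_small = ab_significance(1, 25, 2, 25)
```

With the big sample the p-value comes in well under 0.05; with the tiny one it doesn't, no matter how tempting the raw percentages look.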
What say you?
In my first book, Why New Systems Fail, I write about the perils of IT projects. Clocking in at over 350 pages, it’s certainly not a short book. In a nutshell, projects fail because of people, organizations, communications, and culture more than purely technological issues.
I began writing that book in mid-2008, long before apps and mobile BI had reached critical mass. While I know that technologies always change, when I leaf through my first text, I find that the book’s lessons are still very relevant today.
For instance, consider mobile BI. In an Information Management piece on the ten most common mobile BI mistakes, Lalitha Chikkatur writes:
- Assuming mobile BI implementation is a project, like a traditional BI implementation
- Underestimating mobile BI security concerns
- Rolling out mobile BI for all users
- Believing that return on mobile BI investment cannot be derived
- Implementing mobile BI only for operational data
- Assuming that mobile BI is appropriate for all kinds of data
- Designing mobile BI similar to traditional BI design
- Assuming that BI is the only data source for mobile BI
- Believing mobile BI implementation is a one-time activity
- Claiming that any device is good for the mobile BI app
It's an interesting list and I encourage you to check out the entire post.
The Good, the Bad, and the Ugly
I’d argue that you could pretty much substitute ‘mobile BI’ for just about any contemporary enterprise technology, but let’s talk for a minute about the actual devices upon which mobile applications run.
With any technology there have always been laggards, and mobile BI is no exception to that rule. Think about the potential success of mobile BI in a typical large healthcare organization vs. a typical retail outlet.
In a very real way, mobile BI is quite similar to electronic medical records (EMRs). The technology behind EMRs has existed for quite some time, yet its low penetration rate is often criticized, especially in the United States. Why? The reasons are multifaceted, but user adoption is certainly at or near the top of the list. (For more on this, check out this HIMSS white paper.)
Generally speaking, many senior doctors have (at least in their view) been doing just fine for decades without having to mess around with smartphones, tablets, and other doohickeys. Old-school doctors rely exclusively upon paper charts and pens. Technology just gets in the way, and many of them just plain don't get it.
Does this mean that the successful deployment of mobile BI is impossible at a hospital? Of course not. Just understand that many users will be detractors and naysayers, not advocates.
Retail, on the other hand, is quite a different animal on many levels (read: margins, employee turnover, elasticity of demand for the product, tax implications). Here, though, let’s focus on the end user.
Go into just about any retail store and I'll bet you a steak vs. a Saab that most of its employees aren't much older than 30. Translation: they grew up on computers and the Internet. As such, they're all too willing to experiment with tablets, apps, and mobile BI. The fear factor is gone, and that fundamental willingness to experiment cannot be overstated.
Technology progresses faster than most users can embrace it–or, at least, want to embrace it. Like any technology, mobile BI can only be successful if your employees allow it to be.
What say you?
Summary: Mobile BI isn't so much a new technology as it is a new platform for an existing technology.
For years, the idea was greater than the execution. Then a product came along that changed the game and unleashed a flurry of apps and development activity.
Yes, I’m talking about the effect of the iPad on mobile BI, a subject I broached on this very blog a few weeks ago. And I’m not the only one noticing this trend. Nicole Laskowski recently wrote an interesting TechTarget piece on the adoption of mobile BI in the enterprise. According to Laskowski, “mobile BI is enjoying a surge in popularity. According to Gartner, 33% of the 1,364 organizations using BI tools it surveyed are planning to deploy mobile BI this year. That’s in addition to the 8% already using the technology.”
No doubt that the iPad (and its apps) are collectively driving the mobilization of BI. While iPads are inherently cool, they aren’t cool enough by themselves to compel CIOs to write really big checks. Yes, directors and other senior managers still have to make the business case that mobile BI is actually needed. In Laskowski’s words:
The BICC (BI Competency Center) can help determine a company’s BI blueprint and how mobile fits into the overall strategy. [F]or a BICC to be effective, it will need executive sponsorship.
Mobile BI, alone, will still be a hard sell. Businesses should construct a plan that will eventually encompass more use cases and more functionality across the enterprise and beyond BI.
So, much like any other technology, it's imperative to make the business case. To this end, mobile BI is no different than many other enterprise technologies, especially "non-critical" ones. After all, CRM and ERP systems are probably higher up on the totem pole for many organizations.
Mobile: A New Platform, The Same Challenges
I'd bet you a Coke that most large organizations dipping their toes into the mobile BI pool have seen this movie before. That is, they've probably tried BI in the past, and probably more than once. As a result, mobile BI isn't so much a new technology as it is the deployment of an existing technology on a new platform. And the sooner that most organizations understand that, the better.
To be sure, mobile BI deployment may be quicker and much easier than traditional, on-premise projects (especially if done via private app stores). However, don't mistake easier for easy. The old rules still apply. Failure rates are typically high and ROI is often low or negative. My friend, IT project failure expert Michael Krigsman, believes that "BI is challenging because it really sits between business and IT. Of course, the technical deployment belongs to IT, however, without business engagement the likelihood is low that the deployment will be successful."
Truer words have never been uttered.
Whether the processing is performed on a mainframe, server, or smartphone, BI is BI. For instance, the pernicious effects of incomplete or inaccurate data exist irrespective of platform. Ditto dysfunctional cultures, IT-business chasms, and other thorny organizational issues. Mobility and apps aren’t silver BI bullets.
What say you?
As the author of four books, I pay close attention to the publishing world, especially with respect to its use of emerging technologies. Brass tacks: I often wonder whether traditional publishers have truly embraced the Information Age. While I understand the resistance, can't data analysis and business intelligence help publishing houses improve their batting averages (read: select more successful books, reach their readers better, and avoid expensive mishaps)?
These are just some of the issues broached at O’Reilly’s Tools of Change for Publishing conference. This annual event bills itself as the place in which “the publishing and tech industries converge, as practitioners and executives from both camps share what they’ve learned from their successes and failures, explore ideas, and join together to navigate publishing’s ongoing transformation.” From a recent article reflecting upon TOC 2012:
If one thing was clear from this year’s TOC it’s that the publishing business is finally getting serious about data and analytics. This can mean looking at the granularity of day-to-day marketing strategies — such as when to send out a tweet to get maximum re-tweets on your social network (there’s an app for that), making quantitative assessments of the number of books a “library power patron” will buy based on their reading habits (one bought for every two borrowed), or the likelihood of a German to feel bad about downloading a pirated e-book from BitTorrent (not so much).
Years ago, most publishers selected books exclusively upon the recommendations of acquisition editors. AEs have been the gatekeepers, those coveted folks who somehow knew which books would be successful.
Except many of them didn’t.
Bad Batting Averages
In fact, the number of misses by big publishers is pretty astounding. Stephen King received hundreds of scathing rejection letters before he proved himself a book-selling machine. Publishers passed on initial manuscripts of Chicken Soup for the Soul, a franchise that has reached tens of millions of people. John Grisham self-published his first book.
I could go on but you get my point: relying upon hunches and intuition isn't exactly a recipe for successful decisions, and book sales are no exception to this rule. Slowly, publishers are recognizing this fact and embracing analytics and Big Data. They have to; their margins are being squeezed and they have no choice but to adapt or die. While developing the perfect equation to predict book sales may be impossible (there are always Black Swans), no doubt publishers can benefit from a more information-driven approach to managing their business. After all, it worked for the Oakland A's, right?
Simon Says: It’s Not Just About Previous Book Sales
Individual judgment will always matter in evaluating any business opportunity. No one is saying that machines, data, and algorithms need to completely supplant human intervention. The data may tell us what, but it may not tell us why. Plus, there are always times in which it makes sense to bet big the other way, to ignore the data. Maybe 20 years ago, an author who sold 20,000 copies of a book could be expected to sell more or less the same number of the next one. These days, however, publishers are using new and often fuzzier metrics like an author's (or prospective author's):
- Twitter followers
- site’s Google PageRank or Alexa ranking
- RSS subscribers
- size of mailing list
- number of Facebook fans
- Klout score
- and others
What say you?
It’s no wonder they call it a dashboard… Like your car, your business could scarcely move forward without it. A well designed intelligence dashboard provides a snapshot of the overall health of your operations and gives immediate insight into areas that need improvement or tuning. In short, it’s key to keeping your finger on the pulse of your business.
At a minimum it helps:
1. Provide needed performance metrics and data for decision making.
2. Save time by showing only key reports.
3. Communicate trends with team members and management.
Most BI analytic and reporting programs (Google, Salesforce, Radian6, and Raven, to name a few) offer this feature, but how well are we using it? To get the most out of your intelligence software, your analytics dashboard must (at a minimum) be up-to-date, designed with your bottom line in mind, and easily shareable and accessible.
It sounds like a no-brainer, but meeting these requirements is often easier said than done. Most organizations struggle with data quality and input delays. Does your sales or HR department update their progress on a daily or monthly basis? Are all the necessary fields being populated? If the information is not timely or correct, your dashboard and decision making will suffer as a result.
If data quality isn’t your problem, what about the design? When it comes to the layout and contents of your dashboard, you could easily be making a crucial mistake. Are you focusing on the correct reports that impact the bottom line for your department? Is unnecessary information deterring viewers from seeing the real picture? Even the best dashboards need fine tuning as goals and business needs change.
What about accessibility? Possibly the most crucial requirement for a well-performing intelligence dashboard is the ability for team members to easily access and share information in real time. Setting proper edit and access levels so that the right people can pull reports and make changes sounds elementary, but it can often be overlooked when designing your system.
What factors do you think contribute to a well-oiled intelligence dashboard, and how well is yours working?
The 1993 documentary The War Room tells the story of the 1992 US presidential campaign from a behind-the-scenes perspective. The film shows first-hand how Bill Clinton's campaign team responded to different crises, including allegations of marital infidelity. While a bit dated today, it's nonetheless a fascinating look into "rapid response" politics just when technology was starting to change traditional political media.
Today, we’re starting to see organizations set up their own data war rooms for essentially the same reasons: to respond to different crises and opportunities. Information Week editor Chris Murphy writes about one such company in “Why P&G CIO Is Quadrupling Analytics Expertise”:
[Procter & Gamble CIO Filippo] Passerini is investing in analytics expertise because the model for using data to run a company is changing. The old IT model was to figure out which reports people wanted, capture the data, and deliver it to the key people weeks or days after the fact. “That model is an obsolete model,” he says.
Murphy hits the nail on the head in this article. Now, let's delve a bit deeper into the need for a new model.
The Need for a New Model
There are at least three factors driving the need for a new information management (IM) model in many organizations. First, let’s look at IT track records. How many organizations invested heavily in the late 1990s and early 2000s on expensive, on-premise ERP, CRM, and BI applications–only to have these investments ultimately disappoint the vast majority of stakeholders? Now, on-premise isn’t the only option. Big Data and cloud computing are gaining traction in many organizations.
Next up: time to respond. Beyond the poor track record of many traditional IT investments, we live in different times relative to even ten years ago. Things happen so much faster today. Why? The usual suspects are the explosion of mobility, broadband, tablets, and social media. Ten years ago, the old, reactive, requirement-driven IM model might have made sense. Today, however, that model is becoming increasingly difficult to justify. For instance, a social media mention might cause a run on products. By the time proper requirements have been gathered, a crisis has probably worsened. An opportunity has probably been squandered.
Third, data analysis and manipulation tools have become much more user-friendly. Long gone are the days in which people needed a computer science or programming background to play with data. Of course, data modeling, data warehousing, and other heavy lifting necessitate more technical skills and backgrounds. But the business layperson, equipped with the right tools and a modicum of training, can easily investigate and drill down on issues related to employees, consumers, sales, and the like.
Against this new backdrop, which of the following makes more sense?
- IT analysts spending the next six weeks or months interacting with users and building reports?
- Skilled users creating their own reports, creating and interpreting their own analytics, and making business decisions with minimal IT involvement (aka, self service)?
Building a data war room is no elixir. You still have to employ people with the skills to manage your organization's data, and hold people accountable for their decisions. Further, rapid response means making decisions without all of the pertinent information. If your organization crucifies those who make logical leaps of faith (but ultimately turn out to be "wrong" in their interpretation of the data), it's unlikely that this new model will take hold.
What say you?
It’s that time of year! The holidays are finally over and the joys of housekeeping and number crunching are upon us. How did your business end up in 2011? Above or below your expectations?
For us marketing folks, January is typically the time we sit down to audit our marketing efforts from the previous year in preparation for future strategies. And the one big question that should be on everyone's mind is: how much did those leads really cost me?
The good news is, the numbers should be pretty painless to figure out. With all the wonderful analytic and marketing intelligence provided by companies such as Google, Coremetrics, Optify, Marketo, and Raven, there's no shortage of data to analyze to come up with that magic number that tells you, "was this effort really worth it?" And with the rise of online and social media marketing in the past few years, the cost to acquire good leads has dropped dramatically, to the point that any company, regardless of size or budget, can easily compete in the digital marketplace.
So what’s your magic number? In order to determine this, you need to define what you consider a lead to be. Is it someone who fills out a form on your website? Is it someone who clicks on an email campaign? How do you know when your leads are qualified, unqualified, hot or cold? The answer to these questions will differ for every business, but should be defined before calculating the cost of lead acquisition. Once you’ve figured out how you define a lead, you’ll want to find out the number of total leads you acquired and break them down by marketing channel activity.
Next, you'll want to determine the cost of your marketing channel activities. In the world of online marketing, activities will generally fall into one of five campaign categories: Email, Social, SEO, Paid Search, or Events/Webinars. The cost of activities for each channel will generally be a combination of time plus resources.
Lastly, to determine the average cost to acquire a lead, you’ll want to divide the total cost of marketing channel activities by the total number of leads acquired from that channel. For those visual folks out there, the equation looks similar to this:
Average cost to acquire a lead = (cost of activity time + resources) / total leads acquired
Calculate that and tada! That is your magic number (aka the average cost of lead acquisition for that channel). If you want to know the cost of lead acquisition for all channels combined or your marketing department in general, add the total cost for each channel and divide by the total number of leads you received.
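The arithmetic is simple enough to sketch in a few lines of Python; the channel names and dollar figures below are made up purely for illustration:

```python
# Hypothetical monthly figures per channel: total cost of the activity
# (time plus resources, in dollars) and the leads it produced.
channels = {
    "email":       {"cost": 1200.0, "leads": 240},
    "social":      {"cost": 800.0,  "leads": 100},
    "paid_search": {"cost": 3000.0, "leads": 400},
}

def cost_per_lead(cost, leads):
    """Average cost to acquire a lead for one channel."""
    return cost / leads if leads else float("inf")

def blended_cost_per_lead(channels):
    """Cost of lead acquisition across all channels combined."""
    total_cost = sum(c["cost"] for c in channels.values())
    total_leads = sum(c["leads"] for c in channels.values())
    return total_cost / total_leads

per_channel = {name: cost_per_lead(c["cost"], c["leads"])
               for name, c in channels.items()}
# e.g., email works out to $5.00 per lead and social to $8.00
```

Of course, the channel with the lowest cost per lead isn't automatically the winner; lead quality (qualified vs. unqualified, hot vs. cold) still has to be folded into the comparison.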
Knowing how much value you derived from your marketing efforts is paramount in developing your marketing plan. It will enable you to determine which activities generate the highest return on your effort (and help you boost the bottom line). Sure, it takes some time to figure out, but hey, they don't call it crunching because it's easy!
“A good plan is like a road map: it shows the final destination and usually the best way to get there.”
–H. Stanley Judd
I recently had the opportunity to attend IBM’s Information On Demand Conference in Las Vegas, NV. At breakfast one day, Crysta Anderson of IBM, two attendees, and I talked about some BI implementations gone awry. In this post, I’ll discuss a few of the key problems that have plagued organizations spending millions of dollars on BI products–and still not seen the anticipated results.
A Familiar Scenario?
Recognize this? An organization (call it YYZ here) is mired in a sea of individual reports, overwhelmed end users, bad data, and the like. The CIO hears about a competitor successfully deploying a BI tool, generating meaningful KPIs and insights into the business. After lengthy vendor overview and procurement processes, YYZ makes the leap and rolls out its new BI toys. Two years later, YYZ has yet to see any of the expected benefits.
Many of you have seen this movie before.
So, you’re the CIO of YYZ and your own future doesn’t look terribly bright. What to do? Blame vendors if you want. Consultants also make for good whipping boys. Maybe some regulations stood in your way. And there’s always that recalcitrant department or individual employees who didn’t get with the program. But I have one simple question for you: Where did you start?
As we discussed at that table at the IOD conference, many organizations get off on the wrong foot. They begin their BI implementations from the bottom up. In other words, they take a look at the myriad reports, scripts, queries, and standalone databases used–and not used–and then attempt to recreate all or most of them in the new BI tool. Thousands of hours, hundreds of thousands of dollars, and months of employees’ time are spent replicating the old in the new.
The Wrong Questions
There are two major problems with the bottom-up approach. The first and most obvious: not all reports are needed. The core (and faulty) assumption is that we need tomorrow what we needed yesterday. As a consultant, I personally have seen many people request reports in the "new" system that they would never run. When I would ask a question about the purpose of the report or the ultimate destination, I would far too often hear crickets.
Beyond superfluous reports, there's a deeper problem with bottom-up BI: it runs counter to the whole notion of BI in the first place.
Let me explain. At the highest level, cubes, OLAP, data mining, and other sophisticated analytical techniques and tools all start at the top. That is, the conversation typically begins with high-level questions such as:
- Who are our best customers?
- Where are our profits coming from?
- Which products are selling best in each area?
- Which salespeople seem to be struggling?
An organization cannot start a BI implementation in the weeds and expect to be successful. It needs to start with the right questions, define its terms, and clean up its data. Only then can it begin to see the fruits from the BI tree.
Fight the urge to "just do it" when deploying BI tools. Failing to reassess core business processes, KPIs and metrics, and data quality issues is a recipe for disaster. Consultants and vendors who believe that you can just figure these things out mid-stream should be ignored.
What say you?
I was watching Bloomberg West the other day when John Battelle appeared on my screen. For those of you who don't know, Battelle wears a number of impressive hats. When he's not writing, he chairs Federated Media Publishing. He is also a visiting professor of journalism at the University of California, Berkeley. In short, he knows what he's talking about.
Battelle was discussing the evolution of all things technology and, in particular, the movement away from the PC to mobile devices. He also mentioned something called The Data Frame, the theme from the forthcoming Web 2.0 summit. Battelle explains what he means:
For 2011, our theme is “The Data Frame” – focusing on the impact of data in today’s networked economy. We live in a world clothed in data, and as we interact with it, we create more – data is not only the web’s core resource, it is at once both renewable and boundless. [Emphasis mine.]
Consumers now create and consume extraordinary amounts of data. Hundreds of millions of mobile phones weave infinite tapestries of data, in real time. Each purchase, search, status update, and check-in layers our world with more of it. How our industries respond to this opportunity will define not only success and failure in the networked economy, but also the future texture of our culture. And as we’re already seeing, these interactions raise complicated questions of consumer privacy, corporate trust, and our governments’ approach to balancing the two.
Is Battelle ultimately right? I tend to think so. But, beyond that, as I listened to Battelle and researched his notion of The Data Frame, one thing struck me:
Most organizations are under- or unprepared for it.
Now, there are two parts to this fundamental lack of preparation, both of which I’ll discuss in this post.
Particularly in large, conservative organizations, far too often many people don't think of things in terms of data and information, and this is the most significant problem. Decision makers fail to realize that everything is data. Decisions are often made by gut feel, despite the fact that for years decision analysis tools have existed to assist people in making superior choices. How many people do you know with access to sophisticated BI applications who continue to rely upon Microsoft Excel?
Lamentably, many organizations have yet to get their arms around the web and its implications. Legacy systems still abound and rare is the organization that has completely embraced Enterprise 2.0 and its components, including–and arguably most important–cloud computing.
The bottom line is that not enough people think in terms of data, a limitation that invariably influences the choice of which technologies are–and are not–deployed within organizations. While many in old-school enterprises debate what to do and how to do it, the chasm between them and companies that do get it (read: Amazon, Apple, Facebook, and Google) widens. The latter companies are so valuable and admired today because they are building and deploying sticky, integrated, and data-gathering planks and platforms. They’re not just trying to “get” the web. They did that a long time ago.
The Explosion of Mobility
The web has been here in full force for nearly two decades, but enterprise mobility is a much more recent advent. While a few companies have experimented with internal Apple-like App Stores, these are the exceptions that prove the rule. For this reason, consumers and consumer-based companies, not enterprise IT departments, are leading the current technology revolution. This is in stark contrast to what I call Enterprise 1.0 in The Next Wave of Technologies. In the 1990s, people walked into the office to use the most powerful technology. These days, however, the opposite is often true: many people have more powerful devices on their hips than on their desktops.
Once again, it's all about the people. It is incumbent upon the powers-that-be to fundamentally alter their mindsets. Data need not be an "icky" problem to manage. On the contrary, it represents myriad opportunities to recognize and harvest. Once change- and risk-averse executives realize this, they can implement the apps, data models, and the like necessary to survive in our dynamic world.
What say you?