Archive for the ‘Business Intelligence’ Category
In his 1938 collection of essays World Brain, H. G. Wells explained that “it is not the amount of knowledge that makes a brain. It is not even the distribution of knowledge. It is the interconnectedness.”
This brought to my brain the traditional notion of data warehousing as the increasing accumulation of data, distributing information across the organization, and providing the knowledge necessary for business intelligence.
But is an enterprise data warehouse the Enterprise Brain? Wells suggested that interconnectedness is what makes a brain. Despite Ralph Kimball’s definition of a data warehouse being the union of its data marts, more often than not a data warehouse is a confederacy of data silos whose only real interconnectedness is being co-located on the same database server.
Looking at how our human brains work in his book Where Good Ideas Come From, Steven Johnson explained that “neurons share information by passing chemicals across the synaptic gap that connects them, but they also communicate via a more indirect channel: they synchronize their firing rates, what neuroscientists call phase-locking. There is a kind of beautiful synchrony to phase-locking—millions of neurons pulsing in perfect rhythm.”
The phase-locking of neurons pulsing in perfect rhythm is an apt metaphor for the business intelligence provided by the structured data in a well-implemented enterprise data warehouse.
“But the brain,” Johnson continued, “also seems to require the opposite: regular periods of electrical chaos, where neurons are completely out of sync with each other. If you follow the various frequencies of brain-wave activity with an EEG, the effect is not unlike turning the dial on an AM radio: periods of structured, rhythmic patterns, interrupted by static and noise. The brain’s systems are tuned for noise, but only in controlled bursts.”
Scanning the radio dial for signals amidst the noise is an apt metaphor for the chaos of unstructured data in external sources (e.g., social media). Should we bring order to chaos by adding structure (or at least better metadata) to unstructured data? Or should we just reject the chaos of unstructured data?
Johnson recounted research performed in 2007 by Robert Thatcher, a brain scientist at the University of South Florida. Thatcher studied the vacillation between the phase-lock (i.e., orderly) and chaos modes in the brains of dozens of children. On average, the chaos mode lasted for 55 milliseconds, but for some children it approached 60 milliseconds. Thatcher then compared the brain-wave scans with the children’s IQ scores, and found that every extra millisecond spent in the chaos mode added as much as 20 IQ points, whereas longer spells in the orderly mode deducted IQ points, but not as dramatically.
“Thatcher’s study,” Johnson concluded, “suggests a counterintuitive notion: the more disorganized your brain is, the smarter you are. It’s counterintuitive in part because we tend to attribute the growing intelligence of the technology world with increasingly precise electromechanical choreography. Thatcher and other researchers believe that the electric noise of the chaos mode allows the brain to experiment with new links between neurons that would otherwise fail to connect in more orderly settings. The phase-lock [orderly] mode is where the brain executes an established plan or habit. The chaos mode is where the brain assimilates new information.”
Perhaps the Enterprise Brain also requires both orderly and chaos modes, structured and unstructured data, and the interconnectedness between them, forming a digital neural network with orderly structured data firing in tandem, while the chaotic unstructured data assimilates new information.
Perhaps true business intelligence is more disorganized than we have traditionally imagined, and perhaps adding a little disorganization to your Enterprise Brain could make your organization smarter.
In my last post, I discussed the sin of pride and information management (IM) projects. Today, let’s talk about envy, defined as “a resentful emotion that occurs when a person lacks another’s (perceived) superior quality, achievement or possession and wishes that the other lacked it.”
I’ll start off by saying that, much like lust, envy isn’t inherently bad. Wanting to do as well as another employee, department, division, or organization can spur improvement, innovation, and better business results. Yes, I’m channeling my inner Gordon Gekko: Greed, for lack of a better word, is good.
With respect to IM, I’ve seen envy take place in two fundamental ways: intra-organizational and inter-organizational. Let’s talk about each.
Intra-Organizational Envy
This type of envy takes place when employees at the same company resent the success of their colleagues. Perhaps the marketing folks for product A just can’t do the same things with their information, technology, and systems that their counterparts representing product B can. Maybe division X launched a cloud-based CRM or wiki and this angers the employees in division Y.
At its core, intra-organizational envy stems from the inherently competitive and insecure nature of certain people. These envious folks have an axe to grind and typically have some anger issues going on. Can someone say schadenfreude?
Inter-Organizational Envy
This type of envy takes place between employees at different companies. Let’s say that the CIO of hospital ABC sees what her counterpart at hospital XYZ has done. The latter has effectively deployed MDM, BI, or cloud-based technologies with apparent success. The ABC CIO wonders why her hospital is so ostensibly behind its competitor and neighbor.
I’ve seen situations like this over my career. In many instances, organization A will prematurely attempt to deploy more mature or Enterprise 2.0 technologies simply because other organizations have already done so–not because organization A itself is ready. During these types of ill-conceived deployments, massive corners are cut, particularly with respect to data quality and IT and data governance. The CIO of ABC will look at the outcome of XYZ (say, the deployment of a new BI tool) and want the same outcome, even though the two organizations’ challenges are unlikely to be the same in type and magnitude.
Envy is a tough nut to crack in large part because it’s part of our DNA. I certainly cannot dispense pithy advice to counteract thousands of years of human evolution. I will, however, say this: Recognize that envy exists and that it’s impossible to eradicate. Don’t be Pollyanna about it. Try to minimize envy within and across your organization. Deal with outwardly envious people sooner rather than later.
What say you?
Next up: gluttony.
For my money, one of the most important business books of the last decade is Chris Anderson’s The Long Tail. In short, advances in technology, the drop in the cost of storage, and the rise of bandwidth collectively mean that the traditional notion of inventory is, in many instances, dead.
Consider that physical bookstores like Barnes and Noble will not stock titles that sell only two or three copies per year. It’s just not worth their while. However, physical stores matter less and less these days. It’s not 1992 anymore. A little company called Amazon.com now sells oodles of books, to use the technical term. To Jeff Bezos et al., inventory is essentially unlimited because vast warehouses store less popular books that sell only a few copies per year. What’s more, the rise of e-books and print on demand (POD) only intensifies this trend. With the latter, a digital file can be turned into a book in minutes. Brass tacks: it has never been easier to sell niche products.
Not Just Books
If you think that the long tail only applies to books, you’re way, way off. Think CDs, movies, and a bevy of other products. In their recent HBR article “Use Big Data to Find New Micromarkets,” Manish Goyal, Maryanne Q. Hancock, and Homayoun Hatami write about the increasing ability of companies to segment their customers:
Consider the case of a chemicals company. Instead of looking at current sales by region, as it had always done [emphasis mine], the company examined market share within customer industry sectors in specific U.S. counties. The micromarket analysis revealed that although the company had 20% of the overall market, it had up to 60% in some markets but as little as 10% in others, including some of the fastest-growing segments. On the basis of this analysis, the company redeployed its sales force to exploit the growth.
For instance, one sales rep had been spending more than half her time 200 miles from her home office, even though only a quarter of her region’s opportunity lay there. This was purely because sales territories had been assigned according to historical performance rather than growth prospects. Now she spends 75% of her time in an area where 75% of the opportunity exists — within 50 miles of her office. Changes like these increased the firm’s growth rate of new accounts from 15% to 25% in just one year.
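The county-level roll-up the authors describe, an overall share that masks wide variation across micromarkets, can be sketched in a few lines of Python. The counties and sales figures below are invented for illustration, not taken from the article:

```python
from collections import OrderedDict

# Hypothetical records: (county, our_sales, total_market_sales).
# These numbers are illustrative only.
records = [
    ("Harris, TX",   6.0, 10.0),
    ("Cook, IL",     1.0, 10.0),
    ("Maricopa, AZ", 3.0, 10.0),
]

def market_share(rows):
    """Return per-county share and the overall (blended) share."""
    shares = OrderedDict(
        (county, ours / total) for county, ours, total in rows
    )
    overall = sum(r[1] for r in rows) / sum(r[2] for r in rows)
    return shares, overall

shares, overall = market_share(records)
# A blended ~33% share hides a 60% county and a 10% county, which is
# exactly the pattern that argues for redeploying the sales force.
```

The point of the exercise is not the arithmetic, which is trivial, but the choice of grain: slicing by county and sector instead of by region is what exposes the opportunity.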
Recently, Jim Harris and I were talking about this subject. The promise of business intelligence has been with us for quite some time, although many organizations have for one reason or another failed to (fully) capitalize upon it. Reciting the reasons here isn’t the best use of my space, but I’d argue that in many organizations BI applications failed because people didn’t want to make ostensibly counterintuitive decisions. For instance, “the data tells me to deploy more salespeople in an area but it just doesn’t feel right.” We often ignore the data, even when the case is clear.
To quote from the HBR article, for “a micromarket strategy to work, however, management must have the courage and imagination to act on the insights revealed by this type of analysis.”
The benefits of the long tail, BI, and many emerging technologies have to be tempered against a slew of data, organizational, and human factors. Technology can only do so much. At a minimum, organizations with cultures that reward (or, at least, fail to punish) people who consistently ignore data fail to capitalize on lucrative opportunities. At worst, they set themselves up for massive failure and potential extinction.
What say you?
I was recently watching Bloomberg West, my favorite tech show. Emily Chang was interviewing Bill McDermott, co-chief executive officer of SAP AG. McDermott spoke about the company’s stellar second-quarter results and growth strategy.
During the interview, McDermott commented on SAP clients’ widespread adoption of preconfigured apps–i.e., rapid deployment (RD). In this post, I want to touch upon some of the data management issues involved in these types of projects.
By way of background, at a high level RD projects involve a vendor or system integrator effectively plunking down a preconfigured application like BI or CRM. The deployment takes a fraction of the time typically involved in these often laborious projects but, importantly, clients lose the ability to customize these applications. Also note that SAP is hardly the only vendor to conceive of this concept. I’ve heard about RD for the better part of a decade from more than a few firms.
Benefits Must be Balanced with Costs
Many CIOs champ at the bit at the very thought of being able to “bang out” new applications and functionality. This is especially true at cash-strapped organizations. To many senior executives, the tradeoffs of RD projects are more than justified.
I’m not here to argue that point, but understand a few things about RD deployments. First, RD hardly gets around the data quality issues facing legacy systems or Waterfall and Agile projects. Specifically, GIGO still applies. Don’t make the mistake of assuming that a live system or application contains accurate or complete data just because it is live.
Second, many organizations’ data is in such disrepair that a good chunk of it can’t be loaded. Period. ABCDE is not a valid zip code. $0 is not a real salary unless you’re a CEO getting millions in stock options. You get my drift. Data unable to be loaded because it conflicts with application rules will be rejected.
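A pre-load validation pass catches these records before the application rejects them. The sketch below uses hypothetical rules (a US zip-code pattern and a positive-salary check), not any vendor’s actual load logic:

```python
import re

# Minimal pre-load validation sketch. The rules are illustrative
# stand-ins for the kind of constraints a preconfigured app enforces.
ZIP_RE = re.compile(r"^\d{5}(-\d{4})?$")

def validate(record):
    """Return a list of rule violations for one record."""
    errors = []
    if not ZIP_RE.match(record.get("zip", "")):
        errors.append("invalid US zip code")
    if record.get("salary", 0) <= 0:
        errors.append("salary must be positive")
    return errors

rows = [
    {"zip": "ABCDE", "salary": 0},       # fails both rules
    {"zip": "10001", "salary": 52000},   # clean
]
rejected = [r for r in rows if validate(r)]
```

Running a pass like this against legacy extracts before the project starts gives you a concrete count of how much of your data the new application will refuse to accept.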
Third, RD projects often involve benchmarks and industry KPIs. That is, a retail organization can compare its employee turnover or sales-per-square-foot to industry averages. That’s all fine and dandy, but remember that RD eschews customizations. The way that organization XYZ calculates turnover or same-store sales may differ slightly or significantly from that of other companies, effectively rendering comparisons moot. In turn, this can quash the user adoption of the very tool that XYZ crammed in.
Simon Says: Take Vendor Promises with a Grain of Salt
I’m not against RD as a concept. Organizations that manage their data well will, all else being equal, get more out of applications than organizations lacking such discipline. Just remember that there’s no magic wand, no secret sauce.
Perhaps embarking on a data cleansing project before commencing an RD project is the way to go. Better yet, see if you can try a cloud-based version of the application (even on a limited basis) to fully appreciate life with the new application before writing a big check.
What say you?
Test Everything. That’s the title of a recent Wired magazine article on the merits and usage of A/B testing. The piece is nothing less than fascinating and I encourage you to check it out. In this article, I’d like to chime in with my own thoughts on the subject.
Perhaps first and foremost, A/B testing allows you to generate your own data. That is, organizations can be proactive with regard to data management. This is in stark contrast to the practices of far too many companies that rely almost exclusively on much more reactive data management. A/B testing does not eliminate the need for judgment. There’s still some art to go with that science. But, without question, A/B testing allows data management professionals to increase the mix of science in that recipe. From the aforementioned Wired piece:
“It [A/B testing] is your favorite copyeditor,” says IGN co-founder Peer Schneider. “You can’t have an argument with an A/B testing tool like Optimizely, when it shows that more people are reading your content because of the change. There’s no arguing back. Whereas when your copyeditor says it, he’s wrong, right?” This comment stings retroactively, as forty-eight hours later I would cost his company umpteen clicks with my misguided “improvement.”
Think about the power of data. In my experience, data naysayers often discount data because, in large part, they’re “not numbers’ people.” While some people will always find reasons to discredit that which they don’t understand, A/B testing can provide some pretty strong ammunition against skeptics. After all, what happens when website layout A, book cover A, or product description page A shows twice the level of engagement as its alternatives? Even a skeptic will have to admit defeat.
Of course, A/B testing is hardly a panacea. The basic laws of statistics still apply, not the least of which is the notion of statistical significance. A site that gets 50,000 unique hits per day can reasonably chop its audience into two and, in the end, feel confident that any results are genuine. Without getting all “statsy”, there’s not much of a chance of either Type I or Type II errors with such sample sizes. Now, if a site gets 50 unique visitors per day, the chances of seeing a “false positive” or failing to see a legitimate cause-and-effect relationship are considerably higher. Beyond statistics, though, there’s a stylistic or design issue with A/B testing. Consider the famous quote by Steve Jobs:
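To make the sample-size point concrete, here is a minimal sketch of a two-proportion z-test, the standard significance test behind a conversion-rate A/B comparison. The visitor and conversion counts are invented for illustration:

```python
from math import sqrt, erf

def ab_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via erf; two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# With 25,000 visitors per variant, a 4.0% vs. 4.5% conversion
# difference is statistically detectable...
z_big, p_big = ab_z_test(1000, 25000, 1125, 25000)

# ...but with 25 visitors per variant, 1 vs. 2 conversions
# (4% vs. 8%!) tells you essentially nothing.
z_small, p_small = ab_z_test(1, 25, 2, 25)
```

The small-traffic case is the trap: the observed lift looks twice as large, yet the test rightly refuses to call it genuine.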
“It’s really hard to design products by focus groups. A lot of times, people don’t know what they want until you show it to them.” – BusinessWeek, May 25, 1998
Do you really want to crowdsource everything? There’s something to be said for the vision of an individual, small team, or small company. Giving everyone a vote may well drive a product to mediocrity. That is, in an attempt to please everyone, you’ll please no one.
Whether A/B testing is right for your organization hinges upon a bevy of factors. Timing, culture, and sample sizes are just a few things to consider. If you go down this road, though, don’t stop just because you don’t like what the data are telling you.
What say you?
In my first book, Why New Systems Fail, I write about the perils of IT projects. Clocking in at over 350 pages, it’s certainly not a short book. In a nutshell, projects fail because of people, organizations, communications, and culture more than purely technological issues.
I began writing that book in mid-2008, long before apps and mobile BI had reached critical mass. While I know that technologies always change, when I leaf through my first text, I find that the book’s lessons are still very relevant today.
For instance, consider mobile BI. In an Information Management piece on the ten most common mobile BI mistakes, Lalitha Chikkatur writes:
- Assuming mobile BI implementation is a project, like a traditional BI implementation
- Underestimating mobile BI security concerns
- Rolling out mobile BI for all users
- Believing that return on mobile BI investment cannot be derived
- Implementing mobile BI only for operational data
- Assuming that mobile BI is appropriate for all kinds of data
- Designing mobile BI similar to traditional BI design
- Assuming that BI is the only data source for mobile BI
- Believing mobile BI implementation is a one-time activity
- Claiming that any device is good for the mobile BI app
It’s an interesting list and I encourage you to check out the entire post.
The Good, the Bad, and the Ugly
I’d argue that you could pretty much substitute ‘mobile BI’ for just about any contemporary enterprise technology, but let’s talk for a minute about the actual devices upon which mobile applications run.
With any technology, there have always been technological laggards and mobile BI is no exception to that rule. Think about the potential success of mobile BI in a typical large healthcare organization vs. a typical retail outlet.
In a very real way, mobile BI is quite similar to electronic medical records (EMRs). The technology behind EMRs has existed for quite some time, yet its low penetration rate is often criticized, especially in the United States. Why? The reasons are multifaceted, but user adoption is certainly at or near the top of the list. (For more on this, check out this HIMSS white paper.)
Generally speaking, many senior doctors have (at least in their view) been doing just fine for decades without having to mess around with smartphones, tablets, and other doohickeys. Old school doctors rely exclusively upon paper charts and pens. Technology just gets in the way and many of them just plain don’t get it.
Does this mean that the successful deployment of mobile BI is impossible at a hospital? Of course not. Just understand that many users will be detractors and naysayers, not advocates.
Retail, on the other hand, is quite a different animal on many levels (read: margins, employee turnover, elasticity of demand for the product, tax implications). Here, though, let’s focus on the end user.
Go into just about any retail store and I’ll bet you a steak vs. Saab that most of its employees aren’t much older than 30. Translation: they grew up on computers and the Internet. As such, they’re all too willing to experiment with tablets, apps, and mobile BI. The fear factor is gone and that fundamental willingness to experiment cannot be overstated.
Technology progresses faster than most users can embrace it–or, at least, want to embrace it. Like any technology, mobile BI can only be successful if your employees allow it to be.
What say you?
Summary: Mobile BI isn’t so much a new technology as it is a new platform for an existing technology.
For years, the idea was greater than the execution. Then a product came along that changed the game and unleashed a flurry of apps and development activity.
Yes, I’m talking about the effect of the iPad on mobile BI, a subject I broached on this very blog a few weeks ago. And I’m not the only one noticing this trend. Nicole Laskowski recently wrote an interesting TechTarget piece on the adoption of mobile BI in the enterprise. According to Laskowski, “mobile BI is enjoying a surge in popularity. According to Gartner, 33% of the 1,364 organizations using BI tools it surveyed are planning to deploy mobile BI this year. That’s in addition to the 8% already using the technology.”
No doubt that the iPad (and its apps) are collectively driving the mobilization of BI. While iPads are inherently cool, they aren’t cool enough by themselves to compel CIOs to write really big checks. Yes, directors and other senior managers still have to make the business case that mobile BI is actually needed. In Laskowski’s words:
The BICC (BI Competency Center) can help determine a company’s BI blueprint and how mobile fits into the overall strategy. [F]or a BICC to be effective, it will need executive sponsorship.
Mobile BI, alone, will still be a hard sell. Businesses should construct a plan that will eventually encompass more use cases and more functionality across the enterprise and beyond BI.
So, much like any other technology, it’s imperative to make the business case. To this end, mobile BI is no different than many other enterprise technologies, especially “non-critical” ones. After all, CRM and ERP systems are probably higher up on the totem pole for many organizations.
Mobile: A New Platform, The Same Challenges
I’d bet you a Coke that most large organizations dipping their toes into the mobile BI pool have seen this movie before. That is, they’ve probably tried BI in the past–and probably more than once. As a result, mobile BI isn’t so much a new technology as it is the deployment of an existing technology on a new platform. And the sooner that most organizations understand that, the better.
To be sure, mobile BI deployment may be quicker and much easier than traditional, on-premise projects (especially if done via private app stores). However, don’t mistake easier for easy. The old rules still apply. Failure rates are typically high and ROI is often low or negative. My friend, IT project failure expert Michael Krigsman, believes that “BI is challenging because it really sits between business and IT. Of course, the technical deployment belongs to IT, however, without business engagement the likelihood is low that the deployment will be successful.”
Truer words have never been uttered.
Whether the processing is performed on a mainframe, server, or smartphone, BI is BI. For instance, the pernicious effects of incomplete or inaccurate data exist irrespective of platform. Ditto dysfunctional cultures, IT-business chasms, and other thorny organizational issues. Mobility and apps aren’t silver BI bullets.
What say you?
As the author of four books, I pay close attention to the publishing world, especially with respect to its use of emerging technologies. Brass tacks: I often wonder if traditional publishers have truly embraced the Information Age. While I understand the resistance, can’t data analysis and business intelligence help publishing houses improve their batting averages (read: select more successful books, reach their readers better, and avoid expensive mishaps)?
These are just some of the issues broached at O’Reilly’s Tools of Change for Publishing conference. This annual event bills itself as the place in which “the publishing and tech industries converge, as practitioners and executives from both camps share what they’ve learned from their successes and failures, explore ideas, and join together to navigate publishing’s ongoing transformation.” From a recent article reflecting upon TOC 2012:
If one thing was clear from this year’s TOC it’s that the publishing business is finally getting serious about data and analytics. This can mean looking at the granularity of day-to-day marketing strategies — such as when to send out a tweet to get maximum re-tweets on your social network (there’s an app for that), making quantitative assessments of the number of books a “library power patron” will buy based on their reading habits (one bought for every two borrowed), or the likelihood of a German to feel bad about downloading a pirated e-book from BitTorrent (not so much).
Years ago, most publishers selected books exclusively upon the recommendations of acquisition editors. AEs have been the gatekeepers, those coveted folks who somehow knew which books would be successful.
Except many of them didn’t.
Bad Batting Averages
In fact, the number of misses by big publishers is pretty astounding. Stephen King received hundreds of scathing rejection letters before he proved himself a book-selling machine. Publishers passed on initial manuscripts of Chicken Soup for the Soul, a franchise that has reached tens of millions of people. John Grisham self-published his first book.
I could go on but you get my point: relying upon hunches and intuition isn’t exactly a recipe for successful decisions, and book sales are no exception to this rule. Slowly, publishers are recognizing this fact and embracing analytics and Big Data. They have to; their margins are being squeezed and they have no choice but to adapt or die. While developing the perfect equation to predict book sales may be impossible (there are always Black Swans), no doubt publishers can benefit from a more information-driven approach to managing their business. After all, it worked for the Oakland A’s, right?
Simon Says: It’s Not Just About Previous Book Sales
Individual judgment will always matter in evaluating any business opportunity. No one is saying that machines, data, and algorithms need to completely supplant the need for human intervention. The data may tell us what, but it may not tell us why. Plus, there are always times in which it makes sense to bet big the other way–to ignore the data. Twenty years ago, an author who sold 20,000 copies of a book could be expected to sell more or less the same number of the next one. These days, however, publishers are using new and often fuzzier metrics like an author’s (or prospective author’s):
- Twitter followers
- site’s Google PageRank or Alexa ranking
- RSS subscribers
- size of mailing list
- number of Facebook fans
- Klout score
- and others
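One hedged way to combine such fuzzy metrics is a weighted, normalized “platform score.” The metrics, weights, and caps below are pure invention for illustration; any real publisher would calibrate them against historical sales data:

```python
# Hypothetical weights over a subset of the platform metrics above.
WEIGHTS = {
    "twitter_followers": 0.4,
    "mailing_list":      0.3,
    "facebook_fans":     0.2,
    "rss_subscribers":   0.1,
}

def platform_score(metrics, caps):
    """Normalize each metric to [0, 1] against a cap, then weight.

    Capping keeps one outlier metric (say, a huge Twitter count)
    from swamping the rest of the author's platform.
    """
    score = 0.0
    for name, weight in WEIGHTS.items():
        value = min(metrics.get(name, 0), caps[name])
        score += weight * (value / caps[name])
    return score

caps = {"twitter_followers": 100_000, "mailing_list": 50_000,
        "facebook_fans": 50_000, "rss_subscribers": 10_000}
score = platform_score(
    {"twitter_followers": 50_000, "mailing_list": 25_000}, caps)
```

The model itself is deliberately crude; the interesting editorial work lies in choosing the weights, which is exactly where individual judgment re-enters the picture.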
What say you?
It’s no wonder they call it a dashboard… Like your car, your business could scarcely move forward without it. A well designed intelligence dashboard provides a snapshot of the overall health of your operations and gives immediate insight into areas that need improvement or tuning. In short, it’s key to keeping your finger on the pulse of your business.
At a minimum it helps:
1. Provide needed performance metrics and data for decision making.
2. Save time by showing only key reports.
3. Communicate trends with team members and management.
Most BI analytic and reporting programs (Google, Salesforce, Radian6, Raven, etc., to name a few) offer this feature, but how well are we using it? To get the most out of your intelligence software, your analytics dashboard must (at a minimum) be up-to-date, designed with your bottom line in mind, and easily shareable and accessible.
It sounds like a no-brainer, but meeting these requirements is often easier said than done. Most organizations struggle with data quality and input delays. Does your sales or HR department update their progress on a daily or monthly basis? Are all the necessary fields being populated? If the information is not timely or correct, your dashboard and decision making will suffer as a result.
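A simple audit of the feed behind a dashboard can surface both problems, stale rows and unpopulated fields, before they skew decisions. This is a minimal sketch; the field names and the seven-day staleness threshold are assumptions to tune to your own update cadence:

```python
from datetime import date, timedelta

# Fields the dashboard needs populated; hypothetical for illustration.
REQUIRED = ("rep", "region", "amount", "updated")

def audit(rows, today, max_age_days=7):
    """Flag rows with missing/empty fields or stale timestamps."""
    issues = []
    for i, row in enumerate(rows):
        # Note: falsy values (empty string, zero) count as missing,
        # which also flags suspicious zero amounts for review.
        missing = [f for f in REQUIRED if not row.get(f)]
        if missing:
            issues.append((i, "missing: " + ", ".join(missing)))
        elif (today - row["updated"]).days > max_age_days:
            issues.append((i, "stale"))
    return issues

today = date(2014, 3, 11)
rows = [
    {"rep": "A", "region": "East", "amount": 100, "updated": today},
    {"rep": "B", "region": "West", "amount": 0,   "updated": today},
    {"rep": "C", "region": "West", "amount": 50,
     "updated": today - timedelta(days=30)},
]
issues = audit(rows, today)
```

Run routinely, a check like this turns “is the dashboard trustworthy?” from a gut feeling into a number you can report alongside the dashboard itself.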
If data quality isn’t your problem, what about the design? When it comes to the layout and contents of your dashboard, you could easily be making a crucial mistake. Are you focusing on the correct reports that impact the bottom line for your department? Is unnecessary information deterring viewers from seeing the real picture? Even the best dashboards need fine tuning as goals and business needs change.
What about accessibility? Possibly the most crucial requirement for a well-performing intelligence dashboard is the ability for team members to easily access and share information in real time. Setting edit and access levels so that the right people can pull reports and make changes sounds elementary, but it can often be overlooked when designing your system.
What factors do you think contribute to a well-oiled intelligence dashboard, and how well is yours working?
The 1993 documentary The War Room tells the story of the 1992 US presidential campaign from a behind-the-scenes perspective. The film shows first-hand how Bill Clinton’s campaign team responded to different crises, including allegations of marital infidelity. While a bit dated today, it’s nonetheless a fascinating look into “rapid response” politics just when technology was starting to change traditional political media.
Today, we’re starting to see organizations set up their own data war rooms for essentially the same reasons: to respond to different crises and opportunities. Information Week editor Chris Murphy writes about one such company in “Why P&G CIO Is Quadrupling Analytics Expertise”:
[Procter & Gamble CIO Filippo] Passerini is investing in analytics expertise because the model for using data to run a company is changing. The old IT model was to figure out which reports people wanted, capture the data, and deliver it to the key people weeks or days after the fact. “That model is an obsolete model,” he says.
Murphy hits the nail on the head in this article. Now, let’s delve a bit deeper into the need for a new model.
The Need for a New Model
There are at least three factors driving the need for a new information management (IM) model in many organizations. First, let’s look at IT track records. How many organizations invested heavily in the late 1990s and early 2000s on expensive, on-premise ERP, CRM, and BI applications–only to have these investments ultimately disappoint the vast majority of stakeholders? Now, on-premise isn’t the only option. Big Data and cloud computing are gaining traction in many organizations.
Next up: time to respond. Beyond the poor track record of many traditional IT investments, we live in different times relative to even ten years ago. Things happen so much faster today. Why? The usual suspects are the explosion of mobility, broadband, tablets, and social media. Ten years ago, the old, reactive, requirement-driven IM model might have made sense. Today, however, that model is becoming increasingly difficult to justify. For instance, a social media mention might cause a run on products. By the time that proper requirements have been gathered, a crisis has probably been exacerbated. An opportunity has probably been squandered.
Third, data analysis and manipulation tools have become much more user-friendly. Long gone are the days in which people needed a computer science or programming background to play with data. Of course, data modeling, data warehousing, and other heavy lifting necessitate more technical skills and backgrounds. But the business layperson, equipped with the right tools and a modicum of training, can easily investigate and drill down on issues related to employees, consumers, sales, and the like.
Against this new backdrop, which of the following makes more sense?
- IT analysts spending the next six weeks or months interacting with users and building reports?
- Skilled users creating their own reports, creating and interpreting their own analytics, and making business decisions with minimal IT involvement (aka, self service)?
Building a data war room is no elixir. You still have to employ people with the skills to manage your organization’s data–and hold people accountable for their decisions. Further, rapid response means making decisions without all of the pertinent information. If your organization crucifies those who make logical leaps of faith (but ultimately turn out to be “wrong” in their interpretation of the data), it’s unlikely that this new model will take hold.
What say you?