
Posts Tagged ‘Business Intelligence’

by: Jonathan
19  Jun  2015

Key Considerations for a Big Data Strategy

Most companies by now understand the inherent value found in big data. With more information at their fingertips, they can make better decisions regarding their businesses. That’s what makes the collection and analysis of big data so important today. Any company that doesn’t see the advantages that big data brings may quickly find itself falling behind its competitors. To benefit even more from big data, many companies are employing big data strategies. They see that it is not enough to simply have the data at hand; it must be utilized in the most effective manner to maximize its potential. Coming up with the best big data strategy, however, can be difficult, especially since every organization has different needs, goals, and resources. When creating a big data strategy, it’s important for companies to consider several main issues that can greatly affect its implementation.

When first developing a big data strategy, businesses will need to look at the current company culture and change it if necessary. In practice, that means encouraging employees throughout the whole organization to embrace big data, including people on the business side as well as those in the IT department. Big data can change the way things are done, and those who are resistant to those changes could be holding the company back. For that reason, they should be encouraged to be open to the effects of big data and ready to accept any changes that come about. Organizations should also encourage their employees to be creative with their big data solutions, fostering an atmosphere of experimentation and a greater willingness to take risks.

As valuable as big data can be, simply collecting it for its own sake will often result in failure. Every big data strategy needs to account for specific business objectives and goals. By identifying precisely what they want to do with their data, companies can enact a strategy that drives toward that objective. This makes the strategy more effective, allowing organizations to avoid wasting money and resources on efforts that won’t benefit the company. Knowing the business objectives of a big data strategy also helps companies identify which data sources to focus on and which to steer clear of.

It’s the value that big data brings to an organization that makes it so crucial to properly use it. When creating a big data strategy, businesses need to make sure they view big data as a company-wide asset, one which everyone can use and take advantage of. Too often big data is seen as something meant solely for the IT department, but it can, in fact, benefit the organization as a whole. Big data shouldn’t be exclusive to only one group within a company. On the contrary, the more departments and groups can use it, the more valuable it becomes. That’s why big data strategies need a bigger vision for how data can be used, looking ahead to the long-term and avoiding narrowly-defined plans. This allows companies to dedicate more money and resources toward using big data, which helps them to innovate and use it to create new opportunities.

Another point all organizations need to consider is the kind of talent present in their companies. Data scientists are sought by businesses the world over because they can provide a significant boost toward accomplishing established big data business goals. Data scientists differ from data analysts in that they can actually build new data models, whereas analysts can only use models that have already been made. As part of a big data strategy, the roles and responsibilities of data scientists need to be properly defined, giving them the opportunity to help the organization achieve the stated business objectives. Finding a good data scientist with skills in big data platforms and ad hoc analysis appropriate to the industry can be difficult when demand is so high, but the value they can add is well worth the effort.

An organized and thoughtful big data strategy can often mean the difference between successful use of big data and a lot of wasted time, effort, and resources. Companies have a number of key considerations to account for when crafting their own strategies, but with the right mindset, they’ll know they have the right plans in place. Only then can they truly gain value from big data and propel their businesses forward.

Category: Business Intelligence
No Comments »

by: Ocdqblog
29  Jun  2014

Overcoming Outcome Bias

What is more important, the process or its outcome? Information management processes, like those described by the MIKE2.0 Methodology, drive the daily operations of an organization’s business functions as well as support the tactical and strategic decision-making processes of its business leaders. However, an organization’s success or failure is usually measured by the outcomes produced by those processes.

As Duncan Watts explained in his book Everything Is Obvious: How Common Sense Fails Us, “rather than the evaluation of the outcome being determined by the quality of the process that led to it, it is the observed nature of the outcome that determines how we evaluate the process.” This is known as outcome bias.

While an organization is enjoying positive outcomes, such as exceeding its revenue goals for the current fiscal period, outcome bias bathes its processes in a rose-colored glow: information management processes must be providing high-quality data to decision-making processes, which business leaders are using to make good decisions. However, when an organization is suffering from negative outcomes, such as a regulatory compliance failure, outcome bias blames them on broken information management processes and poor data quality that led to bad decision-making.

“Judging the merit of a decision can never be done simply by looking at the outcome,” explained Jeffrey Ma in his book The House Advantage: Playing the Odds to Win Big In Business. “A poor result does not necessarily mean a poor decision. Likewise a good result does not necessarily mean a good decision.”

“We are prone to blame decision makers for good decisions that worked out badly and to give them too little credit for successful moves that appear obvious after the fact,” explained Daniel Kahneman in his book Thinking, Fast and Slow.

While risk mitigation is an oft-cited business justification for investing in information management, Kahneman also noted how outcome bias can “bring undeserved rewards to irresponsible risk seekers, such as a general or an entrepreneur who took a crazy gamble and won. Leaders who have been lucky are never punished for having taken too much risk. Instead, they are believed to have had the flair and foresight to anticipate success. A few lucky gambles can crown a reckless leader with a halo of prescience and boldness.”

Outcome bias triggers overreactions to both success and failure. Organizations that try to reverse engineer a single, successful outcome into a formal, repeatable process often fail, much to their surprise. Organizations also tend to abandon a new process immediately if its first outcome is a failure. “Over time,” Ma explained, “if one makes good, quality decisions, one will generally receive better outcomes, but it takes a large sample set to prove this.”
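Ma’s point about sample size lends itself to a quick simulation. The sketch below is purely illustrative (the 55% and 45% success rates and the sample sizes are made-up assumptions, not figures from Ma or Watts); it estimates how often a worse decision process happens to produce the better outcome over runs of different lengths.

```python
import random

def run_trials(win_probability, trials):
    """Count how many of the trials happen to end in a positive outcome."""
    return sum(random.random() < win_probability for _ in range(trials))

def bad_process_looks_better(trials, runs=5_000):
    """Estimate the fraction of runs in which the worse process out-performs the better one."""
    fooled = 0
    for _ in range(runs):
        good = run_trials(0.55, trials)  # assumed: the sound process succeeds 55% of the time
        bad = run_trials(0.45, trials)   # assumed: the flawed process succeeds 45% of the time
        if bad > good:
            fooled += 1
    return fooled / runs

if __name__ == "__main__":
    for trials in (1, 10, 100, 500):
        print(f"{trials:>4} decisions: the flawed process looks better "
              f"{bad_process_looks_better(trials):.1%} of the time")
```

With only a handful of decisions, the flawed process regularly appears to be the better one; only over hundreds of decisions does the difference become unmistakable, which is exactly why judging a process by one or two outcomes is so misleading.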

Your organization needs solid processes governing how information is created, managed, presented, and used in decision-making. Your organization also needs to guard against outcomes biasing your evaluation of those processes.

In order to overcome outcome bias, Watts recommended we “bear in mind that a good plan can fail while a bad plan can succeed—just by random chance—and therefore judge the plan on its own merits as well as the known outcome.”

 

Category: Data Quality, Information Development
No Comments »

by: Ocdqblog
16  Jul  2013

Push Down Business Decisions

In his recent Harvard Business Review blog post Are You Data Driven? Take a Hard Look in the Mirror, Tom Redman distilled twelve traits of a data-driven organization, the first of which is making decisions at the lowest possible level.

This is how one senior executive Redman spoke with described this philosophy: “My goal is to make six decisions a year.  Of course that means I have to pick the six most important things to decide on and that I make sure those who report to me have the data, and the confidence, they need to make the others.”

“Pushing decision-making down,” Redman explained, “frees up senior time for the most important decisions.  And, just as importantly, lower-level people spend more time and take greater care when a decision falls to them.  It builds the right kinds of organizational capability and, quite frankly, appears to create a work environment that is more fun.”

I have previously blogged about how a knowledge-based organization is built upon a foundation of bottom-up business intelligence with senior executives providing top-down oversight (e.g., the strategic aspects of information governance).  Following Redman’s advice, the most insightful top-down oversight is driving decision-making to the lowest possible level of a data-driven organization.

With the speed at which decisions must be made these days, organizations cannot afford to risk creating a decision-making bottleneck by making lower-level employees wait for higher-ups to make every business decision.  While faster decisions aren’t always better, a shorter decision-making path is.

Furthermore, in the era of big data, speeding up your data processing enables you to integrate more data into your decision-making processes, which helps you make better data-driven decisions faster.

Well-constructed policies are flexible business rules that empower employees with an understanding of decision-making principles, trusting them to figure out how to best apply them in a particular context.

If you want to pull your organization, and its business intelligence, up to new heights, then push down business decisions to the lowest level possible.  Arm your frontline employees with the data, tools, and decision-making guidelines they need to make the daily decisions that drive your organization.

Category: Business Intelligence, Information Governance
2 Comments »

by: Ocdqblog
27  Jun  2013

Bottom-Up Business Intelligence

The traditional notion of data warehousing is the increasing accumulation of structured data, which distributes information across the organization, and provides the knowledge base necessary for business intelligence.

In a previous post, I pondered whether a contemporary data warehouse is analogous to an Enterprise Brain with both structured and unstructured data, and the interconnections between them, forming a digital neural network with orderly structured data firing in tandem, while the chaotic unstructured data assimilates new information.  I noted that perhaps this makes business intelligence a little more disorganized than we have traditionally imagined, but that this disorganization might actually make an organization smarter.

Business intelligence is typically viewed as part of a top-down decision management system driven by senior executives, but a potentially more intelligent business intelligence came to mind while reading Steven Johnson’s book Emergence: The Connected Lives of Ants, Brains, Cities, and Software.

(more…)

Category: Business Intelligence
1 Comment »

by: Ocdqblog
18  Apr  2013

The Enterprise Brain

In his 1938 collection of essays World Brain, H. G. Wells explained that “it is not the amount of knowledge that makes a brain.  It is not even the distribution of knowledge.  It is the interconnectedness.”

This brought to my brain the traditional notion of data warehousing as the increasing accumulation of data, distributing information across the organization, and providing the knowledge necessary for business intelligence.

But is an enterprise data warehouse the Enterprise Brain?  Wells suggested that interconnectedness is what makes a brain.  Despite Ralph Kimball’s definition of a data warehouse being the union of its data marts, more often than not a data warehouse is a confederacy of data silos whose only real interconnectedness is being co-located on the same database server.

Looking at how our human brains work in his book Where Good Ideas Come From, Steven Johnson explained that “neurons share information by passing chemicals across the synaptic gap that connects them, but they also communicate via a more indirect channel: they synchronize their firing rates, what neuroscientists call phase-locking.  There is a kind of beautiful synchrony to phase-locking—millions of neurons pulsing in perfect rhythm.”

The phase-locking of neurons pulsing in perfect rhythm is an apt metaphor for the business intelligence provided by the structured data in a well-implemented enterprise data warehouse.

“But the brain,” Johnson continued, “also seems to require the opposite: regular periods of electrical chaos, where neurons are completely out of sync with each other.  If you follow the various frequencies of brain-wave activity with an EEG, the effect is not unlike turning the dial on an AM radio: periods of structured, rhythmic patterns, interrupted by static and noise.  The brain’s systems are tuned for noise, but only in controlled bursts.”

Scanning the radio dial for signals amidst the noise is an apt metaphor for the chaos of unstructured data in external sources (e.g., social media).  Should we bring order to chaos by adding structure (or at least better metadata) to unstructured data?  Or should we just reject the chaos of unstructured data?

Johnson recounted research performed in 2007 by Robert Thatcher, a brain scientist at the University of South Florida.  Thatcher studied the vacillation between the phase-lock (i.e., orderly) and chaos modes in the brains of dozens of children.  On average, the chaos mode lasted for 55 milliseconds, but for some children it approached 60 milliseconds.  Thatcher then compared the brain-wave scans with the children’s IQ scores, and found that every extra millisecond spent in the chaos mode added as much as 20 IQ points, whereas longer spells in the orderly mode deducted IQ points, but not as dramatically.

“Thatcher’s study,” Johnson concluded, “suggests a counterintuitive notion: the more disorganized your brain is, the smarter you are.  It’s counterintuitive in part because we tend to attribute the growing intelligence of the technology world with increasingly precise electromechanical choreography.  Thatcher and other researchers believe that the electric noise of the chaos mode allows the brain to experiment with new links between neurons that would otherwise fail to connect in more orderly settings.  The phase-lock [orderly] mode is where the brain executes an established plan or habit.  The chaos mode is where the brain assimilates new information.”

Perhaps the Enterprise Brain also requires both orderly and chaos modes, structured and unstructured data, and the interconnectedness between them, forming a digital neural network with orderly structured data firing in tandem, while the chaotic unstructured data assimilates new information.

Perhaps true business intelligence is more disorganized than we have traditionally imagined, and perhaps adding a little disorganization to your Enterprise Brain could make your organization smarter.

Category: Business Intelligence
3 Comments »

by: Phil Simon
15  Oct  2012

Seven Deadly Sins of Information Management, Part 4: Lust

Last week, I discussed sloth. I examined the inherent tendency of some people to postpone working–often to the detriment of the organization. This week, I’ll cover lust, defined by Wikipedia as:

an emotion or feeling of intense desire in the body. The lust can take any form such as the lust for knowledge, the lust for sex or the lust for power. It can take such mundane forms as the lust for food as distinct from the need for food. Lust is a powerful psychological force producing intense wanting for an object, or circumstance fulfilling the emotion.

Alright, but what does that have to do with IM projects? Moreover, is lust inherently bad?

Lust and IM

Many organizations want to do more than they are currently doing–and, just as important, more than they are currently capable of doing. Think about the new CIO or CEO who comes into an organization, astonished to see that it isn’t doing anything with respect to Big Data, semantic technologies, business intelligence, or other “Enterprise 2.0” things. Or perhaps that new CXO finally wants the organization to embrace a more data-oriented mind-set.

New leaders want to put their stamps on their organizations. After all, they were recruited because they had a specific vision of where to take the company. If promoted from within the organization, the same principle applies. Succession plans exist for a reason, and few people are supposed to just maintain the status quo. (Yes, this even applies to Tim Cook.)

Is lust inherently bad?

I’d argue no. In an IM context, lust is fundamentally a function of wanting to do something better, whether it’s selling more widgets, increasing profits, doing more with technology, and the like. So, what’s the problem with lust? Isn’t it tantamount to self-improvement?

In short, lust can get leaders, departments, and organizations in trouble when it involves reaching way beyond the capabilities of the enterprise, individual personnel, and/or departments. For instance, it’s sure easy to envy Apple, Facebook, and Google. However, there might be a very good reason (or ten) that a mature organization cannot interpret data as well as those companies.

When listening to the oft-compelling pitches from enterprise software vendors and consulting firms, one starts to dream, “What if we could do exactly what Google does? Wouldn’t that be great?”

Yes, it would, but here’s the rub: Your organization isn’t Google. Specifically, Google doesn’t struggle with basic data management. Its employees don’t cling to antiquated ways of doing things. The company doesn’t rely upon a cauldron of legacy systems to run its enterprise. Google employees don’t have to manually cobble together user statistics over a period of months. Google doesn’t struggle to provide real-time statistics to advertisers, users, and partners.

If your organization does these things, then it’s unlikely that it will be able to reap the benefits of Google-type technologies.

Simon Says

Use lust sparingly. Yes, it can serve as motivation to get better, but don’t let it cloud your judgment. Realize the limitations of your organization. Take steps now to improve data management, organizational culture and agility, and the like. Only then can you satisfy your lustful IM desires.

Feedback

What say you?

Next week: pride.

Category: Information Development
1 Comment »

by: Usmanahmad
15  Dec  2010

Business Intelligence for Revenue Assurance in Communication Industry (Episode-1)

People working in the communications industry (fixed and mobile companies, circuit-switched and IP) hear questions like these in executive meetings:

• How can we be sure that every minute of usage on our network produces an appropriate minute of revenue?
• Are the fixed charges for the dedicated circuits we offer being billed and collected?
• Does customer billing for pay-per-view selections match the settlements paid to content providers?
• Are we calculating bills correctly? What errors are being found in our billing verification efforts?

For decades, a dedicated department, usually named ‘Revenue Assurance (RA)’, has been digging up the answers to hundreds of such questions.

Revenue Assurance is the process of ensuring that a communication service provider is billing and collecting revenue appropriate to the sales and usage of the provider’s products and services.

Revenue assurance (RA) has been an issue for telecoms for as long as telephones have existed.

Why is revenue assurance especially important in telecom companies? Because of the sheer scale of the problem. Industry estimates suggest that 10-20% of ratable billing events are incorrect or unbilled due to leakage in switch recording, inaccurate inter-carrier billing, lack of inventory usage monitoring, and invalid invoicing. The resulting industry loss is estimated at $90B annually, or 14% of gross earnings. (Source: Blahamy, Mena. Telcordia Revenue Guard Solutions Overview. June 16, 2004.)

Revenue leakage can arise at many different points in a telecom company. For example:

• Network-related leakages: call records not passed on by the switches, call records not corrected by mediation, call records rated incorrectly by the billing system, interconnect errors.
• Mediation-related leakages: prepaid/postpaid filtering issues (most commonly after a prepaid-to-postpaid migration), duplicated records, and dropped records (for example, call-forwarding and incoming-call records should be discarded by mediation, but for some reason it fails to discard them).
• Billing-related leakages: incorrect call plans, over-discounting, late billing, and so on.
• Intelligent Network (IN)-related leakages.
• GPRS setup-related leakages in the GGSN/SGSN/SASN.
• MMSC-related leakages.

Each of these control points can contribute to revenue leakage.

Business Intelligence (BI) can help identify these leakages. A BI model can be designed to test the traffic before and after it passes through each of the control points in a communication network.

An example of a BI RA report is a check of the traffic before and after the mediation system: traffic is copied (into the data warehouse) before it enters the mediation system and copied again after it exits. The main purpose of mediation is to receive the raw traffic from the MSC/MMSC/GGSN/SGSN/SASN, filter it according to business rules (for example, discarding all call-forwarding and incoming-call records, because these are not rated nowadays; a customer is not billed for receiving a call), and then send prepaid data to the IN and postpaid data to BSCS. This process can send the wrong data to the wrong system. For example, if a customer has migrated from prepaid to postpaid, the traffic should be sent to the postpaid billing system; if the mediation system doesn’t know about the migration, it will send the data to the IN rather than to BSCS for billing. This is a simple example of revenue leakage. A mediation system can make such filtering errors in many ways, causing postpaid traffic never to reach the billing system for rating.
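A minimal sketch of such a before-and-after reconciliation is shown below. It is illustrative only: the file names and column names (msisdn, subscriber_type, duration_sec) are assumptions standing in for whatever warehouse extracts a real CSP would capture on either side of mediation, not the layout of any particular mediation or billing system.

```python
import csv
from collections import Counter

def summarize(cdr_file):
    """Tally record counts and total call seconds per subscriber type (prepaid/postpaid)."""
    counts, seconds = Counter(), Counter()
    with open(cdr_file, newline="") as f:
        # Assumed columns: msisdn, subscriber_type, duration_sec
        for row in csv.DictReader(f):
            counts[row["subscriber_type"]] += 1
            seconds[row["subscriber_type"]] += int(row["duration_sec"])
    return counts, seconds

def reconcile(pre_file, post_file):
    """Report traffic that entered mediation but is unaccounted for on the way out."""
    pre_counts, pre_seconds = summarize(pre_file)
    post_counts, post_seconds = summarize(post_file)
    for sub_type in sorted(set(pre_counts) | set(post_counts)):
        dropped = pre_counts[sub_type] - post_counts[sub_type]
        lost_seconds = pre_seconds[sub_type] - post_seconds[sub_type]
        print(f"{sub_type}: {dropped} records and {lost_seconds} call seconds "
              f"unaccounted for after mediation")

if __name__ == "__main__":
    # Hypothetical warehouse extracts captured before and after the mediation system.
    reconcile("cdr_before_mediation.csv", "cdr_after_mediation.csv")
```

In practice the volume that mediation is supposed to filter out (incoming and call-forwarding records, for instance) would be reconciled separately, so that legitimate filtering is not mistaken for leakage.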

Common attributes in BI revenue assurance reporting include the following (a small sketch of how a few of them might be combined into a fact record appears after the lists):

Metrics: Billed Revenue / Originating Minutes / Carried Minutes of Use / Network Recorded Minutes / Billed MOU / Events / Bill Cut-off to Despatch / Adjustments / Disputes / Payments / Receivables / Credits / Error Count / Error Rate / Error Point / Collections / Back Bill / Order / Margin / File / Invoices / Outbound / Inbound / Roaming / Processing Fees / Transaction Costs / Cost per Bill / Inventory

Dimensions: Geography / Product / Product Group / Interconnect Operator / Time / Gender / Accounting Period / Control Point / Check Point / Payment type / Call type (Incoming, Outgoing) / Customer type (prepaid / post-paid) / Party / Channel / Package / Network Equipment
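As a rough illustration of how these attributes come together, the sketch below defines a hypothetical fact record combining a few of the metrics and dimensions above. The field names and the choice of fields are assumptions made for illustration, not a prescribed RA data model.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RevenueAssuranceFact:
    """One row of a hypothetical RA fact table: a few of the metrics above,
    keyed by a few of the dimensions above (field names are illustrative)."""
    accounting_period: date
    control_point: str            # e.g. "mediation", "rating", "invoicing"
    call_type: str                # "incoming" or "outgoing"
    customer_type: str            # "prepaid" or "postpaid"
    product_group: str
    network_recorded_minutes: float
    billed_minutes: float
    billed_revenue: float
    error_count: int

    @property
    def unbilled_minutes(self) -> float:
        """Minutes recorded by the network but never billed: the basic leakage signal."""
        return self.network_recorded_minutes - self.billed_minutes
```

Aggregating a measure like unbilled_minutes by control point and accounting period is one simple way to surface where in the chain revenue is leaking.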

It has been seen that mobile CSPs continue to lose more revenue than fixed-line CSPs, with incumbents, once again, losing the least revenue of all CSPs. There were also large geographic variances, with CSPs in North America, Asia Pacific, and Central and Eastern Europe losing more than the global average. However, our experience with CSPs around the world has shown that many of these problems are solvable by putting the right processes and systems in place. (Source: Azure website)

Category: Business Intelligence, Information Governance
2 Comments »

by: Phil Simon
04  Oct  2010

Part III: The Semantic Web and Complementary Technologies

In my previous post, I discussed the business case for the semantic web and how it is affecting customer service. In the third part of the series, I address the semantic web in the context of specific enterprise technologies.

IE: No, the Other One

Throw out the term IE and most people think of one of the following:

  • Internet Explorer
  • The Latin term id est (i.e.), which typically means that is. Example: Phil didn’t embarrass himself on the golf course today–i.e., he shot an 85. (Which I did, by the way, a few weeks ago. Polite applause…)

But there’s another type of IE and it’s critical from a semantic technology perspective: Information Extraction.

In their book Semantic Web Technologies: Trends and Research in Ontology-based Systems, John Davies, Rudi Studer, and Paul Warren define this type of IE as “a technology based on analyzing natural language in order to extract snippets of information.” IE allows users to easily find five types of information.

Contrast IE against what most enterprises rely upon today: basic information retrieval (IR). IR finds relevant text and returns a simple list. While this is useful, IR forces the user to determine the most relevant piece(s). IE, on the other hand, automatically does this analysis for the end user; it only returns the most germane results and, most important, in a superior format. This may be a spreadsheet that allows for sorting, filtering, and adding fields. In other words, there is greater context.

An Example

All of this may seem a bit abstract. Let’s make it more real. Consider searching for golfers with Google. No doubt that you know what basic Google search results look like. However, what if you performed that same search using a semantic technology? Consider the output below:

Pretty neat, eh? Note how each golfer’s picture, name, and date of birth appear by default. But what if you want to see more fields, such as earnings? No problem. Just type in earnings and you can see just how much money folks like Tiger Woods have made by being able to hit a white ball (when he’s not, er, doing other things).
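Since the screenshot itself isn’t reproduced here, a toy sketch can convey the difference between retrieval and extraction. It is purely illustrative: the three-line “corpus”, the made-up earnings figures, and the regular expression (standing in for the natural-language analysis a real IE system would perform) are all assumptions.

```python
import re

# Toy corpus with made-up figures; a real system would work over web pages or documents.
DOCUMENTS = [
    "Tiger Woods, born December 30, 1975, has career earnings of $120,000,000.",
    "Phil Mickelson, born June 16, 1970, has career earnings of $92,000,000.",
    "This page reviews the best golf courses in Scotland.",
]

def retrieve(keyword):
    """Information retrieval: return every document containing the keyword, as a flat list."""
    return [doc for doc in DOCUMENTS if keyword.lower() in doc.lower()]

def extract():
    """Information extraction: return structured records (name, date of birth, earnings)."""
    pattern = re.compile(
        r"(?P<name>[A-Z][a-z]+ [A-Z][a-z]+), born (?P<born>\w+ \d{1,2}, \d{4}), "
        r"has career earnings of \$(?P<earnings>[\d,]+)"
    )
    return [m.groupdict() for doc in DOCUMENTS if (m := pattern.search(doc))]

print(retrieve("earnings"))  # whole documents: the reader still has to dig out the facts
print(extract())             # named, sortable, filterable fields: greater context
```

The second result is what the spreadsheet-like output described above amounts to: records with named fields that can be sorted, filtered, and extended, rather than a list of documents to read through.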

Semantic Technologies

To make magic like this happen obviously requires, among other things, a great deal of technology behind the scenes. Beyond technology, however, accurate data, metadata, and tags are needed.

IE is more efficient and ultimately useful than IR. However, the technological requirements for IE far exceed those of IR. While IR can rely upon simple keywords or text, IE requires much more, often including technologies such as:

  • Natural Language Processing (NLP)
  • Artificial Intelligence
  • Machine Learning
  • Data Mining

To the layperson, all of this is irrelevant. They merely want a way to solve a problem, such as reducing the time required to find the most relevant emails. To this end, consider what one technology company, Meshin, does. It takes a semantic approach to email, using NLP and other technologies to locate relevant emails more quickly and accurately than traditional methods.

Simon Says

Information extraction trumps information retrieval. However, let’s remember that we’re in the early stages of Web 3.0 and the economy isn’t great. For these reasons, don’t expect simple IR to go away anytime soon. To be sure, accurate IR is certainly far better than the dismal search functionality of Web 1.0 in the 1990s. As semantic technologies develop, the social web matures, and the economy improves, expect search and the semantic web to improve as well.

Don’t believe me? Check out what Eric Schmidt of Google said in a recent interview with Charlie Rose. Things are going to get interesting over the next five years.

Feedback

What say you?

Category: Enterprise Search, Semantic Web
1 Comment »
