Archive for the ‘Business Intelligence’ Category

by: Gil Allouche
20 Aug 2014

Lessons from How Amazon Uses Big Data

Entering the field of big data is no cakewalk. Between the many nuances of big data architecture and the relative newness of the field, implementation can overwhelm many executives. Where it doesn't overwhelm, there's often simply a lack of experience and understanding, and that gap leads management, all too often, to use big data inefficiently.

One of the best ways for companies to learn about big data and how they can effectively implement it is by analyzing those who have used big data successfully for years. Amazon is one of those companies.

There's no doubting the data expertise of Amazon. One of the key innovators in big data technology, the global giant has given us lesson after lesson on how to successfully gather, analyze and then implement data analytics. Not only has it effectively used big data for its own purposes, but with services like Amazon Elastic MapReduce, the company has successfully leveraged its own data expertise to help others.

Amazon is full of lessons on how to use big data successfully; here are a few.

Focus on the Customer

One of Amazon's premier uses of big data has been customer recommendations. If you have an Amazon account, take a quick look at your Amazon home page: as you scroll down, you'll see recommendations based on your browsing history, along with sale items based on what you've bought and searched for in the past. While this kind of personalization is commonplace today, Amazon was one of the first companies to do it.

Amazon has put a focus on using its big data to give its customers a personalized and focused buying experience. Interestingly, when given this personalized experience, customers tend to buy more than they otherwise would. It's a simple solution for many problems.
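As a toy illustration only (Amazon's actual systems are far more sophisticated, and nothing here is based on them), the core "customers who bought X also bought Y" idea can be sketched as simple co-occurrence counting over purchase histories:

```python
from collections import Counter
from itertools import permutations

# Hypothetical purchase histories: one list of item IDs per customer.
histories = [
    ["kettle", "toaster", "mug"],
    ["kettle", "mug"],
    ["toaster", "bread_knife"],
    ["kettle", "toaster"],
]

# Count how often each ordered pair of items appears in the same basket.
co_counts = Counter()
for basket in histories:
    for a, b in permutations(set(basket), 2):
        co_counts[(a, b)] += 1

def recommend(item, top_n=3):
    """Return the items most often bought alongside `item`."""
    scores = {b: n for (a, b), n in co_counts.items() if a == item}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("kettle"))  # e.g. ['toaster', 'mug']
```

Real recommenders layer ranking models, browsing signals and real-time updates on top, but the principle of mining purchase data for customer-facing suggestions is the same.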

For companies implementing big data, a key focus needs to be the consumer. If companies want to succeed with big data, or at all, the consumer has to come first. The more satisfied customers are, the better off you'll be.

Be Focused

It's impossible to know all of Amazon's uses of big data. Still, another lesson we can learn from the online retailing giant is to maintain an extreme focus on what data is gathered and how it is used.

Amazon gathers extremely large amounts of data each second, let alone each day and year. At that rate it would be easy to lose focus on what data is being gathered, why it's being gathered, and how exactly it can help the customer. But Amazon doesn't let that happen. It's very strategic, both in gathering data and in implementing changes and upgrades based on that data.

Too many companies let big data overwhelm them. They don’t have a clear focus when they begin, and they never end up obtaining one. The data they’ve gathered goes to waste and they completely miss the opportunity and potential.

Big Data Works

Amazon is one of the most successful companies today. It's a true global empire. Consumers shop on Amazon for everything. It's a leader in the e-book, e-reader and tablet industries, and it has recently made forays into TV boxes and phones.

Behind all this success is a rock-solid determination to gather data and use it efficiently. It’s gone where other companies were afraid to go and it’s achieved success other companies wish they had.

Among the many factors contributing to that success, Amazon has leveraged its big data expertise in extremely innovative and effective ways. It's taken big data to the next level. It has shown, time and again, that big data works. It's shown that if companies want to take their operations and success to the next level, then big data is a key component.

Make Big Data Work for You

Amazon is a great example of big data use. It's not the size of the company or the size of the data that matters most. As Amazon has illustrated, it's about tailoring to the customer's needs, staying focused, having a plan, and actually using the technology. Big data works for Amazon; now make it work for you.

Category: Business Intelligence

by: RickDelgado
14 Aug 2014

A Look at Cyber Security Trends for 2014

We're now more than halfway through 2014, and as with any year, the world of technology has been rapidly progressing and evolving. This year, there's been more discussion than ever about topics such as the benefits of big data, the Internet of Things, mobile technology, and how to make the most of cloud computing. There's been plenty of excitement so far and much more is on the way, but in the fast-moving technological environment we now live in, there's also reason to worry. Security in particular, whether it's network security, computer security, or IT security, is foremost on many business leaders' minds. To prepare for what the future may hold, it's important to look back at some of the recent trends to see the threats and solutions having the biggest impact on cyber security.


Securing Internet Connections


Perhaps one of the biggest movements of recent months is the expansion of devices connected to the internet. While this can be seen in the adoption of smartphones and tablets all over the world, it also applies to other everyday objects that now find themselves with web access. That expansion is only expected to accelerate, with the number of internet-connected objects predicted to explode from 10 billion today to 50 billion by 2020. Many use the term the Internet of Things to describe the phenomenon, and while it opens up innovative new options for making life easier and more connected, it also presents a greater attack surface for attackers to take advantage of. That's why companies are looking to make the Internet of Things more secure, but not by simply expanding traditional security procedures, which would prove ineffective. One method aims to reduce the attack surface itself, limiting the possibilities of an infiltration. It relies on basic defensive measures such as frequent software patching, advanced user identity and network management, and the elimination of infrastructure dark space. These strategies can reduce the attack surface by as much as 70%.


Cloud Security


In the past few years, businesses have begun truly utilizing cloud computing in new ways. Now more than ever, cloud providers are offering new services that can help companies be more efficient and productive. But as businesses move to the cloud, so do attackers. The reason is simple: when businesses move to the cloud, their corporate data often moves there as well. Cloud security is very much a work in progress, and attackers have been eager to infiltrate the cloud to steal not just business data but people's personal data too. Attackers may in fact hold sensitive data for ransom, a form of blackmail, in order to extract value of their own from it. Cloud vendors will need to provide stronger password capabilities and reinforced cloud data access policies to ensure this doesn't happen.


Increased Mobile Malware


Nearly everyone has a smartphone these days, and this fact has not gone unnoticed by attackers. While smartphones are certainly convenient, they are also frighteningly vulnerable. One study shows 80% of smartphones have no malware protection at all, which makes them a prime target for cybercriminals looking to gain access to them. The amount of malware aimed at iPhones and Android devices is growing exponentially, as is the number of devices that have been infected. Of particular concern is the increase in Android malware, but whatever device you use, securing mobile technology will be a gradual process. Improvements are already being made, but it will take time before they become common features on smartphones.


(Tweet This: One study shows 80% of smartphones have no malware protection. #mobile #security)


Third Party Security


Cyber security is also becoming a much higher priority for third-party organizations. You've likely heard of the massive security breach that hit Target, costing the mega-corporation tens of millions of dollars, not to mention compromising sensitive information for millions of customers. The attackers were able to gain access to Target's systems by infiltrating a third-party organization that already had access to the Target network. Breaching the third party made access to the larger internal system much easier. In the wake of this damaging breach, companies are now working harder than ever to secure their supply chains, with more emphasis being placed on increasing security for third parties. The process won't be easy, but as seen in Target's case, the alternative is simply too costly.


Security will never be perfect. Businesses will have to be constantly vigilant as they search for attackers intending to inflict harm and steal data. While no security measure can deal with all present and future threats flawlessly, companies are working hard to make sure cyber security is ready to meet these challenges. As security improves, businesses and individuals can rest a little easier knowing their information is protected.


Category: Business Intelligence

by: RickDelgado
07 Aug 2014

How Big Data Can Help the Sales Team

With technology playing such a prominent role in businesses today, people in all fields are being impacted in new, exciting ways. Perhaps one field dealing with some particularly big challenges is sales. For decades, sales teams have operated under certain strategies that have proven effective, but with the rise of the internet and social media, the balance of power between salesperson and customer has subtly yet noticeably shifted. Not only is there more information available to customers, but sales teams now have access to unprecedented amounts of data. In fact, in one survey more than 80% of salespeople said they feel challenged by how much data is out there and the amount of time it takes to research what they need. With these challenges out there, sales teams need help in establishing relationships with more promising customers and getting more sales opportunities. Big data may in fact be the key to reaching these goals, and some of the answers may lie in unexpected places.


(Tweet This: One survey says 82% of salespeople are challenged by the amount of #bigdata out there. #sales)


The way salespeople interact with prospective customers has changed drastically in the past few years. Traditionally, customers would contact a business while in search of a product or service and be put in touch with a sales representative, who would in turn have all the information and answers for the customer. That’s rarely how things work anymore. Customers are now able to get all the information they need through internet sites, social media, and other networks that weren’t available in previous decades. That means they are aware of competitors’ information, company or product weaknesses, and what close friends and associates are saying. That puts salespeople at a disadvantage, but big data can help with this problem. In this case, the answers can come from the marketing team. For years, marketers have been collecting data on customers, discovering their interests, passions, and values, all while gathering what insights they can on what motivates them. Relaying this data to sales teams can help salespeople better understand potential customers, getting to know them and being able to respond to their concerns even if customers don’t voice them.


Big data can also help sales teams identify the prospects that are the most promising. This can be done through predictive analytics: applying big data to better predict which customers are most interested and will respond more positively to a sales pitch. Big data allows businesses to analyze each account they have and correctly pick the right time and method for dealing with them. This strategy has the potential to lead to some impressive results. In one case, business outsourcing company ADP's sales team used big data tools for a year. The result was 52% more sales opportunities for the company, along with a 29% increase in sales productivity.
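To make "predictive lead scoring" concrete, here is a hedged sketch using a generic logistic-regression model; this is not ADP's tooling, and the features and data are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per historical lead:
# [site_visits, emails_opened, demo_requested]
X = np.array([
    [12, 5, 1],
    [ 1, 0, 0],
    [ 7, 3, 1],
    [ 2, 1, 0],
    [ 9, 4, 0],
    [ 0, 0, 0],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = the lead converted to a sale

# Fit on historical outcomes, then score fresh leads.
model = LogisticRegression().fit(X, y)

# Estimated conversion probability is what ranks the call list.
new_leads = np.array([[8, 2, 1], [1, 1, 0]])
for features, p in zip(new_leads, model.predict_proba(new_leads)[:, 1]):
    print(features, f"-> conversion probability {p:.2f}")
```

In practice the features would come from the CRM platform, and the model would be retrained as new outcomes arrive.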


The results from using big data have many companies rushing to implement their own big data strategies. Luckily, there are plenty of big data sales apps and tools to choose from. Most of them revolve around integrating with a customer relationship management (CRM) platform designed specifically for sales divisions. These sales apps have plenty of advantages, including capabilities in predictive lead prioritization, predictive lead scoring, and other areas. As more companies utilize advances in flash array storage technology, the tools will become more available and cost-effective. These apps and tools, when utilizing big data properly, can help develop and foster sales relationships and identify changes in a buyer’s behavior, helping sales teams respond in real time to these changes along with fluctuations in the sales cycle. Only through big data can these apps parse the data and recognize the patterns to make these accurate predictions.


Another helpful tool that big data provides is the ability to evaluate members of the sales team. In much the same way big data can determine the most promising customers and how to approach them, the performance of a sales representative can also be analyzed to better see who is meeting expectations, who is doing better, and who is not meeting the established standards. By identifying where salespeople need improvement, managers can create more personalized training programs, effectively getting rid of a blanket approach that isn’t the best way to handle an entire sales team. This contextual coaching can even be evaluated based on performances in real time.


There is still much progress to be made for sales teams looking to make use of big data. The potential is there, and those that have taken advantage of it are already seeing great results. With a more effective sales team, businesses will prosper and be more productive, gaining new customers all the time while keeping the customers they already have.


Category: Business Intelligence, Enterprise Data Management

by: Alandduncan
19 Jul 2014

Data Quality Profiling Considerations

Data profiling is an excellent diagnostic method for gaining additional understanding of the data. Profiling the source data helps inform both business requirements definition and detailed solution designs for data-related projects, as well as enabling data issues to be managed ahead of project implementation.

Profiling of a data set should be measured with reference to an agreed set of Data Quality Dimensions (e.g. those proposed in the recent DAMA white paper).

Profiling may be required at several levels:

• Simple profiling with a single table (e.g. Primary Key constraint violations)
• Medium complexity profiling across two or more interdependent tables (e.g. Foreign Key violations)
• Complex profiling across two or more data sets, with applied business logic (e.g. reconciliation checks)

Note that field-by-field analysis is required to truly understand the data gaps.
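To make the three levels concrete, here is a minimal pandas sketch of what such checks can look like; the table and column names are invented for illustration, and a real profiling tool would go much further:

```python
import pandas as pd

# Hypothetical extracts from the source system.
customers = pd.DataFrame({"customer_id": [1, 2, 2, 3],
                          "name": ["Ann", "Bob", "Bob", "Cal"]})
orders = pd.DataFrame({"order_id": [10, 11, 12],
                       "customer_id": [1, 2, 9],   # 9 has no customer record
                       "amount": [100.0, 250.0, 80.0]})
ledger = pd.DataFrame({"customer_id": [1, 2],
                       "posted_amount": [100.0, 200.0]})

# Level 1 - simple, single table: primary key violations (duplicate customer_id).
pk_violations = customers[customers["customer_id"].duplicated(keep=False)]

# Level 2 - medium, across tables: foreign key violations
# (orders referencing customers that don't exist).
fk_violations = orders[~orders["customer_id"].isin(customers["customer_id"])]

# Level 3 - complex, with business logic: reconciliation check
# (order totals vs. ledger postings per customer, beyond a small tolerance).
order_totals = orders.groupby("customer_id")["amount"].sum()
ledger_totals = ledger.groupby("customer_id")["posted_amount"].sum()
recon_breaks = (order_totals - ledger_totals).abs().loc[lambda d: d > 0.01]

print(pk_violations, fk_violations, recon_breaks, sep="\n\n")
```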

Any data profiling analysis must not only identify the issues and underlying root causes, but must also identify the business impact of the data quality problem (measured in terms of effectiveness, efficiency, and risk). This will help identify any value in remediating the data – great for your data quality Business Case. Root cause analysis also helps identify any process outliers and drives out requirements for remedial action on managing any identified exceptions.

Be sure to profile your data and take baseline measures before applying any remedial actions – this will enable you to measure the impact of any changes.

I strongly recommend that Data Quality Profiling and root-cause analysis be undertaken as an initiation activity in all data warehouse, master data and application migration project phases.

Category: Business Intelligence, Data Quality, Enterprise Data Management, Information Development, Information Governance, Information Management, Information Strategy, Information Value, Metadata

by: Alandduncan
01 Jul 2014

Information Requirements Gathering: The One Question You Must Never Ask!

Over the years, I’ve tended to find that asking any individual or group the question “What data/information do you want?” gets one of two responses:

“I don’t know.” Or;

“I don’t know what you mean by that.”

End of discussion, meeting over, pack up, go home, nobody is any the wiser. Result? IT makes up the requirements based on what they think the business should want, the business gets all huffy because IT doesn't understand what they need, and general disappointment and resentment ensues.

Clearly for Information Management & Business Intelligence solutions, this is not a good thing.

So I’ve stopped asking the question. Instead, when doing requirements gathering for an information project, I go through a workshop process that follows the following outline agenda:

Context setting: Why information management / Business Intelligence / Analytics / Data Governance* is generally perceived to be a “good thing”. This is essentially a very quick précis of the BI project mandate, and should aim at putting people at ease by answering the question “What exactly are we all doing here?”

(*Delete as appropriate).

Business Function & Process discovery: What do people do in their jobs – functions & tasks? If you can get them to explain why they do those things – i.e. to what end purpose or outcome – so much the better (though this can be a stretch for many.)

Challenges: what problems or issues do they currently face in their endeavours? What prevents them from succeeding in their jobs? What would they do differently if they had the opportunity to do so?

Opportunities: What is currently good? What existing capabilities (systems, processes, resources) are in place that could be developed further or re-used/re-purposed to help achieve the desired outcomes?

Desired Actions: What should happen next?

As a consultant, I see it as part of my role to inject ideas into the workshop dialogue too, using a couple of question forms specifically designed to provoke a response:

“What would happen if…X”

“Have you thought about…Y”

“Why do you do/want…Z”.

Notice that as the workshop discussion proceeds, the participants will naturally start to explore aspects that relate to later parts of the agenda – this is entirely ok. The agenda is there to provide a framework for the discussion, not a constraint. We want people to open up and spill their guts, not clam up. (Although beware of the “rambler” who just won’t shut up but never gets to the point…)

Notice also that not once have we actively explored the "D" or "I" words. That's because as you explore the agenda, any information requirements will either naturally fall out of the discussion as it proceeds, or else you can infer the information requirements arising based on the other aspects of the discussion.

As the workshop attendees explore the different aspects of the session, you will find that the discussion will touch upon a number of different themes, which you can categorise and capture on-the-fly (I tend to do this on sheets of butcher's paper tacked to the walls, so that the findings are shared and visible to all participants). Comments will typically fall into the following broad categories:

* Functions: Things that people do as part of doing business.
* Stakeholders: People who are involved (including helpful people elsewhere in the organisation – follow up with them!)
* Inhibitors: Things that currently prevent progress (these either become immediate scope-change items if they are show-stoppers for the current initiative, or else they form additional future project opportunities to raise with management)
* Enablers: Resources to make use of (e.g. data sets that another team hold, which aren't currently shared)
* Constraints: "Non-negotiable" aspects that must be taken into account. (Note: I tend to find that all constraints are actually negotiable and can be overcome if there is enough desire, money and political will.)
* Considerations: Things to be aware of that may have an influence somewhere along the line.
* Source systems: Places where data comes from.
* Information requirements: Outputs that people want.

Here’s a (semi) fictitious example:

e.g. ADD: “What does your team do?”

Workshop Victim Participant #1: “Well, we’re trying to reconcile the customer account balances with the individual transactions.”

ADD: "And why do you want to do that?"

Workshop Victim Participant #2: "We think there's a discrepancy in the warehouse stock balances, compared with what's been shipped to customers. The sales guys keep their own database of customer contracts and orders and Jim's already given us a dump of the data, while finance run the accounts receivable process. But Sally the Accounts Clerk doesn't let the numbers out under any circumstances, so basically we're screwed."

Functions: Sales Processing, Contract Management, Order Fulfilment, Stock Management, Accounts Receivable.
Stakeholders: Warehouse team, Sales team (Jim), Finance team.
Inhibitors: Finance don’t collaborate.
Enablers: Jim is helpful.
Source Systems: Stock System, Customer Database, Order Management, Finance System.
Information Requirements: Orders (Quantity & Price by Customer, by Salesman, by Stock Item), Dispatches (Quantity & Price by Customer, by Salesman, by Warehouse Clerk, by Stock Item), Financial Transactions (Value by Customer, by Order Ref)

You will also probably end up with the attendees identifying a number of immediate self-assigned actions arising from the discussion – good ideas that either haven't occurred to them before or have sat on the "To-Do" list. That's your workshop "value add" right there…

Workshop Victim Participant #1: “I could go and speak to the Financial Controller about getting access to the finance data. He’s more amenable to working together than Sally, who just does what she’s told.”

Happy information requirements gathering!

Category: Business Intelligence, Data Quality, Enterprise Data Management, Information Development, Information Governance, Information Management, Information Strategy, Information Value, Master Data Management, Metadata

by: RickDelgado
25 Jun 2014

Cloud Computing Trends to Watch for in 2014 and Beyond

Businesses tapping into the potential of cloud computing now make up the vast majority of enterprises out there. If anything, it’s those companies disregarding the cloud that have fallen way behind the rest of the pack. According to the most recent State of the Cloud Survey from RightScale, 87% of organizations are using the public cloud. Needless to say, businesses have figured out just how advantageous it is to make use of cloud computing, but now that it’s mainstream, experts are trying to predict what’s next for the incredibly useful technology. Here’s a look at some of the latest trends for cloud computing for 2014 and what’s to come in the near future.

1. IT Costs Reduced

There are a multitude of reasons companies have gotten involved with cloud computing. One of the main reasons is to reduce operational costs. This has proven true so far, but as more companies move to the cloud, those savings will only increase. In particular, organizations can expect to see a major reduction in IT costs. Adrian McDonald, president of EMEA at EMC, says the unit cost of IT could decrease by more than 38%. This development could allow for newer, more creative services to come from the IT department.

2. More Innovations

Speaking of innovations, the proliferation of cloud computing is helping business leaders use it for more creative solutions. At first, many felt cloud computing would allow companies to run their business in mostly the same way, only with a different delivery model. But with cloud computing becoming more common, companies are finding ways to obtain new insights into new processes, effectively changing the way they did business before.

3. Engaging With Customers

To grow a business, one must attract new customers and hold onto those that are already loyal. Customer engagement is extremely important, and cloud computing is helping companies find new ways to do just that. By powering systems of engagement, cloud computing can optimize how businesses interact with customers. This is done with database technologies along with collecting and analyzing big data, which is used to create new methods of reaching out to customers. With cloud computing’s easy scalability, this level of engagement is within the grasp of every enterprise no matter the size.

4. More Media

Another trend to watch out for is the increased use of media among businesses, even if they aren’t media companies. Werner Vogels, the vice president and CTO of Amazon, says that cloud computing is giving businesses media capabilities that they simply didn’t have before. Companies can now offer daily, fresh media content to customers, which can serve as another avenue for revenue and retention.

5. Expansion of BYOD

Bring Your Own Device (BYOD) policies are already quite popular with companies around the world. With cloud computing reaching a new high point, expect BYOD to expand even faster. With so many wireless devices in use, companies need the cloud to store and access valuable company data. IT personnel are also finding ways to use cloud services through mobile device management, mainly to organize and keep track of each worker's activities.

6. More Hybrid Cloud

Whereas before there was a lot of debate over whether public or private cloud should be used by a company, it has now become clear that businesses are choosing to use hybrid clouds. The same RightScale cloud survey mentioned before shows that 74% of organizations have already developed a hybrid cloud strategy, with more than half of them already using it. Hybrid clouds combine private cloud security with the power and scalability of public clouds, basically giving companies the advantages of both. It also allows IT to come up with customized solutions while maintaining a secure infrastructure.

These are just a few of the trends that are happening as cloud computing expands. Its growth has been staggering, fueling greater innovation in companies as they look to save on operational costs. As more and more businesses get used to what the cloud has to offer and how to take full advantage of its benefits, we can expect even greater developments in the near future. For now, the technology will continue to be a valuable asset to every organization that makes the most of it.

Category: Business Intelligence

by: Alandduncan
24 May 2014

Data Quality Profiling: Do you trust in the Dark Arts?

Why estimating Data Quality profiling doesn’t have to be guess-work

Data Management lore would have us believe that estimating the amount of work involved in Data Quality analysis is a bit of a “Dark Art,” and to get a close enough approximation for quoting purposes requires much scrying, haruspicy and wet-finger-waving, as well as plenty of general wailing and gnashing of teeth. (Those of you with a background in Project Management could probably argue that any type of work estimation is just as problematic, and that in any event work will expand to more than fill the time available…).

However, you may no longer need to call on the services of Severus Snape or Mystic Meg to get a workable estimate for data quality profiling. My colleague from QFire Software, Neil Currie, recently put me onto a post by David Loshin, which proposes a more structured and rational approach to estimating data quality work effort.

At first glance, the overall methodology that David proposes is reasonable in terms of estimating effort for a pure profiling exercise – at least in principle. (It's analogous to similar "bottom-up" calculations that I've used in the past to estimate ETL development on a job-by-job basis, or creation of standard Business Intelligence reports on a report-by-report basis.)

I would observe that David’s approach is predicated on the (big and probably optimistic) assumption that we’re only doing the profiling step. The follow-on stages of analysis, remediation and prevention are excluded – and in my experience, that’s where the real work most often lies! There is also the assumption that a pre-existing checklist of assessment criteria exists – and developing the library of quality check criteria can be a significant exercise in its own right.

However, even accepting the “profiling only” principle, I’d also offer a couple of additional enhancements to the overall approach.

Firstly, even with profiling tools, the inspection and analysis process for any “wrong” elements can go a lot further than just a 10-minute-per-item-compare-with-the-checklist, particularly in data sets with a large number of records. Also, there’s the question of root-cause diagnosis (And good DQ methods WILL go into inspecting the actual member records themselves). So for contra-indicated attributes, I’d suggest a slightly extended estimation model:

* 10 mins: for each "Simple" item (standard format, no applied business rules, fewer than 100 member records)
* 30 mins: for each "Medium" complexity item (unusual formats, some embedded business logic, data sets up to 1000 member records)
* 60 mins: for any "Hard" high-complexity items (significant, complex business logic, data sets over 1000 member records)

Secondly, and more importantly – David doesn't really allow for the human factor. It's always people that are bloody hard work! While it's all very well to do a profiling exercise in-and-of-itself, the results need to be shared with human beings – presented, scrutinised, questioned, validated, evaluated, verified, justified. (Then acted upon, hopefully!) And even setting aside the "Analysis" stages onwards, there will need to be some form of socialisation within the "Profiling" phase.

That’s not a technical exercise – it’s about communication, collaboration and co-operation. Which means it may take an awful lot longer than just doing the tool-based profiling process!

How much socialisation? That depends on the number of stakeholders, and their nature. As a rule-of-thumb, I’d suggest the following:

* Two hours of preparation per workshop (if the stakeholder group is "tame"; double it if there are participants who are negatively inclined).
* One hour face-time per workshop (Double it for “negatives”)
* One hour post-workshop write-up time per workshop
* One workshop per 10 stakeholders.
* Two days to prepare any final papers and recommendations, and present to the Steering Group/Project Board.

That’s in addition to David’s formula for estimating the pure data profiling tasks.
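For what it's worth, the rules of thumb above can be rolled into a rough calculator. The numbers are the ones quoted above; the function shape, parameter names and the eight-hour working day are my own illustrative assumptions:

```python
import math

def profiling_effort_hours(simple_items, medium_items, hard_items,
                           stakeholders, hostile_group=False):
    """Rough data-quality profiling estimate from the rules of thumb above."""
    # Item-level inspection: 10 / 30 / 60 minutes per item by complexity.
    inspection_mins = simple_items * 10 + medium_items * 30 + hard_items * 60

    # Socialisation: one workshop per 10 stakeholders.
    workshops = math.ceil(stakeholders / 10)
    prep = 2 * (2 if hostile_group else 1)       # hours of prep per workshop
    face_time = 1 * (2 if hostile_group else 1)  # hours of face-time per workshop
    write_up = 1                                 # hours of write-up per workshop
    socialisation_hours = workshops * (prep + face_time + write_up)

    # Final papers and Steering Group presentation: two working days (assume 8h days).
    final_papers_hours = 2 * 8

    return inspection_mins / 60 + socialisation_hours + final_papers_hours

# Example: 40 simple, 15 medium, 5 hard items; 25 stakeholders, some hostile.
print(profiling_effort_hours(40, 15, 5, 25, hostile_group=True))  # ~56 hours
```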

Detailed root-cause analysis (Validate), remediation (Protect) and ongoing evaluation (Monitor) stages are a whole other ball-game.

Alternatively, just stick with the crystal balls and goats – you might not even need to kill the goat anymore…

Category: Business Intelligence, Data Quality, Enterprise Data Management, Information Development, Information Governance, Information Management, Information Strategy, Information Value, Master Data Management, Metadata

by: Alandduncan
16 May 2014

Five Whiskies in a Hotel: Key Questions to Get Started with Data Governance

A “foreign” colleague of mine once told me a trick his English language teacher taught him to help him remember the “questioning words” in English. (To the British, anyone who is a non-native speaker of English is “foreign.” I should also add that as a Scotsman, English is effectively my second language…).

“Five Whiskies in a Hotel” is the clue – i.e. five questioning words begin with “W” (Who, What, When, Why, Where), with one beginning with “H” (How).

These simple question words give us a great entry point when we are trying to capture the initial set of issues and concerns around data governance – what questions are important/need to be asked.

* What data/information do you want? (What inputs? What outputs? What tests/measures/criteria will be applied to confirm whether the data is fit for purpose or not?)
* Why do you want it? (What outcomes do you hope to achieve? Does the data being requested actually support those questions & outcomes? Consider Efficiency/Effectiveness/Risk Mitigation drivers for benefit.)
* When is the information required? (When is it first required? How frequently? Particular events?)
* Who is involved? (Who is the information for? Who has rights to see the data? Who is it being provided by? Who is ultimately accountable for the data – both contents and definitions? Consider multiple stakeholder groups in both recipients and providers)
* Where is the data to reside? (Where is it originating from? Where is it going to?)
* How will it be shared? (How will the mechanisms/methods work to collect/collate/integrate/store/disseminate/access/archive the data? How should it be structured & formatted? Consider Systems, Processes and Human methods.)

Clearly, each question can generate multiple answers!
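If you want to capture those answers in a structured way, here is a speculative sketch of a simple capture record; the class, field names and example content are my own invention, not any formal standard:

```python
from dataclasses import dataclass, field

@dataclass
class GovernanceQuestionCapture:
    """One record per data set under discussion; all fields are illustrative."""
    what: list = field(default_factory=list)    # data/information wanted; fitness criteria
    why: list = field(default_factory=list)     # outcomes and benefit drivers
    when: list = field(default_factory=list)    # timing, frequency, trigger events
    who: list = field(default_factory=list)     # recipients, providers, accountable owners
    where: list = field(default_factory=list)   # origins and destinations
    how: list = field(default_factory=list)     # mechanisms, formats, structures

capture = GovernanceQuestionCapture(
    what=["Monthly sales by region", "Must reconcile to the ledger within 1%"],
    why=["Board reporting pack"],
    who=["CFO (accountable)", "Sales ops (provider)"],
)
print(capture)
```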

Aside: in the Doric dialect of North-East of Scotland where I originally hail from, all the “question” words begin with “F”:
Fit…? (What?) e.g. “Fit dis yon feel loon wint?” (What does that silly chap want?)
Fit wye…? (Why?) e.g. “Fit wye div ye wint a’thin’?” (Why do you want everything?)
Fan…? (When?) e.g. "Fan div ye wint it?" (When do you want it?)
Fa…? (Who?) e.g. “Fa div I gie ‘is tae?” (Who do I give this to?)
Far…? (Where?) e.g. “Far aboots dis yon thingumyjig ging?” (Where exactly does that item go?)
Foo…? (How?) e.g. “Foo div ye expect me tae dae it by ‘e morn?” (How do you expect me to do it by tomorrow?)

Whatever your native language, these key questions should get the conversation started…

Remember too, the homily by Rudyard Kipling:

"I keep six honest serving-men
(They taught me all I knew);
Their names are What and Why and When
And How and Where and Who."

Category: Business Intelligence, Data Quality, Enterprise Data Management, Information Development, Information Governance, Information Management, Information Strategy, Information Value, Master Data Management, Metadata

by: Alandduncan
13 May 2014

Is Your Data Quality Boring?

Is this the kind of response you get when you mention to people that you work in Data Quality?!

Let’s be honest here. Data Quality is good and worthy, but it can be a pretty dull affair at times. Information Management is something that “just happens”, and folks would rather not know the ins-and-outs of how the monthly Management Pack gets created.

Yet I’ll bet that they’ll be right on your case when the numbers are “wrong”.


So here’s an idea. The next time you want to engage someone in a discussion about data quality, don’t start by discussing data quality. Don’t mention the processes of profiling, validating or cleansing data. Don’t talk about integration, storage or reporting. And don’t even think about metadata, lineage or auditability. Yaaaaaaaaawn!!!!

Instead of concentrating on telling people about the practitioner processes (which of course are vital, and fascinating no doubt if you happen to be a practitioner), think about engaging in a manner that is relevant to the business community, using language and examples that are business-oriented. Make it fun!

Once you've got the discussion flowing in terms of the impacts, challenges and inhibitors that get in the way of successful business operations, then you can start to drill into the underlying data issues and their root causes. More often than not, a data quality issue is symptomatic of a business process failure rather than being an end in itself. By fixing the process problem, the business user gains a benefit, and the data is enhanced as a by-product. Everyone wins (and you didn't even have to mention the dreaded DQ phrase!)

Data Quality is a human thing – that's why it's hard. As practitioners, we need to be communicators. Lead the thinking, identify the impact and deliver the value.

Now, that’s interesting!

Category: Business Intelligence, Data Quality, Enterprise Data Management, Information Governance, Information Management, Information Strategy, Information Value, Master Data Management, Metadata

by: Alandduncan
12 May 2014

The Information Management Tube Map

Just recently, Gary Allemann posted a guest article on Nicola Askham's blog, which made an analogy between Data Governance and the London Tube map. (Nicola is also on Twitter. See also Gary Allemann's blog, Data Quality Matters.)

Up until now, I've always struggled to think of a way to represent all of the different aspects of Information Management/Data Governance; the environment is multi-faceted, with the interconnections between the component capabilities being complex and non-hierarchical. I've sometimes alluded to there being a network of relationships between elements, but this has been a fairly abstract concept that I've never been able to adequately illustrate.

And in a moment of perspiration, I came up with this…

I'll be developing this further as I go, but in the meantime, please let me know what you think.

(NOTE: following on from Seth Godin's plea for more sharing of ideas, I am publishing the Information Management Tube Map under Creative Commons License Attribution Share-Alike V4.0 International. Please credit me where you use the concept, and I would appreciate it if you could reference back to me with any changes, suggestions or feedback. Thanks in advance.)

Category: Business Intelligence, Data Quality, Enterprise Data Management, Information Development, Information Governance, Information Management, Information Strategy, Information Value, Master Data Management, Metadata
