Archive for October, 2010

by: Bsomich
28  Oct  2010

Enterprise Collaboration: The New Black

In its recently issued report, “The Top 15 Technology Trends EA Should Watch: 2011 To 2013,” Forrester rates the category “Collaboration/Web 2.0/social media” second in terms of enterprise impact through 2013, behind only mobile devices and apps.

As more and more organizations make the shift towards a more socially connected workforce, many of them are missing a clear picture of what it takes to actually get there. Sharing, transparency, and teamwork are all concepts that have been around for ages but have recently taken on a new priority in organizational strategy. Some could even call it the “new black.”

Before embarking on this mission, executives and technology managers alike need to consider the requirements of making the transition. In my opinion, the organization with the greatest flexibility, creativity, and acceptance of change will be the one best able to embrace the benefits of Enterprise 2.0 and collaborative technologies. In addition to having a solid platform for knowledge sharing and collaboration, cultural considerations and change management strategies are both necessary to change the mindset of employees.

What do you think is required of an organization to fully “get” Enterprise 2.0?

Category: Information Development
3 Comments »

by: Robert.hillard
27  Oct  2010

New Master Data Management book by Berson and Dubov

In 2007 Alex Berson and Larry Dubov released their book “Master Data Management and Customer Data Integration for a Global Enterprise”. It was the first comprehensive look at the field and has been an enormous success in the market. I personally continue to buy it for my clients to read as a way of explaining the complexity of MDM. For that reason, I was delighted to receive a pre-release copy of their fully updated second edition, recognising the ongoing and rapid development of this field. The new book is called simply “Master Data Management and Data Governance”, dropping the references to CDI, which seems to have become comfortably subsumed in the profession by MDM.

The book is a complete refresh, without losing the core content in the original that made it such a good primer on the topic. There is a new chapter providing different industry perspectives, with specific language and suggestions that will be extremely useful for practitioners in financial services, telecommunications, healthcare, hospitality, manufacturing, life sciences, logistics, retail, social services, security and government intelligence. Regulatory requirements have also evolved and the book reflects many of these changes (although the book retains a US focus in this respect).

Particularly pleasing to me is the additional content (and focus, given the revised title) on data management and governance, with MDM treated as providing the tools to navigate this business-critical asset. The book provides a solid set of governance principles (including references back to MIKE2.0) that make an excellent starting point for most organizations. Berson and Dubov have also now included support for data modellers, including comparisons of approaches and the impact these options have on the overall architecture.

Finally, like the rest of the book, the chapter on vendors and their products has been completely revised reflecting the many changes in the software landscape since the first edition of the book. Anyone wanting a good summary of available technologies is going to be pleased by the detail provided in chapter 18.

The book is available from 6 December, and I will be purchasing copies for many of my clients for Christmas!

Category: Master Data Management
No Comments »

by: Bsomich
27  Oct  2010

Profile Spotlight: Nathan Jones

Nathan Jones is a Manager in Deloitte’s Information Management & Integration practice in London, specialising in information quality and enterprise data management.

He has focused on operational business intelligence, data management, data quality and data warehousing for nearly 10 years, and has worked with clients ranging from blue-chip energy and finance firms through SMEs to large and small public sector organisations.

Connect with Nathan

Category: Member Profiles
No Comments »

by: Phil Simon
25  Oct  2010

The Semantic Web, Part V: Getting Ready

This series on the semantic web started with an introduction in Part I. In Part II, I made the business case for going semantic. Part III focused on complementary technologies. In Part IV, I provided a case study of how one company is already successfully reaping the benefits of the semantic web. In this final post in the series, I’ll take a look at what organizations can do right now to get ready for the semantic web.

Note that much hinges on the type of organization. While conceptually quite similar, the specific things that a health care organization needs to do differ substantially from what a publisher needs to do. You’d never go into a hospital and ask for the latest Harry Potter book in Chinese, printed and bound while you wait. (David Siegel, author of Pull: The Power of the Semantic Web to Transform Your Business, sees a day in the future when that will actually be commonplace.) For more on the different needs and requirements of different industries, check out the agenda from the latest SemTech conference. Bottom line: One size certainly does not fit all.

Start Playing

While the semantic web is very much in its infancy (as are the technologies that will ultimately support it), it’s important to begin conceptualizing it. High-level planning is certainly part and parcel of this, but seeing some examples of semantic technologies in action will only increase excitement about the long, potentially arduous journey that your organization is about to take. One such example is Friend of a Friend (FOAF), a popular semantic application that:

…uses RDF to describe the relationships people have to other people and the “things” around them. FOAF permits intelligent agents to make sense of the thousands of connections people have with each other, their jobs and the items important to their lives; connections that may or may not be enumerated in searches using traditional web search engines. Because the connections are so vast in number, human interpretation of the information may not be the best way of analyzing them.
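
To make this a little more concrete, here is a minimal sketch of a FOAF description built with Python’s rdflib library. This is my own illustration rather than anything from the FOAF project itself, and the people and URIs are invented:

```python
# A minimal, illustrative FOAF description built with rdflib.
# The people and identifiers below are invented for the example.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import FOAF, RDF

g = Graph()
g.bind("foaf", FOAF)

alice = URIRef("http://example.org/people/alice")  # hypothetical URI
bob = URIRef("http://example.org/people/bob")      # hypothetical URI

# Describe two people and the link between them.
g.add((alice, RDF.type, FOAF.Person))
g.add((alice, FOAF.name, Literal("Alice")))
g.add((bob, RDF.type, FOAF.Person))
g.add((bob, FOAF.name, Literal("Bob")))
g.add((alice, FOAF.knows, bob))  # the connection an intelligent agent can follow

# Serialize for publication on the web.
print(g.serialize(format="turtle"))
```

Publish a file like this alongside a home page and a FOAF-aware agent can follow the foaf:knows links from person to person, which is exactly the kind of connection a traditional keyword search engine struggles to enumerate.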

As mentioned previously in this series, Google Squared is another semantic tool that people can access immediately to get their arms around what is admittedly a pretty big paradigm shift.

Get Your Data Ready

It seems to me that the biggest opponents of the semantic web have good reason for their skepticism: their data is in tatters. Organizations that cannot manage their structured data are in dire straits, for structured data is much easier to manage than its unstructured brethren. Bad data is bad data regardless of context; contextualizing millions of records is no secret sauce. Take data quality and governance seriously now and your odds of semantic success will skyrocket.

Understand that Your Mileage May Vary

As the Best Buy case study shows, results from semantic efforts can be quite impressive. However, setting the bar too high is never a good idea, so temper expectations, especially in the short term. Semantic technologies will have a network effect (read: more will benefit as more people and organizations use them). This is just like each incarnation of the web and plenty of technologies before that. Do you think that Amazon.com would have done so well if only a handful of people shopped online? Of course not. This will take time.

Expect Resistance but Resist Opponents

Organizations that want to embrace the semantic web ultimately will–and will be successful for their efforts. While there’s a great deal of work to be done, progress is made on many projects on a daily basis. Balancing the short term with the long term has always been a bedrock of intelligent management. It’s just plain silly to expect everyone to be on board with things such as:

  • what to do
  • what not to do
  • how to do it
  • when to do it

Even in progressive organizations, people will differ. Legitimate objections are one thing; carping and outward negativity are another. Ensure that your organization is staffed with the right type of forward-thinking folks.

Monitor and Learn from Other Projects

It’s silly to try to pay attention to all things semantic. However, keeping an eye on different–and successful–projects is just plain smart. While each semantic project may have its own path, websites, and conferences, we can still learn from the examples of others.

Simon Says: Technology Isn’t Enough

In researching this series, I found that perhaps the best description of the semantic web comes from ChurchCrunch:

‘contextualization’ of data and technologies: Imagine the web ‘knowing’ intimately who you are without it asking or receiving any direct input from the user or the visitor.

As I channel my inner geek, this is a very cool and powerful goal. It won’t happen overnight but, rest assured, it will happen. The question is: Will your organization be ready to take advantage of it?

In Siegel’s words, “The 21st century company will be a pull company. That is, they will set up their products, services, and information to be pulled by the customer as needed. Many of today’s IT departments and CIOs don’t get this. Semantic technologies will play an important role, because they are much more agile. But in the end, if you don’t change the company’s mindset about shifting from push to pull, all the semantic software in the world won’t save you.”

Feedback

What say you?

Category: Semantic Web
1 Comment »

by: Bsomich
21  Oct  2010

How to Make MDM Manageable Again

MDM projects are notorious for scope creep. A main reason is that we get too caught up in the features of our software systems and not enough in our goals and what we need them to do. We tend to over-customize and, in effect, overcomplicate our systems “because we can,” to the point that we forget why we started.

We need to get back to the basics of MDM to make it manageable again.  According to Steven Jones, it’s really about two things:

  1. The cross references of the core entity between all the systems in the IT estate
  2. The data quality and governance around the core entity to uplift its business value

The reality is that MDM can be easy; we just make it difficult by adding variables and changing relationships based on the constraints of whatever program we’re using. The underlying goal doesn’t change, and our approach to getting there shouldn’t change either.
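
To illustrate Jones’s first point, here is a minimal sketch (my own, with invented system names and identifiers, not anything from Jones) of a cross-reference that maps a single master identifier for a core entity to its local identifiers across the IT estate:

```python
# Illustrative only: a bare-bones master-to-source cross-reference
# for one core entity (a customer). System names and IDs are invented.
from dataclasses import dataclass, field

@dataclass
class MasterRecord:
    master_id: str
    # source system name -> that system's local identifier
    source_ids: dict[str, str] = field(default_factory=dict)

class CustomerXref:
    """Holds the cross-reference for the customer entity across systems."""

    def __init__(self) -> None:
        self._by_master: dict[str, MasterRecord] = {}

    def link(self, master_id: str, system: str, local_id: str) -> None:
        record = self._by_master.setdefault(master_id, MasterRecord(master_id))
        record.source_ids[system] = local_id

    def lookup(self, master_id: str) -> MasterRecord | None:
        return self._by_master.get(master_id)

# Usage: one customer known by different keys in CRM, billing and the warehouse.
xref = CustomerXref()
xref.link("CUST-001", "crm", "C-9182")
xref.link("CUST-001", "billing", "8800312")
xref.link("CUST-001", "warehouse", "WH-55A")
print(xref.lookup("CUST-001"))
```

Everything else in an MDM program builds on a mapping of this shape; the second point, quality and governance, is about making sure the golden record behind each master_id is trustworthy.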

MIKE2.0 offers a great framework for Information Management professionals to map their MDM strategy and provides guidance to those looking for assistance with data management projects.  Feel free to check it out when you have a moment and offer your feedback.  See something that needs improvement?  Contributions here and to the wiki are always appreciated.

Category: Information Development
No Comments »

by: Phil Simon
19  Oct  2010

A Semantic Web Case Study

This series on the semantic web started with an introduction in Part I, continued with the business case in Part II, and turned to complementary technologies in Part III. It’s now time to put theory into practice with a case study. With the requisite background out of the way, let’s look at what one company is doing with the semantic web and related technologies. And this isn’t some obscure company of which no one has heard. This is Best Buy.

Best Buy is using the GoodRelations ontology to make its sales data much more meaningful to everyone and everything, including suppliers, real and potential customers, applications, systems, and the public at large.

GoodRelations

Of course, this raises the question: What is GoodRelations? According to its website, GoodRelations is an e-commerce ontology that allows for the use of “a standardized vocabulary for product, price, and company data that

  1. can be embedded into existing static and dynamic Web pages and that
  2. can be processed by other computers

This increases the visibility of your products and services in the latest generation of search engines, recommender systems, and novel mobile or social applications.”
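
As a rough illustration of what that vocabulary looks like in practice, here is a sketch that builds a single product offering with Python’s rdflib library. The store, product, and price are invented, and this is my reading of the vocabulary rather than an official example; a production deployment would typically embed the same statements as RDFa within the product page itself:

```python
# Illustrative sketch: one product offering described with the
# GoodRelations vocabulary. The offering URI and values are invented.
from rdflib import BNode, Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

GR = Namespace("http://purl.org/goodrelations/v1#")

g = Graph()
g.bind("gr", GR)

offering = URIRef("http://example.org/store/offers/hdtv-42")  # hypothetical
price = BNode()

g.add((offering, RDF.type, GR.Offering))
g.add((offering, GR.name, Literal('42" LCD HDTV')))
g.add((offering, GR.hasPriceSpecification, price))
g.add((price, RDF.type, GR.UnitPriceSpecification))
g.add((price, GR.hasCurrency, Literal("USD")))
g.add((price, GR.hasCurrencyValue, Literal("599.99")))

print(g.serialize(format="turtle"))
```

The payoff is that a search engine or shopping agent no longer has to guess which number on the page is the price; the markup says so explicitly.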

Best Buy was smart in going semantic. It did not initially go all in. Rather, as Jay Myers points out, Best Buy initially rolled out GoodRelations across a number of stores. Buoyed by early successes, only then did the company embrace GoodRelations company-wide. Today, it’s at least “a little semantic” at each of its more than 1,000 stores.

Motivation

Simple search means that traditional SEO dictates what appears at the top of searches. In other words, it can be difficult for a company to reach its intended customers–or, more accurately, for potential customers to reach the right company. To be sure, Google is amazing at simple search, and smart folks type in additional words to provide greater context to basic searches. In the end, many find more relevant results with their queries.

However, at present, one thing is obvious: simple searches often do not find the most meaningful results. The result can be bottlenecks and frustrated customers that, in turn, reduce sales. (For more information on some of the challenges of simple search and how semantic technologies such as GoodRelations can overcome these, check out this presentation.)

If you’re anything like me, you’re not patient when trying to find something. This is not 1995, when web search first appeared. Results were laughably inaccurate and people often gave up without finding what they wanted. While we have made a great deal of progress with search, in large part thanks to two folks named Larry and Sergey and a horde of imitators, there’s still a great deal of room for improvement. We are just at the tip of the iceberg with search, as Google CEO Eric Schmidt recently pointed out on the Charlie Rose show.

Companies such as Best Buy should be commended for being proactive. They understand where the web is going and, despite being a market leader, they are avoiding The Innovator’s Dilemma.

Results

The adoption of GoodRelations has hardly been universal, and I certainly can’t claim that it’s the best or only ontology that meets retailers’ needs. But focusing on “the best” of anything often misses the point. What are the odds that everyone in any company–much less one as big as Best Buy–would agree on what to do and how to do it?

Best Buy has already seen concrete results by utilizing GoodRelations. Myers recently revealed that “since they launched their beta Semantic Product Web, augmented with GoodRelations and RDFa, they’ve seen a 30% increase in traffic to their pages. Again, there could be many reasons for this—not a perfectly controlled experiment—but the correlation is remarkable.”

Simon Says

It takes a great deal of work to transform data from a bunch of structured or unstructured junk into the accurate, contextual, and complete information capable of supporting the semantic web. Absent good data, GoodRelations or any other ontology is likely to yield disappointing results. Web 3.0–or whatever you want to call it–hinges upon technologies, good data, and proactive management. It’s refreshing to see a large company lead the way and avoid the complacency often associated with organizations typically content to maintain the status quo.

On a different level, Best Buy is instructive for refusing to take baby steps. So often with large companies (and I speak from years of personal experience), naysayers and opponents of substantive change block efforts to dramatically improve technologies, data, and management practices. Clearly, this is not the case with Best Buy. I have no doubt that they’ll benefit from deploying semantic technologies in the long term.

Feedback

What say you?

Category: Semantic Web
7 Comments »

by: Bsomich
19  Oct  2010

Profile Spotlight: John Morris

John Morris is a highly experienced Data Migration professional with a 25-year history in information technology.

For the last ten years he has specialised solely in delivering Data Migration projects. During that time he has worked on some of the biggest migration projects in the UK and for some of the major systems integrators.

John is the author of “Practical Data Migration”, the top-selling book on data migration, published by the BCS (British Computer Society).

John is a regular speaker at conferences in the UK and abroad. He is passionate about elevating Data Migration to the position it deserves as a skill-set and marketplace in its own right.

In 2006 he co-founded Iergo as a specialist data migration consultancy. Iergo specialises solely in Data Migration, bringing its many years of experience to the problem domain.

Connect with John.

Category: Member Profiles
No Comments »

by: Bsomich
16  Oct  2010

Video: Getting Familiar with MIKE2.0

New to MIKE2.0?  Check out this community intro video to see how you can put an open methodology to work for you.

Category: Information Development
No Comments »

by: Bsomich
15  Oct  2010

10 Guiding Principles for Better Business Intelligence

Business Intelligence (BI) refers to the skills, processes, technologies, applications and practices used to support decision making, and is a crucial component of strategy for businesses to operate successfully. MIKE2.0 has a valuable wiki article on this topic that shares guiding principles to help information management professionals develop a strong BI program.

Below are the basics:

1) Keep the strategy at the vision level

Establish the Blueprint and never start from scratch – use best practice frameworks. Keep things at a strategic level while still following a diligent approach to requirements.

2) Use a requirements-driven approach

Even when using off-the-shelf information models, requirements must drive the solution. Plan to go through multiple iterations of requirements gathering.

3) Develop a BusinessTime model for synchronisation

Be prepared to handle growing requirements for the synchronisation of data in real-time into the analytical environment. Focus heavily on the “time dimension” as part of your architecture.

4) Use a well-architected approach

An analytical environment is not a dumping ground for data. Data that is not integrated or conformed does not provide the value users want.

5) Investigate & fix DQ problems early

Data quality issues make it difficult to integrate data into the analytical environment and can make user reports worthless. Start with data profiling to identify high-risk areas in the early stages of the project, as in the sketch below.
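
As a rough illustration of the kind of early profiling this principle recommends, here is a minimal sketch using pandas; the columns and data are invented:

```python
# Illustrative data profiling: per-column null rates and cardinalities
# to flag high-risk fields early. Column names and values are invented.
import pandas as pd

df = pd.DataFrame({
    "customer_id": [1, 2, 2, 4, None],
    "country": ["UK", "uk", "US", None, "US"],
})

profile = pd.DataFrame({
    "null_rate": df.isna().mean(),        # share of missing values per column
    "distinct": df.nunique(dropna=True),  # cardinality per column
})
print(profile)
print(df["country"].value_counts())  # reveals the "UK" vs "uk" inconsistency
```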

6) Use standards to reduce complexity

The Business Intelligence environment is inherently complex – to maximise benefits to the user the system must be easy to use. One of the most important things that can be done is to develop a set of open and common standards related to data, integration and infrastructure.

7) Build a metadata-driven solution

A comprehensive approach to metadata management is the key to reducing complexity and promoting reusability across the infrastructure. A metadata-driven approach makes it easier for users to understand the meaning of data and to trace its lineage across the environment.
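
As a simple illustration of the idea (the field, its definition, and its sources are invented), metadata can pair each element’s business meaning with its lineage, so users and tools can answer both “what does this mean?” and “where did it come from?”:

```python
# Illustrative metadata catalog entry: business meaning plus lineage.
# The field name, definition, and source names are invented.
from dataclasses import dataclass

@dataclass
class FieldMetadata:
    name: str
    definition: str   # business meaning, in users' terms
    source: str       # where the data originates
    derivation: str   # how it is transformed on the way in

catalog = {
    "net_revenue": FieldMetadata(
        name="net_revenue",
        definition="Invoice revenue minus returns and discounts",
        source="erp.sales_invoice",
        derivation="SUM(line_amount) - SUM(discount) per invoice",
    ),
}

meta = catalog["net_revenue"]
print(f"{meta.name}: {meta.definition} (from {meta.source})")
```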

8) Store data at a detailed and integrated level

Aggregation and integration are far easier when you store data at a detailed level. If you don’t store detailed analytical data, some users will typically not get all the information they want.

9) Design for continuous, increment-based delivery

Analytical environments should be built through a “journey”.

10) Use a detailed, method-based approach

Methods such as MIKE2.0 can help provide a task-oriented approach with detailed supporting artifacts. 

Need more info?  Comments or additions?  Please let us know in the comments section below.

Category: Information Development
No Comments »

by: Bsomich
13  Oct  2010

Profile Spotlight: Themos Kalafatis

Themos Kalafatis is a predictive analytics consultant at MetronLabs in Greece. Prior to that, he was a Solutions Consultant at HP and a Data Mining Consultant at Intracom SA. He has over 10 years of Data Mining and 5 years of Text Mining experience, with specific application of Data Mining techniques in:

1) Banking
2) Insurance
3) Telecoms
4) Real Estate
5) Retail

As well as application of Text Mining and Information Extraction in:

1) Banking
2) Financial Markets (Sentiment – News Categorization)
3) Real Estate
4) User Opinions – Brand Sentiment / Reputation
5) Social Media (Twitter)

He blogs regularly on the subject of business intelligence and predictive analytics.

Connect with Themos.

Category: Member Profiles
No Comments »
