
Posts Tagged ‘data modelling’

by: Ocdqblog
31 Jul 2013

Metadata and the Baker/baker Paradox

In his recent blog post What’s the Matter with ‘Meta’?, John Owens lamented the misuse of the term metadata — the metadata about metadata — when discussing matters within the information management industry, such as metadata’s colorful connection with data quality.  As Owens explained, “the simplest definition for ‘Meta Data’ is that it is ‘data about data’.  To be more precise, metadata describes the structure and format (but not the content) of the data entities of an enterprise.”  Owens provided several examples in his blog post, which also received great commentary.

Some commenters resisted oversimplifying metadata as data about data, including Rob Karel who, in his recent blog post Metadata, So Mom Can Understand, explained that “at its most basic level, metadata is something that helps to better describe the data you’re trying to remember.”
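A small, concrete illustration may help. In the sketch below (Python, with the entity and attribute names invented for the example), the metadata block describes the structure and format of a Customer entity in Owens' sense, while the rows are the data it describes:

    # Metadata in Owens' sense: structure and format, not content.
    metadata = {
        "entity": "Customer",
        "attributes": [
            {"name": "customer_id", "type": "INTEGER",     "nullable": False},
            {"name": "last_name",   "type": "VARCHAR(50)", "nullable": False},
            {"name": "profession",  "type": "VARCHAR(50)", "nullable": True},
        ],
    }

    # The data itself: the metadata tells us which column is which.
    data = [
        (1, "Baker", "baker"),
        (2, "Smith", "teacher"),
    ]

    for row in data:
        for attr, value in zip(metadata["attributes"], row):
            print(f"{attr['name']} ({attr['type']}): {value}")

The metadata knows there is a profession column; only the data knows that the first row holds a baker named Baker.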

This metadata crisis reminded me of the book Moonwalking with Einstein: The Art and Science of Remembering Everything, where author Joshua Foer described a strange kind of forgetfulness that psychologists have dubbed the Baker/baker Paradox.

As Foer explained it: “A researcher shows two people the same photograph of a face and tells one of them the guy is a baker and the other that his last name is Baker.  A couple days later, the researcher shows the same two guys the same photograph and asks for the accompanying word.  The person who was told the man’s profession is much more likely to remember it than the person who was given his surname.  Why should that be?  Same photograph.  Same word.  Different amount of remembering.”

“When you hear that the man in the photo is a baker,” Foer explained, “that fact gets embedded in a whole network of ideas about what it means to be a baker: He cooks bread, he wears a big white hat, he smells good when he comes home from work.”

“The name Baker, on the other hand,” Foer continued, “is tethered only to a memory of the person’s face.  That link is tenuous, and should it dissolve, the name will float off irretrievably into the netherworld of lost memories.  (When a word feels like it’s stuck on the tip of the tongue, it’s likely because we’re accessing only part of the neural network that contains the idea, but not all of it.)”

“But when it comes to the man’s profession,” Foer concluded, “there are multiple strings to reel the memory back in.  Even if you don’t at first remember that the man is a baker, perhaps you get some vague sense of breadiness about him, or see some association between his face and a big white hat, or maybe you conjure up a memory of your own neighborhood bakery.  There are any number of knots in that tangle of associations that can be traced back to his profession.”

Metadata makes data better, helping us untangle the knots of associations among the data and information we use every day.  Whether we are bakers or Bakers, professions or people described by other metadata, the better we can describe ourselves and our business, the better our business will be.

Although we may not always agree on the definitions demarcating metadata, data, and information, let’s not forget that what matters most is enabling better business the best we can.

Category: Data Quality, Information Development, Metadata

by: Sean.mcclowry
17 Apr 2008

Open Source and Open Standards for IM in Capital Markets

As part of MIKE2.0, we believe we bring a unique perspective to standards development. Our approach is to create a collaborative community for developing Information Management standards, including those that apply to Capital Markets.

Some interesting work around open source and open standards is developing in relation to market data:

  • Market Data Definition Language (MDDL) is an eXtensible Markup Language (XML)-derived specification that facilitates the interchange of information about the financial instruments used throughout the world’s markets. A community is being built around MDDL, including a wiki-based development environment; a brief sketch of working with MDDL-style data follows below.
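To give a feel for the format, here is a minimal parsing sketch using Python's standard library. The element names and values below are illustrative of MDDL's style only and should not be read as the actual MDDL schema:

    import xml.etree.ElementTree as ET

    # Hypothetical MDDL-style snippet: the element names are
    # illustrative only, not taken from the real MDDL schema.
    doc = """
    <instrumentDomain>
      <instrumentIdentifier>
        <code scheme="ISIN">US0378331005</code>
        <name>Apple Inc.</name>
      </instrumentIdentifier>
      <close>
        <last currency="USD">172.50</last>
      </close>
    </instrumentDomain>
    """

    root = ET.fromstring(doc)
    code = root.find("instrumentIdentifier/code")
    last = root.find("close/last")
    print(code.get("scheme"), code.text)           # ISIN US0378331005
    print(last.get("currency"), float(last.text))  # USD 172.5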

With open content and collaborative technologies, it’s easy for these projects to work together, and we’ve started doing this through MIKE2.0 by referencing these projects.

Category: Enterprise Data Management

by: Robert.hillard
12 Oct 2007

A way to measure your data models

MIKE2.0 uses “small world” measures to assess data models (see Small Worlds Data Transformation Measure). The challenge many users have found is that for large models, it is very difficult to calculate these metrics. Thomas Isaksson has created an open source tool, with the code available on SourceForge, which automates the process and supports any RDBMS for which you have a JDBC driver, as well as ERwin models (via CSV output).
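For readers curious about what such a calculation involves, here is a minimal sketch (not the tool's actual code) that treats entities as nodes and foreign-key relationships as edges, then computes standard small-world metrics with the networkx library. The entity names are invented, and the real MIKE2.0 measures are defined in the Small Worlds Data Transformation Measure article:

    import networkx as nx

    # Entities become nodes; foreign-key relationships become edges.
    # In practice these pairs would be read from the RDBMS catalogue
    # (e.g. via JDBC metadata) or an ERwin CSV export.
    foreign_keys = [
        ("order", "customer"),
        ("order", "product"),
        ("order_line", "order"),
        ("order_line", "product"),
        ("invoice", "order"),
        ("invoice", "customer"),
    ]

    g = nx.Graph(foreign_keys)

    print("entities:", g.number_of_nodes())
    print("relationships:", g.number_of_edges())
    print("average shortest path:", nx.average_shortest_path_length(g))
    print("average clustering:", nx.average_clustering(g))

Short average path lengths combined with high clustering, relative to a random graph of the same size, are the hallmarks of a small-world network.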

We hope that this initiative will further encourage the adoption of these data model metrics and help demystify the traditional modelling process. Please take the time to use, test and extend this beta code.

Category: Enterprise Data Management, MIKE2.0

by: Sean.mcclowry
10 Sep 2007

How Web 2.0 and Small Worlds Theory may shape data modelling

In Small Worlds Data Transformation Measures, Rob wrote about the challenges of data modelling in today’s complex, federated enterprise. This is explained through an overview of Graph Theory, which provides the foundation for the relational modelling techniques first developed by Ted Codd over 30 years ago.

Relational Theory has been one of the stalwarts of software engineering. It is governed by Codd’s rules, which have stayed fundamentally intact despite the rapid advances in other areas of software engineering – a testament to their effectiveness and simplicity.

While evolutions have taken place over time and there have been some variations to approach (e.g. dimensional modelling), the changes have built on the relational theory foundation and abided by its design principles.

But is it time for a change? Are some of the issues we are seeing today the result of the foundation starting to crumble under complexity? Or is it that there are so many violations of Codd’s rules? While the latter is certainly a contributing factor, it may be that relational theory is starting to wear under the weight of our modern systems infrastructure – and the issues will continue to get worse. While there does not appear to be an equivalent of relational theory that will address the issues we see today, we think Small Worlds Theory and Web 2.0 may provide some ideas for a new approach.

Small Worlds Theory helps provide the rationale for a different approach to modelling information. It tells us that for a complex system to be manageable it must be designed as an efficient network, and that many systems (biological, social or technological) follow this pattern. Although the information across organizations is highly federated, it does not inter-relate through an efficient network. Rather than building a single enterprise data model, it is the services model, including the modelling of “data in motion”, that should be incorporated into the comprehensive approach.
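The “efficient network” argument can be made concrete with a small sketch. Assuming a Python environment with networkx, the comparison below contrasts full point-to-point federation among a set of systems with a hub-and-spoke (services) topology; the system count is illustrative:

    import networkx as nx

    # With n systems, full point-to-point federation needs n*(n-1)/2
    # interfaces; a hub-and-spoke (services) topology needs only n-1,
    # at the cost of roughly one extra hop between systems.
    n = 12

    point_to_point = nx.complete_graph(n)
    hub_and_spoke = nx.star_graph(n - 1)  # node 0 acts as the hub

    for name, g in [("point-to-point", point_to_point),
                    ("hub-and-spoke", hub_and_spoke)]:
        print(name,
              "| interfaces:", g.number_of_edges(),
              "| avg path length:",
              round(nx.average_shortest_path_length(g), 2))

    # point-to-point | interfaces: 66 | avg path length: 1.0
    # hub-and-spoke  | interfaces: 11 | avg path length: 1.83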

In addition to better modelling of federated data, new techniques should also bring in unstructured content. This includes information from the “informal network”, such as that developed in wikis and blogs. While there are standards for adding structure to unstructured content, their uptake has been slow. People prefer a quick and easy approach to classification, especially for content that is more informal in nature.

Therefore, the approach may involve using categories and taxonomies to bring together collaborative forms of communication and link them to the formal network. Both Andi Rindler and Jeremy Thomas have discussed some of the work we are doing in this area on the MIKE2.0 project in their blog posts. We’re also starting to see the implementation of some very cool ideas for dynamically bringing tagging concepts together, such as the Tagline Generator.
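As a rough sketch of what linking the informal network to the formal one might look like, the Python snippet below maps free-form tags from blogs and wikis onto a controlled taxonomy. The tags, the taxonomy entries and the normalisation rules are all invented for illustration:

    # Formal taxonomy: controlled category paths keyed by normalised tags.
    taxonomy = {
        "data-quality": "Information Development / Data Quality",
        "metadata":     "Information Development / Metadata",
        "data-models":  "Enterprise Data Management / Data Modelling",
    }

    # Informal, free-form tags as they might appear on blog posts.
    post_tags = ["Metadata", "data modelling", "random-musings"]

    def classify(tag):
        """Normalise a free-form tag and look it up in the taxonomy."""
        key = tag.lower().replace(" ", "-").replace("modelling", "models")
        return taxonomy.get(key, "Unclassified")

    for tag in post_tags:
        print(tag, "->", classify(tag))

    # Metadata -> Information Development / Metadata
    # data modelling -> Enterprise Data Management / Data Modelling
    # random-musings -> Unclassified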

In summary, while an approach based on a mathematical foundation would be required to provide a solution equivalent to Codd’s, and the “semantic web” remains a grand vision, we may chip away at the problem through a variety of techniques. Just as Search is already providing a common mechanism for data access, other techniques may help with information federation and unstructured content.

Category: Enterprise Data Management, Information Strategy
