Four Layers Of Information


Despite the arguments about the best technical architecture to provide information to the business, almost every organisation has a number of things in common. First, most have a management structure split into non-executive supervisors (usually a board), an executive team and an army of middle managers. Second, they have an enormous appetite for complex data at every level. Finally, in every industry sector, there is terminology which defines activity and success.

Consider the two extremes of business.

At the top of the management tree are a group of executives. Each has a short tenure, often less than two years. During that time, they need to marshal the information they have in order to achieve an agenda of corporate change.

At the bottom, or foundation, of the enterprise are the operational processes, systems and people who make the business run. These processes and systems are rich in raw data, data which is highly de-normalised; that is, there are many duplications of content and little integration across business processes.

Between these two extremes sits an army of middle managers who spend vast amounts of time responding to executive requests for information by mapping the operational data into metrics.

Every enterprise, and in particular every leader, has a preferred strategy for tackling the challenges that the market, stakeholders or the sector creates. This strategy needs to be measured in terms that a board or executive leadership can understand and support. These measures are usually straightforward metrics; examples include unit gross margins, resource utilisation and return on equity. The metrics used to drive the business change rapidly as both the leadership and organisational strategies evolve.

Because of the rushed nature of the requests made by executives for data (after all, they only have a small window of opportunity to demonstrate their success), few quality controls are put in place along the way. This lack of controls persists despite the worldwide move to greater financial and process regulation in almost every aspect of business and government, since those controls have almost exclusively been applied to the operational activities of the enterprise.

While the tenure of the executive team is usually short, the ranks of middle management usually include a substantial body of staff who have been in place for an extended period and hold considerable corporate memory. These staff are usually frustrated by requests for the same data from generation after generation of executive management, with only small differences between the requirements. They are similarly frustrated that there is usually little appetite to put in place a more strategic framework to provide this content.

The information layers

At a logical level, the enterprise has four layers of information. At the top and bottom, as already described, are the metrics and operational data respectively. To make sense of the operational data there has to be a layer of normalisation; it is simply the only abstract tool available which integrates and describes data in atomic terms.

For all the reasons that we’ve described to date, it is difficult to interpret a normalised model directly. The natural tendency is to create dimensions and, formally or informally, define a dimensional model. Such a view sits between the normalised and metric layers of the enterprise.

[Figure: Four layers]
[Figure: Four layers with integration]
Considered together, the four layers can be visualised as a pyramid. Of course, without a proper architecture, each layer needs extensive integration using spreadsheets and other tools.

Each layer of information has its own characteristics.

Metrics are dependent on both the organisational structure and the strategy of the business, with both changing rapidly. This is because the executives defining the strategy and hence the metrics have a high rate of turnover. The dependency on the organisation structure is due to divisional reporting requirements.

The dimensional view streamlines products and divisions into consistent conformed dimensions; however, the particular schemas that are defined depend on the metrics that are required and hence on the strategy of the day. As a result, the dimensional view also changes rapidly. Given that the organisational structure usually changes marginally more often than the corporate strategy, the dimensional view is slightly more stable than the metric view.

The principles of third-normal form modelling mean that the normalised view of the enterprise should be independent of both strategy and organisation structure.

The normalised model should describe the fundamental business data as generated by the underlying business processes and not be biased towards the information that is required for analysis. For this reason, the normalised view that develops over time should be very stable.

The operational data is geared towards the front-end systems which are operated within individual departments or divisions, and hence is highly dependent on the organisational structure.

Are they real?

The layered view of enterprise data is not only an idealised approach, it also reflects the reality for almost all organisations. In most cases, however, these layers exist in an informal, virtual or ad-hoc manner.

The first step to understanding why this is the case is to examine the following two transitions: from the operational data to the dimensional view, and from the normalised model to the enterprise metrics, in each case skipping the normalised or dimensional view respectively.

In the first case, we’ll attempt to make the transition of data from an operational view to the dimensional view. Recall that a dimensional model consists of a fact table linked to multiple conformed (i.e., consistent) dimension tables. The conformed dimension tables can, in turn, link to further fact tables.
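As a minimal sketch of that structure (the table names, part codes and figures below are illustrative assumptions, not drawn from any real system), a fact table simply holds measures plus keys into shared dimension tables:

```python
# Illustrative star schema: one fact table referencing conformed dimensions.
# All names and values here are assumptions made for this sketch.

part_dim = {
    1: {"internal_code": "P-001", "description": "Widget"},
    2: {"internal_code": "P-002", "description": "Bracket"},
}

month_dim = {
    200801: {"label": "January 2008"},
    200802: {"label": "February 2008"},
}

# Each fact row holds measures plus foreign keys into the dimensions.
stock_fact = [
    {"part_key": 1, "month_key": 200801, "quantity_on_hand": 43},
    {"part_key": 2, "month_key": 200801, "quantity_on_hand": 40},
]

# Because part_dim is conformed (shared), a second fact table such as an
# orders fact could reference the same keys and be analysed consistently.
for row in stock_fact:
    part = part_dim[row["part_key"]]
    month = month_dim[row["month_key"]]["label"]
    print(part["internal_code"], month, row["quantity_on_hand"])
```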

[Figure: Manufacturing supply chain]
Consider a manufacturing supply system, which is divided into internal and external supply. That is, the parts that are manufactured within other members of the group versus parts coming from external suppliers.
The internal stock transfers are handled by one system while the external purchases are handled by another. In each case, there is a list of parts, shipping dates and quantities. Naturally, because they are different systems, the data looks quite different. For the sake of a simple example, assume that the only difference between the two systems is that for internal stock movements, parts are described using an internal code while external purchases are described using an industry-standard code. The assembly system (a third system) contains a mapping from the external code to the internal code.
[Figure: Manufacturing supply chain 2]
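To make the example concrete, here is a hedged sketch of what the three operational sources might hold; the field names, codes and quantities are invented for illustration and not taken from any particular system:

```python
# Hypothetical extracts from the three operational systems described above.
# Field names, codes and quantities are illustrative assumptions.

# Internal stock transfers: parts identified by internal codes.
internal_transfers = [
    {"internal_code": "P-001", "ship_date": "2008-03-14", "quantity": 50},
    {"internal_code": "P-002", "ship_date": "2008-03-21", "quantity": 25},
]

# External purchases: parts identified by an industry-standard code.
external_purchases = [
    {"industry_code": "IND-9001", "ship_date": "2008-03-18", "quantity": 40},
]

# Assembly system: mapping from the industry code back to the internal code.
assembly_mapping = {"IND-9001": "P-001"}

# A naive integration simply translates external rows via the mapping so that
# both feeds can be expressed in internal part codes.
for purchase in external_purchases:
    internal = assembly_mapping[purchase["industry_code"]]
    print(internal, purchase["ship_date"], purchase["quantity"])
```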

A dimensional model that provides simple information about part availability, tracking estimated stock-on-hand based on current and future orders, is simple to conceive.

[Figure: Supply chain dimensional model]
The part register would provide the amount of existing stock using internal part codes; the internal stock transfers system would indicate the amount of stock that is due to arrive (again using internal part codes); and the external purchasing system would provide similar information, but this time using the industry part codes.

In the first instance, you might consider the conversion to be trivial, with a simple one-for-one mapping; perhaps the external code could even be an attribute of the internal part dimension. The translation requires a small amount of mental gymnastics, but nothing that couldn’t be computed without resorting to an intermediary database.

However, such a mapping rarely exists; if it did, the different systems would have adopted the same coding system. It has to be assumed that one internal part code could refer to more than one industry part number. Perhaps colour doesn’t matter internally, but externally red, green and blue are coded differently. Similarly, one external part number could refer to more than one internal part code, perhaps distinguishing the country of origin (possibly needed for regulatory reasons).
[Figure: Supply chain mapping]

Such a mapping is still sufficiently simple that it could be completed using in-memory variables during an extract, transform and load process (commonly referred to as ETL); however, that doesn’t change the fact that the in-memory transformation is effectively representing the data in a normalised form. As has already been shown, small additions to the logic and interfaces to other business processes will mean that the transformation model becomes increasingly complex, rapidly reaching a point where it needs to be implemented as a physical database. Regardless of whether this data is held permanently, it is in some way normalised before becoming dimensional.
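A sketch of that in-memory transformation, assuming a many-to-many mapping table of the kind just described (all codes and quantities are invented), shows that the intermediate structure is already, in effect, a normalised association table:

```python
# In-memory ETL sketch: resolving a many-to-many code mapping before loading
# a dimensional view. All codes and quantities are illustrative assumptions.

# One internal code maps to several industry codes (e.g. colour variants),
# and one industry code maps to several internal codes (e.g. country of origin).
code_mapping = [
    {"internal_code": "P-001", "industry_code": "IND-9001-RED"},
    {"internal_code": "P-001", "industry_code": "IND-9001-BLUE"},
    {"internal_code": "P-002", "industry_code": "IND-9002"},
    {"internal_code": "P-003", "industry_code": "IND-9002"},
]

external_purchases = [
    {"industry_code": "IND-9001-RED", "quantity": 40},
    {"industry_code": "IND-9002", "quantity": 10},
]

# Build a lookup from industry code to candidate internal codes. This dict is,
# in effect, a normalised association table held in memory.
lookup = {}
for row in code_mapping:
    lookup.setdefault(row["industry_code"], []).append(row["internal_code"])

for purchase in external_purchases:
    candidates = lookup[purchase["industry_code"]]
    # Ambiguous mappings need a business rule; this is exactly the kind of
    # logic that grows until a physical database is required.
    if len(candidates) > 1:
        print("Ambiguous mapping for", purchase["industry_code"], candidates)
    else:
        print(candidates[0], purchase["quantity"])
```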

[Figure: Supply chain, more complex]
Consider now the second transformation, from a normalised model through to the metrics. Again, consider the same business problem, but this time the stock-on-hand data has been represented in an extended normalised model.

Two new concepts have been added: the individual shipment, which represents the amount of stock received, and the draw-down on each shipment (which would also connect to the manufacturing process model). We can imagine various metrics that an executive would want to see, but consider a very simple metric which indicates the quantity of parts at a point in time compared to the average over time: (total unit quantity of parts) / (average total unit quantity of parts).
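A minimal sketch of those two additions (the entity names, shipment figures and draw-down figures below are assumptions, chosen so that the results line up with the table that follows): quantity-on-hand for a part is the sum of its shipments less the draw-downs against them.

```python
# Normalised sketch of the two new concepts: shipments and draw-downs.
# Entity names and figures are illustrative assumptions.

shipments = [
    {"shipment_id": 1, "part_number": 1, "received": "2008-01-10", "quantity": 60},
    {"shipment_id": 2, "part_number": 2, "received": "2008-01-15", "quantity": 50},
]

draw_downs = [
    # Each draw-down consumes stock from one shipment (and would also link
    # into the manufacturing process model).
    {"shipment_id": 1, "drawn": "2008-01-20", "quantity": 17},
    {"shipment_id": 2, "drawn": "2008-01-25", "quantity": 10},
]

def quantity_on_hand(part_number):
    """Sum of a part's shipments minus the draw-downs against those shipments."""
    own = {s["shipment_id"] for s in shipments if s["part_number"] == part_number}
    received = sum(s["quantity"] for s in shipments if s["shipment_id"] in own)
    drawn = sum(d["quantity"] for d in draw_downs if d["shipment_id"] in own)
    return received - drawn

print(quantity_on_hand(1))  # 60 - 17 = 43 (part 1, end of January in the table below)
print(quantity_on_hand(2))  # 50 - 10 = 40 (part 2, end of January)
```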

The first step would be to get a quantity-on-hand for each part at each point in time (perhaps, for simplicity, an end-of-month position would be appropriate). This could be represented as a table derived from the aggregate of each part’s shipments and draw-downs.

Part number | Month         | Quantity-on-hand
1           | January 2008  | 43
1           | February 2008 | 27
1           | March 2008    | 35
2           | January 2008  | 40
2           | February 2008 | 39
2           | March 2008    | 45
3           | January 2008  | 20
3           | February 2008 | 25
3           | March 2008    | 29

From this simple list, the total parts on hand were 103 in January, 91 in February and 109 in March 2008, so the average number of parts on hand in any given month is 101. The parts on hand in March 2008 were 109, which means that the metric of aggregate parts to average parts is 109/101 = 1.08.
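The same arithmetic, sketched in code against the table above (the data is a literal transcription of the table; only the code and its layout are mine):

```python
# Worked computation of the metric from the quantity-on-hand table above.

quantity_on_hand = {
    ("January 2008", 1): 43, ("January 2008", 2): 40, ("January 2008", 3): 20,
    ("February 2008", 1): 27, ("February 2008", 2): 39, ("February 2008", 3): 25,
    ("March 2008", 1): 35, ("March 2008", 2): 45, ("March 2008", 3): 29,
}

months = ["January 2008", "February 2008", "March 2008"]

# Total parts on hand per month: January 103, February 91, March 109.
monthly_totals = {
    m: sum(q for (month, _), q in quantity_on_hand.items() if month == m)
    for m in months
}

average = sum(monthly_totals.values()) / len(monthly_totals)  # 101.0
march = monthly_totals["March 2008"]                          # 109

print(round(march / average, 2))  # 1.08
```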

A closer examination of the table shows that it is in the same form as the dimensional analysis shown earlier. Again, it could be argued that this model could be implemented as an in-memory process during the extract, transform and load (ETL) process that creates the metric, but it doesn’t take much imagination to see that the addition of just one or two further dimensions will extend the complexity of this transformation beyond what can practically be recomputed in memory every time the metric is required.

Further, most executives, when presented with a metric such as this one, will follow up with questions about the breakdown. If the result of 1.08 is not acceptable, then the executive might wish to see a breakdown by part, which would require middle managers to repeat the calculation, extracting the next level of granularity. More likely, they will anticipate the next requirement and store the dimensional result in a spreadsheet.
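That drill-down, sketched against the same table (again, the figures come from the table above; only the code is mine):

```python
# Drill-down of the same metric by part, reusing the quantity-on-hand table.

quantity_on_hand = {
    ("January 2008", 1): 43, ("January 2008", 2): 40, ("January 2008", 3): 20,
    ("February 2008", 1): 27, ("February 2008", 2): 39, ("February 2008", 3): 25,
    ("March 2008", 1): 35, ("March 2008", 2): 45, ("March 2008", 3): 29,
}

for part in (1, 2, 3):
    history = [q for (_, p), q in quantity_on_hand.items() if p == part]
    march = quantity_on_hand[("March 2008", part)]
    average = sum(history) / len(history)
    print(part, round(march / average, 2))

# Part 1 is exactly at its average (1.0), while parts 2 and 3 are above it
# (roughly 1.09 and 1.18) -- the kind of breakdown sitting behind the 1.08.
```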

Regardless of whether the dimensional results are kept for a period or are purely transitory, it is almost always necessary, as it was in this example, to create a dimensional view of the normalised data in order to create a metric.

Selling the architecture

Middle managers are generally frustrated by the amount of work that they have to undertake to provide short-term metrics for new executive management. Attempts to promote more sustainable solutions and correct errors that permeate enterprise decision making are often met with an executive response that “now is not the right time” and that “you have to survive the short term to be part of the long-term”. It is important to remember that the executive generally has a short tenure and little appetite for anything that they see as requiring a longer attention span.

Ironically, there is another group that is both more senior and more likely to be a willing sponsor: the board or, in the case of public organisations, the political executive. These senior stakeholders usually have a longer tenure than the executive team. They are also in the position of having greater legal accountability.

Often a leadership team is put in place based on its plans to implement a clean set of metrics to describe the business to investors. The board is ultimately accountable for the quality of information that is provided to investors, and it is entirely possible that errors ignored by the executive team could result in an investor lawsuit in future years when problems emerge or a middle manager blows the whistle.

Because non-executive boards are usually drawn from the ranks of retired corporate leaders, their experience generally pre-dates the information revolution and they have little understanding of the volume of data that is flowing around the enterprise. Often they assume that standard audit processes are enough to guarantee that they are meeting their diligence obligations. However, a quick analysis of the information used by investors or, in the case of public agencies, the public, shows that they rely increasingly on sophisticated measures that are drawn from non-ledger sources deep inside the business.

The argument for implementing the architecture is made stronger by the reality that the layers exist anyway. The point is that the data is already being created by temporary processes, which simply need to be discovered and their content recorded. As a first step, this at least provides a more robust audit trail that the non-executive leadership team can look back to in the event of future problems. Further, it provides a baseline that can be used to support the transition between executive teams.

Having launched a pincer movement on the executive team from below and above, the architects can make a virtue of necessity by rapidly providing decision-support systems which allow for the follow-up questioning that has not previously been possible.

During the first implementations, it is important to remember that the role of the architecture is to record data that is already being created rather than try to solve every content problem at once. It is far more valuable to have content with a known level of accuracy than it is to have a dataset of a higher quality but for which that level of quality is not understood. By simply encoding in databases data that is being created in spreadsheets, the organisation is gaining an enormous amount of protection and opening the door to productivity improvements as teams begin to share complex data management processes.

Further, new business opportunities to package and sell data in different forms often manifest themselves. In almost every business, where there is complex data there is a third-party who can make effective use of that content. Strategists should look beyond customer data when considering third-party interest in datasets that the organisation generates. Suppliers with complex supply chain processes can extract greater efficiencies if they have more accurate demand metrics. Retailers with shelf-space can sell advertising and other facilities for a greater price if they can demonstrate better targeting of demographics. Financial institutions can better share risk if their partners can gain real-time information about facilities under threat of default.
