
Archive for July, 2011

by: Bsomich
27  Jul  2011

Profile Spotlight: Daniel Galassi


Daniel Galassi

Daniel Galassi is an experienced consultant and a big supporter of open standards. He has more than seven years of experience on business intelligence, data warehousing, and application integration projects across various industries. His current expertise is in OBIEE, where he focuses on designing and building BI solutions and applying best practices, design principles, and relevant controls. Daniel has a strong background in the banking and financial services, telecommunications, retail, and fast-moving consumer goods industries.

Taking advantage of the experience gained in previous application integration projects, he is able to effectively advise clients on ways to make business intelligence solutions work together with other information systems.

Connect with Daniel.

Category: Member Profiles
1 Comment »

by: Phil Simon
25  Jul  2011

The Sea Change and Enterprise 1.5

“Man is a tool-using animal. Without tools he is nothing, with tools he is all.”

Thomas Carlyle

The Sea Change is an oft-used metaphor connoting transformation. Its origin stems from a phrase in the song “Full fathom five” in Shakespeare’s The Tempest. In this post, I take a look at the sea change taking place in many organizations.

During the age of Enterprise 1.0, many organizations began:

  • provisioning hardware
  • granting email, Internet, and VPN access
  • handling security and administrative issues
  • assisting with the deployments of ERP and CRM applications
  • implementing dashboards or some type of business intelligence tool
  • embracing the basic tenets of governance
  • dealing with more stringent regulatory requirements, such as those stemming from Sarbanes-Oxley

Over the last three to five years, many organizations have made significant strides in modernizing their applications and IT departments. This sea change has involved:

  • provisioning mobile devices
  • building apps or connecting to different application programming interfaces (APIs)
  • working with different lines of business (LOBs) to deploy collaborative tools
  • handling vastly increased amounts of both structured and unstructured data
  • moving legacy apps to the cloud

In other words, they seem to have moved beyond Enterprise 1.0. At the same time, though, they have fallen short of Enterprise 2.0. Many organizations are works in progress, no doubt hurt by The Great Recession and attendant budget cuts.

Where are we now?

While we are not out of the woods yet, IT budgets have reversed their downward trend. However, let’s not overstate the importance of being able to hire a few people or upgrade antiquated servers. Organizations cannot embrace this brave new world that easily. In fact, many continue to struggle getting their arms around clouds, social business, and mobility–something pointed out by observers at the 2011 Enterprise 2.0 Conference in Boston, MA.

The average organization is somewhere between Enterprise 1.0 and Enterprise 2.0. Call it Enterprise 1.5. And that’s alright.

Simon Says: Getting to Where We Need to Be

Foolish is the organization whose leaders get stuck on monikers such as Enterprise 2.0. Does terminology really matter to the store manager in a large retail organization trying to make better decisions? How much productivity and revenue is lost when organizations force employees to do things manually when a mobile app would automate a key process?

Focusing on priorities, objectives, and bottom-line results should be the goal of every employee, department, division, and organization. It’s all fine and dandy to deploy social and cloud-based applications equipped with the latest bells and whistles. Maybe a software vendor will publish a flattering case study about your company’s proactive and visionary management. You might even speak at a conference in front of a bunch of people. But does any of this ultimately matter to your employees, colleagues, and shareholders?

The quote at the beginning of the post is apropos: We need tools to survive and do the work. Don’t lose sight of the fact, however, that the tools are a means to an end, not the end in and of themselves.

Feedback


What say you?

Category: Enterprise Data Management, Enterprise2.0
1 Comment »

by: Robert.hillard
24  Jul  2011

CIOs need to measure the right things

If you’re a Chief Information Officer (CIO), there are three things that your organization expects of you: 1) keep everything running; 2) add new capabilities; and 3) do it all as cheaply as possible.  The metrics that CIOs typically use to measure these things are counts of outages, the number of projects delivered, and budget variances.  The problem with this approach is that it fails to take account of complexity.

When I talk with senior executives, regardless of their role, the conversation inevitably turns to the frustration they feel about the cost and complexity of doing even seemingly simple things such as preparing a marketing campaign, adding a self-service capability or combining two services into one.  No matter which way you look at it, it costs more to add or change even simple things in organizations because of the increasing complexity that a generation of projects has left behind as its legacy.  It should come as no surprise that innovation seems to come from greenfield startups, many of which have been funded by established companies whose own legacy stymies experimentation and agility.

This doesn’t have to be the case.  If a CIO accepts the assertion that complexity caused by the legacy of previous projects is the enemy of agility, then they should ask whether they are explicitly measuring the complexity that current and future projects are adding to the enterprise.  CIOs need to challenge themselves to engineer their business in such a way that it is both flexible and agile with minimal complexity compared to a startup.

The reason that I spend so much time writing about information rather than processes or business functions is that the modern enterprise is driven by information.  So much so that metrics tracking the management and use of information are very effective predictors of issues with the development of new processes and of obstacles to the delivery of new functionality.  There appear to be no accurate measures of the agility of enterprise technology that focus only on processes or functions and ignore information, but there are measures of information that can safely ignore processes and functions, because well-organized information assets enable new processes and functions to be created with ease.

The CIO who wants to ensure their organization has the capability to easily implement new functions in the future should look to measure how much information the organization can handle without introducing disproportionate complexity.  The key is in structuring information assets in such a way as to ensure that complexity is compartmentalized within tightly controlled units with well-understood boundaries and properly defined interfaces to other information assets.  These interfaces act as dampeners of complexity or turbulence, allowing problems or changes to be constrained and their wider impact minimized.

Creating such silos may seem to go against the conventional wisdom of having an integrated atomic layer and an enterprise approach to data.  Nothing could be further from the truth: this approach simply ensures that the largest possible quantity of information is made available to all stakeholders with the minimum possible complexity.

The measures themselves are described in my book, Information-Driven Business, as the “small worlds data measure” for complexity and “information entropy” for the quantity.  Applying the measures is surprisingly easy; the question each CIO then needs to answer is how to describe them in a way that engages their business stakeholders.  If technology leaders are hoping to avoid difficult topics with their executive counterparts then this will be impossible, but if they are willing to share their inside knowledge of modern information technology then the “us and them” culture can start to be broken down.
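As a rough illustration of the quantity side, the sketch below applies plain Shannon entropy to a single column’s value distribution. This is a simplification for illustration only, not the formulation in the book, and the example values are made up.

    # Illustration only: Shannon entropy of one column's value distribution,
    # used here as a stand-in for "quantity of information". This is the
    # textbook formula H = -sum(p * log2(p)), not the book's exact measure.
    import math
    from collections import Counter

    def shannon_entropy(values):
        """Entropy in bits of the empirical distribution of `values`."""
        counts = Counter(values)
        total = len(values)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    print(shannon_entropy(["gold"] * 100))                     # ~0 bits
    print(shannon_entropy(["gold", "silver", "bronze"] * 33))  # ~1.58 bits

A column that holds the same value for every customer carries no information, while one whose values are evenly spread carries the most.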

Category: Information Governance, Information Strategy, Information Value
5 Comments »

by: Bsomich
21  Jul  2011

Information Development in Action

While it is possible to get there in “small steps”, we believe most organizations require transformational change in how they manage their information. This is why MIKE2.0 promotes new strategy concepts and continuous improvement.

This approach can result in:

  • Collaborative situational awareness - operators in a government agency can understand the threats it faces from a number of asymmetric and real-time sources. As a result, they can quickly make optimal decisions.
  • Next-generation policy development - a policy analyst can make objective, evidence-based decisions on new policy development. Decision making becomes more collaborative, involving colleagues and stakeholders.
  • Conversations with the customer - a bank can engage with customers through online channels, supplemented with rich historical information to address customer needs and wants. This is a significantly enhanced and more efficient approach than meeting in a branch.
  • A patient-centric healthcare system – significantly improved efficiency and patient insight. Better sharing of information and longitudinal analysis is balanced against the need for patient confidentiality.
  • Demonstrating and executing an appropriate governance model for all information held by the organization

Information Development in action is something we increasingly see today. Many of the biggest changes have been in government, but it impacts all industries. Our goal with MIKE2.0 is to provide a methodology, community, and fundamental change in approach to managing enterprise information. Please check it out when you have a moment. And as always, we welcome your contributions to improve it.

Category: Information Development
No Comments »

by: Phil Simon
18  Jul  2011

How Open is Too Open?

If you’re in the technology or information management spaces, you’ve probably heard the axiom “Information wants to be free.” While accounts vary, many attribute the quote to Stewart Brand in the 1980s.

Today, information is becoming increasingly prevalent and less expensive, spawning concepts such as Open Data, defined as:

the idea that certain data should be freely available to everyone to use and republish as they wish, without restrictions from copyright, patents or other mechanisms of control. While not identical, open data has a similar ethos to those of other “Open” movements such as open source, open content, and open access. The philosophy behind open data has been long established (for example in the Mertonian tradition of science), but the term “open data” itself is recent, gaining popularity with the rise of the Internet and World Wide Web and, especially, with the launch of open-data government initiatives such as Data.gov.

Again, you may already know this. But consider what companies such as Facebook are doing with their infrastructure and server specs–the very definition of traditionally closed apparatus. The company has launched the Open Compute Project, which turns the proprietary data storage models of Amazon and Google on their heads. From the site:

We started a project at Facebook a little over a year ago with a pretty big goal: to build one of the most efficient computing infrastructures at the lowest possible cost.

We decided to honor our hacker roots and challenge convention by custom designing and building our software, servers and data centers from the ground up.

The result is a data center full of vanity free servers which is 38% more efficient and 24% less expensive to build and run than other state-of-the-art data centers.

But we didn’t want to keep it all for ourselves. Instead, we decided to collaborate with the entire industry and create the Open Compute Project, to share these technologies as they evolve.

In a word, wow.

Mixed Feelings

Open Data and the Open Compute Project both reveal complex dynamics at play. On one hand, there’s no doubt that more, better, and quicker innovation results from open source endeavors. Crowdsourced projects benefit greatly from vibrant developer communities. In turn, these volunteer armies create fascinating extensions, complementary projects, and new directions for existing applications and services.

I could cite many examples, but perhaps the most interesting is WordPress. Its community is nothing less than amazing–and the number of themes, extensions, and plug-ins grows daily, if not hourly. The vast majority of its useful tools are free or nearly free. And development begets more development, creating a network effect. Millions of small businesses and solopreneurs make their livings via WordPress in one way or another.

On the other hand, there is such a thing as too open–and WordPress may be an equally apropos example here. Because the software is available to the entire world, it’s easier for hackers to launch Trojan horses, worms, DoS attacks, and malware aimed at popular WordPress sites. To be sure, determined hackers can bring down just about any site (WordPress or not), but when they have the keys to the castle, it’s not exactly hard for them to wreak havoc.

Simon Says

Does Facebook potentially gain by publishing the design of its data centers and servers for all to see? Of course. But the risks are substantial. I can’t tell you that those risks are or are not worth the rewards. I just don’t have access to all of the data.

But I certainly wouldn’t feel comfortable doing as much if I ran IT for a healthcare organization or an e-commerce company. Imagine a cauldron of hackers licking their lips at the prospect of stealing a trove of highly personal and valuable information, surely destined for unsavory purposes.

When considering how open to be, look at the finances of your organization. Non-profits and startups might find that the juice of erring on the side of openness is worth the squeeze. For established companies with very sensitive data and a great deal to lose, however, it’s probably not wise to be too open.

Feedback

What say you?

Category: Information Management, Open Source
No Comments »

by: Bsomich
13  Jul  2011

Profile Spotlight: Matthew Moore


Matthew Moore

Matthew Moore has been working in the Knowledge Management field for over 10 years.  He is currently a Director at Innotecture, an information management consulting firm based in Australia.  Matt’s professional experience includes knowledge management, learning and development, internal communications and community development, working with organizations such as PricewaterhouseCoopers, IBM, Oracle and the Australian government.

Connect with Matt.

Category: Member Profiles
No Comments »

by: Phil Simon
11  Jul  2011

The Data Curves

Supply of–and demand for–data have never been higher. Executives often say, “I have all of the data that I can handle. I just need more information.”

Consider just the data generated from mobile devices. From a recent ReadWriteWeb piece:

Worldwide mobile data traffic is due to increase 26-fold to 75 exabytes annually, says networking giant Cisco in its latest report, the Cisco Visual Networking Index Global Mobile Data Traffic Forecast for 2010 to 2015. To put that in perspective, that’s the equivalent of 19 billion DVDs, 536 quadrillion SMS text messages or 75 times the amount of global Internet IP data (fixed and mobile data) in the year 2000.

In a word, wow. At least there is some good news: storing data has never been cheaper, a trend that shows no signs of abating. Consider that Amazon just announced its new AWS pricing. Outbound data transfer prices for the US-Standard, US-West and Europe regions are as follows:

  • $0.000 – first 1 GB / month data transfer out
  • $0.150 per GB – up to 10 TB / month data transfer out
  • $0.110 per GB – next 40 TB / month data transfer out
  • $0.090 per GB – next 100 TB / month data transfer out
  • $0.080 per GB – data transfer out / month over 150 TB

So, if I transfer out 9 terabytes of data in a month, outbound transfer costs will amount to roughly $1,350 (USD).
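For anyone who wants to check the arithmetic, here is a quick sketch of the tiered calculation using the prices listed above (it assumes 1 TB = 1,000 GB for simplicity):

    # Back-of-the-envelope check of the tiered outbound-transfer pricing
    # quoted above. Tier sizes and rates come from the list; treating
    # 1 TB as 1,000 GB is an assumption made here for simplicity.
    TIERS = [               # (tier size in GB, price per GB)
        (1,       0.000),   # first 1 GB free
        (9_999,   0.150),   # remainder of the first 10 TB
        (40_000,  0.110),   # next 40 TB
        (100_000, 0.090),   # next 100 TB
    ]
    OVERFLOW_RATE = 0.080   # everything over 150 TB

    def monthly_cost(gb_out):
        cost, remaining = 0.0, gb_out
        for size, rate in TIERS:
            used = min(remaining, size)
            cost += used * rate
            remaining -= used
            if remaining <= 0:
                return cost
        return cost + remaining * OVERFLOW_RATE

    print(monthly_cost(9_000))   # 9 TB out -> about 1349.85 dollars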

Opposing Forces

In other words, the amount of data out there is growing exponentially while its cost is dropping in a similar manner. This is crudely represented in my simple chart below:

[Chart: data volume rising over time while the cost of storing it falls]
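For anyone who wants to recreate the gist of that chart, a minimal sketch with purely illustrative, made-up numbers might look like this:

    # Illustrative only: made-up numbers showing the two opposing curves,
    # data volume growing exponentially while storage cost per unit falls.
    import matplotlib.pyplot as plt

    years = list(range(2005, 2016))
    data_volume = [2 ** (y - 2005) for y in years]            # arbitrary growth
    cost_per_unit = [100 * 0.7 ** (y - 2005) for y in years]  # arbitrary decline

    fig, ax = plt.subplots()
    ax.plot(years, data_volume, label="data volume (relative)")
    ax.plot(years, cost_per_unit, label="storage cost per unit (relative)")
    ax.set_yscale("log")   # log scale keeps both curves readable
    ax.set_xlabel("year")
    ax.legend()
    plt.show()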

So, every day, we generate more data and storing it becomes cheaper. But do we need to store all of that data? The answer is usually no. This, of course, raises the obvious question: which data do we need to keep?

And there’s the rub. I am of the opinion that it’s better to have it and not need it than need it and not have it. What’s more, keeping the data in one central repository is generally the way to go. Among the benefits of a “master data” strategy is the minimization of those pesky duplicate records that vex end-users trying to make business decisions based upon extra, inaccurate, or incomplete information.

Reporting is also vastly simplified. To be sure, sophisticated reporting tools allow developers and extremely technical folks to pull data from myriad sources, tables and data models. However, performance tends to suffer, and many non-technical users aren’t exactly proficient at writing complex SQL statements. This increases the need for IT to be involved when, in an ideal world, they should not have to “bless” each reporting request.

Simon Says

It’s never easy to determine which data is essential, which is nice-to-have, and which is no longer relevant. To boot, ask ten different users in an organization and you’re likely to receive ten different answers. Still, it behooves most organizations to routinely ask whether they are storing the right information in the right ways. After all, needs are hardly static, especially these days.

Data storage is anything but a “set it and forget it” type of thing. Ignore changing requirements at your own peril.

Feedback

What say you?

Category: Information Development
No Comments »

by: Bsomich
07  Jul  2011

Can you trust your intelligence?

It happens all too often: a sales and marketing team gathers for its weekly status meeting and each person is armed with the same report showing different results. A debate ensues as to whose report has the accurate numbers. Attention is diverted, time is wasted and productivity is lost.

How strongly can you trust your information intelligence?  What should you do when your system becomes more of a liability than an asset? 

There is one basic truth I know about information systems: When it comes to number crunching, the reports you build are only as intelligent as the data you input and the filters you create to organize it.  When installing a CRM system, I try to do the following at a minimum:

1) Designate a system owner and set user permission levels for accessing and editing data.  

2) Determine what intelligence needs to be reported to management.

3) Determine which data fields are needed to produce that intelligence.

4) Create your data fields in the proper, reportable format (e.g., if a value is a number, create a numeric field, not a text field).

5) Define your report filters based on the fields needed to report the intelligence.

6) Build standard report templates that everyone has access to but cannot change without consent from the system owner.

7) Communicate the importance of entering good quality data to the staff who use the system, and set required fields as needed (a quick validation sketch follows this list).
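To make points 4 and 7 concrete, here is a minimal validation sketch. The field names are hypothetical, not taken from any particular CRM; the idea is simply to flag records where a required field is missing or a numeric field arrives as free text:

    # Illustrative sketch only; field names are hypothetical. It enforces
    # two of the points above: required fields must be present and numeric
    # fields must actually be numeric.
    REQUIRED_FIELDS = ("account_name", "deal_value", "close_date")
    NUMERIC_FIELDS = ("deal_value",)

    def validate_record(record):
        """Return a list of data-quality problems; an empty list means OK."""
        problems = []
        for field in REQUIRED_FIELDS:
            if not str(record.get(field, "")).strip():
                problems.append(f"missing required field: {field}")
        for field in NUMERIC_FIELDS:
            value = record.get(field)
            if value is not None and not isinstance(value, (int, float)):
                problems.append(f"{field} should be numeric, got {value!r}")
        return problems

    print(validate_record({"account_name": "Acme", "deal_value": "ten thousand"}))
    # ['missing required field: close_date', "deal_value should be numeric, got 'ten thousand'"]

Checks like this at the point of entry help keep those weekly reports from diverging in the first place.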

What are your thoughts? How do you ensure your information intelligence is trustworthy?

Category: Business Intelligence, Data Quality
1 Comment »

by: Phil Simon
05  Jul  2011

Metadata and Collaborative Filtering

Photo from fleshforblood

In last week’s post, I discussed how Apple doesn’t sweat perfection. At a very high level, the company gets the big stuff right. In this post, I’d like to extend that concept to unstructured data and Collaborative Filtering (CF), defined by Wikipedia as:

the process of filtering for information or patterns using techniques involving collaboration among multiple agents, viewpoints, data sources, etc. Applications of collaborative filtering typically involve very large data sets. Collaborative filtering methods have been applied to many different kinds of data including sensing and monitoring data – such as in mineral exploration, environmental sensing over large areas or multiple sensors; financial data – such as financial service institutions that integrate many financial sources; or in electronic commerce and web 2.0 applications where the focus is on user data, etc.

For instance, let’s say that I am a fan of Rush, the criminally underrated Canadian power trio. In iTunes, I rate their songs, along with those from other bands. Because I enjoy Rush’s music, I am likely to like Genesis, Pink Floyd, Yes, and other 70s prog rock bands. (I do.) I rate many songs by those bands high as well, helping Apple learn about my listening habits.

Now, multiply that by millions of people. While no two people may have precisely the same taste in music (or books, apps, movies, TV shows, or art, for that matter), the technology and data collectively allow Apple to learn a great deal about group listening habits. As a result, its technology can make relevant recommendations to me.
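To make the mechanics concrete, here is a toy item-based collaborative filtering sketch. The ratings are made up and the approach is purely illustrative; it is not how Apple’s recommender actually works:

    # Toy item-based collaborative filtering with made-up ratings. Similar
    # rating patterns across users make Rush and Pink Floyd look related,
    # so a Rush fan gets a Pink Floyd recommendation.
    import math

    ratings = {                     # user -> {artist: rating 1..5}
        "u1": {"Rush": 5, "Pink Floyd": 5, "Yes": 4},
        "u2": {"Rush": 4, "Pink Floyd": 5, "Beyonce": 2},
        "u3": {"Rush": 5, "Yes": 5, "Pink Floyd": 4},
        "u4": {"Beyonce": 5, "Pink Floyd": 2, "Rush": 1},  # very different taste
        "u5": {"Rush": 1, "Pink Floyd": 1},                # one contrarian rater
    }

    def item_vector(artist):
        """Ratings for one artist, keyed by the users who rated it."""
        return {u: r[artist] for u, r in ratings.items() if artist in r}

    def similarity(a, b):
        """Cosine similarity between two artists over their common raters."""
        va, vb = item_vector(a), item_vector(b)
        common = set(va) & set(vb)
        if not common:
            return 0.0
        dot = sum(va[u] * vb[u] for u in common)
        norm_a = math.sqrt(sum(va[u] ** 2 for u in common))
        norm_b = math.sqrt(sum(vb[u] ** 2 for u in common))
        return dot / (norm_a * norm_b)

    print(similarity("Rush", "Pink Floyd"))  # high: fans largely overlap
    print(similarity("Rush", "Beyonce"))     # lower: little overlap in fans

With millions of raters instead of five, a handful of contrarian ratings like u5’s moves these similarity scores even less, which is where the law of large numbers comes in below.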

The Law of Large Numbers

But let’s say that some people out there have odd tastes. They like Rush but they’re also huge Beyoncé fans. (There’s nothing wrong with her or her music, it’s just that most people don’t like both her and Rush.) They give high marks to the latest Beyoncé album, as well as Rush’s most recent release. Won’t this throw off Apple’s rating systems?

Or what about those who hate Rush? Sadly, many people not only don’t share my passion for the band, but actively despise their cerebral approach to lyrics and odd time signatures. And, yes, many people hate the band because of Geddy Lee’s voice. What if they intentionally rate Pink Floyd songs high but Rush songs low? What if they rate Rush songs high and intentionally suggest songs in a completely unrelated genre such as Gangsta Rap? Won’t this make Apple’s recommendations less relevant?

In a word, no. Apple’s recommendation technology takes advantage of the law of large numbers, which means, in a nutshell, that sufficiently large samples can absorb a few inaccurate entries, odd selections, and flat-out errors without skewing the overall picture.

And Apple isn’t alone. Google, Facebook, Netflix, Pandora, Amazon.com, and a host of other companies utilize this law. Coupled with accurate metadata, it allows these companies to make remarkably accurate suggestions and learn a great deal about their users and customers. For more on the concept of metadata, click here.

Simon Says

Accuracy and data quality are still very important, particularly in structured data sets. One errant entry in a table can cause many problems, something that I have seen countless times. With unstructured data, however, the bar is much lower. Embrace large datasets, for they are much better able to withstand corrupt or inaccurate information.

Feedback

What say you?

 

Category: Information Development, Semantic Web
No Comments »
