by: Bsomich
01  Mar  2014

Community Update

Missed what happened in the MIKE2.0 community? Check out our bi-weekly update:

 

 

Business Drivers for Better Metadata Management

There are a number of Business Drivers for Better Metadata Management that have caused metadata management to grow in importance over the past few years at most major organisations. These organisations are focused on more than just a data dictionary across their information; they are building comprehensive solutions for managing business and technical metadata.

Our wiki article on the subject explores many of the factors contributing to the growth of metadata and offers guidance on how to better manage it.

Feel free to check it out when you have a moment.

Sincerely,

MIKE2.0 Community

Contribute to MIKE:

Start a new article, help with articles under construction or look for other ways to contribute.

Update your personal profile to advertise yourself to the community and interact with other members.

Useful Links:
Home Page
Login
Content Model
FAQs
MIKE2.0 Governance


Did You Know?
All content on MIKE2.0 and any contributions you make are published under the Creative Commons license. This allows you free re-use of our content as long as you add a brief reference back to us.

 

This Week’s Blogs for Thought:

Big Data Strategy: Tag, Cleanse, Analyze

Variety is the characteristic of big data that holds the most potential for exploitation, Edd Dumbill explained in his Forbes article Big Data Variety means that Metadata Matters. “The notion of variety in data encompasses the idea of using multiple sources of data to help understand a problem. Even the smallest business has multiple data sources they can benefit from combining. Straightforward access to a broad variety of data is a key part of a platform for driving innovation and efficiency.”

Read more.

The Race to the IoT is a Marathon

The PC era is arguably over and the age of ubiquitous computing might finally be here. Its first incarnation has been mobility through smartphones and tablets. Many pundits, though, are looking to wearable devices and the so-called "internet of things" as the underlying trends of the coming decade. It is tempting to talk about the internet of things as simply another wave of computing like the mainframe, mid-range and personal computer. However, there are as many differences as there are similarities. Read more.

The Data of Damocles

While the era of Big Data invokes concerns about privacy and surveillance, we still tender our privacy as currency for Internet and mobile-based services. The geo-location tags, date-time stamps, and other information associated with our phone calls, text messages, emails, and social networking status updates become the digital bread crumbs we scatter along our daily paths, and this self-surveillance avails companies and governments with the data needed to track us, target us with personalized advertising, and terrorize us with the thought of always being watched.

Read more.

 


Category: Information Development
No Comments »

by: Ocdqblog
25  Feb  2014

Big Data Strategy: Tag, Cleanse, Analyze

Variety is the characteristic of big data that holds the most potential for exploitation, Edd Dumbill explained in his Forbes article Big Data Variety means that Metadata Matters. “The notion of variety in data encompasses the idea of using multiple sources of data to help understand a problem. Even the smallest business has multiple data sources they can benefit from combining. Straightforward access to a broad variety of data is a key part of a platform for driving innovation and efficiency.”

But the ability to take advantage of variety, Dumbill explained, is hampered by the fact that most “data systems are geared up to expect clean, tabular data of the sort that flows into relational database systems and data warehouses. Handling diverse and messy data requires a lot of cleanup and preparation. Four years into the era of data scientists, most practitioners report that their primary occupation is still obtaining and cleaning data sets. This forms 80% of the work required before the much-publicized investigational skill of the data scientist can be put to use.”

Which begs the question Mary Shacklett asked with her TechRepublic article Data quality: The ugly duckling of big data? “While it seems straightforward to just pull data from source systems,” Shacklett explained, “when all of this multifarious data is amalgamated into vast numbers of records needed for analytics, this is where the dirt really shows.” But somewhat paradoxically, “cleaning data can be hard to justify for ROI, because you have yet to see what clean data is going to deliver to your analytics and what the analytics will deliver to your business.”

However, Dumbill explained, “to focus on the problems of cleaning data is to ignore the primary problem. A chief obstacle for many business and research endeavors is simply locating, identifying, and understanding data sources in the first place, either internal or external to an organization.”

This is where metadata comes into play, providing a much needed context for interpreting data and helping avoid semantic inconsistencies that can stymie our understanding of data. While good metadata has always been a necessity, big data needs even better metadata. “The documentation and description of datasets with metadata,” Dumbill explained, “enhances the discoverability and usability of data both for current and future applications, as well as forming a platform for the vital function of tracking data provenance.”

“The practices and tools of big data and data science do not stand alone in the data ecosystem,” Dumbill concluded. “The output of one step of data processing necessarily becomes the input of the next.” When approaching big data, the focus on analytics, as well as concerns about data quality, not only causes confusion about the order of those steps, but also overlooks the important role that metadata plays in the data ecosystem.

By enhancing the discoverability of data, metadata essentially replaces hide-and-seek with tag. As we prepare for a particular analysis, metadata enables us to locate and tag the data most likely to prove useful. After we tag which data we need, we can then cleanse that data to remove any intolerable defects before we begin our analysis. These three steps—tag, cleanse, analyze—form the basic framework of a big data strategy.
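
To make the ordering concrete, here is a minimal sketch of how tag, cleanse, analyze might look in code. It is only an illustration under stated assumptions: the metadata catalogue, dataset names, and the cleansing rule are hypothetical placeholders, not part of any tool mentioned above.

```python
# Hypothetical sketch of the tag -> cleanse -> analyze sequence.
# The metadata catalogue, dataset names, and cleansing rule are made up for illustration.

import statistics

catalogue = {
    "sales_2013.csv":  {"domain": "revenue",   "owner": "finance",   "provenance": "ERP extract"},
    "web_clicks.json": {"domain": "behaviour", "owner": "marketing", "provenance": "web logs"},
}

def tag(catalogue, domain):
    """Use metadata to locate (tag) the datasets relevant to a given analysis."""
    return [name for name, meta in catalogue.items() if meta["domain"] == domain]

def cleanse(records):
    """Remove intolerable defects - here, missing or negative amounts."""
    return [r for r in records if r.get("amount") is not None and r["amount"] >= 0]

def analyze(records):
    """Run the analysis itself on the cleansed data."""
    return statistics.mean(r["amount"] for r in records)

tagged = tag(catalogue, "revenue")                    # 1. tag: ['sales_2013.csv']
raw = [{"amount": 120.0}, {"amount": None}, {"amount": -5.0}, {"amount": 80.0}]
print(tagged, analyze(cleanse(raw)))                  # 2. cleanse, 3. analyze -> 100.0
```

The point is only the ordering: metadata narrows the search before any cleansing effort is spent, and cleansing happens before the analysis that depends on it.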

It all begins with metadata management. As Dumbill said, “it’s not glamorous, but it’s powerful.”

Category: Data Quality, Information Development
No Comments »

by: Robert.hillard
22  Feb  2014

The race to the internet of things is a marathon

The PC era is arguably over and the age of ubiquitous computing might finally be here.  Its first incarnation has been mobility through smartphones and tablets.  Many pundits, though, are looking to wearable devices and the so-called “internet of things” as the underlying trends of the coming decade.

It is tempting to talk about the internet of things as simply another wave of computing like the mainframe, mid-range and personal computer. However, there are as many differences as there are similarities.

While the internet of things is arguably just another approach to solving business problems using computing devices, the units of measure are not so directly tied to processing power, memory or storage. The mass of interconnections means that network latency, switching and infrastructure integration play just as much a part, for the first time divorcing a computing trend from “Moore’s Law” and the separate but related downward trend of storage costs.

Does it matter?

The internet of things brings the power of the information economy to every part of society, providing an opportunity to repeat many of the gains that business and government experienced through the last decade of the twentieth century.  Back then there was a wave of new applications that allowed for the centralisation of administration, streamlining of customer service and outsourcing of non-core activities.

Despite all of the apparent gains, however, most infrastructure has remained untouched.  We still drive on roads, rely on electricity and utilise hospitals that are based on technology that really hasn’t benefited from the same leap in productivity that has supercharged the back office.

The 40 year problem

It’s not hard to build the case to drive our infrastructure through the network.  Whether it is mining, transport, health or energy, it is clear that there are mass benefits and efficiencies that are there for the taking.  Even simple coordination at a device level will run our communities and companies better for everyone’s benefit.  Add in the power of analytics and we can imagine a world that is dancing in synchronisation.

However, just as this new wave of computing doesn’t adhere to Moore’s Law, it also lacks the benefit of rapid consumer turnover. We’ve accepted the need to be constantly replacing our PCs and smartphones, driving in turn the growth of an industry. But it’s one thing to upgrade one or two devices per person every few years; it is quite another to replace the infrastructure that surrounds us.

Much of what we work with day-to-day in roads, rail, energy and all of the other infrastructure around us has a lifespan that is measured in decades or even half centuries.  Even with a compelling business case, it is not likely to be enough to replace embedded equipment that still has decades of economic value ahead of it.

If we don’t want to wait until our grandchildren are old to benefit from the internet of things, we have to find a way to implement it in a staggered way.

Making a staggered implementation work

Anyone who has renovated their house will have considered whether to build in new technology.  For most, network cabling is a basic, but centralised control of lighting and other devices is a harder ask.  Almost anything that is embedded in the walls runs the risk of being obsolete even before the builder hands over the keys.

The solution is independent and interoperable devices.  Rather than building-in security cameras, consumers are embracing small portable network cameras that they can access through cloud-based services.  The same providers are increasingly offering low-cost online light switches that don’t require anything more than an electrician to replace the faceplate.  This new generation of devices can be purchased as needed rather than all at once and raise little risk of uneconomic obsolescence.

Google’s purchase of NEST appears to be an investment in this incremental, cloud-based future of interconnected home-based devices. With NEST in its stable of technologies, it is a fair bet that Google sees the value in offering analytics to optimise and better maintain a whole range of services.

Just as companies focused on the consumer at home are discovering an approach that works, so too are those vendors that think about bigger infrastructure including energy, mining and major transport assets.

Future proofing

The car industry is going to go through a revolution over the next decade.  After years of promise, new fuel technologies are bearing fruit.  It’s going to be both an exciting and uncertain ride for all involved.  Hybrid, electric pluggable, electric swappable and hydrogen all vie for roles in the future.

Just as important as fuel technology is the ability to make cars autonomous. So-called “range anxiety”, the fear of running out of power in an electric car, is eliminated if the car’s computer makes the driving decisions, including taking responsibility for ensuring an adequate charge or access to charging stations.

Arguably the most exciting developments are in the car software rather than the hardware. As with smartphones, manufacturers are treating their vehicles as platforms for future apps. New cars include technology that is beyond the capabilities of current software, such as that needed to run an autonomous vehicle.

Early beneficiaries are the insurance industry (telematics), car servicing providers (who can more actively target customers) and a whole range of innovators who are able to invent solutions that as drivers we never even knew we needed.

Perhaps a tie-up between innovators like Tesla and Apple will show what connected cars and infrastructure could do!

Winning the marathon

Unlike previous waves of computing, the internet of things is going to take many years to reach its full potential.  During this transition business opportunities that are unimaginable today are going to emerge.  Many of these lie in the data flowing across the internet of things that is genuinely big (see It’s time for a new definition of big data).

These new solutions can be thought of as permutations of cloud services (see Cloud computing should be about new business models).  Those wanting to try to imagine the world of possibilities should remember that the last twenty years have taught us that the consumer will lead the way and that incumbents are seldom the major innovators.

However, incumbent energy and infrastructure companies that do want to take the lead could do well by partnering with consumer technology companies. The solutions they come up with are likely to include completely different relationships between a range of providers, services and customers.

Category: Enterprise Content Management, Enterprise2.0, Information Strategy, Information Value, Web2.0
No Comments »

by: Phil Simon
17  Feb  2014

Big Data: There Is No Guarantee

I know two things about business. First, success is never the result of one thing, a point echoed in one of my very favorite business books, The Halo Effect: … and the Eight Other Business Delusions That Deceive Managers. Second, there is no such thing as a guarantee.

Over the last decade, I’ve spent a great deal of time refining my view on management. More recently, I’ve thought extensively about Big Data and emerging technologies. I’ve spoken about these subjects many times and written hundreds of thousands of words on these topics.

Before continuing with Big Data, let’s review a little history.

On MiniDiscs

As The Guardian recently pointed out, “twenty years ago Sony launched a format that promised CD clarity and cassette convenience–but the world just wasn’t interested.” Then the company “declared that 1998 was to be the ‘Year of the MiniDisc’” and launched a $30M marketing campaign.
So, what went wrong? Well, at the time that Sony was going all-in on the MiniDisc, the Internet was enabling the mass (and illegal) sharing of mp3 files. It’s impossible to know, sans sites like Napster, if the MiniDisc would have exploded or at least done markedly better. It’s certainly possible, but is it really unfathomable to consider a vastly different scenario? Maybe we’d be talking today about Sony the same way that we talk about Apple. Instead, the outlook on Sony isn’t exactly rosy.

And it’s that lack of predictability that scares many folks who are considering taking the big plunge. What are the guarantees that the millions spent on data scientists, new technologies like Hadoop, and new approaches will pay off?

None.

Simon Says: Big Data Offers No Guarantees

An organization that “does Big Data” well may still ultimately fail for all sorts of reasons. Cultural issues, competitive factors, regulatory changes, new entrants, key employee attrition, and just plain dumb luck all play major roles in an organization’s success.

As a result, it’s essential to embrace holistic and probabilistic thinking. Put differently, it’s never about only one thing. I’ll bet a great deal of money that organizations that recognize the opportunities inherent in Big Data–and then act on them–will, all else being equal, do better than those that remain in the dark.

Feedback

What say you?

Category: Information Management
No Comments »

by: Bsomich
15  Feb  2014

MIKE2.0 Community Update.

Missed what happened in the MIKE2.0 community this week? Read our community update below:

 

Data Governance: How competent is your organization?

One of the key concepts of the MIKE2.0 Methodology is that of an Organisational Model for Information Development. This is an organisation that provides a dedicated competency for improving how information is accessed, shared, stored and integrated across the environment.

Organisational models need to be adapted as the organisation moves up the five maturity levels of Information Development competency described below (an illustrative sketch in code follows the list):

Level 1 Data Governance Organisation – Aware

  • An Aware Data Governance Organisation knows that it has issues around Data Governance but is doing little to respond to them. Awareness has typically come as the result of major Data Governance-related issues that have occurred. An organisation may also be at the Aware state if it is going through the process of moving to a state where it can effectively address issues, but is only in the early stages of the programme.
Level 2 Data Governance Organisation – Reactive
  • A Reactive Data Governance Organisation is able to address some of its issues, but not until some time after they have occurred. The organisation is not able to address root causes or predict when issues are likely to occur. “Heroes” are often needed to address complex data quality issues, and the impact of fixes done at a system-by-system level is often poorly understood.
Level 3 Data Governance Organisation – Proactive
  • A Proactive Data Governance Organisation can stop issues before they occur, as it is empowered to address root-cause problems. At this level, the organisation also conducts ongoing monitoring of data quality so that issues that do occur can be resolved quickly.
Level 4 Data Governance Organisation – Managed
Level 5 Data Governance Organisation – Optimal
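
For illustration only, the levels can be treated as an ordered scale that an organisation climbs as it accumulates capabilities. The sketch below is not part of MIKE2.0 itself; the capability names and prerequisites are hypothetical stand-ins.

```python
# Illustrative only: a toy self-assessment against the five maturity levels above.
# The capability names and prerequisites are hypothetical, not defined by MIKE2.0.

from enum import IntEnum

class MaturityLevel(IntEnum):
    AWARE = 1
    REACTIVE = 2
    PROACTIVE = 3
    MANAGED = 4
    OPTIMAL = 5

PREREQUISITES = {
    MaturityLevel.AWARE:     {"issues_recognised"},
    MaturityLevel.REACTIVE:  {"issues_recognised", "fixes_after_the_fact"},
    MaturityLevel.PROACTIVE: {"issues_recognised", "fixes_after_the_fact",
                              "root_cause_analysis", "ongoing_quality_monitoring"},
    MaturityLevel.MANAGED:   {"root_cause_analysis", "ongoing_quality_monitoring",
                              "quantitative_targets"},
    MaturityLevel.OPTIMAL:   {"quantitative_targets", "continuous_improvement"},
}

def assess(capabilities):
    """Return the highest level whose prerequisite capabilities are all present."""
    level = MaturityLevel.AWARE
    for candidate in MaturityLevel:
        if PREREQUISITES[candidate] <= capabilities:
            level = candidate
    return level

print(assess({"issues_recognised", "fixes_after_the_fact"}).name)  # REACTIVE
```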

The MIKE2.0 Solution for the Centre of Excellence provides an overall approach to improving Data Governance through a Centre of Excellence delivery model for Infrastructure Development and Information Development. We recommend this approach as the most efficient and effective model for building this common set of capabilities across the enterprise environment.

Feel free to check it out when you have a moment and offer any suggestions you may have to improve it.

Sincerely,

MIKE2.0 Community

Contribute to MIKE:

Start a new article, help with articles under construction or look for other ways to contribute.

Update your personal profile to advertise yourself to the community and interact with other members.

Useful Links:
Home Page
Login
Content Model
FAQs
MIKE2.0 Governance


Did You Know?
All content on MIKE2.0 and any contributions you make are published under the Creative Commons license. This allows you free re-use of our content as long as you add a brief reference back to us.

 

This Week’s Blogs for Thought:

Share the Love… of Data Quality!

A recent news article on Information-Management.com suggested a link between inaccurate data and “lack of a centralized approach.” But I’m not sure that “lack of centralization” is the underlying issue here; I’d suggest the challenge is generally more down to “lack of a structured approach”, and as I covered in my blog post “To Centralise or not to Centralise, that is the Question”, there are organizational cultures that don’t respond well (or won’t work at all) to a centralized approach to data governance.

Read more.

Avoid Daft Definitions for Sound Semantics

A few weeks ago, while reading about the winners at the 56th Annual Grammy Awards, I saw that Daft Punk won both Record of the Year and Album of the Year, which made me wonder what the difference is between a record and an album. Then I read that Record of the Year is awarded to the performer and the production team of a single song. While Daft Punk won Record of the Year for their song “Get Lucky”, the song was not lucky enough to win Song of the Year (that award went to Lorde for her song “Royals”). My confusion about the semantics of the Grammy Awards prompted a quick trip to Wikipedia, where I learned that Record of the Year is awarded for either a single or individual track from an album.

Read more.

Social Data: Asset and Liability

In late December of 2013, Google Chairman Eric Schmidt admitted that ignoring social networking had been a big mistake. “I guess, in our defense, we were busy working on many other things, but we should have been in that area and I take responsibility for that,” he said.
Brass tacks: Google’s misstep and Facebook’s opportunistic land grab of social media have resulted in a striking data chasm between the two behemoths. As a result, Facebook can do something that Google just can’t.

Read more.


Questions? Please email us at mike2@openmethodology.org.

Category: Information Development
No Comments »

by: Alandduncan
11  Feb  2014

Share the love… of Data Quality

A distributed approach to Data Quality may result in better data outcomes

A recent news article on Information-Management.com suggested a link between inaccurate data and “lack of a centralized approach.”

But I’m not sure that “lack of centralization” is the underlying issue here; I’d suggest the challenge is generally more down to “lack of a structured approach”, and as I covered in my blog post “To Centralise or not to Centralise, that is the Question”, there are organizational cultures that don’t respond well (or won’t work at all) to a centralized approach to data governance.

When you then extend this to the more operational delivery processes of Data Quality Management, I’d go so far as to suggest that a distributed and end-user oriented approach to managing data quality is actually desirable, for several reasons:

* Many organisations just haven’t given data quality due consideration, and the impacts can be significant, but often hidden.
* Empowering users to think about and act upon data challenges can become a catalyst for a more structured, enterprise-wide approach.
* By managing data quality issues locally, knowledge and expertise is maintained as close to point-of-use as possible.
* In environments where funding is hard to come by or where there isn't an appetite to establish critical mass for data quality activity, progress can still be made and value can still be delivered.

I also observe two trends in business, consistent across the twenty-plus years that I've been working, which are making a centralised delivery of data outcomes ever more difficult:

1) Human activity has become more and more complex. We’re living in a mobile, connected, graphical, multi-tasking, object-oriented, cloud-serviced world, and the rate at which we’re collecting data is showing no sign of abatement. It may well be that our data just isn’t “controllable” in the classic sense any more, and that what’s really needed is mindfulness. (I examined this in my post “Opening Pandora’s Box“)

2) Left to their own devices, business systems and processes will tend to decay towards a chaotic state over time, and it is management’s role to keep injecting focus and energy into the organisation. If this effort can be spread broadly across the organisation, then there is an overall cultural change towards better data. (I covered aspects of this in my post “Business Entropy – Bringing Order to the Chaos“)

Add the long-standing preoccupation that management consultants have with mapping "Business Process" rather than mapping "Business Data", and you end up in a situation where data does not get nearly enough attention. (And once attention IS paid, the skills and capabilities to do something about it are often lacking.)

Change the culture, change the result – that doesn’t require centralisation to make it happen.

Category: Information Development
1 Comment »

by: Ocdqblog
11  Feb  2014

Avoid Daft Definitions for Sound Semantics

A few weeks ago, while reading about the winners at the 56th Annual Grammy Awards, I saw that Daft Punk won both Record of the Year and Album of the Year, which made me wonder what the difference is between a record and an album.

Then I read that Record of the Year is awarded to the performer and the production team of a single song. While Daft Punk won Record of the Year for their song “Get Lucky”, the song was not lucky enough to win Song of the Year (that award went to Lorde for her song “Royals”).

My confusion about the semantics of the Grammy Awards prompted a quick trip to Wikipedia, where I learned that Record of the Year is awarded for either a single or individual track from an album. This award goes to the performer and the production team for that one song. In this context, record means a particular recorded song, not its composition or an album of songs.

Although Song of the Year is also awarded for a single or individual track from an album, the recipient of this award is the songwriter who wrote the lyrics to the song. In this context, song means the song as composed, not its recording.

The Least Ambiguous Award goes to Album of the Year, which is indeed awarded for a whole album. This award goes to the performer and the production team for that album. In this context, album means a recorded collection of songs, not the individual songs or their compositions.

These distinctions, and the confusion they caused me, seemed eerily reminiscent of the challenges that arise within organizations when data is ambiguously defined. For example, terms like customer and revenue are frequently used without definition or context. When data definitions are ambiguous, it can easily lead to incorrect uses of data as well as confusing references to data during business discussions.

Not only is it difficult to reach consensus on data definitions, but definitions also change over time. For example, Record of the Year used to be awarded only to the performer, not the production team. And the definition of who exactly counts as a member of the production team has been changed four times over the years, most recently in 2013.

Avoiding semantic inconsistencies, such as the difference between a baker and a Baker, is an important aspect of metadata management. Be diligent with your data definitions and avoid daft definitions for sound semantics.
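
One lightweight way to guard against such ambiguity is to record every term together with the context in which its definition applies. The sketch below is a hypothetical illustration, not a prescribed tool; it reuses the Grammy terms above plus a made-up customer example.

```python
# Hypothetical sketch of a minimal business glossary keyed by (term, context),
# so the same word can carry different but explicit definitions.

glossary = {
    ("record",   "Grammy Awards"): "A particular recorded song; awarded to performer and production team.",
    ("song",     "Grammy Awards"): "The song as composed; awarded to the songwriter.",
    ("album",    "Grammy Awards"): "A recorded collection of songs.",
    ("customer", "billing"):       "A party with at least one invoiced account.",    # made-up example
    ("customer", "marketing"):     "Any party who has opted in to communications.",  # made-up example
}

def define(term, context):
    """Look a term up in its context; fail loudly rather than guess."""
    try:
        return glossary[(term, context)]
    except KeyError:
        raise KeyError(f"No agreed definition for {term!r} in context {context!r}") from None

print(define("record", "Grammy Awards"))
```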

Category: Information Development
1 Comment »

by: Phil Simon
09  Feb  2014

Social Data: Asset and Liability

In late December of 2013, Google Chairman Eric Schmidt admitted that ignoring social networking had been a big mistake. “I guess, in our defense, we were busy working on many other things, but we should have been in that area and I take responsibility for that,” he said.

Brass tacks: Google’s misstep and Facebook’s opportunistic land grab of social media have resulted in a striking data chasm between the two behemoths. As a result, Facebook can do something that Google just can’t.

To his credit, Mark Zuckerberg has not been complacent with this lead. This is an age of ephemera, and he is building upon his company’s lead in social data. Case in point: the launch of Graph Search.

The rationale here is pretty straightforward: Why let Google catch up? With Graph Search, Facebook users can determine which of their friends have gone to a Mexican restaurant in the last six months in San Francisco. What about which friends like the Rolling Stones or The Beatles? (Need to resell a ticket? Why use StubHub here? Maybe Facebook gets a cut of the transaction?) These are questions and problems that Google can’t address but Facebook can.

All good for Zuck et al., right? Not really. It turns out that delivering relevant social data in a timely manner is proving remarkably elusive, even for the smart cookies at Facebook.

The New News Feeds

As Wired reported in May of 2013, Facebook “redesigned its News Feed with bolder images and special sections for friends, photos, and music, saying the activity stream will become more like a ‘personalized newspaper’ that fits better with people’s mobile lifestyles.” Of course, many users didn’t like the move, but that’s par for the course these days. You’re never going to make 1.2 billion users happy.

But Facebook quickly realized that it didn’t get the relaunch of News Feed right. Not even close. Just a few weeks before Schmidt’s revealing quote, Business Insider reported that Facebook was at it again, making major tweaks to its feed and then halting its new launch. This problem has no simple solution.

Simon Says: Big Data Is a Full-Time Job

Big Data is no picnic. “Managing” it isn’t easy, even for billion-dollar companies such as Facebook. The days of “set it and forget it” have long passed. Organizations need to be constantly monitoring the effectiveness of their data-driven products and services, to say nothing of testing for security issues. (Can someone say Target?)

Feedback

What say you?

Category: Information Development
No Comments »

by: John McClure
06  Feb  2014

A Tale of 30 Nuclear Plants

In 2012 the New York Times commissioned a study showing that worldwide, server farms’ energy use is equivalent to 30 nuclear power plants, roughly 2 percent of all electricity used. In fact, the report continued “energy efficiency varies widely from company to company … on average, they were using only 6 percent to 12 percent of the electricity powering their servers to perform computations. The rest was essentially used to keep servers idling and ready in case of a surge in activity that could slow or crash their operations.”

Okay, let’s do the math. This means industry data centers are seriously inefficient, to the tune of about 27 billion watts of wasted power, and the figure has undoubtedly grown since 2010. And that doesn’t count the huge, spinning flywheels or thousands of lead-acid batteries ready to take over when electrical fluctuations occur.
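
As a rough back-of-the-envelope check of that figure, assuming the 30 plants equate to roughly one gigawatt each (about 30 billion watts in total) and using the 6 to 12 percent utilisation quoted above:

```python
# Back-of-the-envelope check of the waste estimate above.
# Inputs are the figures quoted from the article; the arithmetic itself is mine,
# and the ~1 GW per nuclear plant conversion is an assumption.

total_draw_watts = 30e9                # ~30 plants at roughly 1 GW each
util_low, util_high = 0.06, 0.12       # share of power doing useful computation

wasted_best  = total_draw_watts * (1 - util_high)   # 26.4 billion watts
wasted_worst = total_draw_watts * (1 - util_low)    # 28.2 billion watts

print(f"Wasted power: {wasted_best/1e9:.1f} to {wasted_worst/1e9:.1f} billion watts")
```

That range lands right around the 27 billion watts cited above.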

“It’s just not sustainable,” said Mark Bramfitt, a former utility executive. “It’s a waste,” said Dennis P. Symanski, a senior researcher at the Electric Power Research Institute (EPRI), a nonprofit industry group.

Why is this happening? “You do have to take into account that the explosion of data is what aids and abets this,” said Mr. Taylor of the Uptime Institute. “At a certain point, no one is responsible anymore, because no one, absolutely no one, wants to go in that room and unplug a server.” How did this come about? “We’ve overprovisioned … for years,” explained Mr. Cappuccio, managing vice president and chief of research at Gartner. “Let’s overbuild just in case we need it,” he says, agreeing with Mr. Symanski of EPRI, who notes that data center staff “don’t get a bonus for saving on the electric bill. They get a bonus for having the data center available 99.999 percent of the time.”

With ever-growing energy needs, there’s no surprise data centers are among electrical utilities’ most prized customers. But “what’s driving that massive growth is the end-user expectation of anything, anytime, anywhere,” said David Cappuccio, a managing vice president and chief of research at Gartner, the technology research firm. “We’re what’s causing the problem.”

Problems & Solutions. It’s time to come to the aid of your country! The near-term consequences, absolute climate chaos, are simply too great a threat to soft-pedal what we all must now do.

Contact one of the many “green solutions” providers springing up every day who help enterprises significantly reduce ongoing server farm energy requirements. Hire industrial engineers as site-wide evangelists who develop, champion and monitor compliance with energy-saving programs.

Without an immediate, strong commitment to reducing energy use, for whatever reasons matter to an enterprise, it’s possible, just possible, that the enterprise isn’t operating in the best interests of its shareholders, its customers, its workforce or, ultimately, itself.

Indeed, as Mr. Cappuccio of Gartner might ask: “How in the world can you run a business like that?”

Category: Sustainable Development
No Comments »

by: Bsomich
01  Feb  2014

Missed what happened in the MIKE2.0 Community this week?


Big Data Solution Offering 

Have you seen the latest offering in our Composite Solutions suite?

The Big Data Solution Offering provides an approach for storing, managing and accessing data of very high volumes, variety or complexity.

Storing large volumes of data from a large variety of data sources in traditional relational data stores is cost-prohibitive, and regular data modeling approaches and statistical tools cannot handle data structures with such high complexity. This solution offering discusses new types of data management systems based on NoSQL database management systems and MapReduce as the typical programming model and access method.
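
For readers unfamiliar with the programming model the offering refers to, here is a minimal, single-process sketch of MapReduce expressed as plain Python functions. A real deployment (Hadoop and the like) distributes the map and reduce phases across a cluster; the word-count task and document strings are just placeholders.

```python
# Minimal single-process sketch of the MapReduce programming model: word count.
# Real frameworks run map and reduce in parallel across many machines.

from collections import defaultdict

def map_phase(document):
    """Map: emit (key, value) pairs - here (word, 1) for every word in a document."""
    for word in document.lower().split():
        yield word, 1

def shuffle(pairs):
    """Shuffle: group all emitted values by key, as the framework does between phases."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    """Reduce: combine the values for one key - here, sum the counts."""
    return key, sum(values)

docs = ["big data needs better metadata", "metadata matters for big data"]
pairs = (pair for doc in docs for pair in map_phase(doc))
counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(counts["big"], counts["metadata"])   # 2 2
```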

Read our Executive Summary for an overview of this solution.

We hope you find this new offering of benefit and welcome any suggestions you may have to improve it.

Sincerely,

MIKE2.0 Community

Popular Content

Did you know that the following wiki articles are most popular on Google? Check them out, and feel free to edit or expand them!

What is MIKE2.0?
Deliverable Templates
The 5 Phases of MIKE2.0
Overall Task List
Business Assessment Blueprint
SAFE Architecture
Information Governance Solution

Contribute to MIKE:

Start a new article, help with articles under construction or look for other ways to contribute.

Update your personal profile to advertise yourself to the community and interact with other members.

Useful Links:
Home Page
Login
Content Model
FAQs
MIKE2.0 Governance


 

This Week’s Blogs for Thought:

Your data is in the cloud when…

It’s fashionable to be able to claim that you’ve moved everything from your email to your enterprise applications “into the cloud”.  But what about your data?  Just because information is stored over the Internet, it shouldn’t necessarily qualify as being “in the cloud.”

Read more.

The Top of the Data Quality Bell Curve

“Information is the value associated with data,” William McKnight explains in his book Information Management: Strategies for Gaining a Competitive Advantage with Data.  “Information is data under management that can be utilized by the company to achieve goals.”  Does that data have to be perfect in order to realize its value and enable the company to achieve its goals?  McKnight says no.
Data quality, according to McKnight, “is the absence of intolerable defects.”

Read more.

Business Entropy: Bringing order to the chaos

In his recent post, Jim Harris drew an analogy between the interactions of atomic particles, sub-atomic forces and the working of successful collaborative teams, and coined the term ego-repulsive force. Jim’s post put me in mind of another aspect of physics that I think has parallels with our business world – the Second Law of Thermodynamics, and the concept of entropy.

In thermodynamics, “entropy” is a measure of the number of ways a closed system can be arranged; such systems spontaneously evolve towards a state of equilibrium, which is the state of maximum entropy (and therefore maximum disorder). I observe that this mechanism also holds true for the workings of organisations: there is a law of Business Entropy at work.

Read more.

Questions? Please email us at mike2@openmethodology.org.

 

Category: Information Development
No Comments »
