by: Robert.hillard
20  Dec  2014

Our machines will not outsmart us

Over the millennia we have been warned that the end of the world is nigh. While it will no doubt be true one day, warnings by Stephen Hawking in a piece he co-authored on artificial intelligence don’t fill me with fear (see Transcending Complacency on Superintelligent Machines). I disagree across the board with the commentators who warn that machines will outsmart us by the 2030s and that the contest could become a Terminator-style race between us and them.

Hawking and his co-authors argue that “[s]uccess in creating AI would be the biggest event in human history.  Unfortunately, it might also be our last unless we learn how to avoid the risks.”  They go on to compare artificial intelligence (AI) to an alien life form of superior intelligence who would owe us no comfort or future on this planet.

These comments relate to the so-called “singularity”, a term popularised by writer Vernor Vinge for the point, sometime in the vicinity of the 2030s, at which AI can outthink humans.

I have previously written about the limits of current AI research (see Your insight might protect your job).  Although current techniques (which I argue are the second generation of AI) cannot scale to provide the cognitive leaps that are necessary for real insight, it would be wrong to assume that a third generation isn’t on the horizon.

Despite the potential for a third (yet to be imagined) generation of technology for AI, there are three reasons why I disagree that such machines will take over the world or even outsmart us.

1. It’s all about the user interface

Simply applying Big Data analytics to the content of the internet will not create a machine that is smarter than us.  If a machine is to be of our world, it needs to be able to interact with it through a user interface.

The human brain only works well in conjunction with opposable thumbs, something on which few other intelligent animals can compete with us. Regardless of how intelligent a dolphin is, the lack of a good interface means that it can’t manipulate the world around it and, in turn, learn from these interactions.

Like previous generations of computing, it is all about the user interface.  Robotics is likely to overcome these constraints but current predictions of the internet coming alive due to its complexity are fanciful.  Far from running out of our control, we are re-architecting our technology to remove the risk of runaway complexity by segmenting the systems that touch our physical world.  This segmentation is like cutting off the closest thing that the internet has to opposable thumbs.

2. We will become the machine

Information, knowledge and intelligence directly equate to power. Humans never give up power easily, choosing political alliances with adversaries rather than ceding control.

Any competition between humans and machines is likely to follow the same lines. Rather than cede to machines, we will join with them. I’ve previously written about what might become the first direct neural interfaces (see Will the bionic eye solve information overload?). It is inconceivable that we won’t choose to augment our own brains with the internet in the coming decades.

Such a future virtually guarantees supremacy of our species against any machine competition, but it does paint a future which is perhaps uncomfortable from our vantage point today.

3. We aren’t their competitors

Despite what you might read, we live the majority of our lives in the physical world.  We eat food, enjoy socialising in person and interact with our hobbies in three dimensions.

Our computers live almost entirely in the memory of the machines that we have made. They are creatures of the internet. While we visualise the internet through our browsers, apps and other tools, we are visitors to this space. We twist its contents to represent metaphors of the physical world (for example, paper for writing on and rooms for meeting in).

Some scientists argue that the virtual world is an entirely valid reality. Nick Bostrom has even gone so far as to wonder whether our own universe is the virtual world of some supercomputer experiment being run by an alien life form (see Are you living in a computer simulation?). If that is the case, we need to be very afraid of the alien “off” switch!

Regardless of the simulation argument, any virtual reality of the internet where AI may take shape is not our reality.  It is as if we are of different universes, but like the multiverse that is regaining popularity in theoretical physics, we do have an increasingly symbiotic relationship.

Category: Enterprise2.0
No Comments »

by: Bsomich
20  Dec  2014

MIKE2.0 Community Update

Missed what’s been happening in the MIKE2.0 data management community? Read on!

 


Available for Order: Information Development Using MIKE2.0

Have you heard? Our new book, “Information Development Using MIKE2.0” is available for order. 

The vision for Information Development and the MIKE2.0 Methodology have been available in a collaborative, online fashion since 2006, and are now available in print to a wider audience, highlighting key wiki articles, blog posts, case studies and user applications of the methodology. 

Authors for the book include Andreas Rindler, Sean McClowry, Robert Hillard, and Sven Mueller, with additional credit due to Deloitte, BearingPoint and over 7,000 members and key contributors of the MIKE2.0 community. The book has been published in paperback as well as all major e-book publishing platforms. 

Get Involved:

To get your copy of the book, visit our order page on Amazon.com. For more information on MIKE2.0 or how to get involved with our online community, please visit www.openmethodology.org.

Sincerely,

MIKE2.0 Community  

Contribute to MIKE:

Start a new article, help with articles under construction or look for other ways to contribute

Update your personal profile to advertise yourself to the community and interact with other members.

Useful Links:
Home Page
Login
Content Model
FAQs
MIKE2.0 Governance


Did You Know?
All content on MIKE2.0 and any contributions you make are published under the Creative Commons license. This allows you free re-use of our content as long as you add a brief reference back to us.

 

This Week’s Blogs for Thought:

Keep Your Holidays Merry and Bright: Don’t Delay in Improving Security

With the holidays upon us, most businesses are dealing with what is usually their busiest time of the year. It’s a period of excitement and increased sales, but it’s also a time of worry and concern. In the wake of the recent data breaches at large retailers like Target and Home Depot, many businesses are approaching the holidays with a more cautious attitude, particularly toward security. Hackers have the potential to steal data and cause millions of dollars in damages, essentially crippling any business no matter their size. What’s even more alarming is that many companies haven’t responded effectively to the threat of security breaches.

Read more.

Pros and Cons of Hadoop and Cloud Providers

Selecting a big data solution can be tricky at times with the different options available for enterprises. Deciding between a cloud big data analytics provider and an on-premise Hadoop solution comes down to recognizing the pros and cons of both options and how they will affect the bottom line.

Read more.

Open Data Gray Areas 

In a previous post, I discussed some data quality and data governance issues associated with open data. In his recent blog post How far can we trust open data?, Owen Boswarva raised several good points about open data.
“The trustworthiness of open data,” Boswarva explained, “depends on the particulars of the individual dataset and publisher. Some open data is robust, and some is rubbish. That doesn’t mean there’s anything wrong with open data as a concept. The same broad statement can be made about data that is available only on commercial terms. But there is a risk attached to open data that does not usually attach to commercial data.”

Read more.

Forward to a Friend!

Know someone who might be interested in joining the Mike2.0 Community? Forward this message to a friend

Questions?

If you have any questions, please email us at mike2@openmethodology.org

 

Category: Information Development
No Comments »

by: RickDelgado
16  Dec  2014

Are Tech Bans in the Workplace Effective?

You see them everywhere nowadays, whether you’re walking down the street, riding the bus, attending a sporting event, or eating at a restaurant. Smartphones have become a common sight, providing an easy and convenient way to communicate with family and friends through email, texting, and social media. The number of functions smartphones can perform is staggering, so it’s no wonder they’ve become such a major fixture in our lives. As mobile devices creep into everyday life, there’s one place where they might not be entirely welcome: the office. Smartphones have entered the workplace, and with workers performing their jobs on the road more frequently, a smartphone can often be the one connection they need to keep up with their workloads. But more and more businesses are coming to see smartphones as more of a nuisance than an advantage. Now, some organizations are even considering banning smartphones and similar technology.

If company leaders want to look for reasons to ban personal smartphones and tablets in the office, they don’t have to look far. A recent study backs up many concerns and worries generated by the infiltration of smartphones in work environments. According to the study, a whopping 95 percent of employees say they’re distracted during the workday. While the leading source of distraction is coworkers, 45 percent did say that they were distracted by technology like text messages and personal emails. People can become addicted to their smartphones, and it doesn’t take much for an employee to become sucked into their Facebook newsfeed.

In addition to the worries over distractions and missing productivity, business leaders also have significant fears over the security risks that might increase when people use their own devices in the workplace. Many businesses have actually encouraged people to bring personal mobile devices to work with bring your own device (BYOD) policies, but the result has been an increase in the volume and severity of security incidents. When workers are part of a BYOD policy, that means they’ll connect their devices to the business network. Should the device have malware or other damaging code on it, the network and other systems–along with other devices–may become infected as well, fully compromising IT security. Plus, there’s always the fear that an employee’s device will get lost or stolen, and if it has sensitive company data on it, the company may suffer damage.

So worries over distractions and security risks seem to show the need to ban smartphones and other mobile devices from the workplace, but that would ignore the benefits that come from bringing this technology. While they may cause distractions, smartphones have also been shown to increase productivity among employees. That’s one of the reasons workers like to use their own devices as well as a reason so many businesses have adopted BYOD in the first place. Employees can get more work done since they are already familiar with their devices and know how best to use them. BYOD policies can also save companies money since workers would be responsible for the costs of those devices.

There’s also the question of whether a ban on smartphones is even effective. While it’s true that some bans in Finland have been successful, tech bans in other parts of the world are more of a mixed bag. Research shows that around 70 percent of workers already bring their personal devices to work every day. Of those who do so, about 20 percent use personal smartphones that are restricted by their employers. As mentioned before, people grow attached to their smartphones, and many will likely bring them in even if told not to. Workers are also increasingly reliant on their devices for doing work-related tasks. Apps and other features have been developed with businesses in mind, making smartphones a valuable tool in the workplace. Many of the distractions found on smartphones (email, social media) can also be accessed through normal desktop computers as well. In short, tech bans might seem like a good idea at first, but they may prove ineffective in the long run.

Smartphones can be much more than a distraction or security risk. While businesses may be understandably cautious about mobile devices, their employees can get more done in less time. Ultimately, tech bans don’t accomplish what they set out to do, so a more active approach like monitoring mobile activity may be helpful in keeping workers on task and minimizing security threats. A few common sense rules and guidelines can go a long way to getting the most out of smartphones, which effectively avoids an outright ban.

 

Category: Business Intelligence
No Comments »

by: Ocdqblog
14  Dec  2014

Open Data Grey Areas

In a previous post, I discussed some data quality and data governance issues associated with open data. In his recent blog post How far can we trust open data?, Owen Boswarva raised several good points about open data.

“The trustworthiness of open data,” Boswarva explained, “depends on the particulars of the individual dataset and publisher. Some open data is robust, and some is rubbish. That doesn’t mean there’s anything wrong with open data as a concept. The same broad statement can be made about data that is available only on commercial terms. But there is a risk attached to open data that does not usually attach to commercial data.”

Data quality, third-party rights, and personal data were three grey areas Boswarva discussed. Although his post focused on a specific open dataset published by an agency of the government of the United Kingdom (UK), his points are generally applicable to all open data.

Data Quality

As Boswarva remarked, the quality of a lot of open data is high even though there is no motivation to incur the financial cost of verifying the quality of data being given away for free. The “publish early even if imperfect” principle also encourages a laxer data quality standard for open data. However, “the silver lining for quality-assurance of open data,” Boswarva explained, is that “open licenses maximize re-use, which means more users and re-users, which increases the likelihood that errors will be detected and reported back to the publisher.”

Third-Party Rights

The issue of third-party rights raised by Boswarva was one that I had never considered. His example was the use of a paid third-party provider to validate and enrich postal address data before it is released as part of an open dataset. Therefore, consumers of the open dataset benefit from postal validation and enrichment without paying for it. While the UK third-party providers in this example acquiesced to open re-use of their derived data because their rights were made clear to re-users (i.e., open data consumers), Boswarva pointed out that re-users should be aware that using open data doesn’t provide any protection from third-party liability and, more importantly, doesn’t create any obligation on open data publishers to make sure re-users are aware of any such potential liability. While, again, this is a UK example, that caution should be considered applicable to all open data in all countries.

Personal Data

As for personal data, Boswarva noted that while open datasets are almost invariably non-personal data, “publishers may not realize that their datasets contain personal data, or that analysis of a public release can expose information about individuals.” The example in his post centered on the postal addresses of property owners, which, without the names of the owners included in the dataset, are not technically personal data. However, it is easy to cross-reference this with other open datasets to assemble a lot of personally identifiable information that, if it were contained in one dataset, would be considered a data protection violation (at least in the UK).
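To make the cross-referencing risk concrete, here is a minimal sketch (the datasets, address and column names are hypothetical, not from Boswarva’s example) of how two individually non-personal open datasets can be joined on a shared postal address to re-identify an individual:

```python
# Hypothetical open dataset 1: property attributes keyed by address (no names)
properties = {
    "12 High St, Exampletown": {"tenure": "owner-occupied", "council_tax_band": "D"},
}

# Hypothetical open dataset 2: another public release keyed by the same address
public_register = {
    "12 High St, Exampletown": {"registered_person": "J. Smith"},
}

# Joining on address links a named person to property details, even though
# neither dataset is "personal data" on its own.
for address, details in properties.items():
    match = public_register.get(address)
    if match:
        print(address, details["council_tax_band"], match["registered_person"])
```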

 

Category: Data Quality, Information Governance
No Comments »

by: RickDelgado
12  Dec  2014

How Much Longer Until Flash Storage is the Only Storage?

In terms of trends, few are as clear and popular as flash storage. While other technology trends might be more visible among the general public (think the explosion of mobile devices), the rise of flash storage among enterprises of all sizes has the potential to make just as big of an impact in the world, even if it happens beneath the surface. There’s little question that the trend is growing and looks to continue over the next few years, but the real question revolves around flash storage and the other mainstream storage option: hard disk drives (HDD). While HDD remains more widely used, flash storage is quickly gaining ground. The question then becomes, how long do we have to wait before flash storage not only overtakes hard drives but becomes the only game in town? A careful analysis reveals some intriguing answers and possibilities for the future, but it also highlights a number of obstacles that still need to be overcome.

First, it’s important to look at why flash storage has become so popular in the first place. One of the main selling points of flash storage or solid-state drives (SSD) is speed. Compared to hard drives, flash storage offers much faster read and write performance. This is achieved by storing data on rewritable memory cells, which require no moving parts, unlike hard disk drives and their rotating platters (this also means flash storage is more durable). Increased speed and better performance mean apps and programs can launch more quickly. The capabilities of flash storage have become sorely needed in the business world since companies are now dealing with large amounts of information in the form of big data. To properly process and analyze big data, more businesses are turning to flash, which has sped up its adoption.

While it’s clear that flash array storage features a number of advantages in comparison to HDD, these advantages don’t automatically mean it is destined to be the sole storage option in the future. For such a reality to come about, solutions to a number of flash storage problems need to be found. The biggest concern and largest drawback to flash storage is the price tag. Hard drives have been around a long time, which is part of the reason the cost to manufacture them is so low. Flash storage is a more recent technology, and the price to use it can be a major barrier limiting the number of companies that would otherwise gladly adopt it. A cheap hard drive can be purchased for around $0.03 per GB. Flash storage is much more expensive at roughly $0.80 per GB. While that may not seem like much, keep in mind that’s about 27 times more expensive. For businesses being run on a tight budget, hard drives seem to be the more practical solution.
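To put that per-gigabyte gap in perspective, here is a back-of-the-envelope sketch using the article’s approximate prices (the 100 TB capacity is a hypothetical example, not a benchmark):

```python
HDD_PER_GB = 0.03   # approximate hard drive cost per GB (from the figures above)
SSD_PER_GB = 0.80   # approximate flash/SSD cost per GB (from the figures above)

capacity_gb = 100 * 1000              # hypothetical 100 TB of raw capacity

hdd_cost = capacity_gb * HDD_PER_GB   # ~$3,000
ssd_cost = capacity_gb * SSD_PER_GB   # ~$80,000

print(f"HDD: ${hdd_cost:,.0f}, SSD: ${ssd_cost:,.0f}, "
      f"ratio: {ssd_cost / hdd_cost:.0f}x")   # roughly 27x
```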

Beyond the price, flash storage may also suffer from performance problems down the line. While it’s true that flash storage is faster than HDD, it also has a more limited lifespan. Flash cells can only be rewritten so many times, so the more a business writes to its drives, the more performance will suffer. New technology has the potential to increase that lifespan, but it’s still a concern that enterprises will have to deal with in some fashion. Another problem is that many applications and systems that have been in use for years were designed with hard drives in mind. Apps and operating systems are starting to be created with SSD as the primary storage option, but more changes to existing programs need to happen before flash storage becomes the dominant storage solution.

So getting back to the original question, when will flash storage be the new king of storage options? Or is such a future even likely? Experts differ on what will happen within the next few years. Some believe that it will be a full decade before flash storage is more widely used than hard drives. Others have said that looking at hard drives and flash storage as competitors is the wrong perspective to have. They say the future lies with not one or the other but rather both used in tandem through hybrid systems. The idea would be to use flash storage for active data that is used frequently, while hard drives would be used for bulk storage and archive purposes. There are also experts who say discussion over which storage option will win out is pointless because within the next decade, better storage technologies like memristors, phase-change memory, and even atomic memory will become more mainstream. However the topic is approached, current advantages featured in flash storage make it an easy choice for enterprises with the resources to use it. For now, the trend of more flash looks like it will continue its impressive growth.
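A minimal sketch of the hybrid approach some experts describe (the threshold and dataset names are hypothetical): keep frequently accessed data on flash and push everything else to disk.

```python
def choose_tier(reads_per_day: int, hot_threshold: int = 100) -> str:
    """Place frequently read data on flash; archive the rest on HDD."""
    return "flash" if reads_per_day >= hot_threshold else "hdd"

# Hypothetical access statistics for two datasets
datasets = {"orders_current": 5_000, "orders_2009_archive": 2}
for name, reads in datasets.items():
    print(name, "->", choose_tier(reads))   # current -> flash, archive -> hdd
```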

Category: Web Content Management
No Comments »

by: Gil Allouche
10  Dec  2014

Pros and cons of Cloud providers and Hadoop solutions

Selecting a big data solution can be tricky at times with the different options available for enterprises. Deciding between a cloud big data analytics provider and an on-premise Hadoop solution comes down to recognizing the pros and cons of both options and how they will affect the bottom line.

Pros of On-premise Hadoop Solutions

As with any on-premise solution, on-premise Hadoop allows businesses to have complete control over their Hadoop cluster and, perhaps more importantly, their data. While the cloud is getting more accessible to industries facing heavy security and compliance regulations, some companies may prefer to keep everything in-house. On-premise Hadoop also avoids the complexity and potential lock-in of vendor SLA agreements.

Cons of On-premise Hadoop solutions

Investing in Hadoop hardware can prove to be expensive depending on how it’s being used. Despite using low-cost commodity servers, scaling to thousands of nodes can result in significant costs and demands attention to problems that are typically uncommon but increase in frequency with a large number of servers.
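As a rough illustration of how “low-cost” commodity hardware still adds up at scale (every figure below is hypothetical), a simple total-cost-of-ownership estimate might look like this:

```python
NODE_COST = 4_000            # hypothetical commodity server price, USD
ANNUAL_OPEX_PER_NODE = 800   # hypothetical power, cooling and admin share, USD
ANNUAL_FAILURE_RATE = 0.05   # hypothetical fraction of nodes failing each year

def cluster_cost(nodes: int, years: int = 3) -> float:
    """Very rough on-premise cluster cost: hardware, running costs, replacements."""
    capex = nodes * NODE_COST
    opex = nodes * ANNUAL_OPEX_PER_NODE * years
    replacements = nodes * ANNUAL_FAILURE_RATE * years * NODE_COST
    return capex + opex + replacements

for n in (10, 100, 1_000):
    print(f"{n:>5} nodes: ${cluster_cost(n):,.0f} over 3 years")
```

Note how the failure term, negligible at ten nodes, becomes a constant stream of hardware replacements at a thousand.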

Hadoop is also complex to maintain and manage. Companies will have to dedicate certain employees to deploying clusters every time a query is made. Once a cluster’s capacity is reached, it will also eat up resources adding additional nodes to the cluster.

Pros of Cloud providers

One of the pros of using a cloud provider versus an on-premise Hadoop solution is the scalable nature of the cloud. Cloud platforms allow for near-total scalability, giving companies access to effectively unlimited storage on demand. Enterprises can easily scale up or down depending on their IT requirements, supporting business growth without expensive changes to existing IT systems.

Another pro of using a cloud provider versus an on-premise Hadoop solution is the flexibility of solutions available both in and out of the workplace. Employees can more easily access files using devices like smartphones and laptops. Organizations can simultaneously share documents and other files over the cloud while supporting both internal and external collaboration.

Cons of Cloud Providers

Due to the massive growth of cloud computing, organizations are starting to rely on managed data centers run by cloud experts trained in maintaining and scaling shared, private and hybrid clouds. Companies that do not have their own data scientists will have to make changes to their current cloud computing structure to meet their evolving data needs.

A risk organizations take with cloud providers is relying on the provider’s level of security and responsiveness to technical issues. Though rare, the cloud provider may have downtime that impacts a business’s ability to run queries or meet customer demand for queries.

The path businesses take will depend on individual needs and circumstances as there are pros and cons to each type of solution. As big data moves mainstream, you will want to consider how you will take advantage of this resource.

 

Category: Business Intelligence
No Comments »

by: RickDelgado
06  Dec  2014

Keep Your Holidays Merry and Bright: Don’t Delay in Improving Holiday Security

With the holidays upon us, most businesses are dealing with what is usually their busiest time of the year. It’s a period of excitement and increased sales, but it’s also a time of worry and concern. In the wake of the recent data breaches at large retailers like Target and Home Depot, many businesses are approaching the holidays with a more cautious attitude, particularly toward security. Hackers have the potential to steal data and cause millions of dollars in damages, essentially crippling any business no matter their size. What’s even more alarming is that many companies haven’t responded effectively to the threat of security breaches. A recent study has shown that up to 58 percent of retailers are actually less secure compared to a year ago. While some may have added new network security features, cyber attackers have had added time to get inside a business system and take advantage of any weaknesses they have found. The lesson is that organizations need to work on their security for the holidays, and they need to do so immediately. Any delay could be costly.

 

When it comes to improving business security, one of the first steps is to identify where a company may be vulnerable. This can be accomplished primarily through vulnerability scans. These scans are basically automated tests that businesses can run to find weaknesses within their networks and systems. Any vulnerabilities found may eventually be exploited by hackers to infiltrate the network and steal valuable and sensitive information. This is a particular concern during the holidays since the number of credit card purchases increases dramatically. With the weaknesses properly identified, companies will know where to focus their attention.
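Dedicated scanners do far more than this, but a minimal sketch of the underlying idea (the host address and port list are hypothetical) is simply probing for services that should not be exposed:

```python
import socket

HOST = "192.0.2.10"                                   # hypothetical internal host
RISKY_PORTS = {21: "FTP", 23: "Telnet", 3389: "RDP"}  # services often left exposed

def is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

for port, service in RISKY_PORTS.items():
    if is_open(HOST, port):
        print(f"WARNING: {service} (port {port}) is reachable on {HOST}")
```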

 

Finding vulnerabilities as soon as possible is especially important because current hacking techniques are different than those used years ago. While some hackers may still employ traditional hit-and-run tactics, many others have the long game in mind. Companies that experience attacks during the holidays may actually be suffering from an infiltration that occurred as long as six months ago. Surprisingly, recent research has also shown that attacks during the holiday shopping season don’t actually increase in number, but that doesn’t mean hackers aren’t busy. Many may infiltrate a network during that time but not launch an attack until many months have passed. The main point is that finding vulnerabilities quickly is the first step businesses need to take, and fixing those problems needs to follow immediately.

 

Companies also need to be on guard for other cyber attacks targeting their business. One of the most common during the holidays is spear phishing. Spear phishing isn’t necessarily targeted toward a company’s network but rather at the employees. The idea is to deceive people into believing an email or similar message is real and have them click on a link. That link usually leads to downloading malware or some other type of virus. During the holidays, spear phishing usually comes in the form of fake charity emails, false shipping confirmations, or a fraudulent bank notification. Since users are making more unusual purchases during this time of year, they are more susceptible to believing this type of scam. While it may seem like this is more of a problem for individuals than for an organization, employees are using personal mobile devices at work much more often through BYOD policies, and those devices often connect to the company’s network. If those devices have been infected with malware, the business could be in trouble.
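Mail security gateways use far richer signals than this, but a toy sketch of one common check (the domains and link below are hypothetical) is flagging a link whose visible text claims a trusted brand while the actual target points elsewhere:

```python
from urllib.parse import urlparse

def looks_like_phish(display_text: str, href: str, trusted_domains: set) -> bool:
    """Flag links whose text suggests a trusted sender but whose target domain isn't trusted."""
    target = urlparse(href).hostname or ""
    claims_trust = any(d.split(".")[0] in display_text.lower() for d in trusted_domains)
    is_trusted = any(target == d or target.endswith("." + d) for d in trusted_domains)
    return claims_trust and not is_trusted

# Hypothetical "shipping confirmation" link
print(looks_like_phish("Track your FedEx package",
                       "http://fedex.example-tracking.biz/pkg",
                       {"fedex.com"}))   # True: text claims FedEx, target is not fedex.com
```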

 

Combating this type of cyber attack requires companies to inform and train their employees. Workers need to know about the security threats that are out there. That means spotting the warning signs and knowing how to respond to them. This is especially important during the holidays since many workers are only seasonal and may not receive adequate training. Even if the job is temporary, employees still need to be kept up to date about the risks and how to prevent them. Taking this proactive step immediately can help businesses avoid security breaches during the holidays and into the future.

 

The last thing any business wants is to deal with a security breach during the holidays. Though the threats may feel overwhelming, it’s never too late to start improving security, finding vulnerabilities, and educating employees about the dangers they may face. Fighting the threats is an ongoing battle that should receive extra attention at any time of year, not just the holiday shopping season. With better security, businesses can feel more confident about protecting customer information and preparing for another busy year ahead.

 

Category: Business Intelligence
No Comments »

by: Bsomich
06  Dec  2014

MIKE2.0 Community Update

 

Did You Know? 

MIKE’s Integrated Content Repository brings together the open assets from the MIKE2.0 Methodology, shared assets available on the internet and internally held assets. The Integrated Content Repository is a virtual hub of assets that can be used by an Information Management community, some of which are publicly available and some of which are held internally.

Any organisation can follow the same approach and integrate their internally held assets to the open standard provided by MIKE2.0 in order to:

  • Build community
  • Create a common standard for Information Development
  • Share leading intellectual property
  • Promote a comprehensive and compelling set of offerings
  • Collaborate with the business units to integrate messaging and coordinate sales activities
  • Reduce costs through reuse and improve quality through known assets

The Integrated Content Repository is a true Enterprise 2.0 solution: it makes use of the collaborative, user-driven content built using Web 2.0 techniques and technologies on the MIKE2.0 site and incorporates it internally into the enterprise. The approach followed to build this repository is referred to as a mashup.

Feel free to try it out when you have a moment; we’re always open to new content ideas.

Sincerely,

MIKE2.0 Community  

Contribute to MIKE:

Start a new article, help with articles under construction or look for other ways to contribute.

Update your personal profile to advertise yourself to the community and interact with other members.

Useful Links:
Home Page
Login
Content Model
FAQs
MIKE2.0 Governance


Did You Know?
All content on MIKE2.0 and any contributions you make are published under the Creative Commons license. This allows you free re-use of our content as long as you add a brief reference back to us.

 

This Week’s Blogs for Thought:

Could Data Governance Actually Endanger Data?

One of my favorite books is SuperFreakonomics by economist Steven Levitt and journalist Stephen Dubner, in which, as with their first book and podcast, they challenge conventional thinking on a variety of topics, often revealing counterintuitive insights about how the world works. One of the many examples from the book is their analysis of the Endangered Species Act (ESA) passed by the United States in 1973 with the intention to protect critically imperiled species from extinction.

Read more.

Trading Your Way to IT Simplicity

Stop reading now if your organisation is easier to navigate today than it was 3, 5 or 10 years ago.  The reality that most of us face is that the general ledger that might have cost $100,000 to implement twenty or so years ago will now cost $1 million or even $10 million.  Just as importantly, it is getting harder to implement new products, services or systems. The cause of this unsustainable business malaise is the complexity of the technology we have chosen to implement.

Read more.
Improving Network Security for Small Businesses 

You’ve likely read all about them–the massive security breaches and cyber attacks hitting major corporations like Home Depot, Target, and even the New York Times. These damaging attacks have cost these companies millions of dollars in damages, and they’re just a portion of all the security risk stories out there. As a small business owner, you may be tempted to think your company doesn’t have to worry as much about cyber attackers inflicting damage on your operations. After all, compared to a big business, your company has relatively few resources and doesn’t leave nearly as big of a footprint on the market. That belief, however, could leave you and your business vulnerable.

Read more.

 

Category: Information Development
No Comments »

by: Ocdqblog
30  Nov  2014

Could data governance actually endanger data?

One of my favorite books is SuperFreakonomics by economist Steven Levitt and journalist Stephen Dubner, in which, as with their first book and podcast, they challenge conventional thinking on a variety of topics, often revealing counterintuitive insights about how the world works.

One of the many examples from the book is their analysis of the Endangered Species Act (ESA) passed by the United States in 1973 with the intention to protect critically imperiled species from extinction.

Levitt and Dubner argued the ESA could, in fact, be endangering more species than it protects. After a species is designated as endangered, the next step is to designate the geographic areas considered critical habitats for that species. After an initial set of boundaries is made, public hearings are held, allowing time for developers, environmentalists, and others to have their say. The process to finalize the critical habitats can take months or even years. This lag time creates a strong incentive for landowners within the initial geographic boundaries to act, either before their property is declared a critical habitat or out of concern that it could attract endangered species. Trees are cut down to make the land less hospitable, or development projects are fast-tracked before ESA regulation would prevent them. This often has the unintended consequence of hastening the destruction of more critical habitats and expediting the extinction of more endangered species.

This made me wonder whether data governance could be endangering more data than it protects.

After a newly launched data governance program designates the data that must be governed, the next step is to define the policies and procedures that will have to be implemented. A series of meetings are held, allowing time for stakeholders across the organization to have their say. The process to finalize the policies and procedures can take weeks or even months. This lag time provides an opportunity for developing ways to work around data governance processes once they are in place, or ways to simply not report issues. Either way this can create the facade that data is governed when, in fact, it remains endangered.

Just as it’s easy to make the argument that endangered species should be saved, it’s easy to make the argument that data should be governed. Success is a more difficult argument. While the ESA has listed over 2,000 endangered species, only 28 have been delisted due to recovery. That’s a success rate of only one percent. While the success rate of data governance is hopefully higher, as Loraine Lawson recently blogged, a lot of people don’t know if their data governance program is on the right track or not. And that fact in itself might be endangering data more than not governing data at all.

 

Category: Information Governance
No Comments »

by: Robert.hillard
22  Nov  2014

Trading your way to IT simplicity

Stop reading now if your organisation is easier to navigate today than it was 3, 5 or 10 years ago.  The reality that most of us face is that the general ledger that might have cost $100,000 to implement twenty or so years ago will now cost $1 million or even $10 million.  Just as importantly, it is getting harder to implement new products, services or systems.

The cause of this unsustainable business malaise is the complexity of the technology we have chosen to implement.

For the general ledger it is the myriad of interfaces. For financial services products it is the number of systems that need to keep a record of every aspect of business activity. For telecommunications it is the bringing together of the OSS and BSS layers of the enterprise. Every function and industry has its own good reasons for the added complexity.

However good the reasons, the result is that it is generally easier to innovate in a small nimble enterprise, even a start-up, than in the big corporates that are the powerhouse of our economies.

While so much of the technology platform creates efficiencies, often enormous and essential to the productivity of the enterprise, it generally doesn’t support or even permit rapid change.  It is really hard to design the capacity to change into the systems that support the organisation.  The more complex an environment becomes the harder it is to implement change.

Enterprise architecture

Most organisations recognise the impact of complexity and try to reduce it by implementing an enterprise architecture in one form or another.  Supporting the architecture is a set of principles which, if implemented in full, will support consistency and dramatically reduce the cost of change.  Despite the best will in the world, few businesses or governments succeed in realising their lofty architectural principles.

The reason is that, while architecture is seen as the solution, it is too hard to implement. Most IT organisations run their business through a book of projects. Each project signs up to an architecture but quickly implements compromises as challenges arise.

It’s no wonder that architects are perhaps the most frustrated of IT professionals.  At the start of each project they get wide commitment to the principles they espouse.  As deadlines loom, and the scope evolves, project teams make compromises.  While each compromise may appear justified they have the cumulative effect of making the organisation more rather than less complex.

Complexity has a cost. If this cost is fully appreciated, the smart organisation can see the value in investing in simplification.

Measuring simplicity

While architects have a clear vision of what “simple” looks like, they often have a hard time putting a measure against it.  It is this lack of a measure that makes the economics of technology complexity hard to manage.

Increasingly though, technologists are realising that it is in the fragmentation of data across the enterprise that real complexity lies.  Even when there are many interacting components, if there is a simple relationship between core information concepts then the architecture is generally simple to manage.

Simplicity can be achieved through decommissioning (see Value of decommissioning legacy systems) or by reducing the duplication of data.  This can be measured using the Small Worlds measure as described in MIKE2.0 or chapter 5 of my book Information-Driven Business.  The idea is further extended as “Hillard’s Graph Complexity” in Michael Blaha’s book, UML Database Modelling Workbook.

In summary, the measure looks at how many steps are required to bring together key concepts such as customer, product and staff.  The more fragmented information is, the more difficult any business change or product implementation becomes.
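A minimal sketch of that measurement (the data landscape below is hypothetical, and this is a simplified reading of the Small Worlds idea rather than the full MIKE2.0 definition): treat concepts and systems as a graph and count the hops needed to bring the core concepts together.

```python
from collections import deque
from itertools import combinations

# Hypothetical enterprise data landscape: which concepts/systems share keys
graph = {
    "customer":  ["crm", "billing"],
    "crm":       ["customer", "campaigns"],
    "billing":   ["customer", "ledger"],
    "product":   ["catalogue"],
    "catalogue": ["product", "campaigns"],
    "campaigns": ["crm", "catalogue"],
    "ledger":    ["billing", "staff"],
    "staff":     ["ledger"],
}

def hops(start: str, goal: str) -> int:
    """Breadth-first search: how many steps to bring two concepts together."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return -1  # unreachable

for a, b in combinations(["customer", "product", "staff"], 2):
    print(f"{a} -> {b}: {hops(a, b)} steps")
```

The more hops (and the more duplicated paths) between core concepts, the more fragmented the data and the more expensive any change becomes.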

Consider the general ledger discussed earlier. In its first implementation in the twentieth century, each key concept associated with the chart of accounts would have been managed in a master list. By the time we implement the same functionality today, there are literally hundreds, if not thousands, of points where various parts of the chart of accounts are required to index interfaces to subsidiary systems across the enterprise.

Trading simplicity

One approach to realising these benefits is to have dedicated simplification projects.  Unfortunately these are the first projects that get cut if short-term savings are needed.

Alternatively, imagine if every project that adds complexity (a little like adding pollution) needed to offset that complexity with equal and opposite “simplicity credits”.  Having quantified complexity, architects are well placed to define whether each new project simplifies the enterprise or adds complexity.

Some projects simply have no choice but to add complexity.  For example, a new marketing campaign system might have to add customer attributes.  However, if they increase the complexity they should buy simplicity “offsets” a little like carbon credits.

The implementation of a new general ledger might provide a great opportunity to reduce complexity by bringing various interfaces together, or it could add to it by increasing the sophistication of the chart of accounts.

In some cases, a project may start off simplifying the enterprise by using enterprise workflow or leveraging a third-party cloud solution, only to be forced, in the heat of implementation, to make compromises that turn it into a net complexity “polluter”.

The CIO has a role to act as the steward of the enterprise and measure this complexity. Project managers should not be allowed to forget their responsibility to leave the organisation cleaner and leaner at the conclusion of their project. They should include the cost of this in their project budget and purchase offsetting credits from others if they cannot deliver within the original scope due to complicating factors.
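One way to picture the trading mechanism (a sketch only; the scoring scale and credit price are hypothetical): each project records the complexity it adds or removes, and any net “polluter” must buy enough credits to get back to net zero.

```python
class SimplicityLedger:
    """Track per-project complexity deltas; positive values mean added complexity."""

    def __init__(self, price_per_point: float = 10_000.0):
        self.price_per_point = price_per_point  # hypothetical dollars per complexity point
        self.projects = {}                      # project name -> cumulative complexity delta

    def record(self, project: str, complexity_delta: float) -> None:
        self.projects[project] = self.projects.get(project, 0.0) + complexity_delta

    def offsets_owed(self, project: str) -> float:
        """Cost of credits a net polluter must buy to reach net zero."""
        delta = self.projects.get(project, 0.0)
        return max(delta, 0.0) * self.price_per_point

ledger = SimplicityLedger()
ledger.record("marketing campaign system", +3)       # adds customer attributes
ledger.record("ledger interface consolidation", -5)  # retires duplicate feeds
print(ledger.offsets_owed("marketing campaign system"))  # 30000.0
```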

Those that are most impacted by complexity can pick their priority areas for funding. Early wins will likely reduce support costs and errors in customer service. Far from languishing in the backblocks of the portfolio, project managers will be queueing up to rid the organisation of many of these long-term annoyances to get the cheapest simplicity credits that they can find!

Category: Enterprise2.0, Information Governance, Information Strategy
No Comments »
