by: Ocdqblog
29  Jun  2014

Overcoming Outcome Bias

What is more important, the process or its outcome? Information management processes, like those described by the MIKE2.0 Methodology, drive the daily operations of an organization’s business functions as well as support the tactical and strategic decision-making processes of its business leaders. However, an organization’s success or failure is usually measured by the outcomes produced by those processes.

As Duncan Watts explained in his book Everything Is Obvious: How Common Sense Fails Us, “rather than the evaluation of the outcome being determined by the quality of the process that led to it, it is the observed nature of the outcome that determines how we evaluate the process.” This is known as outcome bias.

While an organization is enjoying positive outcomes, such as exceeding its revenue goals for the current fiscal period, outcome bias bathes its processes in a rose-colored glow. Surely the information management processes must be providing high-quality data to decision-making processes, which business leaders are using to make good decisions. However, when an organization is suffering from negative outcomes, such as a regulatory compliance failure, outcome bias blames the failure on broken information management processes and poor data quality that led to bad decision-making.

“Judging the merit of a decision can never be done simply by looking at the outcome,” explained Jeffrey Ma in his book The House Advantage: Playing the Odds to Win Big In Business. “A poor result does not necessarily mean a poor decision. Likewise a good result does not necessarily mean a good decision.”

“We are prone to blame decision makers for good decisions that worked out badly and to give them too little credit for successful moves that appear obvious after the fact,” explained Daniel Kahneman in his book Thinking, Fast and Slow.

While risk mitigation is an oft-cited business justification for investing in information management, Kahneman also noted how outcome bias can “bring undeserved rewards to irresponsible risk seekers, such as a general or an entrepreneur who took a crazy gamble and won. Leaders who have been lucky are never punished for having taken too much risk. Instead, they are believed to have had the flair and foresight to anticipate success. A few lucky gambles can crown a reckless leader with a halo of prescience and boldness.”

Outcome bias triggers overreactions to both success and failure. Organizations that try to reverse engineer a single, successful outcome into a formal, repeatable process often fail, much to their surprise. Organizations also tend to abandon a new process immediately if its first outcome is a failure. “Over time,” Ma explained, “if one makes good, quality decisions, one will generally receive better outcomes, but it takes a large sample set to prove this.”

Your organization needs solid processes governing how information is created, managed, presented, and used in decision-making. Your organization also needs to guard against outcomes biasing your evaluation of those processes.

In order to overcome outcome bias, Watts recommended we “bear in mind that a good plan can fail while a bad plan can succeed—just by random chance—and therefore judge the plan on its own merits as well as the known outcome.”

 

Category: Data Quality, Information Development

by: RickDelgado
25  Jun  2014

Cloud Computing Trends to Watch for in 2014 and Beyond

Businesses tapping into the potential of cloud computing now make up the vast majority of enterprises out there. If anything, it’s those companies disregarding the cloud that have fallen way behind the rest of the pack. According to the most recent State of the Cloud Survey from RightScale, 87% of organizations are using the public cloud. Needless to say, businesses have figured out just how advantageous it is to make use of cloud computing, but now that it’s mainstream, experts are trying to predict what’s next for the incredibly useful technology. Here’s a look at some of the latest trends for cloud computing for 2014 and what’s to come in the near future.

1. IT Costs Reduced

There are a multitude of reasons companies have gotten involved with cloud computing. One of the main reasons is to reduce operational costs. This has proven true so far, but as more companies move to the cloud, those savings will only increase. In particular, organizations can expect to see a major reduction in IT costs. Adrian McDonald, president of EMEA at EMC, says the unit cost of IT could decrease by more than 38%. This development could allow for newer, more creative services to come from the IT department.

2. More Innovations

Speaking of innovations, the proliferation of cloud computing is helping business leaders use it for more creative solutions. At first, many felt cloud computing would allow companies to run their business in mostly the same way, only with a different delivery model. But as cloud computing becomes more common, companies are finding ways to gain new insights and develop new processes, effectively changing the way they did business before.

3. Engaging With Customers

To grow a business, one must attract new customers and hold onto those that are already loyal. Customer engagement is extremely important, and cloud computing is helping companies find new ways to do just that. By powering systems of engagement, cloud computing can optimize how businesses interact with customers. This is done with database technologies along with collecting and analyzing big data, which is used to create new methods of reaching out to customers. With cloud computing’s easy scalability, this level of engagement is within the grasp of every enterprise no matter the size.

4. More Media

Another trend to watch out for is the increased use of media among businesses, even if they aren’t media companies. Werner Vogels, the vice president and CTO of Amazon, says that cloud computing is giving businesses media capabilities that they simply didn’t have before. Companies can now offer daily, fresh media content to customers, which can serve as another avenue for revenue and retention.

5. Expansion of BYOD

Bring Your Own Device (BYOD) policies are already pretty popular with companies around the world. With cloud computing reaching a new high point, expect BYOD to expand even faster. With so many wireless devices in employees’ hands, the cloud becomes a necessity for storing and accessing valuable company data. IT personnel are also finding ways to use cloud services through mobile device management, mainly to organize and keep track of each worker’s activities.

6. More Hybrid Cloud

Whereas before there was a lot of debate over whether public or private cloud should be used by a company, it has now become clear that businesses are choosing to use hybrid clouds. The same RightScale cloud survey mentioned before shows that 74% of organizations have already developed a hybrid cloud strategy, with more than half of them already using it. Hybrid clouds combine private cloud security with the power and scalability of public clouds, basically giving companies the advantages of both. It also allows IT to come up with customized solutions while maintaining a secure infrastructure.

These are just a few of the trends that are happening as cloud computing expands. Its growth has been staggering, fueling greater innovation in companies as they look to save on operational costs. As more and more businesses get used to what the cloud has to offer and how to take full advantage of its benefits, we can expect even greater developments in the near future. For now, the technology will continue to be a valuable asset to every organization that makes the most of it.

Category: Business Intelligence

by: Gil Allouche
24  Jun  2014

Tips for Creating a Big Data Infrastructure

Big data is making an impact in nearly every industry across the globe. It’s fundamentally changing the way companies do business and how consumers find and buy products and services.

With the large amount of change that is occurring so rapidly in the big data industry, companies are feeling the pressure to act quickly to stay ahead of the competition. The sooner companies can begin implementing big data, the better, but only if the company is ready. Big data should propel companies forward, not hold them back. A focus for companies as they begin exploring big data implementation is establishing a successful infrastructure. After all, the infrastructure is where everything begins with big data. If it’s implemented correctly, a premier infrastructure will ensure that big data serves its primary purpose — increasing business success and consumer satisfaction.

Here are four things companies can do to establish a successful infrastructure.

1. Establish a Purpose
Why do you want to use big data, and what are you going to use it for? In order for a company to establish the right kind of infrastructure, it needs to know how much and what kind of data it’s going to be handling, along with how that data is going to help the company succeed. From a workforce standpoint, companies need to understand the staffing needs that come with any type of big data infrastructure and make sure they have the manpower to meet those needs. Last, the business needs to understand the financial implications of establishing a successful infrastructure. A significant investment of money will be needed if it’s going to succeed. That investment will pay great dividends later, whereas companies that take shortcuts to reduce expenditures will pay significantly more down the road and hinder their ability to innovate now and in the future.

2. Secure the Data
Securing data is the most important thing a company can do. An enormous advantage for companies with their own infrastructure is the inherent data security that comes with it: unlike with big data in the cloud, the company controls all aspects of its data. However, companies still have work to do, and if a company lacks expertise in this area, a big data platform in the cloud can actually be the more secure route. When evaluating a cloud computing provider, it’s important to establish and understand who can access the data, when they can access it and what happens with the data when it’s accessed. Appropriate and simple measures will ensure your data remains safe.

3. Make it Scalable
Your company is going to evolve, so make sure your infrastructure can handle change. The recently published eBook “Running a Big Data Infrastructure: Five Areas That Need Your Attention” calls this ability to change “elasticity.” Your infrastructure has to be flexible enough to move through the ebbs and flows of your business — the good times and the bad. This type of flexibility doesn’t necessarily require more and more servers to handle the excess information. It requires “elasticity” in the existing technology, allowing it to handle more or less data than originally intended.

4. Monitor the data
The ebook also points out the importance of constant monitoring. If your company is going to install a big data infrastructure, it’s a sign that big data will play a big part in the future of your business. Because of that, it’s important that your infrastructure is up to speed, all the time. Delays of any sort in the infrastructure can prove extremely costly to your business. As with any new technology, there are going to be glitches and bugs in the system. Without constant monitoring, those could have an enormous impact on your company. With monitoring, however, they become small inconveniences instead of big problems.

Implementing a big data infrastructure can be one of the best moves for your business if it’s done correctly. It allows companies to access and analyze troves of information that can improve every aspect of the business. Companies will succeed by understanding why they want to implement big data and how it’s going to be used, and then focusing on making it secure, flexible and monitored.

Category: Enterprise Data Management

by: Robert.hillard
22  Jun  2014

The Quantum Computer dream could be killed by Information Management

For years now the physics community has been taking the leap into computer science through the pursuit of the quantum computer.  As weird as the concepts underpinning the idea of such a device are, even weirder is the threat that this machine of the future could pose to business and government today.

There are many excellent primers on quantum computing but in summary physicists hope to be able to use the concept of superposition to allow one quantum computer bit (called a “qubit”) to carry the value of both zero and one at the same time and also to interact with other qubits which also have two simultaneous values.

The hope is that a quantum computer would come up with answers to useful questions in far fewer processing steps than a conventional computer, as many different combinations would be evaluated at the same time.  Algorithms that use this approach are generally in the category of solution finding (best paths, factors and other similar complex problems).
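
In standard notation (not spelled out in the original post, but uncontroversial), a single qubit is a superposition

|\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,

and a register of n qubits is a superposition over all 2^n classical bit patterns,

|\Psi\rangle = \sum_{x=0}^{2^n - 1} c_x\,|x\rangle, \qquad \sum_x |c_x|^2 = 1.

Each quantum operation acts on all 2^n amplitudes at once, which is where the hoped-for reduction in processing steps comes from; the catch is that a measurement returns only one outcome, so algorithms such as Shor’s must be engineered to make the desired answer overwhelmingly likely.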

As exciting as the concept of a quantum computer sounds, one of the applications of this approach would be a direct threat to many aspects of modern society.  Shor’s algorithm provides an approach to integer factorisation using a quantum computer, which is like a passkey to the encryption used across our digital world.

The cryptography techniques that dominate the internet are based on the principle that it is computationally infeasible to find the factors of a large number.  However, Shor’s algorithm provides an approach that would crack the code if a quantum computer could actually be built.
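
To make that dependence concrete, here is a minimal sketch of textbook RSA in Python with deliberately tiny, insecure numbers (purely illustrative, not anything from the post): whoever can factor the public modulus can reconstruct the private key and read every intercepted message.

# Toy RSA with deliberately tiny primes -- for illustration only.
# Real keys use moduli of 2048+ bits, which classical computers cannot
# factor in practical time; Shor's algorithm on a large enough quantum
# computer could.

p, q = 61, 53                  # the secret primes
n = p * q                      # public modulus (3233)
e = 17                         # public exponent
phi = (p - 1) * (q - 1)        # totient, computable only if you know p and q
d = pow(e, -1, phi)            # private exponent (modular inverse, Python 3.8+)

message = 65
ciphertext = pow(message, e, n)    # encrypt with the public key (n, e)
recovered = pow(ciphertext, d, n)  # decrypt with the private key d
assert recovered == message

# An attacker who factors n (trivial here: 3233 = 61 x 53) can redo the two
# lines that compute phi and d, and thereby decrypt any captured traffic.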

Does it matter today?

We’re familiar with businesses of today being disrupted by new technology tomorrow.  But just as weird as the concept of quantum superposition is the possibility that the computing of tomorrow could disrupt the business of today!

We are passing vast quantities of data across the internet.  Much of it is confidential and encrypted.  Messages that we are confident will remain between the sender and receiver.  These include payments, conversations and, through the use of virtual private networks, much of the internal content of both companies and government.

It is possible that parties hoping to crack this content in the future are taking the opportunity to store it today.  Due to the architecture of the internet, there is little to stop anyone from intercepting much of this data and storing it without anyone having any hint of its capture.

In the event that a quantum computer capable of running Shor’s algorithm is built, the first thought will need to be to ask what content could have been intercepted and what secrets might be open to being exposed.  The extent of the exposure could be so much greater than might appear at first glance.

How likely is a quantum computer to be built?

There is one commercially available device marketed as a quantum computer, called the D-Wave (from D-Wave Systems).  Sceptics, however, have published doubts that it is really operating based on the principles of Quantum Computing.  Even more importantly, there is no suggestion that it is capable of running Shor’s algorithm or that it is a universal quantum computer.

There is a great deal of evidence that the principles of quantum computing are consistent with the laws of physics as they have been uncovered over the past century.  At the same time as physics is branching into computing, the information theory branch of computing is expanding into physics.  Many recent developments in physics are borrowing directly from the information discipline.

It is possible, though, that information theory as applied to information management problems could provide confidence that a universal quantum computer is not going to be built.

Information entropy

Information entropy was initially constructed by Claude Shannon to provide a tool for quantifying information.  While the principles were deliberately analogous to thermal entropy, it has subsequently become clear that the information associated with particles is as important as the particles themselves.  Chapter 6 of my book, Information-Driven Business, explains these principles in detail.
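
For reference (the post itself does not reproduce it), Shannon’s measure for a source whose outcomes occur with probabilities p_i is

H = -\sum_i p_i \log_2 p_i \quad \text{(bits)},

which deliberately takes the same form as the Gibbs expression for thermal entropy, S = -k_B \sum_i p_i \ln p_i. That formal identity is what makes the interchangeable modelling described below possible.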

It turns out that systems can be modelled on information or thermal entropy interchangeably.  As a result, a quantum computer that needs to obey the rules of information theory also needs to obey the laws of thermal entropy.

The first law of thermodynamics was first written by Rudolf Clausius in 1850 as: “In all cases in which work is produced by the agency of heat, a quantity of heat is consumed which is proportional to the work done; and conversely, by the expenditure of an equal quantity of work an equal quantity of heat is produced”.

Rewording over time has added sophistication but fundamentally, the law is a restatement of the conservation of energy.  Any given system cannot increase the quantity of energy or, as a consequence of the connection between thermal and information entropy, the information that it contains.

Any computing device, regardless of whether it is classical or quantum in nature, consumes energy based on the amount of information that is being derived, as determined by the information entropy of the device.  While it is entirely possible that massive quantities of information could be processed in parallel, there is no escaping this constraint: a quantum computer truly delivering that level of computing would require the same order of energy as the thousands or even millions of classical computers needed to deliver the same result.
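
The usual quantitative bridge between information and energy (not cited in the post, but the standard reference point) is Landauer’s principle: erasing one bit of information, an irreversible operation, dissipates at least

E_{\min} = k_B T \ln 2 \approx 1.38\times10^{-23}\,\text{J/K} \times 300\,\text{K} \times 0.693 \approx 2.9\times10^{-21}\,\text{J}

at room temperature. Scaling that lower bound by the sheer volume of information a universal quantum computation would have to handle is the kind of accounting the argument above depends on.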

I anticipate that developers of quantum computers will either find that the quantity of energy required to process is prohibitive or that their qubits will constantly frustrate their every effort to maintain coherence for long enough to complete useful algorithms.

Could I be wrong?

Definitely!  In a future post I propose to create a scorecard tracking the predictions I’ve made over the years.

However, anyone who claims to really understand quantum mechanics is lying.  Faced with the unbelievably complex wave functions required for quantum mechanics which seem to defy any real world understanding, physicist David Mermin famously advised his colleagues to just “Shut up and calculate!”.

Because of the impact of a future quantum computer on today’s business, the question is far from academic and deserves almost as much investment as the exploration of these quantum phenomena does in its own right.

At the same time, the investments in quantum computing are far from wasted.  Even if no universal quantum computer is possible, the specialised devices that are likely to follow the D-Wave machine are going to prove extremely useful in their own right.

Ultimately, the convergence of physics and computer science can only benefit both fields as well as the business and government organisations that depend on both.

Category: Information Management, Information Strategy

by: Bsomich
21  Jun  2014

Community Update.

 

 

Have you seen our Open MIKE Series? 

The Open MIKE Podcast is a video podcast show which discusses aspects of the MIKE2.0 framework, and features content contributed to MIKE 2.0 Wiki Articles, Blog Posts, and Discussion Forums.


For more information on MIKE2.0 or how to get involved with our online community, please visit www.openmethodology.org.

Sincerely,

MIKE2.0 Community  

Contribute to MIKE:

Start a new article, help with articles under construction or look for other ways to contribute.

Update your personal profile to advertise yourself to the community and interact with other members.

Useful Links:
Home Page
Login
Content Model
FAQs
MIKE2.0 Governance


 

Did You Know?
All content on MIKE2.0 and any contributions you make are published under the Creative Commons license. This allows you free re-use of our content as long as you add a brief reference back to us.

 

This Week’s Blogs for Thought:

Open and Secure Personal Data

In his book Open Data Now: The Secret to Hot Startups, Smart Investing, Savvy Marketing, and Fast Innovation, Joel Gurin explained a type of Open Data called Smart Disclosure, which was defined as “the timely release of complex information and data in standardized, machine-readable formats in ways that enable consumers to make informed decisions.”
As Gurin explained, “Smart Disclosure combines government data, company information about products and services, and data about an individual’s own needs to help consumers make personalized decisions. Since few people are database experts, most will use this Open Data through an intermediary—a choice engine that integrates the data and helps people filter it by what’s important to them, much the way travel sites do for airline and hotel booking. These choice engines can tailor the options to fit an individual’s circumstances, budget, and priorities.”

Read more.

Careers in Technology: Is there a future?

Is there a future for careers in Information Technology?  Globally, professional societies such as the British Computer Society and the Australian Computer Society have long argued that practitioners need to be professionals.  However, there is a counter-argument that technology is an enabler for all professions and is more generally a capability of many rather than a profession of the few.

Read more.

An Open Source Solution for Better Performance Management 

Today, many organizations are facing increased scrutiny and a higher level of overall performance expectation from internal and external stakeholders. Both business and public sector leaders must provide greater external and internal transparency to their activities, ensure accounting data faces up to compliance challenges, and extract the return and competitive advantage out of their customer, operational and performance information: managers, investors and regulators have a new perspective on performance and compliance.

Read more.

Forward to a Friend!

Know someone who might be interested in joining the Mike2.0 Community? Forward this message to a friend

Questions?

If you have any questions, please email us at mike2@openmethodology.org.

 

 



Category: Information Development

by: Ocdqblog
17  Jun  2014

Open and Secure Personal Data

In his book Open Data Now: The Secret to Hot Startups, Smart Investing, Savvy Marketing, and Fast Innovation, Joel Gurin explained a type of Open Data called Smart Disclosure, which was defined as “the timely release of complex information and data in standardized, machine-readable formats in ways that enable consumers to make informed decisions.”

As Gurin explained, “Smart Disclosure combines government data, company information about products and services, and data about an individual’s own needs to help consumers make personalized decisions. Since few people are database experts, most will use this Open Data through an intermediary—a choice engine that integrates the data and helps people filter it by what’s important to them, much the way travel sites do for airline and hotel booking. These choice engines can tailor the options to fit an individual’s circumstances, budget, and priorities.”

Remember (if you are old enough) what it was like to make travel arrangements before websites like Expedia, Orbitz, Travelocity, Priceline, and Kayak existed, and you can imagine the immense consumer-driven business potential for applying Smart Disclosure and choice engines to every type of consumer decision.

“Smart Disclosure works best,” Gurin explained, “when it brings together data about the services a company offers with data about the individual consumer. Smart Disclosure includes giving consumers data about themselves—such as their medical records, cellphone charges, or patterns of energy use—so they can choose the products and services uniquely suited to their needs. This is Open Data in a special sense: it’s open only to the individual whom the data is about and has to be released to each person under secure conditions by the company or government agency that holds the data. It’s essential that these organizations take special care to be sure the data is not seen by anyone else. Many people may balk at the idea of having their personal data released in a digital form. But if the data is kept private and secure, giving personal data back to individuals is one of the most powerful aspects of Smart Disclosure.”

Although it sounds like a paradox, the best way to secure our personal data may be to make it open. Currently most of our own personal data is closed—especially to us, which is the real paradox.

Some of our personal data is claimed as proprietary information by the companies we do business with. Data about our health is cloaked by government regulations intended to protect it, but which mostly protect doctors from getting sued while giving medical service providers and health insurance companies more access to our medical history than we have.

If all of our personal data was open to us, and we controlled the authorization of secure access to it, our personal data would be both open and secure. This would simultaneously protect our privacy and improve our choice as consumers.

 

Category: Information Development

by: Bsomich
07  Jun  2014

Community Update.

Missed what’s been happening in the MIKE2.0 community? Check out our bi-weekly update:

 


Available for Order: Information Development Using MIKE2.0

Have you heard? Our new book, “Information Development Using MIKE2.0” is available for order.

The vision for Information Development and the MIKE2.0 Methodology have been available in a collaborative, online fashion since 2006, and are now available in print to a wider audience, highlighting key wiki articles, blog posts, case studies and user applications of the methodology.

Authors for the book include Andreas Rindler, Sean McClowry, Robert Hillard, and Sven Mueller, with additional credit due to Deloitte, BearingPoint and over 7,000 members and key contributors of the MIKE2.0 community. The book has been published in paperback as well as on all major e-book publishing platforms.

Get Involved:

To get your copy of the book, visit our order page on Amazon.com. For more information on MIKE2.0 or how to get involved with our online community, please visit www.openmethodology.org.

Sincerely,

MIKE2.0 Community  

Contribute to MIKE:

Start a new article, help with articles under construction or look for other ways to contribute.

Update your personal profile to advertise yourself to the community and interact with other members.

Useful Links:
Home Page
Login
Content Model
FAQs
MIKE2.0 Governance


 

Did You Know?
All content on MIKE2.0 and any contributions you make are published under the Creative Commons license. This allows you free re-use of our content as long as you add a brief reference back to us.

 

This Week’s Blogs for Thought:

Open Data and Big Data

In his book Open Data Now: The Secret to Hot Startups, Smart Investing, Savvy Marketing, and Fast Innovation, Joel Gurin explained that Open Data and Big Data are related but very different.

While various definitions exist, Gurin noted that “all definitions of Open Data include two basic features: the data must be publicly available for anyone to use, and it must be licensed in a way that allows for its reuse.”

Read more.

Careers in Technology: Is there a future?

Is there a future for careers in Information Technology?  Globally, professional societies such as the British Computer Society and the Australian Computer Society have long argued that practitioners need to be professionals.  However, there is a counter-argument that technology is an enabler for all professions and is more generally a capability of many rather than a profession of the few.

Read more.

Data Quality Profiling: Do you trust in the dark arts? 

Why estimating Data Quality profiling doesn’t have to be guess-work. 

Data Management lore would have us believe that estimating the amount of work involved in Data Quality analysis is a bit of a “Dark Art,” and to get a close enough approximation for quoting purposes requires much scrying, haruspicy and wet-finger-waving, as well as plenty of general wailing and gnashing of teeth. (Those of you with a background in Project Management could probably argue that any type of work estimation is just as problematic, and that in any event work will expand to more than fill the time available).

Read more.

Forward to a Friend!

Know someone who might be interested in joining the Mike2.0 Community? Forward this message to a friend

 

Category: Information Development

by: Ocdqblog
28  May  2014

Open Data and Big Data

In his book Open Data Now: The Secret to Hot Startups, Smart Investing, Savvy Marketing, and Fast Innovation, Joel Gurin explained that Open Data and Big Data are related but very different.

While various definitions exist, Gurin noted that “all definitions of Open Data include two basic features: the data must be publicly available for anyone to use, and it must be licensed in a way that allows for its reuse. Open Data should also be in a form that makes it relatively easy to use and analyze, although there are gradations of openness. And there’s general agreement that Open Data should be free of charge or cost just a minimal amount.”

“Big Data involves processing very large datasets to identify patterns and connections in the data,” Gurin explained. “It’s made possible by the incredible amount of data that is generated, accumulated, and analyzed every day with the help of ever-increasing computer power and ever-cheaper data storage. It uses the data exhaust that all of us leave behind through our daily lives. Our mobile phones’ GPS systems report back on our location as we drive; credit card purchase records show what we buy and where; Google searches are tracked; smart meters in our homes record our energy usage. All are grist for the Big Data mill.”

Private and Passive versus Public and Purposeful

Gurin explained that Big Data tends to be private and passive, whereas Open Data tends to be public and purposeful.

“Big Data usually comes from sources that passively generate data without purpose, without direction, or without even realizing that they’re creating it. And the companies and organizations that use Big Data usually keep the data private for business or security reasons. This includes the data that large retailers hold on customers’ buying habits, that hospitals hold about their patients, and that banks hold about their credit card holders.”

By contrast, Open Data “is consciously released in a way that anyone can access, analyze, and use as he or she sees fit. Open Data is also often released with a specific purpose in mind—whether the goal is to spur research and development, fuel new businesses, improve public health and safety, or achieve any number of other objectives.”

“While Big Data and Open Data each have important commercial uses, they are very different in philosophy, goals, and practice. For example, large companies may use Big Data to analyze customer databases and target their marketing to individual customers, while they use Open Data for market intelligence and brand building.”

Big and Open Data

Gurin also noted, however, that some of the most powerful results arise when Big Data and Open Data overlap.

“Some government agencies have made very large amounts of data open with major economic benefits. National weather data and GPS data are the most often-cited examples. U.S. census data and data collected by the Securities and Exchange Commission and the Department of Health and Human Services are others. And nongovernmental research has produced large amounts of data, particularly in biomedicine, that is now being shared openly to accelerate the pace of scientific discovery.”

Data Open for Business

Gurin addressed the apparent paradox of Open Data: “If Open Data is free, how can anyone build a business on it? The answer is that Open Data is the starting point, not the endpoint, in deriving value from information.” For example, even though weather and GPS data have been available for decades, those same Open Data starting points continue to spark new ideas, generating new, and profitable, endpoints.

While data privacy still requires sensitive data not be shared without consent and competitive differentiation still requires an organization’s intellectual property not be shared, that still leaves a vast amount of other data which, if made available as Open Data, will make more data open for business.

 

Category: Information Development

by: Robert.hillard
25  May  2014

Careers in Technology

Is there a future for careers in Information Technology?  Globally, professional societies such as the British Computer Society and the Australian Computer Society have long argued that practitioners need to be professionals.  However, there is a counter-argument that technology is an enabler for all professions and is more generally a capability of many rather than a profession of the few.

At the same time, many parents, secondary school teachers and even tertiary educators have warned students that a Technology career is highly risky, with many traditional roles being moved to lower cost countries such as India, China and The Philippines.  Newspaper headlines in recent years about controversy over the use of imported workers in local Technology roles have only served to further unsettle potential graduates.

Technologists as agents of change

Organisations increasingly realise that if they don’t encourage those who have information and insight about the future of technology in their business, they risk creating a lumbering hierarchy that is incapable of change.

How should companies seek out those innovations that will enable the future business models that haven’t been invented yet?  Will current technology savings cause “pollution” that will saddle future business initiatives with impossible complexity?  Is the current portfolio of projects simply keeping the lights on or is it preparing for real change?  Does the organisation have a group of professionals driving change in their business in the years to come or do they have a group of technicians who are responding without understanding why?

These questions deeply trouble many businesses and are leading to a greater focus on having a group of dedicated technology professionals at every level of the organisation and often dispersed through the lines of business.

The recognition of the need for these change agents should answer the question on the future of the profession.  At a time when business needs innovation which can only be achieved through technology, society is increasingly worried about a future where their every interaction might be tracked.

While the Information Technology profession has long talked about the ethics of information and privacy, it is only recently that society is starting to care.  With the publicity around the activities of government and big business starting to cause wide concern, it is likely that the next decade will see a push towards greater ownership of data by the customer, more sophisticated privacy and what is being dubbed “forget me” legislation where companies need to demonstrate they can completely purge all record of an individual.

While every business will have access to advice at senior levels, it is those who embed Information Technology professionals at every level through their organisation that will have the ability to think ahead to the consequences of each decision.

A professional’s perspective

These decisions often form branches in the road.  While requirements can often be met by different, but apparently similar, paths, the difference between the fastest route and the slowest is sometimes measured in orders of magnitude.  Sometimes these decisions turn out to be the difference between success and failure.  A seemingly innocuous choice to pick a particular building block, external interface or language can either be lauded or regretted many years later.

Ours is a career that has invited many to join from outside, and the possibilities that the digital and information economy create have enticed many who have tinkered to make Information Technology their core focus.  While this is a good thing, it is critical that those relying on technology skills can have confidence in the decisions that are being made both now and in the future.

Practitioners who have developed their knowledge in an ad-hoc way, without the benefit of testing their wider coverage of the discipline, are at risk of making decisions that meet immediate requirements but which cut-off options for the future or leave the organisation open to structural issues which only become apparent in decades to come.  In short, these people are often good builders but poor architects.

But is there a future at all?

Casual observers of the industry can be forgiven for thinking that the constant change in technology means that skills of future practitioners will be so different to those of today as to make any professional training irrelevant.  Anyone who holds this view would be well served by reading relevant Technology articles from previous eras such as the 1980s when there was a popular perception that so-called “fourth generation languages” would mean the end of computer programming.

While the technical languages of choice today are different to those of the 1970s, 80s and subsequent decades, the fundamental skills are the same.  What’s more, anyone who has developed professional (as opposed to purely technical) skills as a developer using any language can rapidly transition to any new language as it becomes popular.  True Technology professionals are savvy to the future using the past as their guide and make good architecture their goal.

The way forward

Certainly the teaching and foundations of Technology need to change.  There has been much too much focus on current technical skills.  The successful Technologist has a feel for the trends based on history and is able to pick up any specific skill as needed through their career.

Senior executives, regardless of their role, express frustration about the cost and complexity of doing even seemingly simple things such as preparing a marketing campaign, adding a self-service capability or combining two services into one.  No matter which way you look at it, it costs more to add or change even simple things in organisations due to the increasing complexity that a generation of projects have left behind as their legacy (see Value of decommissioning legacy systems).

It should come as no surprise that innovation seems to come from Greenfield start-ups, many of which have been funded by established companies whose own legacy stymies experimentation and agility.

This need to start again is neither productive nor sustainable.  Once a business accepts the assertion that complexity caused by the legacy of previous projects is the enemy of agility, then they need to ask whether their Technology capabilities are adding to the complexity while solving immediate problems or if they are encouraging Technology professionals to create solutions that not only meet a need but also simplify the enterprise in preparation for an unknown tomorrow.

Category: Information Development, Information Governance

by: Alandduncan
24  May  2014

Data Quality Profiling: Do you trust in the Dark Arts?

Why estimating Data Quality profiling doesn’t have to be guess-work

Data Management lore would have us believe that estimating the amount of work involved in Data Quality analysis is a bit of a “Dark Art,” and to get a close enough approximation for quoting purposes requires much scrying, haruspicy and wet-finger-waving, as well as plenty of general wailing and gnashing of teeth. (Those of you with a background in Project Management could probably argue that any type of work estimation is just as problematic, and that in any event work will expand to more than fill the time available…).

However, you may no longer need to call on the services of Severus Snape or Mystic Meg to get a workable estimate for data quality profiling. My colleague from QFire Software, Neil Currie, recently put me onto a post by David Loshin on SearchDataManagement.com, which proposes a more structured and rational approach to estimating data quality work effort.

At first glance, the overall methodology that David proposes is reasonable in terms of estimating effort for a pure profiling exercise – at least in principle. (It’s analogous to similar “bottom-up” calculations that I’ve used in the past to estimate ETL development on a job-by-job basis, or the creation of standard Business Intelligence reports on a report-by-report basis).

I would observe that David’s approach is predicated on the (big and probably optimistic) assumption that we’re only doing the profiling step. The follow-on stages of analysis, remediation and prevention are excluded – and in my experience, that’s where the real work most often lies! There is also the assumption that a pre-existing checklist of assessment criteria exists – and developing the library of quality check criteria can be a significant exercise in its own right.

However, even accepting the “profiling only” principle, I’d also offer a couple of additional enhancements to the overall approach.

Firstly, even with profiling tools, the inspection and analysis process for any “wrong” elements can go a lot further than just a 10-minute-per-item-compare-with-the-checklist, particularly in data sets with a large number of records. Also, there’s the question of root-cause diagnosis (And good DQ methods WILL go into inspecting the actual member records themselves). So for contra-indicated attributes, I’d suggest a slightly extended estimation model:

* 10 mins: for each “Simple” item (standard format, no applied business rules, fewer than 100 member records)
* 30 mins: for each “Medium” complexity item (unusual formats, some embedded business logic, data sets up to 1000 member records)
* 60 mins: for any “Hard” high-complexity items (significant, complex business logic, data sets over 1000 member records)

Secondly, and more importantly – David doesn’t really allow for the human factor. It’s always people that are bloody hard work! While it’s all very well to do a profiling exercise in-and-of-itself, the results need to be shared with human beings – presented, scrutinised, questioned, validated, evaluated, verified, justified. (Then acted upon, hopefully!) And even allowing for the “Analysis” stages onwards being set aside, there will need to be some form of socialisation within the “Profiling” phase.

That’s not a technical exercise – it’s about communication, collaboration and co-operation. Which means it may take an awful lot longer than just doing the tool-based profiling process!

How much socialisation? That depends on the number of stakeholders, and their nature. As a rule-of-thumb, I’d suggest the following:

* Two hours of preparation per workshop (If the stakeholder group is “tame”; double it if there are participants who are negatively inclined).
* One hour face-time per workshop (Double it for “negatives”)
* One hour post-workshop write-up time per workshop
* One workshop per 10 stakeholders.
* Two days to prepare any final papers and recommendations, and present to the Steering Group/Project Board.

That’s in addition to David’s formula for estimating the pure data profiling tasks.
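
Pulling the per-item times and the socialisation rules of thumb above into a single back-of-the-envelope calculation, a sketch might look like the following (the item counts, stakeholder numbers and eight-hour day are hypothetical placeholders, not figures from David’s post or mine):

# Rough effort estimate combining the per-item profiling times and the
# socialisation rules of thumb above. All inputs below are hypothetical.

ITEM_MINUTES = {"simple": 10, "medium": 30, "hard": 60}

def profiling_hours(simple, medium, hard):
    minutes = (simple * ITEM_MINUTES["simple"]
               + medium * ITEM_MINUTES["medium"]
               + hard * ITEM_MINUTES["hard"])
    return minutes / 60

def socialisation_hours(stakeholders, hostile=False):
    workshops = -(-stakeholders // 10)          # one workshop per 10 stakeholders
    factor = 2 if hostile else 1                # double prep and face time for "negatives"
    per_workshop = 2 * factor + 1 * factor + 1  # prep + face time + write-up hours
    final_papers = 2 * 8                        # two days for papers and the Steering Group
    return workshops * per_workshop + final_papers

# Example: 40 simple, 15 medium and 5 hard attributes, 25 wary stakeholders.
total = profiling_hours(40, 15, 5) + socialisation_hours(25, hostile=True)
print(f"Estimated effort: {total:.0f} hours (profiling plus socialisation only)")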

Detailed root-cause analysis (Validate), remediation (Protect) and ongoing evaluation (Monitor) stages are a whole other ball-game.

Alternatively, just stick with the crystal balls and goats – you might not even need to kill the goat anymore…

Category: Business Intelligence, Data Quality, Enterprise Data Management, Information Development, Information Governance, Information Management, Information Strategy, Information Value, Master Data Management, Metadata
