by: Bsomich
01  Apr  2015

Community Announcement: MIKE2.0 and DAMA-International

FOR IMMEDIATE RELEASE

Contact:
MIKE2.0 Governance Association
Pfingstweidstrasse 60, CH-8050
Zürich, Switzerland

March 31, 2015 – Zürich, Switzerland - This week, the MIKE2.0 Governance Association (MGA) is pleased to announce an agreement with DAMA-International commencing the transition of the MIKE2.0 community to DAMA-International. The agreement was announced on Monday evening at the DAMA-I Chapter Meeting at Enterprise Data World 2015 (EDW 15) in Washington, DC.

Under the transition, the MIKE2.0 community will now be supported by the existing DAMA-I chapter structure and its related member services and activities, enabling the continued development and extension of the MIKE2.0 framework. “We at DAMA-I are delighted and honored to have the opportunity to progress and build upon the combined wisdom within MIKE2.0 and DAMA DMBOK. Merging practical with theoretical provides the ultimate approach to managing data,” said Sue Geuens, President of DAMA International, in a recent interview at Enterprise Data World.

 About MIKE2.0

With 869 articles and 29,675 members contributing their knowledge and experience, the Method for an Integrated Knowledge Environment (MIKE2.0) is an online community providing a comprehensive methodology that can be applied across a number of different projects within the Information Management space. It was released as open source in 2006, although earlier efforts on the project date back to 2003. The MIKE2.0 Governance Association (MGA) has been the governing body since 2009. In 2013, the MGA team authored Information Development using MIKE2.0, translating many of the community’s core content assets to print to better reach an offline audience.

After nearly a decade of community operations, the MIKE2.0 Governance Association is looking forward to the start of a new chapter under DAMA.  By solidifying this relationship, MGA is able to guarantee that all registered users and contributors to the MIKE2.0 project will continue to have an active community within which they can continue their professional networking, skills development, and intellectual contributions while utilizing and building the MIKE2.0 framework for information management.

“We are excited to reach an agreement with DAMA to solidify a sustainable future for MIKE2.0,” said Brenda Somich, community manager for MIKE2.0. “After years of dedication to this initiative, our team is grateful to know that the community will continue to expand and grow. As with the constant change and evolutionary nature of information, we are happy to announce that MIKE2.0 will also evolve.”

For any inquiries about MIKE2.0 and the acquisition by DAMA, please contact Rob Hillard, MGA board member and co-founder of MIKE2.0.

Category: Information Development

by: Robert.hillard
28  Mar  2015

The change you can’t see, or what’s your horse carcass?

I had the pleasure this month of launching the Australian edition of Deloitte’s Tech Trends 2015 report.  For a number of years now, we’ve put our necks on the line to predict what will happen in the immediate, and slightly longer, term.  Looking back over recent years, we can see the rise of cloud, the reinvention of the role of core IT systems and the evolution of the management of technology in business.

Interestingly, the time we’ve been doing these predictions in this particular format has coincided with a peculiar period in computing history when most of the innovation found in business started life in the consumer world.  To a large degree, this trend is the result of the boom in smartphones in the late noughties.

This is not the long-term norm.  Over the five decades since computing became an important part of business technology, the enterprise led and hobbyists picked up the scraps until prices fell to the point where products that appealed to consumers could be mass produced.

The return of the enterprise

This year has seen a return to the norm.  Some of the hyped technologies, like 3-D printing, turn out to have more application in business than in the home.  Freed from looking to consumers, business has renewed confidence to innovate from the ground up.  This, in turn, has the potential to accelerate innovation and enable disruptive rather than evolutionary trends.

It is often hard to move consumer technologies quickly when big investments are required: to earn an acceptable return on that capital, large numbers of people need to be persuaded to open their wallets.  Enterprise solutions, on the other hand, can focus on niche problems without needing to worry about standards or mass movements of people, because the return on capital comes quickly.

It’s entirely possible, for instance, that big business investments in autonomous vehicles will have the same impact as large-scale manufacturing did on advancing robotics in the 1970s and 1980s.  The renewed focus is particularly evident in mining, where large distances and controlled environments make early investment possible and rapid returns feasible.

This exciting new period of innovation leads us to ask whether we are really planning for our society of the future or if we are limiting our thinking.  The answer matters because there is a lot of money being invested by both governments and business based on their current assumptions on what the future will bring.

Horse carcasses

In the late nineteenth century, city planners were dealing with the exponential growth in populations and wealth.  They were planning how to deal with one of the most visible forms of technology in every street: the horse.  In the last part of the 1800s, New York City had nearly 200,000 horses.  With the tough conditions they worked under, many horses could expect to live just two to four years, with the carcasses being a problem on par with the food they required and the manure they produced.

The Times of London famously predicted around this time that every street would be buried under nine feet of horse-generated waste by 1950!

It’s no wonder that these were the problems that city planners thought they would be dealing with through the then-new twentieth century.  Of course, the arrival of the motor car was both foreseeable by anyone looking back at the history of the internal combustion engine and almost unforeseen by city planners of the time.

I wonder whether there are horse carcasses that we simply can’t look past when planning our society of the next century.

Reinventing our cities

A candidate list of these “horse carcasses” would have to start with transport, the issue that planners of the turn of the last century couldn’t look past.  Debate rages in cities around the world about the amount of investment that should go into roads and public transport (predominantly rail).  At the same time, the autonomous car, which seemed a dream just a few years ago, is very much on the verge of reality.

Autonomous vehicles, operating in concert and optimised through the power of analytics, can increase the density of road traffic by an order of magnitude, allowing cities to utilise existing roads with little need to upgrade capacity for many decades.  Similarly, a considered approach to shared resources could render public transport as we know it today effectively obsolete.

Imagine a future where you simply press a button on your smart device and a vehicle takes you where you want to go.  No waiting, timetables or congestion.

Reinventing our society

Any serious vision of the future has to consider economics and the future of growth.  For the vast majority of the history of humanity, the underlying economic growth rate has been a fraction of one percent, compared to the high growth achieved since the industrial revolution.
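To see roughly why that difference matters, compare the two rates compounded over a century.  The following Python sketch is purely illustrative; both rates are assumed stand-ins, not figures from this post.

# Illustrative only: pre-industrial vs modern growth compounded
# over a century. Both rates are assumptions for the example.
def compound(rate: float, years: int) -> float:
    """Growth multiple after compounding `rate` for `years` years."""
    return (1 + rate) ** years

print(f"0.2% for 100 years: {compound(0.002, 100):.2f}x")  # about 1.22x
print(f"2.0% for 100 years: {compound(0.02, 100):.2f}x")   # about 7.24x

A fraction of a percent barely moves an economy within a lifetime, while modern rates multiply it severalfold, which is why the assumptions planners build in matter so much.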

The big question is whether society should build in assumptions about growth and the only tool to achieve it: technology.  Looking back from the vantage of the late twenty-first century, the answer will seem obvious, and we will kick ourselves if we haven’t either taken advantage of the great opportunity to leverage our growth or, conversely, saved wisely for a more austere future.

One of the most important decisions that the growth or austerity alternative futures will drive is our willingness to invest in our personal wellbeing through healthcare.  We are planning our society around increasing healthcare costs and actively thinking about rationing schemes around the world.  But what happens if the human body simply becomes a technology problem that gets solved?

It is very hard to look past humans living for 80 to 100 years.  While recent centuries have seen a substantial increase in our life expectancy, it hasn’t been transformational.  Rather, more people have enjoyed a healthier life towards the higher end of the expected lifespan.

Science, however, could be on the verge of breaking through the current barriers of our biology with an almost unimaginable impact on our society.

Reinventing our work lives

Futurists of the 1960s and 1970s expected the twenty-first century to be challenged by a lack of employment.  While the first decades show no sign of realising this prediction, they could still be right.  The second generation of artificial intelligence could finally achieve this vision.

Such a society could be extremely wealthy and choose to put the interests of its people first by sharing the opportunity to contribute while rewarding outcomes over effort.  Conversely, the wealth could easily get concentrated in the hands of a few with little opportunity for those without a role to enjoy the spoils.

Reinventing our diets

Finally, even what we eat is likely to change in ways that seem unimaginable.  Most futurists agree that an increasingly wealthy world will create a huge demand for protein, primarily through meat, which requires an enormous amount of land to produce.

An alternative now seems likely.  Technology which can produce meat independently of any animal through stem cells is nearing maturity.  Such a product could be indistinguishable from meat produced from a slaughtered animal.  The change this would cause in agribusiness would be of the same magnitude as the digital disruption so many industries are already experiencing.

Maybe this is the carcass we can’t look past today.  Rather than a horse, this carcass belongs to cows, sheep and poultry.

Category: Enterprise2.0

by: Jonathan
26  Mar  2015

5 Challenges facing the Internet of Things

Our constant need to be connected has expanded beyond smartphones and tablets into a wider network of interconnected objects. These objects, often referred to as the Internet of Things (IoT), have the ability to communicate with other devices and are constantly connected to the internet in order to record, store and exchange data.

This idea of an “always on, always connected” device does seem a little Big Brother-ish, but there are definitely some benefits that come with it. For example, we are already seeing smarter thermostats, like Nest, that allow us to remotely control the temperature of our homes. We also have appliances and cars with internet connectivity that can learn our behavior and act on their own to provide us with greater functionality. However, while this is an accelerating trend with many objects already on the market, there are still a number of challenges facing the IoT that will continue to hinder its progress and widespread adoption.

Security

It seems as if every discussion surrounding networks and the internet is always followed by a discussion on security. Given the recent publicity of damaging security breaches at major corporations, it’s hard to turn a blind eye to the dangers of more advanced cyber attacks. There’s no hiding the fact that the introduction of IoT will create a number of additional vulnerabilities that’ll need to be protected. Otherwise, these devices will simply turn into easy access points for cyber criminals. Given that IoT is a new technology, there aren’t a lot of security options designed specifically for these devices. Furthermore, the diversity in device types makes uniform solutions very difficult. Until we see greater security measures and programs designed to handle IoT devices, many will remain hesitant to adopt them for personal and professional use.

Privacy

On the coattails of security comes privacy. One of the bigger debates in this age of data is who actually owns the data being created. Is it the users of these devices, the manufacturers, or those who operate the networks? Right now, there’s no clear answer. Regardless, while we are left arguing over who owns what information, these devices are tracking how we use them. Your car knows which route you take to work, and your home knows what temperature you prefer in the mornings. In addition, when you consider that almost everything requires an online profile to operate these days, there can be a tremendous amount of private information available to many different organizations. For all we know, our televisions are watching us as we watch our favorite shows, and sending that information to media companies.

Interoperability

In order to create a pure, interconnected IoT ecosystem, there needs to be a seamless experience between different devices. We haven’t yet achieved that level of interoperability. The problem is that there are so many different makes and models that it’s incredibly difficult to create an IoT system with horizontal platforms that are communicable, operable, and programmable. Right now, IoT communication is fragmented, and many devices are still not able to ‘talk’ with one another. Manufacturers will need to start playing nice with each other, and create devices that are willing to work with competitors.
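To make the gap concrete, here is a minimal sketch of the kind of shared device interface a horizontal platform could standardise on. Every class and method name below is hypothetical, not part of any real IoT standard.

# Hypothetical sketch: a shared interface lets a platform treat
# devices from different manufacturers uniformly.
from abc import ABC, abstractmethod

class Device(ABC):
    @abstractmethod
    def read_state(self) -> dict:
        """Return the device's current state as plain key/value data."""

class Thermostat(Device):
    # Adapter wrapping one vendor's proprietary API (assumed).
    def read_state(self) -> dict:
        return {"temperature_c": 21.5, "mode": "heat"}

class DoorLock(Device):
    # Adapter wrapping another vendor's API (assumed).
    def read_state(self) -> dict:
        return {"locked": True}

# The platform no longer cares who made the device.
for device in (Thermostat(), DoorLock()):
    print(type(device).__name__, device.read_state())

Until manufacturers converge on a layer like this, every pairing of devices tends to need its own custom integration.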

WAN Capacity

Existing Wide Area Networks (WANs) have been built for moderate bandwidth requirements capable of handling current device needs. However, the rapid introduction of new devices will dramatically increase WAN traffic, which could strangle enterprise bandwidth. With the growing popularity of Bring Your Own Device policies, people will begin using IoT devices at work, forcing companies to make the necessary upgrades or suffer crawling speeds and weakened productivity.
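A back-of-the-envelope estimate shows how quickly this adds up. All figures below are assumptions for illustration, not measurements.

# Hypothetical office; every figure here is an assumption.
employees = 500
devices_per_employee = 3   # phone, wearable, one other connected object
kbps_per_device = 50       # light, always-on telemetry per device

total_mbps = employees * devices_per_employee * kbps_per_device / 1000
print(f"Added WAN load: {total_mbps:.0f} Mbps")  # 75 Mbps of new traffic

Even modest per-device traffic, multiplied across a workforce, can consume a meaningful share of an enterprise link.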

Big Data

IoT technology will benefit and simplify many aspects of our lives, but these devices serve a dual purpose, benefiting organizations hungry for information. We live in an era of big data, where organizations are looking to collect information from as many sources as possible in the hopes of learning more about customers and markets. IoT technology will greatly expand the possibilities of data collection. However, the problem then becomes managing this avalanche of data. Storage issues aside, we’ve only just developed improved ways of handling big data analytics, but technologies and platforms will need to further evolve to handle additional demands.

Category: Web2.0

by: RickDelgado
24  Mar  2015

The Debate Continues: The Future Impact of Net Neutrality on the Cloud

The debate over Net Neutrality is far from over. While the recent ruling by the FCC to classify broadband internet as a public utility may have changed the argument, debates will undoubtedly continue. The effects the decision has on the web will likely not be felt, let alone understood, for many years to come, but that hasn’t stopped speculation over what a neutral internet will actually look like and how companies and internet service providers (ISPs) will be impacted. At the same time, the future of cloud computing has become a hot topic as experts debate whether Net Neutrality will be a boost to cloud providers or whether the overall effect will be negative. Looking at the current evidence and what many providers, companies, and experts are saying, the only thing that’s clear is that few people can agree on what Net Neutrality will mean for the cloud and all the advantages of cloud computing.

The basic idea of Net Neutrality is, in the simplest of terms, to treat all internet traffic the same. Whether from a small niche social site or a major online retail hub, content would be delivered equally. This sounds perfectly reasonable on the surface, but critics of the Net Neutrality concept say all websites simply aren’t equal. Sites like Netflix and YouTube (mainly video streaming sites) eat up large amounts of bandwidth when compared to the rest of the internet, and as streaming sites grow in popularity, they keep eating up more and more web resources. The theory goes that ISPs would provide internet “fast lanes” to those sites willing to pay the fee, giving them more bandwidth in comparison to other sites, which would be stuck in “slow lanes.” It’s this idea that proponents of Net Neutrality want to guard against, and it’s one of the biggest points of contention in the debate.

Obviously, this is a simplified view of Net Neutrality, but it’s a good background when looking at the effect the new ruling could have on cloud computing. First, let’s take a look at how cloud providers may be affected without a neutral internet. Supporters of Net Neutrality say a “fast lane” solution would represent an artificial competitive advantage for those sites with the resources to pay for it. That could mean a lack of innovation on the part of cloud vendors as they spend added funds to get their data moved more quickly while getting a leg up on their competition. A non-neutral internet may also slow cloud adoption among smaller businesses. If a cloud software provider has to pay more for fast lanes, those costs can easily be passed on to the consumer, which would raise the barrier to cloud use. The result may be declining cloud adoption rates, or at the least performance of cloud-based software may degrade.

On the other side of the coin, critics of Net Neutrality say the effect of the policy will end up damaging cloud computing providers. They’re quick to point out that innovation on the internet has been rampant without new government regulations, and that ISPs could easily develop other innovative solutions besides the “fast lane” approach Net Neutrality supporters are so afraid of. Government rules can also be complicated and, in the case of highly technical fields, would need to be constantly updated as new technology is developed. This may give larger companies and cloud providers an advantage over their competition since they would have the resources to devote to lobbyists and bigger legal budgets to dedicate to understanding new rules. There’s also the concern over getting the government involved in the control of pricing and profits in the first place. Needless to say, many aren’t comfortable with giving that level of control to a large bureaucracy and would rather let market freedom take hold.

Some may say that with the new FCC ruling, these arguments don’t apply anymore, but changes and legal challenges will likely keep this debate lively for the foreseeable future. Will Net Neutrality lead to government meddling in cloud provider pricing and contracts? Will a lack of Net Neutrality slow down cloud adoption and give too much power to ISPs? Unfortunately, there’s no way of knowing the far-reaching consequences of the decision on the cloud computing landscape. It could end up having very little impact in the long run, but for now, it appears Net Neutrality will become a reality. Whether that’s a good or bad thing for the cloud remains to be seen.

 

Category: Enterprise Data Management

by: Bsomich
19  Mar  2015

MIKE2.0 Community Update

Missed what’s been happening in the MIKE2.0 data management community? Read on:


The Transformation to a Data-Driven Business: Join us at EDW15!

Attending Enterprise Data World in DC later this month? We look forward to seeing you!

The Enterprise Data World (EDW) Conference is recognized as the most comprehensive educational conference on data management in the world.

What to Expect at EDW 2015:

  • Enterprise Data Strategy
  • Data Governance Program Implementation
  • Building for New Demands of Data Architecture
  • Data Quality Measurements and Scorecarding
  • Big Data Trends and Technologies
  • Rolling out Master Data Management
  • EIM – Transforming into Data-driven Business
  • Real-time Analytics & Business Intelligence
  • Best Practices in all aspects of Enterprise Data Management
  • Agile Data Methods

Most importantly, you’ll hear from DAMA and the MGA team about some exciting new developments planned for the MIKE2.0 community this year.

We hope to see you there!

Not registered yet? Visit http://edw2015.dataversity.net or contact us for details.

Sincerely,
MIKE2.0 Community

Contribute to MIKE:

Start a new article, help with articles under construction or look for other ways to contribute

Update your personal profile to advertise yourself to the community and interact with other members.

Useful Links:
Home Page
Login
Content Model
FAQs
MIKE2.0 Governance


Did You Know?
All content on MIKE2.0 and any contributions you make are published under the Creative Commons license. This allows you free re-use of our content as long as you add a brief reference back to us.

 

This Week’s Blogs for Thought:

Cloud Services: Open vs Proprietary  

There’s nothing more punk-rock than the sort of DIY ethics currently fueling open-source communities. The general subversiveness combined with an apparent twice-a-week minimum black t-shirt rule among developers may make the open source scene look kind of like a cool-guy/girl clique, at least from an outsider’s perspective.

Everybody is rebelling against something, right?

Read more.

5 Ways Big Data is Changing the World of Design

Back in the analog days, designers used hands-on tools to bring their creations to light. But in this day and age of information and advanced technology, big data and analytics tools are transforming the world of design as never before. In a November 2014 article on wired.com, Paul Papas, “the Global Leader for the IBM’s Interactive Experience practice, a next-generation digital agency, consultancy, and systems integrator,” discusses the revolutionizing power of big data in all facets and fields of design. Compiled from the author’s views and insights is this list of 5 ways big data is transforming the world of design.

Read more. 

Making the Case for Jargon, Acronyms and Clear Language

All over the web, authors are ranting about the misuse of the English language in business.  It’s an easy article to write, picking out examples of jargon and the general torturing of sentences in the name of explaining apparently simple concepts while making the writer seem more impressive.

Some great examples from business can be found in this Forbes article: The Most Annoying, Pretentious and Useless Business Jargon.  Examples that I like (or hate) include terms like “swim lanes”, “best practice” and “core competence.”

Read more. 


 

Category: Information Development


by: RickDelgado
26  Feb  2015

Cloud Services: Open Source vs. Proprietary

There’s nothing more punk-rock than the sort of DIY ethics currently fueling open-source communities. The general subversiveness combined with an apparent twice-a-week minimum black t-shirt rule among developers may make the open source scene look kind of like a cool-guy/girl clique, at least from an outsider’s perspective.

Everybody is rebelling against something, right?

In the cloud computing ecosystem, the basic theme is rebellion against failure, whatever that means to whoever is considering the question. And within that question lies the other major decision: whether the given needs call for an open source or a proprietary architecture. So let’s take a closer look at what the major differences between those two models mean for businesses.

Charging for Software

Generally, open source models are free and won’t charge for the use of software. Proprietary models may offer free packages at first, but ultimately end up costing the customer. Many updates to proprietary software are free, but significant upgrades and the ability to add new packages often come with a fee. Charges can also come in the form of a per-user fee. Open source options are based more on the development of a community. They take direction from the demands of the market and tend to start with a small collection of developers and users. Successful projects are quickly picked up, while others are left to languish in obscurity.

Vendor Limitations

Vendor lock-in occurs with proprietary software. This means that the website and software used with a proprietary vendor can’t be taken to another provider. It also limits the pool of other providers with the knowledge to support a particular product. In contrast, open source products are more flexible and allow users to move between different systems freely. Open source cloud computing offers a greater range of compatibility between several different products. Typically, if a proprietary vendor goes out of business, the end-user is left with an unusable product. With open source projects, there is usually another project or fork that can take off where the old one left off.

Modifying System Code

Proprietary software doesn’t allow the manipulation of the source code. Even simple modifications to change styling or add features are not permitted. This can be beneficial for users who are happy with a set of features that is completely managed by one company. For those who like to tinker and adjust software to their needs, it may not be an ideal solution. Open source options allow for modifications, and a company can even create an entire fork based on the existing software. When a feature doesn’t exist within an open source application, a developer can be hired to incorporate it into the product.

Licensing and Hosting Costs

Using proprietary software isn’t for the faint of heart or light of wallet. Licensing and hosting fees are often higher with proprietary software. By using open source options, users can avoid having to pay operating system costs and per-product fees to use the software. This provides more flexibility to those who run open source platforms. A new software package or feature can be quickly added to an existing installation without the need to purchase a license. Additionally, proprietary software often requires the use of commercial databases, which further adds to the total cost of operation.

User Documentation

Product documentation is often more involved and useful with open source software. The reason for this is the large communities that often follow and support open source projects. Help documentation for proprietary software is often only surface level. This is partially due to the service-based nature of proprietary software. It’s more profitable when consumers have to rely on the company for support and technical services. However, this can negatively impact business if an update goes wrong and technical support can’t immediately correct the issue. Open source applications come with substantial documentation that is typically updated with each product release and freely available online.

Security and Performance Considerations

When you have an entire community of developers poking and prodding at an application, you tend to have better security. Many of the features put into proprietary software are designed to keep the software from being modified. This adds bloat to the code and prevents a light and lean product. Additionally, excess code leaves more room for security and stability flaws. With open source software, there are many more eyes looking at the code, and fixes tend to come much more quickly than with proprietary software. Stability and advanced threat defense tend to be tighter with open source applications, as long as users keep their software updated. Out-of-date applications are just as vulnerable to hacking and infiltration as proprietary systems.

Summary

Open source and proprietary cloud services both aim to provide end-users with reliable software. Some users prefer the backing of a large company like Amazon or Microsoft, with a tailored list of compatible programs and services. Others prefer the interoperability and flexibility of open source alternatives like OpenStack or Eucalyptus. It’s not necessarily an issue of right or wrong per se; it just depends on what the user’s specific needs are. For some, open source software is the obvious choice, while those who want more predictably managed solutions may find proprietary offerings ideal.

Category: Business Intelligence, Open Source

by: Jonathan
25  Feb  2015

5 Ways Big Data is Transforming the World of Design

Back in the analog days, designers used hands-on tools to bring their creations to light. But in this day and age of information and advanced technology, big data and analytics tools are transforming the world of design as never before.

In a November 2014 article on wired.com, Paul Papas, “the Global Leader for the IBM’s Interactive Experience practice, a next-generation digital agency, consultancy, and systems integrator,” discusses the revolutionizing power of big data in all facets and fields of design. Compiled from the author’s views and insights is this list of 5 ways big data is transforming the world of design.

1. Creativity – To illustrate how far design has come in the digital age, Papas has us first picture an architect at a drafting table laboring over a blueprint, or an auto designer modeling next year’s car out of clay. “With some variation,” says Papas, “those were creative tools that designers, architects and artists relied on to render their inspirations, refine their concepts, and finalize them into market-ready products.” While those tools may have some application today, Papas points out how high-performance computing, “has radically transformed the creative process in pharma, automotive, and government R&D.” Thanks to computer modeling and simulation capabilities, designers can render, test, refine and prove products in a virtual world before they go into production.

2. Innovation – “Today, data continues to affect the design of products in new and innovative ways,” says Papas. While no specific examples are mentioned in the article, Google’s autonomous cars controlled by real-time big data analytics comes to mind as an example of innovation in the automobile industry. Smartphones, which in actuality are powerful portable computers that in many ways have transformed our lives, are another example of innovation made possible by big data. According to Papas, “What’s truly revolutionary is how marketers and other business leaders are using data in the design of something much more intimate and essential — the personalized experiences that millions of individuals will have with their products, services or brands.” Which brings us to…

3. Experience Design – In the analog days the goal of designers was to create products that seamlessly combined form and function. In the era of big data, that model has been upgraded to what is being referred to as “experience design.” As Papas explains, when you pull “experience design” apart, “you have equal parts the design of beautiful, elegant interfaces and the creation of irresistible experiences that are smart, individualized, trusted and valuable — all 100 percent dependent on the astute use of data.” According to Papas, today’s businesses are defining their agendas by two forces—“massively available information and new models of individual engagement.” So powerful are these two forces for business that Papas says, “experience design is rapidly becoming a de facto element in contemporary business strategy.”

4. Behavioral Design – Not unlike the clay that automobile designers use to mold and shape basic designs, big data is giving rise to a new design medium: human behavior. Described by Papas as “a harder trick to pull off than modeling metal,” designers are using human behavior to “learn and modify designs before they’re implemented, as insight from data gives companies the ability to understand context, and learn and evolve with the consumer and create unique, reciprocal experiences.”

As evidence of the power of behavioral design, Papas cites Brown University’s dilemma of either upgrading its existing engineering school or moving the entire engineering department off campus to a larger, potentially better facility. “Through a deep analysis of a hodgepodge of data — from faculty collaboration patterns to course enrollments,” says Papas, “Brown discovered patterns showing an enormous amount of cross-fertilization between the school’s communities.” Based on the insights obtained from behavioral data that showed how the off-campus option would “negatively affect students, faculty collaboration and research dollars,” the university chose not to make the move.

5. The “Market of One” – Thanks to big data analytics, Papas says that “the long-anticipated ability to really find, know and engage the proverbial ‘market of one’ is finally at hand.” While not mentioned in the article, images of consumers being engaged on their mobile devices by companies in relevant and meaningful ways—in real-time and in context—serve as an example of how marketers are able to reach the once elusive “market of one.”

Big data is truly transforming the world of design.

Going forward, Papas predicts that the powerful, data-intensive tools that designers now have at their disposal will continue to “render the designs and create the experiences that will unlock the next great level of possibility and value for enterprise in every industry.”

Category: Business Intelligence

by: Robert.hillard
22  Feb  2015

Making the case for jargon, acronyms and clear language

All over the web, authors are ranting about the misuse of the English language in business.  It’s an easy article to write, picking out examples of jargon and the general torturing of sentences in the name of explaining apparently simple concepts while making the writer seem more impressive.

Some great examples from business can be found in this Forbes article: The Most Annoying, Pretentious and Useless Business Jargon.  Examples that I like (or hate) include terms like “swim lanes”, “best practice” and “core competence”.

There are good reasons for using jargon

Rather than take cheap shots, let’s start by asking why people use jargon in the first place.  There are often good reasons for using most of the terms, even the ones that we all hate.  Every discipline has its own special language, whether it is science, architecture, law or business across every industry.  This use of language isn’t restricted to technical terminology; it also includes the way that concepts are expressed, which can seem obscure or even absurd to an outsider.

While often justified, the use of jargon acts as a tool to exclude these outsiders from participating in the conversation.  This is a problem because there is good evidence that some of the most exciting solutions to problems are multi-disciplinary.

Experts seldom realise that they might be missing out on a breakthrough which comes from another field.  Business, for instance, is borrowing from ideas in physics (such as thermodynamics) in understanding the dynamics of markets as well as biology to understand how organisations evolve.

Just as fields of science, medicine and law have their own language, so does every aspect of business such as human resources, finance, sales et cetera.  Even in general business, diversity of thought comes up with a better answer almost every time.  Getting to those ideas is only possible if the discussion is intelligible to all.

Jargon to avoid debate

While many discussions use jargon and acronyms as a legitimate shortcut, some use of jargon reflects a lack of understanding by the participants or even an attempt to avoid debate in the first place.  Take the example of “metadata”, a term which has appeared in many countries as governments struggle with appropriate security, privacy and retention regimes.

A plain English approach would be to describe the definition of metadata in full in every discussion rather than take the shortcut of using the term on its own.  The reality is that even landing on a definition can lead to an uncomfortable debate, but it is definitely one worth having, as the Australian Attorney General learned to his detriment in this interview, where the purpose of a very important debate was lost in a confusing discussion on what metadata actually is.

The Attorney General isn’t on his own, many executives have been challenged in private and public forums to explain the detail behind terms they’ve commonly used only to come unstuck.

Sometimes people use jargon like “metadata”, “swim lanes” and “best practice” because they are avoiding admitting they don’t know the detail.  Other times, they are legitimately using the terms to avoid having to repeat whole paragraphs of explanation.  Of course, this is where acronyms come into their own.

Balancing the needs of the reader and author

Given that it takes at least five times as long to write something as it does to read it (and a factor of ten is more realistic for anything complex), the authors of documents and emails can be forgiven for taking a shortcut or two.

The problem is when they forget that every communication is an exchange of information and while information has value, the value is not necessarily the same for both the writer and reader.

For example, there is little that is more annoying than receiving an email which requires extensive research to work out what the TLAs contained within it actually stand for.  Of course, a TLA is a “three letter acronym” (the most popular length of acronym, with examples including everything from “BTW” for “by the way” through to “LGD” for “loss given default”).

Our propensity for short messages has only increased due to our rapid adoption of texts, instant messaging and Twitter.  I’ve written before about the role of email in business (see Reclaim email as a business tool).  Clarity of meaning is fundamental to all communication regardless of the medium.

Given that it does take so much longer to write than to read, it makes sense for the writer to take shortcuts.  However, if the information is of the same value to both the writer and reader, then the shortcut needs to offer a tenfold benefit to the writer to make up for the additional cost in time to the reader who has to decode what the shortcut means.

This equation gets worse if there are multiple readers, if the benefit is greater to the writer than the reader or when the writer has the advantage of context (that is, they are already thinking about the topic so the jargon and acronyms are already on their mind).
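As a back-of-the-envelope model of that trade-off, the following Python sketch paraphrases the rule of thumb above; the tenfold ratio comes from the argument in this post, but the cost model and the sample numbers are illustrative assumptions, not the author’s.

# Sketch of the shortcut trade-off described above. The example
# numbers are illustrative assumptions.
def shortcut_worthwhile(writer_seconds_saved: float,
                        reader_decode_seconds: float,
                        readers: int = 1,
                        required_ratio: float = 10.0) -> bool:
    """True if the writer's saving covers the readers' decoding cost."""
    return writer_seconds_saved >= required_ratio * reader_decode_seconds * readers

# An acronym the whole team already knows: decoding is nearly free.
print(shortcut_worthwhile(writer_seconds_saved=30, reader_decode_seconds=1))               # True
# An obscure TLA sent to ten readers who must each look it up.
print(shortcut_worthwhile(writer_seconds_saved=30, reader_decode_seconds=60, readers=10))  # False

The multiplier on the reader side is what makes jargon so expensive in email: the same decoding cost is paid once per reader, while the saving is banked only once by the writer.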

In short, there is seldom a benefit to using jargon or acronyms in an email without taking a moment to either spell them out or provide a link to definitions that are easily accessed.

Is this the best you can do?

Perhaps the need to make sure that a reader’s time is spent wisely is best summed up in this anecdote told by Ambassador Winston Lord about Henry Kissinger (former US Secretary of State).

When Ambassador Winston Lord handed Henry Kissinger a report he had worked diligently on, Kissinger famously asked him, “Is this the best you can do?”  The question was repeated on each draft until Lord reached the end of his tolerance, saying “Damn it, yes, it’s the best I can do.”  To which Kissinger replied: “Fine, then I guess I’ll read it this time.” (sourced from Walter Isaacson, “Kissinger: A Biography”, Simon & Schuster, 1992).

Category: Enterprise Content Management, Information Value

by: Bsomich
20  Feb  2015

MIKE2.0 Community Update

Missed what’s been happening in the MIKE2.0 community? Read on!

 

  

Business Drivers for Better Metadata Management

There are a number of Business Drivers for Better Metadata Management that have caused metadata management to grow in importance over the past few years at most major organisations. These organisations are focused on more than just a data dictionary across their information – they are building comprehensive solutions for managing business and technical metadata.

Our wiki article on the subject explores many of the factors contributing to the growth of metadata and offers guidance on how to better manage it.

Feel free to check it out when you have a moment.

Sincerely,
MIKE2.0 Community

Contribute to MIKE:

Start a new article, help with articles under construction or look for other ways to contribute

Update your personal profile to advertise yourself to the community and interact with other members.

Useful Links:
Home Page
Login
Content Model
FAQs
MIKE2.0 Governance


Did You Know?
All content on MIKE2.0 and any contributions you make are published under the Creative Commons license. This allows you free re-use of our content as long as you add a brief reference back to us.

 

This Week’s Blogs for Thought:

Keeping Big Data Secure: Should You Consider Data Masking?  

Big data is a boon to every industry. And as data volumes continue their exponential rise, the need to protect sensitive information from being compromised is greater than ever before. The recent data breach of Sony Pictures and new national threats from foreign factions serve as a cautionary tale for government and private enterprise to be constantly on guard and on the lookout for new and better solutions to keep sensitive information secure.

Read more.

The Architecture After Cloud

I think that Zach Nelson (Netsuite’s CEO) was wrong when he said that “cloud is the last computing architecture” but I also believe that his quote is a healthy challenge to take computing and business architectures to a new level. Nelson went on to say “I don’t think there is anything after it (cloud). What can possibly be after being able to access all of your data any time, anywhere and on any device? There is nothing.”  His comments are available in full from an interview with the Australian Financial Review. Aside from cloud, our industry has had a range of architectures over the decades including client/server, service oriented architecture (SOA) and thin client.  Arguably design patterns such as 4GLs (fourth generation languages) and object oriented programming are also architectures in their own right. I think that we can predict attributes of the next architecture by looking at some of the challenges our technologies face today. Read more.

MSP Cloud Computing Strategies to Consider in 2015

Managed service providers (MSPs) have some difficult decisions to make in the coming year. Many of the pressing questions they’re facing revolve around cloud computing, as the cloud has become a technology now being embraced by mainstream businesses of all sizes and types. Years ago, the dilemma surrounding the cloud was a relatively easy one to address, especially when clients were asking questions about what cloud computing is and how it could ultimately benefit their organizations. Read more.


 

 

Category: Information Development
