by: Robert.hillard
27  Sep  2014

The rule of 150 applied to data

Anthropologist Robin Dunbar has drawn on decades of research on primates to argue that there is a cognitive limit to the number of social relationships an individual can maintain, and hence a natural limit to the breadth of their social group.  In humans, he has proposed that this number is 150, the so-called “Dunbar’s number”.

In the modern organisation, relationships are maintained using data.  It doesn’t matter whether it is the relationship between staff and their customers, the tracking of vendor contracts, the allocation of products to sales teams or any of the literally thousands of other relationships that exist: they are all recorded centrally and tracked through the data they throw off.

Social structures have evolved over thousands of years using data to deal with the inability of groups of more than 150 to align effectively. One of the best examples of this is the 11th-century Domesday Book ordered by William the Conqueror.  Fast forward to the 21st century and technology has allowed the alignment of businesses and even whole societies in ways that were unimaginable 50 years ago.

Just as a leadership team needs its circle of direct relationships to fall within Dunbar’s 150, it also needs to rely on information which allows the management system to extend that span of control.  For the average executive, and ultimately for the average executive leadership team, this means that they can really only keep a handle on 150 “aspects” of their business, reflected in 150 “key data elements”.  These elements anchor the data sets that define the organisation.

Key Data Elements

To overcome the constraints of Dunbar’s number, mid-twentieth century conglomerates relied on a hierarchy with delegated management decisions whereas most companies today have heavily centralised decision making which (mostly) delivers a substantial gain in productivity and more efficient allocation of capital.  They can only do this because of the ability to share information efficiently through the introduction of information technology across all layers of the enterprise.

This sharing, though, depends on the ability of an executive to remember what data is important.  The same constraint that stops the human brain maintaining more than 150 relationships also applies to the use of that information.  It is reasonable to argue that information flows face the same limit as social relationships.

Across hundreds of organisations observed over many years, the variety of key data elements is wide but their number is consistently in the range of one to a few hundred.  The count perhaps tops out at 500, and the majority of well-run organisations have nearer to 150 elements dimensioning their most important data sets.

While decisions are made through metrics, it is the most important key data elements that make up the measures and allow them to be dimensioned.

Although organisations record literally hundreds of thousands of different data elements, only a very small number are central to the running of the enterprise.  Arguably, the centre can only keep track of about 150 and use them as the core of managing the business.

Another way of looking at this is that the leadership team (or even the CEO) can really only have 150 close relationships.  If each relationship has one assigned data set or key data element they are responsible for then the overall organisation will have 150.

Choosing the right 150

While most organisations have around 150 key data elements that anchor their most important information, few actually know what they are.  That’s a pity because the choice of 150 tells you a lot about the organisation.  If the 150 don’t encompass the breadth of the enterprise then you can gain insight into what’s really important to the management team.  If there is little to differentiate the key data elements from those that a competitor might choose then the company may lack a clear point of difference and be overly dependent on operational excellence or cost to gain an advantage.

Any information management initiative should start by identifying the 150 most important elements.  If its sponsors can’t narrow the set down below a few hundred, they should suspect they haven’t gotten to the core of what’s really important.  They should then ask whether these key data elements span the enterprise or pick organisational favourites; whether they offer differentiation or are “me too”; and whether they are easy or hard for a competitor to emulate, as the sketch below illustrates.
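As a minimal sketch of how that questioning might be structured (the element names, yes/no answers and equal weighting are all illustrative assumptions, not a prescribed MIKE2.0 method):

```python
# Hypothetical sketch: triaging candidate key data elements against the
# three questions above. Element names and scoring are illustrative only.

candidates = [
    # (element, spans_enterprise, differentiating, hard_to_emulate)
    ("customer_id",        True,  False, False),
    ("product_margin",     True,  True,  True),
    ("store_foot_traffic", False, True,  True),
]

def score(spans, differentiates, hard_to_copy):
    # Equal weighting is an assumption; in practice this is a facilitated
    # discussion with sponsors, not a spreadsheet formula.
    return int(spans) + int(differentiates) + int(hard_to_copy)

for name, *flags in sorted(candidates, key=lambda c: score(*c[1:]), reverse=True):
    print(f"{name}: {score(*flags)}/3")
```

Even a crude triage like this forces the conversation about which elements genuinely define the organisation.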

The identification of the 150 key data elements provides a powerful foundation for any information and business strategy, enabling a discussion about how the organisation is led and managed.  While processes evolve quickly, the information flows persist.  Understanding the 150 allows a strategist to determine whether the business is living up to its strategy or whether the strategy needs to be adjusted to reflect the business’s strengths.

Category: Enterprise Data Management, Enterprise2.0, Information Development, Information Governance, Information Management, Information Strategy, Information Value, Metadata

by: RickDelgado
27  Sep  2014

What to Look for When Implementing a BYOD Policy at a Hospital

Businesses everywhere seem to be quickly latching onto the concept of Bring Your Own Device (BYOD) in the workplace. The idea is simple: have employees bring personal devices into work where they can use them for their jobs. For your average business, this seems to be a great way to improve productivity and job satisfaction, but could such a thing work for a hospital? Obviously hospitals are a very different kind of business, where physicians, nurses, and patients interact to get the best care possible. Having personal devices in hand can make the whole operation run much more smoothly. Many hospitals have seen BYOD as a great way to boost productivity and improve patient outcomes. In fact, one survey showed that 85% of hospitals have adopted some form of BYOD. For the few that have not yet made the transition but are looking into it, a number of tips have emerged that could prove helpful.

Keep It Simple (At First)

If the idea is to make physicians’ jobs easier and more convenient, then a BYOD policy is a great way to make that a reality, but it is a pretty big change for a hospital or any other healthcare facility. That’s why, when making this change, it’s important to keep things simple at the start. Doctors and staff need time to get accustomed to using their personal devices in their work environment. Often that means downloading helpful applications that can make their jobs easier, or getting used to any new security requirements that IT departments define. A simple start allows everyone to ease into the new policy, and any changes that are needed can be made without disrupting the entire program. More complex arrangements can come later.

Create an Environment of Communication

When dealing with such a drastic change, people will have different ideas about what works and what doesn’t. That’s why everyone involved, from the top hospital administrators to the late-night IT shift to the surgical team, will need to be open to new ideas. A doctor may have a unique perspective on how a smartphone or wearable device might be used to help patients. Or an IT worker may see potential in a new app that physicians haven’t thought about using yet. Being able to communicate these ideas helps in forming the new BYOD policy. If everyone collaborates, they will end up with guidelines and suggestions that keep most people happy, which is what BYOD is all about.

Address Security

One of the biggest concerns over BYOD in any business is security, and that is certainly true for a hospital. When dealing with sensitive patient information, BYOD security must be at the top of any hospital’s priority list. The concerns don’t come out of thin air: studies from security companies have shown that around 40% of healthcare employees use personal devices that have no password protection, and many employees also access unsecured networks with their devices. Security can’t be overlooked if a hospital wants to implement a BYOD policy successfully. That means devices will need to be examined to make sure they’re compatible with the hospital’s security network, and some devices may even be rejected. While this limitation may hinder efforts to fully adopt BYOD, any measure taken to ensure tighter security will be worth it in the end.
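To make the idea of examining devices concrete, here is a minimal sketch of an automated compliance check; the device attributes and policy thresholds are illustrative assumptions, not the API of any real mobile device management product:

```python
# Hypothetical sketch of a BYOD compliance check. The device attributes
# and policy thresholds below are illustrative assumptions, not the
# interface of any real mobile device management (MDM) product.

POLICY = {
    "requires_passcode": True,
    "requires_encryption": True,
    "min_os_version": (9, 0),
}

def is_compliant(device: dict) -> bool:
    if POLICY["requires_passcode"] and not device.get("passcode_set"):
        return False
    if POLICY["requires_encryption"] and not device.get("encrypted"):
        return False
    return tuple(device.get("os_version", (0, 0))) >= POLICY["min_os_version"]

device = {"passcode_set": True, "encrypted": False, "os_version": (10, 2)}
print(is_compliant(device))  # False: unencrypted devices are rejected
```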

Focus on Privacy

On that same note, patient privacy must still be taken into account, especially now that so many patient records are electronic. Of particular concern is making sure all records, devices, and procedures legally comply with the latest HIPAA rules. Hospitals need to make sure the data found in their networks and on their employees’ devices is protected. Even though workers may be using their own smartphones or tablets, the data they access on them is still the responsibility of the hospital. Should any of that data get lost or stolen, it will be the hospital that has to deal with the subsequent legal proceedings. A BYOD policy that takes all of this into account and plans for the future can save hospitals a lot of headaches down the line.

Adopting a BYOD program at a hospital is no easy task. Numerous factors need to be taken into consideration to avoid potentially devastating consequences. Luckily, enough hospitals have already made the leap to provide helpful lessons on how these policies should be implemented. With enough expertise and innovation, a hospital creating a BYOD policy for the first time should be able to make the transition with minimal complications.

Category: Information Development

by: Ocdqblog
26  Sep  2014

Is Informed Consent outdated in the Information Age?

A Facebook experiment from late 2012 made news earlier this year and raised the ethical question of whether, by using free services provided via the Internet and mobile apps, we have granted informed consent to be experimented on for whatever purpose.

The On the Media TLDR audio podcast recently posted an interview with Christian Rudder, the co-founder of the free dating website OkCupid, who blogged about how OkCupid experiments on its users in a post with the intentionally provocative title We Experiment On Human Beings!

While this revelation understandably attracted a lot of attention, at least OkCupid is not trying to hide what it’s doing. Furthermore, as Rudder blogged, “guess what, everybody: if you use the Internet, you’re the subject of hundreds of experiments at any given time, on every site. That’s how websites work.”

During the interview, Rudder made an interesting comparison between what websites like Facebook and OkCupid do and how psychologists and other social scientists have been experimenting on human beings for decades. This point resonated with me since I have read a lot of books that explore how and why humans behave and think the way we do. Just a few examples are Predictably Irrational by Dan Ariely, You Are Not So Smart by David McRaney, Nudge by Richard Thaler and Cass Sunstein, and Thinking, Fast and Slow by Daniel Kahneman.

Most of the insights discussed in these books are based on the results of countless experiments, most of which were performed on college students since they can be paid in pizza, course credit, or beer money. Most of the time, the subjects in these experiments are not fully informed about the nature of the experiment. In fact, they are often intentionally misinformed so as not to skew the results.

Rudder argued the same thing is done to improve websites. So why do we see hallowed halls when we envision the social scientists behind university research, but we see creepy cubicles when we envision the data scientists behind website experimentation? Perhaps we trust academic more than commercial applications of science.

During the interview, Rudder addressed the issue of trust. Users of OkCupid are trusting the service to provide them with good matches and Rudder acknowledged how experimenting on users can seem like a violation of that trust. “However,” Rudder argued, “doing experiments to make sure that what we’re recommending is the best job that we could possibly do is upholding that trust, not violating it.”

It’s easy to argue that the issue of informed consent regarding experimentation on a dating or social networking website is not the same as informed consent regarding government surveillance, such as last year’s PRISM scandal. The latter is less about experimentation and more about data privacy, where often we are our own worst enemy.

But who actually reads the terms and conditions for a website or mobile app? If you do not accept the terms and conditions, you can’t use it, so most of us accept them by default without bothering to read them. Technically, this constitutes informed consent, which is why it may simply be an outdated concept in the information age.

The information age needs enforced accountability (aka privacy through accountability), which is less about informed consent and more about holding service providers accountable for what they do with our data. This includes the data resulting from the experiments they perform on us. Transparency is an essential aspect of that accountability, allowing us to make an informed decision about what websites and mobile apps we want to use.

However, to Rudder’s point, we are fooling ourselves if we think that such transparency would allow us to avoid using the websites and mobile apps that experiment on us. They all do. They have to in order to be worth using.

 

Category: Information Governance

by: Bsomich
20  Sep  2014

Community Update.

Missed what’s been happening in the MIKE2.0 community? Check out our bi-weekly update:

 


When to Use a NoSQL Database 

If your company has complicated, large sets of data that it’s looking to analyze, and that data isn’t simple, structured or predictable, then SQL is not going to meet your needs. While SQL specializes in many things, large amounts of unstructured data is not one of them. There are other methods for gathering and analyzing your data that will be much more effective and efficient, and will probably cost you less too.

NoSQL, which stands for “Not Only SQL,” is what your company needs. It’s perfect for dealing with huge data sets that aren’t always going to be structured — meaning every batch that’s gathered and analyzed could potentially be different. It’s a relatively new approach, but it’s becoming increasingly important in the big data world because of its immense capabilities.

NoSQL is known for its flexibility, scalability and relatively low cost, none of which SQL offers. SQL relies on traditional column-and-row processing with predefined schemas. When you’re dealing with unstructured information, it’s nearly impossible to work with schemas, because there’s no way to know what type of information you’ll be getting each time. With SQL it can also be costly to scale data, and the actual processing can be very slow. NoSQL, on the other hand, makes the process cheaper, and it can do real-time analysis on extremely large data sets. It gives companies the flexibility to gather as much or as little information as they need to be successful.

We invite you to read more about this topic on our blog. And as always, we welcome any suggestions you may have to improve it.

Sincerely,

MIKE2.0 Community

New! Popular Content

Did you know that the following wiki articles are most popular on Google? Check them out, and feel free to edit or expand them!

What is MIKE2.0?
Deliverable Templates
The 5 Phases of MIKE2.0
Overall Task List
Business Assessment Blueprint
SAFE Architecture
Information Governance Solution

Contribute to MIKE:

Start a new article, help with articles under construction or look for other ways to contribute.

Update your personal profile to advertise yourself to the community and interact with other members.

Useful Links:
Home Page
Login
Content Model
FAQs
MIKE2.0 Governance


This Week’s Blogs for Thought:

What Movie Descriptions Can Teach Us About Metadata

In a previous post on this blog, I reviewed what movie ratings teach us about data quality. In this post I ponder another movie-related metaphor for information development by looking at what movie descriptions teach us about metadata.

Read more.

How CIOs Can Discuss the Contribution of IT

Just how productive are Chief Information Officers or the technology that they manage?  With technology portfolios becoming increasingly complex, it is harder than ever to measure productivity.  Yet boards and investors want to know that the capital they have tied up in the information technology of the enterprise is achieving the best possible return.
For CIOs, talking about value improves the conversation with executive colleagues.  Taking them aside to talk about the success of a project is, even for the most strategic initiatives, usually seen as a tactical discussion.  Changing the topic to increasing customer value or staff productivity through a return on technology capital is a much more strategic contribution.

Read more.
Let the Computers Calculate and the Humans Cogitate

Many organizations are wrapping their enterprise brain around the challenges of business intelligence, looking for the best ways to analyze, present, and deliver information to business users.  More organizations are choosing to do so by pushing business decisions down in order to build a bottom-up foundation.

However, one question coming up more frequently in the era of big data is what should be the division of labor between computers and humans?
Read more.

 

 

Questions?

If you have any questions, please email us at mike2@openmethodology.org.

 

 

Category: Information Development

by: Gil Allouche
16  Sep  2014

When to Use a NoSQL Database

If your company has complicated, large sets of data that it’s looking to analyze, and that data isn’t simple, structured or predictable, then SQL is not going to meet your needs. While SQL specializes in many things, large amounts of unstructured data is not one of them. There are other methods for gathering and analyzing your data that will be much more effective and efficient, and will probably cost you less too.

NoSQL, which stands for “Not Only SQL,” is what your company needs. It’s perfect for dealing with huge data sets that aren’t always going to be structured — meaning every batch that’s gathered and analyzed could potentially be different. It’s a relatively new approach, but it’s becoming increasingly important in the big data world because of its immense capabilities.

NoSQL is known for its flexibility, scalability and relatively low cost, none of which SQL offers. SQL relies on traditional column-and-row processing with predefined schemas. When you’re dealing with unstructured information, it’s nearly impossible to work with schemas, because there’s no way to know what type of information you’ll be getting each time. With SQL it can also be costly to scale data, and the actual processing can be very slow. NoSQL, on the other hand, makes the process cheaper, and it can do real-time analysis on extremely large data sets. It gives companies the flexibility to gather as much or as little information as they need to be successful.
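To make the schema contrast concrete, here is a minimal sketch using MongoDB, one popular document-oriented NoSQL database, via its pymongo driver (the database, collection and field names are illustrative, and a local MongoDB instance is assumed):

```python
# Minimal sketch: heterogeneous records in a document store.
# Assumes a local MongoDB instance; names are illustrative only.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
events = client["analytics"]["events"]

# Two "batches" with different shapes -- no predefined schema required.
events.insert_one({"type": "web_click", "page": "/home", "ms_on_page": 5400})
events.insert_one({"type": "support_call", "agent": "A12",
                   "transcript": "Customer asked about billing..."})

# Query across both shapes by the one field they share.
for doc in events.find({}, {"_id": 0, "type": 1}):
    print(doc)
```

Two records with different shapes land in the same collection; a fixed SQL schema would force a redesign first.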

Scale
NoSQL really takes information scalability to the next level. In the past, companies were too often hindered in their data gathering because then-current technologies could not handle large sets of information. Part of NoSQL’s strength comes not only from its ability to scale information much better than SQL, but also from how well it works in the cloud. With cloud services, companies have nearly unlimited scalability. Many NoSQL databases also offer integration with other big data tools, such as Apache Spark as a Service.

Flexibility
Along the same lines as scalability, companies can now gather more information than was ever possible with SQL. That means companies can be much more flexible in the information they gather and process. In the past, companies had to be selective about the type and amount of information gathered for fear of reaching storage and processing limits. NoSQL eliminates that difficulty. It’s important that companies have more information, not for information’s sake, but to benefit the consumer. Data sets that companies were previously unable to utilize can now be of great benefit.

Price
NoSQL makes scaling and gathering information much more cost efficient than SQL for three reasons. First, its increased ability to process large amounts of information quickly and effectively means companies can make changes faster, which positively affects the bottom line. Second, it’s much cheaper to scale information with NoSQL than with SQL, which also saves money. Last, because of NoSQL and big data in the cloud, many companies now have access to big data without having to pay the enormous startup costs that come with an onsite infrastructure — something that used to be a necessity for big data.

Speed
Not only does NoSQL offer increased scalability and flexibility at a lower price, but it also provides increased analytical speed. Companies can gather and analyze large data sets, not just quickly, but in real-time. No longer are companies forced to wait hours, days or even weeks to get the information back. The uses for this type of quick analysis are nearly endless. It’s especially important in today’s social media-filled world. Companies can instantly see what is being said about them, how their products are being received, what competitors are doing and what consumers want to see in the future.

Conclusion
NoSQL isn’t perfect and still lacks some of the capabilities of SQL, but it has many strengths that make it a great option for companies looking to gather and analyze large data sets that are unstructured and unpredictable.

Category: Enterprise Data Management

by: Ocdqblog
15  Sep  2014

What Movie Descriptions teach us about Metadata

In a previous post on this blog, I reviewed what movie ratings teach us about data quality. In this post I ponder another movie-related metaphor for information development by looking at what movie descriptions teach us about metadata.

Nightmare on Movie Night

It’s movie night. What are you in the mood for? Action Adventure? Romantic Comedy? Science Fiction? Even after you settle on a genre, picking a movie within it can feel like a scene from a Suspense Thriller. Even if you are in the mood for a Horror film, you don’t want to turn movie night into a nightmare by watching a horrible movie. You need reliable information about movies to help you make your decision. You need better movie metadata.

Tag the Movie: A Netflix Original

In his article about How Netflix Reverse Engineered Hollywood, Alexis Madrigal explained how Netflix uses large teams of people specially trained to watch movies and tag them with all kinds of metadata.

Madrigal described the process as “so sophisticated and precise that taggers receive a 36-page training document that teaches them how to rate movies on their sexually suggestive content, goriness, romance levels, and even narrative elements like plot conclusiveness. They capture dozens of different movie attributes. They even rate the moral status of characters. When these tags are combined with millions of users’ viewing habits, they become Netflix’s competitive advantage. The company’s main goal as a business is to gain and retain subscribers. And the genres that it displays to people are a key part of that strategy.”

The Vocabulary and Grammar of Movies

As Madrigal investigated how Netflix describes movies, he discovered a well-defined vocabulary. Standardized adjectives were consistently used (e.g., Romantic, Critically Acclaimed, Dark, Suspenseful). Phrases beginning with “Based on” revealed where the idea for a movie came from (e.g., Real Life, Children’s Book, Classic Literature). Phrases beginning with “Set in” oriented where or when a movie was set (e.g., Europe, Victorian Era, Middle East, Biblical Times). Phrases beginning with “From the” dated the decade the movie was made in. Phrases beginning with “For” recommended appropriate age ranges for children’s movies. And phrases beginning with “About” provided insight about the thematic elements of a movie (e.g., Food, Friendship, Marriage, Parenthood).

Madrigal also discovered the rules of grammar Netflix uses to piece together the components of its movie vocabulary to generate more comprehensive genres in a consistent format, such as Romantic Dramas Based on Classic Literature Set in Europe From the 1940s About Marriage (one example of which is Pride and Prejudice).
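A toy sketch of that grammar, assuming the phrase slots Madrigal describes (the function and its argument names are invented for illustration):

```python
# Toy sketch of the genre grammar described above: fixed phrase slots
# assembled in a consistent order. Vocabulary here is abbreviated and
# the function name is invented for illustration.

def genre(adjective=None, noun="Movies", based_on=None, set_in=None,
          from_decade=None, about=None):
    parts = []
    if adjective:
        parts.append(adjective)
    parts.append(noun)
    if based_on:
        parts.append(f"Based on {based_on}")
    if set_in:
        parts.append(f"Set in {set_in}")
    if from_decade:
        parts.append(f"From the {from_decade}")
    if about:
        parts.append(f"About {about}")
    return " ".join(parts)

print(genre(adjective="Romantic", noun="Dramas",
            based_on="Classic Literature", set_in="Europe",
            from_decade="1940s", about="Marriage"))
# Romantic Dramas Based on Classic Literature Set in Europe
# From the 1940s About Marriage
```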

What Movie Descriptions teach us about Metadata

The movie metadata created by Netflix is conceptually similar to the music metadata created by Pandora with its Music Genome Project. The detailed knowledge of movies that Netflix has encoded as supporting metadata makes movie night magic for its subscribers.

Todd Yellin of Netflix, who guided their movie metadata production, described the goal of personalized movie recommendations as “putting the right title in front of the right person at the right time.” That is the same goal lauded in the data management industry for decades: delivering the right data to the right person at the right time.

To deliver the right data you need to know as much as possible about the data you have. More important, you need to encode that knowledge as supporting metadata.

 

Category: Information Development, Metadata

by: Bsomich
30  Aug  2014

MIKE2.0 Community Update

Missed what’s been happening in the MIKE2.0 Community? Check out our bi-weekly update:


Have you seen our Open MIKE Podcast series? 

The Open MIKE Podcast is a video podcast show, hosted by Jim Harris, which discusses aspects of the MIKE2.0 framework and features content contributed to MIKE2.0 Wiki Articles, Blog Posts, and Discussion Forums.

We kindly invite any existing MIKE contributors to contact us if they’d like to contribute any audio or video segments for future episodes.

Check it out, and view our community overview video for more information on how to get involved with MIKE2.0.

On Twitter? Contribute and follow the discussion via the #MIKEPodcast hashtag.

Sincerely,

MIKE2.0 Community

New! Popular Content

Did you know that the following wiki articles are most popular on Google? Check them out, and feel free to edit or expand them!

What is MIKE2.0?
Deliverable Templates
The 5 Phases of MIKE2.0
Overall Task List
Business Assessment Blueprint
SAFE Architecture
Information Governance Solution

Contribute to MIKE:

Start a new article, help with articles under construction or look for other ways to contribute.

Update your personal profile to advertise yourself to the community and interact with other members.

Useful Links:
Home Page
Login
Content Model
FAQs
MIKE2.0 Governance


This Week’s Blogs for Thought:

RDF PROVenance Ontology

This post features a special 3-part focus on the RDF PROVenance Ontology, a Recommendation from the World Wide Web Consortium that is expected to be a dominant interchange standard for enterprise metadata in the immediate future, replacing the Dublin Core. Following an introduction, reviews and commentary about the ontology’s RDF classes and RDF properties should stimulate lively discussion of the report’s conclusions.

Read more. 
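For readers who want a feel for the ontology before diving into the full post, here is a minimal sketch of PROV-O’s three core classes using Python’s rdflib library; the report, job and team named here are invented for illustration:

```python
# Minimal sketch of the W3C PROV-O core classes (prov:Entity,
# prov:Activity, prov:Agent) using rdflib. The resources named
# below are invented for illustration.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

PROV = Namespace("http://www.w3.org/ns/prov#")
EX = Namespace("http://example.org/")

g = Graph()
g.bind("prov", PROV)
g.add((EX.quarterly_report, RDF.type, PROV.Entity))
g.add((EX.etl_job_42, RDF.type, PROV.Activity))
g.add((EX.data_team, RDF.type, PROV.Agent))
g.add((EX.quarterly_report, PROV.wasGeneratedBy, EX.etl_job_42))
g.add((EX.etl_job_42, PROV.wasAssociatedWith, EX.data_team))

print(g.serialize(format="turtle"))
```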

Are There Alternatives to Data Hoarding? 

“In a microsecond economy,” Becca Lipman recently blogged, “most data is only useful in the first few milliseconds, or to an extent, hours after it is created. But the way the industry is collecting data, or more accurately, hoarding it, you’d think its value lasts a lifetime. Yes, storage costs are going down and selecting data to delete is no easy task, especially for the unstructured and unclassified sets. And fear of deleting something that could one day be useful is always going to be a concern. But does this give firms the go-ahead to be data hoarders?”

Read more.

Analogue Business with a Digital Facade

Everyone is talking about digital disruption and the need to transform their company into a “digital business”.  However, ask most people what a digital business is and they’ll talk in terms of online shopping, mobile channels or the latest wearable device.

In fact we have a long tradition of using the word “digital” as a prefix to new concepts while we adapt.  Some examples include the wide introduction from the 1970s of the “digital computer”, a term which no longer needs the digital prefix.
Read more.

 

 

 

Category: Information Development

by: Ocdqblog
27  Aug  2014

Are there alternatives to Data Hoarding?

“In a microsecond economy,” Becca Lipman recently blogged, “most data is only useful in the first few milliseconds, or to an extent, hours after it is created. But the way the industry is collecting data, or more accurately, hoarding it, you’d think its value lasts a lifetime. Yes, storage costs are going down and selecting data to delete is no easy task, especially for the unstructured and unclassified sets. And fear of deleting something that could one day be useful is always going to be a concern. But does this give firms the go-ahead to be data hoarders?”

Whether we choose to measure it in terabytes, petabytes, exabytes, HoardaBytes, or how harshly reality bites, we were hoarding data long before the data management industry took a super-sized tip from McDonald’s and put the word “big” in front of its signature sandwich. At least McDonald’s started phasing out its super-sized menu options in 2004, citing the need to offer healthier food choices, a move that was perhaps motivated in some small way by the success of Morgan Spurlock’s Academy Award-nominated documentary film Super Size Me.

Much like fast food is an enabler for our chronic overeating and the growing epidemic of obesity, big data is an enabler for our data hoarding compulsion and the growing epidemic of data obesity.

Does this data smell bad to you?

Are there alternatives to data hoarding? Perhaps we could put an expiration date on data; once that date has passed, we could at least archive, if not delete, the expired data. One challenge with this approach is that with most data you cannot say exactly when it will expire. Even if we could, expiration dates for data might be as meaningless as the expiration dates we currently have for food.
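Mechanically, the archive-on-expiry idea is simple, as this minimal sketch shows (the record layout and the 90-day retention period are illustrative assumptions):

```python
# Minimal sketch: archive records past an assumed expiration date.
# The record layout and 90-day retention period are illustrative.
from datetime import datetime, timedelta

RETENTION = timedelta(days=90)
records = [
    {"id": 1, "created": datetime(2014, 3, 1), "payload": "..."},
    {"id": 2, "created": datetime(2014, 9, 20), "payload": "..."},
]

now = datetime(2014, 9, 27)  # fixed "today" so the example is reproducible
live    = [r for r in records if now - r["created"] <= RETENTION]
expired = [r for r in records if now - r["created"] > RETENTION]

print(len(live), "kept;", len(expired), "sent to archive")
```

The hard part, as noted above, is choosing a retention period that actually means anything.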

As Rose Eveleth reported, “these dates are—essentially—made up. Nobody regulates how long milk or cheese or bread stays good, so companies can essentially print whatever date they want on their products.” Eveleth shared links to many sources that basically recommended ignoring the dates and relying on seeing if the food looks or smells bad.

What about regulatory compliance?

Another enabler of data hoarding is concerns about complying with future regulations. This is somewhat analogous to income tax preparation in the United States where many people hoard boxes of receipts for everything in hopes of using them to itemize their tax deductions. Even though most of the contents are deemed irrelevant when filing an official tax return, some people still store the boxes in their attic just in case of a future tax audit.

How useful would data from this source be?

Although calculating the half-life of data has always been problematic, Larry Hardesty recently reported on a new algorithm developed by MIT graduate student Dan Levine and his advisor Jonathan How. By using the algorithm, Levine explained, “the usefulness of data can be assessed before the data itself becomes available.” Similar algorithms might be the best future alternative to data hoarding, especially when you realize that the “keep it just in case we need it” theory often later faces the “we can’t find it amongst all the stuff we kept” reality.

What Say You?

Have you found alternatives to data hoarding? Please share your thoughts by leaving a comment below.

 

Category: Enterprise Data Management, Information Governance

by: Robert.hillard
23  Aug  2014

Analogue business with a digital facade

Everyone is talking about digital disruption and the need to transform their company into a “digital business”.  However, ask most people what a digital business is and they’ll talk in terms of online shopping, mobile channels or the latest wearable device.

In fact we have a long tradition of using the word “digital” as a prefix to new concepts while we adapt.  Some examples include the wide introduction from the 1970s of the “digital computer”, a term which no longer needs the digital prefix.  Similarly, the “digital mobile phone” replaced its analogue equivalent in the 1990s, introducing security and many new features including SMS.  The scandals caused when radio hams listened in on analogue calls made by politicians seem like a distant memory!

The term digital really just refers to the use of ones and zeros to describe something in a way that is exactly repeatable, rather than as an inexact analogue stream.  Consider, for instance, the difference between the ones and zeros used to encode the music in an audio file, which will replay with the same result now and forever, and the physical fluctuations in the groove of an old vinyl record, which will gradually degrade.  Not only is the audio file more robust, it is also more readily manipulated into new and interesting combinations.

Digital business

So it is with digital business.  Once the economy has successfully made the transition from analogue to digital, it will be natural for all business to be thought of in this way.  Today, however, too many people put a website and mobile app over the top of their existing business models and declare the job done.

Their reason for doing this is that they don’t actually understand what a digital business is.

Digital business separates its constituent parts to create independent data and processes which can then be rapidly assembled in a huge number of new and innovative ways.  The move to digital business is actually just a continuation of the move to the information economy.  We are, in fact, moving to the Information-Driven Business that puts information rather than processes at its core.

Airlines are a good example.  Not that many years ago, the process from ticketing through to boarding a flight was analogue, meaning that each step led to the next and could not be separated.  Today purchasing, ticketing and boarding a flight are completely independent and can each use completely different processes and digital technology without impacting each other.  Passenger handling for airlines is now a digital business.
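A minimal sketch of that separation, with a shared booking record as the only coupling point (all function and field names are invented for illustration):

```python
# Minimal sketch: purchasing, ticketing and boarding as independent steps
# coupled only by a shared booking record, not by each other's internals.
# All names are invented for illustration.

def purchase(passenger, flight):
    return {"booking_ref": "ABC123", "passenger": passenger, "flight": flight}

def ticket(booking):
    # Could be replaced (e.g., mobile wallet passes) without touching purchase().
    return {**booking, "eticket": f"ET-{booking['booking_ref']}"}

def board(ticketed):
    # Could move from paper scans to biometric gates independently.
    return f"{ticketed['passenger']} boarded {ticketed['flight']}"

print(board(ticket(purchase("A. Passenger", "QF1"))))
```

Each step can be re-implemented, outsourced or replaced without the others noticing, which is precisely the property the analogue pipeline lacked.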

What this means is that third parties or competing internal systems can work on an isolated part of the business and find new ways of adding value.  For retailers this means that the pictures and information supporting products are independent of the website that presents them and certainly the payment processes that facilitate customer transactions.  A digital retailer has little trouble sharing information with new logistics, payments and mobile providers to quickly develop more efficient or new routes to market.

The digital facade

In the 1970s and 1980s businesses were largely built up using thousands of processes.  Over time, automation has allowed these numbers to explode.  When processes required clerical functions, the number of options was limited by available labour.  With automation, every issue can apparently be solved by adding a process.

Where digital business is about breaking activities up into discrete parts which can be reassembled, analogue business tends to be made up of processes which are difficult or impossible to break apart.

The problem with this is that the organisation becomes a maze of processes.  They become increasingly interdependent, to the point where it is impossible to break them apart.

Many businesses have put mobile and web solutions over the top of this maze.  While the result can look fantastic, it doesn’t take long before the wheels fall off.  Customers experience inconsistent product, delivery or price options depending on whether they ring the call centre or look online.  They find that the promised stock is no longer available because the warehouse processes are not properly integrated with the online store.  Without seamless customer information, they aren’t addressed with the same premium privileges or priority across all channels.

In many cases, the process maze means that the addition of a digital façade can do more harm than good.

Ironically, the reverse model of having a truly digital business with an analogue interface to the customer is not only valid but often desirable.  Many customers, particularly business clients, are themselves dependent upon complex processes in their own organisations and it makes perfect sense to work with them the way that they want to work.  An airline that is entirely digital in its core can still deal with corporate travel agents who operate in an analogue world.

Digital transformation from the inside out

I have previously argued that the technology infrastructure that supports digital business, cloud, is actually the foundation for business transformation (see Cloud computing should be about new business models).

Every twenty-first century business needs to define itself in terms of its information assets and differentiated intellectual property.  Business transformation should focus on applying these to the benefit of all of its stakeholders including customers, staff, suppliers and shareholders.

Starting from the core and building a new enterprise around discrete, digital, business capabilities is a big exercise.  The alternative, however, is to risk being pulled under in the long-term by the weight of complexity.

No matter how many extra interfaces, interim processes or quick fixes are put in place, any business that fails in this transformation challenge will ultimately be seen as offering no more than a digital façade on a fading analogue business.

Category: Enterprise2.0, Information Development, Web2.0

by: Gil Allouche
20  Aug  2014

Lessons from How Amazon Uses Big Data

Entering the field of big data is no cakewalk. Given the many nuances of big data architecture, combined with the fact that it’s so new, relatively speaking, it can be overwhelming for many executives to implement. Or, instead of being overwhelmed, there’s simply a lack of experience and understanding. That misunderstanding all too often leads management to use big data inefficiently.

One of the best ways for companies to learn about big data and how they can effectively implement it is by analyzing those who have used big data successfully for years. Amazon.com is one of those companies.

There’s no doubting the data expertise of Amazon.com. One of the key innovators in big data technology, the global giant has given us lesson after lesson on how to successfully gather, analyze and then implement data analytics. Not only has it effectively used big data for its own purposes, but with services like Amazon Elastic MapReduce, the company has successfully leveraged its own data use to help others.

Amazon is full of lessons on how to use big data successfully. Here are a few.

Focus on the Customer


One of Amazon’s premier uses of big data has been customer recommendations. If you have an Amazon account, take a quick look at your Amazon home page: as you scroll down, you’ll notice recommendations based on your browsing history, along with sale items based on what you’ve bought and searched for in the past. While this type of thing occurs frequently today, Amazon was one of the first companies to do it.

Amazon has put a focus on using its big data to give its customers a personalized and focused buying experience. Interestingly, by giving customers this personalized experience, the customer tends to buy more than they would otherwise. It’s a simple solution for many problems.
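As a toy sketch of the idea behind such recommendations, consider simple co-occurrence counting over purchase baskets; the data is invented, and Amazon’s real recommendation engine is far more sophisticated:

```python
# Toy sketch of recommendation by co-occurrence: "customers who bought X
# also bought Y". Data is invented; Amazon's actual approach is far more
# sophisticated than this.
from collections import Counter
from itertools import combinations

baskets = [
    {"kindle", "case", "charger"},
    {"kindle", "case"},
    {"kindle", "charger"},
    {"book", "case"},
]

co = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co[(a, b)] += 1
        co[(b, a)] += 1

def recommend(item, n=2):
    scores = {b: c for (a, b), c in co.items() if a == item}
    return sorted(scores, key=scores.get, reverse=True)[:n]

print(recommend("kindle"))  # e.g., ['case', 'charger']
```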

For companies implementing big data, a key focus needs to be the consumer. If companies want to succeed in big data or at all, the consumer has to come first. The more satisfied they are, the better off you’ll be.

Be Focused


It’s impossible to know all Amazon’s uses of big data. Still, though, another lesson we can learn from the online retailing giant is to have an extreme focus on big data gathering and use.

Amazon gathers extremely large amounts of data each second, let alone each day and year. At that rate it would be easy to lose focus on what data is being gathered, why it’s being gathered, and how exactly it can help the customer. But Amazon doesn’t let that happen. It’s very strategic, both in gathering data and in implementing changes and upgrades because of that data.

Too many companies let big data overwhelm them. They don’t have a clear focus when they begin, and they never end up obtaining one. The data they’ve gathered goes to waste and they completely miss the opportunity and potential.

Big Data Works


Amazon is one of the most successful companies today. It’s a true global empire. Consumers shop on Amazon for everything. It leads the e-book, e-reader and tablet industries, and it has recently made forays into TV boxes and phones.

Behind all this success is a rock-solid determination to gather data and use it efficiently. It’s gone where other companies were afraid to go and it’s achieved success other companies wish they had.

Among many contributing factors, Amazon has leveraged its big data expertise in extremely innovative and effective ways. It’s taken big data to the next level. It has shown — time and again — that big data works. It’s shown that if companies want to take their operations and success to the next level then big data is a key component.

Make Big Data Work for You


Amazon is a great example of big data use. It’s not necessarily the size of the company or the size of the data that matters most. As Amazon has illustrated, it’s about tailoring to the customer’s needs, staying focused, having a plan, and actually using the technology. Big data works for Amazon; now make it work for you.

Category: Business Intelligence
