by: RickDelgado
08  Oct  2014

5 of the Most Common IT Security Mistakes to Watch Out For

Securing the enterprise is no easy task. Every day there seem to be dozens of new security risks threatening to shut down your company’s systems and steal valuable data. Stories of large corporations suffering enormous data breaches don’t help calm those fears, but they underline that the risks are real and that businesses must be able to respond to them. Even though enhancing security is crucial, enterprises still make plenty of mistakes while trying to shore up their systems. Here’s a look at some of the most common IT security mistakes so you’ll be better aware of what to watch out for.

 

Overlooking IT Security

It may sound surprising, but many companies don’t place IT security among their top priorities. In the pursuit of making money, businesses see security as a costly endeavor, one which requires numerous resources, significant investments, and a substantial time commitment. When security is done right, business goes on as usual, which is why some company leaders don’t consider it high on the to-do list. For obvious reasons, this can be a disastrous approach. Too many companies are reactive to threats, dealing with them only after they have already occurred. Businesses that take IT security seriously need to be much more proactive, learning about the latest risks and taking the necessary steps to prevent them from infecting their systems.

 

Password Weaknesses

One of the first lines of defense against data leaks and theft is the password. Passwords make sure only authorized persons are able to access networks and systems. To be effective, passwords need to be strong, but too often this is simply not the case. Many companies actually leave default passwords on their network appliances, making them attractive targets for attackers. On the flip side, those that do change passwords often choose weak ones. Employees and managers need to make sure their passwords cannot simply be guessed by unauthorized users.
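To make this concrete, here is a minimal sketch in Python of the kind of credential audit this implies; the password list and thresholds are illustrative assumptions, not a standard:

```python
# Minimal sketch of a pre-deployment credential audit: flag accounts still
# using factory defaults or trivially guessable passwords. The password
# list and the 12-character threshold are illustrative, not exhaustive.
DEFAULT_PASSWORDS = {"admin", "password", "12345678", "changeme", "letmein"}

def audit_password(username: str, password: str) -> list[str]:
    findings = []
    if password.lower() in DEFAULT_PASSWORDS:
        findings.append("matches a known default/weak password")
    if len(password) < 12:
        findings.append("shorter than 12 characters")
    if username.lower() in password.lower():
        findings.append("contains the username")
    if password.isalpha() or password.isdigit():
        findings.append("uses only one character class")
    return findings

print(audit_password("admin", "admin"))  # all four findings fire
```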

 

Lack of Patching

Security threats are constantly evolving. What was a major risk several years ago is probably not a major concern today, but that only means other threats have taken its place. The best response companies can have to this evolving landscape is to patch their IT systems promptly, but this doesn’t happen often enough. One expert from Symantec Corp. says at least 75% of security breaches could be prevented if all the security software were patched with the latest updates. Fully patched systems have a far better chance of detecting new threats and responding effectively.
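Patch checking is platform-specific, but as one illustration, here is a minimal sketch that uses pip’s --outdated report to list Python packages with newer releases available (assuming a Python environment with pip installed):

```python
# Minimal sketch of the automated check a patching routine starts from:
# list installed Python packages that have newer releases available.
import json
import subprocess
import sys

result = subprocess.run(
    [sys.executable, "-m", "pip", "list", "--outdated", "--format=json"],
    capture_output=True, text=True, check=True,
)
for pkg in json.loads(result.stdout):
    print(f"{pkg['name']}: {pkg['version']} -> {pkg['latest_version']}")
```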

 

Lack of Education

Employee behavior is one of the biggest concerns business leaders have. Even with updated systems and the latest software, security is only as strong as the weakest link, and many times that weakest link is the end-user. Where businesses often go wrong is in failing to educate their employees about threats. Without proper education about current risks, it should come as no surprise that employees engage in activity that endangers company security. Some turn into “promiscuous clickers”, clicking on email attachments or links on suspicious and even trusted websites that can lead to malware infection. Employees need to be taught which of their behaviors are risky so they can avoid them in the future. It also doesn’t hurt to deploy adequate endpoint security controls, like anti-virus software and firewalls, that can protect against risky clicking.

 

The Unprotected Cloud

Many companies are turning to the cloud for their storage and computing needs, but that also opens up more possibilities for security problems. Businesses often fail to check a cloud vendor’s security capabilities and end up paying for it when data gets lost or stolen. The general rule is: the cheaper the cloud service, the fewer protections it will have. This is especially true for free services, which don’t offer the encryption and security measures that the more expensive services do. That’s why businesses need to make sure they’re doing everything on their end to secure their data while also evaluating cloud vendors.
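One way to do your part regardless of the vendor is to encrypt data before it ever leaves your systems. Below is a minimal sketch assuming the third-party Python cryptography package; key management details are deliberately omitted:

```python
# Minimal sketch: encrypt data client-side before uploading to any cloud
# service, so a provider with weak protections never sees plaintext.
# Assumes the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # store securely, never with the cloud provider
cipher = Fernet(key)

plaintext = b"quarterly sales figures"
token = cipher.encrypt(plaintext)  # the ciphertext is safe to hand to the cloud

# Later, after downloading the ciphertext back:
assert cipher.decrypt(token) == plaintext
```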

 

Security needs to be a top priority for businesses, but enhancing IT security often requires avoiding simple mistakes. Though it may require financial and technological resources, companies that make sure their systems are secure can rest easy knowing their data is protected. Some of these mistakes are easy to rectify, and with greater security comes greater confidence and more productivity.

 

Category: Information Development

by: Bsomich
04  Oct  2014

MIKE2.0 Community Update

 

 

Have you seen our Open MIKE Series? 

The Open MIKE Podcast is a video podcast show which discusses aspects of the MIKE2.0 framework, and features content contributed to MIKE 2.0 Wiki Articles, Blog Posts, and Discussion Forums.


For more information on MIKE2.0 or how to get involved with our online community, please visit www.openmethodology.org.

Sincerely,

MIKE2.0 Community  

Contribute to MIKE:

Start a new article, help with articles under construction, or look for other ways to contribute.

Update your personal profile to advertise yourself to the community and interact with other members.

Useful Links:
Home Page
Login
Content Model
FAQs
MIKE2.0 Governance


 

Did You Know?
All content on MIKE2.0 and any contributions you make are published under the Creative Commons license. This allows you free re-use of our content as long as you add a brief reference back to us.

 

This Week’s Blogs for Thought:

The Rule of 150 Applied to Data

Anthropologist Robin Dunbar has used his research in primates over recent decades to argue that there is a cognitive limit to the number of social relationships that an individual can maintain and hence a natural limit to the breadth of their social group.  In humans, he has proposed that this number is 150, the so-called “Dunbar’s number.”

In the modern organisation, relationships are maintained using data.  It doesn’t matter whether it is the relationship between staff and their customers, tracking vendor contracts, the allocation of products to sales teams or any other of the literally thousands of relationships that exist, they are all recorded centrally and tracked through the data that they throw off.

Read more.

What to Look for When Implementing a BYOD Policy

Businesses everywhere seem to be quickly latching onto the concept of Bring Your Own Device (BYOD) in the workplace. The idea is simple: have employees bring personal devices into work where they can use them for their jobs. For your average business, this seems to be a great way to improve productivity and job satisfaction, but could such a thing work for a hospital? Obviously hospitals are a very different kind of business, where physicians, nurses, and patients interact to get the best care possible. Having personal devices in hand can make the whole operation run much smoother. Many hospitals out there have seen BYOD as a great way to boost productivity and improve patient outcomes. In fact, one survey showed that 85% of hospitals have adopted some form of BYOD. For the few who have not yet made the transition but are looking into it, a number of tips have popped up that could prove helpful.

Read more.

An Open Source Solution for Better Performance Management 

Today, many organizations are facing increased scrutiny and a higher level of overall performance expectation from internal and external stakeholders. Both business and public sector leaders must provide greater external and internal transparency into their activities, ensure accounting data stands up to compliance challenges, and extract return and competitive advantage from their customer, operational, and performance information: managers, investors, and regulators have a new perspective on performance and compliance.

Read more.

Forward to a Friend!

Know someone who might be interested in joining the Mike2.0 Community? Forward this message to a friend

Questions?

If you have any questions, please email us at mike2@openmethodology.org.

 

 

Category: Information Development

by: Robert.hillard
27  Sep  2014

The rule of 150 applied to data

Anthropologist Robin Dunbar has used his research in primates over recent decades to argue that there is a cognitive limit to the number of social relationships that an individual can maintain and hence a natural limit to the breadth of their social group.  In humans, he has proposed that this number is 150, the so-called “Dunbar’s number”.

In the modern organisation, relationships are maintained using data.  It doesn’t matter whether it is the relationship between staff and their customers, tracking vendor contracts, the allocation of products to sales teams or any other of the literally thousands of relationships that exist, they are all recorded centrally and tracked through the data that they throw off.

Social structures have evolved over thousands of years using data to deal with the inability of groups of more than 150 to effectively align. One of the best examples of this is the 11th century Domesday Book ordered by William the Conqueror.  Fast forward to the 21st century and technology has allowed the alignment of businesses and even whole societies in ways that were unimaginable 50 years ago.

Just as a leadership team needs a group of people they relate to that falls within Dunbar’s number of 150, they also need to rely on information which allows the management system to extend that span of control.  For the average executive, and ultimately for the average executive leadership team, this means that they can really only keep a handle on 150 “aspects” of their business, reflected in 150 “key data elements”.  These elements anchor the data sets that define the organisation.

Key Data Elements

To overcome the constraints of Dunbar’s number, mid-twentieth century conglomerates relied on a hierarchy with delegated management decisions whereas most companies today have heavily centralised decision making which (mostly) delivers a substantial gain in productivity and more efficient allocation of capital.  They can only do this because of the ability to share information efficiently through the introduction of information technology across all layers of the enterprise.

This sharing, though, is dependent on the ability of an executive to remember what data is important.  The same constraint of the human brain to know more than 150 people also applies to the use of that information.  It is reasonable to argue that the information flows have the same constraint as social relationships.

Across hundreds of organisations observed over many years, the variety of key data elements is wide, but their number is consistently in the range of one to a few hundred.  Perhaps topping out at 500, the majority of well-run organisations have nearer to 150 elements dimensioning their most important data sets.

While decisions are made through metrics, it is the most important key data elements that make up the measures and allow them to be dimensioned.

Although organisations have literally hundreds of thousands of different data elements they record, only a very small number are central to the running of the enterprise.  Arguably, the centre can only keep track of about 150 and use them as a core of managing the business.

Another way of looking at this is that the leadership team (or even the CEO) can really only have 150 close relationships.  If each relationship has one assigned data set or key data element they are responsible for then the overall organisation will have 150.

Choosing the right 150

While most organisations have around 150 key data elements that anchor their most important information, few actually know what they are.  That’s a pity because the choice of 150 tells you a lot about the organisation.  If the 150 don’t encompass the breadth of the enterprise then you can gain insight into what’s really important to the management team.  If there is little to differentiate the key data elements from those that a competitor might choose then the company may lack a clear point of difference and be overly dependent on operational excellence or cost to gain an advantage.

Any information management initiative should start by identifying the 150 most important elements.  If the team can’t narrow the set down below a few hundred, they should be suspicious they haven’t gotten to the core of what’s really important to their sponsors.  They should then ask whether these key data elements span the enterprise or pick organisational favourites; whether they offer differentiation or are “me too”; and whether they are easy or hard for a competitor to emulate.

The identification of the 150 key data elements provides a powerful foundation for any information and business strategy, enabling a discussion on how the organisation is led and managed.  While processes evolve quickly, the information flows persist.  Understanding the 150 allows a strategist to determine whether the business is living up to its strategy or if its strategy needs to be adjusted to reflect the business’s strengths.

Category: Enterprise Data Management, Enterprise2.0, Information Development, Information Governance, Information Management, Information Strategy, Information Value, Metadata

by: RickDelgado
27  Sep  2014

What to Look for When Implementing a BYOD Policy at a Hospital

Businesses everywhere seem to be quickly latching onto the concept of Bring Your Own Device (BYOD) in the workplace. The idea is simple: have employees bring personal devices into work where they can use them for their jobs. For your average business, this seems to be a great way to improve productivity and job satisfaction, but could such a thing work for a hospital? Obviously hospitals are a very different kind of business, where physicians, nurses, and patients interact to get the best care possible. Having personal devices in hand can make the whole operation run much smoother. Many hospitals out there have seen BYOD as a great way to boost productivity and improve patient outcomes. In fact, one survey showed that 85% of hospitals have adopted some form of BYOD. For the few who have not yet made the transition but are looking into it, a number of tips have popped up that could prove helpful.

Keep It Simple (At First)

If the idea is to make physicians’ jobs easier and more convenient, then a BYOD policy is a great way to make that a reality, but it is a pretty big change for a hospital or any other healthcare facility. That’s why, when making this change, it’s important to keep things simple at the start. Doctors and staff need time to get accustomed to using their personal devices in their work environment. Often that means downloading helpful applications that can make doing their jobs easier, or getting used to any new security requirements that IT departments define. A simple start allows everyone to ease into the new policy, and any changes that are needed can be made without disrupting the entire program. More complex arrangements can come later.

Create an Environment of Communication

When dealing with such a drastic change, people will have different ideas about what works and what doesn’t. That’s why everyone involved–from the top hospital administrators to the late night IT shift to the surgical team–will need to be open to new ideas. A doctor may have a unique perspective on how a smartphone or wearable device might be used to help patients. Or an IT worker may see potential in a new app that physicians haven’t thought about using yet. Being able to communicate these ideas helps in forming the new BYOD policy. If everyone collaborates, they will end up with guidelines and suggestions that keep most people happy, which is what BYOD is all about.

Address Security

One of the biggest concerns over BYOD in any business is that of security. That is certainly true for a hospital as well. When dealing with sensitive patient information, BYOD security must be at the top of any hospital’s priority list. The concerns over security don’t come from thin air. Studies from security companies have shown that around 40% of healthcare employees use personal devices that have no password protection. Many employees also access unsecured networks with their devices. Security can’t be overlooked if a hospital wants to successfully implement a BYOD policy. That means devices will need to be examined to make sure they’re compatible with the hospital’s security network. Some devices may even be rejected. While this limitation may hinder efforts to fully adopt BYOD, any measure taken to ensure tighter security will be worth it in the end.
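As an illustration of what such an examination might automate, here is a minimal, hypothetical compliance gate in Python; the device fields and policy thresholds are invented for the example, not drawn from any hospital’s actual MDM product:

```python
# Minimal sketch of an MDM-style compliance gate: before a personal device
# joins the hospital network, check it against a security baseline.
# The Device fields and the MIN_OS policy value are hypothetical.
from dataclasses import dataclass

@dataclass
class Device:
    owner: str
    os_version: tuple   # e.g. (8, 1)
    has_passcode: bool
    is_encrypted: bool

MIN_OS = (8, 0)  # hypothetical minimum supported OS version

def is_compliant(d: Device) -> bool:
    return d.has_passcode and d.is_encrypted and d.os_version >= MIN_OS

device = Device("Dr. Smith", (7, 2), has_passcode=False, is_encrypted=True)
if not is_compliant(device):
    print(f"Reject enrollment for {device.owner}'s device")
```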

Focus on Privacy

On that same note, patient privacy must still be taken into account, especially now that so many patient records are electronic. Of particular concern is making sure all records, devices, and procedures legally comply with the latest HIPAA rules. Hospitals need to make sure the data found in their networks and on their employees’ devices is protected. Even though workers may be using their own smartphones or tablets, the data they access on them is still the responsibility of the hospital. Should any of that data get lost or stolen, it will be the hospital that has to deal with the subsequent legal proceedings. A BYOD policy that takes all of this into account and plans for the future can save hospitals a lot of headaches down the line.

Adopting a BYOD program for a hospital is no easy task. Numerous factors need to be taken into consideration to avoid potentially devastating consequences. Luckily, enough hospitals have made the leap to provide helpful lessons on how implementing these policies should be done. With enough expertise and innovation, a hospital creating a BYOD policy for the first time should be able to make that transition with minimal complications.

Category: Information Development

by: Ocdqblog
26  Sep  2014

Is Informed Consent outdated in the Information Age?

A Facebook experiment from late 2012 made news earlier this year and raised the ethical question of whether, by using free services provided via the Internet and mobile apps, we have granted informed consent to be experimented on for whatever purposes.

The On the Media TLDR audio podcast recently posted an interview with Christian Rudder, the co-founder of the free dating website OkCupid, who blogged about how OkCupid experiments on its users in a post with the intentionally provocative title We Experiment On Human Beings!

While this revelation understandably attracted a lot of attention, at least OkCupid is not trying to hide what it’s doing. Furthermore, as Rudder blogged, “guess what, everybody: if you use the Internet, you’re the subject of hundreds of experiments at any given time, on every site. That’s how websites work.”
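For a concrete sense of how sites run experiments on every visitor without an opt-in step, here is a minimal sketch of deterministic variant assignment; it is a generic illustration, not OkCupid’s or Facebook’s actual mechanism:

```python
# Minimal sketch of how a site assigns each user to an experiment variant:
# hash a stable user id together with the experiment name, so the same
# visitor always sees the same variant, with no consent step involved.
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

print(assign_variant("user-42", "match-score-display"))  # stable across visits
```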

During the interview, Rudder made an interesting comparison between what websites like Facebook and OkCupid do and how psychologists and other social scientists have been experimenting on human beings for decades. This point resonated with me since I have read a lot of books that explore how and why humans behave and think the way we do. Just a few examples are Predictably Irrational by Dan Ariely, You Are Not So Smart by David McRaney, Nudge by Richard Thaler and Cass Sunstein, and Thinking, Fast and Slow by Daniel Kahneman.

Most of the insights discussed in these books are based on the results of countless experiments, most of which were performed on college students since they can be paid in pizza, course credit, or beer money. The majority of the time the subjects in these experiments are not fully informed about the nature of the experiment. In fact, many times they are intentionally misinformed in order to not skew the results of the experiment.

Rudder argued the same thing is done to improve websites. So why do we see hallowed halls when we envision the social scientists behind university research, but we see creepy cubicles when we envision the data scientists behind website experimentation? Perhaps we trust academic more than commercial applications of science.

During the interview, Rudder addressed the issue of trust. Users of OkCupid are trusting the service to provide them with good matches and Rudder acknowledged how experimenting on users can seem like a violation of that trust. “However,” Rudder argued, “doing experiments to make sure that what we’re recommending is the best job that we could possibly do is upholding that trust, not violating it.”

It’s easy to argue that the issue of informed consent regarding experimentation on a dating or social networking website is not the same as informed consent regarding government surveillance, such as last year’s PRISM scandal. The latter is less about experimentation and more about data privacy, where often we are our own worst enemy.

But who actually reads the terms and conditions for a website or mobile app? If you do not accept the terms and conditions, you can’t use it, so most of us accept them by default without bothering to read them. Technically, this constitutes informed consent, which is why it may simply be an outdated concept in the information age.

The information age needs enforced accountability (aka privacy through accountability), which is less about informed consent and more about holding service providers accountable for what they do with our data. This includes the data resulting from the experiments they perform on us. Transparency is an essential aspect of that accountability, allowing us to make an informed decision about what websites and mobile apps we want to use.

However, to Rudder’s point, we are fooling ourselves if we think that such transparency would allow us to avoid using the websites and mobile apps that experiment on us. They all do. They have to in order to be worth using.

 

Category: Information Governance

by: Bsomich
20  Sep  2014

Community Update

Missed what’s been happening in the MIKE2.0 community? Check out our bi-weekly update:

 



When to Use a NoSQL Database 

If your company has large, complicated data sets that it’s looking to analyze, and that data isn’t simple, structured, or predictable, then SQL is not going to meet your needs. While SQL specializes in many things, large amounts of unstructured data is not one of them. There are other methods for gathering and analyzing your data that will be much more effective and efficient, and will probably cost you less too.

NoSQL, commonly read as “Not Only SQL,” is what your company needs. It’s perfect for dealing with huge data sets that aren’t always going to be structured — meaning every batch that’s gathered and analyzed could potentially be different. It’s a relatively new approach, but it’s becoming increasingly important in the big data world because of its immense capabilities.

NoSQL is known for its flexibility, scalability, and relatively low cost, areas where SQL falls short. SQL relies on traditional column and row processing with predefined schemas. When you’re dealing with unstructured information, it’s nearly impossible to work with schemas, because there’s no way to know what type of information you’ll be getting each time. Also, with SQL it can be costly to scale data and the actual processing can be very slow. NoSQL, on the other hand, makes the process cheaper, and it can also do real-time analysis on extremely large data sets. It gives companies the flexibility to gather as much or as little information as they need to be successful.

We invite you to read more about this topic on our blog. And as always, we welcome any suggestions you may have to improve it.

Sincerely,

MIKE2.0 Community

New! Popular Content

Did you know that the following wiki articles are most popular on Google? Check them out, and feel free to edit or expand them!

What is MIKE2.0?
Deliverable Templates
The 5 Phases of MIKE2.0
Overall Task List
Business Assessment Blueprint
SAFE Architecture
Information Governance Solution

Contribute to MIKE:

Start a new article, help with articles under construction, or look for other ways to contribute.

Update your personal profile to advertise yourself to the community and interact with other members.

Useful Links:
Home Page
Login
Content Model
FAQs
MIKE2.0 Governance


 

 

This Week’s Blogs for Thought:

What Movie Descriptions Can Teach Us About Metadata

In a previous post on this blog, I reviewed what movie ratings teach us about data quality. In this post I ponder another movie-related metaphor for information development by looking at what movie descriptions teach us about metadata.

Read more.

How CIOs Can Discuss the Contribution of IT

Just how productive are Chief Information Officers or the technology that they manage?  With technology portfolios becoming increasingly complex it is harder than ever to measure productivity.  Yet boards and investors want to know that the capital they have tied up in the information technology of the enterprise is achieving the best possible return.

For CIOs, talking about value improves the conversation with executive colleagues.  Taking them aside to talk about the success of a project is, even for the most strategic initiatives, usually seen as a tactical discussion.  Changing the topic to increasing customer value or staff productivity through a return on technology capital is a much more strategic contribution.

Read more.
Let the Computers Calculate and the Humans Cogitate

Many organizations are wrapping their enterprise brain around the challenges of business intelligence, looking for the best ways to analyze, present, and deliver information to business users.  More organizations are choosing to do so by pushing business decisions down in order to build a bottom-up foundation.

However, one question coming up more frequently in the era of big data is what should be the division of labor between computers and humans?
Read more.

 

 

Forward to a Friend!

Know someone who might be interested in joining the Mike2.0 Community? Forward this message to a friend.

Questions?

If you have any questions, please email us at mike2@openmethodology.org.

 

 

Category: Information Development

by: Gil Allouche
16  Sep  2014

When to Use a NoSQL Database

If your company has large, complicated data sets that it’s looking to analyze, and that data isn’t simple, structured, or predictable, then SQL is not going to meet your needs. While SQL specializes in many things, large amounts of unstructured data is not one of them. There are other methods for gathering and analyzing your data that will be much more effective and efficient, and will probably cost you less too.

NoSQL, commonly read as “Not Only SQL,” is what your company needs. It’s perfect for dealing with huge data sets that aren’t always going to be structured — meaning every batch that’s gathered and analyzed could potentially be different. It’s a relatively new approach, but it’s becoming increasingly important in the big data world because of its immense capabilities.

NoSQL is known for its flexibility, scalability, and relatively low cost, areas where SQL falls short. SQL relies on traditional column and row processing with predefined schemas. When you’re dealing with unstructured information, it’s nearly impossible to work with schemas, because there’s no way to know what type of information you’ll be getting each time. Also, with SQL it can be costly to scale data and the actual processing can be very slow. NoSQL, on the other hand, makes the process cheaper, and it can also do real-time analysis on extremely large data sets. It gives companies the flexibility to gather as much or as little information as they need to be successful.
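To illustrate that schema flexibility, here is a minimal sketch using the pymongo driver against a document store (it assumes a local MongoDB instance; MongoDB is just one example of a NoSQL database):

```python
# Minimal sketch of schemaless storage: documents in one collection need
# not share the same fields, so each "batch" of data can look different.
# Assumes a MongoDB instance on localhost and the pymongo driver.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
events = client["analytics"]["events"]

# Two records, two shapes -- no predefined columns required.
events.insert_one({"type": "pageview", "url": "/pricing", "ms": 48})
events.insert_one({"type": "tweet", "user": "@acme", "tags": ["launch", "beta"]})

print(events.count_documents({"type": "pageview"}))
```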

Scale
NoSQL really takes information scalability to the next level. In the past, companies were too often hindered in their data gathering because of the limited capability of then-current technologies to handle large sets of information. Part of NoSQL’s strength comes not only from its ability to scale information much better than SQL, but also from how well it works in the cloud. With cloud services, companies have nearly unlimited scalability. Many NoSQL databases also offer integration with other big data tools, such as Apache Spark, which is increasingly offered as a managed service.

Flexibility
Along the same lines as scalability, companies can now gather more information than was ever possible with SQL. That ability means companies can be much more flexible in the information that they gather and process. In the past, companies also had to be selective of the type and amount of information gathered for fear of reaching storage and processing limits. NoSQL eliminates that difficulty. It’s important that companies have more information, not for information’s sake, but to benefit the consumer. Data sets that companies were previously hindered from utilizing can now be of great benefit.

Price
NoSQL makes scaling and gathering information much more cost efficient than SQL for three reasons. First, its increased ability to process large amounts of information quickly and effectively means companies can make changes faster, which positively affects the bottom line. Second, it’s much cheaper to scale information with NoSQL than it is with SQL, which also saves money. Last, because of NoSQL and big data in the cloud, many companies now have access to big data without having to pay the enormous startup costs that come with an onsite infrastructure — something that used to be a necessity for big data.

Speed
Not only does NoSQL offer increased scalability and flexibility at a lower price, but it also provides increased analytical speed. Companies can gather and analyze large data sets, not just quickly, but in real-time. No longer are companies forced to wait hours, days or even weeks to get the information back. The uses for this type of quick analysis are nearly endless. It’s especially important in today’s social media-filled world. Companies can instantly see what is being said about them, how their products are being received, what competitors are doing and what consumers want to see in the future.

Conclusion
NoSQL isn’t perfect and still lacks some of the capabilities of SQL, but it has many strengths that make it a great option for companies looking to gather and analyze large data sets that are unstructured and unpredictable.

Category: Enterprise Data Management

by: Ocdqblog
15  Sep  2014

What Movie Descriptions teach us about Metadata

In a previous post on this blog, I reviewed what movie ratings teach us about data quality. In this post I ponder another movie-related metaphor for information development by looking at what movie descriptions teach us about metadata.

Nightmare on Movie Night

It’s movie night. What are you in the mood for? Action Adventure? Romantic Comedy? Science Fiction? Even after you settle on a genre, picking a movie within it can feel like a scene from a Suspense Thriller. Even if you are in the mood for a Horror film, you don’t want to turn movie night into a nightmare by watching a horrible movie. You need reliable information about movies to help you make your decision. You need better movie metadata.

Tag the Movie: A Netflix Original

In his article about How Netflix Reverse Engineered Hollywood, Alexis Madrigal explained how Netflix uses large teams of people specially trained to watch movies and tag them with all kinds of metadata.

Madrigal described the process as “so sophisticated and precise that taggers receive a 36-page training document that teaches them how to rate movies on their sexually suggestive content, goriness, romance levels, and even narrative elements like plot conclusiveness. They capture dozens of different movie attributes. They even rate the moral status of characters. When these tags are combined with millions of users’ viewing habits, they become Netflix’s competitive advantage. The company’s main goal as a business is to gain and retain subscribers. And the genres that it displays to people are a key part of that strategy.”

The Vocabulary and Grammar of Movies

As Madrigal investigated how Netflix describes movies, he discovered a well-defined vocabulary. Standardized adjectives were consistently used (e.g., Romantic, Critically Acclaimed, Dark, Suspenseful). Phrases beginning with “Based on” revealed where the idea for a movie came from (e.g., Real Life, Children’s Book, Classic Literature). Phrases beginning with “Set in” oriented where or when a movie was set (e.g., Europe, Victorian Era, Middle East, Biblical Times). Phrases beginning with “From the” dated the decade the movie was made in. Phrases beginning with “For” recommended appropriate age ranges for children’s movies. And phrases beginning with “About” provided insight about the thematic elements of a movie (e.g., Food, Friendship, Marriage, Parenthood).

Madrigal also discovered the rules of grammar Netflix uses to piece together the components of its movie vocabulary to generate more comprehensive genres in a consistent format, such as Romantic Dramas Based on Classic Literature Set in Europe From the 1940s About Marriage (one example of which is Pride and Prejudice).
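As an illustration of that grammar, here is a minimal sketch that assembles genre phrases from the vocabulary components Madrigal describes; the function and its defaults are hypothetical, not Netflix’s actual code:

```python
# Minimal sketch of the genre grammar described above: standardized
# vocabulary components combined in a fixed order into one genre phrase.
# The component names are taken from the article's examples.
def make_genre(adjective=None, noun="Movies", based_on=None,
               set_in=None, from_decade=None, about=None):
    parts = [adjective, noun]
    if based_on:
        parts.append(f"Based on {based_on}")
    if set_in:
        parts.append(f"Set in {set_in}")
    if from_decade:
        parts.append(f"From the {from_decade}")
    if about:
        parts.append(f"About {about}")
    return " ".join(p for p in parts if p)

print(make_genre("Romantic", "Dramas", based_on="Classic Literature",
                 set_in="Europe", from_decade="1940s", about="Marriage"))
# -> Romantic Dramas Based on Classic Literature Set in Europe From the 1940s About Marriage
```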

What Movie Descriptions teach us about Metadata

The movie metadata created by Netflix is conceptually similar to the music metadata created by Pandora with its Music Genome Project. The detailed knowledge of movies that Netflix has encoded as supporting metadata makes movie night magic for their subscribers.

Todd Yellin of Netflix, who guided their movie metadata production, described the goal of personalized movie recommendations as “putting the right title in front of the right person at the right time.” That is the same goal lauded in the data management industry for decades: delivering the right data to the right person at the right time.

To deliver the right data you need to know as much as possible about the data you have. More important, you need to encode that knowledge as supporting metadata.

 

Category: Information Development, Metadata

by: Bsomich
30  Aug  2014

MIKE2.0 Community Update

Missed what’s been happening in the MIKE2.0 Community? Check out our bi-weekly update:



Have you seen our Open MIKE Podcast series? 

The Open MIKE Podcast is a video podcast show, hosted by Jim Harris, which discusses aspects of the MIKE2.0 framework, and features content contributed to MIKE 2.0 Wiki Articles, Blog Posts, and Discussion Forums.

We kindly invite any existing MIKE contributors to contact us if they’d like to contribute any audio or video segments for future episodes.

Check it out, and feel free to view our community overview video for more information on how to get involved with MIKE2.0.

On Twitter? Contribute and follow the discussion via the #MIKEPodcast hashtag.

Sincerely,

MIKE2.0 Community

New! Popular Content

Did you know that the following wiki articles are most popular on Google? Check them out, and feel free to edit or expand them!

What is MIKE2.0?
Deliverable Templates
The 5 Phases of MIKE2.0
Overall Task List
Business Assessment Blueprint
SAFE Architecture
Information Governance Solution

Contribute to MIKE:

Start a new article, help with articles under construction, or look for other ways to contribute.

Update your personal profile to advertise yourself to the community and interact with other members.

Useful Links:
Home Page
Login
Content Model
FAQs
MIKE2.0 Governance


This Week’s Blogs for Thought:

RDF PROVenance Ontology

This post features a special 3-part focus on the RDF PROVenance Ontology, a Recommendation from the World Wide Web Consortium that is expected to be a dominant interchange standard for enterprise metadata in the immediate future, replacing the Dublin Core. Following an introduction, reviews and commentary about the ontology’s RDF classes and RDF properties should stimulate lively discussion of the report’s conclusions.

Read more. 
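For readers who want to see the ontology’s core classes in action, here is a minimal sketch using the Python rdflib package to assert the basic PROV-O pattern of an Entity generated by an Activity and attributed to an Agent; the example resources are hypothetical:

```python
# Minimal sketch of the core PROV-O pattern: an Entity generated by an
# Activity and attributed to an Agent. Assumes the rdflib package; the
# example.org resources are hypothetical.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

PROV = Namespace("http://www.w3.org/ns/prov#")
EX = Namespace("http://example.org/")

g = Graph()
g.add((EX.report, RDF.type, PROV.Entity))
g.add((EX.compile, RDF.type, PROV.Activity))
g.add((EX.analyst, RDF.type, PROV.Agent))
g.add((EX.report, PROV.wasGeneratedBy, EX.compile))
g.add((EX.report, PROV.wasAttributedTo, EX.analyst))

print(g.serialize(format="turtle"))
```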

Are There Alternatives to Data Hoarding? 

“In a microsecond economy,” Becca Lipman recently blogged, “most data is only useful in the first few milliseconds, or to an extent, hours after it is created. But the way the industry is collecting data, or more accurately, hoarding it, you’d think its value lasts a lifetime. Yes, storage costs are going down and selecting data to delete is no easy task, especially for the unstructured and unclassified sets. And fear of deleting something that could one day be useful is always going to be a concern. But does this give firms the go-ahead to be data hoarders?”

Read more.

Analogue Business with a Digital Facade

Everyone is talking about digital disruption and the need to transform their company into a “digital business”.  However, ask most people what a digital business is and they’ll talk in terms of online shopping, mobile channels or the latest wearable device.

In fact we have a long tradition of using the word “digital” as a prefix to new concepts while we adapt.  Some examples include the wide introduction from the 1970s of the “digital computer”, a term which no longer needs the digital prefix.

Read more.

 

 

 

Category: Information Development

by: Ocdqblog
27  Aug  2014

Are there alternatives to Data Hoarding?

“In a microsecond economy,” Becca Lipman recently blogged, “most data is only useful in the first few milliseconds, or to an extent, hours after it is created. But the way the industry is collecting data, or more accurately, hoarding it, you’d think its value lasts a lifetime. Yes, storage costs are going down and selecting data to delete is no easy task, especially for the unstructured and unclassified sets. And fear of deleting something that could one day be useful is always going to be a concern. But does this give firms the go-ahead to be data hoarders?”

Whether we choose to measure it in terabytes, petabytes, exabytes, HoardaBytes, or how harshly reality bites, we have been hoarding data since long before the data management industry took a super-sized tip from McDonald’s and put the word “big” in front of its signature sandwich. At least McDonald’s started phasing out its super-sized menu options in 2004, citing the need to offer healthier food choices, a move that was perhaps motivated in some small way by the success of Morgan Spurlock’s Academy Award-nominated documentary film Super Size Me.

Much like fast food is an enabler for our chronic overeating and the growing epidemic of obesity, big data is an enabler for our data hoarding compulsion and the growing epidemic of data obesity.

Does this data smell bad to you?

Are there alternatives to data hoarding? Perhaps we could put an expiration date on data, after which we could at least archive, if not delete, the expired data. One challenge with this approach is that with most data you cannot say exactly when it will expire. Even if we could, however, expiration dates for data might be as meaningless as the expiration dates we currently have for food.
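As a sketch of how such an expiration date could work mechanically, here is a minimal time-to-live sweep in Python; the field names and the one-year TTL are assumptions for illustration:

```python
# Minimal sketch of the expiration-date idea: stamp each record with a
# creation time and periodically sweep records older than a time-to-live
# into an archive instead of letting them accumulate. Fields are hypothetical.
from datetime import datetime, timedelta

TTL = timedelta(days=365)  # assumed one-year useful life

records = [
    {"id": 1, "created": datetime(2013, 6, 1), "payload": "..."},
    {"id": 2, "created": datetime(2014, 9, 1), "payload": "..."},
]

def sweep(records, now):
    live, expired = [], []
    for r in records:
        (expired if now - r["created"] > TTL else live).append(r)
    return live, expired

live, expired = sweep(records, now=datetime(2014, 9, 26))
print(f"{len(expired)} record(s) moved to archive")  # record 1 expired
```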

As Rose Eveleth reported, “these dates are—essentially—made up. Nobody regulates how long milk or cheese or bread stays good, so companies can essentially print whatever date they want on their products.” Eveleth shared links to many sources that basically recommended ignoring the dates and relying on seeing if the food looks or smells bad.

What about regulatory compliance?

Another enabler of data hoarding is concerns about complying with future regulations. This is somewhat analogous to income tax preparation in the United States where many people hoard boxes of receipts for everything in hopes of using them to itemize their tax deductions. Even though most of the contents are deemed irrelevant when filing an official tax return, some people still store the boxes in their attic just in case of a future tax audit.

How useful would data from this source be?

Although calculating the half-life of data has always been problematic, Larry Hardesty recently reported on a new algorithm developed by MIT graduate student Dan Levine and his advisor Jonathan How. By using the algorithm, Levine explained, “the usefulness of data can be assessed before the data itself becomes available.” Similar algorithms might be the best future alternative to data hoarding, especially when you realize that the “keep it just in case we need it” theory often later faces the “we can’t find it amongst all the stuff we kept” reality.

What Say You?

Have you found alternatives to data hoarding? Please share your thoughts by leaving a comment below.

 

Category: Enterprise Data Management, Information Governance
