Open Framework, Information Management Strategy & Collaborative Governance | Data & Social Methodology - MIKE2.0 Methodology
by: RickDelgado
26 Feb 2015

Cloud Services: Open Source vs. Proprietary

There’s nothing more punk-rock than the sort of DIY ethics currently fueling open-source communities. The general subversiveness combined with an apparent twice-a-week minimum black t-shirt rule among developers may make the open source scene look kind of like a cool-guy/girl clique, at least from an outsider’s perspective.

Everybody is rebelling against something, right?

In the cloud computing ecosystem, the basic theme is rebellion against failure, however each organization happens to define failure. Within that question sits another major decision: whether a given set of needs calls for an open-source or a proprietary architecture. So let’s take a closer look at what the major differences between the two models mean for businesses.

Charging for Software

Generally, open source models don’t charge for the use of the software. Proprietary models may offer free packages at first, but ultimately end up costing the customer. Many updates to proprietary software are free, but significant upgrades and the ability to add new packages often come with a fee; charges can also take the form of per-user pricing. Open source options are built more around the development of a community: they take direction from the demands of the market and tend to start with a small collection of developers and users. Successful projects are quickly picked up, while others are left to languish in obscurity.

Vendor Limitations

Vendor lock-in occurs with proprietary software. This means that the website and software built on a proprietary vendor’s platform can’t easily be taken to another provider, and it limits the pool of other providers with the knowledge to support a particular product. In contrast, open source products are more flexible and allow users to move between different systems freely; open source cloud computing offers broader compatibility across many different products. Typically, if a proprietary vendor goes out of business the end user is left with an unusable product, whereas with open source there is usually another project or fork that can pick up where the old one left off.

Modifying System Code

Proprietary software doesn’t allow manipulation of the source code; even simple modifications to change styling or add features are not permitted. This can be beneficial for users who are happy with a set of features that is completely managed by one company, but for those who like to tinker and adjust software to their needs, it may not be an ideal solution. Open source options allow modifications, and a company can even create an entire fork based on the existing software. When a feature doesn’t exist in an open source application, a developer can be hired to incorporate it into the product.

Licensing and Hosting Costs

Using proprietary software isn’t for the faint of heart or light of wallet. Licensing and hosting fees are often higher with proprietary software. By using open source options, users can avoid operating system costs and per-product fees, which gives those who run open source platforms more flexibility: a new package or feature can be added to an existing installation without purchasing a license. Additionally, proprietary software often requires commercial databases, which further add to the total cost of operation.

User Documentation

Product documentation is often more thorough and useful with open source software, thanks to the large communities that follow and support open source projects. Help documentation for proprietary software tends to stay at surface level, partly because of the service-based nature of the business: it’s more profitable when consumers have to rely on the company for support and technical services. That model can backfire, however, if an update goes wrong and technical support can’t immediately correct the issue. Open source applications typically ship with substantial documentation that is updated with each release and freely available online.

Security and Performance Considerations

When you have an entire community of developers poking and prodding at an application, you tend to get better security. Many of the features put into proprietary software are designed to keep it from being modified; this adds bloat to the code and works against a light, lean product, and excess code leaves more room for security and stability flaws. With open source software, many more eyes are looking at the code, and fixes tend to arrive much more quickly than with proprietary software. Stability and threat defense tend to be tighter with open source applications, as long as users keep their software updated; out-of-date applications are just as vulnerable to hacking and infiltration as proprietary systems.

Summary

Open source and proprietary cloud services both aim to provide end users with reliable software. Some users prefer the backing of a large company like Amazon or Microsoft, with a tailored list of compatible programs and services. Others prefer the interoperability and flexibility of open source alternatives like OpenStack or Eucalyptus. It’s not necessarily an issue of right or wrong; it depends on the user’s specific needs. For some, open source software is the obvious choice, while those who want a more predictably managed solution may find proprietary offerings ideal.

Category: Business Intelligence, Open Source

by: Jonathan
25 Feb 2015

5 Ways Big Data is Transforming the World of Design

Back in the analog days, designers used hands-on tools to bring their creations to life. But in this age of information and advanced technology, big data and analytics tools are transforming the world of design as never before.

In a November 2014 article on wired.com, Paul Papas, “the Global Leader for IBM’s Interactive Experience practice, a next-generation digital agency, consultancy, and systems integrator,” discusses the revolutionizing power of big data in all facets and fields of design. What follows is a list of five ways big data is transforming the world of design, compiled from the author’s views and insights.

1. Creativity – To illustrate how far design has come in the digital age, Papas has us first picture an architect at a drafting table laboring over a blueprint, or an auto designer modeling next year’s car out of clay. “With some variation,” says Papas, “those were creative tools that designers, architects and artists relied on to render their inspirations, refine their concepts, and finalize them into market-ready products.” While those tools may have some application today, Papas points out how high-performance computing, “has radically transformed the creative process in pharma, automotive, and government R&D.” Thanks to computer modeling and simulation capabilities, designers can render, test, refine and prove products in a virtual world before they go into production.

2. Innovation – “Today, data continues to affect the design of products in new and innovative ways,” says Papas. While no specific examples are mentioned in the article, Google’s autonomous cars, controlled by real-time big data analytics, come to mind as an example of innovation in the automobile industry. Smartphones, which in actuality are powerful portable computers that in many ways have transformed our lives, are another example of innovation made possible by big data. According to Papas, “What’s truly revolutionary is how marketers and other business leaders are using data in the design of something much more intimate and essential — the personalized experiences that millions of individuals will have with their products, services or brands.” Which brings us to…

3. Experience Design – In the analog days the goal of designers was to create products that seamlessly combined form and function. In the era of big data, that model has been upgraded to what is being referred to as “experience design.” As Papas explains, when you pull “experience design” apart, “you have equal parts the design of beautiful, elegant interfaces and the creation of irresistible experiences that are smart, individualized, trusted and valuable — all 100 percent dependent on the astute use of data.” According to Papas, today’s businesses are defining their agendas by two forces—“massively available information and new models of individual engagement.” So powerful are these two forces for business that Papas says, “experience design is rapidly becoming a de facto element in contemporary business strategy.”

4. Behavioral Design – Not unlike the clay that automobile designers use to mold and shape basic designs, big data is giving rise to a new design medium: human behavior. Working in that medium is, as Papas describes it, “a harder trick to pull off than modeling metal,” but designers are using it to “learn and modify designs before they’re implemented, as insight from data gives companies the ability to understand context, and learn and evolve with the consumer and create unique, reciprocal experiences.”

As evidence of the power of behavioral design, Papas cites Brown University’s dilemma of either upgrading its existing engineering school or moving the entire engineering department off campus to a larger, potentially better facility. “Through a deep analysis of a hodgepodge of data — from faculty collaboration patterns to course enrollments,” says Papas, “Brown discovered patterns showing an enormous amount of cross-fertilization between the school’s communities.” Based on the insights obtained from behavioral data that showed how the off-campus option would “negatively affect students, faculty collaboration and research dollars,” the university chose not to make the move.

5. The “Market of One” – Thanks to big data analytics, Papas says that “the long-anticipated ability to really find, know and engage the proverbial ‘market of one’ is finally at hand.” While not mentioned in the article, images of consumers being engaged on their mobile devices by companies in relevant and meaningful ways—in real-time and in context—serve as an example of how marketers are able to reach the once elusive “market of one.”

Big data is truly transforming the world of design.

Going forward, Papas predicts that the powerful, data-intensive tools that designers now have at their disposal will continue to “render the designs and create the experiences that will unlock the next great level of possibility and value for enterprise in every industry.”

Category: Business Intelligence

by: Robert.hillard
22 Feb 2015

Making the case for jargon, acronyms and clear language

All over the web, authors are ranting about the misuse of the English language in business.  It’s an easy article to write, picking out examples of jargon and the general torturing of sentences in the name of explaining apparently simple concepts while making the writer seem more impressive.

Some great examples from business can be found in this Forbes article: The Most Annoying, Pretentious and Useless Business Jargon.  Examples that I like (or hate) include terms like “swim lanes”, “best practice” and “core competence”.

There are good reasons for using jargon

Rather than take cheap shots, let’s start by asking why people use jargon in the first place.  There are often good reasons for using most of the terms, even the ones that we all hate.  Every discipline has its own special language, whether it is science, architecture, law or business across every industry.  This use of language isn’t restricted to technical terminology; it also includes the way that concepts are expressed, which can seem obscure or even absurd to an outsider.

While often justified, the use of jargon acts as a tool to exclude these outsiders from participating in the conversation.  This is a problem because there is good evidence that some of the most exciting solutions to problems are multi-disciplinary.

Experts seldom realise that they might be missing out on a breakthrough which comes from another field.  Business, for instance, is borrowing from ideas in physics (such as thermodynamics) in understanding the dynamics of markets as well as biology to understand how organisations evolve.

Just as fields of science, medicine and law have their own language, so does every aspect of business such as human resources, finance, sales et cetera.  Even in general business, diversity of thought comes up with a better answer almost every time.  Getting to those ideas is only possible if the discussion is intelligible to all.

Jargon to avoid debate

While many discussions use jargon and acronyms as a legitimate shortcut, some use of jargon reflects a lack of understanding by the participants or even an attempt to avoid debate in the first place.  Take the example of “metadata”, a term which has appeared in many countries as governments struggle with appropriate security, privacy and retention regimes.

A plain English approach would be to describe the definition of metadata in full in every discussion rather than take the shortcut of using the term on its own.  The reality is that even landing on a definition can lead to an uncomfortable debate, but definitely one worth having, as the Australian Attorney General learned to his detriment in this interview where the purpose of a very important debate was lost in a confusing discussion on what metadata actually is.

The Attorney General isn’t on his own: many executives have been challenged in private and public forums to explain the detail behind terms they’ve commonly used, only to come unstuck.

Sometimes people use jargon, like metadata, swim lanes and best practice because they are avoiding admitting they don’t know the detail.  Other times, they are legitimately using the terms to avoid having to repeat whole paragraphs of explanation.  Of course, this is where acronyms come into their own.

Balancing the needs of the reader and author

Given that it takes at least five times as long to write something as it does to read it (and a factor of ten is more realistic for anything complex) the authors of documents and emails can be forgiven for taking a shortcut or two.

The problem is when they forget that every communication is an exchange of information and while information has value, the value is not necessarily the same for both the writer and reader.

For example, there is little that is more annoying than receiving an email which requires extensive research to work out what the TLAs contained within it actually stand for.  Of course, a TLA is a “three letter acronym” (the most popular length of acronym, with examples ranging from “BTW” for “by the way” through to “LGD” for “loss given default”).

Our propensity for short messages has only increased due to our rapid adoption of texts, instant messaging and Twitter.  I’ve written before about the role of email in business (see Reclaim email as a business tool).  Clarity of meaning is fundamental to all communication regardless of the medium.

Given that it does take so much longer to write than to read, it makes sense for the writer to take shortcuts.  However, if the information is of the same value to both the writer and reader, then the shortcut needs to offer a tenfold benefit to the writer to make up for the additional cost in time to the reader who has to decode what the shortcut means.

This equation gets worse if there are multiple readers, if the benefit is greater to the writer than the reader or when the writer has the advantage of context (that is, they are already thinking about the topic so the jargon and acronyms are already on their mind).

In short, there is seldom a benefit to using jargon or acronyms in an email without taking a moment to either spell them out or provide a link to definitions that are easily accessed.
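
To make the trade-off concrete, here is a minimal back-of-the-envelope sketch in Python. It is purely illustrative: the function and every number in it are assumptions chosen for the example, not figures from this post.

```python
# Illustrative only: a rough model of the shortcut trade-off described above.
# Every number here is an assumption chosen for the example.

def net_cost_seconds(writer_saving_s: float, decode_cost_s: float,
                     readers: int, share_unfamiliar: float = 1.0) -> float:
    """Net time cost of an unexplained acronym across the whole audience.

    writer_saving_s:  seconds the writer saves by not spelling the term out
    decode_cost_s:    seconds each unfamiliar reader spends decoding it
    readers:          number of readers of the email or document
    share_unfamiliar: fraction of readers who don't already know the term
    """
    return decode_cost_s * readers * share_unfamiliar - writer_saving_s

# One writer saves 20 seconds; ten readers each spend 60 seconds working it out.
print(net_cost_seconds(writer_saving_s=20, decode_cost_s=60, readers=10))  # 580.0
```

Even with generous assumptions about the writer’s saving, the cost flips against the shortcut quickly as the audience grows, which is the point of the tenfold rule above.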

Is this the best you can do?

Perhaps the need to make sure that a reader’s time is spent wisely is best summed up in this anecdote told by Ambassador Winston Lord about Henry Kissinger (former US Secretary of State).

After Ambassador Winston Lord gave Henry Kissinger a report he had worked on diligently, Kissinger famously asked him, “Is this the best you can do?”  The question was repeated on each draft until Lord reached the end of his tolerance, saying “Damn it, yes, it’s the best I can do.”  To which Kissinger replied: “Fine, then I guess I’ll read it this time.” (sourced from Walter Isaacson, “Kissinger: A Biography”, Simon & Schuster, 1992).

Category: Enterprise Content Management, Information Value

by: Bsomich
20 Feb 2015

MIKE2.0 Community Update

Missed what’s been happening in the MIKE2.0 community? Read on!

 

  

Business Drivers for Better Metadata Management

There are a number of business drivers that have caused metadata management to grow in importance over the past few years at most major organisations. These organisations are focused on more than just a data dictionary across their information – they are building comprehensive solutions for managing business and technical metadata.

Our wiki article on the subject explores many of the factors contributing to the growth of metadata, along with guidance on how to manage it better.

Feel free to check it out when you have a moment.

Sincerely,

MIKE2.0 Community

Contribute to MIKE:

Start a new article, help with articles under construction or look for other ways to contribute

Update your personal profile to advertise yourself to the community and interact with other members.

Useful Links:
Home Page
Login
Content Model
FAQs
MIKE2.0 Governance


Did You Know?
All content on MIKE2.0 and any contributions you make are published under the Creative Commons license. This allows you free re-use of our content as long as you add a brief reference back to us.

 

This Week’s Blogs for Thought:

Keeping Big Data Secure: Should You Consider Data Masking?  

Big data is a boon to every industry. And as data volumes continue their exponential rise, the need to protect sensitive information from being compromised is greater than ever before. The recent data breach of Sony Pictures, and new national threats from foreign factions serve as a cautionary tale for government and private enterprise to be constantly on guard and on the lookout for new and better solutions to keep sensitive information secure.

Read more.

The Architecture After Cloud

I think that Zach Nelson (Netsuite’s CEO) was wrong when he said that “cloud is the last computing architecture” but I also believe that his quote is a healthy challenge to take computing and business architectures to a new level. Nelson went on to say “I don’t think there is anything after it (cloud). What can possibly be after being able to access all of your data any time, anywhere and on any device? There is nothing.”  His comments are available in full from an interview with the Australian Financial Review. Aside from cloud, our industry has had a range of architectures over the decades including client/server, service oriented architecture (SOA) and thin client.  Arguably design patterns such as 4GLs (fourth generation languages) and object oriented programming are also architectures in their own right. I think that we can predict attributes of the next architecture by looking at some of the challenges our technologies face today. Read more.

MSP Cloud Computing Strategies to Consider in 2015

Managed service providers (MSPs) have some difficult decisions to make in the coming year. Many of the pressing questions they’re facing revolve around cloud computing, as the cloud has become a technology now being embraced by mainstream businesses of all sizes and types. Years ago, the dilemma surrounding the cloud was a relatively easy one to address, especially when clients were asking questions about what cloud computing is and how it could ultimately benefit their organizations. Read more.


 

 

Category: Information Development

by: RickDelgado
06 Feb 2015

MSP Cloud Computing Strategies to Consider in 2015

Managed service providers (MSPs) have some difficult decisions to make in the coming year. Many of the pressing questions they’re facing revolve around cloud computing, as the cloud has become a technology now being embraced by mainstream businesses of all sizes and types. Years ago, the dilemma surrounding the cloud was a relatively easy one to address, especially when clients were asking questions about what cloud computing is and how it could ultimately benefit their organizations. That was then, but Gartner now predicts that by 2016, organizations will store more than a third of their content on the cloud. Most clients are serious about adopting the technology, and they want their MSPs to make it happen. That means MSPs need to make sure their clients get it in 2015, but there’s no foolproof way to do it. The following are several strategies and methods to consider for successfully fulfilling the client’s demands for the cloud.

 

The first thing every managed service provider needs to have is a plan. That may seem unnecessarily basic, but it’s true that many MSPs don’t have a plan in place before delivering cloud computing to the client. MSPs need to craft a roadmap, something that will guide them through every step in the process. This isn’t just common sense; it saves plenty of time and resources the further into the strategy you go. In many cases, planning can mean the difference between a successful cloud strategy and one that fails the client.

 

Of course, a plan is only the first step MSPs need to take. The next is to identify the type of strategy that works best for the client. No matter the choice, it’s important for managed service providers to make it clear to the client what will happen, then deliver on those promises. In any scenario, the quality of service should still rank as a top priority, showing the client the worth of the MSP during a time of transition. When it comes to offering a cloud computing service, there are a number of different strategies to think about. The first is a risk-taking strategy. An MSP that goes this route forgoes partnering with any cloud provider and instead builds a private cloud. The main advantage of this strategy is that it allows the MSP to retain absolute control over every aspect of the cloud. The downside is financial in nature: building a cloud is a sizeable investment, and it will take a long time (around three years) to see a return on it.

 

A second strategy is more conservative in nature. The conventional method is to look for a major cloud provider like Amazon or Microsoft to make the cloud a reality for the client. This strategy makes it easier for clients to buy into cloud computing, plus there’s very little risk. The disadvantage for the MSP is that competition among these cloud providers is intense and growing increasingly brutal. The third strategy is known as the trailblazer option, in which the MSP seeks out a white-label provider. This means the MSP can offer enterprise-grade infrastructure at a low cost while maintaining a more agile operation. This added flexibility is particularly attractive to clients.

 

Once a strategy is decided upon, it’s time to start the process of cloud onboarding. This part of the plan can also be difficult and fraught with pitfalls, but three steps can maximize the chance of success. First, MSPs need to properly analyze the business, technical, and application capabilities of the client, which will help them know how to proceed. Second, the transition phase lays out a blueprint, mapping out expectations as the move to the cloud is made and pointing out the overall impact it will have on the organization. Finally, once onboarding is complete, MSPs should run ongoing performance analyses to optimize the applications that now live in the cloud.

 

Offering the cloud to clients can be a risky endeavor, but it’s a risk worth taking. Clients are much more interested in cloud computing than they were a few years ago, and managed service providers need to recognize this significant change. By following some of these steps and preparing the right strategy, MSPs can make sure the move to the cloud is a smooth and productive one.

 

Category: Information Development

by: Robert.hillard
25 Jan 2015

The architecture after cloud

I think that Zach Nelson (Netsuite’s CEO) was wrong when he said that “cloud is the last computing architecture” but I also believe that his quote is a healthy challenge to take computing and business architectures to a new level.

Nelson went on to say “I don’t think there is anything after it (cloud). What can possibly be after being able to access all of your data any time, anywhere and on any device? There is nothing.”  His comments are available in full from an interview with the Australian Financial Review.

Aside from cloud, our industry has had a range of architectures over the decades including client/server, service oriented architecture (SOA) and thin client.  Arguably design patterns such as 4GLs (fourth generation languages) and object oriented programming are also architectures in their own right.

I think that we can predict attributes of the next architecture by looking at some of the challenges our technologies face today.  Some of these problems have defined solutions while others will require inventions that aren’t yet on the table.

Security

Near the top of any list is security.  Our Internet-centred technology is increasingly exposed to viruses, hackers and government eavesdroppers.

One reason that so much of our technology is vulnerable is that most operating systems share code libraries between applications.  The most vulnerable application can leave the door open for malicious code to compromise everything running on the same machine.  This is part of the reason that the Apple ecosystem has been less prone to viruses than the Windows platform.  Learning from this, it is likely that the inefficiency of duplicating code will be a small price to pay for siloing applications from each other to reduce the risk of cross-infection.

At the same time our “public key” encryption is regarded by many as being at risk from increasing computing power for brute force code cracking and even the potential of future quantum computers.

Because there is no mathematical proof that encryption based on the factoring of large numbers won’t be cracked in the future, it can be argued to be unsuitable for a range of purposes such as voting.  A future architecture might consider more robust approaches such as the physical sharing of ciphers.
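
The post doesn’t name a specific scheme, but the classic example of a physically shared cipher is a one-time pad: a random key exchanged out of band and used only once. The sketch below is purely my own illustration of that idea, not anything proposed in the article.

```python
# Illustrative sketch of a "physically shared cipher": a one-time pad.
# The pad would be exchanged out of band (e.g. on physical media), kept secret,
# used once and never reused; under those assumptions it is information-
# theoretically secure regardless of future computing power.
import os

def xor_bytes(data: bytes, pad: bytes) -> bytes:
    if len(pad) < len(data):
        raise ValueError("pad must be at least as long as the message")
    return bytes(d ^ p for d, p in zip(data, pad))

message = b"meet at noon"
pad = os.urandom(len(message))          # stands in for the physically shared pad
ciphertext = xor_bytes(message, pad)
assert xor_bytes(ciphertext, pad) == message  # XOR with the same pad decrypts
```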

Privacy

Societies around the world are struggling with defining what law enforcement and security agencies should and shouldn’t have access to.  There is even a growing debate about who owns data.  As different jurisdictions converge on “right to be forgotten” legislation, and societies agree on what back-door keys to give to various agencies, future architectures will be key to simplifying the management of the agreed approaches.

The answers will be part of future architectures with clearer tracking of metadata (and indeed definitions of what metadata means).  At the same time, codification of access to security agencies will hopefully allow users to agree with governments about what can and can’t be intercepted.  Don’t expect this to be an easy public debate as it has to navigate the minefield of national boundaries.

Network latency

Another issue that is topical to application designers is network latency.  Despite huge progress in broadband across the globe, network speeds are not increasing at the same rate as other aspects of computing such as storage, memory or processing speeds.  What’s more, we are far closer to fundamental limits of physics (the speed of light) when managing the transmission from servers around the world.  Even the most efficient link between New York and London would mean the round-trip for an instruction response is 0.04 seconds at a theoretical maximum of the speed of light (with no latency for routers or a network path that is not perfectly straight).  In computing terms 0.04 seconds is the pace of a snail!
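
A quick sanity check of that figure, assuming the roughly 5,570 km great-circle distance and light travelling in a vacuum (which no real network achieves):

```python
# Back-of-the-envelope check of the round-trip figure quoted above.
SPEED_OF_LIGHT_KM_S = 299_792      # km/s in a vacuum
NY_LONDON_KM = 5_570               # approximate great-circle distance

round_trip_s = 2 * NY_LONDON_KM / SPEED_OF_LIGHT_KM_S
print(f"{round_trip_s:.3f} s")     # ~0.037 s; fibre, routing and retransmits only add to this
```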

The architectural solution has already started to appear with increasing enthusiasm for caching and on-device processing.  Mobile apps are a manifestation of this phenomenon which is also sometimes called edge computing.

The cloud provides the means to efficiently synchronise devices across the network, while the devices themselves offer cheap, powerful processing with masses of data right on hand. What many people don’t realise is that Internet Service Providers (ISPs) are already doing this by caching popular YouTube and other content, which is why some videos seem to take forever while others play almost instantly.

Identity

Trying to define the true nature of a person is almost a metaphysical question.  Am I an individual, spouse, family member or business?  Am I paying a bill on my own behalf or doing the banking for my business partners?  Any future architecture will build on today’s approaches and understanding of user identity.

Regardless of whether the answer will be biometrics, social media or two-factor authentication it is likely that future architectures will make it easier to manage identity.  The one thing that we know is that people hate username/password management and a distributed approach with ongoing verification of identity is more likely to gain acceptance (see Login with social media).

Governments want to own this space but there is no evidence that there is a natural right for any one trusted organisation.  Perhaps Estonia’s model of a digital economy and e-residency might provide a clue.  Alternatively, identity could become as highly regulated as national passports with laws preventing individuals from holding more than one credential without explicit permission.

Internet of Things

The Internet of Things brings a new set of challenges with an explosion of the number of connected devices around the world.  For anyone who is passionate about technology, it has been frustrating that useful interfaces have been so slow to emerge.  We were capable of connecting baby monitors, light switches and home security for years before they were readily available and even today they are still clumsy.

Arguably, the greatest factor in the lag between the technological possibility and market availability has been the challenge of interoperability and the assumption that we need standards.  There is a growing belief that market forces are more effective than standards (see The evolution of information standards).  Regardless, the evolution of new architectures is essential to enabling the Internet of Things marathon.

Complexity and Robustness

Our technology has become increasingly complex.  Over many years we have built layers upon layers of legacy, hidden under facades of modern interfaces.  Not only is this making IT orders of magnitude more expensive, it is also making it far harder to create bulletproof solutions.

Applications that lack robustness are frustrating when they stop you from watching your favourite TV programme, but they could be fatal when combined with the Internet of Things and increasingly autonomous machines such as trains, planes and automobiles.

There is an increasing awareness that the solution is to evolve to simplicity.  Future architectures are going to reward simplicity through streamlined integration of services to create applications across platforms and vendors.

Bringing it all together

The next architecture won’t be a silver bullet for all of the issues I’ve raised in this post, but to be compelling it will need to provide a platform to tackle problems that have appeared intractable to date.

Perhaps, though, the greatest accelerator of every generation of architecture is a cool name (such as “cloud”, “thin client”, “service oriented architecture” or “client/server”).  Maybe ambitious architects should start with a great label and work back to try and tackle some of the challenges that technologists face today.

Category: Enterprise2.0

by: RickDelgado
21 Jan 2015

Where to Go to Learn About Network Security

The one thought on seemingly every business’s mind is network security. This is understandable given the damaging attacks launched on companies like Home Depot, Target, and Sony in the past year-and-a-half or so. After seeing the effects of data breaches involving large corporations, businesses are taking their security more seriously now than ever before, and many are quickly realizing it’s far from an easy task. The number of cyber threats out there seems to multiply every day, and with it the range of methods and techniques attackers can use to breach a company’s security measures and obtain sensitive data from within. In this ever-evolving environment, it’s important that all business executives and IT security personnel stay up to date on the latest security information and learn all they can as they train to combat these threats. The question then becomes where you should go for the best in network security information and training. Luckily, the resources available are many.

 

When looking at materials and sources that can provide the most helpful information in improving network security, many companies will take cost into account. Some resources do have a cost associated with them, while others are perfectly free. Determining which to use will largely come down to what your company’s budget is for security purposes. Don’t let the price tag affect your opinion on the quality of the source. There are plenty of helpful security resources on the internet that are available at no cost.

 

Take blogs dedicated to the topic of network security, for instance. Blogs are an excellent way to find out what the latest news on the security front is. Of course cyber attacks hitting mega businesses like Target or JPMorgan Chase will almost always be front page news, but what about smaller breaches affecting lesser known companies? That’s where network security blogs come in. With blogs as a frequent resource, you can find out what kind of new threats are out there, how they’re impacting companies, and what’s being done to stop them. Network security blogs can also help you learn about new industry trends you’ll probably want to try for your company. Blogs can also give overviews of new security products and software. Some blogs are maintained by experts in the network security field, like Schneier on Security, and others come from organizations and institutes that dedicate themselves to improving network security for all organizations (the SANS Security Blogs are good examples of this type). Companies in the security field can have their own blogs as well, and though they can provide helpful information, keep in mind they’re also aimed at pushing their products.

 

Another free resource that shouldn’t be overlooked is YouTube. It’s one thing to read about new effective security procedures; it’s another thing to see them in action via an easy-to-follow video. The number of YouTube videos dedicated to network security is vast, but choosing which ones to view is similar to choosing which blogs to follow. Most prominent are those from security companies, which have a lot to gain from the eyes they attract. While it’s often assumed that security companies focus only on videos promoting their products, many do offer educational videos that can help you learn more about network security and what’s being done to improve it. Some YouTube channels you’ll want to keep an eye on include the free video tutorials offered by CBT Nuggets and the useful videos from McAfee, which is now part of Intel Security. One word of caution: always check the upload date of the video. A video from five years ago may not have the updated information needed to fight today’s security threats.

 

Beyond the free sources, there are also learning opportunities you can pay for. Many educational institutions offer online courses that will give you the skills and knowledge to know what to do for your company’s network security. Gaining these certifications does require extra work, some or most of it likely to be done outside the office, but it can all be worth it in the end. If the goal is to prevent security breaches, combining courses with blogs and online videos can make that goal far more achievable. Look at all of these resources as tools you can add to your arsenal of knowledge and expertise. You and your company will be much better off because of your dedication to learning all you can.

 

Category: Information Development

by: Gil Allouche
15 Jan 2015

Keeping Big Data Secure: Should You Consider Data Masking?

Big data is a boon to every industry. And as data volumes continue their exponential rise, the need to protect sensitive information from being compromised is greater than ever before. The recent data breach of Sony Pictures, and new national threats from foreign factions serve as a cautionary tale for government and private enterprise to be constantly on guard and on the lookout for new and better solutions to keep sensitive information secure.

One security solution, “data masking”, is the subject of a November 2014 article on Nextgov.com.

In the article, Ted Girard, a vice president at Delphix Federal, defines what data masking is—along with its applications in the government sector. Since data masking also has non-government applications, organizations wondering whether this solution is something they should consider for protecting original production data should find the following takeaways from the Nextgov article helpful.

The information explosion
Girard begins by stating the plain and simple truth that in this day and age of exploding volumes of information, “data is central to everything we do.” That being said he warns that, “While the big data revolution presents immense opportunities, there are also profound implications and new challenges associated with it.” Among these challenges, according to Girard, are protecting privacy, enhancing security and improving data quality. “For many agencies just getting started with their big data efforts”, he adds, “these challenges can prove overwhelming.”

The role of data masking
Speaking specifically of governmental needs to protect sensitive health, education, and financial information, Girard explains that data masking is, “a technique used to ensure sensitive data does not enter nonproduction systems.” Furthermore, he explains that data masking is, “designed to protect the original production data from individuals or project teams that do not need real data to perform their tasks.” With data masking, so-called “dummy data”— a similar but obscured version of the real data—is substituted for tasks that do not depend on real data being present.
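
As a purely illustrative sketch of the idea (not Delphix’s product or any tool mentioned in the article), masking can be as simple as replacing sensitive fields with consistent dummy tokens before a record is copied into a non-production system. The field names and masking rule below are invented for the example.

```python
# Minimal illustration of static data masking: sensitive fields are replaced
# with deterministic dummy tokens so non-production systems never see real data.
# Field names, the salt and the truncation length are all example choices.
import hashlib

def mask_value(value: str, salt: str = "nonprod") -> str:
    """Same real value always maps to the same token, so joins still work."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:10]

def mask_record(record: dict, sensitive_fields: set) -> dict:
    return {key: (mask_value(val) if key in sensitive_fields else val)
            for key, val in record.items()}

patient = {"name": "Jane Roe", "ssn": "123-45-6789", "visit_count": "4"}
print(mask_record(patient, sensitive_fields={"name", "ssn"}))
```

Keeping the substitution deterministic is one simple way to preserve relationships between records, which touches on the data quality and cross-system consistency points Girard raises later.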

The need for “agile” data masking solutions
As Girard points out, one of the problems associated with traditional data masking is that, “every request by users for new or refreshed data sets must go through the manual masking process each time.” This, he explains, “is a cumbersome and time-consuming process that promotes ‘cutting corners’– skipping the process altogether and using old, previously masked data sets or delivering teams unmasked versions.” As a result, new agile data masking solutions have been developed to meet the new demands associated with protecting larger volumes of information.

According to Girard, the advantage of agile data masking is that it, “combines the processes of masking and provisioning, allowing organizations to quickly and securely deliver protected data sets in minutes.”

The need for security and privacy
As a result of collecting, storing and processing sensitive information of all kinds, government agencies need to keep that information protected. Still, as Girard points out, “Information security and privacy considerations are daunting challenges for federal agencies and may be hindering their efforts to pursue big data programs.” The good news with “advance agile masking technology”, according to Girard, is that it helps agencies, “raise the level of security and privacy assurance and meet regulatory compliance requirements.” Thanks to this solution, Girard says that, “sensitive data is protected at each step in the life cycle automatically.”

Preserving data quality
Big data does not necessarily mean better data. According to Girard, a major cause of many big data project failures is poor data. In dealing with big data, Girard says that IT is faced with two major challenges:

1. “Creating better, faster and more robust means of accessing and analyzing large data sets…to keep pace.”
2. “Preserving value and maintaining integrity while protecting data privacy….”

Both of these challenges are formidable, especially with large volumes of data migrating across systems. As Girard explains, “…controls need to be in place to ensure no data is lost, corrupted or duplicated in the process.” He goes on to say that, “The key to effective data masking is making the process seamless to the user so that new data sets are complete and protected while remaining in sync across systems.”

The future of agile data masking
Like many experts, Girard predicts that big data projects will become a greater priority for government agencies over time. Although not mentioned in the article, the NSA’s recent installation of a massive 1.5 billion-dollar data center in Utah serves as a clear example of the government’s growing commitment to big data initiatives. In order to successfully analyze vast amounts of data securely and in real time going forward, Girard says that agencies will need to, “create an agile data management environment to process, analyze and manage data and information.”

In light of growing security threats, organizations looking to protect sensitive production data from being compromised in less-secure environments should consider data masking as an effective security tool for both on-premise and cloud-based big data platforms.

Category: Enterprise Data Management

by: Bsomich
10 Jan 2015

MIKE2.0 Community Update

Missed what’s been happening in the MIKE2.0 data management community? Read on!

 


What is an Open Methodology Framework? 

An Open Methodology Framework is a collaborative environment for building methods to solve complex issues impacting business, technology, and society.  The best methodologies provide repeatable approaches on how to do things well based on established techniques. MIKE2.0’s Open Methodology Framework goes beyond the standards, techniques and best practices common to most methodologies with three objectives:

  • To Encourage Collaborative User Engagement
  • To Provide a Framework for Innovation
  • To Balance Release Stability with Continuous Improvement

We believe this approach provides a successful framework for accomplishing things in a better, more collaborative fashion. What’s more, it allows for concurrent focus on both method and detailed technology artifacts. The emphasis is on emerging areas in which current methods and technologies lack maturity.

The Open Methodology Framework will be extended over time to include other projects. Another example of an open methodology is open-sustainability, which applies many of these concepts to the area of sustainable development. Suggestions for other Open Methodology projects can be initiated on this article’s talk page.

We hope you find this of benefit and welcome any suggestions you may have to improve it.

Sincerely,

MIKE2.0 Community

Popular Content

Did you know that the following wiki articles are most popular on Google? Check them out, and feel free to edit or expand them!

What is MIKE2.0?
Deliverable Templates
The 5 Phases of MIKE2.0
Overall Task List
Business Assessment Blueprint
SAFE Architecture
Information Governance Solution

Contribute to MIKE:

Start a new article, help with articles under construction or look for other ways to contribute

Update your personal profile to advertise yourself to the community and interact with other members.

Useful Links:
Home Page
Login
Content Model
FAQs
MIKE2.0 Governance


 

This Week’s Blogs for Thought:

Security Concerns for the IoT in Healthcare

It’s easy to get caught up in all the hype surrounding the latest buzzwords in technology. The Internet of Things (IoT) is one such term many insiders are talking about excitedly, and it’s already making a big difference in certain industries. The idea of objects that collect data, connect to the internet and communicate with each other is an intriguing one that’s poised to explode. Gartner is even predicting that the number of devices connected to the IoT will grow to 26 billion by the year 2020. Industries all over the world are investigating how the Internet of Things can benefit them, and the healthcare industry is no exception.

Read more.

The Internet of Misfit Things

One of the hallmarks of the holiday season is Christmas television specials. This season I once again watched one of my favorite specials, Rudolph the Red-Nosed Reindeer. One of my favorite scenes is the Island of Misfit Toys, which is a sanctuary for defective and unwanted toys.

This year the Internet of Things became fit for things for tracking fitness. While fitness trackers were among the hottest tech gifts this season, it remains to be seen whether these gadgets are little more than toys at this point in their evolution. Read more.

Our Machines Will Not Outsmart Us

Over the millennia we have been warned that the end of the world is nigh.  While it will no doubt be true one day, warnings by Stephen Hawking in a piece he co-authored on artificial intelligence don’t fill me with fear (See Transcending Complacency on Superintelligent Machines).  I disagree with the commentators across the board who are warning that the machine will outsmart us by the 2030s and that it could become a Terminator-style race between us and them.

Hawking and his co-authors argue that “[s]uccess in creating AI would be the biggest event in human history.  Unfortunately, it might also be our last unless we learn how to avoid the risks.”

Read more.

 


Category: Information Development

by: RickDelgado
08 Jan 2015

Security Concerns for the Internet of Things in Healthcare

It’s easy to get caught up in all the hype surrounding the latest buzzwords in technology. The Internet of Things (IoT) is one such term many insiders are talking about excitedly, and it’s already making a big difference in certain industries. The idea of objects that collect data, connect to the internet and communicate with each other is an intriguing one that’s poised to explode. Gartner is even predicting that the number of devices connected to the IoT will grow to 26 billion by the year 2020. Industries all over the world are investigating how the Internet of Things can benefit them, and the healthcare industry is no exception. But as much as the IoT can lead to some notable advantages in healthcare, a number of concerns have arisen that could derail any momentum. These concerns need to be addressed if those in healthcare want to get the most out of the IoT.

 

While some ways the IoT can help in healthcare are still theoretical, a good number of devices and practices are already being used to positive effect. Health monitors have become much more versatile and common, such as the remote heart rate monitors used in hospitals. What sets these monitors apart is how, via their connection through the IoT, they can communicate with other medical devices and with medical personnel to send out alerts in the event that vital signs fall into dangerous territory. Another new technology associated with the Internet of Things is the smart bed. These beds detect when a patient is resting on them and can notify the hospital when the patient tries to get up. Smart beds can also take data and adjust the pressure being put on the patient to allow for maximum comfort and recovery.
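
The alerting idea is simple enough to sketch. The following is purely illustrative: the thresholds, device names and notification hook are invented for the example, and real medical devices are far more sophisticated than this.

```python
# Purely illustrative sketch of the alerting idea described above: a connected
# monitor checks a reading against a safe range and notifies staff when it
# falls outside that range.

SAFE_HEART_RATE_BPM = (40, 130)   # assumed range, for the example only

def notify_staff(device_id: str, reading: int) -> None:
    # In a real system this would go to a paging or messaging service.
    print(f"ALERT from {device_id}: heart rate {reading} bpm out of range")

def check_reading(device_id: str, reading: int) -> None:
    low, high = SAFE_HEART_RATE_BPM
    if not (low <= reading <= high):
        notify_staff(device_id, reading)

for bpm in (72, 85, 152):         # simulated stream of readings
    check_reading("monitor-7", bpm)
```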

 

Much of the aim of IoT-connected devices lies in decreasing the amount of direct face-to-face interaction patients need with their doctors. IoT devices also help to keep doctors and nurses informed by collecting and supplying large amounts of data, helping them track health information at a level never before realized. The Internet of Things can also drastically reduce the amount of healthcare device downtime by alerting manufacturers, technicians, and managers when devices are about to break down. Parts can be shipped and repair technicians can be summoned almost immediately, and it’s thanks to the sensors being embedded within the devices themselves.

 

While all of these breakthroughs point to added convenience and improved patient health, there is still a significant danger many experts are warning about. In light of the hacks that have hit major companies over the last few years, security issues remain a top concern and have made hospitals reluctant to truly dive into the IoT’s potential. Perhaps the main overriding concern is securing health data and patient privacy. By using devices that are constantly connected to the internet and having them store, process, and transfer health data, the danger of hackers gaining possession of sensitive information is very real. In some cases, health data is more valuable on the black market than bank account information, and the fact that networked devices can be vulnerable to hacking likely has many in the healthcare industry approaching the issue cautiously. Even more serious is the potential for hackers to actually infiltrate devices and use them to attack the patient. This has already been demonstrated in certain environments, and while no actual attacks have been recorded yet, the danger is real and needs to be addressed.

The Internet of Things isn’t going away anytime soon, so the healthcare industry will need to focus on improving IT security as it adopts connected health devices. This may require constant updates and patching to the devices, ensuring that any security holes are plugged up and minimizing the possibility of cyber attacks. Even the federal government is getting involved in the form of the FDA issuing guidelines and standards that all networked medical devices need to meet before being used. Some hospitals are even going as far as internally hosting data to maintain more control, although that does somewhat limit IoT devices. Whatever method is used, the Internet of Things could revolutionize the healthcare industry. Caution is needed, however, to ensure patients’ security is protected every step of the way.

Category: Information Development
