You see them everywhere nowadays, whether you’re walking down the street, riding the bus, attending a sporting event, or eating at a restaurant. Smartphones have become a common sight, providing an easy and convenient way to communicate with family and friends through email, texting, and social media. The number of functions smartphones can perform is staggering, so it’s no wonder they’ve become such a major fixture in our lives. But as mobile devices creep into everyday life, there’s one place where they might not be entirely welcome: the office. Smartphones have entered the workplace, and with employees performing their jobs on the road more frequently, these devices are often the one connection workers need to keep up with their workloads. Yet more and more businesses view smartphones as more of a nuisance than an advantage, and some organizations are even considering banning smartphones and similar technology outright.
If company leaders want to look for reasons to ban personal smartphones and tablets in the office, they don’t have to look far. A recent study backs up many of the concerns generated by the infiltration of smartphones into work environments. According to the study, a whopping 95 percent of employees say they’re distracted during the workday. While the top source of distraction is coworkers, 45 percent said they were distracted by technology like text messages and personal emails. People can become addicted to their smartphones, and it doesn’t take much for an employee to become sucked into their Facebook newsfeed.
In addition to the worries over distractions and lost productivity, business leaders also have significant fears about the security risks that can increase when people use their own devices in the workplace. Many businesses have actually encouraged people to bring personal mobile devices to work with bring your own device (BYOD) policies, but the result has been an increase in the volume and severity of security incidents. When workers are part of a BYOD policy, they connect their devices to the business network. Should a device carry malware or other damaging code, the network and other systems, along with other devices, may become infected as well, fully compromising IT security. Plus, there’s always the fear that an employee’s device will get lost or stolen, and if it holds sensitive company data, the company may suffer damage.
So worries over distractions and security risks seem to show the need to ban smartphones and other mobile devices from the workplace, but that would ignore the benefits that come from bringing this technology. While they may cause distractions, smartphones have also been shown to increase productivity among employees. That’s one of the reasons workers like to use their own devices as well as a reason so many businesses have adopted BYOD in the first place. Employees can get more work done since they are already familiar with their devices and know how best to use them. BYOD policies can also save companies money since workers would be responsible for the costs of those devices.
There’s also the question of whether a ban on smartphones is even effective. While it’s true that some bans in Finland have been successful, tech bans in other parts of the world are more of a mixed bag. Research shows that around 70 percent of workers already bring their personal devices to work every day. Of those who do so, about 20 percent use personal smartphones that are restricted by their employers. As mentioned before, people grow attached to their smartphones, and many will likely bring them in even if told not to. Workers are also increasingly reliant on their devices for work-related tasks. Apps and other features have been developed with businesses in mind, making smartphones a valuable tool in the workplace. And many of the distractions found on smartphones (email, social media) can just as easily be accessed through normal desktop computers. In short, tech bans might seem like a good idea at first, but they may prove ineffective in the long run.
Smartphones can be much more than a distraction or security risk. While businesses may be understandably cautious about mobile devices, their employees can get more done in less time. Ultimately, tech bans don’t accomplish what they set out to do, so a more active approach like monitoring mobile activity may be helpful in keeping workers on task and minimizing security threats. A few common sense rules and guidelines can go a long way to getting the most out of smartphones, which effectively avoids an outright ban.
In a previous post, I discussed some data quality and data governance issues associated with open data. In his recent blog post How far can we trust open data?, Owen Boswarva raised several good points about open data.
“The trustworthiness of open data,” Boswarva explained, “depends on the particulars of the individual dataset and publisher. Some open data is robust, and some is rubbish. That doesn’t mean there’s anything wrong with open data as a concept. The same broad statement can be made about data that is available only on commercial terms. But there is a risk attached to open data that does not usually attach to commercial data.”
Data quality, third-party rights, and personal data were three grey areas Boswarva discussed. Although his post focused on a specific open dataset published by an agency of the government of the United Kingdom (UK), his points are generally applicable to all open data.
As Boswarva remarked, the quality of a lot of open data is high even though there is no motivation to incur the financial cost of verifying the quality of data being given away for free. The “publish early even if imperfect” principle also encourages a laxer data quality standard for open data. However, “the silver lining for quality-assurance of open data,” Boswarva explained, is that “open licenses maximize re-use, which means more users and re-users, which increases the likelihood that errors will be detected and reported back to the publisher.”
The issue of third-party rights raised by Boswarva was one that I had never considered. His example was the use of a paid third-party provider to validate and enrich postal address data before it is released as part of an open dataset. Therefore, consumers of the open dataset benefit from postal validation and enrichment without paying for it. While the UK third-party providers in this example acquiesced to open re-use of their derived data because their rights were made clear to re-users (i.e., open data consumers), Boswarva pointed out that re-users should be aware that using open data doesn’t provide any protection from third-party liability and, more importantly, doesn’t create any obligation on open data publishers to make sure re-users are aware of any such potential liability. While, again, this is a UK example, that caution should be considered applicable to all open data in all countries.
As for personal data, Boswarva noted that while open datasets are almost invariably non-personal data, “publishers may not realize that their datasets contain personal data, or that analysis of a public release can expose information about individuals.” The example in his post centered on the postal addresses of property owners, which, without the names of the owners included in the dataset, are not technically personal data. However, it is easy to cross-reference this with other open datasets to assemble a lot of personally identifiable information that, if it were contained in one dataset, would be considered a data protection violation (at least in the UK).
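The cross-referencing risk is easy to demonstrate. The sketch below uses made-up records and illustrative field names (nothing from Boswarva’s actual example) to join two individually non-personal open datasets on a shared address field:

```python
# Hypothetical open datasets: one lists properties (no names), the other
# is a register-style dataset (names and addresses). Neither is personal
# data on its own in the sense discussed above.
properties = [
    {"address": "1 High St", "council_tax_band": "D"},
    {"address": "2 Mill Ln", "council_tax_band": "B"},
]
register = [
    {"address": "1 High St", "name": "A. Smith"},
    {"address": "2 Mill Ln", "name": "B. Jones"},
]

# Index one dataset by the shared address field...
names_by_address = {row["address"]: row["name"] for row in register}

# ...then a simple join links properties to named individuals, assembling
# personally identifiable information from two non-personal releases.
linked = [
    {**row, "name": names_by_address[row["address"]]}
    for row in properties
    if row["address"] in names_by_address
]
print(linked)
```

The point is not that joins are exotic; it is that a publisher assessing one dataset in isolation can easily miss what that dataset exposes in combination with everything else already published.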
In terms of trends, few are as clear and popular as flash storage. While other technology trends might be more visible among the general public (think the explosion of mobile devices), the rise of flash storage among enterprises of all sizes has the potential to make just as big an impact, even if it happens beneath the surface. There’s little question that the trend is growing and looks to continue over the next few years, but the real question revolves around flash storage and the other mainstream storage option: hard disk drives (HDD). While HDD remains more widely used, flash storage is quickly gaining ground. The question then becomes, how long do we have to wait before flash storage not only overtakes hard drives but becomes the only game in town? A careful analysis reveals some intriguing answers and possibilities for the future, but also emphasizes a number of obstacles that still need to be overcome.
First, it’s important to look at why flash storage has become so popular in the first place. One of the main selling points of flash storage, or solid-state drives (SSD), is speed. Compared to hard drives, flash storage offers much faster read and write performance. This is achieved by storing data on rewritable memory cells, with no moving parts like the rotating platters of hard disk drives (this also makes flash storage more durable). Increased speed and better performance mean apps and programs can launch more quickly. The capabilities of flash storage have become sorely needed in the business world since companies are now dealing with large amounts of information in the form of big data. To properly process and analyze big data, more businesses are turning to flash, which has sped up its adoption.
While it’s clear that flash array storage features a number of advantages in comparison to HDD, these advantages don’t automatically mean it is destined to be the sole storage option in the future. For such a reality to come about, solutions to a number of flash storage problems need to be found. The biggest concern and largest drawback to flash storage is the price tag. Hard drives have been around a long time, which is part of the reason the cost to manufacture them is so low. Flash storage is a more recent technology, and the price to use it can be a major barrier limiting the number of companies that would otherwise gladly adopt it. A cheap hard drive can be purchased for around $0.03 per GB. Flash storage is much more expensive at roughly $0.80 per GB. While that may not seem like much, keep in mind that’s about 27 times more expensive. For businesses being run on a tight budget, hard drives seem to be the more practical solution.
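Using the per-gigabyte figures quoted above, the arithmetic behind that comparison looks like this (the 100 TB tier is a hypothetical scenario for illustration):

```python
# Per-gigabyte prices quoted above.
HDD_PER_GB = 0.03  # dollars per GB for a cheap hard drive
SSD_PER_GB = 0.80  # dollars per GB for flash storage

# Flash is roughly 27 times more expensive per gigabyte.
ratio = SSD_PER_GB / HDD_PER_GB

# What that gap means at scale, e.g. a hypothetical 100 TB storage tier:
capacity_gb = 100 * 1000
hdd_cost = capacity_gb * HDD_PER_GB  # roughly $3,000
ssd_cost = capacity_gb * SSD_PER_GB  # roughly $80,000

print(f"Flash costs about {ratio:.0f}x more per GB "
      f"(${hdd_cost:,.0f} vs ${ssd_cost:,.0f} for 100 TB)")
```

A few cents per gigabyte sounds trivial until it is multiplied across an enterprise-scale deployment, which is exactly why budget-constrained businesses hesitate.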
Beyond the price, flash storage may also suffer from performance problems down the line. While it’s true that flash storage is faster than HDD, it also has a more limited lifespan. Flash cells can only be rewritten so many times, so the more times a business uses it, the more performance will suffer. New technology has the potential to increase that lifespan, but it’s still a concern that enterprises will have to deal with in some fashion. Another problem is that many applications and systems that have been in use for years were designed with hard drives in mind. Apps and operating systems are starting to be created with SSD as the primary storage option, but more changes to existing programs need to happen before flash storage becomes the dominant storage solution.
So getting back to the original question, when will flash storage be the new king of storage options? Or is such a future even likely? Experts differ on what will happen within the next few years. Some believe that it will be a full decade before flash storage is more widely used than hard drives. Others have said that looking at hard drives and flash storage as competitors is the wrong perspective to have. They say the future lies with not one or the other but rather both used in tandem through hybrid systems. The idea would be to use flash storage for active data that is used frequently, while hard drives would be used for bulk storage and archive purposes. There are also experts who say discussion over which storage option will win out is pointless because within the next decade, better storage technologies like memristors, phase-change memory, and even atomic memory will become more mainstream. However the topic is approached, current advantages featured in flash storage make it an easy choice for enterprises with the resources to use it. For now, the trend of more flash looks like it will continue its impressive growth.
Selecting a big data solution can be tricky at times with the different options available for enterprises. Deciding between a cloud big data analytics provider and an on-premise Hadoop solution comes down to recognizing the pros and cons of both options and how they will affect the bottom line.
Pros of On-premise Hadoop Solutions
As with any on-premise solution, on-premise Hadoop allows businesses to have complete control over their Hadoop cluster and, perhaps more importantly, their data. While the cloud is becoming more accessible to industries facing heavy security and compliance regulations, some companies may prefer to keep everything in-house. On-premise Hadoop also avoids the complexity and potential lock-in of vendor SLAs.
Cons of On-premise Hadoop Solutions
Investing in Hadoop hardware can prove expensive depending on how it’s used. Despite running on low-cost commodity servers, extending to thousands of nodes can result in significant costs, requiring attention to problems that are typically uncommon but increase in frequency with a large number of servers.
Hadoop is also complex to maintain and manage. Companies will have to dedicate certain employees to deploying and managing clusters. Once a cluster’s capacity is reached, adding nodes to the cluster will also eat up resources.
Pros of Cloud Providers
One of the pros of using a cloud provider versus an on-premise Hadoop solution is scalability. Cloud platforms let companies access essentially unlimited storage on demand, and enterprises can easily scale up or down as IT requirements change, supporting business growth without expensive changes to existing IT systems.
Another pro of using a cloud provider versus an on-premise Hadoop solution is the flexibility of solutions available both in and out of the workplace. Employees can more easily access files using devices like smartphones and laptops. Organizations can simultaneously share documents and other files over the cloud while supporting both internal and external collaboration.
Cons of Cloud Providers
Due to the massive growth of cloud computing, organizations are starting to rely on managed data centers run by cloud experts trained in maintaining and scaling shared, private, and hybrid clouds. Companies that do not have their own data scientists will have to make changes to their current cloud computing structure to meet their evolving data needs.
A risk organizations take with cloud providers is relying on the provider’s level of security and responsiveness to technical issues. Though rare, the cloud provider may have downtime that impacts a business’s ability to run queries or meet customers’ demand for queries.
The path businesses take will depend on individual needs and circumstances as there are pros and cons to each type of solution. As big data moves mainstream, you will want to consider how you will take advantage of this resource.
With the holidays upon us, most businesses are dealing with what is usually their busiest time of the year. It’s a period of excitement and increased sales, but it’s also a time of worry and concern. In the wake of the recent data breaches at large retailers like Target and Home Depot, many businesses are approaching the holidays with a more cautious attitude, particularly toward security. Hackers have the potential to steal data and cause millions of dollars in damages, essentially crippling any business no matter its size. What’s even more alarming is that many companies haven’t responded effectively to the threat of security breaches. A recent study has shown that up to 58 percent of retailers are actually less secure than they were a year ago. While some may have added new network security features, cyber attackers have had more time to get inside business systems and take advantage of any weaknesses they have found. The lesson is that organizations need to work on their security for the holidays, and they need to do so immediately. Any delay could be costly.
When it comes to improving business security, one of the first steps is to identify where a company may be vulnerable. This can be accomplished primarily through vulnerability scans: automated tests that businesses can run to find weaknesses within their networks and systems. Any vulnerabilities found may eventually be used by hackers to infiltrate the network and steal valuable and sensitive information. This is a particular concern during the holidays since the number of credit card purchases increases dramatically. With the weaknesses properly identified, companies will know where to focus their attention.
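As a rough illustration of the very first step such a scan performs (discovering which services are reachable), here is a minimal TCP port check in Python. This is a toy sketch only; real vulnerability scanners go much further, matching discovered services against databases of known weaknesses:

```python
import socket

def check_open_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` accepting TCP connections on `host`.

    Illustrates service discovery, the first stage of a vulnerability
    scan. Only scan hosts you own or are authorized to test.
    """
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising.
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

For example, `check_open_ports("127.0.0.1", [80, 443, 8080])` reports which of those ports something is listening on locally. In practice, businesses would schedule full scans with dedicated tooling rather than hand-rolled scripts, but the underlying principle is the same: find the exposed surface before an attacker does.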
Finding vulnerabilities as soon as possible is especially important because current hacking techniques are different than those used years ago. While some hackers may still employ traditional hit-and-run tactics, many others have the long game in mind. Companies that experience attacks during the holidays may actually be suffering from an infiltration that occurred as long as six months ago. Surprisingly, recent research has also shown that attacks during the holiday shopping season don’t actually increase in number, but that doesn’t mean hackers aren’t busy. Many may infiltrate a network during that time but not launch an attack until many months have passed. The main point is that finding vulnerabilities quickly is the first step businesses need to take, and fixing those problems needs to follow immediately.
Companies also need to be on guard for other cyber attacks targeting their business. One of the most common during the holidays is spear phishing. Spear phishing isn’t necessarily targeted at a company’s network but rather at its employees. The idea is to deceive people into believing an email or similar message is real and have them click on a link. That link usually leads to downloading malware or some other type of virus. During the holidays, spear phishing usually comes in the form of fake charity emails, false shipping confirmations, or fraudulent bank notifications. Since users are making more unusual purchases during this time of year, they are more susceptible to believing this type of scam. While it may seem like this is more of a problem for individuals than for an organization, employees are using personal mobile devices at work much more often through BYOD policies, and those devices often connect to the company’s network. If those devices have been infected with malware, the business could be in trouble.
Combating this type of cyber attack requires companies to inform and train their employees. Workers need to know about the security threats that are out there. That means spotting the warning signs and knowing how to respond to them. This is especially important during the holidays since many workers are only seasonal and may not receive adequate training. Even if the job is temporary, employees still need to be kept up to date about the risks and how to prevent them. Taking this proactive step immediately can help businesses avoid security breaches during the holidays and into the future.
The last thing any business wants is to deal with a security breach during the holidays. Though the threats may feel overwhelming, it’s never too late to start improving security, finding vulnerabilities, and educating employees about the dangers they may face. Fighting the threats is an ongoing battle that should receive extra attention at any time of year, not just the holiday shopping season. With better security, businesses can feel more confident about protecting customer information and preparing for another busy year ahead.
Did You Know?
MIKE2.0’s Integrated Content Repository brings together the open assets from the MIKE2.0 Methodology, shared assets available on the internet, and internally held assets. The Integrated Content Repository is a virtual hub of assets that can be used by an Information Management community, some of which are publicly available and some of which are held internally.
Any organisation can follow the same approach and integrate their internally held assets to the open standard provided by MIKE2.0 in order to:
- Build community
- Create a common standard for Information Development
- Share leading intellectual property
- Promote a comprehensive and compelling set of offerings
- Collaborate with the business units to integrate messaging and coordinate sales activities
- Reduce costs through reuse and improve quality through known assets
The Integrated Content Repository is a true Enterprise 2.0 solution: it makes use of the collaborative, user-driven content built using Web 2.0 techniques and technologies on the MIKE2.0 site and incorporates it internally into the enterprise. The approach followed to build this repository is referred to as a mashup.
Feel free to try it out when you have a moment; we’re always open to new content ideas.
This Week’s Blogs for Thought:
Could Data Governance Actually Endanger Data?
One of my favorite books is SuperFreakonomics by economist Steven Levitt and journalist Stephen Dubner, in which, as with their first book and podcast, they challenge conventional thinking on a variety of topics, often revealing counterintuitive insights about how the world works. One of the many examples from the book is their analysis of the Endangered Species Act (ESA) passed by the United States in 1973 with the intention to protect critically imperiled species from extinction.
Trading Your Way to IT Simplicity
Stop reading now if your organisation is easier to navigate today than it was 3, 5 or 10 years ago. The reality that most of us face is that the general ledger that might have cost $100,000 to implement twenty or so years ago will now cost $1 million or even $10 million. Just as importantly, it is getting harder to implement new products, services or systems. The cause of this unsustainable business malaise is the complexity of the technology we have chosen to implement.
Improving Network Security for Small Businesses
You’ve likely read all about them–the massive security breaches and cyber attacks hitting major corporations like Home Depot, Target, and even the New York Times. These damaging attacks have cost these companies millions of dollars in damages, and they’re just a portion of all the security risk stories out there. As a small business owner, you may be tempted to think your company doesn’t have to worry as much about cyber attackers inflicting damage on your operations. After all, compared to a big business, your company has relatively few resources and doesn’t leave nearly as big of a footprint on the market. That belief, however, could leave you and your business vulnerable.
One of my favorite books is SuperFreakonomics by economist Steven Levitt and journalist Stephen Dubner, in which, as with their first book and podcast, they challenge conventional thinking on a variety of topics, often revealing counterintuitive insights about how the world works.
One of the many examples from the book is their analysis of the Endangered Species Act (ESA) passed by the United States in 1973 with the intention to protect critically imperiled species from extinction.
Levitt and Dubner argued the ESA could, in fact, be endangering more species than it protects. After a species is designated as endangered, the next step is to designate the geographic areas considered critical habitats for that species. After an initial set of boundaries is made, public hearings are held, allowing time for developers, environmentalists, and others to have their say. The process to finalize the critical habitats can take months or even years. This lag time creates a strong incentive for landowners within the initial geographic boundaries to act before their property is declared a critical habitat, or out of concern that it might attract endangered species. Trees are cut down to make the land less hospitable, or development projects are fast-tracked before ESA regulation can prevent them. This often has the unintended consequence of hastening the destruction of more critical habitats and expediting the extinction of more endangered species.
This made me wonder whether data governance could be endangering more data than it protects.
After a newly launched data governance program designates the data that must be governed, the next step is to define the policies and procedures that will have to be implemented. A series of meetings are held, allowing time for stakeholders across the organization to have their say. The process to finalize the policies and procedures can take weeks or even months. This lag time provides an opportunity for developing ways to work around data governance processes once they are in place, or ways to simply not report issues. Either way this can create the facade that data is governed when, in fact, it remains endangered.
Just as it’s easy to make the argument that endangered species should be saved, it’s easy to make the argument that data should be governed. Success is a more difficult argument. While the ESA has listed over 2,000 endangered species, only 28 have been delisted due to recovery. That’s a success rate of only about one percent. While the success rate of data governance is hopefully higher, as Loraine Lawson recently blogged, a lot of people don’t know if their data governance program is on the right track or not. And that fact in itself might be endangering data more than not governing data at all.
Stop reading now if your organisation is easier to navigate today than it was 3, 5 or 10 years ago. The reality that most of us face is that the general ledger that might have cost $100,000 to implement twenty or so years ago will now cost $1 million or even $10 million. Just as importantly, it is getting harder to implement new products, services or systems.
The cause of this unsustainable business malaise is the complexity of the technology we have chosen to implement.
For the general ledger it is the myriad of interfaces. For financial services products it is the number of systems that need to keep a record of every aspect of business activity. For telecommunications it is bringing together the OSS and BSS layers of the enterprise. Every function and industry has its own good reasons for the added complexity.
However good the reasons, the result is that it is generally easier to innovate in a small nimble enterprise, even a start-up, than in the big corporates that are the powerhouse of our economies.
While so much of the technology platform creates efficiencies, often enormous and essential to the productivity of the enterprise, it generally doesn’t support or even permit rapid change. It is really hard to design the capacity to change into the systems that support the organisation. The more complex an environment becomes the harder it is to implement change.
Most organisations recognise the impact of complexity and try to reduce it by implementing an enterprise architecture in one form or another. Supporting the architecture is a set of principles which, if implemented in full, will support consistency and dramatically reduce the cost of change. Despite the best will in the world, few businesses or governments succeed in realising their lofty architectural principles.
The reason is that, while architecture is seen as the solution, it is too hard to implement. Most IT organisations run their business through a book of projects. Each project signs up to an architecture but quickly implements compromises as challenges arise.
It’s no wonder that architects are perhaps the most frustrated of IT professionals. At the start of each project they get wide commitment to the principles they espouse. As deadlines loom, and the scope evolves, project teams make compromises. While each compromise may appear justified they have the cumulative effect of making the organisation more rather than less complex.
Complexity has a cost. If this cost is fully appreciated, the smart organisation can see the value in investing in simplification.
While architects have a clear vision of what “simple” looks like, they often have a hard time putting a measure against it. It is this lack of a measure that makes the economics of technology complexity hard to manage.
Increasingly though, technologists are realising that it is in the fragmentation of data across the enterprise that real complexity lies. Even when there are many interacting components, if there is a simple relationship between core information concepts then the architecture is generally simple to manage.
Simplicity can be achieved through decommissioning (see Value of decommissioning legacy systems) or by reducing the duplication of data. This can be measured using the Small Worlds measure as described in MIKE2.0 or chapter 5 of my book Information-Driven Business. The idea is further extended as “Hillard’s Graph Complexity” in Michael Blaha’s book, UML Database Modelling Workbook.
In summary, the measure looks at how many steps are required to bring together key concepts such as customer, product and staff. The more fragmented information is, the more difficult any business change or product implementation becomes.
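To make the idea concrete, here is an illustrative sketch (not the formal MIKE2.0 definition of the Small Worlds measure) that treats the data landscape as a graph of systems and counts the fewest joins or interfaces needed to relate two concepts. The system names are hypothetical:

```python
from collections import deque

# Hypothetical data landscape: nodes are systems/datasets, edges are the
# joins or interfaces required to relate them.
landscape = {
    "customer_master": ["crm", "billing"],
    "crm": ["customer_master", "product_catalog"],
    "billing": ["customer_master", "ledger"],
    "product_catalog": ["crm"],
    "ledger": ["billing", "staff_directory"],
    "staff_directory": ["ledger"],
}

def steps_between(graph, start, goal):
    """Breadth-first search: fewest hops needed to relate two concepts."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None  # the concepts cannot be related at all

# Relating customer data to staff data takes three hops in this landscape.
print(steps_between(landscape, "customer_master", "staff_directory"))
```

The more fragmented the landscape, the larger these hop counts become, and each extra hop is another interface that a business change or product implementation must touch.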
Consider the general ledger discussed earlier. In its first implementation in the twentieth century, each key concept associated with the chart of accounts would have been managed in a master list. By the time we implement the same functionality today, there are literally hundreds if not thousands of points where various parts of the chart of accounts are required to index interfaces to subsidiary systems across the enterprise.
One approach to realising these benefits is to have dedicated simplification projects. Unfortunately these are the first projects that get cut if short-term savings are needed.
Alternatively, imagine if every project that adds complexity (a little like adding pollution) needed to offset that complexity with equal and opposite “simplicity credits”. Having quantified complexity, architects are well placed to define whether each new project simplifies the enterprise or adds complexity.
Some projects simply have no choice but to add complexity. For example, a new marketing campaign system might have to add customer attributes. However, if they increase the complexity they should buy simplicity “offsets” a little like carbon credits.
The implementation of a new general ledger might provide a great opportunity to reduce complexity by bringing various interfaces together, or it could add to it by increasing the sophistication of the chart of accounts.
In some cases, a project may start off simplifying the enterprise by using enterprise workflow or leveraging a third-party cloud solution, only to be forced, in the heat of implementation, to make compromises that turn it into a net complexity “polluter”.
The CIO has a role to act as the steward of the enterprise and measure this complexity. Project managers should not be allowed to forget their responsibility to leave the organisation cleaner and leaner at the conclusion of their project. They should include the cost of this in their project budget and purchase offsetting credits from others if they cannot deliver within the original scope due to complicating factors.
Those that are most impacted by complexity can pick their priority areas for funding. Early wins will likely reduce support costs and errors in customer service. Far from languishing in the backblocks of the portfolio, project managers will be queueing up to rid the organisation of many of these long-term annoyances to get the cheapest simplicity credits that they can find!
You’ve likely read all about them: the massive security breaches and cyber attacks hitting major corporations like Home Depot, Target, and even the New York Times. These damaging attacks have cost those companies millions of dollars, and they’re just a portion of all the security risk stories out there. As a small business owner, you may be tempted to think your company doesn’t have to worry as much about cyber attackers inflicting damage on your operations. After all, compared to a big business, your company has relatively few resources and doesn’t leave nearly as big a footprint on the market. That belief, however, could leave you and your business vulnerable. A study from the National Cyber Security Alliance shows that one out of every five businesses falls victim to a cyber attack every year, with an even larger portion targeted by hackers. Small businesses need to work to improve their network security, because it’s not a question of if a cyber attack will happen but when.
Of course, many small business owners are aware of the security risks and would like nothing more than to invest in the features that would help them repel hackers. The problem is that many of these features require money, time, and other resources, and since many small businesses can only barely make ends meet, security tends to fall by the wayside. Luckily, there are still several methods you can employ that will increase your network security and keep your small business safe at no extra cost. One of the first and foremost measures that will provide added protection for your network is using stronger passwords for all your accounts as well as your employees’ accounts. A strong password is long, usually eight characters or more. It should contain capital letters, numbers, and symbols, and should not consist of simple words or phrases, no matter how memorable they might be to you. In addition to stronger passwords, small business leaders should also keep tight control over who has administrative access, which can be made easier through tools already found on many desktops and laptops (the Local Group Policy Editor in Windows 8 is a good example).
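The password rules above can be expressed as a short check. This is a minimal sketch of the guidance in this post, not a complete password policy; the eight-character threshold mirrors the text, and real policies vary:

```python
import string

# Minimal password-strength check reflecting the rules above:
# at least eight characters, with uppercase letters, digits, and symbols.

def is_strong(password: str) -> bool:
    return (
        len(password) >= 8
        and any(c.isupper() for c in password)
        and any(c.isdigit() for c in password)
        and any(c in string.punctuation for c in password)
    )

print(is_strong("password"))      # False: no uppercase, digit, or symbol
print(is_strong("C0ffee!Break"))  # True: long, mixed case, digit, symbol
```

A check like this rejects the short, all-lowercase passwords that attackers guess first, while still leaving room for memorable passphrases that happen to mix character classes.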
Much of a small business’s network security depends on its workers. While employees may do great work, they also represent a weakness in a security system. Employees can be susceptible to spear-phishing attacks and social engineering, which may introduce malware into the network and infect other systems. The best way to combat this is to educate your employees. Teach them the common methods hackers use to gain access to a network and demonstrate the practices that prevent these attacks from succeeding. Small business owners should also make sure every employee’s mobile device is secure, since mobile technology is an increasingly popular target for cyber attacks.
All small business owners should also identify the points in their network that are the most vulnerable. While this can be done with purchased software, it’s well known that wireless routers are a favorite entry point for persistent hackers. In fact, recent research shows that around 80 percent of the 25 best-selling wireless routers for small and home offices on Amazon had notable security vulnerabilities. Hackers can easily exploit wireless routers, often to damaging effect. There are a number of ways you can make your wireless router more secure for your small business. First, you should never use the default IP ranges that come with the router, since attackers can easily predict those addresses. Second, make sure you turn on your router’s encryption while turning off WPS. And third, as mentioned earlier, make sure your router’s password is particularly strong: change it from the default you’re given to one that’s nearly impossible to crack.
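The three rules above can be framed as a quick audit checklist. The configuration keys and default subnets in this sketch are illustrative assumptions; in practice these settings live in your router’s admin interface:

```python
# Hypothetical audit of a router's settings against the guidance above.
# The configuration keys and subnet list are illustrative assumptions.

DEFAULT_SUBNETS = {"192.168.0.0/24", "192.168.1.0/24"}

def audit_router(config: dict) -> list:
    """Return a list of warnings for settings that break the guidance."""
    warnings = []
    if config.get("subnet") in DEFAULT_SUBNETS:
        warnings.append("Still using a predictable default IP range")
    if config.get("encryption") in (None, "none", "WEP"):
        warnings.append("Encryption is off or weak; enable WPA2 or better")
    if config.get("wps_enabled"):
        warnings.append("WPS is enabled; turn it off")
    if config.get("password") == config.get("default_password"):
        warnings.append("Admin password is still the factory default")
    return warnings

for warning in audit_router({
    "subnet": "192.168.1.0/24",
    "encryption": "none",
    "wps_enabled": True,
    "password": "admin",
    "default_password": "admin",
}):
    print(warning)
```

An out-of-the-box router trips all four checks; a hardened one with a non-default subnet, WPA2 encryption, WPS disabled, and a changed password returns an empty list.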
As always, there are security products available for purchase, with varying degrees of features, quality, and price. While the options are plentiful, you should take a number of factors into consideration. You need to ensure you have staff trained to fully utilize the software. You should also make sure the program can be configured quickly and easily to suit your needs. Finally, try to choose software that will gain new capabilities as your small business grows over the years. With these considerations in mind, you’ll be sure to pick a security product that’s right for your business.
Attacks happen, and unfortunately there’s no way to prevent them 100 percent of the time. The best you can do to protect your small business network is to have security features that give you a fighting chance. With improved network security, you’ll be able to grow your business with confidence and a safe outlook for the future.
Missed what’s been happening in the MIKE2.0 community? Check out our bi-weekly update:
Business Drivers for Better Metadata Management
A number of business drivers have caused metadata management to grow in importance over the past few years at most major organisations. These organisations are focused on more than just a data dictionary across their information; they are building comprehensive solutions for managing business and technical metadata.
Our wiki article on the subject explores many factors contributing to the growth of metadata and guidance to better manage it:
Feel free to check it out when you have a moment.
This Week’s Blogs for Thought:
Is BYON a Bigger Threat than BYOD?
No doubt you’ve heard of bring your own device (BYOD) already. It’s been nearly impossible to avoid the hype surrounding BYOD and all the benefits supporters say it offers. While companies may be focused on BYOD, another trend has slowly but steadily been catching on. It’s called bring your own network (BYON), and while it might sound similar to BYOD, it’s actually creating even more headaches for IT departments. While BYOD is something businesses can adopt and regulate, BYON mostly operates in the shadows. Bring your own network essentially means employees are using their own mobile devices’ 3G and 4G capabilities to create or access wireless hotspots. This is often done when workers determine that the current business network does not meet the demands of their jobs. As you can imagine, this trend is causing more than its fair share of problems, particularly when it comes to security.
What are your Big Data Application options?
Hadoop is an excellent tool for collecting and sorting massive volumes of data, but businesses must also use analytics and visualization tools on top of Hadoop in order to reap the full benefits of big data. Here’s a quick list of apps for successfully managing and leveraging the massive amounts of information generated as organizations grow. Read more.
Open Data opens eyes to Data Quality and Governance
Calls for increased transparency and accountability have led government agencies around the world to make more information available to the public as open data. As more people accessed this information, it quickly became apparent that data quality and data governance issues complicate putting open data to use.
“It’s an open secret,” Joel Gurin wrote, “that a lot of government data is incomplete, inaccurate, or almost unusable. Some agencies, for instance, have pervasive problems in the geographic data they collect: if you try to map the factories the EPA regulates, you’ll see several pop up in China, the Pacific Ocean, or the middle of Boston Harbor.” Read more.
TODAY: Wed, December 17, 2014