Open Framework, Information Management Strategy & Collaborative Governance | Data & Social Methodology - MIKE2.0 Methodology

Posts Tagged ‘Internet of Things’

by: Jonathan
26 Mar 2015

5 Challenges facing the Internet of Things

Our constant need to be connected has expanded beyond smartphones and tablets into a wider network of interconnected objects. This network of objects, often referred to as the Internet of Things (IoT), is made up of devices that can communicate with one another and are constantly connected to the internet in order to record, store and exchange data.

This idea of an “always on, always connected” device does seem a little Big Brother-ish, but there are definitely some benefits that come with it. For example, we are already seeing smarter thermostats, like Nest, that allow us to remotely control the temperature of our homes. We also have appliances and cars with internet connectivity that can learn our behavior and act on their own to provide us with greater functionality. However, while this is an accelerating trend with many objects already on the market, there are still a number of challenges facing IoT that will continue to hinder its progress and widespread adoption.

Security

It seems as if every discussion surrounding networks and the internet is always followed by a discussion on security. Given the recent publicity of damaging security breaches at major corporations, it’s hard to turn a blind eye to the dangers of more advanced cyber attacks. There’s no hiding the fact that the introduction of IoT will create a number of additional vulnerabilities that’ll need to be protected. Otherwise, these devices will simply turn into easy access points for cyber criminals. Given that IoT is a new technology, there aren’t many security options designed specifically for it. Furthermore, the diversity of device types makes uniform solutions very difficult. Until we see greater security measures and programs designed to handle IoT devices, many will remain hesitant to adopt them for personal and professional use.

Privacy

On the coattails of security comes privacy. One of the bigger debates in this age of data is who actually owns the data being created. Is it the users of these devices, the manufacturers, or those who operate the networks? Right now, there’s no clear answer. Regardless, while we are left arguing over who owns what information, these devices are tracking how we use them. Your car knows which route you take to work, and your home knows what temperature you prefer in the mornings. In addition, when you consider that almost everything requires an online profile to operate these days, there can be a tremendous amount of private information available to many different organizations. For all we know, our televisions are watching us as we watch our favorite shows, and sending that information to media companies.

Interoperability

In order to create a pure, interconnected IoT ecosystem, there needs to be a seamless experience between different devices. Currently, we haven’t yet achieved that level of interoperability. The problem is that there are so many different makes and models that it’s incredibly difficult to create an IoT system with horizontal platforms that are communicable, operable, and programmable. Right now, IoT communication is fragmented, and many devices are still not able to ‘talk’ with one another. Manufacturers will need to start playing nice with each other and create devices that work with their competitors’ products.
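In software, this kind of fragmentation is often bridged with a thin adapter layer that normalizes each vendor's protocol into one common shape. The sketch below assumes two made-up vendor payload formats; nothing here reflects a real device API:

```python
# Adapter pattern: normalize readings from devices that "speak"
# different vendor protocols into one common shape.
# Both vendor payload formats below are hypothetical.

def from_vendor_a(payload: dict) -> dict:
    # Vendor A reports temperature in Fahrenheit under "temp_f"
    return {"device_id": payload["id"], "temp_c": (payload["temp_f"] - 32) * 5 / 9}

def from_vendor_b(payload: dict) -> dict:
    # Vendor B reports tenths of a degree Celsius under "t"
    return {"device_id": payload["sn"], "temp_c": payload["t"] / 10}

ADAPTERS = {"vendor_a": from_vendor_a, "vendor_b": from_vendor_b}

def normalize(vendor: str, payload: dict) -> dict:
    """Convert any supported vendor's payload into the common shape."""
    return ADAPTERS[vendor](payload)

print(normalize("vendor_a", {"id": "a1", "temp_f": 68.0}))  # temp_c = 20.0
print(normalize("vendor_b", {"sn": "b7", "t": 215}))        # temp_c = 21.5
```

The point of the design is that supporting a new vendor means writing one more adapter function, not changing every consumer of the data.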

WAN Capacity

Existing Wide Area Networks (WANs) have been built for moderate-bandwidth requirements capable of handling current device needs. However, the rapid introduction of new devices will dramatically increase WAN traffic, which could strangle enterprise bandwidth. With the growing popularity of Bring Your Own Device policies, people will begin using IoT devices at work, forcing companies to make the necessary upgrades or suffer crawling speeds and weakened productivity.

Big Data

IoT technology will benefit and simplify many aspects of our lives, but these devices serve a dual purpose, benefiting organizations hungry for information. We live in an era of big data, where organizations are looking to collect information from as many sources as possible in the hopes of learning more about customers and markets. IoT technology will greatly expand the possibilities of data collection. However, the problem then becomes managing this avalanche of data. Storage issues aside, we’ve only just developed improved ways of handling big data analytics, but technologies and platforms will need to further evolve to handle additional demands.

Category: Web2.0

by: Ocdqblog
29 Dec 2014

The Internet of Misfit Things

One of the hallmarks of the holiday season is Christmas television specials. This season I once again watched one of my favorite specials, Rudolph the Red-Nosed Reindeer. One of my favorite scenes is the Island of Misfit Toys, which is a sanctuary for defective and unwanted toys.

This year the Internet of Things became fit for things for tracking fitness. While fitness trackers were among the hottest tech gifts this season, it remains to be seen whether these gadgets are little more than toys at this point in their evolution. Right now, as J.C. Herz blogged, they are “hipster pet rocks, devices that gather reams of largely superficial information for young people whose health isn’t in question, or at risk.”

Some describe big data as reams of largely superficial information. Although fitness and activity trackers count things such as steps taken, calories burned, and hours slept, as William Bruce Cameron remarked, in a quote often misattributed to Albert Einstein, “Not everything that can be counted counts, and not everything that counts can be counted.”

One interesting count is the use of these trackers. “More than half of US consumers,” Herz reported, “who have owned an activity tracker no longer use it. A third of them took less than six months from unboxing the device to shoving it in a drawer or fobbing it off on a relative.” By contrast, “people with chronic diseases don’t suddenly decide that they’re over it and the novelty has worn off. Tracking and measuring—the quantified self—is what keeps them out of the hospital.”

Unfortunately, even though these are the people who could most benefit from fitness and activity trackers, manufacturers, Herz lamented, “seem more interested in helping the affluent and tech-savvy sculpt their abs and run 5Ks than navigating the labyrinthine world of the FDA, HIPAA, and the other alphabet soup bureaucracies.”

In the original broadcast of Rudolph the Red-Nosed Reindeer in 1964, Rudolph failed to keep his promise to return to the Island of Misfit Toys and rescue them. After the special aired, television stations were inundated with angry letters from children demanding that the Misfit Toys be helped. Beginning the next year, the broadcast concluded with a short scene showing Rudolph leading Santa back to the island to pick up the Misfit Toys and deliver them to new homes.

For fitness and activity trackers to avoid the Internet of Misfit Things, they need to make—and keep—a promise to evolve into wearable medical devices. Not only would this help the healthcare system allay the multi-trillion dollar annual cost of chronic disease, but it would also allow manufacturers to compete in the multi-billion dollar medical devices market. This could bring better health and bigger profits home for the holidays next year.


Category: Information Development

by: Ocdqblog
22 May 2014

Things Interrupting the Internet of Things

I have written about several aspects of the Internet of Things (IoT) in previous posts on this blog (The Internet of Humans, The Quality of Things, and Is it finally the Year of IoT?). Two recent articles have examined a few of the things that are interrupting the progress of IoT.

In his InformationWeek article Internet of Things: What’s Holding Us Back, Chris Murphy reported that while “companies in a variety of industries — transportation, energy, heavy equipment, consumer goods, healthcare, hospitality, insurance — are getting measurable results by analyzing data collected from all manner of machines, equipment, devices, appliances, and other networked things” (of which his article notes many examples), there are things slowing the progress of IoT.

One of its myths, Murphy explained, “is that companies have all the data they need, but their real challenge is making sense of it. In reality, the cost of collecting some kinds of data remains too high, the quality of the data isn’t always good enough, and it remains difficult to integrate multiple data sources.”

The fact is a lot of the things we want to connect to IoT weren’t built for Internet connectivity, so it’s not as simple as just sticking a sensor on everything. Poor data quality, especially timeliness, but also completeness and accuracy, is impacting the usefulness of the data transmitted by things currently connected to sensors. Other issues Murphy explores in his article include the lack of wireless network ubiquity, data integration complexities, and securing IoT data centers and devices against Internet threats.

“The clearest IoT successes today,” Murphy concluded, “are from industrial projects that save companies money, rather than from projects that drive new revenue. But even with these industrial projects, companies shouldn’t underestimate the cultural change they need to manage as machines start telling veteran machine operators, train drivers, nurses, and mechanics what’s wrong, and what they should do, in their environment.”

In his Wired article Why Tech’s Best Minds Are Very Worried About the Internet of Things, Klint Finley reported on the findings of a survey about IoT from the Pew Research Center. While potential benefits of IoT were noted, such as medical devices monitoring health and environmental sensors detecting pollution, potential risks were highlighted, security being one of the most immediate concerns. “Beyond security concerns,” Finley explained, “there’s the threat of building a world that may be too complex for our own good. If you think error messages and application crashes are a problem now, just wait until the web is embedded in everything from your car to your sneakers. Like the VCR that forever blinks 12:00, many of the technologies built into the devices of the future may never be used properly.”

The research also noted concerns about IoT expanding the digital divide, ostracizing those who cannot, or choose not to, participate. Data privacy concerns were also raised, including the possibility of dehumanizing the workplace by monitoring employees through wearable devices (e.g., your employee badge being used to track your movements).

Some survey respondents hinted at a division similar to what we hear about the cloud, differentiating a public IoT from a private IoT. In the latter case, instead of sacrificing our privacy for the advantages of connected devices, there’s no reason our devices can’t connect to a private network instead of the public internet. Other survey respondents believe that IoT has been overhyped as the next big thing and while it will be useful in some areas (e.g., the military, hospitals, and prisons) it will not pervade mainstream culture and therefore will not invade our privacy as many fear.

What Say You about IoT?

Please share your viewpoint about IoT by leaving a comment below.


Category: Data Quality, Information Development

by: Ocdqblog
12 Mar 2014

Is it finally the Year of IoT?

In previous posts on this blog, The Internet of Humans and The Quality of Things, I have pondered aspects of the Internet of Things (IoT), which is something many analysts have promised for several years would soon be a pervasive phenomenon. There is reason to believe, however, that it is finally the year of IoT.

In a sense IoT is already with us, Christopher Mims explained in his Quartz three-part series about IoT. “For one thing, anyone with a smartphone has already joined the club. The average smartphone is brimming with sensors — an accelerometer, a compass, GPS, light, sound, altimeter. It’s the prototypical internet-connected listening station, equally adept at monitoring our health, the velocity of our car, the magnitude of earthquakes, and countless other things that its creators never envisioned. Smartphones are also becoming wireless hubs for other gadgets and sensors.”

The Invisible Buttons of our increasingly Cybered Spaces

While the challenge previously thwarting it was that connecting devices to IoT was expensive and difficult, Mims predicts that in 2014 we will see the convergence of ever-smarter smartphones and other connected things that are cheaper and easier to use. These connected things are often referred to as invisible buttons.

“An invisible button,” Mims explained, “is simply an area in space that is ‘clicked’ when a person or object, such as a smartphone, moves into that physical space. If invisible buttons were just rigidly defined on-off switches, they wouldn’t be terribly useful. But because the actions they trigger can be modified by an infinitude of other variables, such as the time of day, our previous actions, the actions of others, or what Google knows about our calendar, they quickly become a means to program our physical world.”
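Mims's description translates almost directly into code: a region of space plus contextual rules that modify what the "click" does. Here is a minimal sketch; the coordinates, radius, and actions are all invented for illustration and drawn from no real system:

```python
import math
from datetime import time

# An "invisible button": a circular region that fires an action when a
# device enters it, with the action modified by context (here, time of day).

def inside(button_pos, radius_m, device_pos):
    """True if the device is within radius_m metres of the button's centre."""
    dx = button_pos[0] - device_pos[0]
    dy = button_pos[1] - device_pos[1]
    return math.hypot(dx, dy) <= radius_m

def trigger(device_pos, now: time):
    FRONT_DOOR = (0.0, 0.0)  # button centre, in local coordinates (metres)
    if not inside(FRONT_DOOR, 5.0, device_pos):
        return None
    # The same "click" means different things depending on context.
    if now < time(9, 0):
        return "disarm alarm and start coffee"
    return "turn on porch light"

print(trigger((1.0, 2.0), time(7, 30)))   # within 5 m, morning
print(trigger((1.0, 2.0), time(19, 0)))   # within 5 m, evening
print(trigger((40.0, 0.0), time(19, 0)))  # outside the region -> None
```

A real deployment would swap the time-of-day rule for any of the variables Mims lists: previous actions, other people's actions, calendar data, and so on.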

An example was written about by Matt McFarland in his Washington Post article How iBeacons could change the world forever. “With iOS 7,” McFarland reported, “Apple unveiled iBeacon, a feature that uses Bluetooth 4.0, a location-based technology. This makes it possible for sensors to detect — within inches — how close a phone is.” McFarland’s article lists nine interesting examples of how IoT technology like iBeacon could enhance the average person’s life.

The End of the Interface as We Know It

“That we currently need a cell phone,” Mims explained, “to act as a proximity sensor is just an artifact of where the technology is at present. The same can be accomplished with any number of other internet-connected sensors.” In fact, the rise of what is known as anticipatory computing might be signaling the end of the interface as we know it. As Mims explained, IoT doesn’t just give us another way to explicitly tell computers what we want. “Instead, by sensing our actions, the internet-connected devices around us will react automatically, and their representations in the cloud will be updated accordingly. In some ways, interacting with computers in the future could be more about telling them what not to do — at least until they’re smart enough to realize that we are modifying our daily routine.”

Every Internet needs its HTML

A critical piece of the IoT puzzle, however, remains to be solved, Mims concluded. “What engineers lack is a universal glue to bind all of the ‘things’ in the internet of things to each other and to the cloud. A lack of standards means most devices on the internet of things are going to use existing methods of connection — Wi-Fi, Bluetooth and the like — to connect to one another. But eventually, something like HTML, the language of the web, will be required to make the internet of things realize its potential.”
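To make the "universal glue" idea concrete, imagine a declarative device description that any client could parse the way a browser parses HTML. The format below is purely hypothetical, invented for illustration; no real standard is implied:

```python
import json

# A hypothetical, HTML-like declarative description of a device: what it
# is, what it exposes, and how to reach it. A client that understands the
# format can interrogate any device without vendor-specific code.

descriptor = json.loads("""
{
  "device": "thermostat",
  "properties": [
    {"name": "temperature", "unit": "celsius", "readable": true, "writable": false},
    {"name": "setpoint",    "unit": "celsius", "readable": true, "writable": true}
  ],
  "transport": {"protocol": "wifi", "endpoint": "192.0.2.10"}
}
""")

# Generic client code: discover what the device lets us change.
writable = [p["name"] for p in descriptor["properties"] if p["writable"]]
print(descriptor["device"], "writable properties:", writable)
```

The value of such a format, like HTML's, would lie less in the syntax than in universal adoption: any client could discover any device's capabilities without prior knowledge of it.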

As Robert Hillard blogged, the race to IoT is a marathon. While 2014 does appear to be the Year of IoT, it is still early in the marathon and IoT has many more miles to run. It is going to be a fun run, no doubt about it, but IoT will need open standards, and the interoperability they provide, in order to get us across the finish line.

Category: Information Development

by: Robert.hillard
22 Feb 2014

The race to the internet of things is a marathon

The PC era is arguably over and the age of ubiquitous computing might finally be here.  Its first incarnation has been mobility through smartphones and tablets.  Many pundits, though, are looking to wearable devices and the so-called “internet of things” as the underlying trends of the coming decade.

It is tempting to talk about the internet of things as simply another wave of computing like the mainframe, mid-range and personal computer.  However, there are as many differences as there are similarities.

While the internet of things is arguably just another approach to solving business problems using computing devices, the units of measure are not so directly tied to processing power, memory or storage.  The mass of interconnections means that network latency, switching and infrastructure integration play just as much a part, for the first time divorcing a computing trend from “Moore’s Law” and the separate but related downward trend of storage costs.

Does it matter?

The internet of things brings the power of the information economy to every part of society, providing an opportunity to repeat many of the gains that business and government experienced through the last decade of the twentieth century.  Back then there was a wave of new applications that allowed for the centralisation of administration, streamlining of customer service and outsourcing of non-core activities.

Despite all of the apparent gains, however, most infrastructure has remained untouched.  We still drive on roads, rely on electricity and utilise hospitals that are based on technology that really hasn’t benefited from the same leap in productivity that has supercharged the back office.

The 40 year problem

It’s not hard to build the case to drive our infrastructure through the network.  Whether it is mining, transport, health or energy, it is clear that there are mass benefits and efficiencies that are there for the taking.  Even simple coordination at a device level will run our communities and companies better for everyone’s benefit.  Add in the power of analytics and we can imagine a world that is dancing in synchronisation.

However, just as this new wave of computing doesn’t adhere to Moore’s Law, it also lacks the benefit of rapid consumer turnover.  We’ve accepted the need to be constantly replacing our PCs and smartphones, driving in turn the growth of an industry.  But it’s one thing to upgrade one or two devices per person every few years, it is quite another to replace the infrastructure that surrounds us.

Much of what we work with day-to-day in roads, rail, energy and all of the other infrastructure around us has a lifespan that is measured in decades or even half centuries.  Even with a compelling business case, it is not likely to be enough to replace embedded equipment that still has decades of economic value ahead of it.

If we don’t want to wait until our grandchildren are old to benefit from the internet of things, we have to find a way to implement it in a staggered way.

Making a staggered implementation work

Anyone who has renovated their house will have considered whether to build in new technology.  For most, network cabling is a basic, but centralised control of lighting and other devices is a harder ask.  Almost anything that is embedded in the walls runs the risk of being obsolete even before the builder hands over the keys.

The solution is independent and interoperable devices.  Rather than building-in security cameras, consumers are embracing small portable network cameras that they can access through cloud-based services.  The same providers are increasingly offering low-cost online light switches that don’t require anything more than an electrician to replace the faceplate.  This new generation of devices can be purchased as needed rather than all at once and raise little risk of uneconomic obsolescence.

Google’s purchase of Nest appears to be an investment in this incremental, cloud-based future of interconnected home-based devices.  By putting it in their stable of technologies, it is a fair bet that they see the value being in offering analytics to optimise and better maintain a whole range of services.

Just as companies focused on the consumer at home are discovering an approach that works, so too are those vendors that think about bigger infrastructure including energy, mining and major transport assets.

Future proofing

The car industry is going to go through a revolution over the next decade.  After years of promise, new fuel technologies are bearing fruit.  It’s going to be both an exciting and uncertain ride for all involved.  Hybrid, electric pluggable, electric swappable and hydrogen all vie for roles in the future.

Just as important as fuel technology is the ability to make cars autonomous.  So-called “range anxiety”, the fear of running out of power in an electric car, is eliminated if the car’s computer makes the driving decisions, including taking responsibility for ensuring an adequate charge or access to charging stations.

Arguably the most exciting developments are in the car software rather than the hardware.  As with smartphones, manufacturers are treating their vehicles as platforms for future apps.  New cars include technology that goes beyond the capabilities of current software, such as that needed to run an autonomous vehicle.

Early beneficiaries are the insurance industry (telematics), car servicing providers (who can more actively target customers) and a whole range of innovators who are able to invent solutions that as drivers we never even knew we needed.

Perhaps a tie-up between innovators like Tesla and Apple will show what connected cars and infrastructure could do!

Winning the marathon

Unlike previous waves of computing, the internet of things is going to take many years to reach its full potential.  During this transition business opportunities that are unimaginable today are going to emerge.  Many of these lie in the data flowing across the internet of things that is genuinely big (see It’s time for a new definition of big data).

These new solutions can be thought of as permutations of cloud services (see Cloud computing should be about new business models).  Those wanting to try to imagine the world of possibilities should remember that the last twenty years have taught us that the consumer will lead the way and that incumbents are seldom the major innovators.

However, incumbent energy and infrastructure companies that do want to take the lead could do well by partnering with consumer technology companies.  The solutions they come up with are likely to include completely different relationships between a range of providers, services and customers.

Category: Enterprise Content Management, Enterprise2.0, Information Strategy, Information Value, Web2.0

by: Ocdqblog
30 Dec 2013

Scanning the 2014 Technology Horizon

In 1928, the physicist Paul Dirac, while attempting to describe the electron in quantum mechanical terms, posited the theoretical existence of the positron, a particle with all the electron’s properties but of opposite charge.  In 1932, the experiments of physicist Carl David Anderson confirmed the positron’s existence, a discovery for which he was awarded the 1936 Nobel Prize in Physics.

“If you had asked Dirac or Anderson what the possible applications of their studies were,” Stuart Firestein wrote in his 2012 book Ignorance: How It Drives Science, “they would surely have said their research was aimed simply at understanding the fundamental nature of matter and energy in the universe and that applications were unlikely.”

Nonetheless, 40 years later a practical application of the positron became a part of one of the most important diagnostic and research instruments in modern medicine when, in the late 1970s, biophysicists and engineers developed the first positron emission tomography (PET) scanner.

“Of course, a great deal of additional research went into this as well,” Firestein explained, “but only part of it was directed specifically at making this machine.  Methods of tomography, an imaging technique, some new chemistry to prepare solutions that would produce positrons, and advances in computer technology and programming—all of these led in the most indirect and fundamentally unpredictable ways to the PET scanner at your local hospital.  The point is that this purpose could never have been imagined even by as clever a fellow as Paul Dirac.”

This story came to mind since it’s that time of year when we try to predict what will happen next year.

“We make prediction more difficult because our immediate tendency is to imagine the new thing doing an old job better,” explained Kevin Kelly in his 2010 book What Technology Wants.  Which is why the first cars were called horseless carriages and the first cellphones were called wireless telephones.  But as cars advanced we imagined more than transportation without horses, and as cellphones advanced we imagined more than making phone calls without wires.  The latest generation of cellphones are now called smartphones and cellphone technology has become a part of a mobile computing platform.

IDC predicts 2014 will accelerate the IT transition to the emerging platform (what they call the 3rd Platform) for growth and innovation built on the technology pillars of mobile computing, cloud services, big data analytics, and social networking.  IDC predicts the 3rd Platform will continue to expand beyond smartphones, tablets, and PCs to the Internet of Things.

Among its 2014 predictions, Gartner included the Internet of Everything, explaining how the Internet is expanding beyond PCs and mobile devices into enterprise assets such as field equipment, and consumer items such as cars and televisions.  According to Gartner, the combination of data streams and services created by digitizing everything creates four basic usage models (Manage, Monetize, Operate, Extend) that can be applied to any of the four internets (People, Things, Information, Places).

These and other predictions for the new year point toward a convergence of emerging technologies, their continued disruption of longstanding business models, and the new business opportunities that they will create.  While this is undoubtedly true, it’s also true that, much like the indirect and unpredictable paths that led to the PET scanner, emerging technologies will follow indirect and unpredictable paths to applications as far beyond our current imagination as a practical application of a positron was beyond the imagination of Dirac and Anderson.

“The predictability of most new things is very low,” Kelly cautioned.  “William Sturgeon, the discoverer of electromagnetism, did not predict electric motors.  Philo Farnsworth did not imagine the television culture that would burst forth from his cathode-ray tube.  Advertisers at the beginning of the last century pitched the telephone as if it was simply a more convenient telegraph.”

“Technologies shift as they thrive,” Kelly concluded.  “They are remade as they are used.  They unleash second- and third-order consequences as they disseminate.  And almost always, they bring completely unpredicted effects as they near ubiquity.”

It’s easy to predict that mobile, cloud, social, and big data analytical technologies will near ubiquity in 2014.  However, the effects of their ubiquity may be fundamentally unpredictable.  One unpredicted effect that we all became painfully aware of in 2013 was the surveillance culture that burst forth from our self-surrendered privacy, which now hangs the Data of Damocles wirelessly over our heads.

Of course, not all of the unpredictable effects will be negative.  Much like the positive charge of the positron powering the positive effect that the PET scanner has had on healthcare, we should charge positively into the new year.  Here’s to hoping that 2014 is a happy and healthy new year for us all.

Category: Information Development

by: Ocdqblog
25 Oct 2013

Automation and the Danger of Lost Knowledge

In my previous post I pondered the quality of machine-generated data, cautioning that even though it overcomes some of the inherent errors of human-generated data, it’s not immune to data quality issues.  Nicholas Carr, in his recent article All Can Be Lost: The Risk of Putting Our Knowledge in the Hands of Machines, pondered the quality of our increasingly frequent decision to put so many things into the hands of our automatons.

While autopilot, for example, has improved flight safety over the years, many aviation and automation experts have concluded that the overuse of automation erodes pilots’ expertise and dulls their reflexes.  “Automation has become so sophisticated,” Carr wrote, “what pilots spend a lot of time doing is monitoring screens and keying in data.  They’ve become, it’s not much of an exaggeration to say, computer operators.”  As one expert cited by Carr put it: “We’re forgetting how to fly.”

Carr’s article also examined the impact of automation on medical diagnosis, surgery, stock trading, accounting, navigation, and driving, as well as the fundamental notion underlying those and arguably most, if not all, human activities—the dissemination and retention of knowledge.

“Automation, for all its benefits,” Carr cautioned, “can take a toll on the performance and talents of those who rely on it.  It alters how we act, how we learn, and what we know.  The choices we make, or fail to make, about which tasks we hand off to machines shape our lives and the place we make for ourselves in the world.  That has always been true, but in recent years, as the locus of labor-saving technology has shifted from machinery to software, automation has become ever more pervasive, even as its workings have become more hidden from us.  Seeking convenience, speed, and efficiency, we rush to off-load work to computers without reflecting on what we might be sacrificing as a result.”

The Knowledge You No Longer Need to Have

In his book Where Good Ideas Come From: The Natural History of Innovation, Steven Johnson wrote about the benefits of stacked platforms—layers upon layers that we can build upon without needing to know how any of those layers work.  Although he explained that this paradigm existed long before computers and is the basis of many other fields of human endeavor, information technology provides the easiest contemporary examples.  One is Twitter, which leverages the Web, itself a multi-layered platform, as well as the SMS mobile communications platform, and the GPS satellite system, among others.  Twitter could be quickly developed and deployed, in large part, because it could stand on the shoulders of existing (and mostly open source) information technology giants.

“In a funny way,” Johnson noted, “the real benefit of stacked platforms lies in the knowledge you no longer need to have.  You don’t need to know how to send signals to satellites or parse geo-data to send that tweet circulating the Web’s ecosystem.”

While I agree with Johnson, I also find it more than a little disconcerting.  The danger of lost knowledge enabled by the Web’s ecosystem, and other information technology advancements, is it leaves us at the mercy of the people, corporations, and machines that retain the power of that knowledge.

Is Automation the Undoing of Knowledge?

“Knowing demands doing,” wrote Carr.  “One of the most remarkable things about us is also one of the easiest to overlook: each time we collide with the real, we deepen our understanding of the world and become more fully a part of it.  While we’re wrestling with a difficult task, we may be motivated by an anticipation of the ends of our labor, but it’s the work itself—the means—that makes us who we are.”

“Computer automation severs the ends from the means,” concluded Carr.  “It makes getting what we want easier, but it distances us from the work of knowing.  As we transform ourselves into creatures of the screen, we face an existential question: Does our essence still lie in what we know, or are we now content to be defined by what we want?  If we don’t grapple with that question ourselves, our gadgets will be happy to answer it for us.”

While machine-generated data is not the same thing as machine-generated knowledge, in the same way that automation is not the same thing as artificial intelligence, if we consider how much of our knowledge, and our responsibility for doing something with it, we’re turning over to machines through automation, it might give us pause to hit the pause button—if not the stop button—in some cases.

Category: Data Quality, Information Development

by: Ocdqblog
11  Oct  2013

The Quality of Things

I have previously blogged about how more data might soon be generated by things than by humans, using the phrase the Internet of Humans to not only differentiate human-generated data from the machine-generated data from the Internet of Things, but also as a reminder that humans are a vital source of knowledge that no amount of data from any source could ever replace.

The Internet of Things is the source of the category of big data known as sensor data, often the first new data type you encounter when defining big data and a common reason to start getting to know NoSQL. The term also distinguishes machine-generated data from the data generated directly by humans typing, taking pictures, recording videos, scanning bar codes, and so on.

One advantage of machine-generated data that’s often alleged is its inherently higher data quality due to humans being removed from the equation.  In this blog post, let’s briefly explore that concept.

Is it hot in here or is it just me?

Let’s use the example of accurately recording the temperature in a room, something that a thing, namely a thermostat, has been helping us do long before the Internet of Anything ever existed.

(To keep this discussion simple, let's ignore the fact that the only place where the room temperature matches the thermostat is at the thermostat, as well as the way a network of sensors placed throughout the room can compensate for that.)

A thermostat only displays the current temperature, so if I want to keep a log of temperature changes over time, I could do this by writing my manual readings down in a notebook.  Here is where the fallible human, in this case me, enters the equation and could cause a data quality issue.

For example, the temperature in the room is 71 degrees Fahrenheit, but I incorrectly write it down as 77 degrees (or perhaps my sloppy handwriting later causes even me to misread the 1 as a 7).  Let's also say that I want to log the room temperature every hour.  This requires me to physically be present to read the thermostat every hour on the hour.  Obviously, I might not be quite that punctual, and I could easily forget to take a reading (or several), leaving gaps in the log.
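These two human failure modes, a transcription error and a missed hourly reading, are exactly the kind of thing a simple after-the-fact audit can catch. Here is a minimal Python sketch; the log entries, the one-hour interval, and the 3-degree jump threshold are all illustrative assumptions, not anything from the post:

```python
from datetime import datetime, timedelta

# Hypothetical hand-entered log: (timestamp, degrees Fahrenheit).
# The 14:00 entry was mistyped as 77, and the 16:00 reading was skipped.
manual_log = [
    (datetime(2013, 10, 11, 13, 0), 71),
    (datetime(2013, 10, 11, 14, 0), 77),
    (datetime(2013, 10, 11, 15, 0), 71),
    (datetime(2013, 10, 11, 17, 0), 72),
]

def audit_log(log, expected_interval=timedelta(hours=1), max_jump=3):
    """Flag suspicious temperature jumps and missed hourly readings."""
    issues = []
    for (t0, v0), (t1, v1) in zip(log, log[1:]):
        if abs(v1 - v0) > max_jump:
            issues.append(f"suspicious jump at {t1:%H:%M}: {v0} -> {v1}")
        if t1 - t0 > expected_interval:
            issues.append(f"missed reading(s) between {t0:%H:%M} and {t1:%H:%M}")
    return issues

for issue in audit_log(manual_log):
    print(issue)
```

An audit like this can only flag anomalies, not recover the true reading; the mistyped 77 looks just as suspicious on the way in as the correct 71 does on the way back out.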

The Quality of Things

Alternatively, using an innovative new thermostat that wirelessly transmits the current temperature, at precisely defined time intervals, to an Internet-based application certainly eliminates the possibility of the two human errors in my example.

However, there are other ways a data quality issue could occur.  For example, the thermostat may not be displaying the temperature accurately because it is not calibrated correctly.  A power loss, mechanical failure, dropped Internet connection, or a crash of the Internet-based application could also prevent temperature readings from being recorded.
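Both of those machine-side failure modes can at least be detected and partially corrected downstream. A hedged Python sketch, where the feed values, the 1.5-degree calibration offset, and the ten-minute transmission interval are invented for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical sensor feed: the thermostat reads 1.5 °F high (miscalibration),
# and a connectivity loss dropped the 14:10-14:30 transmissions.
raw_feed = [
    (datetime(2013, 10, 11, 14, 0), 72.5),
    (datetime(2013, 10, 11, 14, 40), 72.7),
    (datetime(2013, 10, 11, 14, 50), 72.6),
]

# Offset determined by comparing the sensor to a trusted reference thermometer.
CALIBRATION_OFFSET = -1.5

def clean_feed(feed, interval=timedelta(minutes=10)):
    """Apply the calibration correction and report transmission gaps."""
    corrected = [(t, round(v + CALIBRATION_OFFSET, 1)) for t, v in feed]
    gaps = [(t0, t1) for (t0, _), (t1, _) in zip(feed, feed[1:])
            if t1 - t0 > interval]
    return corrected, gaps

corrected, gaps = clean_feed(raw_feed)
```

Note that the correction is only as good as the calibration check behind it: the machine removed the human from data entry, but a human still has to notice the sensor is wrong.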

The point I am trying to make is that although the quality of things entered by things certainly has some advantages over the quality of things entered by humans, nothing could be further from the truth than the claim that machine-generated data is immune to data quality issues.

Category: Data Quality, Information Development

by: Ocdqblog
22  Apr  2013

The Internet of Humans

The Internet of Things became a more frequently heard phrase over the last decade as more things embedded with radio-frequency identification (RFID) tags, or similar technology, allowed objects to be uniquely identified, inventoried, and tracked by computers.  Early adopters focused on inventory control and supply chain management, but the growing fields of application include smart meters, smart appliances, and, of course, smart phones.

The concept is referred to as the Internet of Things to differentiate its machine-generated data from data generated directly by humans typing, taking pictures, recording videos, scanning bar codes, etc.

The Internet of Things is the source of the category of big data known as sensor data, often the first new data type you encounter when defining big data, and a common reason to start getting to know NoSQL.

In his book Too Big to Know, David Weinberger discussed another growing category of data facilitated by the Internet, namely the crowd-sourced knowledge of amateurs providing scientists with data to aid in their research.  “Science has a long tradition of embracing amateurs,” Weinberger explained.  “After all, truth is truth, no matter who utters it.”

The era of big data could be called the era of big utterance, and the Internet is the ultimate platform for crowd-sourcing the knowledge of amateurs.  Weinberger provided several examples, including a patient-community website that, as he explained, "not only enables patients to share details about their treatments and responses but gathers that data, anonymizes it, and provides it to researchers, including to pharmaceutical companies.  The patients are providing highly pertinent information based on an expertise in a disease in which they have special credentials they have earned against their will."  The intriguing term Weinberger used to describe the source of this crowd-sourced amateur knowledge was human sensors.

In our increasingly data-constructed world, where more data might soon be constructed by things than by humans, I couldn’t help but wonder whether the phrase the Internet of Humans needs to be frequently heard in the coming decades to not only differentiate machine-generated data from human-generated data, but, more importantly, to remind us that humans (amateurs and professionals alike) are a vital source of knowledge that no amount of data from any source could ever replace.

Category: Information Development
