Open Framework, Information Management Strategy & Collaborative Governance | Data & Social Methodology - MIKE2.0 Methodology
Archive for the ‘Information Development’ Category

by: Phil Simon
01  Nov  2013

Title: Hadoop 2.0: Yes, It’s a Big Deal

In Too Big to Ignore, I wrote about the increasing importance of technologies and systems designed to handle non-relational data. Yes, the structured information on employees, sales, customers, inventory, and the like still matters. But the story doesn’t end with Small Data. There’s a great deal of value to be gleaned from the petabytes of unstructured data lying outside of organizations’ walls. Hadoop is just one tool that can help realize that value.

But no one ever said that Hadoop was perfect or even ideal. The first major iteration of any important technology or application never is.

To that end, data geeks like me could hardly contain their excitement at the announcement that Hadoop 2.0 is now generally available.

The biggest change in Apache Hadoop 2.2.0, the first generally available release of the 2.x series, is the replacement of the original MapReduce framework with Apache YARN, also known as MapReduce 2.0. MapReduce is a central feature of Hadoop—the batch processor that lines up jobs that go into the Hadoop Distributed File System (HDFS) to pull out useful information. In the previous version of MapReduce, jobs could only be run one at a time, in batches, because that’s how the Java-based MapReduce tool worked.

With the update, MapReduce 2.0 enables multiple processing tools to hit the data within the HDFS storage system at the same time.
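The MapReduce programming model itself is easy to sketch outside Hadoop. The toy Python below is an illustration only—Hadoop’s real API is Java and distributes these phases across a cluster—but it shows the two phases a classic word-count job runs:

```python
from collections import defaultdict

def map_phase(documents):
    # Mapper: emit a (word, 1) pair for every word, as a Hadoop mapper would
    for doc in documents:
        for word in doc.split():
            yield (word.lower(), 1)

def reduce_phase(pairs):
    # Reducer: group the pairs by key and sum the counts
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["Hadoop stores big data", "big data needs Hadoop"]
print(reduce_phase(map_phase(docs)))
# {'hadoop': 2, 'stores': 1, 'big': 2, 'data': 2, 'needs': 1}
```

In Hadoop 1.x the engine that scheduled these phases was also the cluster’s only resource manager; YARN’s contribution is to separate resource management from the processing model, so MapReduce becomes just one application among many.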

Hadoop and Platforms

I asked my friend Scott Kahler about Hadoop 2.0 and he was nothing short of effusive. “Yes, it’s a huge deal. YARN will make Hadoop a distributed app platform and not just a Big Data processing engine,” Kahler told me. “YARN is enabling things like graph databases (Giraph) and event-processing engines (Storm) to get instantiated much more easily on common distributed system infrastructure.”

I know a thing or two about platforms, and Hadoop 2.0 underscores the fact that it is becoming a de facto ecosystem for Big Data developers across the globe. Got an idea for a new app or web service? Build it on top of Hadoop. Take the core product in a different direction. If others find that app or web service useful, expect further development on top of your work.

Simon Says: We’re Just Getting Started

Hadoop naysayers abound. For all I know, Hadoop isn’t the single best way of handling Big Data. Still, it’s hard to argue that the increased functionality of its second major iteration isn’t a big deal. As Hadoop continues to evolve and improve, its benefits increasingly exceed its costs.

Yes, many if not most organizations will still resist Big Data for all sorts of reasons. An increasingly developer-friendly Hadoop, though, means great things for enterprises willing to jump into the Big Data fray.


What say you?

Category: Information Development, Information Management

by: Robert.hillard
26  Oct  2013

Universities disrupted but not displaced

Business and government are finally treating Digital Disruption as seriously as the topic deserves.  I have written previously about the Deloitte Digital Disruption report and highlighted the fact that in Australia two thirds of the economy will experience a substantial impact in the short to medium term.  Almost every economy around the world faces similar challenges.

Few readers of the report would be surprised to see that both retail and media are heavily impacted.  A walk past any vacated bookshop or a glance at the rapidly emptying displays in the newsagent’s magazine stand are among the many obvious signs that these industries have been seriously disrupted.

There is a danger that other industries take the examples of retail and media too literally and assume their own experience will be the same.  To illustrate why drawing parallels between industries is complex, I’ll use the example of education where many leaders are feeling uncomfortable, but their experience of digital disruption is likely to be very different.

Tertiary education is a sector where lecturers and administrators are watching the rise of MOOCs (Massive Open Online Courses) and wondering whether in the future their students will bother to even enrol in their institutions if they can access the world’s best teachers with a few mouse clicks.

One thing that traditional booksellers and newspapers have in common is that their interactions with their customers are transactional.  The publisher sets a price, and the customer who wants the product pays that price.  In the past, regulation protected their markets and continues to provide some level of shelter in many countries today.

Education is different with many actors involved in every transaction ranging from students, parents and employers through to teachers, administrators and regulators.  The regulators are complex and range from government to industry bodies.  The buyers pay in a variety of forms including subsidies, loans, entry scores and their own preferences (directing government funding).  The seller earns revenue in complex ways that go well beyond the individual transaction.

There is no doubt that the learning experience is changing, with teachers increasingly delivering lectures through a variety of media and students leveraging a wider range of tools.  However, the real question is whether a small number of superstar lecturers will dominate through MOOCs and, if they do, whether today’s university will have any part to play in tomorrow’s student experience.

Assuming that tomorrow’s MOOC replaces today’s teacher makes the mistake of likening a university to a bookshop and the lectures to books.  A university is in fact a network with all of the actors I described connected together through complex business rules.  Economists might describe this as a “two-sided market”.

A two-sided market is a sophisticated relationship between buyer and seller.  It is also called a two-sided network which probably more accurately defines the relationship.  Their nature is to have a group of customers who have a many-to-many relationship with a group of providers (who may themselves also be customers).  In the middle is the facilitating organisation, in this case a university, providing a “multi-sided platform”.

Other examples of two-sided markets, or organizations providing multi-sided platforms, are the major credit card companies, social networks and the better online retailers.  In each case, these platforms provide a network benefit that amplifies as the number of participants increases.
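One common way economists model this amplifying network benefit is to let the value each side receives grow with the size of the other side, so total platform value grows roughly with the product of the two groups.  The sketch below is a toy model, not from the report, with a hypothetical function name and constants:

```python
def platform_value(buyers, sellers, k=1.0):
    # Cross-side network effect: each buyer benefits from every seller
    # (and vice versa), so value scales with the product of the two sides.
    return k * buyers * sellers

# Doubling BOTH sides quadruples the value -- the scale advantage that
# drives the rapid consolidation of multi-sided platforms.
small = platform_value(100, 100)
big = platform_value(200, 200)
print(big / small)  # 4.0
```

This super-linear growth is why the winners in a multi-sided market pull away so quickly once they gain scale.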

The very nature of the advantage of scale for multi-sided platforms means that they tend to consolidate quickly.  While there are periods of innovation when lots of new entrants join the market, the winners quickly emerge and are either acquired or begin to do the acquiring.  This is a dangerous period for incumbents as they need to make the right bets to ensure their scale puts them in a position to be doing the acquiring rather than risk losing market share and potentially being acquired or eliminated.

Understanding the nature of the multi-sided platform that educational institutions provide removes the fear that MOOCs on their own will become the university of tomorrow.  However, innovation means that there will be change and the smart provider will learn from the best of other platforms that are thriving.

Perhaps the next generation of students will experience a mixture of electronic and in-person teaching, industry introductions via Amazon-style recommendations, and funding based on incremental performance that is tracked digitally.  If today’s universities pick up on these trends quickly, they will remain in the box seat to dominate educational services in the decades ahead.

Category: Information Development

by: Ocdqblog
25  Oct  2013

Automation and the Danger of Lost Knowledge

In my previous post I pondered the quality of machine-generated data, cautioning that even though it overcomes some of the inherent errors of human-generated data, it’s not immune to data quality issues.  Nicholas Carr, in his recent article All Can Be Lost: The Risk of Putting Our Knowledge in the Hands of Machines, pondered the quality of our increasingly frequent decision to put so many things into the hands of our automatons.

While autopilot, for example, has improved flight safety over the years, many aviation and automation experts have concluded that the overuse of automation erodes pilots’ expertise and dulls their reflexes.  “Automation has become so sophisticated,” Carr wrote, “that what pilots spend a lot of time doing is monitoring screens and keying in data.  They’ve become, it’s not much of an exaggeration to say, computer operators.”  As one expert cited by Carr put it: “We’re forgetting how to fly.”

Carr’s article also examined the impact of automation on medical diagnosis, surgery, stock trading, accounting, navigation, and driving, as well as the fundamental notion underlying those and arguably most, if not all, human activities—the dissemination and retention of knowledge.

“Automation, for all its benefits,” Carr cautioned, “can take a toll on the performance and talents of those who rely on it.  It alters how we act, how we learn, and what we know.  The choices we make, or fail to make, about which tasks we hand off to machines shape our lives and the place we make for ourselves in the world.  That has always been true, but in recent years, as the locus of labor-saving technology has shifted from machinery to software, automation has become ever more pervasive, even as its workings have become more hidden from us.  Seeking convenience, speed, and efficiency, we rush to off-load work to computers without reflecting on what we might be sacrificing as a result.”

The Knowledge You No Longer Need to Have

In his book Where Good Ideas Come From: The Natural History of Innovation, Steven Johnson wrote about the benefits of stacked platforms—layers upon layers that we can build upon without needing to know how any of those layers work.  Although he explained that this paradigm existed long before computers and is the basis of many other fields of human endeavor, information technology provides the easiest contemporary examples.  One is Twitter, which leverages the Web, itself a multi-layered platform, as well as the SMS mobile communications platform, and the GPS satellite system, among others.  Twitter could be quickly developed and deployed, in large part, because it could stand on the shoulders of existing (and mostly open source) information technology giants.

“In a funny way,” Johnson noted, “the real benefit of stacked platforms lies in the knowledge you no longer need to have.  You don’t need to know how to send signals to satellites or parse geo-data to send that tweet circulating the Web’s ecosystem.”

While I agree with Johnson, I also find it more than a little disconcerting.  The danger of lost knowledge enabled by the Web’s ecosystem, and other information technology advancements, is it leaves us at the mercy of the people, corporations, and machines that retain the power of that knowledge.

Is Automation the Undoing of Knowledge?

“Knowing demands doing,” wrote Carr.  “One of the most remarkable things about us is also one of the easiest to overlook: each time we collide with the real, we deepen our understanding of the world and become more fully a part of it.  While we’re wrestling with a difficult task, we may be motivated by an anticipation of the ends of our labor, but it’s the work itself—the means—that makes us who we are.”

“Computer automation severs the ends from the means,” concluded Carr.  “It makes getting what we want easier, but it distances us from the work of knowing.  As we transform ourselves into creatures of the screen, we face an existential question: Does our essence still lie in what we know, or are we now content to be defined by what we want?  If we don’t grapple with that question ourselves, our gadgets will be happy to answer it for us.”

While machine-generated data is not the same thing as machine-generated knowledge, in the same way that automation is not the same thing as artificial intelligence, if we consider how much of our knowledge, and our responsibility for doing something with it, we’re turning over to machines through automation, it might give us pause to hit the pause button—if not the stop button—in some cases.

Category: Data Quality, Information Development

by: John McClure
24  Oct  2013

Ontology Practices Revealed

Paul Warren, a UK graduate student, recently published an interesting survey about the practices of the professionals who build semantic ontologies, which are fundamental to driving the Internet’s semantic web; enterprise and commercial semantic intranets; and even personal triplestores.

The survey is based on a sample of 118 respondents, all leading engineers in the semantic community, roughly 60% academic and research oriented, the remainder from enterprises and entrepreneurs. To me, now an entrepreneur myself, these opinions accurately reflect the current state of the art, as over 60% had been developing ontologies for 5 or more years.

The most popular application ontologies reported in use were Dublin Core and Friend of a Friend (FOAF). Dublin Core is the international library standard which evolved from text-based “index cards” into a Linked Open Data-capable protocol. In fact, Dublin Core figures prominently in provenance metadata exchange standards (such as the W3C’s PROV ontology), so it’s a good thing semantic warehouses are adopting at least this basic level.
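To make that concrete, here is a minimal sketch of how Dublin Core and FOAF terms annotate resources as subject–predicate–object triples. The namespace IRIs are the published ones; the example resources and the tiny serializer are hypothetical (real projects would use a library such as rdflib):

```python
# Published namespace IRIs for Dublin Core terms and FOAF
DC = "http://purl.org/dc/terms/"
FOAF = "http://xmlns.com/foaf/0.1/"

doc = "http://example.org/report/1"      # hypothetical resource IRI
author = "http://example.org/person/pw"  # hypothetical resource IRI

triples = [
    (doc, DC + "title", '"Ontology Users\' Survey"'),
    (doc, DC + "creator", author),
    (author, FOAF + "name", '"Paul Warren"'),
]

def to_ntriples(triples):
    # Serialize in N-Triples style: IRIs in angle brackets, literals quoted
    lines = []
    for s, p, o in triples:
        obj = o if o.startswith('"') else f"<{o}>"
        lines.append(f"<{s}> <{p}> {obj} .")
    return "\n".join(lines)

print(to_ntriples(triples))
```

The point is that nothing ontology-specific is needed to use these vocabularies: any tool that understands triples can mix Dublin Core, FOAF, and domain terms in one graph.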

Surprisingly, though, the survey reported only a small interest in Linked Open Data ontologies such as the Data Cube ontology discussed last month. But that should change. Since a Data Cube is essentially an n-dimensional, souped-up, non-orthogonal HTML table with numeric or ‘coded’ cells, semantic ingestion tools will inevitably acquaint users with this key open-source ontology, much as the most popular ontology editor, Protege, introduces Dublin Core to its users.

The most popular “ontology language” — that is, the language used to describe one’s ontology — cited by 90% of respondents was one of the several versions of the W3C’s Web Ontology Language (OWL). It’s good that ontology validation tools — which test an ontology for conformance to any of these dialects — are integrated with popular ontology editors.

The objectives for use of an ontology were reported to be (1) conceptual modelling (2) data integration (3) knowledgebase schema (4) knowledge sharing (5) linked data integration (6) ontology-based search and (7) accessing heterogenous data. These objectives were all equally cited as motivations for building or locating an existing ontology for re-use.

The number of classes and properties within the ontologies used by these professionals may reveal something about the state of the art: ontologies containing more than 10,000 classes and more than 10,000 properties were reported to be common. These classes and properties supported roughly 100,000 to 500,000 instances (aka ‘individuals’). The extremes are what interest me here. At the high end is an ontology with up to 3 million classes (for real!?); one with up to 300,000 properties (an interesting mountain of work); and, for quite a few, 1+ million instances to manage. If these instances were represented by an average of 50 triples each, a 50 million record database might be a reasonable target estimate for a fully operational enterprise SPARQL database.
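The closing estimate is simple arithmetic worth making explicit; the 50-triples-per-instance figure is the post’s assumed average:

```python
instances = 1_000_000        # reported high-end instance count
triples_per_instance = 50    # assumed average from the estimate above
total_triples = instances * triples_per_instance
print(f"{total_triples:,}")  # 50,000,000
```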

The report goes into topics related to industry distinctions, ontology editing and visualization tools, plus thoughtful observations about patterns in ontologies.

The report, Ontology Users’ Survey – Summary of Results, is published on the Knowledge Media Institute’s website. KMI is part of the (multi-disciplinary) Open University in the United Kingdom.

Category: Information Development

by: Phil Simon
20  Oct  2013

Big Data: Never Let Up

I’ve written before on this site about how companies like Netflix handle Big Data exceptionally well. To serve up customized video recommendations to over 30 million subscribers, many things need to happen. Recognition of the importance of data and massive investments in technology only get an organization so far. No company can succeed without the right type of people, and Netflix is no exception to this rule.

Check out the following Netflix job description for a Data Visualization Engineer – Operational Insight Team:


We’re looking for a passionate, experienced Data Visualization Engineer who will own and build new, high-impact visualizations in our insight tools to make data both understandable and actionable.


  • Develop rich interactive graphics and data visualizations of “large” amounts of structured data, preferably in-browser
  • Work with an Information Architect/UX Designer and with multiple groups and members across reporting lines to design visual, analytical components within applications

A few things struck me about this job description. First, note the emphasis on interactivity. Netflix understands that static dataviz only gets an organization so far. To really understand Big Data, it’s becoming important for employees to ask better questions. In other words, the answer is not always obvious, much less linear. Building interactive tools allows users to find the signals in the noise.

Second, the job description demonstrates just how much Netflix appreciates the multidisciplinary nature of dataviz. This is not about creating tools for IT, HR, finance, or any one department. Increasingly, organizational lines are becoming blurred. Apple’s new corporate headquarters is circular for a reason, and Zappos’ CEO Tony Hsieh is following the same model with his company’s new building.

Finally, Big Data and dataviz ideally help organizations realize valuable insights. It’s not about collecting as much data as possible. He with the most petabytes doesn’t win. Presenting data in a format that allows people to see what’s going on is paramount.

Simon Says

Despite its lead over the competition, Netflix understands the importance of keeping its foot on the pedal. By hiring new and talented folks, Netflix will put further distance between itself and others.

Silly is the organization that fails to recognize the cardinal importance of people. These days, data and technology alone are necessary but insufficient for long-term success.


What say you?

Category: Information Development

by: Bsomich
19  Oct  2013

Weekly IM Update.


Data Governance: How competent is your organization?

One of the key concepts of the MIKE2.0 Methodology is that of an Organisational Model for Information Development. This is an organisation that provides a dedicated competency for improving how information is accessed, shared, stored and integrated across the environment.

Organisational models need to be adapted as the organisation moves up the 5 Maturity Levels for the Information Development competencies below:

Level 1 Data Governance Organisation – Aware

  • An Aware Data Governance Organisation knows that it has issues around Data Governance but is doing little to respond to them. Awareness has typically come as the result of major Data Governance-related issues that have occurred. An organisation may also be at the Aware state if it is moving to a state where it can effectively address issues, but is only in the early stages of the programme.
Level 2 Data Governance Organisation – Reactive
  • A Reactive Data Governance Organisation is able to address some of its issues, but not until some time after they have occurred. The organisation is not able to address root causes or predict when issues are likely to occur. “Heroes” are often needed to address complex data quality issues, and the impact of fixes done at a system-by-system level is often poorly understood.
Level 3 Data Governance Organisation – Proactive
  • A Proactive Data Governance Organisation can stop issues before they occur, as it is empowered to address root-cause problems. At this level, the organisation also conducts ongoing monitoring of data quality so that issues that do occur can be resolved quickly.
Level 4 Data Governance Organisation – Managed
Level 5 Data Governance Organisation – Optimal

The MIKE2.0 Solution for the Centre of Excellence provides an overall approach to improving Data Governance through a Centre of Excellence delivery model for Infrastructure Development and Information Development. We recommend this approach as the most efficient and effective model for building this common set of capabilities across the enterprise environment.

Feel free to check it out when you have a moment and offer any suggestions you may have to improve it.


MIKE2.0 Community

Contribute to MIKE:

Start a new article, help with articles under construction or look for other ways to contribute.

Update your personal profile to advertise yourself to the community and interact with other members.

Useful Links: Home Page Login Content Model FAQs MIKE2.0 Governance

Did You Know? All content on MIKE2.0 and any contributions you make are published under the Creative Commons license. This allows you free re-use of our content as long as you add a brief reference back to us.


This Week’s Blogs for Thought:

The Quality of Things

I have previously blogged about how more data might soon be generated by things than by humans, using the phrase the Internet of Humans to not only differentiate human-generated data from the machine-generated data from the Internet of Things, but also as a reminder that humans are a vital source of knowledge that no amount of data from any source could ever replace.

Read more.

Dinosaurs, Geologists, and the IT-Business Divide

“Like it or not, we live in a tech world, from Apple to Hadoop to Zip files. You can’t ignore the fact that technology touches every facet of our lives. Better to get everything you can, leveraging every byte and every ounce of knowledge IT can bring.”

So write Thomas C. Redman and Bill Sweeney on HBR. Of course, they’re absolutely right. But the gap between what organizations can accomplish with technology and what they actually accomplish remains considerable.

Read more.

To create or participate? That is the question.  

The rise of free digital media has removed almost every barrier to entry for people to build the right stage for their content. The tools are there and the effort is minimal. A few clicks, a name, a logo and it’s done, right?

I often wonder though, are we too focused on building our own communities and not enough on participating in others?

Read more.






Forward to a Friend!
Know someone who might be interested in joining the Mike2.0 Community? Forward this message to a friend

Category: Information Development

by: Ocdqblog
11  Oct  2013

The Quality of Things

I have previously blogged about how more data might soon be generated by things than by humans, using the phrase the Internet of Humans to not only differentiate human-generated data from the machine-generated data from the Internet of Things, but also as a reminder that humans are a vital source of knowledge that no amount of data from any source could ever replace.

The Internet of Things is the source of the category of big data known as sensor data, often the first new type of data you come across when defining big data, and one that requires you to start getting to know NoSQL. It differentiates machine-generated data from the data generated directly by humans typing, taking pictures, recording videos, scanning bar codes, etc.

One often-alleged advantage of machine-generated data is its inherently higher data quality, due to humans being removed from the equation.  In this blog post, let’s briefly explore that claim.

Is it hot in here or is it just me?

Let’s use the example of accurately recording the temperature in a room, something that a thing, namely a thermostat, has been helping us do long before the Internet of Anything ever existed.

(Please note, for the purposes of keeping this discussion simple, let’s ignore the fact that the only place where room temperature matches the thermostat is at the thermostat — as well as how that can be compensated for with a network of sensors placed throughout the room).

A thermostat only displays the current temperature, so if I want to keep a log of temperature changes over time, I could do this by writing my manual readings down in a notebook.  Here is where the fallible human, in this case me, enters the equation and could cause a data quality issue.

For example, the temperature in the room is 71 degrees Fahrenheit, but I incorrectly write it down as 77 degrees (or perhaps my sloppy handwriting caused even me to later misinterpret the 1 as 7).  Let’s also say that I wanted to log the room temperature every hour.  This would require me to physically be present to read the thermostat every hour on the hour.  Obviously, I might not be quite that punctual and I could easily forget to take a reading (or several), thereby preventing it from being recorded.

The Quality of Things

Alternatively, using an innovative new thermostat that wirelessly transmits the current temperature, at precisely defined time intervals, to an Internet-based application certainly eliminates the possibility of the two human errors in my example.

However, there are other ways a data quality issue could occur.  For example, the thermostat may not be accurately reporting the temperature because it is not calibrated correctly.  Alternatively, a power loss, mechanical failure, loss of Internet connection, or a crash of the Internet-based application could prevent temperature readings from being recorded.
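The contrast is easy to simulate.  In the sketch below (hypothetical names and values, not a real thermostat API), the automated logger never mistypes a digit and never misses the hour, yet a small calibration offset makes every reading wrong, and a dropped connection silently loses readings:

```python
TRUE_TEMP = 71.0  # actual room temperature, degrees Fahrenheit

def sensor_reading(calibration_offset=0.0, connected=True):
    # The sensor reports punctually and legibly -- but only as accurately
    # as its calibration, and only while its connection is up.
    if not connected:
        return None  # reading silently lost, with no human to notice
    return TRUE_TEMP + calibration_offset

# 24 hourly readings from a sensor miscalibrated by +2.5 degrees
log = [sensor_reading(calibration_offset=2.5) for _ in range(24)]
print(set(log))  # every entry is 73.5: consistent, punctual, and wrong
```

Note that the machine’s errors are systematic rather than random, which makes them harder to spot than a single mistyped digit in a notebook.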

The point I am trying to raise is that although the quality of things entered by things certainly has some advantages over the quality of things entered by humans, nothing could be further from the truth than to claim that machine-generated data is immune to data quality issues.

Category: Data Quality, Information Development

by: Phil Simon
10  Oct  2013

Dinosaurs, Geologists, and the IT-Business Divide

“Like it or not, we live in a tech world, from Apple to Hadoop to Zip files. You can’t ignore the fact that technology touches every facet of our lives. Better to get everything you can, leveraging every byte and every ounce of knowledge IT can bring.”

So write Thomas C. Redman and Bill Sweeney on HBR. Of course, they’re absolutely right. But the gap between what organizations can accomplish with technology and what they actually accomplish remains considerable. Yes, Big Data offers tremendous promise, but organizations stuck in Small Data Hell aren’t remotely close to making that leap.

No Easy Solution, But…

Is there an easy solution? I doubt it, but perhaps not thinking like an old-school line manager or IT professional is a good starting point. That is, those with unique (or at least non-traditional) backgrounds often produce entirely new ways of doing things. Innovation and improvement often come from those without formal training in a particular area.

For instance, Charles Darwin was working as a geologist when he proposed the theory of evolution. It was an astronomer who finally explained what happened to the dinosaurs. Stories like these are rife in Frans Johansson’s excellent book The Medici Effect: What Elephants and Epidemics Can Teach Us About Innovation.

No, a chef or a concert promoter might not know the best way to model data, but it’s clear that something has to give. Maybe those with a holistic view of things will find new ways to bridge the gap that has encumbered organizations for so long.

Simon Says: Remember Two Things

The bulk of the responsibility for “getting more out of tech” should not lie exclusively with the folks in IT. That mind-set is gradually fading into the distance, a byproduct of a foregone era. Applications are much more user-friendly, many employees are more tech-savvy, and it’s not IT’s data to begin with.

On a broader level, organizations simply cannot continue to leave so much on the table. They are squandering most of the great opportunities presented by new technologies and data. Those who believe that these possibilities will exist indefinitely are sorely mistaken.


What say you?

Category: Information Development

by: Bsomich
04  Oct  2013

Content Contribution Contest: Winner Announcement

We’re pleased to announce the winner of our recent MIKE2.0 Content Contributor Contest, Lisa Martinez! Her expansion of our Budgeting & Planning article qualified her to win a new iPad mini.

Big thanks to LisaMar10 for her stellar contribution!


Interested in becoming a MIKE2.0 Contributor?

You can start with the most wanted pages; or the pages we know need more work; or even the stub that somebody else has started, but hasn’t been able to finish. Still not sure? Just contact us and we’ll be happy to refer you to some articles that might be a good fit for your expertise.

Category: Information Development

by: Bsomich
28  Sep  2013

Weekly IM Update


What is an Open Methodology Framework?

An Open Methodology Framework is a collaborative environment for building methods to solve complex issues impacting business, technology, and society.  The best methodologies provide repeatable approaches for doing things well based on established techniques. MIKE2.0’s Open Methodology Framework goes beyond the standards, techniques and best practices common to most methodologies with three objectives:

  • To Encourage Collaborative User Engagement
  • To Provide a Framework for Innovation
  • To Balance Release Stability with Continuous Improvement

We believe that this approach provides a successful framework for accomplishing things in a better, more collaborative fashion. What’s more, it allows for concurrent focus on both method and detailed technology artifacts. The emphasis is on emerging areas in which current methods and technologies lack maturity.

The Open Methodology Framework will be extended over time to include other projects. Another example of an open methodology is open-sustainability, which applies many of these concepts to the area of sustainable development. Suggestions for other Open Methodology projects can be initiated on this article’s talk page.

We hope you find this of benefit and welcome any suggestions you may have to improve it.


MIKE2.0 Community

Popular Content

Did you know that the following wiki articles are most popular on Google? Check them out, and feel free to edit or expand them!

What is MIKE2.0?
Deliverable Templates
The 5 Phases of MIKE2.0
Overall Task List
Business Assessment Blueprint
SAFE Architecture
Information Governance Solution

Contribute to MIKE:

Start a new article, help with articles under construction or look for other ways to contribute.

Update your personal profile to advertise yourself to the community and interact with other members.

Useful Links:
Home Page
Content Model
MIKE2.0 Governance


This Week’s Blogs for Thought:

The Gordian Knot of Big Data Integration

According to legend, the Gordian Knot was an intricate knot binding the ox-cart of the peasant-turned-king Gordias of Phrygia (located in the Balkan Peninsula of Southeast Europe) to a pole within the temple of Zeus in Gordium.  It was prophesied that whoever loosed the knot would become ruler of all Asia.

In 333 BC, Alexander the Great sliced the knot in half with a stroke of his sword (the so-called “Alexandrian solution”) and that night there was a violent thunderstorm, which Alexander took as a sign that Zeus was pleased and would grant Alexander many victories.  He then proceeded to conquer Asia, and much of the then known world, apparently fulfilling the prophecy.

Read more.

What can we learn from the failure of NFC?

For years now the industry has confidently predicted that Near Field Communication (NFC) would be the vehicle by which we would all abandon our leather wallets and embrace our smartphones for electronic payments.  It hasn’t happened, and I don’t believe NFC is the solution.  Here’s what we can learn from the failure of NFC to take off.  As enticing as the idea of using our phones for payments is, there are two problems that the focus on NFC fails to solve.

Read more.

Through a PRISM, Darkly

By now, most people have heard about the NSA surveillance program known as PRISM that, according to a recent AP story, is about an even bigger data seizure. Without question, big data has both positive and negative aspects, but these recent revelations are casting more light on the darker side of big data, especially its considerable data privacy implications.

A few weeks ago, Robert Hillard blogged about how we are living as far from 1984 as George Orwell was. Orwell’s dystopian novel 1984, published in 1949, has recently spiked in sales, since apparently Big Brother is what many people see when they look through a PRISM, darkly.

Read more.



Category: Information Development
No Comments »
