Archive for the ‘Information Development’ Category

by: Ocdqblog
12  Sep  2013

Big Data is the Library of Babel

Richard Ordowich, commenting on my Hail to the Chiefs post, remarked how “most organizations need to improve their data literacy.  Many problems stem from inadequate data definitions, multiple interpretations and understanding about the meanings of data.  Skills in semantics, taxonomy and ontology as well as information management are required.  These are skills that typically reside in librarians but not CDOs.  Perhaps hiring librarians would be better than hiring a CDO.”

I responded that maybe not even librarians can save us by citing The Library of Babel, a short story by Argentine author and librarian Jorge Luis Borges, which is about, as James Gleick explained in his book The Information: A History, A Theory, A Flood, “the mythical library that contains all books, in all languages, books of apology and prophecy, the gospel and the commentary upon that gospel and the commentary upon the commentary upon the gospel, the minutely detailed history of the future, the interpolations of all books in all other books, the faithful catalogue of the library and the innumerable false catalogues.  This library (which others call the universe) enshrines all the information.  Yet no knowledge can be discovered there, precisely because all knowledge is there, shelved side by side with all falsehood.  In the mirrored galleries, on the countless shelves, can be found everything and nothing.  There can be no more perfect case of information glut.”

More than a century before the rise of cloud computing and the mobile devices connected to it, the imagination of Charles Babbage foresaw another library of Babel, one where “the air itself is one vast library, on whose pages are forever written all that man has ever said or woman whispered.”  In a world where word of mouth has become word of data, sometimes causing panic about who may be listening, Babbage’s vision of a permanent record of every human utterance seems eerily prescient.

Of the cloud, Gleick wrote about how “all that information—all that information capacity—looms over us, not quite visible, not quite tangible, but awfully real; amorphous, spectral; hovering nearby, yet not situated in any one place.  Heaven must once have felt this way to the faithful.  People talk about shifting their lives to the cloud—their informational lives, at least.  You may store photographs in the cloud; Google is putting all the world’s books into the cloud; e-mail passes to and from the cloud and never really leaves the cloud.  All traditional ideas of privacy, based on doors and locks, physical remoteness and invisibility, are upended in the cloud.”

“The information produced and consumed by humankind used to vanish,” Gleick concluded, “that was the norm, the default.  The sights, the sounds, the songs, the spoken word just melted away.  Marks on stone, parchment, and paper were the special case.  It did not occur to Sophocles’ audiences that it would be sad for his plays to be lost; they enjoyed the show.  Now expectations have inverted.  Everything may be recorded and preserved, at least potentially: every musical performance; every crime in a shop, elevator, or city street; every volcano or tsunami on the remotest shore; every card played or piece moved in an online game; every rugby scrum and cricket match.  Having a camera at hand is normal, not exceptional; something like 500 billion images were captured in 2010.  YouTube was streaming more than a billion videos a day.  Most of this is haphazard and unorganized.”

The Library of Babel is no longer fiction.  Big Data is the Library of Babel.

Category: Data Quality, Information Development

by: John McClure
10  Sep  2013

Prepositional Ontologies

Lately I’ve been crafting an ontology that uses English language prepositions in a manner that seems natural and economical while staying semantically distinguishable. For reference’s sake, let’s call this ontology “Grover”, after the hilarious skits Grover staged on Sesame Street to teach children nuanced semantic differences among prepositions like “at”, “by”, “on” and all the others (just 100) which glue together our nouns into discrete concepts upon which children sometimes, but not always, willingly act.

I began this effort as I noticed that ontologies were being published that named many of their properties the same as their classes. For instance, the Data Cube Vocabulary I’ve blogged about has a class named “Dataset” and a property named “dataset”. The property “dataset” names, predictably, the “Dataset” class as its range, a pattern that provides little to no “sidecar” information when a concept is labelled with subject, predicate and object. To me, this pattern is rankly duplicative.

A better approach to naming properties, our glue between our common and proper nouns, flows foremost from the requirement that all instances of rdf:Resource be named as type:name; the effects of this requirement on an ontology can be surprisingly profound. To begin with, the above pattern would be aliased to, for instance, is:within, more accurately stating that a given Observation “is:within” a given Dataset. Note, though, that with a property named “dataset” there’s little chance to ever exploit its inverse, that is, the mirror property that flows from a dataset to its set of observations.

With Grover, however, the inverse of “is:within” is “has:within”, perhaps unfortunately so because it’s such a simple statement of fact. Consequently a dataset can be queried to determine its set of observations, a seemingly costless improvement to the Data Cube Vocabulary. But it’s costless only to the extent that the resource naming pattern mentioned above is followed:

Statement part    Data Cube resource name    Grover resource name
Subject           xyz                        Observation:xyz
Predicate         qub:dataset                is:within
Object            abc                        Dataset:abc
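
As an illustration of how this might look in practice, here is a minimal sketch using Python and rdflib. The is:, has:, Observation: and Dataset: names below are hypothetical stand-ins for the Grover conventions described above, not published vocabularies.

```python
from rdflib import Graph, Namespace, URIRef

# Hypothetical namespaces standing in for the Grover conventions.
IS = Namespace("http://example.org/is/")
HAS = Namespace("http://example.org/has/")
BASE = "http://example.org/id/"

g = Graph()

# type:name resource naming: the local name carries the class prefix.
observation = URIRef(BASE + "Observation:xyz")
dataset = URIRef(BASE + "Dataset:abc")

# The Grover statement: a given Observation is:within a given Dataset.
g.add((observation, IS.within, dataset))

# Treating has:within as the inverse of is:within, the mirror triple can
# also be asserted, so the dataset can be queried for its observations.
g.add((dataset, HAS.within, observation))

for obs in g.objects(dataset, HAS.within):
    print(obs)  # http://example.org/id/Observation:xyz
```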

The type:name convention is key to maintaining the only advantage that conventional noun-oriented property names enjoy, namely, ensuring that the object of a given property belongs to one (or more) particular classes, since ontology applications try to keep the ‘range’ of a property consistent throughout its knowledgebase operations.

But Grover’s prepositions suggest the Resource Description Framework needs to provide a new mechanism for specifying pair-wise combinations of domains and ranges which are legal as classes for the Subject and Object resources in triples involving any given RDF Property. Otherwise there is no other means evident to specify the formalities of the implied property/class relationships that exist in a Grover universe.

The type:name convention works because queries can just as easily be written to test (using a LIKE operator) for certain prefixes on a resource’s name, e.g., for Dataset:*. Note, however, the other use of the colon in names, namely for prepositional properties such as is:within. These prefixes are actually XML namespace prefixes, the idea being that verb-based semantic namespace identifiers present several valuable advantages for a highly practical ontology.
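
A sketch of that prefix test, again with the hypothetical names used above; in SPARQL, STRSTARTS plays the role of the LIKE operator:

```python
from rdflib import Graph, Namespace, URIRef

IS = Namespace("http://example.org/is/")  # hypothetical prepositional namespace
g = Graph()
g.add((URIRef("http://example.org/id/Observation:xyz"),
       IS.within,
       URIRef("http://example.org/id/Dataset:abc")))

# Find every resource whose type:name begins with "Dataset:", the SPARQL
# analogue of a SQL LIKE 'Dataset:%' test on resource names.
results = g.query("""
    SELECT DISTINCT ?r WHERE {
        { ?r ?p ?o } UNION { ?s ?p ?r }
        FILTER(isIRI(?r) && STRSTARTS(STRAFTER(STR(?r), "/id/"), "Dataset:"))
    }
""")
for row in results:
    print(row.r)  # http://example.org/id/Dataset:abc
```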

Ciao until next month!

Category: Information Development

by: John McClure
07  Sep  2013

Data Cubes and LOD Exchanges

Recently the W3C issued a Candidate Recommendation that caught my eye: the Data Cube Vocabulary, which claims to be both very general and useful for data sets such as survey data, spreadsheets and OLAP. The Vocabulary is based on SDMX, an ISO standard for exchanging and sharing statistical data and metadata among organizations, so there’s a strong international impetus towards consensus about this important piece of Internet 3.0.

Linked Open Data (LOD) is, as the W3C says, “an approach to publishing data on the web, enabling datasets to be linked together through references to common concepts.” This is an approach based on the W3C’s Resource Description Framework (RDF), of course. So the foundational ontology that actually implements this quite worthwhile but grandiose LOD vision is undoubtedly going to be this Data Cube Vocabulary, very handily enabling the exchange of semantically annotated HTML tables. Note that relational government data tables can now be published as “LOD data cubes” which are shareable with a public adhering to this new world-standard ontology.

But as the key logical layer in this fast-approaching semantic web, Data Cubes may very well affect the manner in which an Enterprise ontology is designed. Start with the fact that Data Cubes are themselves built upon several more basic RDF-compliant ontologies.

The Data Cube Vocabulary says that every Data Cube is a Dataset that has a Dataset Definition (more a “one-off ontology” specification). Any dataset can have many Slices of a metaphorical, multi-dimensional “pie” of the dataset. Within the dataset itself and within each slice are unbounded masses of Observations; each observation has values not only for the measured property itself but also for any number of applicable key values. That’s all there is to it, right?
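
As a rough sketch of that shape, here is a tiny cube built with Python and rdflib. The qb: terms come from the published vocabulary; the dataset, dimension and measure names are invented for illustration.

```python
from rdflib import Graph, Literal, Namespace, RDF

QB = Namespace("http://purl.org/linked-data/cube#")
EX = Namespace("http://example.org/stats/")  # hypothetical data namespace

g = Graph()
g.bind("qb", QB)

# A dataset and its "one-off ontology", the data structure definition.
g.add((EX.unemployment, RDF.type, QB.DataSet))
g.add((EX.unemployment, QB.structure, EX.unemploymentDSD))
g.add((EX.unemploymentDSD, RDF.type, QB.DataStructureDefinition))

# One observation: a value for the measured property plus its key values.
obs = EX["obs-2013-09"]
g.add((obs, RDF.type, QB.Observation))
g.add((obs, QB.dataSet, EX.unemployment))       # ties the observation to the cube
g.add((obs, EX.refPeriod, Literal("2013-09")))  # hypothetical dimension (key)
g.add((obs, EX.rate, Literal(5.6)))             # hypothetical measure

print(g.serialize(format="turtle"))
```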

Think of an HTML table. A “data cube” is a “table” element, whose columns are “slices” and whose rows are “keys”. “Observations” are the values of the data cells. This is clear but now the fun starts with identifying the TEXTUAL VALUES that are within the data cells of a given table.

Here is where “Dataset Descriptions” come in; these are associated with an HTML table element, that is, with an LOD dataset. They describe all possible dataset dimension keys and the different kinds of properties that can be named in an Observation. Text attributes, measures, dimensions, and coded properties are all provided, and all are sub-properties of rdf:Property.

This is why a Dataset Description is a “one-off ontology”, because it defines only text and numeric properties and, importantly, no classes of functional things. So with perfect pitch, the Data Cube Vocabulary virtually requires Enterprise Ontologies to ground their property hierarchy with the measure, dimension, code, and text attribute properties.

Data Cubes define just a handful of classes: “Dataset”, “Slice”, “SliceKey” and “Observation”. How are these four classes best inserted into an enterprise’s ontology class hierarchy? “Observation” is easy; it should be the base class of all observable properties, that is, all and only textual properties. “SliceKey” is a Role that an observable property can play. A “Slice” is basically an annotated rdf:Bag, mediated by skos:Collection at times.

A “Dataset” is a hazy term applicable to anything classifiable as a data object or a data structure; that is, “a set of data” is merely an aggregate collection of data items, just as a data object or data structure is. Accordingly, a Data Cube “dataset” class might be placed at or near the root of a class hierarchy, but it is clearer to establish it as a subclass of an html:table class.

There’s more to this topic saved for future entries — all those claimed dependencies need to be examined.

Category: Business Intelligence, Information Development, Semantic Web

by: Bsomich
07  Sep  2013

Weekly IM Update.

 
 

Available on Archive: The Open MIKE Podcast Series

The Open MIKE Podcast is a video podcast show, hosted by Jim Harris, which discusses aspects of the MIKE2.0 framework, and features content contributed to MIKE 2.0 Wiki Articles, Blog Posts, and Discussion Forums.

You can scroll through all 12 of the Open MIKE Podcast episodes below:

Feel free to check them out when you have a moment.

Sincerely,

MIKE2.0 Community

 
Contribute to MIKE:

Start a new article, help with articles under construction or look for other ways to contribute.

Update your personal profile to advertise yourself to the community and interact with other members.

Useful Links:
Home Page
Login
Content Model
FAQs
MIKE2.0 Governance


Did You Know?
All content on MIKE2.0 and any contributions you make are published under the Creative Commons license. This allows you free re-use of our content as long as you add a brief reference back to us.

 

This Week’s Blogs for Thought:

Hail to the Chiefs

The MIKE2.0 wiki defines the Chief Data Officer (CDO) as the executive leader for data management activities, one who plays a key executive leadership role in driving data strategy, architecture, and governance.

“Making the most of a company’s data requires oversight and evangelism at the highest levels of management,” Anthony Goldbloom and Merav Bloch explained in their Harvard Business Review blog post Your C-Suite Needs a Chief Data Officer.

Goldbloom and Bloch describe the CDO as being responsible for identifying how data can be used to support the company’s most important priorities, making sure the company is collecting the right data, and ensuring the company is wired to make data-driven decisions.

Check it out.

Can Big Data Save CMOs?

Executive turnover has always fascinated me, especially as of late. HP’s CEO Leo Apotheker had a very short run and Yahoo! has been a veritable merry-go-round over the last five years. Beyond the CEO level, though, many executive tenures resemble those of Spinal Tap drummers. For instance, CMOs have notoriously short lifespans. While the average tenure of a CMO has increased from 23.6 to 43 months since 2004, it’s still not really a long-term position. And I wonder if Big Data can change that.

Read more.

The Downside of DataViz

ITWorld recently ran a great article on the perils of data visualization. The piece covers a number of companies, including Carwoo, a startup that aims to make car buying easier. The company has been using the dataviz tool Chartio for a few months. From the article:

Read more.

 
 


Category: Information Development

by: Robert.hillard
24  Aug  2013

Reinventing the economy by using and buying IT differently

If any modern economy wants to keep, or even add, value to their country as the digital economy grows it has to search for productivity in new ways.  That means bringing together innovations from IT that are outside today’s core and combining them with solutions developed both locally and globally.

Global economies are experiencing a period of rapid change that has arguably not been seen before by anyone less than 80 years of age.  Global drivers have been shifting from valuing the making of things to the flow of intellectual capital.  This is the shift to an information economy which has most recently been dubbed “digital disruption”.

In just a few short years the digital economy has grown from insignificance to being something that politicians around the world need to pay attention to.

Unfortunately, most governments see the digital economy in terms of the loss of tax revenue from activities performed over the internet rather than understanding the extent to which they need to recalibrate their economies to the new reality.

While tax loopholes are always worth pursuing, the real focus should be stepping up to the challenge facing every country around the world: how to keep adding value locally to protect the local economy and jobs.  There is absolutely no point, for instance, in complaining about there being less tax on music streaming than on the manufacture, distribution and sale of CDs.  The value is just in a different place and most of it isn’t where it was.

Although the loss of bookstores and music retailers has been one of the most visible changes, the shift in spending has come at an incredible pace.  Just ask any postal service or the newspaper industry.  As citizens we are keen to take advantage of the advances of smartphones, the integration of supply chains with manufacturing in China (giving us very cheap goods) and the power of social media to share information with our friends.  We are less keen when our kids lose their jobs as retail outlets close, manufacturing shuts down and the newsagent’s paper round gets shorter.

We all benefited dramatically as IT improved the efficiency and breadth of services that government and business were able to offer.  Arguably the mass rollout of business IT was as important to productivity in the 1990s as the economic reforms of the 1980s.  As a direct result there are now millions of people employed in IT around the world.

While this huge workforce has been responsible for so much, today it is not being applied enough to protect economies from leaking value offshore.  Many companies regard technology as something they do just enough of to get on with their “real” businesses, even as their markets diminish.  Even “old economy” businesses need to be encouraging their IT departments to innovate and apply for patents.

To protect their tax base, and future prosperity, each country has to search for productivity in new ways.  That means taking innovations from IT that are outside today’s core and combining them with solutions developed by other organisations locally and internationally.  It means looking beyond the core business and being prepared to spin off activities at the edge that have real value in their own right.  And it also means governments and large enterprises need to change the way that they procure so that they are seeding whole new economies.

Today when politicians and executives hear about IT projects they don’t think about the productivity gain, they just fear the inevitable delays and potential for bad press.  That’s because large organisations have traditionally approached IT as a 1990s procurement problem rather than as an opportunity to seed a new market, a market that is desperately needed by small and medium enterprises who, while innovative, find it very hard to compete for big IT projects, leaving many solutions limited to less innovative and less efficient approaches.  Every time this happens the local and global economy takes a small productivity hit which ultimately hurts us all.

Imagine a world of IT where government and large business don’t believe they have to own the systems that provide their citizens and customers with services.  This is the economy that cloud computing is making possible, with panels of providers able to deliver everything from fishing licences to payroll for a transactional fee.

Payment for service reduces government and business involvement in the risky business of delivering large scale IT projects while at the same time providing a leg-up for local businesses to become world leaders using local jobs.

Government can have a major impact through policy settings in areas such as employee share schemes, R&D tax credits and access to skilled labour.  However, the biggest single impact governments can have in growing their digital economies and putting the huge IT workforce to productive work is through the choices they make in what they buy.

Business can have a major impact on productivity by managing cost in the short term through better integration with local and global providers, but repeating the benefits of the 1990s productivity improvements will require a willingness to invent new solutions using the most important tools of our generation: digital and information technology.

Category: Information Development

by: John McClure
02  Aug  2013

Gather around the campfire

“Who, what, and where, by what helpe, and by whose:
Why, how, and when, doe many things disclose. ”

- Thomas Wilson, The Arte of Rhetorique, 1560
Our human proclivity for story-telling as the primary method we have to communicate our culture — its values, tales, foibles and limitless possibilities — seems to have been ignored, if not forgotten, by the standards we have for digital communications. Rather than build systems that promote human understanding about and appreciation for the complexity of human endeavor, we have built instead a somewhat technocratic structure to transmit ‘facts’ about ‘things’. A consequence of this mono-maniacal focus on ‘facts’ is that we risk losing our humanity in the thicket of numbers and pseudo-semantic string values we pass around among ourselves.

Let’s instead look at a way to structure our communications that relies on time-tested concepts of ‘best practice’ story-telling. It may seem odd to talk of ‘best practices’ and story-telling in the same breath, though indeed we’re all familiar with these ideas.

The “5 Ws” are a good place to start. These are five, six, or sometimes more basic questions whose answers are essential to effective story-telling, often taught to students of journalism, scientific research, and police forensics. Stories which neglect to relay the entirety of the “Who-What-When-Where-Why” (the five Ws) of a topic may too easily leave listeners ‘semi-ignorant’ about it. In a court of law, for instance, trials in which method (how), motive (why) and opportunity (when) are not fully explicated and validated rightly end with verdicts of ‘not guilty’.

The same approach — providing the ‘full story’ about a topic — should undergird our methods of digital communications as well. Perhaps by doing so, much of the current debate about “context” for/of any particular semantic (RDF) dataset might be more tractable and resolvable.

The practical significance of a “5 Ws” approach is that it can directly provide a useful metric about RDF datasets. A low score suggests the dataset makes little effort to give the ‘full story’ about its set of topics, while a high score would indicate good conformance to the precepts of best practice communications. In the real world, for instance, a threshold for this metric could be specified in contracts which envision information being exchanged between the parties.

While a high score of course wouldn’t attest to the reliability or consistency of each answer to the “5 Ws” for a given topical datastream, a low score indicates that the ‘speaker’ is merely spouting facts (that is, the RDF approach, which is to “say anything about anything”), best used to accentuate one’s own story but not useful as a complete recounting in its own right.

A “best practice communications” metric might be formulated by examining the nature of the property values associated with a resource. If entities are each a subclass of a 5W class, then it can be a matter of “provisionally checking the box” to the extent that some answer exists: a 100% score might indicate the information about a given topic is nominally complete while a 0% score indicates that merely a reference to the existence of the resource has been provided.
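
A minimal sketch of how such a metric might be computed over an RDF graph, assuming a hypothetical set of 5W property URIs (no published vocabulary is implied):

```python
from rdflib import Graph, Namespace, URIRef

EX = Namespace("http://example.org/5w/")  # hypothetical 5W properties
FIVE_WS = [EX.who, EX.what, EX.when, EX.where, EX.why]

def five_w_score(graph: Graph, topic: URIRef) -> float:
    """Fraction of the five Ws for which the topic has at least one value."""
    answered = sum(
        1 for prop in FIVE_WS
        if any(True for _ in graph.objects(topic, prop))
    )
    return answered / len(FIVE_WS)

# A topic that answers only "who" and "where" scores 0.4, suggesting the
# dataset tells far less than the 'full story' about it.
g = Graph()
topic = URIRef("http://example.org/topic/trial-123")
g.add((topic, EX.who, URIRef("http://example.org/person/defendant")))
g.add((topic, EX.where, URIRef("http://example.org/place/courtroom")))
print(five_w_score(g, topic))  # -> 0.4
```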

Viewing semantic datastreams each as a highly formalized story (or set of stories), then applying quality criteria developed by professional communicators as long ago as Cicero, can provide valuable insights when building high quality data models and transmitting high quality datastreams.

Category: Information Development

by: John McClure
01  Aug  2013

Making Microsense of Semantics

Like many, I’m one who’s been around since the cinder block days, once entranced by shiny Tektronix tubes stationed near a dusty card sorter. After years using languages ranging from Assembler to Scheme, I’ve come to believe the shift these represented, from procedural to declarative, has greatly improved the flexibility of the software organizations produce.

Interest has now moved towards an equally flexible representation of data. In the ‘old’ days, when an organization wanted to collect a new data item about, say, a Person, a new column would first have to be added by a friendly database administrator to a Person Table in one’s relational database. Very inflexible.

The alternative — now widely adopted — reduces databases to a simple formulation, one that eliminates Person and other entity-specific tables altogether. These “triple-stores” basically have just three columns — Subject, Predicate and Object — in which all data is stored. Triple-stores are often called ‘self-referential’ because, first, the type of the Subject of any row in a triple-store is found in a different row (not column) of the triple-store and, second, definitions of types are themselves found in other rows of the triple-store. The benefits? Not only is the underlying structure of a triple-store unchanging, but stand-alone metadata tables (tables describing tables) are also unnecessary.
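
A toy sketch of that three-column idea, using plain Python tuples rather than any particular triple-store product; the names are invented for illustration:

```python
# A toy triple-store: every fact, including type information and the
# definitions of the types themselves, sits in the same three-column table.
triples = [
    ("ex:alice",  "rdf:type",   "ex:Person"),   # the Subject's type is just another row
    ("ex:alice",  "ex:worksFor", "ex:Acme"),
    ("ex:Person", "rdf:type",   "rdfs:Class"),  # the type's definition is another row
    ("ex:Person", "rdfs:label", "Person"),
]

def match(s=None, p=None, o=None):
    """Return every triple matching the given pattern (None = wildcard)."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Collecting a new data item about a Person needs no schema change,
# just another row:
triples.append(("ex:alice", "ex:nickname", "Ali"))

print(match(s="ex:alice"))   # everything known about ex:alice
print(match(p="rdf:type"))   # all typing facts, found in rows rather than columns
```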

Why? Static relational database tables do work well enough to handle transactional records whose data items are usually well known in advance; the rate of change in those business processes is fairly low, so the cost of database architectures based on SQL tables is equally low. What, then, is driving the adoption of triple-stores?

The scope of business functions organizations seek to automate has enlarged considerably: the source of new information within an organization is less frequently “forms” completed by users and more frequently raw text from documents, tweets, blogs, emails, newsfeeds, and other ‘social’ web and internal sources, produced, received or retrieved by organizations.

Semantic technologies are essential components of Natural Language Processing (NLP) applications which extract and convert, for instance, all proper nouns within a text into harvestable networks of “information nodes” found in a triple-store. In fact during such harvesting, context becomes a crucial variable that can change with each sentence analyzed from the text.

This brings us to my primary distinction between really semantic and non-semantic applications: really semantic applications mimic a human conversation, where the knowledge of an individual in a conversation is the result of a continuous accrual of context-specific facts, context-specific definitions, even context-specific contexts. As a direct analogy, Wittgenstein, a modern giant of philosophy, called this phenomenon Language Games, to connote that one’s techniques and strategies for analyzing a game’s state, and one’s actions, are not derivable in advance — they come only during the play of the game, i.e., during processing of the text corpora.

Non-semantic applications on the other hand, are more similar to rites, where all operative dialogs are pre-written, memorized, and repeated endlessly.

This analogy to human conversations (to ‘dynamic semantics’) is hardly trivial; it is a dominant modelling technique among ontologists, as evidenced by the development of, for instance, Discourse Representation Theory (among others; legal communities, e.g., have a similar theory, simply called Argumentation), whose rules are used to build Discourse Representation Structures from a stream of sentences, accommodating a variety of linguistic issues including plurals, tense, aspect, generalized quantifiers, anaphora and others.

“Semantic models” are an important path towards a more complete understanding of how humans, when armed with language, are able to reason and draw conclusions about the world. Relational tables, however, in themselves haven’t provided similar insight or re-purposing in different contexts. This fact alone is strong evidence that semantic methods and tools must be prominent in any organization’s technology plans.

Category: Business Intelligence, Information Development, Information Strategy, Semantic Web

by: Bsomich
31  Jul  2013

Content Contributor Contest


Hey members! Ready to showcase your IM skills? Contribute a new wiki article or expand an existing article between now and September 31, 2013 and you’ll be automatically entered into a drawing for a free iPad mini!


Contest Terms and Conditions:

Eligibility is open to any new or existing MIKE2.0 member who makes a significant wiki article expansion (expansion must be greater than half the existing wiki article) or creates a new article. Please note that new articles MUST be related to our core IM competencies such as Business Intelligence, Agile Development, Big Data, Information Governance, etc. Off-topic, promotional and spam articles will be automatically disqualified.

Drawing to take place on October 1, 2013. Winning community member will be notified at the email address listed on their user profile. Contest not open to existing MGA board members, MIKE2.0 team, or contractors.

Need somewhere to start?

How about the most wanted pages; or the pages we know need more work; or even the stub that somebody else has started, but hasn’t been able to finish. Still not sure? Just contact us at mike2@openmethodology.org and we’ll be happy to refer you to some articles that might be a good fit for your expertise.

Good luck!

Category: Information Development

by: Ocdqblog
31  Jul  2013

Metadata and the Baker/baker Paradox

In his recent blog post What’s the Matter with ‘Meta’?, John Owens lamented the misuse of the term metadata — the metadata about metadata — when discussing matters within the information management industry, such as metadata’s colorful connection with data quality.  As Owens explained, “the simplest definition for ‘Meta Data’ is that it is ‘data about data’.  To be more precise, metadata describes the structure and format (but not the content) of the data entities of an enterprise.”  Owens provided several examples in his blog post, which also received great commentary.

Some commenters resisted oversimplifying metadata as data about data, including Rob Karel who, in his recent blog post Metadata, So Mom Can Understand, explained that “at its most basic level, metadata is something that helps to better describe the data you’re trying to remember.”

This metadata crisis reminded me of the book Moonwalking with Einstein: The Art and Science of Remembering Everything, where author Joshua Foer described a strange kind of forgetfulness that psychologists have dubbed the Baker-baker Paradox.

As Foer explained it: “A researcher shows two people the same photograph of a face and tells one of them the guy is a baker and the other that his last name is Baker.  A couple days later, the researcher shows the same two guys the same photograph and asks for the accompanying word.  The person who was told the man’s profession is much more likely to remember it than the person who was given his surname.  Why should that be?  Same photograph.  Same word.  Different amount of remembering.”

“When you hear that the man in the photo is a baker,” Foer explained, “that fact gets embedded in a whole network of ideas about what it means to be a baker: He cooks bread, he wears a big white hat, he smells good when he comes home from work.”

“The name Baker, on the other hand,” Foer continued, “is tethered only to a memory of the person’s face.  That link is tenuous, and should it dissolve, the name will float off irretrievably into the netherworld of lost memories.  (When a word feels like it’s stuck on the tip of the tongue, it’s likely because we’re accessing only part of the neural network that contains the idea, but not all of it.)”

“But when it comes to the man’s profession,” Foer concluded, “there are multiple strings to reel the memory back in.  Even if you don’t at first remember that the man is a baker, perhaps you get some vague sense of breadiness about him, or see some association between his face and a big white hat, or maybe you conjure up a memory of your own neighborhood bakery.  There are any number of knots in that tangle of associations that can be traced back to his profession.”

Metadata makes data better, helping us untangle the knots of associations among the data and information we use every day.  Whether we be bakers or Bakers, or professions or people described by other metadata, the better we can describe ourselves and our business, the better our business will be.

Although we may not always agree on the definitions demarcating metadata, data, and information, let’s not forget that what matters most is enabling better business the best we can.

Category: Data Quality, Information Development, Metadata

by: Robert.hillard
28  Jul  2013

Will the bionic eye solve information overload?

Right now, researchers are working around the world to find ways of restoring sight to the blind by creating a bionic eye.  The closest analogy is the bionic ear, more properly called a cochlear implant, which works by directly stimulating the auditory nervous system.

While the direct targeting of retinal tissue is analogous to the bionic ear, the implications for our grandchildren and great grandchildren go well beyond the blind community who are the intended recipients.

In the decades since the first cochlear implant, the sole objective of research has been to improve the quality of the sound heard.  There is no hint of a benefit in adding additional information into the stream.

But the visual cortex isn’t the auditory nervous system; it is very close to being a direct feed into the brain.

The bionic eye could be very different to the bionic ear.  In the first instance, it is obvious that a direct feed of location data from the web could be added to the stream to help interpret the world around the user.  It isn’t much of a leap to go beyond navigation and make the bionic eye a full interface to the Internet.

By the time the bionic eye is as mature as the bionic ear is today, the information revolution should be complete and we will well and truly be living in an information economy.  Arguably the most successful citizens will navigate information overload with ease.  But how will they do it?

Today’s multi-dimensional tools for navigating complex information just won’t cut it.  Each additional dimension that the human mind can understand reduces the number of “small world” steps needed to solve information problems.  You can read more about the Small Worlds measure in chapter 5 of Information-Driven Business or find a summary of the technique in MIKE2.0.

While normal visual tools can only support two or at most three dimensions, business intelligence tools often try to add an extra one or even two through hierarchies or filters.  However, these representations are seldom very successful and are only useful to highly skilled users.

A direct feed into the brain, even if it has to go through the visual nervous system, could provide so much more than a convenient form of today’s Google Glasses.  Properly trained, the brain will adapt visual data to meet information needs in very efficient and surprising ways that go well beyond in-mind images.  It is entirely conceivable that by the late 21st century people will think in a dozen dimensions and navigate terabytes of complex information with ease using an “in-eye” machine connection.

Of course, the implications go well beyond the technology.  How will such implants be used by the wider population?  Will the desire for the ultimate man-machine interface be so overwhelming as to overcome ethical concerns of operating on otherwise healthy patients?

The implications are also social and will affect parents (perhaps even our children).  The training required for the brain could overwhelm many adult minds, but may be most effective when available from an early age.  In fact, it could be that children who are implanted with a bionic eye will derive the greatest advantages from the information economy in the latter decades of the 21st century.

Imagine the pressure on parents in future decades to decide which brand of implant to install in their children.  It makes today’s iOS, Android and Windows smartphone debates pale into insignificance!

Category: Information Development
