
Archive for the ‘Enterprise Content Management’ Category

by: Robert.hillard
26  Nov  2016

Serendipity

Information overload is as much an overwhelming feeling as it is a measurable reality. We often feel an impossible obligation to be across everything, which leaves us wanting to give up and absorb nothing that hits our various screens. Despite all this, the good news is that the majority of the information we need seems to appear just in time.

Where does that leave those of us who are control freaks? I am not comfortable knowing that the right information will find me only most of the time. I want to know that the information I need is guaranteed to find me every time!

The trouble is, guarantees are expensive. This is related to the debate between search-based big data solutions and enterprise data warehouses. Google provides a “near enough” search solution that, given the massive amount of data it trawls through, usually seems to find what we need. Knowledge and business intelligence solutions provide predictable information flows but come at a huge cost.

Of course, the real sense of serendipity comes when information arrives unsought just when we need it. It can come through the right article being highlighted in a social media feed, a corporate policy being forwarded or the right coffee conversation with a colleague. Of course, serendipity isn’t random coincidence and there is much we can do to improve the odds of it happening when we need it most.

Before doing so, it is important to know what things have to be predictable and reliable. A list is likely to include financial reports, approvals and other controls. What’s more, a scan of any email inbox is likely to show a significant number of messages that need to be read and often actioned. Despite its tyranny on our working lives, email works too well!

Serendipity depends on the quality of our networks, both in terms of who we know and the amount of activity that passes between the nodes. A good way to understand the power of relationships in an information or social network is through the theory of “small worlds” (see chapter 5 of my book Information-Driven Business).

Ironically, in an era when people talk about electronic isolation, social networks (that is, who we know) are more important than ever. Serendipity relies on people who we know, at least vaguely, promoting content in a way that we are likely to see.

Just as control freaks worry about relying on serendipity, those who are more relaxed run the risk of relying too much on information finding its way mysteriously to them at the right time. Those who don’t understand why it works won’t understand when it won’t.

Far from making experts and consultants redundant, this increasing trend towards having the right information available when it’s needed is making them more necessary than ever before. The skill experts bring is more than information synthesis, something that artificial intelligence is increasingly good at doing and will become even better at in the near future. The job of experts is to find connections that don’t exist on paper, the cognitive leaps that artificial intelligence can’t achieve (see Your insight might just save your job).

So how do we improve the odds? The first thing is to be active in posting updates. Networks operate on quid pro quo: in the long term we get back as much as we give. In the office, we call this gossip. Too much gossip and it just becomes noise, but the right amount and you have an effective social network. Those people who only ever silently absorb information from their colleagues quickly become irrelevant to their social circle and are gradually excluded.

The second is to be constantly curious, like a bowerbird searching for and collecting shiny pieces of information, without necessarily knowing how they will all fit together. The great thing about our modern systems is that massive amounts of tagged content are easy to search in the weeks, months and years to come.

Finally, have some sort of framework or process for handling information exchange and picking a channel based on: criticality (in which case email is still likely to be the best medium), urgency (which favours various forms of messaging for brief exchanges), targeted broadcast (which favours posts explicitly highlighted/copied to individuals) or general information exchange (which favours general posts with curated social networks). Today, this is very much up to each individual to develop for themselves, but we can expect it to be part of the curriculum of future generations of children.
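To make the idea of such a framework concrete, here is a toy sketch of a channel-selection rule in Python. The channels mirror the list above, but the names and audience-size thresholds are invented purely for illustration; your own rule will look different.

----------------------------------------------------
# Toy sketch of a personal channel-selection rule.
# Channels mirror the framework above; thresholds are invented.
def pick_channel(critical, urgent, audience_size):
    if critical:
        return "email"                      # reliable, auditable record
    if urgent and audience_size <= 2:
        return "instant message"            # brief, fast exchange
    if audience_size <= 5:
        return "post copied to named individuals"   # targeted broadcast
    return "general post to curated network"        # serendipity channel

print(pick_channel(critical=False, urgent=True, audience_size=2))
----------------------------------------------------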

No matter how often it seems to happen, almost by magic, information serendipity is no accident and shouldn’t be left to chance.

Category: Enterprise Content Management, Enterprise Data Management, Enterprise Search, Enterprise2.0, Information Strategy, Information Value

by: Robert.hillard
18  Jun  2016

Is the crowd dumbing us down and killing democracy?

One of the most exciting features of the Internet is the ability to get the voice of the crowd almost instantly. Polling of our organisations and society that would have taken weeks in the past can be done in hours or even minutes. Ideas are tested in the court of public opinion in real time. This is potentially a huge boost for participation in democracy and the running of our businesses, or is it?

Our governments and businesses have long worked with a simple leadership model. We select our leaders through some sort of process and then give them the authority to act on our behalf. In the past, we asked our leaders to report back on a regular basis and, most of the time, we left them to it. In democracies we used elections to score the success or failure of our leaders. If they did well, we gave them another term. In business, we relied on a board selected by shareholders.

This really started to change with the advent of the 24-hour news cycle. Rather than curate 30 minutes of news once a day, television needed to find stories to fill the whole day. Unlike newsprint, which had time for analysis, speed to air was a key performance metric for reporters, and an initial, even if uninformed, sound bite was enough to get something to the public.

There is a popular movement to open up government even further with regular electronic plebiscites and a default to open data. At its core is the desire to make the machinery of government transparent to all citizens. While transparency is good, it is the consequence of having “too many cooks in the kitchen” that leads to problems. Having everyone have their say, either through direct contributions or through endless polling, means that the fundamental approach to decision making has to change. While full-time politicians have the time to get underneath the complexity of a problem, the mass of voters don’t. The result is that complex arguments get lost in one- or two-sentence explanations.

This is happening at exactly the time that our global societies are becoming more complex and need sophisticated responses. Issues such as migration, debt and global taxation are too complex to be boiled down to a sound bite. It is telling that few have suggested turning our judiciary over to the court of public opinion!

H. L. Mencken, a well-known journalist from the first half of the 20th century who wrote extensively on society and democracy, once said: “For every complex problem there is a solution that is concise, clear, simple, and wrong.” An overly crowd-oriented approach to democracy produces exactly these simple answers, dumbing down our decision makers.

The danger doesn’t stop at our leaders, it also extends to the running of our organisations. We work more collaboratively than ever before. Technology is enabling us to source solutions from the crowd to almost any problem. This can work brilliantly for many problems such as getting a quick view on whether a brand message is going to resonate, or if a product would appeal to a particular demographic.

Where it can let us down is when we start trying to ask too many people to provide input to complex problems. Great design, sophisticated modelling and radical new product offerings don’t lend themselves well to having a large number of people collaborate to find the answer.

Collaboration and the use of the crowd need to be targeted to the places where they work best. This is going to be more important than ever as more people move to the “gig economy”, using platforms like 99designs, Expert360, Topcoder or 10EQS to manage their work. The most successful organisations are going to learn which questions and problems the crowd can solve for them.

Questions that require a simple, technical answer seem to suit this form of working well. Similarly, problems that can be solved with well-defined technical solutions are well suited to handing out to a group of strangers.

Ask the crowd for genuine novelty, though, and it either completely rejects the status quo (the crowd as a protest movement), with little to offer in terms of alternative approaches, or it merely tweaks current solutions (the crowd without depth). Even the individual sourced through the crowd is unlikely to rock the boat, lacking context for the problem they’re trying to solve.

The way we work and solve problems as a society and in our organisations is changing. The one thing we know for sure is that we don’t yet know how this will really work!

Category: Enterprise Content Management, Enterprise2.0, Open Source, Web2.0

by: Robert.hillard
23  Jan  2016

My digital foundations #2

It’s time to set some principles to support the choices you are making for your personal digital architecture. This second instalment of digital foundations will help you extend your architecture to protect your digital content and assets.

In the first instalment of My digital foundations we established a foundation identity and managed the passwords that are ubiquitous to our digital lives. Now we can build on this identity and start to manage our content.

Some principles

Everyone talks about the cloud. This is really no more than moving your data to someone else’s servers and accessing the content through the Internet. The advantages of using the cloud are just as applicable to our digital lives as they are for the organisations that you work for or with.

Moving to the cloud does not mean that content isn’t also being stored locally. It does mean, however, that all content is stored remotely and sometimes replicated locally. The three best examples of content to make sure is properly in the cloud are the files you keep in folders, your digital media (sometimes in folders and sometimes within a media app) and your email.

You also want to ensure that precious digital content is not exposed to a single-hit attack. You may be feeling particularly secure because your lifetime of photos is sitting on a cloud drive, held both on the hard drive of your computer and the server of your provider. Imagine, however, what happens if you are attacked by malware that locks or otherwise scrambles your hard drive (or even just a family member hitting “delete”). Before you know it, the server copy has also been lost or scrambled.

Some cloud services offer version history, which would be a fall-back. A better alternative is to have a physical copy of precious content as well as the cloud version. The physical copy can sit in any location while the virtual copies are protected by the cloud.

Cloud drives and digital media

One of the most important services in your digital life is your cloud drive. There is a plethora of options out there (for example, see Wikipedia’s list of file hosting options).

While most providers do not allow you to have more than one account connected to your device, they do generally allow each account to share folders. For this reason, don’t be tempted to log into the service from work and home using the same account; rather, create separate accounts and then share appropriate folders. A similar approach should be applied to each member of your family by creating family folders.

Managing photos and digital media is more complex. Many people are simply overwhelmed by the complexity of downloading the photos and videos from their smartphone. Worse, even if you do work it out, the providers are constantly changing and the approach that works today may not be available tomorrow. It is worth investigating options for cloud based photo and video storage and the associated procedure for downloading images and videos from your device. If you want one approach that will stand the test of time, consider simply using your cloud drive as the primary home of your images and export them to your album product of choice.

Email client

The future of email remains controversial, but it is very likely here to stay (see Email works too well). You should treat email like your paper correspondence (particularly as more and more of your bills and communications end up in this form). You probably keep a file for your papers with tabs for A to Z and should consider doing something similar for your email folders.

It is also likely that you will end up with more than one email account that you want to map to your email client. The most common mistake people make is to leave email stranded on their hard drive by running an email client, such as Microsoft Outlook or Mozilla Thunderbird, against a local store.

You need somewhere to consolidate email centrally that can also act as a webmail client and a central, cloud-based store that will be persistent for many years. Arguably services such as Yahoo!, Google and Microsoft dominate this space, but they are by no means the only providers (see Comparison of webmail providers).

In My digital foundations #1 you chose such a service for your digital network administration. The decision you made then might inform this choice, but it does not have to be the same. Either way, you can choose to map that email account to this service.

With a central store, you are now free to choose an email client. Your one requirement is to ensure the client uses the IMAP protocol and not POP, so that it leaves the email in your central consolidated store. For a list of candidates see Comparison of email clients.
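If you want to check that your mail really does stay in the central store, a few lines of Python will list the folders held on the server. This is a minimal sketch; the host, account and password are placeholders for your own provider’s details.

----------------------------------------------------
# Minimal sketch: list the folders an IMAP server holds.
# With IMAP the folders and mail stay on the server; POP would drain them.
# Host and credentials below are placeholders.
import imaplib

conn = imaplib.IMAP4_SSL("imap.example.com")
conn.login("you@example.com", "app-password")
status, folders = conn.list()
for folder in folders:
    print(folder.decode())
conn.logout()
----------------------------------------------------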

Your architecture

In just two posts, your digital life is supported by a cloud-based network with the potential for numerous participants and elements that manage your most important content. It doesn’t take long before the network is more complex than you can easily visualise without a supporting architecture diagram.

Keeping track of the information flows and making sure that your family is safe will be the subjects covered in the next instalment of “my digital foundations”.

Category: Enterprise Content Management

by: Robert.hillard
26  Sep  2015

Email works too well

Everyone who regularly feels overwhelmed by their email would agree that there is a problem.  The hundreds of articles about the issue typically make the same assumption and are wrong. Writer after writer bemoans email as inefficient and an obstacle to productivity. The problem isn’t that email is inefficient, rather, it is too efficient and knows no boundaries.

If email wasn’t productive, only a fraction of today’s emails would be sent.  It allows today’s worker to answer more questions from, and give more directions to, their colleagues in a shorter amount of time than past generations could have imagined possible.

Personal boundaries

In years gone by, work was done at the office.  To continue working at home meant loading paper files into a briefcase, a process that put natural limits on how much could be done out-of-hours.  Even if a large number of memos were written, replies were not going to be received until they went through the internal mail or postal service.

I’ve written before about using email more effectively (see Reclaim email as a business tool).  While there is much that we can do individually, I believe that email is the sharp edge of a technology wedge that challenges our fundamental assumptions about the way we work.

This technology wedge has removed many natural barriers to working as much as we choose.  Many argue that the freedom to work puts the onus of responsibility back on the individual.  This may be true, but we have not invested in developing skills for individuals to know how much work is enough.  We also need to decide, as a society, what attributes we want to encourage in our most successful workers.

A competitive economy means that the most ambitious constantly outdo each other to deliver value for their workplace.  Many see pulling-back, by putting boundaries in place, as reducing their competitiveness in a tough world.

Recognising this, some countries have attempted to put limits in place, most famously the French 35-hour working week.  The challenges of global employers and an economic downturn have arguably blunted the impact of limiting the hours that individuals work.  In a world where work and play can be divided into seconds as an email is checked while waiting in a supermarket queue, what is the meaning of working hours anyway?

If countries can’t put limits in place, some employers are taking matters into their own hands in an attempt to improve the wellbeing of their staff.  For instance, Daimler employees return from leave with an empty inbox.

Alternatives to email

Given that email overload is such a problem, it is no wonder that a wide range of alternatives has been suggested.  Workflow and Enterprise Content Management (ECM) have been high on the list for a long time now, yet neither has made much of a dent in our inboxes.

Perhaps the issue is the sheer flexibility of email and the cost of trying to configure workflow or ECM solutions for each individual use case.  They definitely have an important role to play, but the amount of email they actually displace is relatively small (and in fact they rely on email as their notification mechanism).

More recently, social media has been heralded as the email killer.  As convenient and extensively used as messaging on these platforms is, it has not replaced the inbox.  In fact, more and more people are configuring their email clients to be their interface into the messaging stream.

The reason each of these technologies has so comprehensively failed in its quest to rid us of email overload is that email works really well if your goal is to do more work.  It is, however, encouraging us to work too much and facilitating a form of communication that is often confrontational and can damage staff wellbeing and the relationships between employees.

The opportunity

To replace email, designers of new solutions have to throw out their old assumption of email being an inefficient tool.  Rather than trying to make email more efficient they need to focus on fixing the three real issues: 1) email is efficient but overwhelming; 2) there is no way of naturally limiting the amount of work hitting our inboxes; and 3) job sharing and delegation while absent does not work well.

Before launching headlong into building new products, there is a huge opportunity for research into each of these issues.  How can the problems be measured?  Which organisations are better at reducing their impact and what are the attributes of the solutions they have used?  Is there any evidence that junk email is actually taking a material amount of time for the average worker?  What are the health impacts of information overload and does it matter to society?

Just because existing email alternatives have missed the point, by assuming that email is somehow unproductive, that doesn’t mean there aren’t better solutions.  Once email is seen as a productivity tool, the focus will move from email filtering and simplification to workload management and sharing.

We need to decide as professionals how we want to work.  Should time away from the office be protected?  Do we really know how much work we are doing in an average week?  Is it OK for those seeking to get ahead to simply do more work than anyone else?

Those of us in the middle of our careers are the first generation to work electronically.  As pioneers we have a responsibility to set the tone for generations to come.  Will we sentence our children to work in white collar sweatshops or realise the potential of technology to create better workplaces for everyone?

Category: Enterprise Content Management, Enterprise2.0

by: Robert.hillard
22  Feb  2015

Making the case for jargon, acronyms and clear language

All over the web, authors are ranting about the misuse of the English language in business.  It’s an easy article to write, picking out examples of jargon and the general torturing of sentences in the name of explaining apparently simple concepts while making the writer seem more impressive.

Some great examples from business can be found in this Forbes article: The Most Annoying, Pretentious and Useless Business Jargon.  Examples that I like (or hate) include terms like “swim lanes”, “best practice” and “core competence”.

There are good reasons for using jargon

Rather than take cheap shots, let’s start by asking why people use jargon in the first place.  There are often good reasons for using most of the terms, even the ones that we all hate.  Every discipline has its own special language, whether it is science, architecture, law or business across every industry.  This use of language isn’t restricted to technical terminology; it also includes the way that concepts are expressed, which can seem obscure or even absurd to an outsider.

While often justified, the use of jargon acts as a tool to exclude these outsiders from participating in the conversation.  This is a problem because there is good evidence that some of the most exciting solutions to problems are multi-disciplinary.

Experts seldom realise that they might be missing out on a breakthrough which comes from another field.  Business, for instance, is borrowing from ideas in physics (such as thermodynamics) in understanding the dynamics of markets as well as biology to understand how organisations evolve.

Just as fields of science, medicine and law have their own language, so does every aspect of business such as human resources, finance, sales et cetera.  Even in general business, diversity of thought comes up with a better answer almost every time.  Getting to those ideas is only possible if the discussion is intelligible to all.

Jargon to avoid debate

While many discussions use jargon and acronyms as a legitimate shortcut, some use of jargon reflects a lack of understanding by the participants or even an attempt to avoid debate in the first place.  Take the example of “metadata”, a term which has appeared in many countries as governments struggle with appropriate security, privacy and retention regimes.

A plain English approach would be to describe the definition of metadata in full in every discussion rather than take the shortcut of using the term on its own.  The reality is that even landing on a definition can lead to an uncomfortable debate, but definitely one worth having, as the Australian Attorney General learned to his detriment in this interview, where the purpose of a very important debate was lost in a confusing discussion on what metadata actually is.

The Attorney General isn’t on his own; many executives have been challenged in private and public forums to explain the detail behind terms they’ve commonly used, only to come unstuck.

Sometimes people use jargon, like metadata, swim lanes and best practice because they are avoiding admitting they don’t know the detail.  Other times, they are legitimately using the terms to avoid having to repeat whole paragraphs of explanation.  Of course, this is where acronyms come into their own.

Balancing the needs of the reader and author

Given that it takes at least five times as long to write something as it does to read it (and a factor of ten is more realistic for anything complex) the authors of documents and emails can be forgiven for taking a shortcut or two.

The problem is when they forget that every communication is an exchange of information and, while information has value, the value is not necessarily the same for both the writer and reader.

For example, there is little more annoying than receiving an email which requires extensive research to work out what the TLAs contained within it actually stand for.  Of course, a TLA is a “three letter acronym” (the most popular length of acronym, with examples including everything from “BTW” for “by the way” through to “LGD” for “loss given default”).

Our propensity for short messages has only increased due to our rapid adoption of texts, instant messaging and Twitter.  I’ve written before about the role of email in business (see Reclaim email as a business tool).  Clarity of meaning is fundamental to all communication regardless of the medium.

Given that it does take so much longer to write than to read, it makes sense for the writer to take short-cuts.  However, if the information is of the same value to both the writer and reader, then the shortcut needs to offer a tenfold benefit to the writer to make up for the additional cost in time to the reader who has to decode what the shortcut means.

This equation gets worse if there are multiple readers, if the benefit is greater to the writer than the reader or when the writer has the advantage of context (that is, they are already thinking about the topic so the jargon and acronyms are already on their mind).
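A back-of-the-envelope calculation makes the point.  The numbers here are invented for illustration: an acronym that saves its writer 30 seconds but costs each of ten readers two minutes of decoding destroys far more time than it saves.

----------------------------------------------------
# Back-of-the-envelope cost of a shortcut; all numbers are illustrative.
writer_saving_secs = 30      # time saved by typing the acronym
reader_cost_secs = 120       # time for one reader to decode it
readers = 10

net_secs = writer_saving_secs - reader_cost_secs * readers
print(net_secs)              # -1170: the shortcut costs roughly 20 minutes overall
----------------------------------------------------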

In short, there is seldom a benefit to using jargon or acronyms in an email without taking a moment to either spell them out or provide a link to definitions that are easily accessed.

Is this the best you can do?

Perhaps the need to make sure that a reader’s time is spent wisely is best summed up in this anecdote told by Ambassador Winston Lord about Henry Kissinger (former US Secretary of State).

After Ambassador Winston Lord handed Henry Kissinger a report he had worked on diligently, Kissinger famously asked him, “Is this the best you can do?”  The question was repeated on each draft until Lord reached the end of his tolerance, saying “Damn it, yes, it’s the best I can do.”  To which Kissinger replied: “Fine, then I guess I’ll read it this time.” (sourced from Walter Isaacson, “Kissinger: A Biography”, Simon & Schuster, 1992).

Category: Enterprise Content Management, Information Value

by: Robert.hillard
25  Oct  2014

Your personal cloud meets the enterprise

In organisations around the world employees are accidentally merging their personal and professional cloud applications, with dire results.  Some of the issues include the routing of sensitive text messages to family members and the replication of confidential documents onto the servers of competitors.

Our personal and work lives are merging.  When done well, this can be a huge boost to our productivity and personal satisfaction.  The rise of the smart mobile device has been a major part of this merge, driving Chief Information Officers towards various flavours of Bring Your Own Device (BYOD) policies.

With the advent of a myriad of cloud services delivered over the internet, our individual cloud architectures are becoming far more complex.  The increasing trend to move beyond BYOD to Bring Your Own Application (BYOA) means that the way individuals configure their own personal technology has the potential to directly impact their employer.

Bringing applications and data to work

Many of our assumptions about workplace technology are based on the world of the past where knowledge workers were rare, staff committed to one employer at a time and IT was focused on automating back office processes.

In this environment, one employee could be swapped out for another easily and all were prepared to conform to a defined way of working.  Our offices were perhaps best compared to Henry Ford’s production line of the early twentieth century where productivity required conformity.

It’s no wonder that IT has put such a high value on a Standard Operating Environment.

However, new styles of working have taken hold (see Your insight might protect your job). Businesses and government are employing more experts on a shared basis.  An increasing proportion of the workforce is best described as “knowledge workers”, and a sizeable number of these people are choosing to work in new ways, including spreading their week over several organisations.  Even full-time staff find that their employers have entered into complex joint ventures, meaning that the definition of the enterprise is constantly shifting.

Personal productivity is complex and highly dependent on the individual.  The approach that works for one person is anathema to another.  Telling people to work in a standard way is like adding a straitjacket to their productivity.  Employees should be allowed to use their favourite tools to schedule their time, author documents, create presentations and take notes.

Not only is the workplace moving away from lowest common denominator software, the increasing integration between mobile, tablet and personal computers means that the boundaries between them are becoming less clear.  It all adds up to the end of the Standard Operating Environment.

Avoiding fragmentation

CIOs are right to worry that BYOD and BYOA will result in their data being spread out over an unmanageable number of systems and platforms.  It is no longer workable to simply demand that all information be kept in a single physical “source of truth” database or repository.  I’ve previously argued that the foundation of a proper strategy is to separate the information from the business logic or processes that create it (see Your data is in the cloud when…).

Some of the simplest approaches that CIOs have taken to manage their BYOD environment have involved virtualisation solutions where a virtual desktop with the Standard Operating Environment is run over a client which is available on many devices.

While this is progress, it barely touches the productivity question.  While workers can choose the form factor of the device that suits them, they are still being forced into the straitjacket of lowest common denominator business applications.

The vendors are going to continue to provide better solutions which put the data in a standard form while allowing access by many (even competing) applications.  It’s not just about one copy of the data on a database, but rather about allowing controlled replication, with digital watermarks that track the movement of this data and support loss prevention.

The upside

While CIOs may see many downsides, the upsides go beyond the productivity of individual workers.

For example, organisations constantly struggle with managing their staff master data, but in a world of personal social media employees will effectively identify themselves (see Login with social media).

Managing software licenses, even in the most efficient organisation, is still an imperfect science at best, with little motivation for users to optimise their portfolio.  When employees can bring their own cloud subscriptions to work, with an agreed allowance paid to them, the choices they make are much more tailored to their actual needs today rather than to what they might need in months or even years to come.

Organisations that have grappled with provisioning new PCs are seeing the advantages of the consumer app stores with employees self-administering deployment between devices.  Cloud and hardware providers are increasingly recognising the complex nature of families and are enabling security and deployment of content between members.  The good news is that even the simplest family structure is more complicated than almost any enterprise organisation chart!

Architecture matters

We see a hint of bad architecture when staff misconfigure their iPhones and end up with their (potentially sensitive) text messages being shared with their spouse or wider family, or when contractors use their personal cloud drive while working across more than one organisation.  Even worse is when it goes really wrong and a ransomware breach on a personal computer infects all of these shared resources!

The breadth of services that the personal cloud covers is constantly growing.  For many, it already includes their telecommunications, voicemail, data storage, diary, expense management, timesheets, authoring of office documents, messaging (email and texts), professional library, project management, diagram tools and analytics.  Architecture is even beginning to matter in social media with the convergence of the tools most of us use (see The future of social networks).

Some employers fear the trend of cloud, BYOD and BYOA will lead to the loss of their organisation’s IP.  The smart enterprise, however, is realising that well-managed cloud architectures and appropriate taxonomies can help rather than hinder employees to know what’s theirs and what’s yours.

In the near future you may even start choosing staff based on the quality of their personal cloud architecture!

Category: Enterprise Content Management, Enterprise2.0, Web2.0

by: John McClure
06  Mar  2014

Grover: A Business Syntax for Semantic English

Grover is a semantic annotation markup syntax based on the grammar of the English language. Grover is related to the Object Management Group’s Semantics of Business Vocabulary and Rules (SBVR), explained later. Grover assigns roles to common parts of speech in the English language so that simple, structured English phrases can be used to name and relate information on the semantic web. The clearer the syntax, the more valuable and useful the semantic web becomes.

An important open-source tool for semantic databases is SemanticMediaWiki, which permits anyone to create a personal “wikipedia” in which private topics are maintained for personal use. The Grover syntax is based on this semantic tool and the friendly wiki environment it delivers, though the approach below might also be amenable to other toolsets and environments.

Basic Approach. Within a Grover wiki, syntax roles are established for classes of English parts of speech.

  • Subject:noun(s) -- verb:article/verb:preposition -- Object:noun(s)

refines the standard Semantic Web pattern:

  • SubjectURL -- PredicateURL -- ObjectURL

while in a SemanticMediaWiki environment, with its relative URLs, the pattern is:

  • (Subject) Namespace:pagename -- (Predicate) Property:pagename -- (Object) Namespace:pagename.


nouns
In a Grover wiki, topic types are nouns (more precisely, nounal expressions), and these are concepts. Every concept is defined by a specific semantic database query, these queries being the foundation of a controlled enterprise vocabulary. In Grover every pagename is the name of a topic and every pagename includes a topic-type prefix. Example: Person:Barack Obama and Title:USA President of the United States of America, two topics related through one or more predicate relations, for instance “has:this”. Wikis are organized into ‘namespaces’: each page’s name is prefixed with a namespace name, and namespace names function equally as topic-type names. Additionally, an ‘interwiki prefix’ can indicate the URL of the wiki where a page is located, in a manner compatible with the Turtle RDF language.

Nouns (nounal expressions) are the names of topic-types and/or of topics; in ontology-speak, nouns are class resources or individual resources, but rarely are nouns defined as property resources (and thereby used as a ‘predicate’ in the standard Semantic Web pattern mentioned above). This noun requirement is a systemic departure from today’s free-for-all, which allows nouns to be part of the name of predicates, leading to ontologies that are problematic from the perspective of common users.

verbs
In a Grover wiki, “property names” are an additional ontology component forming the bedrock of a controlled semantic vocabulary. Being pages in the “Property” namespace means these are prefixed with the namespace name, “Property”. An XML namespace is also directly implied; for instance, has:this implies a “has” XML namespace. The full pagename of this property is “Property:has:this”. The tenses of a verb (infinitive, past, present and future) are each an XML namespace, meaning there are separate have, has, had and will-have XML namespaces. The modalities of a verb, may and must, are also separate XML namespaces. Lastly, the negation forms of verbs (involving not) are additional XML namespaces.

The “verb” XML namespace name is only one part of a property name. The other part of a property name is either a preposition or a grammatical article. Together, these comprise an enterprise’s controlled semantic vocabulary.

prepositions
As in English grammar, prepositions are used to relate an indirect object, or the object of a preposition, to a subject in a sentence. Example: “John is at the Safeway” uses a property named “is:at” to yield the triple Person:John -- is:at -- Store:Safeway. There are approximately one hundred English prepositions possible for any particular verb XML namespace. Examples: had:from, has:until and is:in.
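To see how such a property maps onto standard RDF machinery, here is a sketch using the rdflib Python library. The base URIs are invented for illustration; Grover itself does not prescribe them.

----------------------------------------------------
# Sketch: the triple Person:John -- is:at -- Store:Safeway in rdflib.
# The base URIs below are invented for illustration.
from rdflib import Graph, Namespace

WIKI = Namespace("http://example.org/wiki/")    # topic pages
IS = Namespace("http://example.org/verb/is#")   # the "is" verb namespace

g = Graph()
g.add((WIKI["Person:John"], IS["at"], WIKI["Store:Safeway"]))
print(g.serialize(format="turtle"))
----------------------------------------------------
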
articles
As in English grammar, articles such as “a” and “the” are used to relate direct objects or predicate nominatives to a subject in a sentence. As with prepositions above, articles are associated with a verb XML namespace. Examples: has:a, has:this, has:these, had:some, has:some and will-have:some.

adjectives
In a Grover wiki, definitions in the “category” namespace include adjectives, such as “Public” and “Secure”. These categories are also found in a controlled modifier vocabulary. The category namespace also includes definitions for past participles, such as “Secured” and “Privatized”. Every adjective and past participle is a category in which any topic can be placed. A third subclass of modifiers includes ‘adverbs’, categories in which predicate instances are placed.

That’s about all that’s needed to understand Grover, the Business Syntax for Semantic English! Let’s use the Grover syntax to implement a snippet from the Object Management Group’s Semantics of Business Vocabulary and Rules (SBVR), which has statements such as this for “Adopted definition”:

adopted definition
Definition: definition that a speech community adopts from an external source by providing a reference to the definition.
Necessities: (1) The concept ‘adopted definition’ is included in Definition Origin. (2) Each adopted definition must be for a concept in the body of shared meanings of the semantic community of the speech community.


Now we can use Grover’s syntax to ‘adopt’ the OMG’s definition for “Adopted definition”.
Concept:Term:Adopted definition -- is:within -- Concept:Definition
Concept:Term:Adopted definition -- is:in -- Category:Adopted
Term:Adopted definition -- is:a -- Concept:Term:Adopted definition
Term:Adopted definition -- is:also -- Concept:Term:Adopted definition
Term:Adopted definition -- is:of -- Association:Object Management Group
Term:Adopted definition -- has:this -- Reference:http://www.omg.org/spec/SBVR/1.2/PDF/
Term:Adopted definition -- must-be:of -- Concept:Semantic Speech Community
Term:Adopted definition -- must-have:some -- Concept:Reference

This simplified but structured English permits the widest possible segment of the populace to participate in constructing and perfecting an enterprise knowledge base built upon the Resource Description Framework.
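Because every statement has the same Subject -- predicate -- Object shape, a block of them is trivially machine-readable. Here is a small Python sketch; the splitting rule is an assumption based on the examples above, not part of any Grover specification.

----------------------------------------------------
# Sketch: turn Grover-style statements into (subject, predicate, object) tuples.
# Assumes " -- " never appears inside a name, as the examples suggest.
def parse_grover(text):
    triples = []
    for line in text.strip().splitlines():
        subject, predicate, obj = (part.strip() for part in line.split(" -- "))
        triples.append((subject, predicate, obj))
    return triples

statements = """\
Term:Adopted definition -- is:a -- Concept:Term:Adopted definition
Term:Adopted definition -- has:this -- Reference:http://www.omg.org/spec/SBVR/1.2/PDF/"""

for triple in parse_grover(statements):
    print(triple)
----------------------------------------------------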

More complex information can be specified on wikipages using standard wiki templates. For instance to show multiple references on the “Term:Adopted definition” page, the “has:this” wiki template can be used:
{{has:this|Reference:http://www.omg.org/spec/SBVR/1.1/PDF/;Reference:http://www.omg.org/spec/SBVR/1.2/PDF/}}
Multi-lingual text values and resource references would be as follows, using the wiki templates (a) {{has:this}} and (b) {{skos:prefLabel}}:
{{has:this |@=en|@en=Reference:http://www.omg.org/spec/SBVR/1.2/PDF/}}
{{skos:prefLabel|@=en;de|@en=Adopted definition|@de=Angenommen definition}}

One important feature of the Grover approach is its modification of our general understanding of how ontologies are built. Today, ontologies specify classes, properties and individuals; a data model emerges from listings of range/domain axioms associated with a property’s definition. Under Grover, by contrast, an ontology’s data models are explicitly stated with deontic verbs that pair subjects with objects; this is an intuitively stronger and more governable approach for such a critical enterprise resource as the ontology.

Category: Business Intelligence, Enterprise Content Management, Enterprise Data Management, Enterprise2.0, Information Development, Semantic Web

by: Robert.hillard
22  Feb  2014

The race to the internet of things is a marathon

The PC era is arguably over and the age of ubiquitous computing might finally be here.  Its first incarnation has been mobility through smartphones and tablets.  Many pundits, though, are looking to wearable devices and the so-called “internet of things” as the underlying trends of the coming decade.

It is tempting to talk about the internet of things as simply another wave of computing like the mainframe, mid-range and personal computer.  However there are as many differences as there are similarities.

While the internet of things is arguably just another approach to solving business problems with computing devices, the units of measure are not so directly tied to processing power, memory or storage.  The mass of interconnections means that network latency, switching and infrastructure integration play just as big a part, for the first time divorcing a computing trend from “Moore’s Law” and the separate but related downward trend in storage costs.

Does it matter?

The internet of things brings the power of the information economy to every part of society, providing an opportunity to repeat many of the gains that business and government experienced through the last decade of the twentieth century.  Back then there was a wave of new applications that allowed for the centralisation of administration, streamlining of customer service and outsourcing of non-core activities.

Despite all of the apparent gains, however, most infrastructure has remained untouched.  We still drive on roads, rely on electricity and utilise hospitals that are based on technology that really hasn’t benefited from the same leap in productivity that has supercharged the back office.

The 40 year problem

It’s not hard to build the case to drive our infrastructure through the network.  Whether it is mining, transport, health or energy, it is clear that there are mass benefits and efficiencies that are there for the taking.  Even simple coordination at a device level will run our communities and companies better for everyone’s benefit.  Add in the power of analytics and we can imagine a world that is dancing in synchronisation.

However, just as this new wave of computing doesn’t adhere to Moore’s Law, it also lacks the benefit of rapid consumer turnover.  We’ve accepted the need to constantly replace our PCs and smartphones, driving in turn the growth of an industry.  But it’s one thing to upgrade one or two devices per person every few years; it is quite another to replace the infrastructure that surrounds us.

Much of what we work with day-to-day in roads, rail, energy and all of the other infrastructure around us has a lifespan that is measured in decades or even half centuries.  Even with a compelling business case, it is not likely to be enough to replace embedded equipment that still has decades of economic value ahead of it.

If we don’t want to wait until our grandchildren are old to benefit from the internet of things, we have to find a way to implement it in a staggered way.

Making a staggered implementation work

Anyone who has renovated their house will have considered whether to build in new technology.  For most, network cabling is a basic, but centralised control of lighting and other devices is a harder ask.  Almost anything that is embedded in the walls runs the risk of being obsolete even before the builder hands over the keys.

The solution is independent and interoperable devices.  Rather than building-in security cameras, consumers are embracing small portable network cameras that they can access through cloud-based services.  The same providers are increasingly offering low-cost online light switches that don’t require anything more than an electrician to replace the faceplate.  This new generation of devices can be purchased as needed rather than all at once and raise little risk of uneconomic obsolescence.

Google’s purchase of Nest appears to be an investment in this incremental, cloud-based future of interconnected home-based devices.  By putting Nest in its stable of technologies, it is a fair bet that Google sees the value in offering analytics to optimise and better maintain a whole range of services.

Just as companies focused on the consumer at home are discovering an approach that works, so too are those vendors that think about bigger infrastructure including energy, mining and major transport assets.

Future proofing

The car industry is going to go through a revolution over the next decade.  After years of promise, new fuel technologies are bearing fruit.  It’s going to be both an exciting and uncertain ride for all involved.  Hybrid, plug-in electric, swappable-battery electric and hydrogen all vie for roles in the future.

Just as important as fuel technology is the ability to make cars autonomous.  So-called “range anxiety”, the fear of running out of power in an electric car, is eliminated if the car’s computer makes the driving decisions, including taking responsibility for ensuring an adequate charge or access to charging stations.

Arguably the most exciting developments are in car software rather than hardware.  As with smartphones, manufacturers are treating their vehicles as platforms for future apps.  New cars are shipping with technology whose capabilities current software has yet to exploit, such as that needed to run an autonomous vehicle.

Early beneficiaries are the insurance industry (telematics), car servicing providers (who can more actively target customers) and a whole range of innovators who are able to invent solutions that as drivers we never even knew we needed.

Perhaps a tie-up between innovators like Tesla and Apple will show what connected cars and infrastructure could do!

Winning the marathon

Unlike previous waves of computing, the internet of things is going to take many years to reach its full potential.  During this transition business opportunities that are unimaginable today are going to emerge.  Many of these lie in the data flowing across the internet of things that is genuinely big (see It’s time for a new definition of big data).

These new solutions can be thought of as permutations of cloud services (see Cloud computing should be about new business models).  Those wanting to try to imagine the world of possibilities should remember that the last twenty years have taught us that the consumer will lead the way and that incumbents are seldom the major innovators.

However, incumbent energy and infrastructure companies that do want to take the lead could do well by partnering with consumer technology companies.  The solutions they come up with are likely to include completely different relationships between a range of providers, services and customers.

Category: Enterprise Content Management, Enterprise2.0, Information Strategy, Information Value, Web2.0

by: Robert.hillard
20  Jan  2014

Your data is in the cloud when…

It’s fashionable to be able to claim that you’ve moved everything from your email to your enterprise applications “into the cloud”.  But what about your data?  Just because information is stored over the Internet, it shouldn’t necessarily qualify as being “in the cloud”.

New cloud solutions are appearing at an incredible rate.  From productivity to consumer applications the innovations are staggering. However, there is a world of difference between the ways that the data is being managed.

The best services treat the application separately from the data that supports it and make the content easily available from outside the application.  Unfortunately there is still a long way to go before the aspirations of information-driven businesses can be met by the majority of cloud services, which continue to lock away the content and keep the underlying models close to their chest.

An illustrative example of this approach at its best is a simple drawing solution, Draw.io.  Draw.io is a serious threat to products that support the development of diagrams of many kinds.  Draw.io avoids any ownership of the diagrams by reading and saving XML and leaving it to the user to decide where to put the content, while making it particularly easy to integrate with your Google Drive or Dropbox account, keeping the content both in the cloud and under your control.  This separation is becoming much more common, with cloud providers likely to bifurcate between the application and data layers.

You can see Draw.io in action as part of a new solution for entity-relationship diagrams in the tools section of www.infodrivenbusiness.com.
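The pattern behind this is simple: the application only ever touches a document it is handed, and the user decides where that document lives.  Here is a minimal sketch of the same separation; the file contents and paths are invented, with a folder synced by your cloud drive playing the storage role.

----------------------------------------------------
# Sketch of the application/data separation described above:
# the app reads and writes a plain document; storage is the user's choice.
from pathlib import Path

def save_diagram(xml_text, path):
    p = Path(path)
    p.parent.mkdir(parents=True, exist_ok=True)
    p.write_text(xml_text, encoding="utf-8")

def load_diagram(path):
    return Path(path).read_text(encoding="utf-8")

# e.g. a folder replicated by your cloud drive of choice
save_diagram("<diagram>...</diagram>", "CloudDrive/diagrams/erd.xml")
print(load_diagram("CloudDrive/diagrams/erd.xml"))
----------------------------------------------------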

Offering increasing sophistication in data storage are the fully integrated solutions such as Workday, Salesforce.com and the cloud offerings of the traditional enterprise software companies such as Oracle and SAP.  These vendors are realising that they need to work seamlessly with other enterprise solutions either directly or through third-party integration tools.

Also important to watch are the offerings from Microsoft, Apple and Google which provide application logic as well as facilitating third-party access to cloud storage, but lead you strongly (and sometimes exclusively) towards their own products.

There are five rules I propose for putting data in the cloud:

1. Everyone should be able to collaborate on the content at the same time
To be in the cloud, it isn’t enough to back up the data on your hard disk drive to an Internet server.  While obvious, this is a challenge to solutions that claim to offer cloud but have simply moved existing file and database storage to a remote location.  Many cloud providers are now offering APIs to make it easy for application developers to offer solutions with collaboration built in.

2. Data and logic are separated
Just like the rise of client/server architectures in the 1990s, cloud solutions are increasingly separating the tiers of their architecture.  This is where published models and the ability to store content in any location is a real advantage.  Ideally the data can be moved as needed providing an added degree of flexibility and the ability to comply with different jurisdictional requirements.

3. The data is available to other applications regardless of vendor
Applications shouldn’t be a black box.  The trend towards separating the data from the business logic leads inexorably towards open access to the data by different cloud services.  Market forces are also leading towards open APIs and even published models.

4. The data is secure
The content not only needs to be secure, but it also needs to be seen to be secure.  Ideally it is only visible to the owner of the content and not the cloud application or storage vendor.  This is where those vendors offering solutions that separate application logic and storage have an advantage given that much of the security is in the control of the buyer of the service.

5. The data remains yours
I’ve written about data ownership before (see You should own your own data).  This is just as important regardless of whether the cloud solution is supporting a consumer, a business or government.

Category: Enterprise Content Management, Enterprise Data Management, Information Development, Information Governance, Information Management

by: John McClure
23  Dec  2013

Semantic Notations

In a previous entry about the Worldwide Web Consortium (W3C)’s RDF PROVenance Ontology, I mentioned that it includes a notation aimed at human consumption — wow, I thought, that’s a completely new architectural thrust by the W3C. Until now the W3C has published only notations aimed at computer consumption. Now it is going to be promoting a “notation aimed at human consumption”! So here are examples of what is being proposed.
----------------------------------------------------
[1] entity(e1)
[2] activity(a2, 2011-11-16T16:00:00, 2011-11-16T16:00:01)
[3] wasDerivedFrom(e2, e1, a, g2, u1)
----------------------------------------------------

[1] declares that an entity named “e1” exists. This could have been “book(e1)”, presumably, so any subclass of prov:Entity can be referenced instead. Note: the prov:entity property and the prov:Entity class are top concepts in the PROV ontology.
[2] should be read as “activity a2, which occurred between 2011-11-16T16:00:00 and 2011-11-16T16:00:01”. An ‘activity’ is a sub-property of prov:influence, just as an ‘Activity’ is a sub-class of prov:Influence, both top concepts in the PROV ontology.
[3] shows additional details for a “wasDerivedFrom” event (a sub-property of prov:wasInfluencedBy, a top concept of the PROV ontology); to wit, the activity a, the generation g2 and the usage u1 resources are each related to the “wasDerivedFrom” relation.
—————————————————-

The W3C syntax above is a giant step towards establishing standard mechanisms for semantic notations. I’m sure, though, that this approach doesn’t yet qualify as an efficient user-level syntax for semantic notation. First, I’d note that the vocabulary of properties essentially mirrors the vocabulary of classes; from a KISS view this ontology design pattern imposes a useless burden on a user, who must learn two vocabularies of similar and hierarchical names, one for classes and one for properties. Secondly, camel-case conventions are surely not amenable to most users (really, who wants to promote poor spelling?). Thirdly, attribute values are not labelled classically (“type:name”); treating resource names opaquely wastes an obvious opportunity for clarifications and discoveries of subjects’ own patterns of information, as well as incidental annotations not made elsewhere. Finally, a small point: commas are not infrequently found in the names of resources, causing problems in this context.

Another approach is to move the strings around in the notations above to achieve a more natural reading for average English speakers. Using a consistent framework of verbs and prepositions for properties named verb:preposition, an approach introduced in an earlier entry, yields an intuitively more interesting syntax with possibilities for future expansion.
----------------------------------------------------
[1] has:this(Entity:e1)
[2] has:this(Activity:a2; Timestamp:2011-11-16T16:00:00; Timestamp:2011-11-16T16:00:01)
[3] was:from(Source:e2; Event:e1; Activity:a; Generation:g2; Usage:u1)
[4] was:from(Source:e1; Source:e2; Source:e3)
----------------------------------------------------

[1] declares that an annotation for a specific page (a subject) has a certain entity named e1, which may or may not exist (that is, be de-referenceable). e1 is qualified as of type “Entity”.
[2] By convention the first value in a set of values provided to a “property function” is the target of the namespaced relation “has:this” with the subject resource being the resource which is being semantically annotated. Each attribute value associated with the relation is qualified by the type of value that it is.
[3] The property wasDerivedFrom is here a relation with a target resource that is to be interpreted as a “Source” kind-of-thing, i.e., “role”. This relation shows four related (perhaps influential) resources.
[4] This is a list of attribute values acceptable in this notation, overloading the ‘was:from’ predicate function for a less tiresome syntax.
—————————————————-

The chief advantages of this approach to semantic notations, in comparison with the current W3C recommendation, are first that it eliminates the need for dual (dueling) hierarchies through its adoption of a fixed number of prepositional properties. Second, it is formally more complete in a lexical sense, yet ripe with opportunities for improved semantic markup, in part through its requirement to type strings. Lastly, it is intuitively clearer to an average user, perhaps leading to a more conducive process for semantic annotation.
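As a closing check that the proposed notation is mechanically tractable, here is a small parsing sketch. The grammar is inferred from the examples above, not taken from any published specification.

----------------------------------------------------
# Sketch: parse "verb:prep(Type:name; Type:name; ...)" into typed arguments.
# The grammar is inferred from the examples above.
import re

def parse_notation(line):
    match = re.match(r"(\S+?)\((.*)\)\s*$", line.strip())
    predicate, arg_text = match.group(1), match.group(2)
    args = [a.strip().split(":", 1) for a in arg_text.split(";")]
    return predicate, [(kind, name) for kind, name in args]

print(parse_notation("was:from(Source:e2; Event:e1; Activity:a)"))
# ('was:from', [('Source', 'e2'), ('Event', 'e1'), ('Activity', 'a')])
----------------------------------------------------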

Category: Data Quality, Enterprise Content Management, Enterprise Data Management, Information Development, Information Governance, Information Management, Information Strategy, Semantic Web, Sustainable Development
