Open Framework, Information Management Strategy & Collaborative Governance | Data & Social Methodology - MIKE2.0 Methodology
Posts Tagged ‘Twitter’

by: Phil Simon
01  Dec  2013

Twitter: Rubbish, Valuable, or Both?

We reach a certain age and the music gets too loud. We believe that the world is going to hell in a hand basket. Things were just better when we were young.

This is always the case, and today is no exception. The complaints can be deafening. Young people don’t read books anymore. Look at what young folks are wearing. We decry the state of society and the future.

Of course, we’ve seen this movie before; every generation does this. Many people my age and older dismiss Twitter and the very idea of tweeting. Somehow, reading books and newspapers was more sophisticated than texting, tweeting, blogging, and friending.

The Remarkable Data Behind 140 Characters

That may be true on some abstract cultural level. Curmudgeon Andrew Keen would wholeheartedly agree with that premise. On a data or technological level, though, nothing could be further from the truth. In fact, the data and technology behind 140 characters are nothing short of remarkable. The BusinessWeek story The Hidden Technology That Makes Twitter Huge details the data and metadata behind each tweet.

From the piece:

All tweets share the same anatomy. To examine the guts of a tweet, you request an “API key” from Twitter, which is a fast, automated procedure. You then visit special Web addresses that, instead of nicely formatted Web pages for humans to read, return raw data for computers to read. That data is expressed in a computer language—a smushed-up nest of brackets and characters. It’s a simplified version of JavaScript called JSON, which stands for JavaScript Object Notation. API essentially means “speaks (and reads) JSON.” The language comes in a bundle of name/value fields, 31 of which make up a tweet. For example, if a tweet has been “favorited” 25 times, the corresponding name is “favorite_count” and “25” is the value.

Think about it: 31 data fields captured for each tweet. I’d bet my house that number will only rise in the coming years. Types of data that we can’t even imagine will one day be tracked, analyzed, and used in unfathomable ways.

The notion of 140 characters may seem inherently limiting. What can you really know from such a small amount of text? The answer depends, but surely a great deal more can be gleaned from a tweet’s metadata. Think about things like a tweet’s location, user, device, date, time, URL, and the like. All of a sudden, tweets can become more informative, contextual, and maybe even predictive.
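To make that anatomy concrete, here is a minimal sketch of pulling a few of those name/value fields out of a tweet’s JSON. The payload below is a trimmed, invented sample, not a real API response:

```python
import json

# A hypothetical, trimmed-down tweet payload. Field names like
# "favorite_count" and "created_at" follow the article's description,
# but the values here are invented for illustration.
raw = '''
{
  "id_str": "123456789",
  "created_at": "Sun Dec 01 14:02:11 +0000 2013",
  "text": "Metadata makes 140 characters go a long way.",
  "favorite_count": 25,
  "user": {"screen_name": "philsimon", "location": "Las Vegas"},
  "source": "Twitter for iPhone"
}
'''

tweet = json.loads(raw)

# The text itself is only one of dozens of name/value fields;
# everything else is context.
print(tweet["text"])
print(tweet["favorite_count"])       # how many times it was favorited
print(tweet["user"]["screen_name"])  # who sent it
print(tweet["source"])               # which client/device produced it
```

Every one of those non-text fields is exactly the kind of metadata that makes a tweet informative beyond its 140 characters.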

Simon Says: Learn from Twitter

The lesson here is two-fold. First, recognize that you can’t predict the future. As I’ve said myriad times in my career, it’s better to have it and not need it than to need it and not have it. With data storage costs plummeting, why not track every type of data you can?

Second, ensure that your organization’s infrastructure can support new data types, emerging sources, and increased volumes. Yes, ETL is still important, but the future is all about APIs. Make sure that your organization is keeping up with the times. The costs of inaction are irrelevance and possibly extinction.


What say you?

Category: Information Value

by: Phil Simon
08  Jan  2013

Google Now and Big Data

Few companies manage information better than Google. If there were an annual award for corporate information management (IM), I’m sure that Google would have won it–or at least finished in the top three–every year over the past decade.

Why take IM so seriously? Because, quite frankly, without it, Google becomes much, much less valuable. Sans information, how does Google really help us? It helps us find what we want; it doesn’t directly give us what we want. In other words, we don’t spend much time on Google itself. We use it to go to the sites that let us buy things.

It turns out that we ain’t seen nothing yet. Google has been fine-tuning Google Now (Google Alerts on steroids). From a recent TechCrunch article:

Google Now is a standard feature of Android Jelly Bean and up. It’s an easily accessible screen that shows you information about your daily commute (because it learns where you go every day and makes an educated guess as to where ‘home’ and ‘work’ are for you), appointments, local weather, upcoming flight and hotel reservations (assuming you give it access to scan your Gmail account) and how your favorite team did last night (it learns that from your search behavior). It also notices when you are not at home and shows you how long it’ll take you to get back to your house, or, if you are travelling, presents you with a list of nearby attractions you may be interested in, the value of the local currency, the time back home and easy access to Google Translate.

In a post-PC world, it’s not hard to understand the vast potential value of a personalized technological companion who can help you navigate an increasingly busy and complex world. With what Google knows about you via email, Web-surfing habits, social connections via Plus, and the like, Google Now may in fact be a game-changer.

Implications for Big Data

But you can only do so much on your Android device. What if you could see things? What if wearable technology and augmented reality could make your life even easier (or creepy, depending on your point of view)? Enter Google’s Project Glass, a pet project of Sergey Brin.

If you think that data is big now, get ready for Really Big Data. What if you could just think about recording a walk in Paris and publish it to YouTube in the process? What if you could review a product on Yelp by talking to yourself at the store? Perhaps speech-to-text technology would then publish that review automatically? Maybe your car will drive you to your next appointment on your Google calendar.

Does this sound Kurzweilian? It should. Google just hired the legendary futurist.

The implications are nearly limitless. As technology continues to evolve into heretofore “protected” areas, more and more data will be generated. Companies like Amazon, Apple, Facebook, Twitter, and Google have the compute power and storage capacity to actually do something with this data. Machine learning, text analytics, natural language processing, and other techniques will help them make sense of it all.

Simon Says

The enabling technologies behind Big Data are getting better every day. We’re just getting started. Your organization ought to be preparing for a data-driven world right now. If not, it may very well fall and not get up.


What say you?

Category: Information Management

by: Phil Simon
24  Aug  2012

The Power of When

One would hope that today most businesses know which of their products sell–and how much. After all, it’s 2012. Beyond that, many companies have spent a great deal of money trying to determine why people buy–and don’t buy–certain products. Now, with advances in information technology, we’re getting closer to answering the when question–as in, when customers spend money.

Not surprisingly, it turns out that mega-retailer Wal-Mart is at the forefront of the all-important when question. It turns out that when people are paid is a major factor in when they buy what they buy. According to a recent Reuters article:

“The paycheck cycle remains pronounced, and there continues to be a lot of uncertainty in the global economy,” Chief Financial Officer Charles Holley said in a recorded message on Thursday.

The world’s largest retailer, viewed as a barometer of economic activity, continues to see signs customers are strapped—spending more at the beginning of the month, when they get their paychecks. [emphasis mine]

That is, budget-conscious customers may be more likely to buy a new cell phone when their bank accounts are flush with cash. By contrast, consumers may well buy diapers and food as needed. In other words, demand for “essential” products is probably more inelastic.
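To make the paycheck cycle concrete, here is a quick sketch of how an analyst might surface it: group daily sales by day of month and compare the start of the month to the end. The figures below are invented for illustration, not Wal-Mart data:

```python
from statistics import mean

# Hypothetical daily sales keyed by day of month (1-28). The numbers
# are made up to illustrate the pattern Holley describes: spending
# clusters right after paychecks land.
daily_sales = {day: 120.0 if day <= 7 else 80.0 for day in range(1, 29)}

early = mean(v for d, v in daily_sales.items() if d <= 7)
late = mean(v for d, v in daily_sales.items() if d > 21)

print(f"Avg daily sales, days 1-7:   {early:.2f}")
print(f"Avg daily sales, days 22-28: {late:.2f}")
print(f"Paycheck-cycle lift: {early / late:.2f}x")
```

A ratio well above 1 in real transaction data would be evidence that the paycheck cycle, not just seasonality, is driving when customers spend.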

Let’s delve deeper into the power of when.

Complementary Goods

Now, to be sure, the question of when isn’t entirely new nor is it unaddressed. Basic economic theory tells us that certain products are more likely to be purchased with one another–a.k.a., complementary goods. For instance, it’s not uncommon for consumers to buy peanut butter and jelly together or salsa with their chips. There’s a very good reason that these products are often if not always stocked near each other in supermarkets.
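That co-purchase intuition is easy to quantify. Here is a minimal sketch, using invented baskets, of computing lift: the ratio of how often two items actually sell together to how often chance alone would predict:

```python
from itertools import combinations
from collections import Counter

# Hypothetical market baskets; a real analysis would use point-of-sale
# transaction logs.
baskets = [
    {"peanut butter", "jelly", "bread"},
    {"peanut butter", "jelly"},
    {"salsa", "chips"},
    {"peanut butter", "bread"},
    {"milk", "chips"},
]

n = len(baskets)
item_counts = Counter(item for b in baskets for item in b)
pair_counts = Counter(frozenset(p) for b in baskets
                      for p in combinations(sorted(b), 2))

def lift(a, b):
    """Lift > 1 suggests the items sell together more than chance predicts."""
    p_a = item_counts[a] / n
    p_b = item_counts[b] / n
    p_ab = pair_counts[frozenset((a, b))] / n
    return p_ab / (p_a * p_b)

print(lift("peanut butter", "jelly"))
print(lift("salsa", "chips"))
```

High-lift pairs are exactly the candidates for coordinated pricing and shelf placement.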

So, what is new? I’d argue that even small businesses have never before had access to such vast information and technology. Those that build the power of analytics into their cultures have the ability to gain remarkable insights into their customers’ behavior.

With tremendously powerful data and technology, new relationships can be discovered. Maybe there’s an ideal time of the month to put peanut butter on sale. Perhaps such a discount will spur sales of jelly or bread or bananas. Maybe product placement in certain areas at certain hours of the day means more units sold. The possibilities are limitless.

Simon Says

I’ll grant that most businesses know what people buy and how they pay for these products (although the explosion of mobile payment methods like Square will introduce new data, considerations, and variables). That’s table stakes these days. But how many really understand other essential data questions such as why and when?

And think about the transformative power of when for the consumer. Yes, we know about daily deal sites like Groupon. When products go viral on Twitter or similar sites, flash mobs can erupt, making weekly or monthly sales projections moot. I’ll bet that the importance of when will continue to increase.


What say you?


Category: Information Management, Information Strategy

by: Phil Simon
17  Jun  2012

Big Data: Is it Really That Different?

Nate Silver is a really smart guy. Not even 40, he’s already developed tools to analyze statistics ultimately bought by Major League Baseball. He writes about politics and stats for the New York Times. And, at the Data Science Summit here in Las Vegas, I saw him speak about Big Data. (You can watch the video of that talk here.)

DSS was all about Big Data, a topic with which I’m not terribly familiar. (It is a fairly new term, after all.) I suspected that Silver would be speaking over my head. But a funny thing happened when I watched him speak: I was able to follow most of what he was saying.

By way of background, I’m a numbers guy and have been for a very long time. Whether in sports or work, I like statistics. I like data. I remember most of the material from my different probability and stat courses in college. I understand terms like p values, statistical significance, and confidence intervals. This isn’t that difficult to understand; it’s not like I’m a full-time poet or ballet dancer.

Wide, Not Long

The interesting thing about new data sources and streams is that some of the old tools just don’t cut it. Consider the relational database. Data from CRM or ERP applications fit nicely into transactional tables linked by primary and foreign keys. On many levels, however, social media is far from CRM and ERP–and this has profound ramifications for data management. Case in point: Twitter built its back end on Scala and distributed data stores, not on a traditional relational database. Why? To make a long story short, the type of data generated and accessed by Twitter just can’t run fast enough in a traditional RDBMS architecture. This type of data is wide, not long. Think columns, not rows.
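A toy illustration of “wide, not long”: the same aggregate computed over a row-oriented layout (the relational default) and a column-oriented one. The data is invented; real columnar systems add compression and distribution on top of this basic idea.

```python
# Row-oriented layout: the RDBMS default, good at fetching whole records.
rows = [
    {"user": "a", "followers": 120, "tweets": 3500},
    {"user": "b", "followers": 45, "tweets": 900},
    {"user": "c", "followers": 10000, "tweets": 21000},
]

# Column-oriented ("wide") layout: good at scanning one attribute across
# millions of records, the access pattern analytics workloads favor.
columns = {
    "user": ["a", "b", "c"],
    "followers": [120, 45, 10000],
    "tweets": [3500, 900, 21000],
}

# Same aggregate, two layouts. In the row store we touch every record;
# in the column store we read one contiguous list.
total_row_store = sum(r["followers"] for r in rows)
total_col_store = sum(columns["followers"])
assert total_row_store == total_col_store
print(total_col_store)  # 10165
```

At three records the difference is academic; at billions of tweets, reading one column instead of every row is the whole ballgame.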

To this end, Big Data requires some different tools and Scala is just one of many. Throw Hadoop in there as well. But our need to use “some different tools” does not mean that tried and true statistical principles fall by the wayside. On the contrary, old stalwarts like Bayes’ Theorem are still as relevant as ever, as Silver pointed out during his speech.

Simon Says: The Best of Both Worlds

In an era of Big Data, there will be winners and losers. Organizations that see success will be the ones that combine the best of the old with the best of the new. Retrofitting unstructured or semistructured data into RDBMSs won’t cut it, nor will ignoring still-relevant statistical techniques. Use the best of the old and the best of the new to succeed in the Big Data world.


What say you?

Category: Information Management

by: Phil Simon
20  Sep  2010

The Semantic Web, Customer Service, and the Financial World

Yogi Berra once said, “It’s like deja-vu, all over again.” With regard to new technologies, the same continues to hold true: The technologies themselves might be new, but they are fundamentally addressing old problems. Many of today’s emerging technologies such as cloud computing, SaaS, MDM, and social networks are really attempts to deal with core business issues. These include:

  • Cost
  • Communication
  • Collaboration
  • Data integrity
  • Access to data
  • Network reliability
  • Scale
  • The eternal battle to do more with less
  • And many, many others

One lesser-discussed topic–at least for the time being–is semantic technology. While not on the tips of most people’s tongues quite yet, semantic technologies are beginning to make inroads in a number of areas. In this five-part series, I’ll be covering the semantic web with a particular focus on improving customer service in the financial industry.

  • Part I: Introduction to the Semantic Web (today’s post)
  • Part II: The Semantic Web: The Business Case and Customer Service Implications
  • Part III: The Semantic Web and Complementary Technologies
  • Part IV: A Semantic Web Case Study
  • Part V: How to Leverage the Semantic Web Now

In this part of this series, we’ll answer three main questions:

  • What is the semantic web?
  • How is Web 3.0 different than Web 2.0?
  • What is semantic technology and how is it different than information technology?

What is the semantic web?

You may not be familiar with the semantic web. Six months ago, I sure wasn’t. So let’s get the definition out of the way. According to Wikipedia, it is:

an evolving development of the World Wide Web in which the meaning (semantics) of information and services on the web is defined, making it possible for the web to “understand” and satisfy the requests of people and machines to use the web content.[1][2] It derives from World Wide Web Consortium director Sir Tim Berners-Lee’s vision of the Web as a universal medium for data, information, and knowledge exchange.

Some have referred to the semantic web as Web 3.0. While by some estimates–such as a recent report by Pew Research–it is perhaps a decade away, thousands of people are working right now on making it a reality.

How is Web 3.0 different than Web 2.0?

Let’s review a little history here. Web 1.0 represented the first incarnation of a GUI-based internet. Note that, as Wired points out, the web and internet are not the same things. Most Web 1.0 sites were glorified brochures with basic, static pages such as contact, about, services, etc. Perhaps the greatest achievement of Web 1.0 was Google’s perfection of basic search.

Enter Web 2.0, the social or two-way web. We’re living it right now. The second incarnation of the web makes it quite easy to connect with others who share similar interests via Facebook and Twitter. It isn’t hard to build user-friendly web sites and even personal social networks with tools such as Ning. In short, Web 2.0 allows us to create a deeper and more meaningful experience than its predecessor.

In Web 3.0 or the semantic web, information will mean the same thing to everybody, irrespective of language or country. In large part, this is possible via the extensive use of tags.

Examples of Web 3.0-ish things available today revolve around tags. Via tags, people can more easily find things. Rather than just searching for “rock song” on Pandora, tagging enables people to find “progressive rock songs from the 1970s with odd time signatures and atmospheric feels” (read: Pink Floyd). Alternatively, those searching for photos of a red car on Flickr need not be frustrated because someone used a similar term: red automobile.
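Tag-based retrieval is simple to sketch. Assuming an invented catalog (the titles and tags below are illustrative), finding those progressive rock songs reduces to a subset test on tag sets:

```python
# A toy catalog with free-form tags; titles and tags are invented.
songs = [
    {"title": "Echoes",
     "tags": {"progressive rock", "1970s", "atmospheric",
              "odd time signatures"}},
    {"title": "Jailbreak",
     "tags": {"hard rock", "1970s"}},
    {"title": "Lateralus",
     "tags": {"progressive rock", "odd time signatures", "1990s"}},
]

# The search is just: which songs carry every wanted tag?
wanted = {"progressive rock", "1970s", "odd time signatures"}
matches = [s["title"] for s in songs if wanted <= s["tags"]]
print(matches)  # ['Echoes']
```

The precision comes entirely from the tags; no amount of cleverness about the string “rock song” gets you the same result.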

What is semantic technology and how is it different than information technology?

Traditional information technology requires predefined information about meanings and relationships. Two disparate systems or tables cannot talk to each other. On the other hand, semantic technology encodes meanings separately from data and content files, and separately from application code. Different applications based upon semantic technology can talk to each other more seamlessly primarily because of their extensive use of metadata (data about data).
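A minimal sketch of the semantic approach, with invented entity names: facts live as subject-predicate-object triples, meaning is carried by the predicates rather than a fixed table schema, and a single generic pattern query works across all of them:

```python
# A toy triple store: every fact is (subject, predicate, object).
# Entity and predicate names are invented for illustration.
triples = [
    ("acct:1001", "ownedBy", "customer:42"),
    ("customer:42", "hasName", "Jane Doe"),
    ("customer:42", "prefersChannel", "email"),
    ("acct:1001", "isType", "checking"),
]

def query(s=None, p=None, o=None):
    """Return every triple matching the non-None parts of the pattern."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Who owns account 1001? Then: everything we know about that customer.
owner = query(s="acct:1001", p="ownedBy")[0][2]
print(owner)           # customer:42
print(query(s=owner))  # all facts about customer:42
```

Notice that nothing in `query` knows about accounts or customers; new kinds of facts can be added without touching the code, which is the interoperability point in miniature.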

Of course, an enterprise cannot merely decide to “go semantic.” A great deal of work needs to be performed on a number of levels in order for an organization to realize the benefits of semantic technology. Fortunately, MIKE2.0 provides guidance. Its Semantic Enterprise Solution Offering provides a layer for the enterprise to establish coherence, consistency, and interoperability across its information assets. Applicable information assets may range from fully structured to unstructured (text and document) sources. The methodology of this Offering is:

  • inherently incremental
  • layered onto existing capabilities and resources
  • flexible enough to accommodate expansions in scope, new learning, and changes along the continuum

Simon Says

The semantic web is huge, although its widespread adoption is not happening anytime soon. In future parts of the series, we’ll cover specific applications of the semantic web to customer service and the financial industry.

Tags: , ,
Category: Enterprise Data Management, Semantic Web

by: Phil Simon
02  Aug  2010

Charlie Rose, Customer Service, and the Master Twitter Record

My post last week detailed the customer service-oriented frustrations that many of us face while dealing with large companies. Writing about it was cathartic and probably saved me at least one trip to a shrink.

In a related and inspired post, Maria Ogneva writes about a future world in which companies and customers are able to efficiently interact via different media:

  • phone
  • email
  • Twitter handle
  • physical address (required to be sure, although it just seems so dated)

It’s an interesting post and one well worth checking out.

All of this makes me wonder a few things:

  • Do many large organizations even have a master customer record anywhere?
  • Which people and departments can access this record? What about updating it?
  • What specific data elements are on that record?
  • Since Twitter handle is probably not one of those elements, from a technical standpoint, how hard would it be to add? Adding it is surely preferable to the separate creation of a master Twitter Record (MTR) that would invariably get out of sync with a “main” one.
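For what it’s worth, adding the handle to an existing record is technically trivial. A hypothetical sketch (the field names are invented, not any vendor’s schema):

```python
from dataclasses import dataclass
from typing import Optional

# A hypothetical master customer record. Making twitter_handle an
# optional attribute keeps one golden record, rather than a parallel
# "master Twitter record" that would drift out of sync.
@dataclass
class CustomerRecord:
    customer_id: int
    name: str
    email: str
    phone: str
    address: str
    twitter_handle: Optional[str] = None  # new, optional channel

jane = CustomerRecord(42, "Jane Doe", "jane@example.com", "555-0100",
                      "1 Main St", twitter_handle="@janedoe")
print(jane.twitter_handle)
```

Because the field defaults to None, existing records and integrations keep working while the new channel is populated over time.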

Now, no one here is claiming that Twitter ought to be the primary means for a company to deal with its customers. First, not everyone is on Twitter. Second, that aside for a moment, a direct message of no more than 140 characters is probably too restrictive to resolve an even moderately complex customer issue. Finally, it’s so easy to retweet that companies would (probably justifiably) fear making certain responses public–at least so easily.

On the other hand, why not have that club in the bag? Use it when needed. This would be like my rarely used three iron on the golf course. I don’t use it often, but when I need it, I’m sure glad that I have it.

Big Company Customer Service

I don’t buy into the notion that a large organization cannot provide state-of-the-art customer service. It’s a matter of priorities and will. There’s no business or technology limitation to being able to take care of the people who take care of you.

Case in point:

Recently on the Charlie Rose show, Amazon CEO Jeff Bezos talked about the company’s relentless focus on the customer from day one. There were those early on who claimed that Amazon was a “cute little company” but, once heavyweights like Wal-Mart embraced e-commerce, they would crush the Amazons of the world. Bezos laughs now about the “Amazon.toast” references from the mid-1990s.

Reports of Amazon’s demise were premature. Today, to say that the company does a good job managing customer information to create a seamless buying experience is an understatement. Also, consider that Amazon isn’t really “just Amazon”: many small companies maintain stores on the Amazon site. To buy a book or a pair of shoes, one need not reenter his or her information multiple times to make a purchase.

With that in mind, is it really too much to ask a big telecom company to get its act together?

Simon Says

Look, when a company of any size makes a customer service error, the aggrieved customer is going to tell people about it. Lots of people. The company cannot control anything that the customer says or does after that point. This is a far cry, though, from claiming that the company is helpless against a constant stream of negative feedback via tweets, emails, blog posts, discussion boards, and the like. Use a mistake as an opportunity to rectify a bad situation. Big mouths and active fingers work both ways: Those same customers are likely to point out that at least the company did the right thing in the end–and doing the right thing requires good data on those customers.


What say you?

Category: Information Value, Master Data Management
