
Author Archive

by: Phil Simon
17 Feb 2014

Big Data: There Is No Guarantee

I know two things about business. First, success is never the result of one thing, a point echoed in one of my very favorite business books, The Halo Effect: … and the Eight Other Business Delusions That Deceive Managers. Second, there is no such thing as a guarantee.

Over the last decade, I’ve spent a great deal of time refining my view on management. More recently, I’ve thought extensively about Big Data and emerging technologies. I’ve spoken about these subjects many times and written hundreds of thousands of words on them.

Before continuing with Big Data, let’s review a little history.

On MiniDiscs

As The Guardian recently pointed out, “twenty years ago Sony launched a format that promised CD clarity and cassette convenience–but the world just wasn’t interested.” In 1998, the company declared the “Year of the MiniDisc” and launched a $30M marketing campaign.
[Image: Sony MZ1 MiniDisc player]
So, what went wrong? Well, at the time that Sony was going all-in on the MiniDisc, the Internet was enabling the mass–and illegal–sharing of mp3 files. It’s impossible to know whether, sans sites like Napster, the MiniDisc would have exploded or at least done markedly better. It’s certainly possible, but is it really unfathomable to consider a vastly different scenario? Maybe we’d be talking today about Sony the same way that we talk about Apple. Instead, the outlook on Sony isn’t exactly rosy.

And it’s that lack of predictability that scares many folks who are considering taking the big plunge. What are the guarantees that the millions spent on data scientists, new technologies like Hadoop, and new approaches will pay off?

None.

Simon Says: Big Data Offers No Guarantees

An organization that “does Big Data” well may still ultimately fail for all sorts of reasons. Cultural issues, competitive factors, regulatory changes, new entrants, key employee attrition, and just plain dumb luck all play major roles in an organization’s success.

As a result, it’s essential to embrace holistic and probabilistic thinking. Put differently, it’s never about only one thing. I’ll bet a great deal of money that organizations that recognize the opportunities inherent in Big Data–and then act on them–will, all else being equal, do better than those that remain in the dark.

Feedback

What say you?

Category: Information Management

by: Phil Simon
09 Feb 2014

Social Data: Asset and Liability

In late December of 2013, Google Chairman Eric Schmidt admitted that ignoring social networking had been a big mistake. “I guess, in our defense, we were busy working on many other things, but we should have been in that area and I take responsibility for that,” he said.

Brass tacks: Google’s misstep and Facebook’s opportunistic land grab of social media have resulted in a striking data chasm between the two behemoths. As a result, Facebook can do something that Google just can’t.

To his credit, Mark Zuckerberg has not been complacent with this lead. In an age of ephemera, he keeps building upon his company’s advantage in social data. Case in point: the launch of Graph Search.

The rationale here is pretty straightforward: Why let Google catch up? With Graph Search, Facebook users can determine which of their friends have gone to a Mexican restaurant in the last six months in San Francisco. What about which friends like the Rolling Stones or The Beatles? (Need to resell a ticket? Why use StubHub here? Maybe Facebook gets a cut of the transaction?) These are questions and problems that Google can’t address but Facebook can.
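To see why that matters, consider what such a query boils down to: a filter over social data that Google simply doesn’t hold. Here’s a minimal sketch in Python; the check-in records and field names are hypothetical, and Facebook’s actual implementation is, of course, vastly more sophisticated.

```python
from datetime import datetime, timedelta

# Hypothetical check-in records; Facebook's real social graph and Graph
# Search internals are far richer than this toy dataset.
checkins = [
    {"friend": "Alice", "category": "Mexican restaurant",
     "city": "San Francisco", "when": datetime(2014, 1, 12)},
    {"friend": "Bob", "category": "Coffee shop",
     "city": "San Francisco", "when": datetime(2013, 11, 2)},
]

today = datetime(2014, 2, 9)
cutoff = today - timedelta(days=180)  # "in the last six months"

friends = {c["friend"] for c in checkins
           if c["category"] == "Mexican restaurant"
           and c["city"] == "San Francisco"
           and c["when"] >= cutoff}
print(friends)  # {'Alice'}
```

The filter is trivial; the moat is the data behind it.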

All good for Zuck et al., right? Not really. It turns out that delivering relevant social data in a timely manner is proving remarkably elusive, even for the smart cookies at Facebook.

The New News Feeds

As Wired reported in May of 2013, Facebook “redesigned its News Feed with bolder images and special sections for friends, photos, and music, saying the activity stream will become more like a ‘personalized newspaper’ that fits better with people’s mobile lifestyles.” Of course, many users didn’t like the move, but that’s par for the course these days. You’re never going to make 1.2 billion users happy.

But Facebook quickly realized that it didn’t get the relaunch of News Feed right. Not even close. Just a few weeks before Schmidt’s revealing quote, Business Insider reported that Facebook was at it again, making major tweaks to its feed and then halting its new launch. This problem has no simple solution.

Simon Says: Big Data Is a Full-Time Job

Big Data is no picnic. “Managing” it isn’t easy, even for billion-dollar companies such as Facebook. The days of “set it and forget it” have long passed. Organizations need to be constantly monitoring the effectiveness of their data-driven products and services, to say nothing of testing for security issues. (Can someone say Target?)

Feedback

What say you?

Category: Information Development

by: Phil Simon
31 Jan 2014

Google Only Gets You to a Certain Point

I love Google, and in a pretty unhealthy way. In my third book, The New Small, there are oodles of references to Google products and services. I use Google on a daily basis for all sorts of different things, including e-mail, document sharing, phone calls, calendars, and Hangouts.

And one more little thing: search. I can’t imagine ever “Binging” something and, at least in the U.S., most people don’t either.

Yet, there are limitations to Google and, in this post, I am going to discuss one of the main ones.

A few years ago, I worked on a project doing some data migration. I supported one line of business (LOB) for my client while another consultant (let’s call him Mark) supported a separate LOB. Mark and I worked primarily with Microsoft Access. The organization ultimately wanted to move toward an enterprise-grade database, in all likelihood SQL Server.

Relying Too Much on Google

Mark was a nice guy. At the risk of being immodest, though, his Access and data chops weren’t quite on my level. He’d sometimes ask me questions about how to do some relatively basic things, such as removing duplicates. (Answer: SELECT DISTINCT.) When he had more difficult questions, I would look at his queries and see things that just didn’t make a whole lot of sense. For example, he’d try to write one massive query that did everything, rather than breaking it up into individual parts.
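To make both points concrete, here’s a quick sketch using Python’s built-in sqlite3 module. The table and data are invented for illustration.

```python
import sqlite3

# Invented table and data, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE contacts (name TEXT, city TEXT);
    INSERT INTO contacts VALUES
        ('Mark', 'Dayton'), ('Mark', 'Dayton'), ('Phil', 'Las Vegas');
""")

# Removing duplicates: SELECT DISTINCT, no elaborate workaround required.
print(conn.execute("SELECT DISTINCT name, city FROM contacts").fetchall())

# Breaking a big query into named steps (here, a CTE) keeps each part
# readable and testable on its own.
query = """
    WITH deduped AS (
        SELECT DISTINCT name, city FROM contacts
    )
    SELECT city, COUNT(*) AS people
    FROM deduped
    GROUP BY city
"""
print(conn.execute(query).fetchall())
```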

Now, I am very aware that development methodologies vary and there’s no “right” one. Potato/pot-ah-to, right? Also, I didn’t mind helping Mark–not at all. I’ll happily share knowledge, especially when I’m not pressed with something urgent.

Mark did worry me, though, when I asked him if he knew SQL Server better than MS Access. “No,” he replied. “I’ll just Google whatever I need.”

Hmm.

For doing research and looking up individual facts, Google rocks. Finding examples of formulas or SQL statements isn’t terribly difficult either. But one does not learn to use a robust tool like SQL Server or even Access by merely using a search engine. You don’t design an enterprise system via Google search results. You don’t build a data model, one search at a time. These things require a much more profound understanding of the process.

In other words, there’s just no replacement for reading books, playing with applications, taking courses, understanding higher-level concepts rather than just workarounds, and accumulating overall experience.

Simon Says

You don’t figure out how to play golf while on the course. You go to the practice range. I’d hate to go to a foreign country without being able to speak the language–or accompanied by someone who can. Yes, I could order dinner with a dictionary, but what if a doctor asked me in Italian where the pain was coming from?

Feedback

What say you?

Category: Information Development

by: Phil Simon
20 Jan 2014

The Case Against Standard Reports

Over the course of my career, I have written more reports than I can count. I’ve created myriad dashboards, databases, SQL queries, ETL tools, neat Microsoft Excel VBA macros, scripts, routines, and other ways to pull and massage data.

In a way, I am Big Data.

This doesn’t make me special. It just makes me a seasoned data-management professional. If you’re reading this post, odds are that the list above resonates with you.

Three Problems with Creating Excessive Reports

As an experienced report writer, I don’t find it terribly hard to pull data from database tables, external sources, and the web. There’s no shortage of forums, bulletin boards, wikis, websites, and communities devoted to the most esoteric of data- and report-related concerns. Google is a wonderful thing.

I’ve made a great deal of money in my career by doing as I was told. That is, a client would need me to create ten reports and I would dutifully create them. Sometimes, though, I would sense that ten weren’t really needed. I would then ask if any reports could be combined. What if I could build only six or eight reports to give that client the same information? What if I could write a single report with multiple output options?

There are three main problems with creating an excessive number of discrete reports. First, it encourages a rigid mode of thinking, as in: “I’ll only see it if it’s on the XYZ report.” For instance, Betty in Accounts Receivable runs a past-due report to find vendors who are more than 60 days late with their payments. While this report may be helpful, it will fail to include any data that does not meet its predefined criteria. Perhaps her employer is particularly concerned about invoices from shady vendors that are only 30 days past due. (See the sketch after this list.)

Second, there’s usually a great deal of overlap. Organizations with hundreds of standard reports typically use multiple versions of the same report. If you ran a “metareport”, I’d bet that some duplicates would appear. In and of itself, this isn’t a huge problem. But often a single database change means effectively modifying the same report multiple times.

Third, and most important these days, the reliance upon standard reports inhibits data discovery.
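Returning to the first problem, here’s the sketch promised above: one parameterized query in place of a separate hard-coded report per threshold. The schema and data are invented for illustration.

```python
import sqlite3

# Invented accounts-receivable schema and data; real systems will differ.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE invoices (vendor TEXT, invoice_id TEXT, days_late INTEGER);
    INSERT INTO invoices VALUES
        ('Acme', 'A-101', 75), ('Shady Corp', 'S-7', 35), ('Acme', 'A-102', 10);
""")

def past_due_report(days, vendors=None):
    # One parameterized query instead of a separate hard-coded report
    # per threshold.
    sql = "SELECT vendor, invoice_id, days_late FROM invoices WHERE days_late > ?"
    params = [days]
    if vendors:
        sql += " AND vendor IN ({})".format(",".join("?" * len(vendors)))
        params += list(vendors)
    return conn.execute(sql, params).fetchall()

print(past_due_report(60))                          # the standard 60-day report
print(past_due_report(30, vendors=["Shady Corp"]))  # the 30-day watchlist
```

Six or eight flexible reports like this can do the work of ten rigid ones.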

Simon Says

Look, standard reports aren’t going anywhere. Simple lists and financial statements are invaluable for millions of organizations.

At the same time, though, one massive report for everything is less than ideal. Ditto for a “master” set of reports. These days, true data discovery tools like Tableau increase the odds of finding needles in haystacks.

Why not add interactivity to basic reports to allow non-technical personnel to do more with the same tools?

Feedback

What say you?

Category: Information Development, Information Value

by: Phil Simon
08 Jan 2014

The Big Enablers of Big Data

Big Data requires million-dollar investments.

Nonsense. That notion is just plain wrong. Long gone are the days in which organizations need to purchase expensive hardware and software, hire consultants, and then three years later start to use it. Sure, you can still go on-premise, but for many companies cloud computing, open source tools like Hadoop, and SaaS have changed the game.

But let’s drill down a bit. How can an organization get going with Big Data quickly and inexpensively? The short answer is, of course, that it depends. But here are three trends and technologies driving the diverse state of Big Data adoption.

Crowdsourcing and Gamification

Consider Kaggle. Founded in April 2010 by Anthony Goldbloom and Jeremy Howard, the company seeks to make data science a sport, and an affordable one at that. Kaggle is equal parts crowdsourcing company, social network, wiki, gamification site, and job board (like Monster or Dice).

Kaggle is a mesmerizing amalgam of a company, one that in many ways defies business convention. Anyone can post a data project by selecting an industry, type (public or private), type of participation (team or individual), reward amount, and timetable. Kaggle lets you easily put data scientists to work for you, and renting them is much less expensive than buying them.

Open Source Applications

But that’s just one way to do Big Data in a relatively inexpensive manner–at least compared to building everything from scratch and hiring a slew of data scientists. As I wrote in Too Big to Ignore, digital advertising company Quantcast attacked Big Data in a very different way, forking the Hadoop file system. This required a much larger financial commitment than just running a contest on Kaggle.

The common thread: Quantcast’s valuation is nowhere near that of Facebook, Twitter, et al. The company employs dozens of people–not thousands.

Open Innovation

Finally, even large organizations with billion-dollar budgets can save a great deal of money on the Big Data front. Consider NASA, nowhere close to anyone’s definition of small. NASA embraces open innovation, running contests on Innocentive to find low-cost solutions to thorny data issues. NASA often offers prizes in the thousands of dollars, receiving suggestions and solutions from all over the globe.

Simon Says

I’ve said this many times. There’s no one “right” way to do Big Data. Budgets, current employee skills, timeframes, privacy and regulatory concerns, and other factors should drive an organization’s direction and choice of technologies.

Feedback

What say you?

Category: Information Development, Information Management, Open Source

by: Phil Simon
01 Jan 2014

Watson 2.0, Platform Thinking, and Data Marketplaces

Few computing and technological achievements rival IBM’s Watson. Its most impressive accomplishment to this point is its high-profile victory on Jeopardy! (chess was the domain of its older sibling, Deep Blue).

Turns out that we ain’t seen nothin’ yet. Its next incarnation will be much more developer-friendly. From a recent GigaOM piece:

Developers who want to incorporate Watson’s ability to understand natural language and provide answers need only have their applications make a REST API call to IBM’s new Watson Developers Cloud. “It doesn’t require that you understand anything about machine learning other than the need to provide training data,” Rob High, IBM’s CTO for Watson, said in a recent interview about the new platform.

The rationale to embrace platform thinking is as follows: As impressive as Watson is, even an organization as large as IBM (with over 400,000 employees) does not hold a monopoly on smart people. Platforms and ecosystems can take Watson in myriad directions, many of which you and I can’t even anticipate. Innovation is externalized to some extent. (If you’re a developer curious to get started, knock yourself out.)
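In spirit, the integration could be as simple as the sketch below. To be clear, the endpoint and payload here are placeholders of my own invention, not IBM’s actual API; consult the Watson Developers Cloud documentation for the real thing.

```python
import json
import urllib.request

# The endpoint and payload below are placeholders of my own invention,
# NOT IBM's actual Watson API; see the Watson Developers Cloud docs.
url = "https://api.example.com/watson/v1/question"
payload = {"question": "Which treatments are recommended for adult asthma?"}

req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read().decode("utf-8")))
```

The point stands regardless of the exact URL: one HTTP call, no machine-learning expertise required.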

Data Marketplaces

Continue reading the article and you’ll see that Watson 2.0 “ships” not only with an API, but an SDK, an app store, and a data marketplace. That is, the more data Watson has, the more it can learn. Can someone say network effect?

Think about it for a minute. A data marketplace? Really? Doesn’t information really want to be free?

Well, yes and no. There’s no dearth of open data on the Internet, a trend that shows no signs of abating. But let’s not overdo it. The success of Kaggle has shown that thousands of organizations are willing to pay handsomely for data that solves important business problems, especially if that data is timely, accurate, and aggregated well. As a result, data marketplaces are becoming increasingly important and profitable.

Simon Says: Embrace Data and Platform Thinking

The market for data is nothing short of vibrant. Big Data has arrived, but not all data is open, public, free, and usable.

Combine the explosion of data with platform thinking. It’s not just about the smart cookies who work for you. There’s no shortage of ways to embrace platforms and ecosystems, even if you’re a mature company. Don’t just look inside your organization’s walls for answers to vexing questions. Look outside. You just might be amazed at what you’ll find.

Feedback

What say you?

Category: Information Development

by: Phil Simon
17 Dec 2013

Does Your Company Need Data Visualization Apps?

Few learned folks dispute the fact that the era of Big Data has arrived. Debate terms if you like, but most of us are bombarded with information these days. The question is becoming: how do we attempt to understand all of this data?

With the proliferation of low-cost and free dataviz tools, isn’t it fair to ask about the need for standalone data visualization applications? It’s not as if it takes hundreds of thousands of dollars to deploy these tools. It’s not 1998 anymore.

Bill Shander recently asked the $60,000 dataviz question on the Harvard Business Review blog. Shander writes:

Data visualization can produce big benefits, some of which are subtle yet powerful. One of the biggest benefits is personalization–e.g., enabling potential customers to estimate the value of a complex solution to their challenges.

It’s an interesting post and a timely one for me, what with The Visual Organization coming out soon. In short, I agree. I doubt that a barber or salon owner would need data visualization tools. If you’re reading this blog, however, I doubt that you work in either business.

Dataviz: Cheaper and Easier than Ever

Shander correctly points out that cost can get out of hand. Fair enough. At the same time, though, let’s not forget the advent of open-source and cloud-based tools that have dramatically changed the equation. You can be up and running with Tableau in a few minutes. D3 is completely free, although learning it takes some time.
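Neither D3 nor Tableau is shown here, but as a stand-in, consider how little open-source Python it takes to produce a presentable chart (assuming matplotlib is installed; the figures are made up):

```python
import matplotlib.pyplot as plt  # assumes matplotlib is installed

quarters = ["Q1", "Q2", "Q3", "Q4"]
revenue = [1.2, 1.5, 1.4, 1.9]  # made-up figures, $M

plt.bar(quarters, revenue)
plt.title("Revenue by Quarter")
plt.ylabel("$M")
plt.tight_layout()
plt.show()
```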

Beyond cost, ease of use is orders of magnitude better than, say, in 1997. I remember those days. You needed proper programmers to write code. People in different lines of business had to rely almost exclusively on IT for these types of things.

My, how times have changed. Drag-and-drop functionality is almost a sine qua non. Few vendors hawk their wares as “only for developers.” It’s become much easier to add disparate data sources, although power users can still do a great deal more than the average functional (read: non-technical) user.

Simon Says

In the future, more and more companies will realize that Excel is not enough. Big Data is here to stay, and dataviz helps us make sense of it–and ask better questions. Those intent on trying to cram square pegs into round holes will forgo tremendous opportunities.

Feedback

What say you?


Category: Information Management, Information Value

by: Phil Simon
09 Dec 2013

In Defense of Structured Data

Writing a book is always a rewarding and challenging endeavor. Ask any author. As I pen these words, I am going through the final edits on The Visual Organization. As someone who recognizes that plenty of great work has already been done in the area of data visualization, I dutifully cite a slew of sources.

It’s just smart to recognize the previous contributions of others in any field, especially the influential ones. What’s more, attribution and intellectual property are fundamental to the world of publishing. And I certainly like receiving credit for things I have written and said. I get a little miffed when I don’t.

While this is often a bit of an administrative burden on the author, I get it: I have to ask for and receive permission from media sites, bloggers, and individuals if I want to use more than a certain number of their words.

Better Data Collection Mechanisms

Still, there’s a way to collect and track this information that makes it less onerous for all involved. No, we don’t have to build a sophisticated database; even simple spreadsheets are far superior to free-form e-mail threads for this type of rudimentary data management. (And, make no mistake, that’s exactly what this exercise involves.)

Lamentably, though, not everyone sees it this way. For instance, the data-challenged wizards at FastCompany had me follow an inefficient and borderline excruciating “process” of writing several e-mails to several different companies. This included a series of questions collected in a free-response format. A web-based survey would have been better for all concerned. Responding to individual e-mails doesn’t lend itself to easy and comprehensive analysis.

At the end of the e-mail chain, someone new sent me what appeared to be an arbitrary number: the price I’d have to pay to use 87 words from one of their articles. That price was, in my view, excessive and capricious, underscored by the haphazard nature of the process.
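For what it’s worth, here’s a minimal sketch of the structured alternative using Python’s csv module. The fields are my own guesses at a sensible schema, not anything FastCompany actually uses.

```python
import csv

# A guessed-at schema; the 87-word figure comes from the story above,
# everything else here is invented.
FIELDS = ["source", "words_quoted", "date_requested", "status", "fee_usd"]

rows = [
    {"source": "FastCompany article", "words_quoted": 87,
     "date_requested": "2013-11-04", "status": "quoted", "fee_usd": ""},
]

with open("permissions.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
```

With even that much structure, every request becomes sortable, filterable, and analyzable.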

Simon Says: Structure What You Can

You’ll get no argument from me on the importance of unstructured data. Still, structured data remains exceptionally valuable. There’s just no excuse for treating everything as unstructured.

The smartest folks and organizations out there are breaking from old habits and recognizing that one size doesn’t necessarily fit all. Creating structure where it makes sense saves a great deal of time and lets people see things that others miss. As Henry David Thoreau once said, “It’s not what you look at that matters, it’s what you see.”

Feedback

What say you?

Category: Information Development

by: Phil Simon
01 Dec 2013

Twitter: Rubbish, Valuable, or Both?

We reach a certain age and the music gets too loud. We believe that the world is going to hell in a hand basket. Things were just better when we were young.

This is always the case, and today is no exception. The complaints can be deafening. Young people don’t read books anymore. Look at what young folks are wearing. We decry the state of society and the future.

Of course, we’ve seen this movie before; every generation does this. Many people my age and older dismiss Twitter and the very idea of tweeting. Somehow, reading books and newspapers was more sophisticated than texting, tweeting, blogging, and friending.

The Remarkable Data Behind 140 Characters

That may be true on some abstract cultural level. Curmudgeon Andrew Keen would wholeheartedly agree with that premise. On a data or technological level, though, nothing could be further from the truth. In fact, the data and technology behind 140 characters are nothing short of remarkable. The BusinessWeek story The Hidden Technology That Makes Twitter Huge details the data and metadata behind each tweet.

From the piece:

All tweets share the same anatomy. To examine the guts of a tweet, you request an “API key” from Twitter, which is a fast, automated procedure. You then visit special Web addresses that, instead of nicely formatted Web pages for humans to read, return raw data for computers to read. That data is expressed in a computer language—a smushed-up nest of brackets and characters. It’s a simplified version of JavaScript called JSON, which stands for JavaScript Object Notation. API essentially means “speaks (and reads) JSON.” The language comes in a bundle of name/value fields, 31 of which make up a tweet. For example, if a tweet has been “favorited” 25 times, the corresponding name is “favorite_count” and “25” is the value.

Think about it: 31 data fields captured for each tweet. I’d bet my house that that number will only rise in the coming years. Types of data that we can’t even imagine will one day be tracked, analyzed, and used in unfathomable ways.

The notion of 140 characters may seem inherently limiting. What can you really know from such a small amount of text? The answer depends, but surely a great deal more can be gleaned from a tweet’s metadata. Think about things like a tweet’s location, user, device, date, time, URL, and the like. All of a sudden, tweets can become more informative, contextual, and maybe even predictive.
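To see just how approachable that anatomy is, here’s a trimmed, illustrative tweet parsed with Python’s json module. The favorite_count field comes straight from the BusinessWeek description; the rest of the sample is a plausible reconstruction, not a verbatim payload.

```python
import json

# A trimmed, illustrative payload; real tweets carry roughly 31 top-level
# name/value fields, favorite_count among them, per the BusinessWeek piece.
raw = """
{
  "created_at": "Sun Dec 01 15:04:05 +0000 2013",
  "text": "Reading about the anatomy of a tweet.",
  "favorite_count": 25,
  "source": "web",
  "user": {"screen_name": "example_user", "location": "Las Vegas"}
}
"""

tweet = json.loads(raw)
print(tweet["favorite_count"])    # 25
print(tweet["user"]["location"])  # the metadata is where the value hides
```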

Simon Says: Learn from Twitter

The lesson here is two-fold. First, recognize that you can’t predict the future. As I’ve said myriad times in my career, it’s better to have it and not need it than to need it and not have it. With data-storage costs plummeting, why not track every type of data you can?

Second, ensure that your organization’s infrastructure can support new data types, emerging sources, and increased volumes. Yes, ETL is still important, but the future is all about APIs. Make sure that your organization is keeping up with the times. The costs of inaction are irrelevance and possibly extinction.

Feedback

What say you?

Category: Information Value

by: Phil Simon
19 Nov 2013

Incentives, Data, and IT Projects

IT project failures continue to haunt us. Consider the recent BusinessWeek article on Healthcare.gov. It’s a fascinating read, as it demonstrates the massive problems that plagued the project. In short, it was an unmitigated disaster. From the piece:

Put charitably, the rollout of healthcare.gov has been a mess. Millward Brown Digital, a consulting firm, reports that a mere 1 percent of the 3.7 million people who tried to register on the federal exchange in the first week actually managed to enroll. Even if the problems are fixed, the debacle makes clear that it’s time for the government to change the way it ships code—namely, by embracing the approach to software development that has revolutionized the technology industry.

You could write a book about the failure of a project so large, expensive (nearly $400 million), and important. Even as of this writing, site performance is terrible.

Understanding Incentives

I certainly don’t have any inside information about how the project was (mis)managed. I can’t imagine, however, that incentives were properly aligned. With more than 50 different tech vendors involved, it’s highly probable that most if not all parties behaved in their rational, economic self-interest. Think Freakonomics.

I am reminded of an ERP project on which I worked about five years ago. The CIO routinely ignored the advice of consultants that the system was nowhere near ready to go live. I found out near the end of my contentious assignment that the entire executive team was receiving massive bonuses based upon going live at the end of the year. This no doubt colored the CIO’s perception of show-stopping issues.

Think about it. When a $100,000 bonus is on the line, how can you not minimize the impact of employee and vendor data issues? More to the point, how can one truly be objective when that type of carrot is at stake?

Simon Says

I like to think that I have my own moral compass and that, if given the chance, I put the needs of the many above the needs of the few, to paraphrase Spock. Silly is the organization, however, that ignores the impact of financial incentives on IT projects.

Structure your compensation based upon long-term organizational goals, not short-term ones. While that’s no guarantee, you’ll increase the chances of successful IT projects.

Feedback

What say you?

Category: Information Governance, Information Management
