Open Framework, Information Management Strategy & Collaborative Governance | Data & Social Methodology - MIKE2.0 Methodology

Posts Tagged ‘IBM’

by: Phil Simon
01  Jan  2014

Watson 2.0, Platform Thinking, and Data Marketplaces

Few computing and technological achievements rival IBM’s Watson. Its most impressive accomplishment to this point is a high-profile victory on Jeopardy!, following in the footsteps of its predecessor Deep Blue’s famous chess win over Garry Kasparov.

Turns out that we ain’t seen nothin’ yet. Its next incarnation will be much more developer-friendly. From a recent GigaOM piece:

Developers who want to incorporate Watson’s ability to understand natural language and provide answers need only have their applications make a REST API call to IBM’s new Watson Developers Cloud. “It doesn’t require that you understand anything about machine learning other than the need to provide training data,” Rob High, IBM’s CTO for Watson, said in a recent interview about the new platform.
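The REST-style interaction described above might look something like the following sketch. The endpoint URL, payload shape, and field names here are my illustrative assumptions, not IBM’s documented Watson API:

```python
import json

# Hypothetical sketch: the endpoint and payload shape below are illustrative
# assumptions, not the actual Watson Developers Cloud API.
API_URL = "https://example.ibm.com/watson/v1/question"  # placeholder URL

def build_question_payload(question_text, items=5):
    """Build a JSON body for a natural-language question to a Watson-style
    REST service. No machine-learning expertise is needed on the client side;
    the service is assumed to have been trained on domain data already."""
    return json.dumps({
        "question": {
            "questionText": question_text,
            "items": items,  # number of candidate answers to return
        }
    })

payload = build_question_payload("What treatments exist for hypertension?")
# A client would POST `payload` to API_URL with an Authorization header;
# the service would respond with ranked candidate answers as JSON.
```

The point of the platform move is exactly this: the hard part (training, models, infrastructure) stays behind the API, and a developer supplies only a question and training data.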

The rationale to embrace platform thinking is as follows: As impressive as Watson is, even an organization as large as IBM (with over 400,000 employees) does not hold a monopoly on smart people. Platforms and ecosystems can take Watson in myriad directions, many of which you and I can’t even anticipate. Innovation is externalized to some extent. (If you’re a developer curious to get started, knock yourself out.)

Data Marketplaces

Continue reading the article and you’ll see that Watson 2.0 “ships” not only with an API, but an SDK, an app store, and a data marketplace. That is, the more data Watson has, the more it can learn. Can someone say network effect?

Think about it for a minute. A data marketplace? Really? Doesn’t information really want to be free?

Well, yes and no. There’s no dearth of open data on the Internet, a trend that shows no signs of abating. But let’s not overdo it. The success of Kaggle has shown that thousands of organizations are willing to pay handsomely for data that solves important business problems, especially if that data is timely, accurate, and aggregated well. As a result, data marketplaces are becoming increasingly important and profitable.

Simon Says: Embrace Data and Platform Thinking

The market for data is nothing short of vibrant. Big Data has arrived, but not all data is open, public, free, and usable.

Combine the explosion of data with platform thinking. It’s not just about the smart cookies who work for you. There’s no shortage of ways to embrace platforms and ecosystems, even if you’re a mature company. Don’t just look inside your organization’s walls for answers to vexing questions. Look outside. You just might be amazed at what you’ll find.

Feedback

What say you?

Category: Information Development

by: Phil Simon
27  Nov  2011

Top-Down Business Intelligence

“A good plan is like a road map: it shows the final destination and usually the best way to get there.”

–H. Stanley Judd

I recently had the opportunity to attend IBM’s Information On Demand Conference in Las Vegas, NV. At breakfast one day, Crysta Anderson of IBM, two attendees, and I talked about some BI implementations gone awry. In this post, I’ll discuss a few of the key problems that have plagued organizations that spent millions of dollars on BI products and still have not seen the anticipated results.

A Familiar Scenario?

Recognize this? An organization (call it YYZ here) is mired in a sea of individual reports, overwhelmed end users, bad data, and the like. The CIO hears about a competitor successfully deploying a BI tool, generating meaningful KPIs and insights into the business. After lengthy vendor overview and procurement processes, YYZ makes the leap and rolls out its new BI toys. Two years later, YYZ has yet to see any of the expected benefits.

Many of you have seen this movie before.

So, you’re the CIO of YYZ and your own future doesn’t look terribly bright. What to do? Blame vendors if you want. Consultants also make for good whipping boys. Maybe some regulations stood in your way. And there’s always that recalcitrant department or individual employees who didn’t get with the program. But I have one simple question for you: Where did you start?

As we discussed at that table at the IOD conference, many organizations get off on the wrong foot. They begin their BI implementations from the bottom up. In other words, they take a look at the myriad reports, scripts, queries, and standalone databases used–and not used–and then attempt to recreate all or most of them in the new BI tool. Thousands of hours, hundreds of thousands of dollars, and months of employees’ time are spent replicating the old in the new.

The Wrong Questions

There are two major problems with the bottom-up approach. The first and most obvious: not all reports are needed. The core, and faulty, assumption is that we need tomorrow what we needed yesterday. As a consultant, I have personally seen many people request reports in the “new” system that they would never run. When I asked about the purpose of a report or its ultimate destination, far too often I heard crickets.

Beyond superfluous reports, there’s a deeper problem with bottom-up BI: it runs counter to the whole notion of BI in the first place.

Let me explain. At its highest level, BI, whether through cubes, OLAP, data mining, or other sophisticated analytical techniques and tools, starts at the top. That is, the conversation typically begins with high-level questions such as:

  • Who are our best customers?
  • Where are our profits coming from?
  • Which products are selling best in each area?
  • Which salespeople seem to be struggling?

An organization cannot start a BI implementation in the weeds and expect to be successful. It needs to start with the right questions, define its terms, and clean up its data. Only then can it begin to see the fruits from the BI tree.
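To make the contrast concrete, the first question above can be answered directly against the data, long before anyone rebuilds a single legacy report. A minimal sketch with made-up data (the record shape and names are my assumptions):

```python
from collections import defaultdict

# Illustrative sketch: answering a top-level BI question
# ("Who are our best customers?") straight from transaction data.
sales = [
    {"customer": "Acme", "amount": 1200.0},
    {"customer": "Globex", "amount": 450.0},
    {"customer": "Acme", "amount": 800.0},
    {"customer": "Initech", "amount": 300.0},
]

def best_customers(rows, top_n=2):
    """Rank customers by total revenue, highest first."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["customer"]] += row["amount"]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

print(best_customers(sales))  # → [('Acme', 2000.0), ('Globex', 450.0)]
```

Starting from questions like this one, the reports worth building reveal themselves; starting from the old report inventory, they rarely do.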

Simon Says

Fight the urge to “just do it” when deploying BI tools. Failing to reassess core business processes, KPIs and metrics, and data quality issues is a recipe for disaster. Consultants and vendors who believe that you can just figure these things out mid-stream should be ignored.

Feedback

What say you?

 

Category: Business Intelligence

by: Sean.mcclowry
30  Sep  2008

Many Eyes for information management

IBM’s Many Eyes product is pretty cool: it’s a way to visualize information, and it comes with a number of tools you can use for textual and more structured information.

Here’s Many Eyes run against a Wikipedia article that describes MIKE2.0:

This is just one view (a wordle), but Many Eyes gives you many ways to visualize the data.

And to see something really interesting, check it out against some of the latest political dialogue.

But Many Eyes is far from a toy. There’s some real power in being able to connect semantics to analytics. My favorite example comes from another technology, Gapminder: http://www.ted.com/index.php/talks/hans_rosling_shows_the_best_stats_you_ve_ever_seen.html

Have fun with Many Eyes for now, but watch this space: I think we’ll see it grow a lot more!

Category: Web2.0

by: Robert.hillard
05  Nov  2007

Getting the benefit out of compliance

I recently participated in a podcast to talk about why I’m involved in MIKE2.0 and how information can be turned to a company’s advantage.  In summary, too many organizations are only looking at information from a defensive perspective with a focus on compliance.

Compliance in general, and for many organizations Sarbanes-Oxley in particular, is a topic that gets a lot of management attention.  The core of the work is to define business processes and to identify control points.  When I look at the results from most companies, I see vast quantities of process documentation, often in Microsoft Visio, printed into fat binders and placed on the shelf.  Compliance achieved!

You don’t need me to tell you about the benefits of living documents.  Any analysis which sits on the shelf is out-of-date before it is even printed.  There have been many discussions about engineering systems on the back of the process documentation; however, few approaches have been truly successful, as inevitably there is a separation of some kind between the applications that run the process and the documentation.

My colleagues, most people involved in MIKE2.0, and I advocate a different approach.  Start by looking at the way you measure compliance: the data which comes out of each control point.  If the data is complete, then the navigation between control points is actually of much less consequence (different people do their jobs differently).

When we take this data-driven approach, we also find that a complete analysis of control points generally identifies the most valuable information held by the company.  It should come as no surprise that controls provide a live feed of business-crucial activities: Business Activity Monitoring (BAM)!  Now we can support multiple applications providing the same data but doing it in different ways (often this corresponds to product systems), and we can free up business units to find creative ways to achieve the best possible business outcome.
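Measuring compliance through control-point data rather than process binders can be sketched very simply. The field names and completeness rule below are my illustrative assumptions, not a prescribed MIKE2.0 schema:

```python
# Hypothetical sketch: compliance measured by the data emitted at each
# control point, not by the process documentation around it.
REQUIRED_FIELDS = {"control_id", "timestamp", "approver", "outcome"}

def completeness(records):
    """Fraction of control-point records carrying every required field."""
    if not records:
        return 0.0
    complete = sum(
        1 for r in records
        if REQUIRED_FIELDS <= set(r)
        and all(r[f] is not None for f in REQUIRED_FIELDS)
    )
    return complete / len(records)

feed = [
    {"control_id": "C1", "timestamp": "2007-11-01T09:00", "approver": "jdoe", "outcome": "pass"},
    {"control_id": "C2", "timestamp": "2007-11-01T09:05", "approver": None, "outcome": "pass"},
]
print(completeness(feed))  # → 0.5
```

If every application feeding a control point can satisfy a check like this, how each business unit navigates between control points matters far less.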

The key to doing this successfully is to take an Information Development approach.  If governance and business supervision focus on the outcomes (measured through the control-point data) rather than process steps, then the company is generally more agile, able to integrate new business units more rapidly, and staffed by empowered executives.

I recently attended IBM’s Information On Demand conference in Las Vegas, including meeting with IBM’s Information Management CTO, Anant Jhingran.  Anant and IBM understand the necessity of separating the content from the application; I suspect this is why they are happy to stay out of the application space and why they are so supportive of SOA, specialist XML vendors and other forms of open communities.

Two of these XML vendors that I find particularly interesting in this context, because of their support of this “ecosystem” style of approach, are JustSystems and CoreFiling.

JustSystems, perhaps better known in the past as a Japanese “office” software company, has made a major push into the XML space with products like xfy, which allows organizations to build process flows and dynamic datasets without having to build the full system.  We find this attractive because it supports the Information Development approach: prototyping focused on the content, then building a process, providing a content test platform, and then (in production) providing a place to review and manage content irrespective of the application that manages the process flow.

CoreFiling has been one of the early XBRL providers.  XBRL is the emerging business-reporting XML standard and is gaining rapid acceptance (particularly with regulators, hence its attraction to organizations with significant compliance obligations).  CoreFiling provides products such as SpiderMonkey, which supports the dynamic development of metadata (or taxonomies) across multiple applications and user groups — critical if the Information Development philosophy is to scale beyond small workgroups.

Category: MIKE2.0

by: Sean.mcclowry
21  Oct  2007

Globalization and Name Recognition

Organizations that focus on individual consumers often struggle to identify their customers at the most basic level: their name. There are many reasons for this:

  • Capture-dependent: spelling mistakes
  • Customer-dependent: name changes
  • Application-dependent: packing multiple fields into a single field
  • Architecture-dependent: conflicting names for the same person across systems
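A minimal sketch of the matching problem the list above creates. The normalization rules and the similarity cutoff are my illustrative choices, not a production CDI algorithm:

```python
from difflib import SequenceMatcher

def normalize(name):
    """Crude normalization: lowercase, strip punctuation, collapse whitespace."""
    cleaned = "".join(ch for ch in name.lower() if ch.isalpha() or ch.isspace())
    return " ".join(cleaned.split())

def same_person(name_a, name_b, threshold=0.85):
    """Heuristic: treat two name strings as the same customer when their
    normalized forms are sufficiently similar. The 0.85 threshold is an
    assumption for illustration."""
    ratio = SequenceMatcher(None, normalize(name_a), normalize(name_b)).ratio()
    return ratio >= threshold

# A capture-dependent spelling variant survives the fuzzy match:
print(same_person("Katherine O'Neill", "Katharine ONeill"))  # → True
```

Even this toy version shows why the problem compounds: each of the four issue types above demands its own normalization or matching rule, and a real CDI solution has to apply them consistently across every system that holds a customer record.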

These different types of issues then become increasingly difficult to address in a complex organization such as a retail bank or telco where dozens to hundreds of systems may hold customer records.

Collectively, Customer Data Integration (CDI) means doing all these things well, and it helps address what was a cause of failure in many Customer Relationship Management (CRM) implementations. Vendors such as IBM, Siperian, Initiate and Oracle offer CDI-specific solutions, and the market is undergoing rapid growth.

Over the last few years there have been significant benefits to addressing these issues through better governance, data quality improvement programmes and upgrades to new applications that were more sophisticated in their capability to store customer data.

This involves fixing historical issues and minimizing the chance of errors occurring in the future.

One of the challenges that globalization brings is around name recognition. Techniques that have been applied over the past few years simply do not work as well with many Eastern European, North African, Middle Eastern and Asian names. The phonetic translations that convert Arabic names into a Western form are typically inconsistent.

Living in London, I see the Retail Banking sector facing perhaps the greatest complexity worldwide. Rapidly changing demographics require new techniques and technologies to solve this name recognition issue. Once again, big vendors are moving into this space through acquisition, with IBM offering a specific product, Global Name Recognition (GNR), to meet the globalization name challenge.

Category: Master Data Management

by: Sean.mcclowry
04  Sep  2007

A next generation modelling tool for Information Management

One of the difficult aspects of Information Development is that organizations cannot “start over”; they need to fix the issues of the past. This means that the move to a new model must incorporate a significant transition from the old world.

Most organizations have a very poorly defined view of their current state information architecture: models are undefined, quality is unknown and ownership is unclear. A product that models the Enterprise Information Architecture and provides a path for transitioning to the future state would therefore be extremely valuable.

Its capabilities could be grouped into two categories:

Things that can be done today, but typically through multiple products

  • Can be used to define data models and interface schemas
  • Provides GUI-based transformation mapping such as XML schema mapping or O-R mapping
  • Is able to profile data and content to identify accuracy, consistency or integrity issues in a once-off or ongoing fashion
  • Takes the direct outputs of profiling and incorporates these into a set of transformation rules
  • Helps identify data-dependent business rules and classifies rule metadata
  • Has an import utility to bring in common standards
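The profiling capability in the list above can be sketched in a few lines. The column names, sample records, and checks here are my illustrative assumptions, not a real product’s API:

```python
# Hypothetical sketch of data profiling: per-column null and distinct counts
# surface the accuracy, consistency and integrity issues mentioned above.
def profile(rows, columns):
    """Per-column profile: null count and distinct-value count."""
    report = {}
    for col in columns:
        values = [r.get(col) for r in rows]
        non_null = [v for v in values if v is not None]
        report[col] = {
            "nulls": len(values) - len(non_null),
            "distinct": len(set(non_null)),
        }
    return report

customers = [
    {"id": 1, "country": "UK"},
    {"id": 2, "country": None},
    {"id": 2, "country": "UK"},  # duplicate id: an integrity issue profiling should surface
]
print(profile(customers, ["id", "country"]))
```

The envisioned product would go further: rather than merely reporting these counts, it would feed them directly into transformation rules and the ownership model described below.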

New capabilities typically not seen in products today

  • An ability to assign value to information based on its economic value within an organization
  • Provides an information requirements gathering capability, with drill-down and traceability mapping available across requirements
  • Provides a data mastering model that shows overlaps of information assets across the enterprise and rules for their propagation
  • Provides an ownership model to assign individual responsibility for different areas of the information architecture (e.g. data stewards, data owners, CIO)
  • Has a compliance feature that can be run to check adherence to regulations and recommended best practices
  • Provides a collaborative capability for users to work together for better Information Governance

In summary, this product would be like an advanced profiling tool, enterprise architecture modelling tool and planning, budgeting and forecasting tool in one. It would be a major advantage to organizations on their path to Information Development.

Today’s solutions for Active Metadata Integration and Model Driven Development seem to provide the starting point for this next generation product. Smaller software firms such as MetaMatrix have provided some visionary ideas to begin to move organizations toward model-driven Information Development. The bi-directional metadata repositories provided by major players such as IBM and Informatica are a big step in the right direction. There is, however, a significant opportunity for a product that can fill the gap that exists today.

Category: Enterprise Data Management, Information Strategy
