
Introduction to mining unstructured data

Get an introduction to mining unstructured data and learn what inspired the authors of "Mining the Talk: Unlocking the Business Value in Unstructured Information" to write the book.


The following is an excerpt from Mining the Talk: Unlocking the Business Value in Unstructured Information; Copyright 2008 by International Business Machines Corporation. It is reprinted here with permission. Download the full chapter on mining unstructured data for free.



People are talking about your business every day. Are you listening?

Your customers are talking. They're talking about you to your face and behind your back. They're saying how much they like you, and how much they hate you. They're describing what they wish you would do for them, and what the competition is already doing for them. They are writing emails to you, posting blogs about you, and discussing you endlessly in public forums. Are you listening?

Other businesses and organizations are talking too. Researchers talk about new technologies and approaches you might be interested in. Other businesses describe innovations you could leverage in your products. Your competitors are revealing technical approaches and broadcasting their strategies in various publications. They talk about what they are working on and what they think is important. Are you listening?

Your employees are also talking. They are producing great ideas that are languishing for lack of the right context to apply them. They are looking for the right partners to help them innovate and create the next big thing for your company. They reveal new ways to improve your internal processes and even change the entire vision for your company. Are you listening?

All of this talk is going on out there now, even as you read these pages. And you can listen -- if you know how. This book is about how we learned to listen to the talk and to turn it -- the unstructured data -- into valuable business insights for our company and for our customers. Now we would like to share that knowledge with you.

A short story ... The data mining contest

Writing this book has been a project that beckoned for many years. We had started and stopped multiple times. We knew we wanted to write the book, but we had trouble convincing ourselves that anyone would want to read it. At a gut level, we knew that what we were doing was important and unique. However, there were a lot of competing methods and products, with more added every day, and we could not spend all of our time evaluating each of them to determine if our approach was measurably superior. Then, in May 2006, an event happened that in one day demonstrated convincingly that our approach was significantly better than all the other alternatives in our field. The results of this day would energize us to go ahead and complete this book.

More on this book about unstructured data mining
This chapter is excerpted from Mining the Talk: Unlocking the Business Value in Unstructured Information, authored by Scott Spangler and Jeffrey Kreulen. Published by IBM Press, July 2007; ISBN 0132339536; Copyright 2008 by International Business Machines Corporation. All rights reserved.

It began when a potential client was considering a large unstructured data mining project. Like most companies, they had a huge collection of documents describing customer interactions. They wanted to automatically classify these documents to route them to the correct business process. They questioned whether this was even feasible and, if so, how expensive it would be. Rather than invite all the vendors in this space to present proposals, they wanted to understand how effective each technical approach was on their data. To this end, they set up the following "contest."

They took a sample of 5,000 documents that had been scanned and converted to text and divided them manually into 50 categories of around 100 documents each. They then invited seven of the leading data mining vendors with products in this space to spend one week with the data using whatever tools and techniques they wished to model these 50 categories. When they were done, they would be asked to classify another unseen set of 25,000 documents. The different vendors' products would be compared based on speed, accuracy of classification, and ease of use during training. The results would be shared with all concerned.
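The setup the client describes is a standard supervised text-classification exercise: learn a model from the 5,000 labeled training documents, then apply it to the 25,000 unseen ones. As an illustration only -- the book's own software is not shown here, and this is a deliberately minimal sketch -- a simple centroid-based (Rocchio-style) classifier in pure Python might look like this:

```python
import math
import re
from collections import Counter, defaultdict

def tokenize(text):
    """Split free text into lowercase word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def train_centroids(labeled_docs):
    """labeled_docs: iterable of (text, category) pairs.
    Returns one averaged term-frequency vector ("centroid") per category."""
    sums = defaultdict(Counter)
    counts = Counter()
    for text, cat in labeled_docs:
        sums[cat].update(tokenize(text))
        counts[cat] += 1
    return {cat: {t: n / counts[cat] for t, n in c.items()}
            for cat, c in sums.items()}

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(text, centroids):
    """Assign a document to the category whose centroid it is closest to."""
    vec = Counter(tokenize(text))
    return max(centroids, key=lambda cat: cosine(vec, centroids[cat]))
```

Once the centroids are built from the labeled sample, classifying an unseen document is a single similarity comparison per category, which is why scoring even a large test set can run quickly.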

That was it. The "contest" had no prize. There was no promise of anything more on the client's part after it was over. No money would change hands. Nothing would be published about the incident. There was no guarantee that anything would come of it. I was dead set against participating in this activity for three very good reasons: 1) I thought that the chances it would lead to eventual business were small; 2) I didn't think the problem they were proposing was well formed since we would have no chance to talk to them up front to identify business objectives, and from these to design a set of categories that truly reflected the needs of the business as well as the actual state of the data; and 3) I was already scheduled to be in London that week working with a paying customer.

I explained all of these reasons to Jeff, and he listened patiently and said, "You could get back a day early from London and be there on Friday."

"So I would have one day while the other vendors had five! No way!"

"You won't need more than one day. You'll do it in half a day." I didn't respond to that—I recognize rank flattery when I hear it. Then Jeff said, "I guess you really don't want to do this."

That stopped me a moment. The truth was I did want to do it. I had always been curious to know how our methods stacked up against the competition in an unbiased comparison, and here was an opportunity to find out. "OK. I'll go," I found myself saying.

As planned, I arrived at the designated testing location on Friday morning at 9 AM. A representative of the client showed me to an empty cubicle where sat a PC that contained the training data sample. On the way, he asked whether I planned to work late into the day (this was the Friday before Memorial Day weekend). I assured him that this would not be the case. He showed me where on the hard drive the data was located and then left. I installed our software on the PC and got to work.

About an hour later, he stopped by to see how I was coming along. "Well, I'm essentially done modeling your data," I said. He laughed, assuming I was making a joke. "No, seriously, take a look." We spent about an hour perusing his data in the tool. I spent some time showing him the strengths and weaknesses of the classification scheme they had set up, showing him exactly which categories were well defined and which were not, and identifying outliers in the training set that might have a negative influence on classifier performance. He was quite impressed.

"So, can you classify the test set now?" he asked.

"Sure, I'll go ahead and start that up." I kicked off the process that classified the 25,000 test documents based on the model generated from the training set categories. We watched it run together for a few seconds. Then he asked me how long it would take. I tried to calculate in my head how long it should take based on the type of model I was using and the size of the document collection. I prevaricated just long enough before answering. Before I could give my best guess, the classification had completed. It took about one minute.

"So that's it? You're done?" he asked, clearly bemused.

"Yes. We can try some other classification models to see if they do any better, but I think this will probably be the best we can come up with. You seem surprised."

He lowered his voice to barely a whisper. "I shouldn't be telling you this, but most of the other vendors are still here, and some of them still haven't come up with a result. None of them finished in less than three days. You did it all in less than two hours? Is your software really that much better than theirs? How does your accuracy stack up?"

"I don't know for sure," I answered truthfully, "but based on the noise I see in your training set, and the accuracy levels our models predict, I doubt they will do any better than we just did." (Two weeks later, when the results were tabulated for all vendors, our accuracy rate was almost exactly as predicted, and it turned out to be better than any of the other participating vendors.)

"So why is your stuff so much better than theirs?" he asked.

"That's not an easy question to answer. Let's go to lunch, and I'll tell you about it."

What I told the client over lunch is the story of how and why our methodology evolved and what made it unique. I explained to him how every other unstructured data mining approach on the market was based on the idea that "the best algorithm wins." In other words, researchers had picked a few sets of "representative" text data, often items culled from news articles or research abstracts, and then each created their own approaches to classifying these sets of articles in the most accurate fashion. They honed the approaches against each other and tuned them to perform with optimum speed and accuracy on one type of unstructured data. Then these algorithms eventually became products, turned loose on a world that looked nothing like the lab environment in which they were optimally designed to succeed.

Our approach was very different. It assumed very little about the kind of unstructured data that would be given as input. It also didn't assume any one "correct" classification scheme, but observed that the classification of these documents might vary depending on the business context. These assumptions about the vast variability inherent in both business data and classification schemes for that data led us to an approach that was orders of magnitude more flexible and generic than anything else available on the market. It was this flexibility and adaptability that allowed me to go into a new situation and, without ever having seen the data or the classification scheme ahead of time, quickly model the key aspects of the domain and produce an automated classifier of high accuracy and performance.

In the beginning

In 1998, a group from IBM's Services organization came to our Research group with a problem. IBM Global Services manages the computer helpdesk operations of hundreds of companies. In doing so, they document millions of problem tickets -- records of each call that are typed in by the helpdesk operator each time an operator has an interaction with a customer. Here is what a typical problem ticket looks like:


1836853 User calling in with WORD BASIC error when opening files in word. Had user delete NORMAL.DOT and had her reenter Word, she was fine at that point. 00:04:17 ducar May 2:07:05:656PM

Imagine millions of these sitting in databases. There they could be indexed, searched, sorted, and counted. But this vast data collection could not be used to answer the following simple question: What kinds of problems are we seeing at the helpdesk this month? If the data could be leveraged to do this analysis, then some of the more frequent tasks could potentially be automated, thus significantly reducing costs.

So why was it so hard to answer this question with the data they had? The reason is that the data is unstructured. There is no set vocabulary or language of fixed terms used to describe each problem. Instead, the operator describes the customer issue in ordinary everyday language, just as they would describe it to a peer at the helpdesk operations center. As in normal conversation, there is no consistency of word choice or sentence structure or grammar or punctuation or spelling in describing problems. So the same problem called in on different days to different operators might result in a very different problem ticket description. This kind of unstructured information in free-form text is what we refer to as "talk." It is simply the way humans have been communicating with each other for thousands of years, and it's the most prevalent kind of data to be found in the world. Potentially, it's also the most valuable, because hidden inside the talk are little bits and pieces of important information that, if aggregated and summarized, could communicate actionable intelligence about how any business is running, how its customers and employees perceive it, what is going right and what is going wrong, and possibly solutions to the most pressing problems the business faces. These are examples of the gold that is waiting to be discovered if we can only "Mine the Talk."
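As a toy illustration of the question "what kinds of problems are we seeing at the helpdesk this month?" -- and emphatically not the authors' method -- even a crude aggregation of terms across free-text tickets begins to surface recurring themes. The stopword list below is a small hypothetical one chosen for this sketch:

```python
import re
from collections import Counter

# A tiny illustrative stopword list; a real system would use a fuller one.
STOPWORDS = {"the", "a", "to", "in", "with", "and", "had", "was",
             "she", "her", "user", "at", "that", "when", "of", "is"}

def ticket_terms(ticket):
    """Lowercase a free-text ticket and keep informative words only."""
    return [w for w in re.findall(r"[a-z.]+", ticket.lower())
            if w not in STOPWORDS and len(w) > 2]

def top_themes(tickets, n=5):
    """Count, per term, how many tickets mention it -- a crude first cut
    at summarizing what kinds of problems are coming in."""
    counts = Counter()
    for t in tickets:
        counts.update(set(ticket_terms(t)))  # count each term once per ticket
    return counts.most_common(n)
```

Counting each term once per ticket (rather than every occurrence) keeps one verbose ticket from dominating the summary; the authors' actual methodology, built around taxonomies, goes far beyond term counting.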

And so with this challenge began the journey that culminated in this book.

The data mining thesis

The purpose of this book is to share with you the insights and knowledge that we have gained from the journey we have been on for nearly a decade. We are applied researchers and software engineers who have been developing technologies to address real-world business problems. We have implemented and experimented with variations of most approaches and algorithms that are espoused in the literature, as well as quite a few new techniques of our own. Through trial and error, insight, and sometimes good luck, we have come up with an approach, supported by technology, that we think will revolutionize the definition of business intelligence and how businesses leverage information analytics into the future.

Our work can be summarized in one simple thesis:


A methodology centered around developing taxonomies that capture both domain knowledge and business objectives is necessary to successfully unlock the business value in all kinds of unstructured information.

In this introduction, we will take you through the thinking that has led us to this conclusion, and outline the methodology we use to Mine the Talk to create lasting business value.

Read Part 2: The business context for unstructured data mining
