This content is part of the Essential Guide: Guide to big data analytics tools, trends and best practices

Big data architecture doesn't need to involve big data sets

Many companies are looking to expand their big data architecture, but they may not realize they already have all the data they need.

Big data is on the mind of most executives today. It's clear there are opportunities for big wins in analyzing data. But do the data sets need to be so big?

Tim Baucom, vice president of the commercial division, Shaw Industries

The most widely recognized definition of big data comes from Gartner, which characterizes the concept by three V's: data volume, variety and velocity. Since the term came into vogue in the mid-2000s, executives have looked for ways to develop their big data architecture. However, they may not realize that the answers to their analytics questions are often already in-house, and simpler than they expect.

Shaw Industries, a Georgia-based carpet manufacturer, found the answers to many of its data questions by looking in its enterprise resource planning (ERP) software, customer relationship management system and data warehouse, rather than purchasing vast amounts of third-party customer data or collecting Web data, as many businesses developing a big data architecture do. Company officials say this approach helped improve sales performance.

"To get those systems all talking together, that's the big opportunity," said Tim Baucom, vice president of the commercial division at Shaw.

The main problem Shaw was looking to solve was variations in pricing. Baucom said the typical sales process at Shaw, which mainly focuses on commercial developments, could take six months or more. During this time the price of materials may change or over-budget contractors may change their orders. These factors made it difficult for Shaw salespeople to deliver consistent margins on their sales.

By 2005, the company knew it needed to base prices on data to make margins more predictable. Baucom said he was familiar with Zilliant Inc., which offers tools that apply predictive algorithms to customer and product data in order to optimize prices for businesses. Shaw implemented Zilliant's MarginMax software to track quotes throughout the sales process, make recommendations based on changing conditions and measure pricing consistency in sales. The software pulled all the necessary information from the carpet maker's existing data systems.
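The article doesn't describe MarginMax's internals, but the core idea of repricing quotes as conditions change, so that margins stay consistent, can be sketched with hypothetical numbers:

```python
def recommend_price(material_cost, target_margin):
    """Quote price that preserves a target margin as input costs move.
    Using margin = (price - cost) / price, so price = cost / (1 - margin)."""
    return material_cost / (1.0 - target_margin)

# Hypothetical scenario: materials rise mid-project from $10.00 to $11.50
# per unit. Repricing keeps the 25% margin instead of letting it erode.
original_quote = recommend_price(10.00, 0.25)  # about $13.33 per unit
updated_quote = recommend_price(11.50, 0.25)   # about $15.33 per unit
```

Real price-optimization tools fit target margins from historical quote data rather than taking them as fixed inputs; this only illustrates the margin arithmetic behind consistent pricing.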

Pulling together these different data sources was no small feat. Baucom said there was no cross-pollination among customer service, sales and Web data. All of it was stored in siloed systems. Initially Shaw relied on data scientists at Austin, Texas-based Zilliant to configure the system so it could spot correlations between the data in the different systems.

But at the beginning the sales team determined that many of the patterns emerging were coincidental, such as variations in pricing by day of the week. This is a common problem faced by organizations as they stand up analytics systems. Many need a data scientist to define the algorithms, but it often takes a business expert to confirm that the algorithms are delivering useful results. In this case, Baucom had to get personally involved to look for meaningful and actionable correlations, such as variations in margin based on the type of project or material used. Through working with the vendor, he was able to get the system configured to deliver more meaningful results.
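The validation step Baucom describes, separating coincidental patterns from actionable ones, can be sketched with hypothetical quote data: a factor that barely moves the average margin is probably noise, while one that moves it substantially is worth acting on.

```python
from statistics import mean

# Hypothetical quote records: (day_of_week, project_type, margin)
quotes = [
    ("Mon", "office", 0.21), ("Mon", "retail", 0.31),
    ("Tue", "office", 0.22), ("Tue", "retail", 0.32),
    ("Wed", "office", 0.20), ("Wed", "retail", 0.30),
]

def margin_spread_by(factor_index):
    """Spread between the average margins of each level of a factor."""
    groups = {}
    for row in quotes:
        groups.setdefault(row[factor_index], []).append(row[2])
    averages = [mean(vals) for vals in groups.values()]
    return max(averages) - min(averages)

day_spread = margin_spread_by(0)    # small: day of week is coincidental
type_spread = margin_spread_by(1)   # large: project type drives margin
```

In practice this judgment also needs a domain expert, as the article notes: a large spread can still be spurious if it merely reflects some other variable, which is why Baucom had to review the correlations personally.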


Baucom said Shaw's sales managers are now "armed with data," which allows them to set prices more effectively. Since it began basing sales quotes on data analysis, Shaw has seen a 5% increase in sales margins.

As for whether the initiative is an example of big data, Baucom isn't sure. He said most people at the company looked at it in those terms back in 2005, but he realizes that by today's standards it may not be a true example of big data architecture. But for him, the point is to base pricing decisions on data to whatever extent possible, regardless of what you call the effort.

"We are in an industry where pricing and price negotiation is continual throughout the life of the project, so being able to help bring a project back to budget without compromising is key," he said. "We needed to find the Goldilocks zone" on pricing.

Ed Burns is site editor of SearchBusinessAnalytics. Email him and follow him on Twitter: @EdBurnsTT.



Is your business developing big data architecture or focusing on small wins?
What we consider big data changes: 16K used to be big data, and now my wristwatch has 1 GB.
When I teach Hadoop classes, I suggest this definition of 'big data': any data that causes enterprise pain, whether it's too much to store, too expensive, too slow to process or too high in latency.
The difference with Hadoop and other such solutions is linear scalability.
Just as most software exhibits O(n log n) runtime performance, enterprise systems exhibit O(n log n) cost behavior: twice the performance rarely costs only twice as much.
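The commenter's scaling point can be illustrated with a toy cost model (the O(n log n) curve is their rule of thumb, not a measured cost figure):

```python
import math

def superlinear_cost(n):
    # The O(n log n) cost behavior the comment describes.
    return n * math.log2(n)

def linear_cost(n):
    # A linearly scalable system: twice the data, twice the cost.
    return float(n)

# Doubling the workload from 1,000 to 2,000 units:
superlinear_ratio = superlinear_cost(2000) / superlinear_cost(1000)  # ~2.2x
linear_ratio = linear_cost(2000) / linear_cost(1000)                 # exactly 2x
```

Under the superlinear model, every doubling costs a little more than double, and the gap widens as the workload grows; linear scalability is what keeps costs predictable at scale.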