Conference Coverage

Strata + Hadoop World 2016: Hadoop and Spark in spotlight

Reporting and analysis from IT events

Sellpoints sold on using Spark SQL for big data ETL jobs

To help companies target ads to website users, Sellpoints Inc. relies on the Spark processing engine -- including its Spark SQL module -- to prepare online activity data for analysis.

The Apache Spark processing engine is often paired with Hadoop, helping users to accelerate analysis of data stored in the Hadoop Distributed File System. But Spark can also be used as a standalone big data platform. That's the case at online marketing and advertising services provider Sellpoints Inc. -- and it likely wouldn't be possible without the technology's Spark SQL module.

Sellpoints initially used a combination of Hadoop and Spark, via a cloud-based managed service, to process data on the Web activities of consumers for analysis by its business intelligence (BI) and data science teams. But in early 2015, the Emeryville, Calif., company moved to a Spark-only setup, running a cloud-based platform from Databricks, to streamline its architecture and reduce technical support issues.

Benny Blum, vice president of product and data at Sellpoints, said the analysts there use a mix of Spark SQL and the Scala language to set up extract, transform and load (ETL) processes for turning the raw data into usable information that can help the company target ads and marketing campaigns to individual website visitors for its corporate clients.
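Blum didn't walk through the jobs themselves, so the following is only a minimal sketch of what such a Scala-based Spark ETL job might look like. The paths, schema and column names (userId, eventType, ts) are hypothetical, and the code uses the Spark 2.x SparkSession API rather than the older SQLContext style of the Spark 1.x releases the company started on.

```scala
// Illustrative sketch only -- not Sellpoints' actual code. Paths, schema and
// column names (userId, eventType, ts) are hypothetical.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object ClickstreamEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("clickstream-etl").getOrCreate()

    // Extract: load raw website log data (JSON lines assumed here).
    val raw = spark.read.json("s3a://example-bucket/raw-logs/2016-03-01/")

    // Transform: drop malformed rows, derive a date column and roll the raw
    // events up into per-user daily metrics.
    val metrics = raw
      .filter(col("userId").isNotNull && col("eventType").isNotNull)
      .withColumn("eventDate", to_date(col("ts")))
      .groupBy("userId", "eventDate")
      .agg(count(lit(1)).as("events"),
           countDistinct("eventType").as("distinctEventTypes"))

    // Load: write the prepared data back out for the BI and data science teams.
    metrics.write.mode("overwrite").parquet("s3a://example-bucket/metrics/daily/")

    spark.stop()
  }
}
```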

The BI team in particular leans heavily on Spark SQL, since it doesn't require the same level of technical skills as Scala does. Some BI analysts do all of their ETL programming with the SQL-on-Spark technology, according to Blum.

"Spark SQL is really an enabler for someone who's less technical to work with Spark," he said. "If we didn't have it, a platform like Databricks wouldn't be as viable for our organization, because we'd have a lot more reliance on the data science and engineering teams to do all of the work."

Sellpoints collects hundreds of millions of data points from website logs on a daily basis, amounting to a couple of terabytes per month. Blum said the raw data is streamed into an Amazon Simple Storage Service (S3) data store and then run through the ETL routines in Spark to cleanse it and convert it into more understandable metrics-based formats. Spark also translates the data for output to Tableau's BI software, which is used to build reports and data visualizations for the company's customers.
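A rough sketch of that final handoff step follows. The bucket paths and the choice of a flat CSV extract are assumptions made for illustration; Tableau also offers a Spark SQL connector, so the BI tool could instead query the prepared tables directly.

```scala
// Illustrative sketch only: exporting prepared metrics in a form Tableau can
// read as a data source. S3 paths are hypothetical.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("tableau-export").getOrCreate()

val dailyMetrics = spark.read.parquet("s3a://example-bucket/metrics/daily/")

dailyMetrics
  .coalesce(1)                         // single output file for a simple extract
  .write
  .mode("overwrite")
  .option("header", "true")
  .csv("s3a://example-bucket/exports/tableau/daily_metrics/")
```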

At this point, Blum said, Spark SQL isn't a perfect match for the standard SQL that has long served as the primary programming language for mainstream relational databases. "There are certain commands that I expect to be there that aren't there, or may be there, but under a different name," he noted. Despite such kinks, though, Blum thinks the Spark variant is familiar enough for SQL-savvy users to get the job done. "If you know SQL, you can work with it."
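He didn't name the commands in question, but the broader point is that ANSI-style queries mostly run unchanged, while some functions go by different names than in other dialects. A hypothetical illustration, reusing the raw_events view from the sketch above:

```scala
// Illustrative sketch only: standard-looking SQL runs as-is in Spark SQL, though
// some function names differ from other dialects -- e.g., Spark SQL's
// date_sub/date_add versus SQL Server's DATEADD.
val lastWeek = spark.sql("""
  SELECT userId, COUNT(*) AS events
  FROM raw_events
  WHERE to_date(ts) >= date_sub(current_date(), 7)
  GROUP BY userId
""")
```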

Craig Stedman is executive editor of SearchBusinessAnalytics. Email him at cstedman@techtarget.com, and follow us on Twitter: @BizAnalyticsTT.

Next Steps

Read about SQL-on-Hadoop software, Spark SQL's close cousin

Find out why users are tapping Spark to replace MapReduce

Take a short quiz on the Spark processing engine's features

This was first published in March 2016

Related Discussions

Craig Stedman asks:

Is your organization using SQL to help power Spark data analysis applications? What has your experience been?
