Handling the hoopla: When to use Hadoop, and when not to
This article is part of the Business Information issue of Special Edition, September 2013
In the past few years, Hadoop has earned a lofty reputation as the go-to big data analytics engine. To many, it's synonymous with big data technology. But the open source distributed processing framework isn't the right answer to every big data problem, and companies looking to deploy it need to carefully evaluate when to use Hadoop -- and when to turn to something else.

For example, Hadoop has ample power for processing large amounts of unstructured or semi-structured data. But it isn't known for its speed in dealing with smaller data sets. That has limited its application at Metamarkets Group Inc., a San Francisco-based provider of real-time marketing analytics services for online advertisers.

Metamarkets CEO Michael Driscoll said the company uses Hadoop for large, distributed data processing tasks where time isn't a constraint. That includes running end-of-the-day reports to review daily transactions and scanning historical data dating back several months. But when it comes to running the real-time analytics processes that are...
Features in this issue
Hadoop has become everyone's big data darling. But it can only do so much, and savvy businesses need to make sure it's a good fit for their needs before committing to it.
Big data and in-memory analytics software can form a mutually beneficial relationship, provided business users really need in-memory processing power.
Columns in this issue
There's real value to be gained from big data projects -- if organizations can get past the hype and assess big data tools with a realistic eye.