Hadoop is the Answer! What is the Question?
Tim Negris, the VP of Marketing at 1010data, makes some interesting Hadoopian observations in this post.
See http://cloudcomputing.sys-con.com/node/1860458
According to Tim, 1010data doesn’t have much actual overlap with Hadoop, which provides programmatic batch job access to linked content files for text, log and social graph data analytics. I must confess, I have not been paying much attention to Hadoop.
But, while doing research for my upcoming presentation on Cloud-based Big Data Analytics at Cloud Expo in NYC (3:00 on Thursday), I uncovered an apocrypha in the making, a rich mythology about a yellow elephant whose name seems to have become the answer to every question about Big Data. Got a boatload of data? Store it in Hadoop. Want to search and analyze that data? Do it with Hadoop. Want to invest in a technology company? If it works with Hadoop, get out the checkbook and get in line.
And then, I was on a Big Data panel at the Cowen 39th Annual Technology, Media and Telecom Conference this week and several of my fellow panelists were from companies that in one way or another had something to do with Hadoop.
So, as a public service to prospective message victims of the Hadoop hype, I decided to try to figure out what Hadoop really is and what it is really good for. No technology gets so popular so quickly unless it is good for something, and Hadoop is no exception. But Hadoop is not the solution to every Big Data problem. Nothing is. Hadoop is a low-level technology that must be programmed to be useful for anything.
It is a relatively immature (V0.20.x) Apache open source project that has spawned a number of related projects and a growing number of applications and systems built on top of the crowd-sourced Hadoop code. I have discovered that many people say “Hadoop” when they really mean Hadoop plus things that run on or with it. For instance, “Hadoop is an analytical database” means Hadoop plus Hive plus Pig. The ever-lengthening “Powered By” list is here.
Despite their general enthusiasm for the framework, though, many Hadoop developers also stress the difficulty of programming applications for it, including Chris Wensel, the developer of the Cascading MapReduce library and API, who writes on his blog:
The one thing Hadoop does not help with is providing a simple means to develop real world applications. Hadoop works in terms of MapReduce jobs. But real work consists of many, if not dozens, of MapReduce jobs chained together, working in parallel and serially.
MapReduce is a patented software framework, developed by Google, that underlies Hadoop. Its Wikipedia entry describes the two parts like this:
“Map” step: The master node takes the input, partitions it up into smaller sub-problems, and distributes those to worker nodes. A worker node may do this again in turn, leading to a multi-level tree structure. The worker node processes that smaller problem, and passes the answer back to its master node.
“Reduce” step: The master node then takes the answers to all the sub-problems and combines them in some way to get the output – the answer to the problem it was originally trying to solve.
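In Hadoop's Java API, those two steps take the form of a Mapper class and a Reducer class. Here is a minimal sketch in the spirit of the canonical word-count example, counting how often each word appears in a pile of text files; the class and variable names are my own illustration, not lifted from the Hadoop documentation.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCount {

    // "Map" step: each worker turns its slice of the input into
    // (word, 1) pairs, one pair per word occurrence.
    public static class TokenizerMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(LongWritable offset, Text line, Context context)
                throws IOException, InterruptedException {
            StringTokenizer tokens = new StringTokenizer(line.toString());
            while (tokens.hasMoreTokens()) {
                word.set(tokens.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // "Reduce" step: the framework groups the pairs by word, and each
    // reducer sums the counts for the words it is handed.
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable total = new IntWritable();

        @Override
        public void reduce(Text word, Iterable<IntWritable> counts, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable count : counts) {
                sum += count.get();
            }
            total.set(sum);
            context.write(word, total);
        }
    }
}

Everything between the two steps, the partitioning of input, the shuffling of pairs to reducers, and the retries when a node dies, is handled by the framework.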
So what is Hadoop? Straight from the elephant's mouth:
Apache Hadoop is a framework for running applications on large clusters built of commodity hardware. The Hadoop framework transparently provides applications both reliability and data motion. Hadoop implements a computational paradigm named Map/Reduce, where the application is divided into many small fragments of work, each of which may be executed or re-executed on any node in the cluster. In addition, it provides a distributed file system (HDFS) that stores data on the compute nodes, providing very high aggregate bandwidth across the cluster. Both Map/Reduce and the distributed file system are designed so that node failures are automatically handled by the framework.
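To make "running applications" concrete, here is a hedged sketch of the small driver program that would configure and submit the word-count job sketched above. The class name and the use of command-line arguments for the HDFS input and output paths are my own assumptions.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "word count"); // 0.20.x-era constructor
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(WordCount.TokenizerMapper.class);
        job.setCombinerClass(WordCount.IntSumReducer.class); // local pre-aggregation on each node
        job.setReducerClass(WordCount.IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        // Both paths are HDFS directories; map tasks are scheduled on
        // the nodes that already hold the input blocks.
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Packaged into a jar and launched with the hadoop jar command, that is the entire application; the cluster does the rest.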
Said more simply, Hadoop lets you chop up large amounts of data and processing and spread them out over a dedicated cluster of commodity server machines, providing high scalability, fault tolerance and efficiency in processing operations on large quantities of unstructured data (text and web content) and semi-structured data (log records, social graphs, etc.). Inasmuch as a computer exists to process data, Hadoop in effect turns lots of cheap little computers into one big computer that is especially good for analyzing indexed text.
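And, circling back to Wensel's complaint about chaining: below is a minimal, hypothetical sketch of what even a two-step pipeline looks like when written directly against the Job API. Both steps are placeholders that fall back on Hadoop's identity mapper and reducer, and the intermediate path is invented; the point is the bookkeeping, not the logic.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ChainedJobs {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical HDFS directory that hands step 1's output to step 2.
        Path intermediate = new Path("/tmp/step1-output");

        // Step 1: a placeholder job (identity map and reduce by default).
        Job step1 = new Job(conf, "step 1");
        step1.setJarByClass(ChainedJobs.class);
        step1.setOutputKeyClass(LongWritable.class);
        step1.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(step1, new Path(args[0]));
        FileOutputFormat.setOutputPath(step1, intermediate);
        if (!step1.waitForCompletion(true)) System.exit(1); // step 2 must wait

        // Step 2: a second placeholder job that consumes step 1's output.
        Job step2 = new Job(conf, "step 2");
        step2.setJarByClass(ChainedJobs.class);
        step2.setOutputKeyClass(LongWritable.class);
        step2.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(step2, intermediate);
        FileOutputFormat.setOutputPath(step2, new Path(args[1]));
        System.exit(step2.waitForCompletion(true) ? 0 : 1);
    }
}

Every added step means another intermediate HDFS directory, another serial wait, and another place to handle failure, which is exactly the plumbing that libraries like Cascading were written to hide.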