Jonathan Gershater


Hadoop and Big Data Easily Understood - How to Conduct a Census of a City

What Hadoop, MapReduce and Big Data are - explained by counting the residents of San Francisco, Los Angeles or a village.

Big Data (and Hadoop) are buzzwords and growth areas of computing; this article distills the concepts into easy-to-understand terms.

As the name implies, Big Data is literally "big data" - lots of data that needs to be processed. Let's take a simple example: the city council of San Francisco is required to take a census of its population - literally, how many people live at each address. City employees are assigned to count the residents. The city of Los Angeles has a similar requirement.

Consider two methods of accomplishing this task:

1. Ask all San Francisco residents to line up at City Hall and be processed by the city employees. This is, of course, cumbersome and time consuming: the people are brought to City Hall and processed one by one - in scientific terms, the data are transferred to the processing node. The residents wait in line for a long time while the employees go down the line counting and processing them ("How many people live at your address?"). In scientific terms, the data is processed serially, one record after the next, and then aggregated at the end of the processing phase.

2. Send a census form to each address and ask the residents to complete it on a specific date and return it to City Hall for aggregation. In scientific terms, the data is processed at the data node (the resident's address) in parallel (all forms are completed simultaneously by residents on a target date) and then aggregated at the processing node (City Hall).

These two phases - process and aggregate - are known in the Big Data community as "MapReduce": phase one maps the data; phase two reduces the data to aggregate totals.
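To make the two phases concrete, here is a minimal sketch in plain Python. The household data and the function names are invented for this example, and everything runs in a single process; a real MapReduce job distributes the map step across a cluster.

```python
# A toy, single-process illustration of the two census phases.
# The household data and function names are invented for this example;
# a real MapReduce job runs distributed across a cluster.

# Each "census form" records how many people live at one address.
census_forms = [
    {"city": "San Francisco", "address": "12 Main St",   "residents": 3},
    {"city": "San Francisco", "address": "98 Oak Ave",   "residents": 5},
    {"city": "Los Angeles",   "address": "7 Palm Blvd",  "residents": 2},
    {"city": "Los Angeles",   "address": "44 Sunset Dr", "residents": 4},
]

# Map phase: each form is processed independently (at the "data node")
# and emits a (key, value) pair - here (city, resident count).
def map_form(form):
    return (form["city"], form["residents"])

# Reduce phase: all values for the same key are gathered and summed -
# the "city hall" aggregation step.
def reduce_counts(pairs):
    totals = {}
    for city, count in pairs:
        totals[city] = totals.get(city, 0) + count
    return totals

mapped = [map_form(form) for form in census_forms]  # in Hadoop, done in parallel
print(reduce_counts(mapped))  # {'San Francisco': 8, 'Los Angeles': 6}
```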

So far so good.

How do we process large amounts of data, in parallel, very quickly using computers?

An open source project known as Hadoop, licensed under the Apache License, grew out of work at Yahoo and research papers published by Google. Hadoop performs the MapReduce function on a distributed file system known as HDFS (the Hadoop Distributed File System). Why is the file system distributed? Remember, MapReduce performs the phase-one processing of data at the data node (the census form is completed at the resident's address); it is too time consuming to copy the data to a central processing node (asking all the residents to line up at City Hall). Hadoop therefore uses a distributed file system, so that the processing takes place on many distributed servers at once (in parallel). Because Hadoop distributes the processing task, it can take advantage of cheap commodity hardware - compare this to processing all the data centrally on big, expensive hardware.
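For a rough idea of what this looks like in practice, Hadoop Streaming lets any program that reads standard input and writes standard output act as the mapper or reducer. The two short Python scripts below are a sketch under that model; the file names and the line format ("city<TAB>residents") are assumptions made for this census example, not part of Hadoop itself.

```python
# mapper.py - sketch of a Hadoop Streaming mapper. The input format
# (one census form per line, "city<TAB>residents") is an assumption
# for this example. Hadoop runs a copy of this script on each data
# node, against the HDFS blocks stored locally on that node.
import sys

for line in sys.stdin:
    city, residents = line.rstrip("\n").split("\t")
    # Emit key<TAB>value; Hadoop's shuffle phase routes all values
    # for a given city to the same reducer.
    print(f"{city}\t{residents}")
```

```python
# reducer.py - sketch of a Hadoop Streaming reducer. Input arrives
# sorted by key, so totals can be accumulated one city at a time.
import sys

current_city, total = None, 0
for line in sys.stdin:
    city, residents = line.rstrip("\n").split("\t")
    if city != current_city:
        if current_city is not None:
            print(f"{current_city}\t{total}")
        current_city, total = city, 0
    total += int(residents)

if current_city is not None:
    print(f"{current_city}\t{total}")
```

A job like this is typically launched with the hadoop-streaming jar, pointing -mapper and -reducer at the two scripts and -input/-output at HDFS paths; the exact command line varies by Hadoop distribution.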

The advantage of cheap commodity hardware is that you use only as many servers as you need. To use the census analogy: how many computers would Hadoop require to process the census forms of San Francisco (roughly 47 square miles) compared to Los Angeles (400+ square miles)? If the census bureau buys one very large computer, that computer could process the data for Los Angeles, but most of its compute power (and electricity) would be wasted when it processed the census data for San Francisco. Instead, the census bureau can rent, say, four computers to process the San Francisco data, and then rent perhaps another 90 to process the census data for Los Angeles. Renting commodity physical hardware is cheaper, and a Hadoop job is also a natural fit for an elastic cloud computing environment where compute power is used on demand.

More Stories By Jonathan Gershater

Jonathan Gershater has lived and worked in Silicon Valley since 1996, primarily doing system and sales engineering specializing in web applications, identity and security. At Red Hat, he provides technical marketing for virtualization and cloud. Prior to joining Red Hat, Jonathan worked at 3Com, Entrust (by acquisition), two startups, Sun Microsystems and Trend Micro.

(The views expressed in this blog are entirely mine and do not represent my employer - Jonathan).