Hadoop Comprehensive Tutorial

Introduction to Hadoop

Hadoop is an open-source framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage.

Hadoop Architecture

The Hadoop framework consists of the following modules:

  • Hadoop Common: The common utilities that support the other Hadoop modules.
  • Hadoop Distributed File System (HDFS): A distributed file system that provides high-throughput access to application data.
  • Hadoop YARN: A framework for job scheduling and cluster resource management.
  • Hadoop MapReduce: A YARN-based system for parallel processing of large data sets.

Setting Up Hadoop

Follow these steps to set up Hadoop on a single node (pseudo-distributed mode):

Step 1: Download Hadoop

wget https://downloads.apache.org/hadoop/common/hadoop-3.3.1/hadoop-3.3.1.tar.gz

Step 2: Extract the tar file

tar -xzf hadoop-3.3.1.tar.gz

Step 3: Move to the Hadoop directory

cd hadoop-3.3.1
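
Before HDFS can be used, Hadoop also needs JAVA_HOME set (in etc/hadoop/hadoop-env.sh) and, for pseudo-distributed operation, fs.defaultFS and dfs.replication configured in etc/hadoop/core-site.xml and etc/hadoop/hdfs-site.xml. With that in place, two more steps bring HDFS up:

Step 4: Format the NameNode

bin/hdfs namenode -format

Step 5: Start the HDFS daemons

sbin/start-dfs.sh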

HDFS Commands

Here are some basic HDFS commands:

Create a directory:

hdfs dfs -mkdir /user/hadoop

List files in a directory:

hdfs dfs -ls /user/hadoop

Upload a file to HDFS:

hdfs dfs -put localfile.txt /user/hadoop

Download a file from HDFS:

hdfs dfs -get /user/hadoop/file.txt localfile.txt
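
These shell commands have programmatic equivalents in the org.apache.hadoop.fs.FileSystem API. The sketch below mirrors the four commands above, assuming fs.defaultFS on the classpath points at your HDFS instance; the paths and file names are illustrative:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsExample {
    public static void main(String[] args) throws Exception {
        // Picks up fs.defaultFS from core-site.xml on the classpath
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        fs.mkdirs(new Path("/user/hadoop"));                                        // hdfs dfs -mkdir
        fs.copyFromLocalFile(new Path("localfile.txt"), new Path("/user/hadoop"));  // hdfs dfs -put
        for (FileStatus status : fs.listStatus(new Path("/user/hadoop"))) {         // hdfs dfs -ls
            System.out.println(status.getPath());
        }
        fs.copyToLocalFile(new Path("/user/hadoop/localfile.txt"),                  // hdfs dfs -get
                           new Path("localfile-copy.txt"));
        fs.close();
    }
}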

Running a MapReduce Job

To run a MapReduce job, you write a Mapper and a Reducer. The classic WordCount example below counts the occurrences of each word in the input.

Mapper Code (WordCountMapper.java):

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordCountMapper extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
        // Emit (word, 1) for every token in the input line
        StringTokenizer itr = new StringTokenizer(value.toString());
        while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, one);
        }
    }
}

Reducer Code (WordCountReducer.java):

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        // Sum the counts emitted for this word by all mappers
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        result.set(sum);
        context.write(key, result);
    }
}
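
A driver class ties the Mapper and Reducer together; the hadoop jar command below assumes it is named WordCount. A minimal version, following the standard WordCount pattern, looks like this:

Driver Code (WordCount.java):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(WordCountMapper.class);
        job.setCombinerClass(WordCountReducer.class); // the reducer doubles as a combiner for word count
        job.setReducerClass(WordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. /input
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // e.g. /output (must not exist yet)
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}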

Compile the classes, package them into wordcount.jar, and run the job:

hadoop jar wordcount.jar WordCount /input /output
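
When the job completes, the results land in the output directory as part files (typically part-r-00000 for the first reducer). You can inspect them with:

hdfs dfs -cat /output/part-r-00000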

Hadoop and Elasticsearch Integration

Elasticsearch can be integrated with Hadoop so that MapReduce jobs read from and write to Elasticsearch indices, combining Hadoop's batch processing with Elasticsearch's search and analytics. The elasticsearch-hadoop connector provides the input and output formats for this.

Step 1: Download the Elasticsearch-Hadoop connector:

wget https://artifacts.elastic.co/downloads/elasticsearch-hadoop/elasticsearch-hadoop-7.10.2.zip

Step 2: Extract the zip file:

unzip elasticsearch-hadoop-7.10.2.zip

Step 3: Add the connector to your Hadoop classpath:

export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/path/to/elasticsearch-hadoop-7.10.2/dist/elasticsearch-hadoop-7.10.2.jar
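
Alternatively, if your driver runs through ToolRunner (so that generic options are parsed), you can ship the connector with the job via -libjars instead of editing the classpath; the jar and class names here are placeholders:

hadoop jar myjob.jar MyDriver -libjars /path/to/elasticsearch-hadoop-7.10.2/dist/elasticsearch-hadoop-7.10.2.jar /input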

Step 4: Configure the Elasticsearch output in your job:

job.getConfiguration().set("es.nodes", "localhost:9200");
job.getConfiguration().set("es.resource", "my_index/my_type");

Note: mapping types are deprecated in Elasticsearch 7.x, so with recent versions es.resource is usually just the index name (for example, "my_index").
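
Putting the pieces together, a minimal map-only job that indexes each input line as a document might look like the sketch below. It follows the connector's MapWritable convention (the output key is ignored; the MapWritable becomes the document body); the class and field names are illustrative:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.MapWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.elasticsearch.hadoop.mr.EsOutputFormat;

public class EsIndexJob {
    // Turns each input line into a one-field Elasticsearch document
    public static class LineMapper extends Mapper<Object, Text, NullWritable, MapWritable> {
        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            MapWritable doc = new MapWritable();
            doc.put(new Text("line"), new Text(value.toString()));
            context.write(NullWritable.get(), doc); // key is ignored by EsOutputFormat
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("es.nodes", "localhost:9200");
        conf.set("es.resource", "my_index");          // target index

        Job job = Job.getInstance(conf, "es-index");
        job.setJarByClass(EsIndexJob.class);
        job.setMapperClass(LineMapper.class);
        job.setNumReduceTasks(0);                     // map-only: documents go straight to Elasticsearch
        job.setOutputFormatClass(EsOutputFormat.class);
        job.setOutputKeyClass(NullWritable.class);
        job.setOutputValueClass(MapWritable.class);
        job.setSpeculativeExecution(false);           // avoid duplicate writes from speculative tasks
        FileInputFormat.addInputPath(job, new Path(args[0]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}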

Conclusion

Hadoop is a powerful tool for managing and processing large data sets. By integrating it with Elasticsearch, you can take advantage of both Hadoop's processing capabilities and Elasticsearch's powerful search and analytics features.