Version 2 (modified by claudiu.gheorghe, 14 years ago)


We will describe how we successfully set up a Hadoop cluster in the ED202 laboratory.
Step 1. First, read and carefully follow the one-machine setup for Hadoop from this how-to on Hadoop single-node setup.
Step 2. After that, you can step further by following the guide for dual-node setup.
Step 3. To extend the cluster by adding the N-th slave, do the following:

Step 3.1 Follow step 1 on the slave-N machine.
Step 3.2 Copy all the contents of the <HADOOP_DIR>/conf/ directory from a working slave configuration.
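Step 3.2 can be sketched as a single `scp` from a known-good slave. The user name `hduser` and slave name `slave-1` below are examples, not from this guide; the local directories only stand in for the real machines so the copy can be demonstrated anywhere:

```shell
# On the real cluster you would pull conf/ over ssh, e.g.:
#   scp -r hduser@slave-1:<HADOOP_DIR>/conf <HADOOP_DIR>/
# Local stand-in directories for illustration:
mkdir -p /tmp/working-slave/conf /tmp/slave-N
echo "<configuration/>" > /tmp/working-slave/conf/core-site.xml
cp -r /tmp/working-slave/conf /tmp/slave-N/   # same effect as the scp above
ls /tmp/slave-N/conf
```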
Step 3.3 Set the hostname of the machine to a suggestive string, say slave-N.
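On the Ubuntu releases mentioned at the end of this page, setting the hostname amounts to two commands; both need root, so this is a sketch rather than something to paste blindly:

```shell
sudo hostname slave-N                    # takes effect immediately
echo "slave-N" | sudo tee /etc/hostname  # persists across reboots
```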
Step 3.4 Add an entry to the master's /etc/hosts file of the form

<ip-address> slave-N

where <ip-address> is the IP address of the slave-N machine.

Step 3.5 Add the same entry added in master's /etc/hosts file to each other slave's /etc/hosts, so that every slave can resolve the name slave-N.
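Steps 3.4 and 3.5 can be sketched as follows. The entry is appended to a copy of the hosts file so the demo runs without root; on the real machines you would edit /etc/hosts itself. The IP 192.168.1.110 and the slave names in the loop are examples only:

```shell
ENTRY="192.168.1.110 slave-N"    # example IP; use slave-N's real address
cp /etc/hosts /tmp/hosts.demo    # demo copy; edit /etc/hosts for real
echo "$ENTRY" >> /tmp/hosts.demo
grep "slave-N" /tmp/hosts.demo
# Propagating the entry to every existing slave would look like:
#   for h in slave-1 slave-2; do
#     ssh $h "echo '$ENTRY' | sudo tee -a /etc/hosts"
#   done
```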
Step 3.6 Append the master's SSH public key to slave-N's ~/.ssh/authorized_keys file, and check that

#ssh slave-N

works without asking for a password.
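Step 3.6 is usually done with ssh-keygen plus ssh-copy-id. The key pair below goes into a scratch directory purely for illustration, and hduser is an example account name:

```shell
# Generate a passphrase-less key pair (scratch dir so this is repeatable):
rm -rf /tmp/demo-ssh && mkdir -p /tmp/demo-ssh
ssh-keygen -q -t rsa -N "" -f /tmp/demo-ssh/id_rsa
ls /tmp/demo-ssh
# On the real master, with the key in ~/.ssh, you would run:
#   ssh-copy-id -i ~/.ssh/id_rsa.pub hduser@slave-N
#   ssh hduser@slave-N    # should log in without a password prompt
```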
Step 3.7 Add a line with slave-N in the master's <HADOOP_DIR>/conf/slaves file.
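The final registration in step 3.7 is a one-line append to the slaves file; /tmp/demo-hadoop below stands in for your real <HADOOP_DIR>:

```shell
HADOOP_DIR=/tmp/demo-hadoop          # stand-in path for this sketch
mkdir -p "$HADOOP_DIR/conf"
echo "slave-N" >> "$HADOOP_DIR/conf/slaves"
cat "$HADOOP_DIR/conf/slaves"
```

After this, restarting the cluster from the master picks up the new slave.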

Make sure you install on each machine a Linux distribution that makes it easy to change the computer's hostname. We used Ubuntu 8.04 and Ubuntu 9.10 and both worked fine. We also tried Fedora 10, but did not succeed.