<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<title>Hadoop</title>
</head>
<body>

Hadoop is a distributed computing platform.

<p>Hadoop primarily consists of the <a
href="org/apache/hadoop/hdfs/package-summary.html">Hadoop Distributed FileSystem
(HDFS)</a> and an
implementation of the <a href="org/apache/hadoop/mapred/package-summary.html">
Map-Reduce</a> programming paradigm.</p>


<p>Hadoop is a software framework that lets one easily write and run applications
that process vast amounts of data. Here's what makes Hadoop especially useful:</p>
<ul>
<li>
<b>Scalable</b>: Hadoop can reliably store and process petabytes.
</li>
<li>
<b>Economical</b>: It distributes the data and processing across clusters
of commonly available computers. These clusters can number into the thousands
of nodes.
</li>
<li>
<b>Efficient</b>: By distributing the data, Hadoop can process it in parallel
on the nodes where the data is located. This makes it extremely rapid.
</li>
<li>
<b>Reliable</b>: Hadoop automatically maintains multiple copies of data and
automatically redeploys computing tasks based on failures.
</li>
</ul>

<h2>Requirements</h2>

<h3>Platforms</h3>

<ul>
<li>
Hadoop has been demonstrated on GNU/Linux clusters with 2000 nodes.
</li>
<li>
Win32 is supported as a <i>development</i> platform. Distributed operation
has not been well tested on Win32, so this is not a <i>production</i>
platform.
</li>
</ul>

<h3>Requisite Software</h3>

<ol>
<li>
Java 1.6.x, preferably from
<a href="http://java.sun.com/javase/downloads/">Sun</a>.
Set <tt>JAVA_HOME</tt> to the root of your Java installation
(see the example following this list).
</li>
<li>
ssh must be installed and sshd must be running to use Hadoop's
scripts to manage remote Hadoop daemons.
</li>
<li>
rsync may be installed to use Hadoop's scripts to manage remote
Hadoop installations.
</li>
</ol>
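
<p>For example, on a typical Linux box you might point <tt>JAVA_HOME</tt>
at your installation and confirm the Java version (the path below is
only an illustration; use the location of your own installation):</p>
<blockquote><pre>
$ export JAVA_HOME=/usr/lib/jvm/java-6-sun
$ $JAVA_HOME/bin/java -version
</pre></blockquote>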

<h4>Additional requirements for Windows</h4>

<ol>
<li>
<a href="http://www.cygwin.com/">Cygwin</a> - Required for shell support in
addition to the required software above.
</li>
</ol>

<h3>Installing Required Software</h3>

<p>If your platform does not have the required software listed above, you
will have to install it.</p>

<p>For example, on Ubuntu Linux:</p>
<blockquote><pre>
$ sudo apt-get install ssh
$ sudo apt-get install rsync
</pre></blockquote>

<p>On Windows, if you did not install the required software when you
installed Cygwin, start the Cygwin installer and select the packages:</p>
<ul>
<li>openssh - the "Net" category</li>
<li>rsync - the "Net" category</li>
</ul>

<h2>Getting Started</h2>

<p>First, you need to get a copy of the Hadoop code.</p>

<p>Edit the file <tt>conf/hadoop-env.sh</tt> to define at least
<tt>JAVA_HOME</tt>.</p>
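
<p>For instance, the relevant line in <tt>conf/hadoop-env.sh</tt> might
look like the following (the path is illustrative):</p>
<blockquote><pre>
export JAVA_HOME=/usr/lib/jvm/java-6-sun
</pre></blockquote>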

<p>Try the following command:</p>
<p><tt>bin/hadoop</tt></p>
<p>This will display the documentation for the Hadoop command script.</p>

<h2>Standalone operation</h2>

<p>By default, Hadoop is configured to run in a non-distributed
mode, as a single Java process. This is useful for debugging, and can
be demonstrated as follows:</p>
<tt>
mkdir input<br>
cp conf/*.xml input<br>
bin/hadoop jar hadoop-*-examples.jar grep input output 'dfs[a-z.]+'<br>
cat output/*
</tt>
<p>This will display counts for each match of the <a
href="http://java.sun.com/j2se/1.4.2/docs/api/java/util/regex/Pattern.html">
regular expression</a>.</p>
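
<p>For instance, with the stock <tt>conf/*.xml</tt> files as input, the
final <tt>cat</tt> typically prints a single line of the form
<tt>1 dfsadmin</tt>: the number of occurrences followed by the matched
string (the exact matches depend on the contents of your configuration
files).</p>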

<p>Note that input is specified as a <em>directory</em> containing input
files and that output is also specified as a directory where parts are
written.</p>

<h2>Distributed operation</h2>

To configure Hadoop for distributed operation, you must specify the
following:
<ol>

<li>The NameNode (Distributed Filesystem master) host. This is
specified with the configuration property <tt><a
href="../core-default.html#fs.default.name">fs.default.name</a></tt>.
</li>

<li>The {@link org.apache.hadoop.mapred.JobTracker} (MapReduce master)
host and port. This is specified with the configuration property
<tt><a
href="../mapred-default.html#mapred.job.tracker">mapred.job.tracker</a></tt>.
</li>

<li>A <em>slaves</em> file that lists the names of all the hosts in
the cluster. The default slaves file is <tt>conf/slaves</tt>.
</li>

</ol>

<h3>Pseudo-distributed configuration</h3>

You can in fact run everything on a single host. To run things this
way, put the following into the configuration files listed below:
<br/>
<br/>
conf/core-site.xml:
<xmp><configuration>

  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost/</value>
  </property>

</configuration></xmp>

conf/hdfs-site.xml:
<xmp><configuration>

  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>

</configuration></xmp>

conf/mapred-site.xml:
<xmp><configuration>

  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>

</configuration></xmp>

<p>(We also set the HDFS replication level to 1 in order to
reduce warnings when running on a single node.)</p>

<p>Now check that the command <br><tt>ssh localhost</tt><br> does not
require a password. If it does, execute the following commands:</p>

<p><tt>ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa<br>
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
</tt></p>
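
<p>(On some systems you may also need restrictive permissions on the key
files, e.g. <tt>chmod 600 ~/.ssh/authorized_keys</tt>, before
passwordless login will work.)</p>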

<h3>Bootstrapping</h3>

<p>A new distributed filesystem must be formatted with the following
command, run on the master node:</p>

<p><tt>bin/hadoop namenode -format</tt></p>

<p>The Hadoop daemons are started with the following command:</p>

<p><tt>bin/start-all.sh</tt></p>

<p>Daemon log output is written to the <tt>logs/</tt> directory.</p>
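
<p>Once the daemons are running, you can also check on them through
their web interfaces; by default the NameNode serves a status page at
<tt>http://localhost:50070/</tt> and the JobTracker at
<tt>http://localhost:50030/</tt>.</p>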

<p>Input files are copied into the distributed filesystem as follows:</p>

<p><tt>bin/hadoop fs -put input input</tt></p>
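
<p>You can verify the copy with a listing:</p>

<p><tt>bin/hadoop fs -ls input</tt></p>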

<h3>Distributed execution</h3>

<p>Things are run as before, but output must be copied locally to
examine it:</p>

<tt>
bin/hadoop jar hadoop-*-examples.jar grep input output 'dfs[a-z.]+'<br>
bin/hadoop fs -get output output<br>
cat output/*
</tt>
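
<p>Alternatively, the output can be inspected in place on the
distributed filesystem:</p>

<p><tt>bin/hadoop fs -cat output/*</tt></p>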

<p>When you're done, stop the daemons with:</p>

<p><tt>bin/stop-all.sh</tt></p>

<h3>Fully-distributed operation</h3>

<p>Fully distributed operation is just like the pseudo-distributed operation
described above, except that you specify the following (a configuration
sketch follows the list):</p>

<ol>

<li>The hostname or IP address of your master server in the value
for <tt><a
href="../core-default.html#fs.default.name">fs.default.name</a></tt>,
as <tt><em>hdfs://master.example.com/</em></tt> in <tt>conf/core-site.xml</tt>.</li>

<li>The host and port of your master server in the value
of <tt><a href="../mapred-default.html#mapred.job.tracker">mapred.job.tracker</a></tt>
as <tt><em>master.example.com</em>:<em>port</em></tt> in <tt>conf/mapred-site.xml</tt>.</li>

<li>Directories for <tt><a
href="../hdfs-default.html#dfs.name.dir">dfs.name.dir</a></tt> and
<tt><a href="../hdfs-default.html#dfs.data.dir">dfs.data.dir</a></tt>
in <tt>conf/hdfs-site.xml</tt>.
These are local directories used to hold distributed filesystem
data on the master node and slave nodes respectively. Note
that <tt>dfs.data.dir</tt> may contain a space- or comma-separated
list of directory names, so that data may be stored on multiple local
devices.</li>

<li><tt><a href="../mapred-default.html#mapred.local.dir">mapred.local.dir</a></tt>
in <tt>conf/mapred-site.xml</tt>, the local directory where temporary
MapReduce data is stored. It also may be a list of directories.</li>

<li><tt><a
href="../mapred-default.html#mapred.map.tasks">mapred.map.tasks</a></tt>
and <tt><a
href="../mapred-default.html#mapred.reduce.tasks">mapred.reduce.tasks</a></tt>
in <tt>conf/mapred-site.xml</tt>.
As a rule of thumb, use 10x the
number of slave processors for <tt>mapred.map.tasks</tt>, and 2x the
number of slave processors for <tt>mapred.reduce.tasks</tt>
(for example, ten slave nodes with four cores each give 40 slave
processors, suggesting 400 map tasks and 80 reduce tasks).</li>

</ol>
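
<p>As a sketch, the additional entries might look like the following
(the directory paths and task counts are only illustrations; substitute
values appropriate to your own cluster):</p>

conf/hdfs-site.xml:
<xmp><configuration>

  <property>
    <name>dfs.name.dir</name>
    <value>/home/hadoop/dfs/name</value>
  </property>

  <property>
    <name>dfs.data.dir</name>
    <value>/disk1/dfs/data,/disk2/dfs/data</value>
  </property>

</configuration></xmp>

conf/mapred-site.xml:
<xmp><configuration>

  <property>
    <name>mapred.local.dir</name>
    <value>/disk1/mapred/local,/disk2/mapred/local</value>
  </property>

  <property>
    <name>mapred.map.tasks</name>
    <value>400</value>
  </property>

  <property>
    <name>mapred.reduce.tasks</name>
    <value>80</value>
  </property>

</configuration></xmp>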

<p>Finally, list all slave hostnames or IP addresses in your
<tt>conf/slaves</tt> file, one per line. Then format your filesystem
and start your cluster on your master node, as above.</p>
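
<p>For example, a <tt>conf/slaves</tt> file for three slaves might read
(the hostnames are illustrative):</p>
<blockquote><pre>
slave1.example.com
slave2.example.com
slave3.example.com
</pre></blockquote>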

</body>
</html>