<html>
<body>
<table border="1">
<tr>
<td>name</td><td>value</td><td>description</td>
</tr>
<tr>
<td><a name="hadoop.job.history.location">hadoop.job.history.location</a></td><td></td><td>If the job tracker is static, the history files are stored
in this single well-known place. If no value is set here, the history is
stored, by default, in the local file system at ${hadoop.log.dir}/history.
</td>
</tr>
<tr>
<td><a name="hadoop.job.history.user.location">hadoop.job.history.user.location</a></td><td></td><td>The user can specify a location to store the history files of
a particular job. If nothing is specified, the logs are stored in the
job's output directory, under "_logs/history/". The user can disable
logging by setting the value to "none".
</td>
</tr>
<tr>
<td><a name="io.sort.factor">io.sort.factor</a></td><td>10</td><td>The number of streams to merge at once while sorting
files. This determines the number of open file handles.</td>
</tr>
<tr>
<td><a name="io.sort.mb">io.sort.mb</a></td><td>100</td><td>The total amount of buffer memory to use while sorting
files, in megabytes. By default, gives each merge stream 1MB, which
should minimize seeks.</td>
</tr>
<tr>
<td><a name="io.sort.record.percent">io.sort.record.percent</a></td><td>0.05</td><td>The percentage of io.sort.mb dedicated to tracking record
boundaries. Let this value be r, io.sort.mb be x. The maximum number
of records collected before the collection thread must block is equal
to (r * x) / 4</td>
</tr>
<tr>
<td><a name="io.sort.spill.percent">io.sort.spill.percent</a></td><td>0.80</td><td>The soft limit in either the buffer or record collection
buffers. Once reached, a thread will begin to spill the contents to disk
in the background. Note that this does not imply any chunking of data to
the spill. A value less than 0.5 is not recommended.</td>
</tr>
<tr>
<td><a name="io.map.index.skip">io.map.index.skip</a></td><td>0</td><td>Number of index entries to skip between each entry.
Zero by default. Setting this to values larger than zero can
facilitate opening large map files using less memory.</td>
</tr>
<tr>
<td><a name="mapred.job.tracker">mapred.job.tracker</a></td><td>local</td><td>The host and port that the MapReduce job tracker runs
at. If "local", then jobs are run in-process as a single map
and reduce task.
</td>
</tr>
<tr>
<td><a name="mapred.job.tracker.http.address">mapred.job.tracker.http.address</a></td><td>0.0.0.0:50030</td><td>
The job tracker http server address and port the server will listen on.
If the port is 0 then the server will start on a free port.
</td>
</tr>
<tr>
<td><a name="mapred.job.tracker.handler.count">mapred.job.tracker.handler.count</a></td><td>10</td><td>
The number of server threads for the JobTracker. This should be roughly
4% of the number of tasktracker nodes.
</td>
</tr>
<tr>
<td><a name="mapred.task.tracker.report.address">mapred.task.tracker.report.address</a></td><td>127.0.0.1:0</td><td>The interface and port that the task tracker server listens on.
Since it is only connected to by the tasks, it uses the local interface.
EXPERT ONLY. Should only be changed if your host does not have the loopback
interface.</td>
</tr>
<tr>
<td><a name="mapred.local.dir">mapred.local.dir</a></td><td>${hadoop.tmp.dir}/mapred/local</td><td>The local directory where MapReduce stores intermediate
data files. May be a comma-separated list of
directories on different devices in order to spread disk i/o.
Directories that do not exist are ignored.
</td>
</tr>
<tr>
<td><a name="mapred.system.dir">mapred.system.dir</a></td><td>${hadoop.tmp.dir}/mapred/system</td><td>The shared directory where MapReduce stores control files.
</td>
</tr>
<tr>
<td><a name="mapred.temp.dir">mapred.temp.dir</a></td><td>${hadoop.tmp.dir}/mapred/temp</td><td>A shared directory for temporary files.
</td>
</tr>
<tr>
<td><a name="mapred.local.dir.minspacestart">mapred.local.dir.minspacestart</a></td><td>0</td><td>If the space in mapred.local.dir drops under this,
do not ask for more tasks.
Value in bytes.
</td>
</tr>
<tr>
<td><a name="mapred.local.dir.minspacekill">mapred.local.dir.minspacekill</a></td><td>0</td><td>If the space in mapred.local.dir drops under this,
do not ask for more tasks until all the current ones have finished and
cleaned up. Also, to save the rest of the tasks we have running,
kill one of them, to clean up some space. Start with the reduce tasks,
then go with the ones that have finished the least.
Value in bytes.
</td>
</tr>
<tr>
<td><a name="mapred.tasktracker.expiry.interval">mapred.tasktracker.expiry.interval</a></td><td>600000</td><td>Expert: The time interval, in milliseconds, after which
a tasktracker is declared 'lost' if it doesn't send heartbeats.
</td>
</tr>
<tr>
<td><a name="mapred.tasktracker.instrumentation">mapred.tasktracker.instrumentation</a></td><td>org.apache.hadoop.mapred.TaskTrackerMetricsInst</td><td>Expert: The instrumentation class to associate with each TaskTracker.
</td>
</tr>
<tr>
<td><a name="mapred.tasktracker.memory_calculator_plugin">mapred.tasktracker.memory_calculator_plugin</a></td><td></td><td>
Name of the class whose instance will be used to query memory information
on the tasktracker.

The class must be an instance of
org.apache.hadoop.util.MemoryCalculatorPlugin. If the value is null, the
tasktracker attempts to use a class appropriate to the platform.
Currently, the only platform supported is Linux.
</td>
</tr>
<tr>
<td><a name="mapred.tasktracker.taskmemorymanager.monitoring-interval">mapred.tasktracker.taskmemorymanager.monitoring-interval</a></td><td>5000</td><td>The interval, in milliseconds, for which the tasktracker waits
between two cycles of monitoring its tasks' memory usage. Used only if
tasks' memory management is enabled via mapred.tasktracker.tasks.maxmemory.
</td>
</tr>
<tr>
<td><a name="mapred.tasktracker.procfsbasedprocesstree.sleeptime-before-sigkill">mapred.tasktracker.procfsbasedprocesstree.sleeptime-before-sigkill</a></td><td>5000</td><td>The time, in milliseconds, the tasktracker waits for sending a
SIGKILL to a process that has overrun memory limits, after it has been sent
a SIGTERM. Used only if tasks' memory management is enabled via
mapred.tasktracker.tasks.maxmemory.</td>
</tr>
<tr>
<td><a name="mapred.map.tasks">mapred.map.tasks</a></td><td>2</td><td>The default number of map tasks per job.
Ignored when mapred.job.tracker is "local".
</td>
</tr>
<tr>
<td><a name="mapred.reduce.tasks">mapred.reduce.tasks</a></td><td>1</td><td>The default number of reduce tasks per job. Typically set to 99%
of the cluster's reduce capacity, so that if a node fails the reduces can
still be executed in a single wave.
Ignored when mapred.job.tracker is "local".
</td>
</tr>
<tr>
<td><a name="mapred.jobtracker.restart.recover">mapred.jobtracker.restart.recover</a></td><td>false</td><td>"true" to enable (job) recovery upon restart,
"false" to start afresh
</td>
</tr>
<tr>
<td><a name="mapred.jobtracker.job.history.block.size">mapred.jobtracker.job.history.block.size</a></td><td>3145728</td><td>The block size of the job history file. Since the job recovery
uses job history, it's important to dump job history to disk as
soon as possible. Note that this is an expert level parameter.
The default value is set to 3 MB.
</td>
</tr>
<tr>
<td><a name="mapred.jobtracker.taskScheduler">mapred.jobtracker.taskScheduler</a></td><td>org.apache.hadoop.mapred.JobQueueTaskScheduler</td><td>The class responsible for scheduling the tasks.</td>
</tr>
<tr>
<td><a name="mapred.jobtracker.taskScheduler.maxRunningTasksPerJob">mapred.jobtracker.taskScheduler.maxRunningTasksPerJob</a></td><td></td><td>The maximum number of running tasks for a job before
it gets preempted. No limits if undefined.
</td>
</tr>
<tr>
<td><a name="mapred.map.max.attempts">mapred.map.max.attempts</a></td><td>4</td><td>Expert: The maximum number of attempts per map task.
In other words, the framework will try to execute a map task this many
times before giving up on it.
</td>
</tr>
<tr>
<td><a name="mapred.reduce.max.attempts">mapred.reduce.max.attempts</a></td><td>4</td><td>Expert: The maximum number of attempts per reduce task.
In other words, the framework will try to execute a reduce task this many
times before giving up on it.
</td>
</tr>
<tr>
<td><a name="mapred.reduce.parallel.copies">mapred.reduce.parallel.copies</a></td><td>5</td><td>The default number of parallel transfers run by reduce
during the copy (shuffle) phase.
</td>
</tr>
<tr>
<td><a name="mapred.reduce.copy.backoff">mapred.reduce.copy.backoff</a></td><td>300</td><td>The maximum amount of time (in seconds) a reducer spends on
fetching one map output before declaring it as failed.
</td>
</tr>
<tr>
<td><a name="mapred.task.timeout">mapred.task.timeout</a></td><td>600000</td><td>The number of milliseconds before a task will be
terminated if it neither reads an input, writes an output, nor
updates its status string.
</td>
</tr>
<tr>
<td><a name="mapred.tasktracker.map.tasks.maximum">mapred.tasktracker.map.tasks.maximum</a></td><td>2</td><td>The maximum number of map tasks that will be run
simultaneously by a task tracker.
</td>
</tr>
<tr>
<td><a name="mapred.tasktracker.reduce.tasks.maximum">mapred.tasktracker.reduce.tasks.maximum</a></td><td>2</td><td>The maximum number of reduce tasks that will be run
simultaneously by a task tracker.
</td>
</tr>
<tr>
<td><a name="mapred.jobtracker.completeuserjobs.maximum">mapred.jobtracker.completeuserjobs.maximum</a></td><td>100</td><td>The maximum number of complete jobs per user to keep around
before delegating them to the job history.</td>
</tr>
<tr>
<td><a name="mapred.jobtracker.instrumentation">mapred.jobtracker.instrumentation</a></td><td>org.apache.hadoop.mapred.JobTrackerMetricsInst</td><td>Expert: The instrumentation class to associate with each JobTracker.
</td>
</tr>
<tr>
<td><a name="mapred.child.java.opts">mapred.child.java.opts</a></td><td>-Xmx200m</td><td>Java opts for the task tracker child processes.
The following symbol, if present, will be interpolated: @taskid@ is replaced
by the current TaskID. Any other occurrences of '@' will go unchanged.
For example, to enable verbose gc logging to a file named for the taskid in
/tmp and to set the heap maximum to be a gigabyte, pass a 'value' of:
-Xmx1024m -verbose:gc -Xloggc:/tmp/@taskid@.gc

The configuration variable mapred.child.ulimit can be used to control the
maximum virtual memory of the child processes.
</td>
</tr>
<tr>
<td><a name="mapred.child.ulimit">mapred.child.ulimit</a></td><td></td><td>The maximum virtual memory, in KB, of a process launched by the
Map-Reduce framework. This can be used to control both the Mapper/Reducer
tasks and applications using Hadoop Pipes, Hadoop Streaming etc.
By default it is left unspecified to let cluster admins control it via
limits.conf and other such relevant mechanisms.

Note: mapred.child.ulimit must be greater than or equal to the -Xmx passed to
JavaVM, else the VM might not start.
</td>
</tr>
<tr>
<td><a name="mapred.child.tmp">mapred.child.tmp</a></td><td>./tmp</td><td>The tmp directory for map and reduce tasks.
If the value is an absolute path, it is used directly. Otherwise, it is
prepended with the task's working directory. The java tasks are executed with
the option -Djava.io.tmpdir='the absolute path of the tmp dir'. Pipes and
streaming are set with the environment variable
TMPDIR='the absolute path of the tmp dir'.
</td>
</tr>
<tr>
<td><a name="mapred.inmem.merge.threshold">mapred.inmem.merge.threshold</a></td><td>1000</td><td>The threshold, in terms of the number of files,
for the in-memory merge process. When we accumulate the threshold number of
files, we initiate the in-memory merge and spill to disk. A value of 0 or
less indicates that there is no threshold, and the merge is instead
triggered solely by the ramfs's memory consumption.
</td>
</tr>
<tr>
<td><a name="mapred.job.shuffle.merge.percent">mapred.job.shuffle.merge.percent</a></td><td>0.66</td><td>The usage threshold at which an in-memory merge will be
initiated, expressed as a percentage of the total memory allocated to
storing in-memory map outputs, as defined by
mapred.job.shuffle.input.buffer.percent.
</td>
</tr>
<tr>
<td><a name="mapred.job.shuffle.input.buffer.percent">mapred.job.shuffle.input.buffer.percent</a></td><td>0.70</td><td>The percentage of memory to be allocated from the maximum heap
size to storing map outputs during the shuffle.
</td>
</tr>
<tr>
<td><a name="mapred.job.reduce.input.buffer.percent">mapred.job.reduce.input.buffer.percent</a></td><td>0.0</td><td>The percentage of memory (relative to the maximum heap size) to
retain map outputs during the reduce. When the shuffle is concluded, any
remaining map outputs in memory must consume less than this threshold before
the reduce can begin.
</td>
</tr>
<tr>
<td><a name="mapred.map.tasks.speculative.execution">mapred.map.tasks.speculative.execution</a></td><td>true</td><td>If true, then multiple instances of some map tasks
may be executed in parallel.</td>
</tr>
<tr>
<td><a name="mapred.reduce.tasks.speculative.execution">mapred.reduce.tasks.speculative.execution</a></td><td>true</td><td>If true, then multiple instances of some reduce tasks
may be executed in parallel.</td>
</tr>
<tr>
<td><a name="mapred.job.reuse.jvm.num.tasks">mapred.job.reuse.jvm.num.tasks</a></td><td>1</td><td>How many tasks to run per jvm. If set to -1, there is
no limit.
</td>
</tr>
<tr>
<td><a name="mapred.min.split.size">mapred.min.split.size</a></td><td>0</td><td>The minimum size chunk that map input should be split
into. Note that some file formats may have minimum split sizes that
take priority over this setting.</td>
</tr>
<tr>
<td><a name="mapred.jobtracker.maxtasks.per.job">mapred.jobtracker.maxtasks.per.job</a></td><td>-1</td><td>The maximum number of tasks for a single job.
A value of -1 indicates that there is no maximum.</td>
</tr>
<tr>
<td><a name="mapred.submit.replication">mapred.submit.replication</a></td><td>10</td><td>The replication level for submitted job files. This
should be around the square root of the number of nodes.
</td>
</tr>
<tr>
<td><a name="mapred.tasktracker.dns.interface">mapred.tasktracker.dns.interface</a></td><td>default</td><td>The name of the Network Interface from which a task
tracker should report its IP address.
</td>
</tr>
<tr>
<td><a name="mapred.tasktracker.dns.nameserver">mapred.tasktracker.dns.nameserver</a></td><td>default</td><td>The host name or IP address of the name server (DNS)
which a TaskTracker should use to determine the host name used by
the JobTracker for communication and display purposes.
</td>
</tr>
<tr>
<td><a name="tasktracker.http.threads">tasktracker.http.threads</a></td><td>40</td><td>The number of worker threads for the http server. This is
used for map output fetching.
</td>
</tr>
<tr>
<td><a name="mapred.task.tracker.http.address">mapred.task.tracker.http.address</a></td><td>0.0.0.0:50060</td><td>
The task tracker http server address and port.
If the port is 0 then the server will start on a free port.
</td>
</tr>
<tr>
<td><a name="keep.failed.task.files">keep.failed.task.files</a></td><td>false</td><td>Should the files for failed tasks be kept? This should only be
used on jobs that are failing, because the storage is never
reclaimed. It also prevents the map outputs from being erased
from the reduce directory as they are consumed.</td>
</tr>
<tr>
<td><a name="mapred.output.compress">mapred.output.compress</a></td><td>false</td><td>Should the job outputs be compressed?
</td>
</tr>
<tr>
<td><a name="mapred.output.compression.type">mapred.output.compression.type</a></td><td>RECORD</td><td>If the job outputs are to be compressed as SequenceFiles, how should
they be compressed? Should be one of NONE, RECORD or BLOCK.
</td>
</tr>
<tr>
<td><a name="mapred.output.compression.codec">mapred.output.compression.codec</a></td><td>org.apache.hadoop.io.compress.DefaultCodec</td><td>If the job outputs are compressed, how should they be compressed?
</td>
</tr>
<tr>
<td><a name="mapred.compress.map.output">mapred.compress.map.output</a></td><td>false</td><td>Should the outputs of the maps be compressed before being
sent across the network? Uses SequenceFile compression.
</td>
</tr>
<tr>
<td><a name="mapred.map.output.compression.codec">mapred.map.output.compression.codec</a></td><td>org.apache.hadoop.io.compress.DefaultCodec</td><td>If the map outputs are compressed, how should they be
compressed?
</td>
</tr>
<tr>
<td><a name="map.sort.class">map.sort.class</a></td><td>org.apache.hadoop.util.QuickSort</td><td>The default sort class for sorting keys.
</td>
</tr>
<tr>
<td><a name="mapred.userlog.limit.kb">mapred.userlog.limit.kb</a></td><td>0</td><td>The maximum size of user-logs of each task in KB. 0 disables the cap.
</td>
</tr>
<tr>
<td><a name="mapred.userlog.retain.hours">mapred.userlog.retain.hours</a></td><td>24</td><td>The maximum time, in hours, for which the user-logs are to be
retained.
</td>
</tr>
<tr>
<td><a name="mapred.hosts">mapred.hosts</a></td><td></td><td>Names a file that contains the list of nodes that may
connect to the jobtracker. If the value is empty, all hosts are
permitted.</td>
</tr>
<tr>
<td><a name="mapred.hosts.exclude">mapred.hosts.exclude</a></td><td></td><td>Names a file that contains the list of hosts that
should be excluded by the jobtracker. If the value is empty, no
hosts are excluded.</td>
</tr>
<tr>
<td><a name="mapred.max.tracker.blacklists">mapred.max.tracker.blacklists</a></td><td>4</td><td>The number of times a tasktracker may be blacklisted by
various jobs before it is blacklisted across all jobs. A blacklisted
tracker will be given tasks again later (after a day), and will become
a healthy tracker after a restart.
</td>
</tr>
<tr>
<td><a name="mapred.max.tracker.failures">mapred.max.tracker.failures</a></td><td>4</td><td>The number of task-failures on a tasktracker of a given job
after which new tasks of that job aren't assigned to it.
</td>
</tr>
<tr>
<td><a name="jobclient.output.filter">jobclient.output.filter</a></td><td>FAILED</td><td>The filter for controlling the output of the task's userlogs sent
to the console of the JobClient.
The permissible options are: NONE, KILLED, FAILED, SUCCEEDED and
ALL.
</td>
</tr>
<tr>
<td><a name="mapred.job.tracker.persist.jobstatus.active">mapred.job.tracker.persist.jobstatus.active</a></td><td>false</td><td>Indicates whether persistence of job status information is
active or not.
</td>
</tr>
<tr>
<td><a name="mapred.job.tracker.persist.jobstatus.hours">mapred.job.tracker.persist.jobstatus.hours</a></td><td>0</td><td>The number of hours job status information is persisted in DFS.
The job status information will be available after it drops out of the
memory queue and between jobtracker restarts. With a zero value the job
status information is not persisted at all in DFS.
</td>
</tr>
<tr>
<td><a name="mapred.job.tracker.persist.jobstatus.dir">mapred.job.tracker.persist.jobstatus.dir</a></td><td>/jobtracker/jobsInfo</td><td>The directory where the job status information is persisted
in a file system, to be available after it drops out of the memory queue and
between jobtracker restarts.
</td>
</tr>
<tr>
<td><a name="mapred.task.profile">mapred.task.profile</a></td><td>false</td><td>Whether the system should collect profiler
information for some of the tasks in this job. The information is stored
in the user log directory. The value is "true" if task profiling
is enabled.</td>
</tr>
<tr>
<td><a name="mapred.task.profile.maps">mapred.task.profile.maps</a></td><td>0-2</td><td>The ranges of map tasks to profile.
mapred.task.profile has to be set to true for this value to take effect.
</td>
</tr>
<tr>
<td><a name="mapred.task.profile.reduces">mapred.task.profile.reduces</a></td><td>0-2</td><td>The ranges of reduce tasks to profile.
mapred.task.profile has to be set to true for this value to take effect.
</td>
</tr>
<tr>
<td><a name="mapred.line.input.format.linespermap">mapred.line.input.format.linespermap</a></td><td>1</td><td>Number of lines per split in NLineInputFormat.
</td>
</tr>
<tr>
<td><a name="mapred.skip.attempts.to.start.skipping">mapred.skip.attempts.to.start.skipping</a></td><td>2</td><td>The number of task attempts AFTER which skip mode
will be kicked off. When skip mode is kicked off, the
task reports to the TaskTracker the range of records which it will
process next, so that on failures the TaskTracker knows which
records are possibly bad. On further executions, those records
are skipped.
</td>
</tr>
<tr>
<td><a name="mapred.skip.map.auto.incr.proc.count">mapred.skip.map.auto.incr.proc.count</a></td><td>true</td><td>If set to true,
SkipBadRecords.COUNTER_MAP_PROCESSED_RECORDS is incremented
by MapRunner after invoking the map function. This value must be set to
false for applications which process the records asynchronously
or buffer the input records (for example, streaming).
In such cases, applications should increment this counter on their own.
</td>
</tr>
<tr>
<td><a name="mapred.skip.reduce.auto.incr.proc.count">mapred.skip.reduce.auto.incr.proc.count</a></td><td>true</td><td>If set to true,
SkipBadRecords.COUNTER_REDUCE_PROCESSED_GROUPS is incremented
by the framework after invoking the reduce function. This value must be set
to false for applications which process the records asynchronously
or buffer the input records (for example, streaming).
In such cases, applications should increment this counter on their own.
</td>
</tr>
<tr>
<td><a name="mapred.skip.out.dir">mapred.skip.out.dir</a></td><td></td><td>If no value is specified here, the skipped records are
written to the output directory at _logs/skip.
The user can stop the writing of skipped records by setting the value to
"none".
</td>
</tr>
<tr>
<td><a name="mapred.skip.map.max.skip.records">mapred.skip.map.max.skip.records</a></td><td>0</td><td>The number of acceptable skip records surrounding the bad
record, PER bad record, in the mapper. The number includes the bad record
as well. To turn off the detection/skipping of bad records, set the
value to 0.
The framework tries to narrow down the skipped range by retrying
until this threshold is met OR all attempts get exhausted for this task.
Set the value to Long.MAX_VALUE to indicate that the framework need not try
to narrow down; whatever records (application-dependent) get skipped are
acceptable.
</td>
</tr>
<tr>
<td><a name="mapred.skip.reduce.max.skip.groups">mapred.skip.reduce.max.skip.groups</a></td><td>0</td><td>The number of acceptable skip groups surrounding the bad
group, PER bad group, in the reducer. The number includes the bad group
as well. To turn off the detection/skipping of bad groups, set the
value to 0.
The framework tries to narrow down the skipped range by retrying
until this threshold is met OR all attempts get exhausted for this task.
Set the value to Long.MAX_VALUE to indicate that the framework need not try
to narrow down; whatever groups (application-dependent) get skipped are
acceptable.
</td>
</tr>
<tr>
<td><a name="job.end.retry.attempts">job.end.retry.attempts</a></td><td>0</td><td>Indicates how many times Hadoop should attempt to contact the
notification URL.</td>
</tr>
<tr>
<td><a name="job.end.retry.interval">job.end.retry.interval</a></td><td>30000</td><td>Indicates the time in milliseconds between notification URL retry
calls.</td>
</tr>
<tr>
<td><a name="hadoop.rpc.socket.factory.class.JobSubmissionProtocol">hadoop.rpc.socket.factory.class.JobSubmissionProtocol</a></td><td></td><td>SocketFactory to use to connect to a Map/Reduce master
(JobTracker). If null or empty, then use hadoop.rpc.socket.class.default.
</td>
</tr>
<tr>
<td><a name="mapred.task.cache.levels">mapred.task.cache.levels</a></td><td>2</td><td>This is the max level of the task cache. For example, if
the level is 2, the tasks cached are at the host level and at the rack
level.
</td>
</tr>
<tr>
<td><a name="mapred.queue.names">mapred.queue.names</a></td><td>default</td><td>Comma separated list of queues configured for this jobtracker.
Jobs are added to queues and schedulers can configure different
scheduling properties for the various queues. To configure a property
for a queue, the name of the queue must match the name specified in this
value. Queue properties that are common to all schedulers are configured
here with the naming convention mapred.queue.$QUEUE-NAME.$PROPERTY-NAME,
e.g. mapred.queue.default.acl-submit-job.
The number of queues configured in this parameter could depend on the
type of scheduler being used, as specified in
mapred.jobtracker.taskScheduler. For example, the JobQueueTaskScheduler
supports only a single queue, which is the default configured here.
Before adding more queues, ensure that the scheduler you've configured
supports multiple queues.
</td>
</tr>
<tr>
<td><a name="mapred.acls.enabled">mapred.acls.enabled</a></td><td>false</td><td>Specifies whether ACLs are enabled, and should be checked
for various operations.
</td>
</tr>
<tr>
<td><a name="mapred.queue.default.acl-submit-job">mapred.queue.default.acl-submit-job</a></td><td>*</td><td>Comma separated list of user and group names that are allowed
to submit jobs to the 'default' queue. The user list and the group list
are separated by a blank, e.g. alice,bob group1,group2.
If set to the special value '*', all users are allowed to
submit jobs.
</td>
</tr>
<tr>
<td><a name="mapred.queue.default.acl-administer-jobs">mapred.queue.default.acl-administer-jobs</a></td><td>*</td><td>Comma separated list of user and group names that are allowed
to delete jobs or modify a job's priority for jobs not owned by the current
user in the 'default' queue. The user list and the group list
are separated by a blank, e.g. alice,bob group1,group2.
If set to the special value '*', all users are allowed to do
this operation.
</td>
</tr>
<tr>
<td><a name="mapred.job.queue.name">mapred.job.queue.name</a></td><td>default</td><td>Queue to which a job is submitted. This must match one of the
queues defined in mapred.queue.names for the system. Also, the ACL setup
for the queue must allow the current user to submit a job to the queue.
Before specifying a queue, ensure that the system is configured with
the queue, and access is allowed for submitting jobs to the queue.
</td>
</tr>
<tr>
<td><a name="mapred.tasktracker.indexcache.mb">mapred.tasktracker.indexcache.mb</a></td><td>10</td><td>The maximum memory that a task tracker allows for the
index cache that is used when serving map outputs to reducers.
</td>
</tr>
<tr>
<td><a name="mapred.merge.recordsBeforeProgress">mapred.merge.recordsBeforeProgress</a></td><td>10000</td><td>The number of records to process during merge before
sending a progress notification to the TaskTracker.
</td>
</tr>
<tr>
<td><a name="mapred.reduce.slowstart.completed.maps">mapred.reduce.slowstart.completed.maps</a></td><td>0.05</td><td>Fraction of the number of maps in the job which should be
complete before reduces are scheduled for the job.
</td>
</tr>
</table>
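<p>As a usage sketch (not part of the generated defaults above): these properties
are overridden per cluster in a site-specific configuration file, conventionally
mapred-site.xml on the classpath. The snippet below is illustrative only; the
chosen override values are assumptions, not recommendations. The
mapred.child.java.opts value reuses the example given in that property's
description.</p>
<pre>
&lt;?xml version="1.0"?&gt;
&lt;!-- mapred-site.xml: illustrative overrides of the defaults documented above --&gt;
&lt;configuration&gt;
  &lt;property&gt;
    &lt;name&gt;mapred.child.java.opts&lt;/name&gt;
    &lt;!-- 1 GB heap plus verbose gc logging; @taskid@ is interpolated per task --&gt;
    &lt;value&gt;-Xmx1024m -verbose:gc -Xloggc:/tmp/@taskid@.gc&lt;/value&gt;
  &lt;/property&gt;
  &lt;property&gt;
    &lt;name&gt;mapred.tasktracker.map.tasks.maximum&lt;/name&gt;
    &lt;!-- assumed value for a node with more cores than the default of 2 --&gt;
    &lt;value&gt;4&lt;/value&gt;
  &lt;/property&gt;
&lt;/configuration&gt;
</pre>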
</body>
</html>