<html>
<body>
<table border="1">
<tr>
<td>name</td><td>value</td><td>description</td>
</tr>
<tr>
<td><a name="hadoop.tmp.dir">hadoop.tmp.dir</a></td><td>/tmp/hadoop-${user.name}</td><td>A base for other temporary directories.</td>
</tr>
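<tr>
<td colspan="3">Any property in this table can be overridden by adding an entry to core-site.xml; core-default.xml itself should not be edited. A minimal sketch (the path shown is illustrative, not a recommended value):
<pre>
&lt;property&gt;
  &lt;name&gt;hadoop.tmp.dir&lt;/name&gt;
  &lt;value&gt;/var/hadoop/tmp-${user.name}&lt;/value&gt;
&lt;/property&gt;
</pre>
</td>
</tr>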
<tr>
<td><a name="hadoop.native.lib">hadoop.native.lib</a></td><td>true</td><td>Should native hadoop libraries, if present, be used?</td>
</tr>
<tr>
<td><a name="hadoop.http.filter.initializers">hadoop.http.filter.initializers</a></td><td></td><td>A comma-separated list of class names. Each class in the list
  must extend org.apache.hadoop.http.FilterInitializer. The corresponding
  Filter will be initialized and then applied to all user-facing
  jsp and servlet web pages.  The ordering of the list defines the
  ordering of the filters.</td>
</tr>
<tr>
<td><a name="hadoop.security.authorization">hadoop.security.authorization</a></td><td>false</td><td>Is service-level authorization enabled?</td>
</tr>
<tr>
<td><a name="hadoop.logfile.size">hadoop.logfile.size</a></td><td>10000000</td><td>The maximum size of each log file, in bytes.</td>
</tr>
<tr>
<td><a name="hadoop.logfile.count">hadoop.logfile.count</a></td><td>10</td><td>The maximum number of log files.</td>
</tr>
<tr>
<td><a name="io.file.buffer.size">io.file.buffer.size</a></td><td>4096</td><td>The size of the buffer used in sequence files.
  The size of this buffer should probably be a multiple of the hardware
  page size (4096 on Intel x86), and it determines how much data is
  buffered during read and write operations.</td>
</tr>
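<tr>
<td colspan="3">For example, a site with ample memory might raise this buffer to a larger multiple of the page size in core-site.xml (65536 here is illustrative, not a recommendation):
<pre>
&lt;property&gt;
  &lt;name&gt;io.file.buffer.size&lt;/name&gt;
  &lt;value&gt;65536&lt;/value&gt;
&lt;/property&gt;
</pre>
</td>
</tr>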
<tr>
<td><a name="io.bytes.per.checksum">io.bytes.per.checksum</a></td><td>512</td><td>The number of bytes per checksum.  Must not be larger than
  io.file.buffer.size.</td>
</tr>
<tr>
<td><a name="io.skip.checksum.errors">io.skip.checksum.errors</a></td><td>false</td><td>If true, when a checksum error is encountered while
  reading a sequence file, entries are skipped instead of an
  exception being thrown.</td>
</tr>
<tr>
<td><a name="io.compression.codecs">io.compression.codecs</a></td><td>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec</td><td>A comma-separated list of the compression codec classes that can be used
  for compression/decompression.</td>
</tr>
<tr>
<td><a name="io.serializations">io.serializations</a></td><td>org.apache.hadoop.io.serializer.WritableSerialization</td><td>A list of serialization classes that can be used for
  obtaining serializers and deserializers.</td>
</tr>
<tr>
<td><a name="fs.default.name">fs.default.name</a></td><td>file:///</td><td>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  URI's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The URI's authority is used to
  determine the host, port, etc. for a filesystem.</td>
</tr>
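<tr>
<td colspan="3">On an HDFS cluster this is typically pointed at the NameNode in core-site.xml; a sketch, with an illustrative host name and port:
<pre>
&lt;property&gt;
  &lt;name&gt;fs.default.name&lt;/name&gt;
  &lt;value&gt;hdfs://namenode.example.com:9000&lt;/value&gt;
&lt;/property&gt;
</pre>
</td>
</tr>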
<tr>
<td><a name="fs.trash.interval">fs.trash.interval</a></td><td>0</td><td>Number of minutes between trash checkpoints.
  If zero, the trash feature is disabled.
  </td>
</tr>
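<tr>
<td colspan="3">For example, to enable the trash feature and empty it once a day, a site might set (1440 minutes = 24 hours; the value is illustrative):
<pre>
&lt;property&gt;
  &lt;name&gt;fs.trash.interval&lt;/name&gt;
  &lt;value&gt;1440&lt;/value&gt;
&lt;/property&gt;
</pre>
</td>
</tr>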
<tr>
<td><a name="fs.file.impl">fs.file.impl</a></td><td>org.apache.hadoop.fs.LocalFileSystem</td><td>The FileSystem for file: URIs.</td>
</tr>
<tr>
<td><a name="fs.hdfs.impl">fs.hdfs.impl</a></td><td>org.apache.hadoop.hdfs.DistributedFileSystem</td><td>The FileSystem for hdfs: URIs.</td>
</tr>
<tr>
<td><a name="fs.s3.impl">fs.s3.impl</a></td><td>org.apache.hadoop.fs.s3.S3FileSystem</td><td>The FileSystem for s3: URIs.</td>
</tr>
<tr>
<td><a name="fs.s3n.impl">fs.s3n.impl</a></td><td>org.apache.hadoop.fs.s3native.NativeS3FileSystem</td><td>The FileSystem for s3n: (Native S3) URIs.</td>
</tr>
<tr>
<td><a name="fs.kfs.impl">fs.kfs.impl</a></td><td>org.apache.hadoop.fs.kfs.KosmosFileSystem</td><td>The FileSystem for kfs: URIs.</td>
</tr>
<tr>
<td><a name="fs.hftp.impl">fs.hftp.impl</a></td><td>org.apache.hadoop.hdfs.HftpFileSystem</td><td>The FileSystem for hftp: URIs.</td>
</tr>
<tr>
<td><a name="fs.hsftp.impl">fs.hsftp.impl</a></td><td>org.apache.hadoop.hdfs.HsftpFileSystem</td><td>The FileSystem for hsftp: URIs.</td>
</tr>
<tr>
<td><a name="fs.ftp.impl">fs.ftp.impl</a></td><td>org.apache.hadoop.fs.ftp.FTPFileSystem</td><td>The FileSystem for ftp: URIs.</td>
</tr>
<tr>
<td><a name="fs.ramfs.impl">fs.ramfs.impl</a></td><td>org.apache.hadoop.fs.InMemoryFileSystem</td><td>The FileSystem for ramfs: URIs.</td>
</tr>
<tr>
<td><a name="fs.har.impl">fs.har.impl</a></td><td>org.apache.hadoop.fs.HarFileSystem</td><td>The FileSystem for Hadoop archives.</td>
</tr>
<tr>
<td><a name="fs.checkpoint.dir">fs.checkpoint.dir</a></td><td>${hadoop.tmp.dir}/dfs/namesecondary</td><td>Determines where on the local filesystem the DFS secondary
      name node should store the temporary images to merge.
      If this is a comma-delimited list of directories then the image is
      replicated in all of the directories for redundancy.
  </td>
</tr>
<tr>
<td><a name="fs.checkpoint.edits.dir">fs.checkpoint.edits.dir</a></td><td>${fs.checkpoint.dir}</td><td>Determines where on the local filesystem the DFS secondary
      name node should store the temporary edits to merge.
      If this is a comma-delimited list of directories then the edits are
      replicated in all of the directories for redundancy.
      The default value is the same as fs.checkpoint.dir.
  </td>
</tr>
<tr>
<td><a name="fs.checkpoint.period">fs.checkpoint.period</a></td><td>3600</td><td>The number of seconds between two periodic checkpoints.
  </td>
</tr>
<tr>
<td><a name="fs.checkpoint.size">fs.checkpoint.size</a></td><td>67108864</td><td>The size of the current edit log (in bytes) that triggers
       a periodic checkpoint even if fs.checkpoint.period hasn't expired.
  </td>
</tr>
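<tr>
<td colspan="3">For example, to keep redundant copies of the checkpoint image on two disks, a site might set (the paths are illustrative):
<pre>
&lt;property&gt;
  &lt;name&gt;fs.checkpoint.dir&lt;/name&gt;
  &lt;value&gt;/disk1/dfs/namesecondary,/disk2/dfs/namesecondary&lt;/value&gt;
&lt;/property&gt;
</pre>
</td>
</tr>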
<tr>
<td><a name="fs.s3.block.size">fs.s3.block.size</a></td><td>67108864</td><td>Block size to use when writing files to S3.</td>
</tr>
<tr>
<td><a name="fs.s3.buffer.dir">fs.s3.buffer.dir</a></td><td>${hadoop.tmp.dir}/s3</td><td>Determines where on the local filesystem the S3 filesystem
  should store files before sending them to S3
  (or after retrieving them from S3).
  </td>
</tr>
<tr>
<td><a name="fs.s3.maxRetries">fs.s3.maxRetries</a></td><td>4</td><td>The maximum number of retries for reading or writing files to S3
  before we signal failure to the application.
  </td>
</tr>
<tr>
<td><a name="fs.s3.sleepTimeSeconds">fs.s3.sleepTimeSeconds</a></td><td>10</td><td>The number of seconds to sleep between each S3 retry.
  </td>
</tr>
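<tr>
<td colspan="3">As a sketch, a job that writes large objects to S3 might raise the block size and the retry budget (both values are illustrative, not recommendations):
<pre>
&lt;property&gt;
  &lt;name&gt;fs.s3.block.size&lt;/name&gt;
  &lt;value&gt;134217728&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;fs.s3.maxRetries&lt;/name&gt;
  &lt;value&gt;8&lt;/value&gt;
&lt;/property&gt;
</pre>
</td>
</tr>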
<tr>
<td><a name="local.cache.size">local.cache.size</a></td><td>10737418240</td><td>The limit on the size of the cache to keep, set by default
  to 10GB. This acts as a soft limit on the cache directory for out-of-band data.
  </td>
</tr>
<tr>
<td><a name="io.seqfile.compress.blocksize">io.seqfile.compress.blocksize</a></td><td>1000000</td><td>The minimum block size for compression in block-compressed
          SequenceFiles.
  </td>
</tr>
<tr>
<td><a name="io.seqfile.lazydecompress">io.seqfile.lazydecompress</a></td><td>true</td><td>Should values of block-compressed SequenceFiles be decompressed
          only when necessary?
  </td>
</tr>
<tr>
<td><a name="io.seqfile.sorter.recordlimit">io.seqfile.sorter.recordlimit</a></td><td>1000000</td><td>The limit on the number of records to be kept in memory in a spill
          in SequenceFiles.Sorter.
  </td>
</tr>
<tr>
<td><a name="io.mapfile.bloom.size">io.mapfile.bloom.size</a></td><td>1048576</td><td>The size of the BloomFilters used in BloomMapFile. Each time this many
  keys are appended, the next BloomFilter is created (inside a DynamicBloomFilter).
  Larger values minimize the number of filters, which slightly increases performance,
  but may waste too much space if the total number of keys is usually much smaller
  than this number.
  </td>
</tr>
<tr>
<td><a name="io.mapfile.bloom.error.rate">io.mapfile.bloom.error.rate</a></td><td>0.005</td><td>The rate of false positives in the BloomFilters used in BloomMapFile.
  As this value decreases, the size of the BloomFilters increases exponentially. This
  value is the probability of encountering false positives (the default is 0.5%).
  </td>
</tr>
<tr>
<td><a name="hadoop.util.hash.type">hadoop.util.hash.type</a></td><td>murmur</td><td>The default implementation of Hash. Currently this can take one of
  two values: 'murmur' to select MurmurHash and 'jenkins' to select JenkinsHash.
  </td>
</tr>
<tr>
<td><a name="ipc.client.idlethreshold">ipc.client.idlethreshold</a></td><td>4000</td><td>Defines the threshold number of connections after which
               connections will be inspected for idleness.
  </td>
</tr>
<tr>
<td><a name="ipc.client.kill.max">ipc.client.kill.max</a></td><td>10</td><td>Defines the maximum number of clients to disconnect in one go.
  </td>
</tr>
<tr>
<td><a name="ipc.client.connection.maxidletime">ipc.client.connection.maxidletime</a></td><td>10000</td><td>The maximum time in milliseconds after which a client will bring down the
               connection to the server.
  </td>
</tr>
<tr>
<td><a name="ipc.client.connect.max.retries">ipc.client.connect.max.retries</a></td><td>10</td><td>Indicates the number of retries a client will make to establish
               a server connection.
  </td>
</tr>
<tr>
<td><a name="ipc.server.listen.queue.size">ipc.server.listen.queue.size</a></td><td>128</td><td>Indicates the length of the listen queue for servers accepting
               client connections.
  </td>
</tr>
<tr>
<td><a name="ipc.server.tcpnodelay">ipc.server.tcpnodelay</a></td><td>false</td><td>Turn on/off Nagle's algorithm for the TCP socket connection on
  the server. Setting this to true disables the algorithm and may decrease latency
  at the cost of more, smaller packets.
  </td>
</tr>
<tr>
<td><a name="ipc.client.tcpnodelay">ipc.client.tcpnodelay</a></td><td>false</td><td>Turn on/off Nagle's algorithm for the TCP socket connection on
  the client. Setting this to true disables the algorithm and may decrease latency
  at the cost of more, smaller packets.
  </td>
</tr>
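<tr>
<td colspan="3">For example, a latency-sensitive deployment might disable Nagle's algorithm on both sides (a sketch; measure before adopting):
<pre>
&lt;property&gt;
  &lt;name&gt;ipc.server.tcpnodelay&lt;/name&gt;
  &lt;value&gt;true&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;ipc.client.tcpnodelay&lt;/name&gt;
  &lt;value&gt;true&lt;/value&gt;
&lt;/property&gt;
</pre>
</td>
</tr>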
<tr>
<td><a name="webinterface.private.actions">webinterface.private.actions</a></td><td>false</td><td> If set to true, the web interfaces of the JobTracker and NameNode may contain
                actions, such as kill job, delete file, etc., that should
                not be exposed to the public. Enable this option only if the interfaces
                are reachable solely by those who have the right authorization.
  </td>
</tr>
<tr>
<td><a name="hadoop.rpc.socket.factory.class.default">hadoop.rpc.socket.factory.class.default</a></td><td>org.apache.hadoop.net.StandardSocketFactory</td><td> The default SocketFactory to use. This parameter is expected to be
    formatted as "package.FactoryClassName".
  </td>
</tr>
<tr>
<td><a name="hadoop.rpc.socket.factory.class.ClientProtocol">hadoop.rpc.socket.factory.class.ClientProtocol</a></td><td></td><td> The SocketFactory to use to connect to a DFS. If null or empty, use
    hadoop.rpc.socket.factory.class.default. This socket factory is also used by
    DFSClient to create sockets to DataNodes.
  </td>
</tr>
<tr>
<td><a name="hadoop.socks.server">hadoop.socks.server</a></td><td></td><td> Address (host:port) of the SOCKS server to be used by the
    SocksSocketFactory.
  </td>
</tr>
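<tr>
<td colspan="3">As a sketch, a client reaching a cluster through a SOCKS proxy might combine these two settings, assuming the SocksSocketFactory shipped with Hadoop (the proxy address is illustrative):
<pre>
&lt;property&gt;
  &lt;name&gt;hadoop.rpc.socket.factory.class.default&lt;/name&gt;
  &lt;value&gt;org.apache.hadoop.net.SocksSocketFactory&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;hadoop.socks.server&lt;/name&gt;
  &lt;value&gt;proxy.example.com:1080&lt;/value&gt;
&lt;/property&gt;
</pre>
</td>
</tr>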
<tr>
<td><a name="topology.node.switch.mapping.impl">topology.node.switch.mapping.impl</a></td><td>org.apache.hadoop.net.ScriptBasedMapping</td><td> The default implementation of DNSToSwitchMapping. It
    invokes a script specified in topology.script.file.name to resolve
    node names. If the value for topology.script.file.name is not set, the
    default value of DEFAULT_RACK is returned for all node names.
  </td>
</tr>
<tr>
<td><a name="topology.script.file.name">topology.script.file.name</a></td><td></td><td> The script name that should be invoked to resolve DNS names to
    NetworkTopology names. Example: the script would take host.foo.bar as an
    argument, and return /rack1 as the output.
  </td>
</tr>
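<tr>
<td colspan="3">For example, to enable script-based rack awareness, point this property at an executable that maps each host argument to a rack path (the script path is illustrative; the script itself must be supplied by the site):
<pre>
&lt;property&gt;
  &lt;name&gt;topology.script.file.name&lt;/name&gt;
  &lt;value&gt;/etc/hadoop/topology.sh&lt;/value&gt;
&lt;/property&gt;
</pre>
</td>
</tr>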
<tr>
<td><a name="topology.script.number.args">topology.script.number.args</a></td><td>100</td><td> The maximum number of arguments that the script configured with
    topology.script.file.name should be run with. Each argument is an
    IP address.
  </td>
</tr>
</table>
</body>
</html>