<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<META http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Hadoop 0.20.1 Release Notes</title>
<STYLE type="text/css">
                H1 {font-family: sans-serif}
                H2 {font-family: sans-serif; margin-left: 7mm}
                TABLE {margin-left: 7mm}
        </STYLE>
</head>
<body>
<h1>Hadoop 0.20.1 Release Notes</h1>
                These release notes include new developer and user-facing incompatibilities, features, and major improvements. The table below is sorted by Component.

                <a name="changes"></a>
<h2>Changes Since Hadoop 0.20.0</h2>

<h3>Common</h3>

<h4>        Sub-task
</h4>
<ul>
<li>[<a href='https://issues.apache.org/jira/browse/HADOOP-6213'>HADOOP-6213</a>] -         Remove commons dependency on commons-cli2
</li>
</ul>

<h4>        Bug
</h4>
<ul>
<li>[<a href='https://issues.apache.org/jira/browse/HADOOP-4626'>HADOOP-4626</a>] -         API link in forrest doc should point to the same version of hadoop.
</li>
<li>[<a href='https://issues.apache.org/jira/browse/HADOOP-4674'>HADOOP-4674</a>] -         hadoop fs -help should list detailed help info for the following commands: test, text, tail, stat &amp; touchz
</li>
<li>[<a href='https://issues.apache.org/jira/browse/HADOOP-4856'>HADOOP-4856</a>] -         Document JobInitializationPoller configuration in capacity scheduler forrest documentation.
</li>
<li>[<a href='https://issues.apache.org/jira/browse/HADOOP-4931'>HADOOP-4931</a>] -         Document TaskTracker's memory management functionality and CapacityScheduler's memory based scheduling.
</li>
<li>[<a href='https://issues.apache.org/jira/browse/HADOOP-5210'>HADOOP-5210</a>] -         Reduce Task Progress shows &gt; 100% when the total size of map outputs (for a single reducer) is high
</li>
<li>[<a href='https://issues.apache.org/jira/browse/HADOOP-5213'>HADOOP-5213</a>] -         BZip2CompressionOutputStream NullPointerException
</li>
<li>[<a href='https://issues.apache.org/jira/browse/HADOOP-5349'>HADOOP-5349</a>] -         When the size required for a path is -1, LocalDirAllocator.getLocalPathForWrite fails with a DiskCheckerException when the disk it selects is bad.
</li>
<li>[<a href='https://issues.apache.org/jira/browse/HADOOP-5533'>HADOOP-5533</a>] -         Recovery duration shown on the jobtracker webpage is inaccurate
</li>
<li>[<a href='https://issues.apache.org/jira/browse/HADOOP-5539'>HADOOP-5539</a>] -         o.a.h.mapred.Merger not maintaining map out compression on intermediate files
</li>
<li>[<a href='https://issues.apache.org/jira/browse/HADOOP-5636'>HADOOP-5636</a>] -         Job is left in Running state after a killJob
</li>
<li>[<a href='https://issues.apache.org/jira/browse/HADOOP-5641'>HADOOP-5641</a>] -         Possible NPE in CapacityScheduler's MemoryMatcher
</li>
<li>[<a href='https://issues.apache.org/jira/browse/HADOOP-5646'>HADOOP-5646</a>] -         TestQueueCapacities is failing Hudson tests for the last few builds
</li>
<li>[<a href='https://issues.apache.org/jira/browse/HADOOP-5648'>HADOOP-5648</a>] -         Not able to generate gridmix.jar on already compiled version of hadoop
</li>
<li>[<a href='https://issues.apache.org/jira/browse/HADOOP-5654'>HADOOP-5654</a>] -         TestReplicationPolicy.&lt;init&gt; fails on java.net.BindException
</li>
<li>[<a href='https://issues.apache.org/jira/browse/HADOOP-5655'>HADOOP-5655</a>] -         TestMRServerPorts fails on java.net.BindException
</li>
<li>[<a href='https://issues.apache.org/jira/browse/HADOOP-5688'>HADOOP-5688</a>] -         HftpFileSystem.getChecksum(..) does not work for the paths with scheme and authority
</li>
<li>[<a href='https://issues.apache.org/jira/browse/HADOOP-5691'>HADOOP-5691</a>] -         org.apache.hadoop.mapreduce.Reducer should not be abstract.
</li>
<li>[<a href='https://issues.apache.org/jira/browse/HADOOP-5711'>HADOOP-5711</a>] -         Change Namenode file close log to info
</li>
<li>[<a href='https://issues.apache.org/jira/browse/HADOOP-5718'>HADOOP-5718</a>] -         Capacity Scheduler should not check for presence of default queue while starting up.
</li>
<li>[<a href='https://issues.apache.org/jira/browse/HADOOP-5719'>HADOOP-5719</a>] -         Jobs failed during job initialization are never removed from the Capacity Scheduler's waiting list
</li>
<li>[<a href='https://issues.apache.org/jira/browse/HADOOP-5736'>HADOOP-5736</a>] -         Update CapacityScheduler documentation to reflect latest changes
</li>
<li>[<a href='https://issues.apache.org/jira/browse/HADOOP-5746'>HADOOP-5746</a>] -         Errors encountered in MROutputThread after the last map/reduce call can go undetected
</li>
<li>[<a href='https://issues.apache.org/jira/browse/HADOOP-5796'>HADOOP-5796</a>] -         DFS Write pipeline does not detect defective datanode correctly in some cases (HADOOP-3339)
</li>
<li>[<a href='https://issues.apache.org/jira/browse/HADOOP-5828'>HADOOP-5828</a>] -         Use absolute path for JobTracker's mapred.local.dir in MiniMRCluster
</li>
<li>[<a href='https://issues.apache.org/jira/browse/HADOOP-5850'>HADOOP-5850</a>] -         map/reduce doesn't run jobs with 0 maps
</li>
<li>[<a href='https://issues.apache.org/jira/browse/HADOOP-5863'>HADOOP-5863</a>] -         mapred metrics shows negative count of waiting maps and reduces
</li>
<li>[<a href='https://issues.apache.org/jira/browse/HADOOP-5869'>HADOOP-5869</a>] -         TestQueueCapacities
</li>
<li>[<a href='https://issues.apache.org/jira/browse/HADOOP-6017'>HADOOP-6017</a>] -         NameNode and SecondaryNameNode fail to restart because of abnormal filenames.
</li>
<li>[<a href='https://issues.apache.org/jira/browse/HADOOP-6097'>HADOOP-6097</a>] -         Multiple bugs w/ Hadoop archives
</li>
<li>[<a href='https://issues.apache.org/jira/browse/HADOOP-6139'>HADOOP-6139</a>] -         Incomplete help message is displayed for rm and rmr options.
</li>
<li>[<a href='https://issues.apache.org/jira/browse/HADOOP-6141'>HADOOP-6141</a>] -         hadoop 0.20 branch &quot;test-patch&quot; is broken
</li>
<li>[<a href='https://issues.apache.org/jira/browse/HADOOP-6145'>HADOOP-6145</a>] -         No error message for deleting a non-existent file or directory.
</li>
<li>[<a href='https://issues.apache.org/jira/browse/HADOOP-6215'>HADOOP-6215</a>] -         fix GenericOptionsParser to deal with -D with '=' in the value (see the example following this list)
</li>
</ul>
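<p>
                The HADOOP-6215 fix above concerns how GenericOptionsParser handles a -D value that itself contains an '='. The sketch below is illustrative only; the property name and value are invented for the example and are not part of the release.
</p>
<pre>
// Illustrative sketch: after HADOOP-6215 a '=' inside a -D value is expected
// to be kept intact rather than truncated at the second '='.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.GenericOptionsParser;

public class DashDExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    String[] cli = new String[] {"-D", "my.custom.property=a=b"};
    new GenericOptionsParser(conf, cli);                   // parses generic options into conf
    System.out.println(conf.get("my.custom.property"));    // expected: a=b
  }
}
</pre>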

<h4>        Improvement
</h4>
<ul>
<li>[<a href='https://issues.apache.org/jira/browse/HADOOP-5726'>HADOOP-5726</a>] -         Remove pre-emption from the capacity scheduler code base
</li>
</ul>

<h4>        New Feature
</h4>
<ul>
<li>[<a href='https://issues.apache.org/jira/browse/HADOOP-3315'>HADOOP-3315</a>] -         New binary file format
</li>
<li>[<a href='https://issues.apache.org/jira/browse/HADOOP-5714'>HADOOP-5714</a>] -         Metric to show number of fs.exists (or number of getFileInfo) calls
</li>
<li>[<a href='https://issues.apache.org/jira/browse/HADOOP-6080'>HADOOP-6080</a>] -         Handling of Trash with quota
</li>
</ul>

<h3>HDFS</h3>

<h4>        Bug
</h4>
<ul>
<li>[<a href='https://issues.apache.org/jira/browse/HDFS-26'>HDFS-26</a>] -         HADOOP-5862 for version .20 (Namespace quota exceeded message unclear)
</li>
<li>[<a href='https://issues.apache.org/jira/browse/HDFS-167'>HDFS-167</a>] -         DFSClient continues to retry indefinitely
</li>
<li>[<a href='https://issues.apache.org/jira/browse/HDFS-438'>HDFS-438</a>] -         Improve help message for quotas
</li>
<li>[<a href='https://issues.apache.org/jira/browse/HDFS-442'>HDFS-442</a>] -         dfsthroughput in test.jar throws NPE
</li>
<li>[<a href='https://issues.apache.org/jira/browse/HDFS-485'>HDFS-485</a>] -         error : too many fetch failures
</li>
<li>[<a href='https://issues.apache.org/jira/browse/HDFS-495'>HDFS-495</a>] -         Hadoop FSNamesystem startFileInternal() getLease() has bug
</li>
<li>[<a href='https://issues.apache.org/jira/browse/HDFS-525'>HDFS-525</a>] -         ListPathsServlet.java uses static SimpleDateFormat that has threading issues
</li>
</ul>

<h4>        Improvement
</h4>
<ul>
<li>[<a href='https://issues.apache.org/jira/browse/HDFS-504'>HDFS-504</a>] -         HDFS updates the modification time of a file when the file is closed.
</li>
<li>[<a href='https://issues.apache.org/jira/browse/HDFS-527'>HDFS-527</a>] -         Refactor DFSClient constructors
</li>
</ul>

<h3>Map/Reduce</h3>

<h4>        Bug
</h4>
<ul>
<li>[<a href='https://issues.apache.org/jira/browse/MAPREDUCE-2'>MAPREDUCE-2</a>] -         ArrayOutOfIndex error in KeyFieldBasedPartitioner on empty key
</li>
<li>[<a href='https://issues.apache.org/jira/browse/MAPREDUCE-18'>MAPREDUCE-18</a>] -         Under load the shuffle sometimes gets incorrect data
</li>
<li>[<a href='https://issues.apache.org/jira/browse/MAPREDUCE-40'>MAPREDUCE-40</a>] -         Memory management variables need a backwards compatibility option after HADOOP-5881
</li>
<li>[<a href='https://issues.apache.org/jira/browse/MAPREDUCE-112'>MAPREDUCE-112</a>] -         Reduce Input Records and Reduce Output Records counters are not being set when using the new Mapreduce reducer API
</li>
<li>[<a href='https://issues.apache.org/jira/browse/MAPREDUCE-124'>MAPREDUCE-124</a>] -         When abortTask of OutputCommitter fails with an Exception for a map-only job, the task is marked as success
</li>
<li>[<a href='https://issues.apache.org/jira/browse/MAPREDUCE-130'>MAPREDUCE-130</a>] -         Delete the jobconf copy from the log directory of the JobTracker when the job is retired
</li>
<li>[<a href='https://issues.apache.org/jira/browse/MAPREDUCE-179'>MAPREDUCE-179</a>] -         setProgress not called for new RecordReaders
</li>
<li>[<a href='https://issues.apache.org/jira/browse/MAPREDUCE-383'>MAPREDUCE-383</a>] -         pipes combiner does not reset properly after a spill
</li>
<li>[<a href='https://issues.apache.org/jira/browse/MAPREDUCE-421'>MAPREDUCE-421</a>] -         mapred pipes might return exit code 0 even when failing
</li>
<li>[<a href='https://issues.apache.org/jira/browse/MAPREDUCE-430'>MAPREDUCE-430</a>] -         Task stuck in cleanup with OutOfMemoryErrors
</li>
<li>[<a href='https://issues.apache.org/jira/browse/MAPREDUCE-565'>MAPREDUCE-565</a>] -         Partitioner does not work with new API
</li>
<li>[<a href='https://issues.apache.org/jira/browse/MAPREDUCE-657'>MAPREDUCE-657</a>] -         CompletedJobStatusStore hardcodes filesystem to hdfs
</li>
<li>[<a href='https://issues.apache.org/jira/browse/MAPREDUCE-687'>MAPREDUCE-687</a>] -         TestMiniMRMapRedDebugScript fails sometimes
</li>
<li>[<a href='https://issues.apache.org/jira/browse/MAPREDUCE-735'>MAPREDUCE-735</a>] -         ArrayIndexOutOfBoundsException is thrown by KeyFieldBasedPartitioner
</li>
<li>[<a href='https://issues.apache.org/jira/browse/MAPREDUCE-745'>MAPREDUCE-745</a>] -         TestRecoveryManager fails sometimes
</li>
<li>[<a href='https://issues.apache.org/jira/browse/MAPREDUCE-796'>MAPREDUCE-796</a>] -         Encountered &quot;ClassCastException&quot; on tasktracker while running wordcount with MultithreadedMapRunner
</li>
<li>[<a href='https://issues.apache.org/jira/browse/MAPREDUCE-805'>MAPREDUCE-805</a>] -         Deadlock in Jobtracker
</li>
<li>[<a href='https://issues.apache.org/jira/browse/MAPREDUCE-806'>MAPREDUCE-806</a>] -         WordCount example does not compile given the current instructions
</li>
<li>[<a href='https://issues.apache.org/jira/browse/MAPREDUCE-807'>MAPREDUCE-807</a>] -         Stray user files in mapred.system.dir with permissions other than 777 can prevent the jobtracker from starting up.
</li>
<li>[<a href='https://issues.apache.org/jira/browse/MAPREDUCE-818'>MAPREDUCE-818</a>] -         org.apache.hadoop.mapreduce.Counters.getGroup returns null if the group name doesn't exist.
</li>
<li>[<a href='https://issues.apache.org/jira/browse/MAPREDUCE-827'>MAPREDUCE-827</a>] -         &quot;hadoop job -status &lt;jobid&gt;&quot; command should display job's completion status also.
</li>
<li>[<a href='https://issues.apache.org/jira/browse/MAPREDUCE-832'>MAPREDUCE-832</a>] -         Too many WARN messages about deprecated memory config variables in JobTracker log
</li>
<li>[<a href='https://issues.apache.org/jira/browse/MAPREDUCE-834'>MAPREDUCE-834</a>] -         When the TaskTracker config uses old memory management values, its memory monitoring is disabled.
</li>
<li>[<a href='https://issues.apache.org/jira/browse/MAPREDUCE-838'>MAPREDUCE-838</a>] -         Task succeeds even when committer.commitTask fails with IOException
</li>
<li>[<a href='https://issues.apache.org/jira/browse/MAPREDUCE-911'>MAPREDUCE-911</a>] -         TestTaskFail fails sometimes
</li>
<li>[<a href='https://issues.apache.org/jira/browse/MAPREDUCE-924'>MAPREDUCE-924</a>] -         TestPipes crashes on trunk
</li>
</ul>

<h4>        Improvement
</h4>
<ul>
<li>[<a href='https://issues.apache.org/jira/browse/MAPREDUCE-465'>MAPREDUCE-465</a>] -         Deprecate org.apache.hadoop.mapred.lib.MultithreadedMapRunner
</li>
<li>[<a href='https://issues.apache.org/jira/browse/MAPREDUCE-487'>MAPREDUCE-487</a>] -         DBInputFormat support for Oracle
</li>
<li>[<a href='https://issues.apache.org/jira/browse/MAPREDUCE-767'>MAPREDUCE-767</a>] -         to remove mapreduce dependency on commons-cli2
</li>
</ul>

<h2>Changes Since Hadoop 0.19.1</h2>
<table border="1">
<tr bgcolor="#DDDDDD">
<th align="left">Issue</th><th align="left">Component</th><th align="left">Notes</th>
</tr>
<tr>
<td><a href="https://issues.apache.org:443/jira/browse/HADOOP-3344">HADOOP-3344</a></td><td>build</td><td>Changed build procedure for libhdfs to build correctly for different platforms. Build instructions are in the Jira item.</td>
</tr>
<tr>
<td><a href="https://issues.apache.org:443/jira/browse/HADOOP-4253">HADOOP-4253</a></td><td>conf</td><td>Removed from class org.apache.hadoop.fs.RawLocalFileSystem the deprecated methods public String getName(), public void lock(Path p, boolean shared), and public void release(Path p).</td>
</tr>
<tr>
<td><a href="https://issues.apache.org:443/jira/browse/HADOOP-4454">HADOOP-4454</a></td><td>conf</td><td>Changed processing of conf/slaves file to allow # to begin a comment.</td>
</tr>
<tr>
<td><a href="https://issues.apache.org:443/jira/browse/HADOOP-4631">HADOOP-4631</a></td><td>conf</td><td>Split hadoop-default.xml into core-default.xml, hdfs-default.xml and mapreduce-default.xml.</td>
</tr>
<tr>
<td><a href="https://issues.apache.org:443/jira/browse/HADOOP-4035">HADOOP-4035</a></td><td>contrib/capacity-sched</td><td>Changed capacity scheduler policy to take note of task memory requirements and task tracker memory availability.</td>
</tr>
<tr>
<td><a href="https://issues.apache.org:443/jira/browse/HADOOP-4445">HADOOP-4445</a></td><td>contrib/capacity-sched</td><td>Changed JobTracker UI to better present the number of active tasks.</td>
</tr>
<tr>
<td><a href="https://issues.apache.org:443/jira/browse/HADOOP-4576">HADOOP-4576</a></td><td>contrib/capacity-sched</td><td>Changed capacity scheduler UI to better present number of running and pending tasks.</td>
</tr>
<tr>
<td><a href="https://issues.apache.org:443/jira/browse/HADOOP-4179">HADOOP-4179</a></td><td>contrib/chukwa</td><td>Introduced Vaidya rule based performance diagnostic tool for Map/Reduce jobs.</td>
</tr>
<tr>
<td><a href="https://issues.apache.org:443/jira/browse/HADOOP-4827">HADOOP-4827</a></td><td>contrib/chukwa</td><td>Improved framework for data aggregation in Chukwa.</td>
</tr>
<tr>
<td><a href="https://issues.apache.org:443/jira/browse/HADOOP-4843">HADOOP-4843</a></td><td>contrib/chukwa</td><td>Introduced Chukwa collection of job history.</td>
</tr>
<tr>
<td><a href="https://issues.apache.org:443/jira/browse/HADOOP-5030">HADOOP-5030</a></td><td>contrib/chukwa</td><td>Changed RPM install location to the value specified by the build.properties file.</td>
</tr>
<tr>
<td><a href="https://issues.apache.org:443/jira/browse/HADOOP-5531">HADOOP-5531</a></td><td>contrib/chukwa</td><td>Disabled Chukwa unit tests for 0.20 branch only.</td>
</tr>
<tr>
<td><a href="https://issues.apache.org:443/jira/browse/HADOOP-4789">HADOOP-4789</a></td><td>contrib/fair-share</td><td>Changed fair scheduler to divide resources equally between pools, not jobs.</td>
</tr>
<tr>
<td><a href="https://issues.apache.org:443/jira/browse/HADOOP-4873">HADOOP-4873</a></td><td>contrib/fair-share</td><td>Changed fair scheduler UI to display minMaps and minReduces variables.</td>
</tr>
<tr>
<td><a href="https://issues.apache.org:443/jira/browse/HADOOP-3750">HADOOP-3750</a></td><td>dfs</td><td>Removed deprecated method parseArgs from org.apache.hadoop.fs.FileSystem.</td>
</tr>
<tr>
<td><a href="https://issues.apache.org:443/jira/browse/HADOOP-4029">HADOOP-4029</a></td><td>dfs</td><td>Added name node storage information to the dfshealth page, and moved data node information to a separate page.</td>
</tr>
<tr>
<td><a href="https://issues.apache.org:443/jira/browse/HADOOP-4103">HADOOP-4103</a></td><td>dfs</td><td>Modified dfsadmin -report to report under-replicated blocks, blocks with corrupt replicas, and missing blocks.</td>
</tr>
<tr>
<td><a href="https://issues.apache.org:443/jira/browse/HADOOP-4567">HADOOP-4567</a></td><td>dfs</td><td>Changed GetFileBlockLocations to return topology information for nodes that host the block replicas.</td>
</tr>
<tr>
<td><a href="https://issues.apache.org:443/jira/browse/HADOOP-4572">HADOOP-4572</a></td><td>dfs</td><td>Moved org.apache.hadoop.hdfs.{CreateEditsLog, NNThroughputBenchmark} to org.apache.hadoop.hdfs.server.namenode.</td>
</tr>
<tr>
<td><a href="https://issues.apache.org:443/jira/browse/HADOOP-4618">HADOOP-4618</a></td><td>dfs</td><td>Moved HTTP server from FSNameSystem to NameNode. Removed FSNamesystem.getNameNodeInfoPort(). Replaced FSNamesystem.getDFSNameNodeMachine() and FSNamesystem.getDFSNameNodePort() with new method FSNamesystem.getDFSNameNodeAddress(). Removed constructor NameNode(bindAddress, conf).</td>
</tr>
<tr>
<td><a href="https://issues.apache.org:443/jira/browse/HADOOP-4826">HADOOP-4826</a></td><td>dfs</td><td>Introduced new dfsadmin command saveNamespace to command the name service to do an immediate save of the file system image.</td>
</tr>
<tr>
<td><a href="https://issues.apache.org:443/jira/browse/HADOOP-4970">HADOOP-4970</a></td><td>dfs</td><td>Changed trash facility to use absolute path of the deleted file.</td>
</tr>
<tr>
<td><a href="https://issues.apache.org:443/jira/browse/HADOOP-5468">HADOOP-5468</a></td><td>documentation</td><td>Reformatted HTML documentation for Hadoop to use submenus at the left column.</td>
</tr>
<tr>
<td><a href="https://issues.apache.org:443/jira/browse/HADOOP-3497">HADOOP-3497</a></td><td>fs</td><td>Changed the semantics of file globbing with a PathFilter (using the globStatus method of FileSystem). Previously, the filtering was too restrictive, so that a glob of /*/* and a filter that only accepts /a/b would not have matched /a/b. With this change /a/b does match. (See the first example following the table.)</td>
</tr>
<tr>
<td><a href="https://issues.apache.org:443/jira/browse/HADOOP-4234">HADOOP-4234</a></td><td>fs</td><td>Changed KFS glue layer to allow applications to interface with multiple KFS metaservers.</td>
</tr>
<tr>
<td><a href="https://issues.apache.org:443/jira/browse/HADOOP-4422">HADOOP-4422</a></td><td>fs/s3</td><td>Modified Hadoop file system to no longer create S3 buckets. Applications can create buckets for their S3 file systems by other means, for example, using the JetS3t API.</td>
</tr>
<tr>
<td><a href="https://issues.apache.org:443/jira/browse/HADOOP-3063">HADOOP-3063</a></td><td>io</td><td>Introduced BloomMapFile subclass of MapFile that creates a Bloom filter from all keys.</td>
</tr>
<tr>
<td><a href="https://issues.apache.org:443/jira/browse/HADOOP-1230">HADOOP-1230</a></td><td>mapred</td><td>Replaced parameters with context objects in Mapper, Reducer, Partitioner, InputFormat, and OutputFormat classes. (See the second example following the table.)</td>
</tr>
<tr>
<td><a href="https://issues.apache.org:443/jira/browse/HADOOP-1650">HADOOP-1650</a></td><td>mapred</td><td>Upgraded all core servers to use Jetty 6</td>
</tr>
<tr>
<td><a href="https://issues.apache.org:443/jira/browse/HADOOP-3923">HADOOP-3923</a></td><td>mapred</td><td>Moved class org.apache.hadoop.mapred.StatusHttpServer to org.apache.hadoop.http.HttpServer.</td>
</tr>
<tr>
<td><a href="https://issues.apache.org:443/jira/browse/HADOOP-3986">HADOOP-3986</a></td><td>mapred</td><td>Removed classes org.apache.hadoop.mapred.JobShell and org.apache.hadoop.mapred.TestJobShell. Removed from JobClient methods static void setCommandLineConfig(Configuration conf) and public static Configuration getCommandLineConfig().</td>
</tr>
<tr>
<td><a href="https://issues.apache.org:443/jira/browse/HADOOP-4188">HADOOP-4188</a></td><td>mapred</td><td>Removed Task's dependency on concrete file systems by taking list from FileSystem class. Added statistics table to FileSystem class. Deprecated FileSystem method getStatistics(Class&lt;? extends FileSystem&gt; cls).</td>
</tr>
<tr>
<td><a href="https://issues.apache.org:443/jira/browse/HADOOP-4210">HADOOP-4210</a></td><td>mapred</td><td>Changed public class org.apache.hadoop.mapreduce.ID to be an abstract class. Removed from class org.apache.hadoop.mapreduce.ID the methods public static ID read(DataInput in) and public static ID forName(String str).</td>
</tr>
<tr>
<td><a href="https://issues.apache.org:443/jira/browse/HADOOP-4305">HADOOP-4305</a></td><td>mapred</td><td>Improved TaskTracker blacklisting strategy to better exclude faulty trackers from executing tasks.</td>
</tr>
<tr>
<td><a href="https://issues.apache.org:443/jira/browse/HADOOP-4435">HADOOP-4435</a></td><td>mapred</td><td>Changed JobTracker web status page to display the amount of heap memory in use. This changes the JobSubmissionProtocol.</td>
</tr>
<tr>
<td><a href="https://issues.apache.org:443/jira/browse/HADOOP-4565">HADOOP-4565</a></td><td>mapred</td><td>Improved MultiFileInputFormat so that multiple blocks from the same node or same rack can be combined into a single split.</td>
</tr>
<tr>
<td><a href="https://issues.apache.org:443/jira/browse/HADOOP-4749">HADOOP-4749</a></td><td>mapred</td><td>Added a new counter REDUCE_INPUT_BYTES.</td>
</tr>
<tr>
<td><a href="https://issues.apache.org:443/jira/browse/HADOOP-4783">HADOOP-4783</a></td><td>mapred</td><td>Changed history directory permissions to 750 and history file permissions to 740.</td>
</tr>
<tr>
<td><a href="https://issues.apache.org:443/jira/browse/HADOOP-3422">HADOOP-3422</a></td><td>metrics</td><td>Changed names of ganglia metrics to avoid conflicts and to better identify source function.</td>
</tr>
<tr>
<td><a href="https://issues.apache.org:443/jira/browse/HADOOP-4284">HADOOP-4284</a></td><td>security</td><td>Introduced HttpServer method to support global filters.</td>
</tr>
<tr>
<td><a href="https://issues.apache.org:443/jira/browse/HADOOP-4575">HADOOP-4575</a></td><td>security</td><td>Introduced independent HSFTP proxy server for authenticated access to clusters.</td>
</tr>
<tr>
<td><a href="https://issues.apache.org:443/jira/browse/HADOOP-4661">HADOOP-4661</a></td><td>tools/distcp</td><td>Introduced distch tool for parallel ch{mod, own, grp}.</td>
</tr>
</table>
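<p>
                The HADOOP-3497 entry above describes the revised globbing semantics. The sketch below is a minimal illustration, assuming a filter that only accepts /a/b as in that note; it is not code taken from the release.
</p>
<pre>
// Illustrative sketch of globStatus with a PathFilter after HADOOP-3497:
// the filter is applied to the expanded matches, so /a/b can now be returned.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;

public class GlobExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    PathFilter onlyAB = new PathFilter() {
      public boolean accept(Path p) {
        return p.toUri().getPath().equals("/a/b");
      }
    };
    FileStatus[] matches = fs.globStatus(new Path("/*/*"), onlyAB);
    if (matches != null) {
      for (FileStatus s : matches) {
        System.out.println(s.getPath());   // expected to include /a/b if it exists
      }
    }
  }
}
</pre>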
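<p>
                The HADOOP-1230 entry above refers to the context-object style of the org.apache.hadoop.mapreduce API. The mapper below is a minimal, illustrative sketch of that style (the class and field names are invented), not code shipped with the release.
</p>
<pre>
// Illustrative sketch of a new-API Mapper: the old OutputCollector and
// Reporter parameters are replaced by a single Context object.
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class TokenMapper extends Mapper&lt;LongWritable, Text, Text, IntWritable&gt; {
  private static final IntWritable ONE = new IntWritable(1);
  private final Text word = new Text();

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    for (String token : value.toString().split("\\s+")) {
      word.set(token);
      context.write(word, ONE);   // emit via the Context object
    }
  }
}
</pre>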
</body>
</html>