Uses of Class
org.apache.hadoop.mapreduce.InputSplit

Packages that use InputSplit
org.apache.hadoop.mapred A software framework for easily writing applications which process vast amounts of data (multi-terabyte data-sets) in parallel on large clusters (thousands of nodes) built of commodity hardware in a reliable, fault-tolerant manner. 
org.apache.hadoop.mapreduce   
org.apache.hadoop.mapreduce.lib.input   
 

Uses of InputSplit in org.apache.hadoop.mapred
 

Subclasses of InputSplit in org.apache.hadoop.mapred
 class FileSplit
          Deprecated. Use org.apache.hadoop.mapreduce.lib.input.FileSplit instead.
 

Uses of InputSplit in org.apache.hadoop.mapreduce
 

Methods in org.apache.hadoop.mapreduce that return InputSplit
 InputSplit MapContext.getInputSplit()
          Get the input split for this map.
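
          A minimal sketch of how this is typically reached from user code (the class name SplitAwareMapper is hypothetical, and the cast assumes a FileInputFormat-based job whose splits are FileSplits): a Mapper obtains its split through Context, which extends MapContext, and keys each output record by the name of the file the split came from.

          import java.io.IOException;

          import org.apache.hadoop.io.LongWritable;
          import org.apache.hadoop.io.Text;
          import org.apache.hadoop.mapreduce.Mapper;
          import org.apache.hadoop.mapreduce.lib.input.FileSplit;

          public class SplitAwareMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
            @Override
            protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
              // Mapper.Context extends MapContext, so getInputSplit() is available here.
              // With FileInputFormat the split is a FileSplit, which carries the file path.
              FileSplit split = (FileSplit) context.getInputSplit();
              context.write(new Text(split.getPath().getName()), key);
            }
          }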
 

Methods in org.apache.hadoop.mapreduce that return types with arguments of type InputSplit
abstract  List<InputSplit> InputFormat.getSplits(JobContext context)
          Logically split the set of input files for the job.
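
          As a sketch of how this method is used (the class name SplitPreview and the input path taken from args[0] are assumptions, not part of this API): a small utility can ask a concrete InputFormat for the same split list the framework uses to decide how many map tasks to run, without submitting a job.

          import java.util.List;

          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.fs.Path;
          import org.apache.hadoop.mapreduce.InputSplit;
          import org.apache.hadoop.mapreduce.Job;
          import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
          import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

          public class SplitPreview {
            public static void main(String[] args) throws Exception {
              Job job = new Job(new Configuration());          // Job extends JobContext
              FileInputFormat.addInputPath(job, new Path(args[0]));

              // Ask the InputFormat how it would logically split the input;
              // one map task is scheduled per returned InputSplit.
              List<InputSplit> splits = new TextInputFormat().getSplits(job);
              for (InputSplit split : splits) {
                System.out.println(split + "\tlength=" + split.getLength());
              }
            }
          }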
 

Methods in org.apache.hadoop.mapreduce with parameters of type InputSplit
abstract  RecordReader<K,V> InputFormat.createRecordReader(InputSplit split, TaskAttemptContext context)
          Create a record reader for a given split.
abstract  void RecordReader.initialize(InputSplit split, TaskAttemptContext context)
          Called once at initialization.
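
          These two methods work as a pair: createRecordReader(InputSplit, TaskAttemptContext) returns an uninitialized reader, and the framework calls initialize(InputSplit, TaskAttemptContext) once with the split before the first record is read. A minimal sketch (UpperCaseTextInputFormat and its inner reader are hypothetical names) that delegates to LineRecordReader and upper-cases each line:

          import java.io.IOException;

          import org.apache.hadoop.io.LongWritable;
          import org.apache.hadoop.io.Text;
          import org.apache.hadoop.mapreduce.InputSplit;
          import org.apache.hadoop.mapreduce.RecordReader;
          import org.apache.hadoop.mapreduce.TaskAttemptContext;
          import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;
          import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

          public class UpperCaseTextInputFormat extends TextInputFormat {
            @Override
            public RecordReader<LongWritable, Text> createRecordReader(
                InputSplit split, TaskAttemptContext context) {
              // No I/O here: the split is opened later, in initialize().
              return new UpperCaseRecordReader();
            }

            static class UpperCaseRecordReader extends RecordReader<LongWritable, Text> {
              private final LineRecordReader delegate = new LineRecordReader();
              private final Text upper = new Text();

              @Override
              public void initialize(InputSplit split, TaskAttemptContext context)
                  throws IOException, InterruptedException {
                delegate.initialize(split, context);   // called once per split
              }

              @Override
              public boolean nextKeyValue() throws IOException, InterruptedException {
                return delegate.nextKeyValue();
              }

              @Override
              public LongWritable getCurrentKey() throws IOException, InterruptedException {
                return delegate.getCurrentKey();       // byte offset of the current line
              }

              @Override
              public Text getCurrentValue() throws IOException, InterruptedException {
                upper.set(delegate.getCurrentValue().toString().toUpperCase());
                return upper;
              }

              @Override
              public float getProgress() throws IOException, InterruptedException {
                return delegate.getProgress();
              }

              @Override
              public void close() throws IOException {
                delegate.close();
              }
            }
          }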
 

Constructors in org.apache.hadoop.mapreduce with parameters of type InputSplit
MapContext(Configuration conf, TaskAttemptID taskid, RecordReader<KEYIN,VALUEIN> reader, RecordWriter<KEYOUT,VALUEOUT> writer, OutputCommitter committer, StatusReporter reporter, InputSplit split)
           
Mapper.Context(Configuration conf, TaskAttemptID taskid, RecordReader<KEYIN,VALUEIN> reader, RecordWriter<KEYOUT,VALUEOUT> writer, OutputCommitter committer, StatusReporter reporter, InputSplit split)
           
 

Uses of InputSplit in org.apache.hadoop.mapreduce.lib.input
 

Methods in org.apache.hadoop.mapreduce.lib.input that return types with arguments of type InputSplit
 List<InputSplit> FileInputFormat.getSplits(JobContext job)
          Generate the list of files and make them into FileSplits.
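
          A minimal driver-side sketch of steering this method (the class name SplitSizeConfig, the job name, and the /data/logs path are illustrative assumptions): bounding the size of the FileSplits that getSplits(JobContext) produces.

          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.fs.Path;
          import org.apache.hadoop.mapreduce.Job;
          import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
          import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

          public class SplitSizeConfig {
            public static void main(String[] args) throws Exception {
              Job job = new Job(new Configuration(), "split-size-demo");
              job.setInputFormatClass(TextInputFormat.class);
              FileInputFormat.addInputPath(job, new Path("/data/logs"));  // hypothetical input dir

              // getSplits() sizes each FileSplit as max(minSize, min(maxSize, blockSize)).
              FileInputFormat.setMinInputSplitSize(job, 64L * 1024 * 1024);   // >= 64 MB
              FileInputFormat.setMaxInputSplitSize(job, 256L * 1024 * 1024);  // <= 256 MB
            }
          }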
 

Methods in org.apache.hadoop.mapreduce.lib.input with parameters of type InputSplit
 RecordReader<LongWritable,Text> TextInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context)
           
 RecordReader<K,V> SequenceFileInputFormat.createRecordReader(InputSplit split, TaskAttemptContext context)
           
 void SequenceFileRecordReader.initialize(InputSplit split, TaskAttemptContext context)
           
 void LineRecordReader.initialize(InputSplit genericSplit, TaskAttemptContext context)
           
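
          The initialize() entries above are normally invoked by the framework, but a reader can also be driven by hand, for example in a small test. A minimal sketch (LineRecordReaderDemo is a hypothetical name; it assumes this release, where TaskAttemptContext is a concrete class with a public constructor, and takes a small local text file path from args[0]):

          import org.apache.hadoop.conf.Configuration;
          import org.apache.hadoop.fs.Path;
          import org.apache.hadoop.mapreduce.TaskAttemptContext;
          import org.apache.hadoop.mapreduce.TaskAttemptID;
          import org.apache.hadoop.mapreduce.lib.input.FileSplit;
          import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;

          public class LineRecordReaderDemo {
            public static void main(String[] args) throws Exception {
              Configuration conf = new Configuration();
              Path file = new Path(args[0]);                    // a small local text file
              long length = file.getFileSystem(conf).getFileStatus(file).getLen();
              FileSplit split = new FileSplit(file, 0, length, null);

              LineRecordReader reader = new LineRecordReader();
              TaskAttemptContext context = new TaskAttemptContext(conf, new TaskAttemptID());
              reader.initialize(split, context);                // once, before any records are read
              while (reader.nextKeyValue()) {
                // key = byte offset of the line, value = the line's text
                System.out.println(reader.getCurrentKey() + "\t" + reader.getCurrentValue());
              }
              reader.close();
            }
          }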
 



Copyright © 2009 The Apache Software Foundation