A B C D E F G H I J K L M N O P Q R S T U V W X Y Z

A

abortTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapred.FileOutputCommitter
 
abortTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
Deprecated. Discard the task output
abortTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
Deprecated. This method implements the new interface by calling the old method.
abortTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
Delete the work directory
abortTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.OutputCommitter
Discard the task output
ABSOLUTE - Static variable in class org.apache.hadoop.metrics.spi.MetricValue
 
AbstractMapWritable - Class in org.apache.hadoop.io
Abstract base class for MapWritable and SortedMapWritable. Unlike org.apache.nutch.crawl.MapWritable, this class allows creation of MapWritable<Writable, MapWritable>, so the CLASS_TO_ID and ID_TO_CLASS maps travel with the class instead of being static.
AbstractMapWritable() - Constructor for class org.apache.hadoop.io.AbstractMapWritable
constructor.
AbstractMetricsContext - Class in org.apache.hadoop.metrics.spi
The main class of the Service Provider Interface.
AbstractMetricsContext() - Constructor for class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Creates a new instance of AbstractMetricsContext
accept(Path) - Method in interface org.apache.hadoop.fs.PathFilter
Tests whether or not the specified abstract pathname should be included in a pathname list.
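
For illustration, a minimal PathFilter sketch that keeps only paths ending in ".log" (the directory and suffix are placeholders), used here with FileSystem.listStatus:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.PathFilter;

    public class LogFileLister {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // Accept only paths whose names end with ".log" (placeholder suffix).
        PathFilter logsOnly = new PathFilter() {
          public boolean accept(Path path) {
            return path.getName().endsWith(".log");
          }
        };
        for (FileStatus status : fs.listStatus(new Path("/tmp"), logsOnly)) {
          System.out.println(status.getPath());
        }
      }
    }
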
accept(Class<?>) - Method in class org.apache.hadoop.io.serializer.JavaSerialization
 
accept(Class<?>) - Method in interface org.apache.hadoop.io.serializer.Serialization
Allows clients to test whether this Serialization supports the given class.
accept(Class<?>) - Method in class org.apache.hadoop.io.serializer.WritableSerialization
 
accept(CompositeRecordReader.JoinCollector, K) - Method in interface org.apache.hadoop.mapred.join.ComposableRecordReader
While key-value pairs from this RecordReader match the given key, register them with the JoinCollector provided.
accept(CompositeRecordReader.JoinCollector, K) - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
If key provided matches that of this Composite, give JoinCollector iterator over values it may emit.
accept(CompositeRecordReader.JoinCollector, K) - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
Add an iterator to the collector at the position occupied by this RecordReader over the values in this stream paired with the key provided (i.e., register a stream of values from this source matching K with a collector).
accept(Path) - Method in class org.apache.hadoop.mapred.OutputLogFilter
 
accept(Object) - Method in interface org.apache.hadoop.mapred.SequenceFileInputFilter.Filter
Filter function: decide whether a record should be filtered or not.
accept(Object) - Method in class org.apache.hadoop.mapred.SequenceFileInputFilter.MD5Filter
Filtering method: if MD5(key) % frequency == 0, return true; otherwise return false.
accept(Object) - Method in class org.apache.hadoop.mapred.SequenceFileInputFilter.PercentFilter
Filtering method: if record# % frequency == 0, return true; otherwise return false.
accept(Object) - Method in class org.apache.hadoop.mapred.SequenceFileInputFilter.RegexFilter
Filtering method: if the key matches the regex, return true; otherwise return false.
AccessControlException - Exception in org.apache.hadoop.fs.permission
Deprecated. Use AccessControlException instead.
AccessControlException() - Constructor for exception org.apache.hadoop.fs.permission.AccessControlException
Deprecated. Default constructor is needed for unwrapping from RemoteException.
AccessControlException(String) - Constructor for exception org.apache.hadoop.fs.permission.AccessControlException
Deprecated. Constructs an AccessControlException with the specified detail message.
AccessControlException(Throwable) - Constructor for exception org.apache.hadoop.fs.permission.AccessControlException
Deprecated. Constructs a new exception with the specified cause and a detail message of (cause==null ? null : cause.toString()) (which typically contains the class and detail message of cause).
AccessControlException - Exception in org.apache.hadoop.security
An exception class for access control related issues.
AccessControlException() - Constructor for exception org.apache.hadoop.security.AccessControlException
Default constructor is needed for unwrapping from RemoteException.
AccessControlException(String) - Constructor for exception org.apache.hadoop.security.AccessControlException
Constructs an AccessControlException with the specified detail message.
AccessControlException(Throwable) - Constructor for exception org.apache.hadoop.security.AccessControlException
Constructs a new exception with the specified cause and a detail message of (cause==null ? null : cause.toString()) (which typically contains the class and detail message of cause).
activateOptions() - Method in class org.apache.hadoop.mapred.TaskLogAppender
 
activeTaskTrackers() - Method in class org.apache.hadoop.mapred.JobTracker
Get the active task tracker statuses in the cluster
add(Object) - Method in class org.apache.hadoop.contrib.utils.join.ArrayListBackedIterator
 
add(Object) - Method in interface org.apache.hadoop.contrib.utils.join.ResetableIterator
 
add(X) - Method in class org.apache.hadoop.mapred.join.ArrayListBackedIterator
 
add(InputSplit) - Method in class org.apache.hadoop.mapred.join.CompositeInputSplit
Add an InputSplit to this collection.
add(ComposableRecordReader<K, ? extends V>) - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
Add a RecordReader to this collection.
add(TupleWritable) - Method in class org.apache.hadoop.mapred.join.JoinRecordReader.JoinDelegationIterator
 
add(V) - Method in class org.apache.hadoop.mapred.join.MultiFilterRecordReader.MultiFilterDelegationIterator
 
add(T) - Method in interface org.apache.hadoop.mapred.join.ResetableIterator
Add an element to the collection of elements to iterate over.
add(U) - Method in class org.apache.hadoop.mapred.join.ResetableIterator.EMPTY
 
add(X) - Method in class org.apache.hadoop.mapred.join.StreamBackedIterator
 
add(String, MetricsBase) - Method in class org.apache.hadoop.metrics.util.MetricsRegistry
Add a new metric to the registry.
add(Node) - Method in class org.apache.hadoop.net.NetworkTopology
Add a leaf node and update the node counter & rack counter if necessary.
add(Key) - Method in class org.apache.hadoop.util.bloom.BloomFilter
 
add(Key) - Method in class org.apache.hadoop.util.bloom.CountingBloomFilter
 
add(Key) - Method in class org.apache.hadoop.util.bloom.DynamicBloomFilter
 
add(Key) - Method in class org.apache.hadoop.util.bloom.Filter
Adds a key to this filter.
add(List<Key>) - Method in class org.apache.hadoop.util.bloom.Filter
Adds a list of keys to this filter.
add(Collection<Key>) - Method in class org.apache.hadoop.util.bloom.Filter
Adds a collection of keys to this filter.
add(Key[]) - Method in class org.apache.hadoop.util.bloom.Filter
Adds an array of keys to this filter.
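
As a sketch of how these add methods are typically used, the following builds a small BloomFilter and tests membership; the vector size and hash count are illustrative rather than tuned values:

    import org.apache.hadoop.util.bloom.BloomFilter;
    import org.apache.hadoop.util.bloom.Key;
    import org.apache.hadoop.util.hash.Hash;

    public class BloomFilterExample {
      public static void main(String[] args) {
        // 1024-bit vector, 4 hash functions, Murmur hashing (illustrative sizing).
        BloomFilter filter = new BloomFilter(1024, 4, Hash.MURMUR_HASH);
        filter.add(new Key("alice".getBytes()));
        filter.add(new Key("bob".getBytes()));
        // membershipTest may return a false positive, but never a false negative.
        System.out.println(filter.membershipTest(new Key("alice".getBytes()))); // true
        System.out.println(filter.membershipTest(new Key("carol".getBytes()))); // most likely false
      }
    }
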
add(Key) - Method in class org.apache.hadoop.util.bloom.RetouchedBloomFilter
 
add_escapes(String) - Method in exception org.apache.hadoop.record.compiler.generated.ParseException
Used to convert raw characters to their escaped version when these raw versions cannot be used as part of an ASCII string literal.
addArchiveToClassPath(Path, Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Add an archive path to the current set of classpath entries.
addCacheArchive(URI, Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Add an archive to be localized to the conf.
addCacheFile(URI, Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Add a file to be localized to the conf
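
A minimal sketch of registering cache entries on a job configuration; the HDFS paths below are placeholders:

    import java.net.URI;
    import org.apache.hadoop.filecache.DistributedCache;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.JobConf;

    public class CacheSetup {
      public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(CacheSetup.class);
        // Localize a side file and an archive on every task node.
        DistributedCache.addCacheFile(new URI("/user/hadoop/lookup.dat#lookup"), conf);
        DistributedCache.addCacheArchive(new URI("/user/hadoop/dict.zip"), conf);
        // Put a jar from HDFS on the tasks' classpath.
        DistributedCache.addArchiveToClassPath(new Path("/user/hadoop/lib/extra.jar"), conf);
      }
    }
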
addClass(String, Class, String) - Method in class org.apache.hadoop.util.ProgramDriver
This is the method that adds the class to the repository.
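
For illustration, a small driver that registers programs by name, assuming the WordCount and Grep classes from org.apache.hadoop.examples are on the classpath:

    import org.apache.hadoop.examples.Grep;
    import org.apache.hadoop.examples.WordCount;
    import org.apache.hadoop.util.ProgramDriver;

    public class MyDriver {
      public static void main(String[] args) throws Throwable {
        ProgramDriver driver = new ProgramDriver();
        driver.addClass("wordcount", WordCount.class, "Counts the words in the input files.");
        driver.addClass("grep", Grep.class, "Counts the matches of a regex in the input.");
        // Dispatches to the program named by the first command-line argument.
        driver.driver(args);
      }
    }
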
addColumn(ColumnName, boolean) - Method in class org.apache.hadoop.examples.dancing.DancingLinks
Add a column to the table
addColumn(ColumnName) - Method in class org.apache.hadoop.examples.dancing.DancingLinks
Add a column to the table
addCommand(List<String>, boolean) - Static method in class org.apache.hadoop.mapred.TaskLog
Add quotes to each of the command strings and return as a single string
addContext(Context, boolean) - Method in class org.apache.hadoop.http.HttpServer
 
addContext(String, String, boolean) - Method in class org.apache.hadoop.http.HttpServer
Add a context
addDefaultApps(ContextHandlerCollection, String) - Method in class org.apache.hadoop.http.HttpServer
Add default apps.
addDefaultResource(String) - Static method in class org.apache.hadoop.conf.Configuration
Add a default resource.
addDefaults() - Method in class org.apache.hadoop.mapred.join.CompositeInputFormat
Adds the default set of identifiers to the parser.
addDefaultServlets() - Method in class org.apache.hadoop.http.HttpServer
Add default servlets.
addDependingJob(Job) - Method in class org.apache.hadoop.mapred.jobcontrol.Job
Add a job to this jobs' dependency list.
addDoubleValue(Object, double) - Method in class org.apache.hadoop.contrib.utils.join.JobBase
Increment the given counter by the given incremental value. If the counter does not exist, one is created with value 0.
addEscapes(String) - Static method in error org.apache.hadoop.record.compiler.generated.TokenMgrError
Replaces unprintable characters by their escaped (or Unicode escaped) equivalents in the given string.
addFalsePositive(Key) - Method in class org.apache.hadoop.util.bloom.RetouchedBloomFilter
Adds false positive information to this retouched Bloom filter.
addFalsePositive(Collection<Key>) - Method in class org.apache.hadoop.util.bloom.RetouchedBloomFilter
Adds a collection of false positive information to this retouched Bloom filter.
addFalsePositive(List<Key>) - Method in class org.apache.hadoop.util.bloom.RetouchedBloomFilter
Adds a list of false positive information to this retouched Bloom filter.
addFalsePositive(Key[]) - Method in class org.apache.hadoop.util.bloom.RetouchedBloomFilter
Adds an array of false positive information to this retouched Bloom filter.
addField(String, TypeID) - Method in class org.apache.hadoop.record.meta.RecordTypeInfo
Add a field.
addFileset(FileSet) - Method in class org.apache.hadoop.record.compiler.ant.RccTask
Adds a fileset that can consist of one or more files
addFileToClassPath(Path, Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Add a file path to the current set of classpath entries. It adds the file to the cache as well.
addFilter(String, String, Map<String, String>) - Method in interface org.apache.hadoop.http.FilterContainer
Add a filter to the container.
addFilter(String, String, Map<String, String>) - Method in class org.apache.hadoop.http.HttpServer
Add a filter to the container.
addFilterPathMapping(String, Context) - Method in class org.apache.hadoop.http.HttpServer
Add the path spec to the filter path mapping.
addGlobalFilter(String, String, Map<String, String>) - Method in interface org.apache.hadoop.http.FilterContainer
Add a global filter to the container.
addGlobalFilter(String, String, Map<String, String>) - Method in class org.apache.hadoop.http.HttpServer
Add a global filter to the container.
addIdentifier(String, Class<?>[], Class<? extends Parser.Node>, Class<? extends ComposableRecordReader>) - Static method in class org.apache.hadoop.mapred.join.Parser.Node
For a given identifier, add a mapping to the nodetype for the parse tree and to the ComposableRecordReader to be created, including the formals required to invoke the constructor.
addInputPath(JobConf, Path) - Static method in class org.apache.hadoop.mapred.FileInputFormat
Deprecated. Add a Path to the list of inputs for the map-reduce job.
addInputPath(JobConf, Path, Class<? extends InputFormat>) - Static method in class org.apache.hadoop.mapred.lib.MultipleInputs
Add a Path with a custom InputFormat to the list of inputs for the map-reduce job.
addInputPath(JobConf, Path, Class<? extends InputFormat>, Class<? extends Mapper>) - Static method in class org.apache.hadoop.mapred.lib.MultipleInputs
Add a Path with a custom InputFormat and Mapper to the list of inputs for the map-reduce job.
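
A sketch of wiring two differently formatted inputs into one old-API job; the paths are placeholders and IdentityMapper merely stands in for real mappers:

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.KeyValueTextInputFormat;
    import org.apache.hadoop.mapred.TextInputFormat;
    import org.apache.hadoop.mapred.lib.IdentityMapper;
    import org.apache.hadoop.mapred.lib.MultipleInputs;

    public class MultiInputSetup {
      public static void main(String[] args) {
        JobConf conf = new JobConf(MultiInputSetup.class);
        // Plain text from one directory, key/value text from another.
        MultipleInputs.addInputPath(conf, new Path("/data/plain"),
            TextInputFormat.class, IdentityMapper.class);
        MultipleInputs.addInputPath(conf, new Path("/data/keyed"),
            KeyValueTextInputFormat.class, IdentityMapper.class);
      }
    }
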
addInputPath(Job, Path) - Static method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
Add a Path to the list of inputs for the map-reduce job.
addInputPaths(JobConf, String) - Static method in class org.apache.hadoop.mapred.FileInputFormat
Deprecated. Add the given comma separated paths to the list of inputs for the map-reduce job.
addInputPaths(Job, String) - Static method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
Add the given comma separated paths to the list of inputs for the map-reduce job.
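
A sketch of the new-API (org.apache.hadoop.mapreduce) equivalents; the paths below are placeholders:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

    public class InputSetup {
      public static void main(String[] args) throws Exception {
        Job job = new Job(new Configuration(), "input-setup");
        // A single path, then a comma-separated list of additional paths.
        FileInputFormat.addInputPath(job, new Path("/data/2009/01"));
        FileInputFormat.addInputPaths(job, "/data/2009/02,/data/2009/03");
      }
    }
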
addInternalServlet(String, String, Class<? extends HttpServlet>) - Method in class org.apache.hadoop.http.HttpServer
Deprecated. This is a temporary method.
additionalConfSpec_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
addJob(Job) - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
Add a new job.
addJobInProgressListener(JobInProgressListener) - Method in class org.apache.hadoop.mapred.JobTracker
 
addJobs(Collection<Job>) - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
Add a collection of jobs
addLongValue(Object, long) - Method in class org.apache.hadoop.contrib.utils.join.JobBase
Increment the given counter by the given incremental value. If the counter does not exist, one is created with value 0.
addMapper(JobConf, Class<? extends Mapper<K1, V1, K2, V2>>, Class<? extends K1>, Class<? extends V1>, Class<? extends K2>, Class<? extends V2>, boolean, JobConf) - Static method in class org.apache.hadoop.mapred.lib.ChainMapper
Adds a Mapper class to the chain job's JobConf.
addMapper(JobConf, Class<? extends Mapper<K1, V1, K2, V2>>, Class<? extends K1>, Class<? extends V1>, Class<? extends K2>, Class<? extends V2>, boolean, JobConf) - Static method in class org.apache.hadoop.mapred.lib.ChainReducer
Adds a Mapper class to the chain job's JobConf.
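
A configuration sketch in the spirit of the ChainMapper/ChainReducer documentation; AMap and XReduce are hypothetical pass-through classes defined only for this example, and the job is configured but not submitted:

    import java.io.IOException;
    import java.util.Iterator;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reducer;
    import org.apache.hadoop.mapred.Reporter;
    import org.apache.hadoop.mapred.lib.ChainMapper;
    import org.apache.hadoop.mapred.lib.ChainReducer;

    public class ChainSetup {
      // Hypothetical mapper: passes records through unchanged.
      public static class AMap extends MapReduceBase
          implements Mapper<LongWritable, Text, LongWritable, Text> {
        public void map(LongWritable key, Text value,
            OutputCollector<LongWritable, Text> out, Reporter reporter) throws IOException {
          out.collect(key, value);
        }
      }

      // Hypothetical reducer: emits each value once.
      public static class XReduce extends MapReduceBase
          implements Reducer<LongWritable, Text, LongWritable, Text> {
        public void reduce(LongWritable key, Iterator<Text> values,
            OutputCollector<LongWritable, Text> out, Reporter reporter) throws IOException {
          while (values.hasNext()) {
            out.collect(key, values.next());
          }
        }
      }

      public static void main(String[] args) {
        JobConf conf = new JobConf(ChainSetup.class);
        conf.setJobName("chain");
        // Each chained element gets its own private JobConf(false).
        ChainMapper.addMapper(conf, AMap.class, LongWritable.class, Text.class,
            LongWritable.class, Text.class, true, new JobConf(false));
        ChainReducer.setReducer(conf, XReduce.class, LongWritable.class, Text.class,
            LongWritable.class, Text.class, true, new JobConf(false));
      }
    }
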
addMultiNamedOutput(JobConf, String, Class<? extends OutputFormat>, Class<?>, Class<?>) - Static method in class org.apache.hadoop.mapred.lib.MultipleOutputs
Adds a multi named output for the job.
addName(Class, String) - Static method in class org.apache.hadoop.io.WritableName
Add an alternate name for a class.
addNamedOutput(JobConf, String, Class<? extends OutputFormat>, Class<?>, Class<?>) - Static method in class org.apache.hadoop.mapred.lib.MultipleOutputs
Adds a named output for the job.
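
A sketch of declaring named outputs on an old-API job; the names "summary" and "trace" are arbitrary placeholders:

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.SequenceFileOutputFormat;
    import org.apache.hadoop.mapred.TextOutputFormat;
    import org.apache.hadoop.mapred.lib.MultipleOutputs;

    public class NamedOutputSetup {
      public static void main(String[] args) {
        JobConf conf = new JobConf(NamedOutputSetup.class);
        // A single named output written as text.
        MultipleOutputs.addNamedOutput(conf, "summary",
            TextOutputFormat.class, Text.class, LongWritable.class);
        // A multi named output: tasks may emit to "trace" plus an arbitrary suffix.
        MultipleOutputs.addMultiNamedOutput(conf, "trace",
            SequenceFileOutputFormat.class, Text.class, Text.class);
      }
    }
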
addNextValue(Object) - Method in class org.apache.hadoop.mapred.lib.aggregate.DoubleValueSum
add a value to the aggregator
addNextValue(double) - Method in class org.apache.hadoop.mapred.lib.aggregate.DoubleValueSum
add a value to the aggregator
addNextValue(Object) - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMax
add a value to the aggregator
addNextValue(long) - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMax
add a value to the aggregator
addNextValue(Object) - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMin
add a value to the aggregator
addNextValue(long) - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMin
add a value to the aggregator
addNextValue(Object) - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueSum
add a value to the aggregator
addNextValue(long) - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueSum
add a value to the aggregator
addNextValue(Object) - Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMax
add a value to the aggregator
addNextValue(Object) - Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMin
add a value to the aggregator
addNextValue(Object) - Method in class org.apache.hadoop.mapred.lib.aggregate.UniqValueCount
add a value to the aggregator
addNextValue(Object) - Method in interface org.apache.hadoop.mapred.lib.aggregate.ValueAggregator
add a value to the aggregator
addNextValue(Object) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueHistogram
add the given value to the aggregator.
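
For illustration, a minimal sketch of feeding values to one of these aggregators (LongValueSum); the Object overload parses the argument's string form, while the primitive overload adds directly:

    import org.apache.hadoop.mapred.lib.aggregate.LongValueSum;

    public class SumExample {
      public static void main(String[] args) {
        LongValueSum sum = new LongValueSum();
        sum.addNextValue(5L);    // primitive overload
        sum.addNextValue("7");   // parsed from the object's string form
        System.out.println(sum.getReport()); // prints 12
      }
    }
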
addPhase(String) - Method in class org.apache.hadoop.util.Progress
Adds a named node to the tree.
addPhase() - Method in class org.apache.hadoop.util.Progress
Adds a node to the tree.
addResource(String) - Method in class org.apache.hadoop.conf.Configuration
Add a configuration resource.
addResource(URL) - Method in class org.apache.hadoop.conf.Configuration
Add a configuration resource.
addResource(Path) - Method in class org.apache.hadoop.conf.Configuration
Add a configuration resource.
addResource(InputStream) - Method in class org.apache.hadoop.conf.Configuration
Add a configuration resource.
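
A small sketch of layering configuration resources; the resource names are placeholders, and properties from later resources override earlier ones unless marked final:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;

    public class ConfExample {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.addResource("my-site.xml");                         // classpath resource
        conf.addResource(new Path("/etc/hadoop/overrides.xml")); // explicit file path
        System.out.println(conf.get("fs.default.name"));
      }
    }
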
addRow(boolean[]) - Method in class org.apache.hadoop.examples.dancing.DancingLinks
Add a row to the table.
addServlet(String, String, Class<? extends HttpServlet>) - Method in class org.apache.hadoop.http.HttpServer
Add a servlet in the server.
addSslListener(InetSocketAddress, String, String, String) - Method in class org.apache.hadoop.http.HttpServer
Deprecated. Use HttpServer.addSslListener(InetSocketAddress, Configuration, boolean)
addSslListener(InetSocketAddress, Configuration, boolean) - Method in class org.apache.hadoop.http.HttpServer
Configure an ssl listener on the server.
addStaticResolution(String, String) - Static method in class org.apache.hadoop.net.NetUtils
Adds a static resolution for host.
addTaskEnvironment_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
addToMap(Class) - Method in class org.apache.hadoop.io.AbstractMapWritable
Add a Class to the maps if it is not already present.
adjustBeginLineColumn(int, int) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
Method to adjust line and column numbers for the start of a token.
adjustTop() - Method in class org.apache.hadoop.util.PriorityQueue
Should be called when the Object at top changes values.
advance() - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner
Move the cursor to the next key-value pair.
AggregateWordCount - Class in org.apache.hadoop.examples
This is an example Aggregated Hadoop Map/Reduce application.
AggregateWordCount() - Constructor for class org.apache.hadoop.examples.AggregateWordCount
 
AggregateWordCount.WordCountPlugInClass - Class in org.apache.hadoop.examples
 
AggregateWordCount.WordCountPlugInClass() - Constructor for class org.apache.hadoop.examples.AggregateWordCount.WordCountPlugInClass
 
AggregateWordHistogram - Class in org.apache.hadoop.examples
This is an example Aggregated Hadoop Map/Reduce application.
AggregateWordHistogram() - Constructor for class org.apache.hadoop.examples.AggregateWordHistogram
 
AggregateWordHistogram.AggregateWordHistogramPlugin - Class in org.apache.hadoop.examples
 
AggregateWordHistogram.AggregateWordHistogramPlugin() - Constructor for class org.apache.hadoop.examples.AggregateWordHistogram.AggregateWordHistogramPlugin
 
aggregatorDescriptorList - Variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJobBase
 
allAllowed() - Method in class org.apache.hadoop.security.SecurityUtil.AccessControlList
 
allFinished() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
 
and(FsAction) - Method in enum org.apache.hadoop.fs.permission.FsAction
AND operation.
and(Filter) - Method in class org.apache.hadoop.util.bloom.BloomFilter
 
and(Filter) - Method in class org.apache.hadoop.util.bloom.CountingBloomFilter
 
and(Filter) - Method in class org.apache.hadoop.util.bloom.DynamicBloomFilter
 
and(Filter) - Method in class org.apache.hadoop.util.bloom.Filter
Performs a logical AND between this filter and a specified filter.
anonymize(SerializedRecord) - Static method in class org.apache.hadoop.contrib.failmon.Anonymizer
Anonymize hostnames, IP addresses, and file names/paths that appear in fields of a SerializedRecord.
anonymize(EventRecord) - Static method in class org.apache.hadoop.contrib.failmon.Anonymizer
Anonymize hostnames, IP addresses, and file names/paths that appear in fields of an EventRecord, after it gets serialized into a SerializedRecord.
anonymize() - Method in class org.apache.hadoop.contrib.failmon.OfflineAnonymizer
Performs anonymization for the log file.
Anonymizer - Class in org.apache.hadoop.contrib.failmon
This class provides anonymization to SerializedRecord objects.
Anonymizer() - Constructor for class org.apache.hadoop.contrib.failmon.Anonymizer
 
append(Path, int, Progressable) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
Append to an existing file (optional operation).
append(Path) - Method in class org.apache.hadoop.fs.FileSystem
Append to an existing file (optional operation).
append(Path, int) - Method in class org.apache.hadoop.fs.FileSystem
Append to an existing file (optional operation).
append(Path, int, Progressable) - Method in class org.apache.hadoop.fs.FileSystem
Append to an existing file (optional operation).
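
A sketch of appending to an existing file; append is an optional operation and several of the FileSystem implementations listed here do not support it, so the call may throw. The path is a placeholder:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class AppendExample {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // Throws if the underlying file system does not support append.
        FSDataOutputStream out = fs.append(new Path("/logs/events.log"));
        out.writeBytes("one more line\n");
        out.close();
      }
    }
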
append(Path, int, Progressable) - Method in class org.apache.hadoop.fs.FilterFileSystem
Append to an existing file (optional operation).
append(Path, int, Progressable) - Method in class org.apache.hadoop.fs.ftp.FTPFileSystem
This optional operation is not yet supported.
append(Path, int, Progressable) - Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
This optional operation is not yet supported.
append(Path, int, Progressable) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
Append to an existing file (optional operation).
append(Path, int, Progressable) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
This optional operation is not yet supported.
append(Path, int, Progressable) - Method in class org.apache.hadoop.fs.s3native.NativeS3FileSystem
This optional operation is not yet supported.
append(Writable) - Method in class org.apache.hadoop.io.ArrayFile.Writer
Append a value to the file.
append(WritableComparable, Writable) - Method in class org.apache.hadoop.io.BloomMapFile.Writer
 
append(byte[], byte[]) - Method in class org.apache.hadoop.io.file.tfile.TFile.Writer
Add a new key-value pair to the TFile.
append(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.io.file.tfile.TFile.Writer
Add a new key-value pair to the TFile.
append(WritableComparable, Writable) - Method in class org.apache.hadoop.io.MapFile.Writer
Append a key/value pair to the map.
append(Writable, Writable) - Method in class org.apache.hadoop.io.SequenceFile.Writer
Append a key/value pair.
append(Object, Object) - Method in class org.apache.hadoop.io.SequenceFile.Writer
Append a key/value pair.
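
For illustration, a minimal sketch of writing key/value pairs with SequenceFile.Writer.append; the output path is a placeholder:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    public class SeqFileWrite {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        SequenceFile.Writer writer = SequenceFile.createWriter(
            fs, conf, new Path("/tmp/counts.seq"), Text.class, IntWritable.class);
        writer.append(new Text("alpha"), new IntWritable(1));
        writer.append(new Text("beta"), new IntWritable(2));
        writer.close();
      }
    }
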
append(WritableComparable) - Method in class org.apache.hadoop.io.SetFile.Writer
Append a key to a set.
append(byte[], int, int) - Method in class org.apache.hadoop.io.Text
Append a range of bytes to the end of the given text
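
A small sketch of appending raw UTF-8 bytes to a Text instance:

    import org.apache.hadoop.io.Text;

    public class TextAppend {
      public static void main(String[] args) throws Exception {
        Text t = new Text("Hello");
        byte[] more = ", world".getBytes("UTF-8");
        t.append(more, 0, more.length);
        System.out.println(t); // Hello, world
      }
    }
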
append(LoggingEvent) - Method in class org.apache.hadoop.mapred.TaskLogAppender
 
append(LoggingEvent) - Method in class org.apache.hadoop.metrics.jvm.EventCounter
 
append(byte[], int, int) - Method in class org.apache.hadoop.record.Buffer
Append specified bytes to the buffer.
append(byte[]) - Method in class org.apache.hadoop.record.Buffer
Append specified bytes to the buffer
appendRaw(byte[], int, int, SequenceFile.ValueBytes) - Method in class org.apache.hadoop.io.SequenceFile.Writer
 
appendTo(StringBuilder) - Method in class org.apache.hadoop.mapreduce.JobID
Add the portion after the "job" prefix to the given builder.
appendTo(StringBuilder) - Method in class org.apache.hadoop.mapreduce.TaskAttemptID
Add the unique string to the StringBuilder
appendTo(StringBuilder) - Method in class org.apache.hadoop.mapreduce.TaskID
Add the unique string to the given builder.
applyUMask(FsPermission) - Method in class org.apache.hadoop.fs.permission.FsPermission
Apply a umask to this permission and return a new one
applyUMask(FsPermission) - Method in class org.apache.hadoop.fs.permission.PermissionStatus
Apply umask.
approximateCount(Key) - Method in class org.apache.hadoop.util.bloom.CountingBloomFilter
This method calculates an approximate count of the key, i.e. how many times the key was added to the filter.
archiveURIs - Variable in class org.apache.hadoop.streaming.StreamJob
 
args - Variable in class org.apache.hadoop.fs.shell.Command
 
argv_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
arrangeKeys(ArrayList<String>) - Static method in class org.apache.hadoop.contrib.failmon.SerializedRecord
Arrange the keys to provide a more readable printing order: first goes the timestamp, then the hostname and then the type, followed by all other keys found.
ArrayFile - Class in org.apache.hadoop.io
A dense file-based mapping from integers to values.
ArrayFile() - Constructor for class org.apache.hadoop.io.ArrayFile
 
ArrayFile.Reader - Class in org.apache.hadoop.io
Provide access to an existing array file.
ArrayFile.Reader(FileSystem, String, Configuration) - Constructor for class org.apache.hadoop.io.ArrayFile.Reader
Construct an array reader for the named file.
ArrayFile.Writer - Class in org.apache.hadoop.io
Write a new array file.
ArrayFile.Writer(Configuration, FileSystem, String, Class<? extends Writable>) - Constructor for class org.apache.hadoop.io.ArrayFile.Writer
Create the named file for values of the named class.
ArrayFile.Writer(Configuration, FileSystem, String, Class<? extends Writable>, SequenceFile.CompressionType, Progressable) - Constructor for class org.apache.hadoop.io.ArrayFile.Writer
Create the named file for values of the named class.
ArrayListBackedIterator - Class in org.apache.hadoop.contrib.utils.join
This class provides an implementation of ResetableIterator.
ArrayListBackedIterator() - Constructor for class org.apache.hadoop.contrib.utils.join.ArrayListBackedIterator
 
ArrayListBackedIterator(ArrayList<Object>) - Constructor for class org.apache.hadoop.contrib.utils.join.ArrayListBackedIterator
 
ArrayListBackedIterator<X extends Writable> - Class in org.apache.hadoop.mapred.join
This class provides an implementation of ResetableIterator.
ArrayListBackedIterator() - Constructor for class org.apache.hadoop.mapred.join.ArrayListBackedIterator
 
ArrayListBackedIterator(ArrayList<X>) - Constructor for class org.apache.hadoop.mapred.join.ArrayListBackedIterator
 
arrayToString(String[]) - Static method in class org.apache.hadoop.util.StringUtils
Given an array of strings, return a comma-separated list of its elements.
ArrayWritable - Class in org.apache.hadoop.io
A Writable for arrays containing instances of a class.
ArrayWritable(Class<? extends Writable>) - Constructor for class org.apache.hadoop.io.ArrayWritable
 
ArrayWritable(Class<? extends Writable>, Writable[]) - Constructor for class org.apache.hadoop.io.ArrayWritable
 
ArrayWritable(String[]) - Constructor for class org.apache.hadoop.io.ArrayWritable
 
atEnd() - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner
Is the cursor at the end location?
ATTEMPT - Static variable in class org.apache.hadoop.mapreduce.TaskAttemptID
 
AuthorizationException - Exception in org.apache.hadoop.security.authorize
An exception class for authorization-related issues.
AuthorizationException() - Constructor for exception org.apache.hadoop.security.authorize.AuthorizationException
 
AuthorizationException(String) - Constructor for exception org.apache.hadoop.security.authorize.AuthorizationException
 
AuthorizationException(Throwable) - Constructor for exception org.apache.hadoop.security.authorize.AuthorizationException
Constructs a new exception with the specified cause and a detail message of (cause==null ? null : cause.toString()) (which typically contains the class and detail message of cause).
authorize(Subject, ConnectionHeader) - Method in class org.apache.hadoop.ipc.RPC.Server
 
authorize(Subject, ConnectionHeader) - Method in class org.apache.hadoop.ipc.Server
Authorize the incoming client connection.
authorize(Subject, Class<?>) - Static method in class org.apache.hadoop.security.authorize.ServiceAuthorizationManager
Authorize the user to access the protocol being used.
available() - Method in class org.apache.hadoop.fs.FSInputChecker
 
available() - Method in class org.apache.hadoop.io.compress.DecompressorStream
 
available() - Method in class org.apache.hadoop.io.compress.GzipCodec.GzipInputStream
 

B

backup(int) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
baseBlockSize - Static variable in interface org.apache.hadoop.io.compress.bzip2.BZip2Constants
 
beginColumn - Variable in class org.apache.hadoop.record.compiler.generated.Token
beginLine and beginColumn describe the position of the first character of this token; endLine and endColumn describe the position of the last character of this token.
beginLine - Variable in class org.apache.hadoop.record.compiler.generated.Token
beginLine and beginColumn describe the position of the first character of this token; endLine and endColumn describe the position of the last character of this token.
BeginToken() - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
BinaryComparable - Class in org.apache.hadoop.io
Interface supported by WritableComparable types supporting ordering/permutation by a representative set of bytes.
BinaryComparable() - Constructor for class org.apache.hadoop.io.BinaryComparable
 
BinaryRecordInput - Class in org.apache.hadoop.record
 
BinaryRecordInput(InputStream) - Constructor for class org.apache.hadoop.record.BinaryRecordInput
Creates a new instance of BinaryRecordInput
BinaryRecordInput(DataInput) - Constructor for class org.apache.hadoop.record.BinaryRecordInput
Creates a new instance of BinaryRecordInput
BinaryRecordOutput - Class in org.apache.hadoop.record
 
BinaryRecordOutput(OutputStream) - Constructor for class org.apache.hadoop.record.BinaryRecordOutput
Creates a new instance of BinaryRecordOutput
BinaryRecordOutput(DataOutput) - Constructor for class org.apache.hadoop.record.BinaryRecordOutput
Creates a new instance of BinaryRecordOutput
bind(ServerSocket, InetSocketAddress, int) - Static method in class org.apache.hadoop.ipc.Server
A convenience method to bind to a given address and report better exceptions if the address is not a valid host.
blacklistedTaskTrackers() - Method in class org.apache.hadoop.mapred.JobTracker
Get the blacklisted task tracker statuses in the cluster
Block - Class in org.apache.hadoop.fs.s3
Holds metadata about a block of data being stored in a FileSystemStore.
Block(long, long) - Constructor for class org.apache.hadoop.fs.s3.Block
 
BlockCompressorStream - Class in org.apache.hadoop.io.compress
A CompressorStream which works with 'block-based' compression algorithms, as opposed to 'stream-based' compression algorithms.
BlockCompressorStream(OutputStream, Compressor, int, int) - Constructor for class org.apache.hadoop.io.compress.BlockCompressorStream
Create a BlockCompressorStream.
BlockCompressorStream(OutputStream, Compressor) - Constructor for class org.apache.hadoop.io.compress.BlockCompressorStream
Create a BlockCompressorStream with given output-stream and compressor.
BlockDecompressorStream - Class in org.apache.hadoop.io.compress
A DecompressorStream which works with 'block-based' compression algorithms, as opposed to 'stream-based' compression algorithms.
BlockDecompressorStream(InputStream, Decompressor, int) - Constructor for class org.apache.hadoop.io.compress.BlockDecompressorStream
Create a BlockDecompressorStream.
BlockDecompressorStream(InputStream, Decompressor) - Constructor for class org.apache.hadoop.io.compress.BlockDecompressorStream
Create a BlockDecompressorStream.
BlockDecompressorStream(InputStream) - Constructor for class org.apache.hadoop.io.compress.BlockDecompressorStream
 
blockExists(long) - Method in interface org.apache.hadoop.fs.s3.FileSystemStore
 
BlockLocation - Class in org.apache.hadoop.fs
 
BlockLocation() - Constructor for class org.apache.hadoop.fs.BlockLocation
Default Constructor
BlockLocation(String[], String[], long, long) - Constructor for class org.apache.hadoop.fs.BlockLocation
Constructor with host, name, offset and length
BlockLocation(String[], String[], String[], long, long) - Constructor for class org.apache.hadoop.fs.BlockLocation
Constructor with host, name, network topology, offset and length
BLOOM_FILE_NAME - Static variable in class org.apache.hadoop.io.BloomMapFile
 
BloomFilter - Class in org.apache.hadoop.util.bloom
Implements a Bloom filter, as defined by Bloom in 1970.
BloomFilter() - Constructor for class org.apache.hadoop.util.bloom.BloomFilter
Default constructor - use with readFields
BloomFilter(int, int, int) - Constructor for class org.apache.hadoop.util.bloom.BloomFilter
Constructor
BloomMapFile - Class in org.apache.hadoop.io
This class extends MapFile and provides very much the same functionality.
BloomMapFile() - Constructor for class org.apache.hadoop.io.BloomMapFile
 
BloomMapFile.Reader - Class in org.apache.hadoop.io
 
BloomMapFile.Reader(FileSystem, String, Configuration) - Constructor for class org.apache.hadoop.io.BloomMapFile.Reader
 
BloomMapFile.Reader(FileSystem, String, WritableComparator, Configuration, boolean) - Constructor for class org.apache.hadoop.io.BloomMapFile.Reader
 
BloomMapFile.Reader(FileSystem, String, WritableComparator, Configuration) - Constructor for class org.apache.hadoop.io.BloomMapFile.Reader
 
BloomMapFile.Writer - Class in org.apache.hadoop.io
 
BloomMapFile.Writer(Configuration, FileSystem, String, Class<? extends WritableComparable>, Class<? extends Writable>, SequenceFile.CompressionType, CompressionCodec, Progressable) - Constructor for class org.apache.hadoop.io.BloomMapFile.Writer
 
BloomMapFile.Writer(Configuration, FileSystem, String, Class<? extends WritableComparable>, Class, SequenceFile.CompressionType, Progressable) - Constructor for class org.apache.hadoop.io.BloomMapFile.Writer
 
BloomMapFile.Writer(Configuration, FileSystem, String, Class<? extends WritableComparable>, Class, SequenceFile.CompressionType) - Constructor for class org.apache.hadoop.io.BloomMapFile.Writer
 
BloomMapFile.Writer(Configuration, FileSystem, String, WritableComparator, Class, SequenceFile.CompressionType, CompressionCodec, Progressable) - Constructor for class org.apache.hadoop.io.BloomMapFile.Writer
 
BloomMapFile.Writer(Configuration, FileSystem, String, WritableComparator, Class, SequenceFile.CompressionType, Progressable) - Constructor for class org.apache.hadoop.io.BloomMapFile.Writer
 
BloomMapFile.Writer(Configuration, FileSystem, String, WritableComparator, Class, SequenceFile.CompressionType) - Constructor for class org.apache.hadoop.io.BloomMapFile.Writer
 
BloomMapFile.Writer(Configuration, FileSystem, String, WritableComparator, Class) - Constructor for class org.apache.hadoop.io.BloomMapFile.Writer
 
BloomMapFile.Writer(Configuration, FileSystem, String, Class<? extends WritableComparable>, Class) - Constructor for class org.apache.hadoop.io.BloomMapFile.Writer
 
BOOL - Static variable in class org.apache.hadoop.record.meta.TypeID.RIOType
 
BOOLEAN_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
BooleanWritable - Class in org.apache.hadoop.io
A WritableComparable for booleans.
BooleanWritable() - Constructor for class org.apache.hadoop.io.BooleanWritable
 
BooleanWritable(boolean) - Constructor for class org.apache.hadoop.io.BooleanWritable
 
BooleanWritable.Comparator - Class in org.apache.hadoop.io
A Comparator optimized for BooleanWritable.
BooleanWritable.Comparator() - Constructor for class org.apache.hadoop.io.BooleanWritable.Comparator
 
BoolTypeID - Static variable in class org.apache.hadoop.record.meta.TypeID
Constant classes for the basic types, so we can share them.
bufcolumn - Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
buffer - Variable in class org.apache.hadoop.io.compress.CompressorStream
 
buffer - Variable in class org.apache.hadoop.io.compress.DecompressorStream
 
buffer() - Method in class org.apache.hadoop.io.file.tfile.ByteArray
 
buffer() - Method in interface org.apache.hadoop.io.file.tfile.RawComparable
Get the underlying byte array.
Buffer - Class in org.apache.hadoop.record
A byte sequence that is used as a Java native type for buffer.
Buffer() - Constructor for class org.apache.hadoop.record.Buffer
Create a zero-count sequence.
Buffer(byte[]) - Constructor for class org.apache.hadoop.record.Buffer
Create a Buffer using the byte array as the initial value.
Buffer(byte[], int, int) - Constructor for class org.apache.hadoop.record.Buffer
Create a Buffer using the byte range as the initial value.
buffer - Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
BUFFER - Static variable in class org.apache.hadoop.record.meta.TypeID.RIOType
 
BUFFER_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
BufferedFSInputStream - Class in org.apache.hadoop.fs
A class that optimizes reading from FSInputStream by buffering.
BufferedFSInputStream(FSInputStream, int) - Constructor for class org.apache.hadoop.fs.BufferedFSInputStream
Creates a BufferedFSInputStream with the specified buffer size, and saves its argument, the input stream in, for later use.
BufferTypeID - Static variable in class org.apache.hadoop.record.meta.TypeID
 
bufline - Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
bufpos - Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
BuiltInZlibDeflater - Class in org.apache.hadoop.io.compress.zlib
A wrapper around java.util.zip.Deflater to make it conform to the org.apache.hadoop.io.compress.Compressor interface.
BuiltInZlibDeflater(int, boolean) - Constructor for class org.apache.hadoop.io.compress.zlib.BuiltInZlibDeflater
 
BuiltInZlibDeflater(int) - Constructor for class org.apache.hadoop.io.compress.zlib.BuiltInZlibDeflater
 
BuiltInZlibDeflater() - Constructor for class org.apache.hadoop.io.compress.zlib.BuiltInZlibDeflater
 
BuiltInZlibInflater - Class in org.apache.hadoop.io.compress.zlib
A wrapper around java.util.zip.Inflater to make it conform to the org.apache.hadoop.io.compress.Decompressor interface.
BuiltInZlibInflater(boolean) - Constructor for class org.apache.hadoop.io.compress.zlib.BuiltInZlibInflater
 
BuiltInZlibInflater() - Constructor for class org.apache.hadoop.io.compress.zlib.BuiltInZlibInflater
 
BYTE - Static variable in class org.apache.hadoop.record.meta.TypeID.RIOType
 
BYTE_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
ByteArray - Class in org.apache.hadoop.io.file.tfile
Adaptor class to wrap byte-array backed objects (including Java byte arrays) as RawComparable objects.
ByteArray(BytesWritable) - Constructor for class org.apache.hadoop.io.file.tfile.ByteArray
Construct a ByteArray from a BytesWritable.
ByteArray(byte[]) - Constructor for class org.apache.hadoop.io.file.tfile.ByteArray
Wrap a whole byte array as a RawComparable.
ByteArray(byte[], int, int) - Constructor for class org.apache.hadoop.io.file.tfile.ByteArray
Wrap a partial byte array as a RawComparable.
byteDesc(long) - Static method in class org.apache.hadoop.fs.FsShell
Deprecated. Consider using StringUtils.byteDesc(long) instead.
byteDesc(long) - Static method in class org.apache.hadoop.util.StringUtils
Return an abbreviated English-language description of the byte length.
bytesToCodePoint(ByteBuffer) - Static method in class org.apache.hadoop.io.Text
Returns the next code point at the current position in the buffer.
BytesWritable - Class in org.apache.hadoop.io
A byte sequence that is usable as a key or value.
BytesWritable() - Constructor for class org.apache.hadoop.io.BytesWritable
Create a zero-size sequence.
BytesWritable(byte[]) - Constructor for class org.apache.hadoop.io.BytesWritable
Create a BytesWritable using the byte array as the initial value.
BytesWritable.Comparator - Class in org.apache.hadoop.io
A Comparator optimized for BytesWritable.
BytesWritable.Comparator() - Constructor for class org.apache.hadoop.io.BytesWritable.Comparator
 
byteToHexString(byte[], int, int) - Static method in class org.apache.hadoop.util.StringUtils
Given an array of bytes, convert the bytes to a hex string representation.
byteToHexString(byte[]) - Static method in class org.apache.hadoop.util.StringUtils
Same as byteToHexString(bytes, 0, bytes.length).
ByteTypeID - Static variable in class org.apache.hadoop.record.meta.TypeID
 
ByteWritable - Class in org.apache.hadoop.io
A WritableComparable for a single byte.
ByteWritable() - Constructor for class org.apache.hadoop.io.ByteWritable
 
ByteWritable(byte) - Constructor for class org.apache.hadoop.io.ByteWritable
 
ByteWritable.Comparator - Class in org.apache.hadoop.io
A Comparator optimized for ByteWritable.
ByteWritable.Comparator() - Constructor for class org.apache.hadoop.io.ByteWritable.Comparator
 
BZip2Codec - Class in org.apache.hadoop.io.compress
This class provides CompressionOutputStream and CompressionInputStream for compression and decompression.
BZip2Codec() - Constructor for class org.apache.hadoop.io.compress.BZip2Codec
Creates a new instance of BZip2Codec
BZip2Constants - Interface in org.apache.hadoop.io.compress.bzip2
Base class for both the compress and decompress classes.
BZip2DummyCompressor - Class in org.apache.hadoop.io.compress.bzip2
This is a dummy compressor for BZip2.
BZip2DummyCompressor() - Constructor for class org.apache.hadoop.io.compress.bzip2.BZip2DummyCompressor
 
BZip2DummyDecompressor - Class in org.apache.hadoop.io.compress.bzip2
This is a dummy decompressor for BZip2.
BZip2DummyDecompressor() - Constructor for class org.apache.hadoop.io.compress.bzip2.BZip2DummyDecompressor
 

C

cacheArchives - Variable in class org.apache.hadoop.streaming.StreamJob
 
CachedDNSToSwitchMapping - Class in org.apache.hadoop.net
A cached implementation of DNSToSwitchMapping that takes a raw DNSToSwitchMapping and stores the resolved network location in a cache.
CachedDNSToSwitchMapping(DNSToSwitchMapping) - Constructor for class org.apache.hadoop.net.CachedDNSToSwitchMapping
 
cacheFiles - Variable in class org.apache.hadoop.streaming.StreamJob
 
call(Writable, InetSocketAddress) - Method in class org.apache.hadoop.ipc.Client
Deprecated. Use Client.call(Writable, InetSocketAddress, Class, UserGroupInformation) instead
call(Writable, InetSocketAddress, UserGroupInformation) - Method in class org.apache.hadoop.ipc.Client
Deprecated. Use Client.call(Writable, InetSocketAddress, Class, UserGroupInformation) instead
call(Writable, InetSocketAddress, Class<?>, UserGroupInformation) - Method in class org.apache.hadoop.ipc.Client
Make a call, passing param, to the IPC server running at address which is servicing the protocol protocol, with the ticket credentials, returning the value.
call(Writable[], InetSocketAddress[]) - Method in class org.apache.hadoop.ipc.Client
Deprecated. Use Client.call(Writable[], InetSocketAddress[], Class, UserGroupInformation) instead
call(Writable[], InetSocketAddress[], Class<?>, UserGroupInformation) - Method in class org.apache.hadoop.ipc.Client
Makes a set of calls in parallel.
call(Method, Object[][], InetSocketAddress[], Configuration) - Static method in class org.apache.hadoop.ipc.RPC
Deprecated. Use RPC.call(Method, Object[][], InetSocketAddress[], UserGroupInformation, Configuration) instead
call(Method, Object[][], InetSocketAddress[], UserGroupInformation, Configuration) - Static method in class org.apache.hadoop.ipc.RPC
Expert: Make multiple, parallel calls to a set of servers.
call(Class<?>, Writable, long) - Method in class org.apache.hadoop.ipc.RPC.Server
 
call(Writable, long) - Method in class org.apache.hadoop.ipc.Server
Deprecated. Use Server.call(Class, Writable, long) instead
call(Class<?>, Writable, long) - Method in class org.apache.hadoop.ipc.Server
Called for each call.
callQueueLen - Variable in class org.apache.hadoop.ipc.metrics.RpcMetrics
 
canCommit(TaskAttemptID) - Method in class org.apache.hadoop.mapred.TaskTracker
Child checking whether it can commit
captureDebugOut(List<String>, File) - Static method in class org.apache.hadoop.mapred.TaskLog
Wrap a command in a shell to capture debug script's stdout and stderr to debugout.
captureOutAndError(List<String>, File, File, long) - Static method in class org.apache.hadoop.mapred.TaskLog
Wrap a command in a shell to capture stdout and stderr to files.
captureOutAndError(List<String>, List<String>, File, File, long) - Static method in class org.apache.hadoop.mapred.TaskLog
Wrap a command in a shell to capture stdout and stderr to files.
captureOutAndError(List<String>, List<String>, File, File, long, String) - Static method in class org.apache.hadoop.mapred.TaskLog
Wrap a command in a shell to capture stdout and stderr to files.
CBZip2InputStream - Class in org.apache.hadoop.io.compress.bzip2
An input stream that decompresses from the BZip2 format (without the file header chars) to be read as any other stream.
CBZip2InputStream(InputStream) - Constructor for class org.apache.hadoop.io.compress.bzip2.CBZip2InputStream
Constructs a new CBZip2InputStream which decompresses bytes read from the specified stream.
CBZip2OutputStream - Class in org.apache.hadoop.io.compress.bzip2
An output stream that compresses into the BZip2 format (without the file header chars) into another stream.
CBZip2OutputStream(OutputStream) - Constructor for class org.apache.hadoop.io.compress.bzip2.CBZip2OutputStream
Constructs a new CBZip2OutputStream with a blocksize of 900k.
CBZip2OutputStream(OutputStream, int) - Constructor for class org.apache.hadoop.io.compress.bzip2.CBZip2OutputStream
Constructs a new CBZip2OutputStream with specified blocksize.
ChainMapper - Class in org.apache.hadoop.mapred.lib
The ChainMapper class allows to use multiple Mapper classes within a single Map task.
ChainMapper() - Constructor for class org.apache.hadoop.mapred.lib.ChainMapper
Constructor.
ChainReducer - Class in org.apache.hadoop.mapred.lib
The ChainReducer class allows to chain multiple Mapper classes after a Reducer within the Reducer task.
ChainReducer() - Constructor for class org.apache.hadoop.mapred.lib.ChainReducer
Constructor.
charAt(int) - Method in class org.apache.hadoop.io.Text
Returns the Unicode Scalar Value (32-bit integer value) for the character at position.
checkDir(File) - Static method in class org.apache.hadoop.util.DiskChecker
 
checkExistence(String) - Static method in class org.apache.hadoop.contrib.failmon.Environment
Checks whether a specific shell command is available in the system.
checkForRotation() - Method in class org.apache.hadoop.contrib.failmon.LogParser
Check whether the log file has been rotated.
checkOutputSpecs(FileSystem, JobConf) - Method in class org.apache.hadoop.mapred.FileOutputFormat
 
checkOutputSpecs(FileSystem, JobConf) - Method in class org.apache.hadoop.mapred.lib.db.DBOutputFormat
Check for validity of the output-specification for the job.
checkOutputSpecs(FileSystem, JobConf) - Method in class org.apache.hadoop.mapred.lib.NullOutputFormat
Deprecated.  
checkOutputSpecs(FileSystem, JobConf) - Method in interface org.apache.hadoop.mapred.OutputFormat
Deprecated. Check for validity of the output-specification for the job.
checkOutputSpecs(FileSystem, JobConf) - Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat
 
checkOutputSpecs(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
 
checkOutputSpecs(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.output.NullOutputFormat
 
checkOutputSpecs(JobContext) - Method in class org.apache.hadoop.mapreduce.OutputFormat
Check for validity of the output-specification for the job.
checkPath(Path) - Method in class org.apache.hadoop.fs.FileSystem
Check that a Path belongs to this FileSystem.
checkPath(Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
Check that a Path belongs to this FileSystem.
checkpoint() - Method in class org.apache.hadoop.fs.Trash
Create a trash checkpoint.
checkStream() - Method in class org.apache.hadoop.io.compress.DecompressorStream
 
checksum2long(byte[]) - Static method in class org.apache.hadoop.fs.FSInputChecker
Convert a checksum byte array to a long
CHECKSUM_CRC32 - Static variable in class org.apache.hadoop.util.DataChecksum
 
CHECKSUM_NULL - Static variable in class org.apache.hadoop.util.DataChecksum
 
ChecksumException - Exception in org.apache.hadoop.fs
Thrown for checksum errors.
ChecksumException(String, long) - Constructor for exception org.apache.hadoop.fs.ChecksumException
 
ChecksumFileSystem - Class in org.apache.hadoop.fs
Abstract checksummed FileSystem.
ChecksumFileSystem(FileSystem) - Constructor for class org.apache.hadoop.fs.ChecksumFileSystem
 
checkURIs(URI[], URI[]) - Static method in class org.apache.hadoop.filecache.DistributedCache
This method checks if there is a conflict in the fragment names of the URIs.
chmod(String, String) - Static method in class org.apache.hadoop.fs.FileUtil
Change the permissions on a filename.
chooseBlockSize(long) - Static method in class org.apache.hadoop.io.compress.bzip2.CBZip2OutputStream
Chooses a blocksize based on the given length of the data to compress.
chooseRandom(String) - Method in class org.apache.hadoop.net.NetworkTopology
Randomly choose one node from scope: if scope starts with ~, choose one from all nodes except those in scope; otherwise, choose one from scope.
chooseShardForDelete(DocumentID) - Method in class org.apache.hadoop.contrib.index.example.HashingDistributionPolicy
 
chooseShardForDelete(DocumentID) - Method in class org.apache.hadoop.contrib.index.example.RoundRobinDistributionPolicy
 
chooseShardForDelete(DocumentID) - Method in interface org.apache.hadoop.contrib.index.mapred.IDistributionPolicy
Choose a shard or all shards to send a delete request.
chooseShardForInsert(DocumentID) - Method in class org.apache.hadoop.contrib.index.example.HashingDistributionPolicy
 
chooseShardForInsert(DocumentID) - Method in class org.apache.hadoop.contrib.index.example.RoundRobinDistributionPolicy
 
chooseShardForInsert(DocumentID) - Method in interface org.apache.hadoop.contrib.index.mapred.IDistributionPolicy
Choose a shard to send an insert request.
cleanup() - Method in class org.apache.hadoop.contrib.failmon.Executor
 
cleanup() - Method in class org.apache.hadoop.contrib.failmon.RunOnce
 
cleanup(Log, Closeable...) - Static method in class org.apache.hadoop.io.IOUtils
Close the Closeable objects and ignore any IOException or null pointers.
cleanup() - Method in class org.apache.hadoop.io.SequenceFile.Sorter.SegmentDescriptor
The default cleanup.
cleanup(int) - Static method in class org.apache.hadoop.mapred.TaskLog
Purge old user logs.
cleanup(Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT>.Context) - Method in class org.apache.hadoop.mapreduce.Mapper
Called once at the end of the task.
cleanup(Reducer<KEYIN, VALUEIN, KEYOUT, VALUEOUT>.Context) - Method in class org.apache.hadoop.mapreduce.Reducer
Called once at the end of the task.
cleanupJob(JobContext) - Method in class org.apache.hadoop.mapred.FileOutputCommitter
 
cleanupJob(JobContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
Deprecated. For cleaning up the job's output after job completion
cleanupJob(JobContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
Deprecated. This method implements the new interface by calling the old method.
cleanupJob(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
Delete the temporary directory, including all of the work directories.
cleanupJob(JobContext) - Method in class org.apache.hadoop.mapreduce.OutputCommitter
For cleaning up the job's output after job completion
cleanupProgress() - Method in class org.apache.hadoop.mapred.JobStatus
 
cleanupProgress() - Method in interface org.apache.hadoop.mapred.RunningJob
Get the progress of the job's cleanup-tasks, as a float between 0.0 and 1.0.
cleanupStorage() - Method in class org.apache.hadoop.mapred.TaskTracker
Removes all contents of temporary storage.
clear() - Method in class org.apache.hadoop.conf.Configuration
Clears all keys from the configuration.
clear() - Method in class org.apache.hadoop.io.MapWritable
clear() - Method in class org.apache.hadoop.io.SortedMapWritable
clear() - Method in class org.apache.hadoop.io.Text
Clear the string to empty.
clear() - Method in class org.apache.hadoop.mapred.join.ArrayListBackedIterator
 
clear() - Method in class org.apache.hadoop.mapred.join.JoinRecordReader.JoinDelegationIterator
 
clear() - Method in class org.apache.hadoop.mapred.join.MultiFilterRecordReader.MultiFilterDelegationIterator
 
clear() - Method in interface org.apache.hadoop.mapred.join.ResetableIterator
Close datasources, but do not release internal resources.
clear() - Method in class org.apache.hadoop.mapred.join.ResetableIterator.EMPTY
 
clear() - Method in class org.apache.hadoop.mapred.join.StreamBackedIterator
 
clear() - Method in class org.apache.hadoop.util.bloom.HashFunction
Clears this hash function.
clear() - Method in class org.apache.hadoop.util.PriorityQueue
Removes all entries from the PriorityQueue.
CLEARMASK - Static variable in class org.apache.hadoop.io.compress.bzip2.CBZip2OutputStream
This constant is accessible by subclasses for historical purposes.
clearStatistics() - Static method in class org.apache.hadoop.fs.FileSystem
 
Client - Class in org.apache.hadoop.ipc
A client for an IPC service.
Client(Class<? extends Writable>, Configuration, SocketFactory) - Constructor for class org.apache.hadoop.ipc.Client
Construct an IPC client whose values are of the given Writable class.
Client(Class<? extends Writable>, Configuration) - Constructor for class org.apache.hadoop.ipc.Client
Construct an IPC client with the default SocketFactory
ClientTraceLog - Static variable in class org.apache.hadoop.mapred.TaskTracker
 
clone(JobConf) - Method in class org.apache.hadoop.contrib.utils.join.TaggedMapOutput
 
clone(T, Configuration) - Static method in class org.apache.hadoop.io.WritableUtils
Make a copy of a writable object using serialization to a buffer.
clone() - Method in class org.apache.hadoop.mapred.JobStatus
 
clone() - Method in class org.apache.hadoop.record.Buffer
 
cloneFileAttributes(Path, Path, Progressable) - Method in class org.apache.hadoop.io.SequenceFile.Sorter
Clones the attributes (like compression) of the input file and creates a corresponding Writer.
cloneInto(Writable, Writable) - Static method in class org.apache.hadoop.io.WritableUtils
Deprecated. Use ReflectionUtils.cloneInto instead.
cloneWritableInto(Writable, Writable) - Static method in class org.apache.hadoop.util.ReflectionUtils
Deprecated. 
close() - Method in class org.apache.hadoop.contrib.failmon.LocalStore
Close the temporary local file
close() - Method in class org.apache.hadoop.contrib.index.example.IdentityLocalAnalysis
 
close() - Method in class org.apache.hadoop.contrib.index.example.LineDocLocalAnalysis
 
close() - Method in class org.apache.hadoop.contrib.index.example.LineDocRecordReader
 
close() - Method in class org.apache.hadoop.contrib.index.lucene.FileSystemDirectory
 
close() - Method in class org.apache.hadoop.contrib.index.lucene.ShardWriter
Close the shard writer.
close() - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateCombiner
 
close() - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateMapper
 
close() - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateReducer
 
close() - Method in class org.apache.hadoop.contrib.utils.join.ArrayListBackedIterator
 
close() - Method in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
 
close() - Method in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
 
close() - Method in interface org.apache.hadoop.contrib.utils.join.ResetableIterator
 
close() - Method in class org.apache.hadoop.examples.MultiFileWordCount.MultiFileLineRecordReader
 
close() - Method in class org.apache.hadoop.examples.PiEstimator.PiReducer
Reduce task done, write output to a file.
close() - Method in class org.apache.hadoop.examples.SleepJob
 
close() - Method in class org.apache.hadoop.fs.FileSystem
No more filesystem operations are needed.
close() - Method in class org.apache.hadoop.fs.FilterFileSystem
 
close() - Method in class org.apache.hadoop.fs.FSDataOutputStream
 
close() - Method in class org.apache.hadoop.fs.FsShell
 
close() - Method in class org.apache.hadoop.fs.ftp.FTPInputStream
 
close() - Method in class org.apache.hadoop.fs.HarFileSystem
 
close() - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
close() - Method in class org.apache.hadoop.io.BloomMapFile.Writer
 
close() - Method in class org.apache.hadoop.io.compress.bzip2.CBZip2InputStream
 
close() - Method in class org.apache.hadoop.io.compress.bzip2.CBZip2OutputStream
 
close() - Method in class org.apache.hadoop.io.compress.CompressionInputStream
 
close() - Method in class org.apache.hadoop.io.compress.CompressionOutputStream
 
close() - Method in class org.apache.hadoop.io.compress.CompressorStream
 
close() - Method in class org.apache.hadoop.io.compress.DecompressorStream
 
close() - Method in class org.apache.hadoop.io.compress.GzipCodec.GzipInputStream
 
close() - Method in class org.apache.hadoop.io.compress.GzipCodec.GzipOutputStream
 
close() - Method in class org.apache.hadoop.io.DefaultStringifier
 
close() - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader
Close the reader.
close() - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner
Close the scanner.
close() - Method in class org.apache.hadoop.io.file.tfile.TFile.Writer
Close the Writer.
close() - Method in class org.apache.hadoop.io.MapFile.Reader
Close the map.
close() - Method in class org.apache.hadoop.io.MapFile.Writer
Close the map.
close() - Method in class org.apache.hadoop.io.SequenceFile.Reader
Close the file.
close() - Method in interface org.apache.hadoop.io.SequenceFile.Sorter.RawKeyValueIterator
closes the iterator so that the underlying streams can be closed
close() - Method in class org.apache.hadoop.io.SequenceFile.Writer
Close the file.
close() - Method in interface org.apache.hadoop.io.serializer.Deserializer
Close the underlying input stream and clear up any resources.
close() - Method in interface org.apache.hadoop.io.serializer.Serializer
Close the underlying output stream and clear up any resources.
close() - Method in interface org.apache.hadoop.io.Stringifier
Closes this object.
close() - Method in class org.apache.hadoop.mapred.JobClient
Close the JobClient.
close() - Method in class org.apache.hadoop.mapred.join.ArrayListBackedIterator
 
close() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
Close all child RRs.
close() - Method in class org.apache.hadoop.mapred.join.JoinRecordReader.JoinDelegationIterator
 
close() - Method in class org.apache.hadoop.mapred.join.MultiFilterRecordReader.MultiFilterDelegationIterator
 
close() - Method in interface org.apache.hadoop.mapred.join.ResetableIterator
Close datasources and release resources.
close() - Method in class org.apache.hadoop.mapred.join.ResetableIterator.EMPTY
 
close() - Method in class org.apache.hadoop.mapred.join.StreamBackedIterator
 
close() - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
Forward close request to proxied RR.
close() - Method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
 
close() - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorCombiner
Do nothing.
close() - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJobBase
 
close() - Method in class org.apache.hadoop.mapred.lib.ChainMapper
Closes the ChainMapper and all the Mappers in the chain.
close() - Method in class org.apache.hadoop.mapred.lib.ChainReducer
Closes the ChainReducer, the Reducer and all the Mappers in the chain.
close() - Method in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
 
close() - Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat.DBRecordReader
Close this RecordReader to future operations.
close(Reporter) - Method in class org.apache.hadoop.mapred.lib.db.DBOutputFormat.DBRecordWriter
Close this RecordWriter to future operations.
close() - Method in class org.apache.hadoop.mapred.lib.DelegatingMapper
 
close() - Method in class org.apache.hadoop.mapred.lib.FieldSelectionMapReduce
 
close() - Method in class org.apache.hadoop.mapred.lib.MultipleOutputs
Closes all the opened named outputs.
close() - Method in class org.apache.hadoop.mapred.LineRecordReader
Deprecated.  
close() - Method in class org.apache.hadoop.mapred.MapReduceBase
Deprecated. Default implementation that does nothing.
close() - Method in interface org.apache.hadoop.mapred.RawKeyValueIterator
Closes the iterator so that the underlying streams can be closed.
close() - Method in interface org.apache.hadoop.mapred.RecordReader
Close this RecordReader to future operations.
close(Reporter) - Method in interface org.apache.hadoop.mapred.RecordWriter
Close this RecordWriter to future operations.
close() - Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
 
close() - Method in class org.apache.hadoop.mapred.SequenceFileAsTextRecordReader
 
close() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
 
close() - Method in class org.apache.hadoop.mapred.TaskLogAppender
 
close() - Method in class org.apache.hadoop.mapred.TaskTracker
Close down the TaskTracker and all its components.
close(Reporter) - Method in class org.apache.hadoop.mapred.TextOutputFormat.LineRecordWriter
Deprecated.  
close() - Method in class org.apache.hadoop.mapreduce.lib.input.LineRecordReader
 
close() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader
 
close(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.TextOutputFormat.LineRecordWriter
 
close() - Method in class org.apache.hadoop.mapreduce.RecordReader
Close the record reader.
close(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.RecordWriter
Close this RecordWriter to future operations.
close() - Method in class org.apache.hadoop.metrics.jvm.EventCounter
 
close() - Method in interface org.apache.hadoop.metrics.MetricsContext
Stops monitoring and also frees any buffered data, returning this object to its initial state.
close() - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Stops monitoring and frees buffered data, returning this object to its initial state.
close() - Method in class org.apache.hadoop.metrics.spi.CompositeContext
 
close() - Method in class org.apache.hadoop.net.SocketInputStream
 
close() - Method in class org.apache.hadoop.net.SocketOutputStream
 
close() - Method in class org.apache.hadoop.streaming.PipeMapper
 
close() - Method in class org.apache.hadoop.streaming.PipeReducer
 
close() - Method in class org.apache.hadoop.streaming.StreamBaseRecordReader
Close this to future operations.
close() - Method in class org.apache.hadoop.util.LineReader
Close the underlying stream.
Closeable - Interface in org.apache.hadoop.io
Deprecated. use java.io.Closeable
closeAll() - Static method in class org.apache.hadoop.fs.FileSystem
Close all cached filesystems.
closed - Variable in class org.apache.hadoop.io.compress.CompressorStream
 
closed - Variable in class org.apache.hadoop.io.compress.DecompressorStream
 
closeSocket(Socket) - Static method in class org.apache.hadoop.io.IOUtils
Closes the socket ignoring IOException
closeStream(Closeable) - Static method in class org.apache.hadoop.io.IOUtils
Closes the stream ignoring IOException.
closeWriter() - Method in class org.apache.hadoop.contrib.index.mapred.IntermediateForm
Close the Lucene index writer associated with the intermediate form, if created.
ClusterStatus - Class in org.apache.hadoop.mapred
Status information on the current state of the Map-Reduce cluster.
cmpcl - Variable in class org.apache.hadoop.mapred.join.Parser.Node
 
CodeBuffer - Class in org.apache.hadoop.record.compiler
A wrapper around StringBuffer that automatically does indentation
CodecPool - Class in org.apache.hadoop.io.compress
A global compressor/decompressor pool used to save and reuse (possibly native) compression/decompression codecs.
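A minimal sketch of typical CodecPool use: borrow a compressor, wrap an output stream with it, and return it when done. The output path and the choice of DefaultCodec are illustrative assumptions, not requirements.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.compress.*;
    import org.apache.hadoop.util.ReflectionUtils;

    public class CodecPoolSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        CompressionCodec codec = ReflectionUtils.newInstance(DefaultCodec.class, conf);
        Compressor compressor = CodecPool.getCompressor(codec);   // borrow from the pool
        try {
          FileSystem fs = FileSystem.get(conf);
          CompressionOutputStream out =
              codec.createOutputStream(fs.create(new Path("/tmp/demo.deflate")), compressor);
          out.write("hello".getBytes("UTF-8"));
          out.close();
        } finally {
          CodecPool.returnCompressor(compressor);                 // give it back for reuse
        }
      }
    }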
CodecPool() - Constructor for class org.apache.hadoop.io.compress.CodecPool
 
collate(Object[], String) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
collate(List, String) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
collect(Object, TaggedMapOutput, OutputCollector, Reporter) - Method in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
The subclass can override this method to perform additional filtering and/or other processing logic before a value is collected.
collect(K, V) - Method in interface org.apache.hadoop.mapred.OutputCollector
Adds a key/value pair to the output.
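As a sketch of how collect is invoked from the old (org.apache.hadoop.mapred) API, the mapper below emits one (byte-length, 1) pair per input line; the class name and key choice are made up for illustration.
    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;

    public class LineLengthMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, IntWritable, IntWritable> {
      private static final IntWritable ONE = new IntWritable(1);
      public void map(LongWritable key, Text value,
                      OutputCollector<IntWritable, IntWritable> output, Reporter reporter)
          throws IOException {
        // Emit the byte length of the line as the key and a count of one as the value.
        output.collect(new IntWritable(value.getLength()), ONE);
      }
    }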
collected - Variable in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
 
column - Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
combine(Object[], Object[]) - Method in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
 
combine(Object[], TupleWritable) - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
 
combine(Object[], TupleWritable) - Method in class org.apache.hadoop.mapred.join.InnerJoinRecordReader
Return true iff the tuple is full (all data sources contain this key).
combine(Object[], TupleWritable) - Method in class org.apache.hadoop.mapred.join.MultiFilterRecordReader
Default implementation offers MultiFilterRecordReader.emit(org.apache.hadoop.mapred.join.TupleWritable) every Tuple from the collector (the outer join of child RRs).
combine(Object[], TupleWritable) - Method in class org.apache.hadoop.mapred.join.OuterJoinRecordReader
Emit everything from the collector.
COMBINE_CLASS_ATTR - Static variable in class org.apache.hadoop.mapreduce.JobContext
 
CombineFileInputFormat<K,V> - Class in org.apache.hadoop.mapred.lib
An abstract InputFormat that returns CombineFileSplits from the InputFormat.getSplits(JobConf, int) method.
CombineFileInputFormat() - Constructor for class org.apache.hadoop.mapred.lib.CombineFileInputFormat
default constructor
CombineFileRecordReader<K,V> - Class in org.apache.hadoop.mapred.lib
A generic RecordReader that can hand out different recordReaders for each chunk in a CombineFileSplit.
CombineFileRecordReader(JobConf, CombineFileSplit, Reporter, Class<RecordReader<K, V>>) - Constructor for class org.apache.hadoop.mapred.lib.CombineFileRecordReader
A generic RecordReader that can hand out different recordReaders for each chunk in the CombineFileSplit.
CombineFileSplit - Class in org.apache.hadoop.mapred.lib
A sub-collection of input files.
CombineFileSplit() - Constructor for class org.apache.hadoop.mapred.lib.CombineFileSplit
default constructor
CombineFileSplit(JobConf, Path[], long[], long[], String[]) - Constructor for class org.apache.hadoop.mapred.lib.CombineFileSplit
 
CombineFileSplit(JobConf, Path[], long[]) - Constructor for class org.apache.hadoop.mapred.lib.CombineFileSplit
 
CombineFileSplit(CombineFileSplit) - Constructor for class org.apache.hadoop.mapred.lib.CombineFileSplit
Copy constructor
comCmd_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
COMMA - Static variable in class org.apache.hadoop.util.StringUtils
 
COMMA_STR - Static variable in class org.apache.hadoop.util.StringUtils
 
COMMA_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
Command - Class in org.apache.hadoop.fs.shell
An abstract class for the execution of a file system command
Command(Configuration) - Constructor for class org.apache.hadoop.fs.shell.Command
Constructor
CommandFormat - Class in org.apache.hadoop.fs.shell
Parse the args of a command and check the format of args.
CommandFormat(String, int, int, String...) - Constructor for class org.apache.hadoop.fs.shell.CommandFormat
constructor
commitPending(TaskAttemptID, TaskStatus) - Method in class org.apache.hadoop.mapred.TaskTracker
Task is reporting that it is in commit_pending state and is waiting for the commit response.
commitTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapred.FileOutputCommitter
 
commitTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
Deprecated. Promotes the task's temporary output to its final location; the task's output is moved to the job's output directory.
commitTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
Deprecated. This method implements the new interface by calling the old method.
commitTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
Move the files from the work directory to the job output directory
commitTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.OutputCommitter
Promotes the task's temporary output to its final location; the task's output is moved to the job's output directory.
comparator() - Method in class org.apache.hadoop.io.SortedMapWritable
COMPARATOR_JCLASS - Static variable in class org.apache.hadoop.io.file.tfile.TFile
comparator prefix: java class
COMPARATOR_MEMCMP - Static variable in class org.apache.hadoop.io.file.tfile.TFile
comparator: memcmp
compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.examples.SecondarySort.FirstGroupingComparator
 
compare(SecondarySort.IntPair, SecondarySort.IntPair) - Method in class org.apache.hadoop.examples.SecondarySort.FirstGroupingComparator
 
compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.examples.SecondarySort.IntPair.Comparator
 
compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.io.BooleanWritable.Comparator
 
compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.io.BytesWritable.Comparator
Compare the buffers in serialized form.
compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.io.ByteWritable.Comparator
 
compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.io.DoubleWritable.Comparator
 
compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.io.FloatWritable.Comparator
 
compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.io.IntWritable.Comparator
 
compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.io.LongWritable.Comparator
 
compare(WritableComparable, WritableComparable) - Method in class org.apache.hadoop.io.LongWritable.DecreasingComparator
 
compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.io.LongWritable.DecreasingComparator
 
compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.io.MD5Hash.Comparator
 
compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.io.NullWritable.Comparator
Compare the buffers in serialized form.
compare(byte[], int, int, byte[], int, int) - Method in interface org.apache.hadoop.io.RawComparator
 
compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.io.serializer.DeserializerComparator
 
compare(T, T) - Method in class org.apache.hadoop.io.serializer.JavaSerializationComparator
 
compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.io.Text.Comparator
 
compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.io.UTF8.Comparator
Deprecated.  
compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.io.WritableComparator
Optimization hook.
compare(WritableComparable, WritableComparable) - Method in class org.apache.hadoop.io.WritableComparator
Compare two WritableComparables.
compare(Object, Object) - Method in class org.apache.hadoop.io.WritableComparator
 
compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.mapred.lib.KeyFieldBasedComparator
 
compare(byte[], int, int, byte[], int, int) - Method in class org.apache.hadoop.record.RecordComparator
 
compare(byte[], int) - Method in class org.apache.hadoop.util.DataChecksum
Compares the checksum located at buf[offset] with the current checksum.
compare(int, int) - Method in interface org.apache.hadoop.util.IndexedSortable
Compare items at the given addresses consistent with the semantics of Comparator.compare(Object, Object).
compareBytes(byte[], int, int, byte[], int, int) - Static method in class org.apache.hadoop.io.WritableComparator
Lexicographic order of binary data.
compareBytes(byte[], int, int, byte[], int, int) - Static method in class org.apache.hadoop.record.Utils
Lexicographic order of binary data.
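A small sketch of the lexicographic byte comparison; the two strings are arbitrary examples.
    import org.apache.hadoop.io.WritableComparator;

    public class CompareBytesSketch {
      public static void main(String[] args) throws Exception {
        byte[] a = "apple".getBytes("UTF-8");
        byte[] b = "apricot".getBytes("UTF-8");
        // Negative, zero, or positive, like Comparator.compare.
        int cmp = WritableComparator.compareBytes(a, 0, a.length, b, 0, b.length);
        System.out.println(cmp < 0);   // true: "apple" sorts before "apricot" byte-wise
      }
    }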
compareTo(Object) - Method in class org.apache.hadoop.contrib.index.mapred.DocumentID
 
compareTo(Object) - Method in class org.apache.hadoop.contrib.index.mapred.Shard
 
compareTo(Shard) - Method in class org.apache.hadoop.contrib.index.mapred.Shard
Compare to another shard.
compareTo(Object) - Method in class org.apache.hadoop.examples.MultiFileWordCount.WordOffset
 
compareTo(SecondarySort.IntPair) - Method in class org.apache.hadoop.examples.SecondarySort.IntPair
 
compareTo(Object) - Method in class org.apache.hadoop.fs.FileStatus
Compare this object to another object
compareTo(Object) - Method in class org.apache.hadoop.fs.Path
 
compareTo(BinaryComparable) - Method in class org.apache.hadoop.io.BinaryComparable
Compare bytes from getBytes().
compareTo(byte[], int, int) - Method in class org.apache.hadoop.io.BinaryComparable
Compare bytes from getBytes() to those provided.
compareTo(Object) - Method in class org.apache.hadoop.io.BooleanWritable
 
compareTo(Object) - Method in class org.apache.hadoop.io.ByteWritable
Compares two ByteWritables.
compareTo(Object) - Method in class org.apache.hadoop.io.DoubleWritable
 
compareTo(byte[]) - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner.Entry
Compare the entry key to another key.
compareTo(byte[], int, int) - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner.Entry
Compare the entry key to another key.
compareTo(RawComparable) - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner.Entry
Compare an entry with a RawComparable object.
compareTo(Utils.Version) - Method in class org.apache.hadoop.io.file.tfile.Utils.Version
Compare this version with another version.
compareTo(Object) - Method in class org.apache.hadoop.io.FloatWritable
Compares two FloatWritables.
compareTo(Object) - Method in class org.apache.hadoop.io.IntWritable
Compares two IntWritables.
compareTo(Object) - Method in class org.apache.hadoop.io.LongWritable
Compares two LongWritables.
compareTo(MD5Hash) - Method in class org.apache.hadoop.io.MD5Hash
Compares this object with the specified object for order.
compareTo(Object) - Method in class org.apache.hadoop.io.NullWritable
 
compareTo(Object) - Method in class org.apache.hadoop.io.SequenceFile.Sorter.SegmentDescriptor
 
compareTo(Object) - Method in class org.apache.hadoop.io.UTF8
Deprecated. Compare two UTF8s.
compareTo(Object) - Method in class org.apache.hadoop.io.VIntWritable
Compares two VIntWritables.
compareTo(Object) - Method in class org.apache.hadoop.io.VLongWritable
Compares two VLongWritables.
compareTo(ComposableRecordReader<K, ?>) - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
Implement Comparable contract (compare key of join or head of heap with that of another).
compareTo(ComposableRecordReader<K, ?>) - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
Implement Comparable contract (compare key at head of proxied RR with that of another).
compareTo(ID) - Method in class org.apache.hadoop.mapreduce.ID
Compare IDs by associated numbers
compareTo(ID) - Method in class org.apache.hadoop.mapreduce.JobID
Compare JobIds by first jtIdentifiers, then by job numbers
compareTo(ID) - Method in class org.apache.hadoop.mapreduce.TaskAttemptID
Compare TaskIds by first tipIds, then by task numbers.
compareTo(ID) - Method in class org.apache.hadoop.mapreduce.TaskID
Compare TaskInProgressIds by first jobIds, then by tip numbers.
compareTo(Object) - Method in class org.apache.hadoop.record.Buffer
Define the sort order of the Buffer.
compareTo(Object) - Method in class org.apache.hadoop.record.meta.RecordTypeInfo
This class doesn't implement Comparable as it's not meant to be used for anything besides de/serializing.
compareTo(Object) - Method in class org.apache.hadoop.record.Record
 
compareTo(Key) - Method in class org.apache.hadoop.util.bloom.Key
 
compatibleWith(Utils.Version) - Method in class org.apache.hadoop.io.file.tfile.Utils.Version
Test compatibility.
complete() - Method in class org.apache.hadoop.util.Progress
Completes this node, moving the parent node to its next child.
completedJobs() - Method in class org.apache.hadoop.mapred.JobTracker
 
completeLocalOutput(Path, Path) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
 
completeLocalOutput(Path, Path) - Method in class org.apache.hadoop.fs.FileSystem
Called when we're all done writing to the target.
completeLocalOutput(Path, Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
Called when we're all done writing to the target.
completeLocalOutput(Path, Path) - Method in class org.apache.hadoop.fs.HarFileSystem
not implemented.
completeLocalOutput(Path, Path) - Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
 
completeLocalOutput(Path, Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
ComposableInputFormat<K extends WritableComparable,V extends Writable> - Interface in org.apache.hadoop.mapred.join
Refinement of InputFormat requiring implementors to provide ComposableRecordReader instead of RecordReader.
ComposableRecordReader<K extends WritableComparable,V extends Writable> - Interface in org.apache.hadoop.mapred.join
Additional operations required of a RecordReader to participate in a join.
compose(Class<? extends InputFormat>, String) - Static method in class org.apache.hadoop.mapred.join.CompositeInputFormat
Convenience method for constructing composite formats.
compose(String, Class<? extends InputFormat>, String...) - Static method in class org.apache.hadoop.mapred.join.CompositeInputFormat
Convenience method for constructing composite formats.
compose(String, Class<? extends InputFormat>, Path...) - Static method in class org.apache.hadoop.mapred.join.CompositeInputFormat
Convenience method for constructing composite formats.
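A sketch of composing an inner join over two sorted, identically partitioned inputs; the input paths are placeholders, and the composed expression is stored under the mapred.join.expr property.
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.SequenceFileInputFormat;
    import org.apache.hadoop.mapred.join.CompositeInputFormat;

    public class CompositeJoinSketch {
      public static JobConf configureJoin() {
        JobConf job = new JobConf();
        job.setInputFormat(CompositeInputFormat.class);
        // "inner" may also be "outer" or "override"; the paths are placeholders.
        job.set("mapred.join.expr",
            CompositeInputFormat.compose("inner", SequenceFileInputFormat.class,
                                         new Path("/data/left"), new Path("/data/right")));
        return job;
      }
    }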
CompositeContext - Class in org.apache.hadoop.metrics.spi
 
CompositeContext() - Constructor for class org.apache.hadoop.metrics.spi.CompositeContext
 
CompositeInputFormat<K extends WritableComparable> - Class in org.apache.hadoop.mapred.join
An InputFormat capable of performing joins over a set of data sources sorted and partitioned the same way.
CompositeInputFormat() - Constructor for class org.apache.hadoop.mapred.join.CompositeInputFormat
 
CompositeInputSplit - Class in org.apache.hadoop.mapred.join
This InputSplit contains a set of child InputSplits.
CompositeInputSplit() - Constructor for class org.apache.hadoop.mapred.join.CompositeInputSplit
 
CompositeInputSplit(int) - Constructor for class org.apache.hadoop.mapred.join.CompositeInputSplit
 
CompositeRecordReader<K extends WritableComparable,V extends Writable,X extends Writable> - Class in org.apache.hadoop.mapred.join
A RecordReader that can effect joins of RecordReaders sharing a common key type and partitioning.
CompositeRecordReader(int, int, Class<? extends WritableComparator>) - Constructor for class org.apache.hadoop.mapred.join.CompositeRecordReader
Create a RecordReader with capacity children to position id in the parent reader.
compress() - Method in class org.apache.hadoop.io.compress.BlockCompressorStream
 
compress(byte[], int, int) - Method in class org.apache.hadoop.io.compress.bzip2.BZip2DummyCompressor
 
compress(byte[], int, int) - Method in interface org.apache.hadoop.io.compress.Compressor
Fills specified buffer with compressed data.
compress() - Method in class org.apache.hadoop.io.compress.CompressorStream
 
compress(byte[], int, int) - Method in class org.apache.hadoop.io.compress.zlib.BuiltInZlibDeflater
 
compress(byte[], int, int) - Method in class org.apache.hadoop.io.compress.zlib.ZlibCompressor
 
compressedValSerializer - Variable in class org.apache.hadoop.io.SequenceFile.Writer
 
CompressedWritable - Class in org.apache.hadoop.io
A base-class for Writables which store themselves compressed and lazily inflate on field access.
CompressedWritable() - Constructor for class org.apache.hadoop.io.CompressedWritable
 
COMPRESSION_GZ - Static variable in class org.apache.hadoop.io.file.tfile.TFile
compression: gzip
COMPRESSION_LZO - Static variable in class org.apache.hadoop.io.file.tfile.TFile
compression: lzo
COMPRESSION_NONE - Static variable in class org.apache.hadoop.io.file.tfile.TFile
compression: none
COMPRESSION_SUFFIX - Static variable in class org.apache.hadoop.contrib.failmon.LocalStore
 
CompressionCodec - Interface in org.apache.hadoop.io.compress
This class encapsulates a streaming compression/decompression pair.
CompressionCodecFactory - Class in org.apache.hadoop.io.compress
A factory that will find the correct codec for a given filename.
CompressionCodecFactory(Configuration) - Constructor for class org.apache.hadoop.io.compress.CompressionCodecFactory
Find the codecs specified in the config value io.compression.codecs and register them.
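A brief sketch of codec lookup by filename suffix; the file path is a placeholder.
    import java.io.InputStream;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.CompressionCodecFactory;

    public class CodecLookupSketch {
      public static InputStream open(Path file, Configuration conf) throws Exception {
        FileSystem fs = file.getFileSystem(conf);
        CompressionCodec codec = new CompressionCodecFactory(conf).getCodec(file);
        // getCodec returns null when no registered codec matches the file suffix.
        return (codec == null) ? fs.open(file) : codec.createInputStream(fs.open(file));
      }
    }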
CompressionInputStream - Class in org.apache.hadoop.io.compress
A compression input stream.
CompressionInputStream(InputStream) - Constructor for class org.apache.hadoop.io.compress.CompressionInputStream
Create a compression input stream that reads the decompressed bytes from the given stream.
CompressionOutputStream - Class in org.apache.hadoop.io.compress
A compression output stream.
CompressionOutputStream(OutputStream) - Constructor for class org.apache.hadoop.io.compress.CompressionOutputStream
Create a compression output stream that writes the compressed bytes to the given stream.
Compressor - Interface in org.apache.hadoop.io.compress
Specification of a stream-based 'compressor' which can be plugged into a CompressionOutputStream to compress data.
compressor - Variable in class org.apache.hadoop.io.compress.CompressorStream
 
CompressorStream - Class in org.apache.hadoop.io.compress
 
CompressorStream(OutputStream, Compressor, int) - Constructor for class org.apache.hadoop.io.compress.CompressorStream
 
CompressorStream(OutputStream, Compressor) - Constructor for class org.apache.hadoop.io.compress.CompressorStream
 
CompressorStream(OutputStream) - Constructor for class org.apache.hadoop.io.compress.CompressorStream
Allow derived classes to directly set the underlying stream.
computeSplitSize(long, long, long) - Method in class org.apache.hadoop.mapred.FileInputFormat
Deprecated.  
computeSplitSize(long, long, long) - Method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
 
conf - Variable in class org.apache.hadoop.mapred.SequenceFileRecordReader
 
conf - Variable in class org.apache.hadoop.mapreduce.JobContext
 
conf - Variable in class org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader
 
config_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
Configurable - Interface in org.apache.hadoop.conf
Something that may be configured with a Configuration.
Configuration - Class in org.apache.hadoop.conf
Provides access to configuration parameters.
Configuration() - Constructor for class org.apache.hadoop.conf.Configuration
A new configuration.
Configuration(boolean) - Constructor for class org.apache.hadoop.conf.Configuration
A new configuration where the behavior of reading from the default resources can be turned off.
Configuration(Configuration) - Constructor for class org.apache.hadoop.conf.Configuration
A new configuration with the same settings cloned from another.
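A short sketch of typical Configuration use; the property names below are made up for illustration.
    import org.apache.hadoop.conf.Configuration;

    public class ConfSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();             // loads the default resources
        conf.set("example.app.name", "demo");                 // example.app.name is a made-up key
        int retries = conf.getInt("example.app.retries", 3);  // 3 is the fallback default
        Configuration copy = new Configuration(conf);         // same settings, independent object
        System.out.println(copy.get("example.app.name") + " retries=" + retries);
      }
    }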
Configuration.IntegerRanges - Class in org.apache.hadoop.conf
A class that represents a set of positive integer ranges.
Configuration.IntegerRanges() - Constructor for class org.apache.hadoop.conf.Configuration.IntegerRanges
 
Configuration.IntegerRanges(String) - Constructor for class org.apache.hadoop.conf.Configuration.IntegerRanges
 
configure(JobConf) - Method in class org.apache.hadoop.contrib.index.example.IdentityLocalAnalysis
 
configure(JobConf) - Method in class org.apache.hadoop.contrib.index.example.LineDocLocalAnalysis
 
configure(JobConf) - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateCombiner
 
configure(JobConf) - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateMapper
 
configure(JobConf) - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdatePartitioner
 
configure(JobConf) - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateReducer
 
configure(IndexUpdateConfiguration) - Method in class org.apache.hadoop.contrib.index.mapred.IntermediateForm
Configure using an index update configuration.
configure(JobConf) - Method in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
 
configure(JobConf) - Method in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
 
configure(JobConf) - Method in class org.apache.hadoop.contrib.utils.join.JobBase
Initializes a new instance from a JobConf.
configure(JobConf) - Method in class org.apache.hadoop.examples.dancing.DistributedPentomino.PentMap
 
configure(JobConf) - Method in class org.apache.hadoop.examples.PiEstimator.PiReducer
Store job configuration.
configure(JobConf) - Method in class org.apache.hadoop.examples.SleepJob
 
configure(JobConf) - Method in interface org.apache.hadoop.mapred.JobConfigurable
Deprecated. Initializes a new instance from a JobConf.
configure(JobConf) - Method in class org.apache.hadoop.mapred.KeyValueTextInputFormat
 
configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.aggregate.UserDefinedValueAggregatorDescriptor
Do nothing.
configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
get the input file name.
configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorCombiner
The combiner does not need to be configured.
configure(JobConf) - Method in interface org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorDescriptor
Configure the object
configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJobBase
 
configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.ChainMapper
Configures the ChainMapper and all the Mappers in the chain.
configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.ChainReducer
Configures the ChainReducer, the Reducer and all the Mappers in the chain.
configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat
Initializes a new instance from a JobConf.
configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.DelegatingMapper
 
configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.FieldSelectionMapReduce
 
configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.HashPartitioner
Deprecated.  
configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.KeyFieldBasedComparator
 
configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner
 
configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.MultithreadedMapRunner
 
configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.NLineInputFormat
 
configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.RegexMapper
 
configure(JobConf) - Method in class org.apache.hadoop.mapred.lib.TotalOrderPartitioner
Read in the partition file and build indexing data structures.
configure(JobConf) - Method in class org.apache.hadoop.mapred.MapReduceBase
Deprecated. Default implementation that does nothing.
configure(JobConf) - Method in class org.apache.hadoop.mapred.MapRunner
 
configure(JobConf) - Method in class org.apache.hadoop.mapred.TextInputFormat
Deprecated.  
configure(JobConf) - Method in class org.apache.hadoop.streaming.PipeMapper
 
configure(JobConf) - Method in class org.apache.hadoop.streaming.PipeMapRed
 
configure(JobConf) - Method in class org.apache.hadoop.streaming.PipeReducer
 
Configured - Class in org.apache.hadoop.conf
Base class for things that may be configured with a Configuration.
Configured() - Constructor for class org.apache.hadoop.conf.Configured
Construct a Configured.
Configured(Configuration) - Constructor for class org.apache.hadoop.conf.Configured
Construct a Configured.
configureDB(JobConf, String, String, String, String) - Static method in class org.apache.hadoop.mapred.lib.db.DBConfiguration
Sets the DB access related fields in the JobConf.
configureDB(JobConf, String, String) - Static method in class org.apache.hadoop.mapred.lib.db.DBConfiguration
Sets the DB access related fields in the JobConf.
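A sketch of setting the JDBC driver and connection details on a JobConf; the driver class, URL, user, and password are placeholders for a real database.
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.lib.db.DBConfiguration;

    public class DbConfSketch {
      public static void configure(JobConf job) {
        // Placeholder connection details; substitute values for an actual database.
        DBConfiguration.configureDB(job,
            "com.mysql.jdbc.Driver",
            "jdbc:mysql://dbhost:3306/mydb",
            "dbuser", "dbpassword");
      }
    }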
ConfiguredPolicy - Class in org.apache.hadoop.security.authorize
A Configuration based security Policy for Hadoop.
ConfiguredPolicy(Configuration, PolicyProvider) - Constructor for class org.apache.hadoop.security.authorize.ConfiguredPolicy
 
connect(Socket, SocketAddress, int) - Static method in class org.apache.hadoop.net.NetUtils
This is a drop-in replacement for Socket.connect(SocketAddress, int).
ConnectionPermission - Class in org.apache.hadoop.security.authorize
Permission to initiate a connection to a given service.
ConnectionPermission(Class<?>) - Constructor for class org.apache.hadoop.security.authorize.ConnectionPermission
ConnectionPermission for a given service.
constructQuery(String, String[]) - Method in class org.apache.hadoop.mapred.lib.db.DBOutputFormat
Constructs the query used as the prepared statement to insert data.
Consts - Class in org.apache.hadoop.record.compiler
const definitions for Record I/O compiler
contains(Node) - Method in class org.apache.hadoop.net.NetworkTopology
Check if the tree contains the given node.
containsKey(Object) - Method in class org.apache.hadoop.io.MapWritable
containsKey(Object) - Method in class org.apache.hadoop.io.SortedMapWritable
containsValue(Object) - Method in class org.apache.hadoop.io.MapWritable
containsValue(Object) - Method in class org.apache.hadoop.io.SortedMapWritable
ContentSummary - Class in org.apache.hadoop.fs
Store the summary of a content (a directory or a file).
ContentSummary() - Constructor for class org.apache.hadoop.fs.ContentSummary
Constructor
ContentSummary(long, long, long) - Constructor for class org.apache.hadoop.fs.ContentSummary
Constructor
ContentSummary(long, long, long, long, long, long) - Constructor for class org.apache.hadoop.fs.ContentSummary
Constructor
ContextFactory - Class in org.apache.hadoop.metrics
Factory class for creating MetricsContext objects.
ContextFactory() - Constructor for class org.apache.hadoop.metrics.ContextFactory
Creates a new instance of ContextFactory
Continuous - Class in org.apache.hadoop.contrib.failmon
This class runs FailMon in a continuous mode on the local node.
Continuous() - Constructor for class org.apache.hadoop.contrib.failmon.Continuous
 
convertToByteStream(Checksum, int) - Static method in class org.apache.hadoop.fs.FSOutputSummer
Converts a checksum integer value to a byte stream
copy(FileSystem, Path, FileSystem, Path, boolean, Configuration) - Static method in class org.apache.hadoop.fs.FileUtil
Copy files between FileSystems.
copy(FileSystem, Path[], FileSystem, Path, boolean, boolean, Configuration) - Static method in class org.apache.hadoop.fs.FileUtil
 
copy(FileSystem, Path, FileSystem, Path, boolean, boolean, Configuration) - Static method in class org.apache.hadoop.fs.FileUtil
Copy files between FileSystems.
copy(File, FileSystem, Path, boolean, Configuration) - Static method in class org.apache.hadoop.fs.FileUtil
Copy local files to a FileSystem.
copy(FileSystem, Path, File, boolean, Configuration) - Static method in class org.apache.hadoop.fs.FileUtil
Copy FileSystem files to local files.
copy(Writable) - Method in class org.apache.hadoop.io.AbstractMapWritable
Used by child copy constructors.
copy(byte[], int, int) - Method in class org.apache.hadoop.record.Buffer
Copy the specified byte array to the Buffer.
copy(Configuration, T, T) - Static method in class org.apache.hadoop.util.ReflectionUtils
Make a copy of the writable object using serialization to a buffer
copyBytes(InputStream, OutputStream, int, boolean) - Static method in class org.apache.hadoop.io.IOUtils
Copies from one stream to another.
copyBytes(InputStream, OutputStream, Configuration) - Static method in class org.apache.hadoop.io.IOUtils
Copies from one stream to another.
copyBytes(InputStream, OutputStream, Configuration, boolean) - Static method in class org.apache.hadoop.io.IOUtils
Copies from one stream to another.
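A minimal sketch of stream-to-stream copying on the default filesystem; the paths are placeholders.
    import java.io.InputStream;
    import java.io.OutputStream;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;

    public class CopyBytesSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        InputStream in = fs.open(new Path("/tmp/in.txt"));       // placeholder paths
        OutputStream out = fs.create(new Path("/tmp/out.txt"));
        // Buffer size comes from io.file.buffer.size; "true" closes both streams afterwards.
        IOUtils.copyBytes(in, out, conf, true);
      }
    }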
copyFromLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
 
copyFromLocalFile(Path, Path) - Method in class org.apache.hadoop.fs.FileSystem
The src file is on the local disk.
copyFromLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.FileSystem
The src file is on the local disk.
copyFromLocalFile(boolean, boolean, Path[], Path) - Method in class org.apache.hadoop.fs.FileSystem
The src files are on the local disk.
copyFromLocalFile(boolean, boolean, Path, Path) - Method in class org.apache.hadoop.fs.FileSystem
The src file is on the local disk.
copyFromLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
The src file is on the local disk.
copyFromLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.HarFileSystem
not implemented.
copyFromLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
 
copyFromLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.LocalFileSystem
 
copyMerge(FileSystem, Path, FileSystem, Path, boolean, Configuration, String) - Static method in class org.apache.hadoop.fs.FileUtil
Copy all files in a directory to one output file (merge).
copyToHDFS(String, String) - Static method in class org.apache.hadoop.contrib.failmon.LocalStore
Copy a local file to HDFS
copyToLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
The src file is under FS, and the dst is on the local disk.
copyToLocalFile(Path, Path, boolean) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
The src file is under FS, and the dst is on the local disk.
copyToLocalFile(Path, Path) - Method in class org.apache.hadoop.fs.FileSystem
The src file is under FS, and the dst is on the local disk.
copyToLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.FileSystem
The src file is under FS, and the dst is on the local disk.
copyToLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
The src file is under FS, and the dst is on the local disk.
copyToLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.HarFileSystem
copies the file in the har filesystem to a local file.
copyToLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
 
copyToLocalFile(boolean, Path, Path) - Method in class org.apache.hadoop.fs.LocalFileSystem
 
Count - Class in org.apache.hadoop.fs.shell
Count the number of directories, files, bytes, quota, and remaining quota.
Count(String[], int, Configuration) - Constructor for class org.apache.hadoop.fs.shell.Count
Constructor
countCounters() - Method in class org.apache.hadoop.mapreduce.Counters
Returns the total number of counters, by summing the number of counters in each group.
Counter - Class in org.apache.hadoop.mapreduce
A named counter that tracks the progress of a map/reduce job.
Counter() - Constructor for class org.apache.hadoop.mapreduce.Counter
 
Counter(String, String) - Constructor for class org.apache.hadoop.mapreduce.Counter
 
COUNTER_GROUP - Static variable in class org.apache.hadoop.mapred.SkipBadRecords
Special counters which are written by the application and are used by the framework for detecting bad records.
COUNTER_MAP_PROCESSED_RECORDS - Static variable in class org.apache.hadoop.mapred.SkipBadRecords
Number of processed map records.
COUNTER_REDUCE_PROCESSED_GROUPS - Static variable in class org.apache.hadoop.mapred.SkipBadRecords
Number of processed reduce groups.
CounterGroup - Class in org.apache.hadoop.mapreduce
A group of Counters that logically belong together.
CounterGroup(String) - Constructor for class org.apache.hadoop.mapreduce.CounterGroup
 
CounterGroup(String, String) - Constructor for class org.apache.hadoop.mapreduce.CounterGroup
 
Counters - Class in org.apache.hadoop.mapred
Deprecated. Use Counters instead.
Counters() - Constructor for class org.apache.hadoop.mapred.Counters
Deprecated.  
Counters - Class in org.apache.hadoop.mapreduce
 
Counters() - Constructor for class org.apache.hadoop.mapreduce.Counters
 
Counters.Counter - Class in org.apache.hadoop.mapred
Deprecated. A counter record, comprising its name and value.
Counters.Group - Class in org.apache.hadoop.mapred
Deprecated. Group of counters, comprising of counters from a particular counter Enum class.
CountingBloomFilter - Class in org.apache.hadoop.util.bloom
Implements a counting Bloom filter, as defined by Fan et al.
CountingBloomFilter() - Constructor for class org.apache.hadoop.util.bloom.CountingBloomFilter
Default constructor - use with readFields
CountingBloomFilter(int, int, int) - Constructor for class org.apache.hadoop.util.bloom.CountingBloomFilter
Constructor
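A sketch of building and probing a counting Bloom filter; the vector size (1024), number of hash functions (4), and key are illustrative choices only.
    import org.apache.hadoop.util.bloom.CountingBloomFilter;
    import org.apache.hadoop.util.bloom.Key;
    import org.apache.hadoop.util.hash.Hash;

    public class CountingBloomSketch {
      public static void main(String[] args) throws Exception {
        // Illustrative sizing; real values depend on the expected number of keys.
        CountingBloomFilter filter = new CountingBloomFilter(1024, 4, Hash.MURMUR_HASH);
        Key k = new Key("user-42".getBytes("UTF-8"));
        filter.add(k);
        System.out.println(filter.membershipTest(k));  // true (subject to false positives)
        filter.delete(k);                              // counting filters support deletion
      }
    }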
countNumOfAvailableNodes(String, List<Node>) - Method in class org.apache.hadoop.net.NetworkTopology
Return the number of leaves in scope but not in excludedNodes; if scope starts with ~, return the number of nodes that are not in scope and not in excludedNodes.
CPUParser - Class in org.apache.hadoop.contrib.failmon
Objects of this class parse the /proc/cpuinfo file to gather information about present processors in the system.
CPUParser() - Constructor for class org.apache.hadoop.contrib.failmon.CPUParser
Constructs a CPUParser
create(Path, FsPermission, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
Opens an FSDataOutputStream at the indicated Path with write-progress reporting.
create(FileSystem, Path, FsPermission) - Static method in class org.apache.hadoop.fs.FileSystem
Create a file with the provided permission. The permission of the file is set to be the provided permission as in setPermission, not permission&~umask. It is implemented using two RPCs.
create(Path) - Method in class org.apache.hadoop.fs.FileSystem
Opens an FSDataOutputStream at the indicated Path.
create(Path, boolean) - Method in class org.apache.hadoop.fs.FileSystem
Opens an FSDataOutputStream at the indicated Path.
create(Path, Progressable) - Method in class org.apache.hadoop.fs.FileSystem
Create an FSDataOutputStream at the indicated Path with write-progress reporting.
create(Path, short) - Method in class org.apache.hadoop.fs.FileSystem
Opens an FSDataOutputStream at the indicated Path.
create(Path, short, Progressable) - Method in class org.apache.hadoop.fs.FileSystem
Opens an FSDataOutputStream at the indicated Path with write-progress reporting.
create(Path, boolean, int) - Method in class org.apache.hadoop.fs.FileSystem
Opens an FSDataOutputStream at the indicated Path.
create(Path, boolean, int, Progressable) - Method in class org.apache.hadoop.fs.FileSystem
Opens an FSDataOutputStream at the indicated Path with write-progress reporting.
create(Path, boolean, int, short, long) - Method in class org.apache.hadoop.fs.FileSystem
Opens an FSDataOutputStream at the indicated Path.
create(Path, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.FileSystem
Opens an FSDataOutputStream at the indicated Path with write-progress reporting.
create(Path, FsPermission, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.FileSystem
Opens an FSDataOutputStream at the indicated Path with write-progress reporting.
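For the simpler overloads above, a sketch of creating and writing a file on the default filesystem; the path is a placeholder.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CreateFileSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        FSDataOutputStream out = fs.create(new Path("/tmp/example.txt"), true); // overwrite if present
        out.writeUTF("hello, hdfs");
        out.close();
      }
    }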
create(Path, FsPermission, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.FilterFileSystem
Opens an FSDataOutputStream at the indicated Path with write-progress reporting.
create(Path, FsPermission, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.ftp.FTPFileSystem
A stream obtained via this call must be closed before using other APIs of this class or else the invocation will block.
create(Path, int) - Method in class org.apache.hadoop.fs.HarFileSystem
 
create(Path, FsPermission, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.HarFileSystem
 
create(Path, FsPermission, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
 
create(Path, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
Opens an FSDataOutputStream at the indicated Path with write-progress reporting.
create(Path, FsPermission, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
Opens an FSDataOutputStream at the indicated Path with write-progress reporting.
create(Path, FsPermission, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
create(Path, FsPermission, boolean, int, short, long, Progressable) - Method in class org.apache.hadoop.fs.s3native.NativeS3FileSystem
 
create(Class<?>, Object, RetryPolicy) - Static method in class org.apache.hadoop.io.retry.RetryProxy
Create a proxy for an interface of an implementation class using the same retry policy for each method in the interface.
create(Class<?>, Object, Map<String, RetryPolicy>) - Static method in class org.apache.hadoop.io.retry.RetryProxy
Create a proxy for an interface of an implementation class using a set of retry policies specified by method name.
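A sketch of wrapping a service behind a retrying proxy; MyService and its implementation are hypothetical, and the retry policy values are arbitrary.
    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.io.retry.RetryPolicies;
    import org.apache.hadoop.io.retry.RetryPolicy;
    import org.apache.hadoop.io.retry.RetryProxy;

    public class RetrySketch {
      // MyService is a hypothetical interface; every method call on the proxy is retried.
      public interface MyService { void ping() throws Exception; }

      public static MyService wrap(MyService impl) {
        RetryPolicy policy =
            RetryPolicies.retryUpToMaximumCountWithFixedSleep(5, 2, TimeUnit.SECONDS);
        return (MyService) RetryProxy.create(MyService.class, impl, policy);
      }
    }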
createAllSymlink(Configuration, File, File) - Static method in class org.apache.hadoop.filecache.DistributedCache
This method creates symlinks for all files in a given directory in another directory.
createBaseListener(Configuration) - Method in class org.apache.hadoop.http.HttpServer
Create a required listener for the Jetty instance listening on the port provided.
createCompressor() - Method in class org.apache.hadoop.io.compress.BZip2Codec
This functionality is currently not supported.
createCompressor() - Method in interface org.apache.hadoop.io.compress.CompressionCodec
Create a new Compressor for use by this CompressionCodec.
createCompressor() - Method in class org.apache.hadoop.io.compress.DefaultCodec
 
createCompressor() - Method in class org.apache.hadoop.io.compress.GzipCodec
 
createDataFileReader(FileSystem, Path, Configuration) - Method in class org.apache.hadoop.io.MapFile.Reader
Override this method to specialize the type of SequenceFile.Reader returned.
createDataJoinJob(String[]) - Static method in class org.apache.hadoop.contrib.utils.join.DataJoinJob
 
createDecompressor() - Method in class org.apache.hadoop.io.compress.BZip2Codec
This functionality is currently not supported.
createDecompressor() - Method in interface org.apache.hadoop.io.compress.CompressionCodec
Create a new Decompressor for use by this CompressionCodec.
createDecompressor() - Method in class org.apache.hadoop.io.compress.DefaultCodec
 
createDecompressor() - Method in class org.apache.hadoop.io.compress.GzipCodec
 
createHardLink(File, File) - Static method in class org.apache.hadoop.fs.FileUtil.HardLink
Creates a hardlink
createImmutable(short) - Static method in class org.apache.hadoop.fs.permission.FsPermission
Create an immutable FsPermission object.
createImmutable(String, String, FsPermission) - Static method in class org.apache.hadoop.fs.permission.PermissionStatus
Create an immutable PermissionStatus object.
createImmutable(String[]) - Static method in class org.apache.hadoop.security.UnixUserGroupInformation
Create an immutable UnixUserGroupInformation object.
createInputStream(InputStream) - Method in class org.apache.hadoop.io.compress.BZip2Codec
Creates CompressionInputStream to be used to read off uncompressed data.
createInputStream(InputStream, Decompressor) - Method in class org.apache.hadoop.io.compress.BZip2Codec
This functionality is currently not supported.
createInputStream(InputStream) - Method in interface org.apache.hadoop.io.compress.CompressionCodec
Create a stream decompressor that will read from the given input stream.
createInputStream(InputStream, Decompressor) - Method in interface org.apache.hadoop.io.compress.CompressionCodec
Create a CompressionInputStream that will read from the given InputStream with the given Decompressor.
createInputStream(InputStream) - Method in class org.apache.hadoop.io.compress.DefaultCodec
 
createInputStream(InputStream, Decompressor) - Method in class org.apache.hadoop.io.compress.DefaultCodec
 
createInputStream(InputStream) - Method in class org.apache.hadoop.io.compress.GzipCodec
 
createInputStream(InputStream, Decompressor) - Method in class org.apache.hadoop.io.compress.GzipCodec
 
createInstance(String) - Static method in class org.apache.hadoop.mapred.lib.aggregate.UserDefinedValueAggregatorDescriptor
Create an instance of the given class
createInternalValue() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
Create a value to be used internally for joins.
createIOException(List<IOException>) - Static method in exception org.apache.hadoop.io.MultipleIOException
A convenient method to create an IOException.
createJob(String[]) - Static method in class org.apache.hadoop.streaming.StreamJob
This method creates a streaming job from the given argument list.
createKey() - Method in class org.apache.hadoop.contrib.index.example.LineDocRecordReader
 
createKey() - Method in class org.apache.hadoop.examples.MultiFileWordCount.MultiFileLineRecordReader
 
createKey() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
Create a new key value common to all child RRs.
createKey() - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
Request new key from proxied RR.
createKey() - Method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
 
createKey() - Method in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
 
createKey() - Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat.DBRecordReader
Create an object of the appropriate type to be used as a key.
createKey() - Method in class org.apache.hadoop.mapred.LineRecordReader
Deprecated.  
createKey() - Method in interface org.apache.hadoop.mapred.RecordReader
Create an object of the appropriate type to be used as a key.
createKey() - Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
 
createKey() - Method in class org.apache.hadoop.mapred.SequenceFileAsTextRecordReader
 
createKey() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
 
createKey() - Method in class org.apache.hadoop.streaming.StreamBaseRecordReader
 
createLocalTempFile(File, String, boolean) - Static method in class org.apache.hadoop.fs.FileUtil
Create a tmp file for a base file.
createNewFile(Path) - Method in class org.apache.hadoop.fs.FileSystem
Creates the given Path as a brand-new zero-length file.
createOutput(String) - Method in class org.apache.hadoop.contrib.index.lucene.FileSystemDirectory
 
createOutputStream(OutputStream) - Method in class org.apache.hadoop.io.compress.BZip2Codec
Creates CompressionOutputStream for BZip2
createOutputStream(OutputStream, Compressor) - Method in class org.apache.hadoop.io.compress.BZip2Codec
This functionality is currently not supported.
createOutputStream(OutputStream) - Method in interface org.apache.hadoop.io.compress.CompressionCodec
Create a CompressionOutputStream that will write to the given OutputStream.
createOutputStream(OutputStream, Compressor) - Method in interface org.apache.hadoop.io.compress.CompressionCodec
Create a CompressionOutputStream that will write to the given OutputStream with the given Compressor.
createOutputStream(OutputStream) - Method in class org.apache.hadoop.io.compress.DefaultCodec
 
createOutputStream(OutputStream, Compressor) - Method in class org.apache.hadoop.io.compress.DefaultCodec
 
createOutputStream(OutputStream) - Method in class org.apache.hadoop.io.compress.GzipCodec
 
createOutputStream(OutputStream, Compressor) - Method in class org.apache.hadoop.io.compress.GzipCodec
 
createPool(JobConf, List<PathFilter>) - Method in class org.apache.hadoop.mapred.lib.CombineFileInputFormat
Create a new pool and add the filters to it.
createPool(JobConf, PathFilter...) - Method in class org.apache.hadoop.mapred.lib.CombineFileInputFormat
Create a new pool and add the filters to it.
createRecord(String) - Method in interface org.apache.hadoop.metrics.MetricsContext
Creates a new MetricsRecord instance with the given recordName.
createRecord(MetricsContext, String) - Static method in class org.apache.hadoop.metrics.MetricsUtil
Utility method to create and return new metrics record instance within the given context.
createRecord(String) - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Creates a new AbstractMetricsRecord instance with the given recordName.
createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.InputFormat
Create a record reader for a given split.
createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat
 
createRecordReader(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.TextInputFormat
 
createResetableIterator() - Method in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
The subclass can provide a different implementation on ResetableIterator.
createScanner() - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader
Get a scanner that can scan the whole TFile.
createScanner(long, long) - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader
Get a scanner that covers a portion of TFile based on byte offsets.
createScanner(byte[], byte[]) - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader
Get a scanner that covers a portion of TFile based on keys.
createScanner(RawComparable, RawComparable) - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader
Get a scanner that covers a specific key range.
createSocket() - Method in class org.apache.hadoop.net.SocksSocketFactory
 
createSocket(InetAddress, int) - Method in class org.apache.hadoop.net.SocksSocketFactory
 
createSocket(InetAddress, int, InetAddress, int) - Method in class org.apache.hadoop.net.SocksSocketFactory
 
createSocket(String, int) - Method in class org.apache.hadoop.net.SocksSocketFactory
 
createSocket(String, int, InetAddress, int) - Method in class org.apache.hadoop.net.SocksSocketFactory
 
createSocket() - Method in class org.apache.hadoop.net.StandardSocketFactory
 
createSocket(InetAddress, int) - Method in class org.apache.hadoop.net.StandardSocketFactory
 
createSocket(InetAddress, int, InetAddress, int) - Method in class org.apache.hadoop.net.StandardSocketFactory
 
createSocket(String, int) - Method in class org.apache.hadoop.net.StandardSocketFactory
 
createSocket(String, int, InetAddress, int) - Method in class org.apache.hadoop.net.StandardSocketFactory
 
createSocketAddr(String) - Static method in class org.apache.hadoop.net.NetUtils
Util method to build a socket address from either "host:port" or "scheme://host:port/path".
createSocketAddr(String, int) - Static method in class org.apache.hadoop.net.NetUtils
Util method to build a socket address from either "host:port" or "scheme://host:port/path", using the given default port when the target string has none.
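A short sketch of both forms; the host names and ports are placeholders.
    import java.net.InetSocketAddress;
    import org.apache.hadoop.net.NetUtils;

    public class SocketAddrSketch {
      public static void main(String[] args) {
        InetSocketAddress a = NetUtils.createSocketAddr("namenode.example.com:8020");
        InetSocketAddress b = NetUtils.createSocketAddr("hdfs://namenode.example.com:8020/");
        InetSocketAddress c = NetUtils.createSocketAddr("namenode.example.com", 8020); // 8020 used as default port
        System.out.println(a + " " + b + " " + c);
      }
    }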
createSymlink(Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
This method allows you to create symlinks in the current working directory of the task to all the cache files/archives
createTmpFileForWrite(String, long, Configuration) - Method in class org.apache.hadoop.fs.LocalDirAllocator
Creates a temporary file in the local FS.
createURLStreamHandler(String) - Method in class org.apache.hadoop.fs.FsUrlStreamHandlerFactory
 
createValue() - Method in class org.apache.hadoop.contrib.index.example.LineDocRecordReader
 
createValue() - Method in class org.apache.hadoop.examples.MultiFileWordCount.MultiFileLineRecordReader
 
createValue() - Method in class org.apache.hadoop.mapred.join.JoinRecordReader
Create an object of the appropriate type to be used as a value.
createValue() - Method in class org.apache.hadoop.mapred.join.MultiFilterRecordReader
Create an object of the appropriate type to be used as a value.
createValue() - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
Request new value from proxied RR.
createValue() - Method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
 
createValue() - Method in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
 
createValue() - Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat.DBRecordReader
Create an object of the appropriate type to be used as a value.
createValue() - Method in class org.apache.hadoop.mapred.LineRecordReader
Deprecated.  
createValue() - Method in interface org.apache.hadoop.mapred.RecordReader
Create an object of the appropriate type to be used as a value.
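
A short sketch of the usual createKey()/createValue() pattern with the old-API RecordReader; the countRecords helper is ours, not part of the API.

    import java.io.IOException;

    import org.apache.hadoop.mapred.RecordReader;

    public class RecordCounter {
      // Works with any reader obtained from InputFormat.getRecordReader(...).
      public static <K, V> long countRecords(RecordReader<K, V> reader) throws IOException {
        K key = reader.createKey();        // reusable key instance
        V value = reader.createValue();    // reusable value instance
        long count = 0;
        while (reader.next(key, value)) {  // next(...) refills the same two objects
          count++;
        }
        reader.close();
        return count;
      }
    }
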
createValue() - Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
 
createValue() - Method in class org.apache.hadoop.mapred.SequenceFileAsTextRecordReader
 
createValue() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
 
createValue() - Method in class org.apache.hadoop.streaming.StreamBaseRecordReader
 
createValueAggregatorJob(String[]) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
Create an Aggregate based map/reduce job.
createValueAggregatorJob(String[], Class<? extends ValueAggregatorDescriptor>[]) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
 
createValueAggregatorJobs(String[], Class<? extends ValueAggregatorDescriptor>[]) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
 
createValueAggregatorJobs(String[]) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
 
createValueBytes() - Method in class org.apache.hadoop.io.SequenceFile.Reader
 
createWriter(FileSystem, Configuration, Path, Class, Class) - Static method in class org.apache.hadoop.io.SequenceFile
Construct the preferred type of SequenceFile Writer.
createWriter(FileSystem, Configuration, Path, Class, Class, SequenceFile.CompressionType) - Static method in class org.apache.hadoop.io.SequenceFile
Construct the preferred type of SequenceFile Writer.
createWriter(FileSystem, Configuration, Path, Class, Class, SequenceFile.CompressionType, Progressable) - Static method in class org.apache.hadoop.io.SequenceFile
Construct the preferred type of SequenceFile Writer.
createWriter(FileSystem, Configuration, Path, Class, Class, SequenceFile.CompressionType, CompressionCodec) - Static method in class org.apache.hadoop.io.SequenceFile
Construct the preferred type of SequenceFile Writer.
createWriter(FileSystem, Configuration, Path, Class, Class, SequenceFile.CompressionType, CompressionCodec, Progressable, SequenceFile.Metadata) - Static method in class org.apache.hadoop.io.SequenceFile
Construct the preferred type of SequenceFile Writer.
createWriter(FileSystem, Configuration, Path, Class, Class, int, short, long, SequenceFile.CompressionType, CompressionCodec, Progressable, SequenceFile.Metadata) - Static method in class org.apache.hadoop.io.SequenceFile
Construct the preferred type of SequenceFile Writer.
createWriter(FileSystem, Configuration, Path, Class, Class, SequenceFile.CompressionType, CompressionCodec, Progressable) - Static method in class org.apache.hadoop.io.SequenceFile
Construct the preferred type of SequenceFile Writer.
createWriter(Configuration, FSDataOutputStream, Class, Class, SequenceFile.CompressionType, CompressionCodec, SequenceFile.Metadata) - Static method in class org.apache.hadoop.io.SequenceFile
Construct the preferred type of 'raw' SequenceFile Writer.
createWriter(Configuration, FSDataOutputStream, Class, Class, SequenceFile.CompressionType, CompressionCodec) - Static method in class org.apache.hadoop.io.SequenceFile
Construct the preferred type of 'raw' SequenceFile Writer.
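
A minimal sketch of writing a SequenceFile through one of the createWriter overloads listed above; the output path, key/value classes and compression type are chosen only for illustration.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    public class SeqFileWriteExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path file = new Path("/tmp/example.seq");                  // illustrative path
        SequenceFile.Writer writer = SequenceFile.createWriter(
            fs, conf, file, Text.class, IntWritable.class,
            SequenceFile.CompressionType.BLOCK);                   // NONE, RECORD or BLOCK
        try {
          writer.append(new Text("apple"), new IntWritable(3));    // must match the declared classes
        } finally {
          writer.close();
        }
      }
    }
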
CSTRING_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
CsvRecordInput - Class in org.apache.hadoop.record
 
CsvRecordInput(InputStream) - Constructor for class org.apache.hadoop.record.CsvRecordInput
Creates a new instance of CsvRecordInput
CsvRecordOutput - Class in org.apache.hadoop.record
 
CsvRecordOutput(OutputStream) - Constructor for class org.apache.hadoop.record.CsvRecordOutput
Creates a new instance of CsvRecordOutput
CUR_DIR - Static variable in class org.apache.hadoop.fs.Path
 
curChar - Variable in class org.apache.hadoop.record.compiler.generated.RccTokenManager
 
curReader - Variable in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
 
CURRENT_VERSION - Static variable in class org.apache.hadoop.ipc.Server
 
currentToken - Variable in exception org.apache.hadoop.record.compiler.generated.ParseException
This is the last token that has been consumed successfully.
CyclicIteration<K,V> - Class in org.apache.hadoop.util
Provide a cyclic Iterator for a NavigableMap.
CyclicIteration(NavigableMap<K, V>, K) - Constructor for class org.apache.hadoop.util.CyclicIteration
Construct an Iterable object, so that an Iterator can be created for iterating the given NavigableMap.

D

Daemon - Class in org.apache.hadoop.util
A thread that has called Thread.setDaemon(boolean) with true.
Daemon() - Constructor for class org.apache.hadoop.util.Daemon
Construct a daemon thread.
Daemon(Runnable) - Constructor for class org.apache.hadoop.util.Daemon
Construct a daemon thread.
Daemon(ThreadGroup, Runnable) - Constructor for class org.apache.hadoop.util.Daemon
Construct a daemon thread to be part of a specified thread group.
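
A small sketch of typical Daemon usage; the Runnable body is a placeholder.

    import org.apache.hadoop.util.Daemon;

    public class DaemonExample {
      public static void main(String[] args) {
        // Daemon extends Thread and marks itself as a daemon, so it will not keep the JVM alive.
        Daemon heartbeat = new Daemon(new Runnable() {
          public void run() {
            // periodic background work would go here
          }
        });
        heartbeat.start();
      }
    }
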
DancingLinks<ColumnName> - Class in org.apache.hadoop.examples.dancing
A generic solver for tile-laying problems using Knuth's dancing links algorithm.
DancingLinks() - Constructor for class org.apache.hadoop.examples.dancing.DancingLinks
 
DancingLinks.SolutionAcceptor<ColumnName> - Interface in org.apache.hadoop.examples.dancing
Applications should implement this to receive the solutions to their problems.
DATA_FILE_NAME - Static variable in class org.apache.hadoop.io.MapFile
The name of the data file.
DataChecksum - Class in org.apache.hadoop.util
This class provides an interface and utilities for processing checksums for DFS data transfers.
DataInputBuffer - Class in org.apache.hadoop.io
A reusable DataInput implementation that reads from an in-memory buffer.
DataInputBuffer() - Constructor for class org.apache.hadoop.io.DataInputBuffer
Constructs a new empty buffer.
DataJoinJob - Class in org.apache.hadoop.contrib.utils.join
This class implements the main function for creating a map/reduce job to join data of different sources.
DataJoinJob() - Constructor for class org.apache.hadoop.contrib.utils.join.DataJoinJob
 
DataJoinMapperBase - Class in org.apache.hadoop.contrib.utils.join
This abstract class serves as the base class for the mapper class of a data join job.
DataJoinMapperBase() - Constructor for class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
 
DataJoinReducerBase - Class in org.apache.hadoop.contrib.utils.join
This abstract class serves as the base class for the reducer class of a data join job.
DataJoinReducerBase() - Constructor for class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
 
DataOutputBuffer - Class in org.apache.hadoop.io
A reusable DataOutput implementation that writes to an in-memory buffer.
DataOutputBuffer() - Constructor for class org.apache.hadoop.io.DataOutputBuffer
Constructs a new empty buffer.
DataOutputBuffer(int) - Constructor for class org.apache.hadoop.io.DataOutputBuffer
 
dateForm - Static variable in class org.apache.hadoop.fs.FsShell
 
DBConfiguration - Class in org.apache.hadoop.mapred.lib.db
A container for configuration property names for jobs with DB input/output.
DBCountPageView - Class in org.apache.hadoop.examples
This is a demonstrative program, which uses DBInputFormat for reading the input data from a database, and DBOutputFormat for writing the data to the database.
DBCountPageView() - Constructor for class org.apache.hadoop.examples.DBCountPageView
 
DBInputFormat<T extends DBWritable> - Class in org.apache.hadoop.mapred.lib.db
An InputFormat that reads input data from an SQL table.
DBInputFormat() - Constructor for class org.apache.hadoop.mapred.lib.db.DBInputFormat
 
DBInputFormat.DBInputSplit - Class in org.apache.hadoop.mapred.lib.db
An InputSplit that spans a set of rows.
DBInputFormat.DBInputSplit() - Constructor for class org.apache.hadoop.mapred.lib.db.DBInputFormat.DBInputSplit
Default Constructor
DBInputFormat.DBInputSplit(long, long) - Constructor for class org.apache.hadoop.mapred.lib.db.DBInputFormat.DBInputSplit
Convenience Constructor
DBInputFormat.DBRecordReader - Class in org.apache.hadoop.mapred.lib.db
A RecordReader that reads records from a SQL table.
DBInputFormat.DBRecordReader(DBInputFormat.DBInputSplit, Class<T>, JobConf) - Constructor for class org.apache.hadoop.mapred.lib.db.DBInputFormat.DBRecordReader
 
DBInputFormat.NullDBWritable - Class in org.apache.hadoop.mapred.lib.db
A Class that does nothing, implementing DBWritable
DBInputFormat.NullDBWritable() - Constructor for class org.apache.hadoop.mapred.lib.db.DBInputFormat.NullDBWritable
 
DBOutputFormat<K extends DBWritable,V> - Class in org.apache.hadoop.mapred.lib.db
An OutputFormat that sends the reduce output to a SQL table.
DBOutputFormat() - Constructor for class org.apache.hadoop.mapred.lib.db.DBOutputFormat
 
DBOutputFormat.DBRecordWriter - Class in org.apache.hadoop.mapred.lib.db
A RecordWriter that writes the reduce output to a SQL table
DBOutputFormat.DBRecordWriter(Connection, PreparedStatement) - Constructor for class org.apache.hadoop.mapred.lib.db.DBOutputFormat.DBRecordWriter
 
DBWritable - Interface in org.apache.hadoop.mapred.lib.db
Objects that are read from/written to a database should implement DBWritable.
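
A hedged sketch of a DBWritable implementation for a hypothetical two-column table (id INT, url VARCHAR); the table layout is an assumption, and classes used with DBInputFormat/DBOutputFormat usually also implement org.apache.hadoop.io.Writable.

    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    import org.apache.hadoop.mapred.lib.db.DBWritable;

    public class PageRecord implements DBWritable {
      private int id;
      private String url;

      public void readFields(ResultSet rs) throws SQLException {
        id = rs.getInt(1);        // read columns in the order they were requested
        url = rs.getString(2);
      }

      public void write(PreparedStatement ps) throws SQLException {
        ps.setInt(1, id);         // bind fields in the same column order for output
        ps.setString(2, url);
      }
    }
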
debug_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
debugStream - Variable in class org.apache.hadoop.record.compiler.generated.RccTokenManager
 
decDfsUsed(long) - Method in class org.apache.hadoop.fs.DU
Decrease how much disk space we use.
decode(byte[]) - Static method in class org.apache.hadoop.io.Text
Converts the provided byte array to a String using the UTF-8 encoding.
decode(byte[], int, int) - Static method in class org.apache.hadoop.io.Text
 
decode(byte[], int, int, boolean) - Static method in class org.apache.hadoop.io.Text
Converts the provided byte array to a String using the UTF-8 encoding.
decodeJobHistoryFileName(String) - Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
Helper function to decode the URL of the filename of the job-history log file.
decodeVIntSize(byte) - Static method in class org.apache.hadoop.io.WritableUtils
Parse the first byte of a vint/vlong to determine the number of bytes
decompress(byte[], int, int) - Method in class org.apache.hadoop.io.compress.BlockDecompressorStream
 
decompress(byte[], int, int) - Method in class org.apache.hadoop.io.compress.bzip2.BZip2DummyDecompressor
 
decompress(byte[], int, int) - Method in interface org.apache.hadoop.io.compress.Decompressor
Fills specified buffer with uncompressed data.
decompress(byte[], int, int) - Method in class org.apache.hadoop.io.compress.DecompressorStream
 
decompress(byte[], int, int) - Method in class org.apache.hadoop.io.compress.zlib.BuiltInZlibInflater
 
decompress(byte[], int, int) - Method in class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
 
Decompressor - Interface in org.apache.hadoop.io.compress
Specification of a stream-based 'de-compressor' which can be plugged into a CompressionInputStream to uncompress data.
decompressor - Variable in class org.apache.hadoop.io.compress.DecompressorStream
 
DecompressorStream - Class in org.apache.hadoop.io.compress
 
DecompressorStream(InputStream, Decompressor, int) - Constructor for class org.apache.hadoop.io.compress.DecompressorStream
 
DecompressorStream(InputStream, Decompressor) - Constructor for class org.apache.hadoop.io.compress.DecompressorStream
 
DecompressorStream(InputStream) - Constructor for class org.apache.hadoop.io.compress.DecompressorStream
Allow derived classes to directly set the underlying stream.
DEFAULT - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
DEFAULT_BLOCK_SIZE - Static variable in class org.apache.hadoop.fs.ftp.FTPFileSystem
 
DEFAULT_BUFFER_SIZE - Static variable in class org.apache.hadoop.fs.ftp.FTPFileSystem
 
DEFAULT_GROUP - Static variable in class org.apache.hadoop.security.UnixUserGroupInformation
 
DEFAULT_HOST_LEVEL - Static variable in class org.apache.hadoop.net.NetworkTopology
 
DEFAULT_LOG_INTERVAL - Static variable in class org.apache.hadoop.contrib.failmon.Environment
 
DEFAULT_LOG_INTERVAL - Static variable in class org.apache.hadoop.contrib.failmon.Executor
 
DEFAULT_PATH - Static variable in class org.apache.hadoop.mapred.lib.TotalOrderPartitioner
 
DEFAULT_PERIOD - Static variable in interface org.apache.hadoop.metrics.MetricsContext
Default period in seconds at which data is sent to the metrics system.
DEFAULT_POLICY_PROVIDER - Static variable in class org.apache.hadoop.security.authorize.PolicyProvider
A default PolicyProvider without any defined services.
DEFAULT_POLL_INTERVAL - Static variable in class org.apache.hadoop.contrib.failmon.Environment
 
DEFAULT_POLL_INTERVAL - Static variable in class org.apache.hadoop.contrib.failmon.Executor
 
DEFAULT_QUEUE_NAME - Static variable in class org.apache.hadoop.mapred.JobConf
Deprecated. Name of the queue to which jobs will be submitted, if no queue name is mentioned.
DEFAULT_RACK - Static variable in class org.apache.hadoop.net.NetworkTopology
 
DEFAULT_SLEEPTIME_BEFORE_SIGKILL - Static variable in class org.apache.hadoop.util.ProcfsBasedProcessTree
 
DEFAULT_UMASK - Static variable in class org.apache.hadoop.fs.permission.FsPermission
 
DEFAULT_USERNAME - Static variable in class org.apache.hadoop.security.UnixUserGroupInformation
 
DefaultCodec - Class in org.apache.hadoop.io.compress
 
DefaultCodec() - Constructor for class org.apache.hadoop.io.compress.DefaultCodec
 
defaultContexts - Variable in class org.apache.hadoop.http.HttpServer
 
DefaultJobHistoryParser - Class in org.apache.hadoop.mapred
Default parser for job history files.
DefaultJobHistoryParser() - Constructor for class org.apache.hadoop.mapred.DefaultJobHistoryParser
 
DefaultStringifier<T> - Class in org.apache.hadoop.io
DefaultStringifier is the default implementation of the Stringifier interface which stringifies the objects using base64 encoding of the serialized version of the objects.
DefaultStringifier(Configuration, Class<T>) - Constructor for class org.apache.hadoop.io.DefaultStringifier
 
define(Class, WritableComparator) - Static method in class org.apache.hadoop.io.WritableComparator
Register an optimized comparator for a WritableComparable implementation.
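
A hedged sketch of the usual registration pattern for WritableComparator.define; MyWritable and its assumed 4-byte integer key prefix are hypothetical and stand in for a real WritableComparable.

    import org.apache.hadoop.io.WritableComparator;

    public class MyWritableComparator extends WritableComparator {
      static {
        // Register the optimized comparator once, when the class is loaded.
        WritableComparator.define(MyWritable.class, new MyWritableComparator());
      }

      public MyWritableComparator() {
        super(MyWritable.class);
      }

      public int compare(byte[] b1, int s1, int l1, byte[] b2, int s2, int l2) {
        // Compare the serialized bytes directly, avoiding deserialization.
        return compareBytes(b1, s1, 4, b2, s2, 4);
      }
    }
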
define(Class, RecordComparator) - Static method in class org.apache.hadoop.record.RecordComparator
Register an optimized comparator for a Record implementation.
defineFilter(Context, String, String, Map<String, String>, String[]) - Method in class org.apache.hadoop.http.HttpServer
Define a filter for a context and set up default url mappings.
DelegatingInputFormat<K,V> - Class in org.apache.hadoop.mapred.lib
An InputFormat that delegates behaviour of paths to multiple other InputFormats.
DelegatingInputFormat() - Constructor for class org.apache.hadoop.mapred.lib.DelegatingInputFormat
 
DelegatingMapper<K1,V1,K2,V2> - Class in org.apache.hadoop.mapred.lib
A Mapper that delegates behaviour of paths to multiple other mappers.
DelegatingMapper() - Constructor for class org.apache.hadoop.mapred.lib.DelegatingMapper
 
DELETE - Static variable in class org.apache.hadoop.contrib.index.mapred.DocumentAndOp.Op
 
delete(Path, boolean) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
Implement the delete(Path, boolean) in checksum file system.
delete(Path) - Method in class org.apache.hadoop.fs.FileSystem
Deprecated. Use delete(Path, boolean) instead
delete(Path, boolean) - Method in class org.apache.hadoop.fs.FileSystem
Delete a file.
delete(Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
Deprecated. 
delete(Path, boolean) - Method in class org.apache.hadoop.fs.FilterFileSystem
Delete a file
delete(Path) - Method in class org.apache.hadoop.fs.ftp.FTPFileSystem
Deprecated. Use delete(Path, boolean) instead
delete(Path, boolean) - Method in class org.apache.hadoop.fs.ftp.FTPFileSystem
 
delete(Path, boolean) - Method in class org.apache.hadoop.fs.HarFileSystem
Not implemented.
delete(Path, boolean) - Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
 
delete(Path) - Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
Deprecated. 
delete(Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
Deprecated. 
delete(Path, boolean) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
delete(Path, boolean) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
delete(Path) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
Deprecated. 
delete(Path) - Method in class org.apache.hadoop.fs.s3native.NativeS3FileSystem
Deprecated. 
delete(Path, boolean) - Method in class org.apache.hadoop.fs.s3native.NativeS3FileSystem
 
delete(FileSystem, String) - Static method in class org.apache.hadoop.io.BloomMapFile
 
delete(FileSystem, String) - Static method in class org.apache.hadoop.io.MapFile
Deletes the named map file.
delete(Key) - Method in class org.apache.hadoop.util.bloom.CountingBloomFilter
Removes a specified key from this counting Bloom filter.
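
A small sketch of a CountingBloomFilter round trip (add, test, delete); the vector size and hash count are illustrative, not tuned values.

    import org.apache.hadoop.util.bloom.CountingBloomFilter;
    import org.apache.hadoop.util.bloom.Key;
    import org.apache.hadoop.util.hash.Hash;

    public class CountingBloomExample {
      public static void main(String[] args) {
        CountingBloomFilter filter = new CountingBloomFilter(1024, 4, Hash.MURMUR_HASH);
        Key k = new Key("user-42".getBytes());
        filter.add(k);
        System.out.println(filter.membershipTest(k));  // true (false positives are possible)
        filter.delete(k);                              // counting filters support removal
        System.out.println(filter.membershipTest(k));  // normally false once the count drops to zero
      }
    }
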
deleteBlock(Block) - Method in interface org.apache.hadoop.fs.s3.FileSystemStore
 
deleteFile(String) - Method in class org.apache.hadoop.contrib.index.lucene.FileSystemDirectory
 
deleteINode(Path) - Method in interface org.apache.hadoop.fs.s3.FileSystemStore
 
deleteLocalFiles() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated.  
deleteLocalFiles(String) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated.  
deleteOnExit(Path) - Method in class org.apache.hadoop.fs.FileSystem
Mark a path to be deleted when FileSystem is closed.
deleteTermIterator() - Method in class org.apache.hadoop.contrib.index.mapred.IntermediateForm
Get an iterator for the delete terms in the intermediate form.
DEPENDENT_FAILED - Static variable in class org.apache.hadoop.mapred.jobcontrol.Job
 
depth() - Method in class org.apache.hadoop.fs.Path
Return the number of elements in this path.
DEPTH_THRESH - Static variable in class org.apache.hadoop.io.compress.bzip2.CBZip2OutputStream
This constant is accessible by subclasses for historical purposes.
DESCRIPTION - Static variable in class org.apache.hadoop.fs.shell.Count
 
deserialize(InputStream) - Static method in class org.apache.hadoop.fs.s3.INode
 
deserialize(T) - Method in interface org.apache.hadoop.io.serializer.Deserializer
Deserialize the next object from the underlying input stream.
deserialize(RecordInput, String) - Method in class org.apache.hadoop.record.meta.RecordTypeInfo
Deserialize the type information for a record
deserialize(RecordInput, String) - Method in class org.apache.hadoop.record.Record
Deserialize a record with a tag (usually field name)
deserialize(RecordInput) - Method in class org.apache.hadoop.record.Record
Deserialize a record without a tag
Deserializer<T> - Interface in org.apache.hadoop.io.serializer
Provides a facility for deserializing objects of type T from an InputStream.
DeserializerComparator<T> - Class in org.apache.hadoop.io.serializer
A RawComparator that uses a Deserializer to deserialize the objects to be compared so that the standard Comparator can be used to compare them.
DeserializerComparator(Deserializer<T>) - Constructor for class org.apache.hadoop.io.serializer.DeserializerComparator
 
destroy() - Method in class org.apache.hadoop.util.ProcfsBasedProcessTree
Destroy the process-tree.
detailedUsage_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
DF - Class in org.apache.hadoop.fs
Filesystem disk space usage statistics.
DF(File, Configuration) - Constructor for class org.apache.hadoop.fs.DF
 
DF(File, long) - Constructor for class org.apache.hadoop.fs.DF
 
DF_INTERVAL_DEFAULT - Static variable in class org.apache.hadoop.fs.DF
 
dfmt(double) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
digest(byte[]) - Static method in class org.apache.hadoop.io.MD5Hash
Construct a hash value for a byte array.
digest(InputStream) - Static method in class org.apache.hadoop.io.MD5Hash
Construct a hash value for the content from the InputStream.
digest(byte[], int, int) - Static method in class org.apache.hadoop.io.MD5Hash
Construct a hash value for a byte array.
digest(String) - Static method in class org.apache.hadoop.io.MD5Hash
Construct a hash value for a String.
digest(UTF8) - Static method in class org.apache.hadoop.io.MD5Hash
Construct a hash value for a String.
DIRECTORY_INODE - Static variable in class org.apache.hadoop.fs.s3.INode
 
disable_tracing() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
DISABLED_MEMORY_LIMIT - Static variable in class org.apache.hadoop.mapred.JobConf
Deprecated. A value which if set for memory related configuration options, indicates that the options are turned off.
DiskChecker - Class in org.apache.hadoop.util
Class that provides utility functions for checking disk problems.
DiskChecker() - Constructor for class org.apache.hadoop.util.DiskChecker
 
DiskChecker.DiskErrorException - Exception in org.apache.hadoop.util
 
DiskChecker.DiskErrorException(String) - Constructor for exception org.apache.hadoop.util.DiskChecker.DiskErrorException
 
DiskChecker.DiskOutOfSpaceException - Exception in org.apache.hadoop.util
 
DiskChecker.DiskOutOfSpaceException(String) - Constructor for exception org.apache.hadoop.util.DiskChecker.DiskOutOfSpaceException
 
displayByteArray(byte[]) - Static method in class org.apache.hadoop.io.WritableUtils
 
displayTasks(JobID, String, String) - Method in class org.apache.hadoop.mapred.JobClient
Display the information about a job's tasks, of a particular type and in a particular state
DistributedCache - Class in org.apache.hadoop.filecache
Distribute application-specific large, read-only files efficiently.
DistributedCache() - Constructor for class org.apache.hadoop.filecache.DistributedCache
 
DistributedPentomino - Class in org.apache.hadoop.examples.dancing
Launch a distributed pentomino solver.
DistributedPentomino() - Constructor for class org.apache.hadoop.examples.dancing.DistributedPentomino
 
DistributedPentomino.PentMap - Class in org.apache.hadoop.examples.dancing
Each map takes a line, which represents a prefix move and finds all of the solutions that start with that prefix.
DistributedPentomino.PentMap() - Constructor for class org.apache.hadoop.examples.dancing.DistributedPentomino.PentMap
 
DNS - Class in org.apache.hadoop.net
A class that provides direct and reverse lookup functionalities, allowing the querying of specific network interfaces or nameservers.
DNS() - Constructor for class org.apache.hadoop.net.DNS
 
DNSToSwitchMapping - Interface in org.apache.hadoop.net
An interface that should be implemented to allow pluggable DNS-name/IP-address to RackID resolvers.
DocumentAndOp - Class in org.apache.hadoop.contrib.index.mapred
This class represents an indexing operation.
DocumentAndOp() - Constructor for class org.apache.hadoop.contrib.index.mapred.DocumentAndOp
Constructor for no operation.
DocumentAndOp(DocumentAndOp.Op, Document) - Constructor for class org.apache.hadoop.contrib.index.mapred.DocumentAndOp
Constructor for an insert operation.
DocumentAndOp(DocumentAndOp.Op, Term) - Constructor for class org.apache.hadoop.contrib.index.mapred.DocumentAndOp
Constructor for a delete operation.
DocumentAndOp(DocumentAndOp.Op, Document, Term) - Constructor for class org.apache.hadoop.contrib.index.mapred.DocumentAndOp
Constructor for an insert, a delete or an update operation.
DocumentAndOp.Op - Class in org.apache.hadoop.contrib.index.mapred
This class represents the type of an operation - an insert, a delete or an update.
DocumentID - Class in org.apache.hadoop.contrib.index.mapred
The class represents a document id, which is of type text.
DocumentID() - Constructor for class org.apache.hadoop.contrib.index.mapred.DocumentID
Constructor.
doGet(HttpServletRequest, HttpServletResponse) - Method in class org.apache.hadoop.http.HttpServer.StackServlet
 
doGet(HttpServletRequest, HttpServletResponse) - Method in class org.apache.hadoop.log.LogLevel.Servlet
 
doGet(HttpServletRequest, HttpServletResponse) - Method in class org.apache.hadoop.mapred.TaskGraphServlet
 
doGet(HttpServletRequest, HttpServletResponse) - Method in class org.apache.hadoop.mapred.TaskLogServlet
Get the logs via http.
doGet(HttpServletRequest, HttpServletResponse) - Method in class org.apache.hadoop.mapred.TaskTracker.MapOutputServlet
 
done(TaskAttemptID) - Method in class org.apache.hadoop.mapred.TaskTracker
The task is done.
Done() - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
done() - Method in interface org.apache.hadoop.record.Index
 
doSync() - Method in class org.apache.hadoop.io.SequenceFile.Sorter.SegmentDescriptor
Do the sync checks
DOT_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
DOUBLE - Static variable in class org.apache.hadoop.record.meta.TypeID.RIOType
 
DOUBLE_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
DOUBLE_VALUE_SUM - Static variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
 
DoubleTypeID - Static variable in class org.apache.hadoop.record.meta.TypeID
 
DoubleValueSum - Class in org.apache.hadoop.mapred.lib.aggregate
This class implements a value aggregator that sums up a sequence of double values.
DoubleValueSum() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.DoubleValueSum
The default constructor
DoubleWritable - Class in org.apache.hadoop.io
Writable for Double values.
DoubleWritable() - Constructor for class org.apache.hadoop.io.DoubleWritable
 
DoubleWritable(double) - Constructor for class org.apache.hadoop.io.DoubleWritable
 
DoubleWritable.Comparator - Class in org.apache.hadoop.io
A Comparator optimized for DoubleWritable.
DoubleWritable.Comparator() - Constructor for class org.apache.hadoop.io.DoubleWritable.Comparator
 
doUpdates(MetricsContext) - Method in class org.apache.hadoop.ipc.metrics.RpcMetrics
Push the metrics to the monitoring subsystem on doUpdate() call.
doUpdates(MetricsContext) - Method in class org.apache.hadoop.metrics.jvm.JvmMetrics
This will be called periodically (with the period being configuration dependent).
doUpdates(MetricsContext) - Method in interface org.apache.hadoop.metrics.Updater
Timer-based call-back from the metric library.
downgrade(JobID) - Static method in class org.apache.hadoop.mapred.JobID
Deprecated. Downgrade a new JobID to an old one
downgrade(TaskAttemptID) - Static method in class org.apache.hadoop.mapred.TaskAttemptID
Deprecated. Downgrade a new TaskAttemptID to an old one
downgrade(TaskID) - Static method in class org.apache.hadoop.mapred.TaskID
Deprecated. Downgrade a new TaskID to an old one
driver(String[]) - Static method in class org.apache.hadoop.record.compiler.generated.Rcc
 
driver(String[]) - Method in class org.apache.hadoop.util.ProgramDriver
This is a driver for the example programs.
DRIVER_CLASS_PROPERTY - Static variable in class org.apache.hadoop.mapred.lib.db.DBConfiguration
The JDBC Driver class name
DU - Class in org.apache.hadoop.fs
Filesystem disk space usage statistics.
DU(File, long) - Constructor for class org.apache.hadoop.fs.DU
Keeps track of disk usage.
DU(File, Configuration) - Constructor for class org.apache.hadoop.fs.DU
Keeps track of disk usage.
dump() - Method in interface org.apache.hadoop.fs.s3.FileSystemStore
Diagnostic method to dump all INodes to the console.
DynamicBloomFilter - Class in org.apache.hadoop.util.bloom
Implements a dynamic Bloom filter, as defined in the INFOCOM 2006 paper.
DynamicBloomFilter() - Constructor for class org.apache.hadoop.util.bloom.DynamicBloomFilter
Zero-args constructor for the serialization.
DynamicBloomFilter(int, int, int, int) - Constructor for class org.apache.hadoop.util.bloom.DynamicBloomFilter
Constructor.

E

emit(TupleWritable) - Method in class org.apache.hadoop.mapred.join.MultiFilterRecordReader
For each tuple emitted, return a value (typically one of the values in the tuple).
emit(TupleWritable) - Method in class org.apache.hadoop.mapred.join.OverrideRecordReader
Emit the value with the highest position in the tuple.
emitRecord(String, String, OutputRecord) - Method in class org.apache.hadoop.metrics.file.FileContext
Emits a metrics record to a file.
emitRecord(String, String, OutputRecord) - Method in class org.apache.hadoop.metrics.ganglia.GangliaContext
 
emitRecord(String, String, OutputRecord) - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Sends a record to the metrics system.
emitRecord(String, String, OutputRecord) - Method in class org.apache.hadoop.metrics.spi.CompositeContext
 
emitRecord(String, String, OutputRecord) - Method in class org.apache.hadoop.metrics.spi.NullContext
Do-nothing version of emitRecord
emitRecord(String, String, OutputRecord) - Method in class org.apache.hadoop.metrics.spi.NullContextWithUpdateThread
Do-nothing version of emitRecord
EMPTY_ARRAY - Static variable in class org.apache.hadoop.mapred.TaskCompletionEvent
 
enable_tracing() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
encode(String) - Static method in class org.apache.hadoop.io.Text
Converts the provided String to bytes using the UTF-8 encoding.
encode(String, boolean) - Static method in class org.apache.hadoop.io.Text
Converts the provided String to bytes using the UTF-8 encoding.
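
A minimal sketch of a Text.encode/Text.decode round trip.

    import java.nio.ByteBuffer;

    import org.apache.hadoop.io.Text;

    public class TextCodecExample {
      public static void main(String[] args) throws Exception {
        ByteBuffer bb = Text.encode("héllo");   // validated UTF-8 bytes
        byte[] bytes = new byte[bb.limit()];
        bb.get(bytes);
        String roundTrip = Text.decode(bytes);  // back to the original String
        System.out.println(roundTrip);
      }
    }
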
encodeJobHistoryFileName(String) - Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
Helper function to encode the URL of the filename of the job-history log file.
encodeJobHistoryFilePath(String) - Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
Helper function to encode the URL of the path of the job-history log file.
end() - Method in class org.apache.hadoop.io.compress.bzip2.BZip2DummyCompressor
 
end() - Method in class org.apache.hadoop.io.compress.bzip2.BZip2DummyDecompressor
 
end() - Method in interface org.apache.hadoop.io.compress.Compressor
Closes the compressor and discards any unprocessed input.
end() - Method in interface org.apache.hadoop.io.compress.Decompressor
Closes the decompressor and discards any unprocessed input.
end() - Method in class org.apache.hadoop.io.compress.zlib.ZlibCompressor
 
end() - Method in class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
 
endColumn - Variable in class org.apache.hadoop.record.compiler.generated.Token
beginLine and beginColumn describe the position of the first character of this token; endLine and endColumn describe the position of the last character of this token.
endLine - Variable in class org.apache.hadoop.record.compiler.generated.Token
beginLine and beginColumn describe the position of the first character of this token; endLine and endColumn describe the position of the last character of this token.
endMap(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
 
endMap(TreeMap, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
 
endMap(String) - Method in class org.apache.hadoop.record.CsvRecordInput
 
endMap(TreeMap, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
 
endMap(String) - Method in interface org.apache.hadoop.record.RecordInput
Check the mark for end of the serialized map.
endMap(TreeMap, String) - Method in interface org.apache.hadoop.record.RecordOutput
Mark the end of a serialized map.
endMap(String) - Method in class org.apache.hadoop.record.XmlRecordInput
 
endMap(TreeMap, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
 
endRecord(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
 
endRecord(Record, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
 
endRecord(String) - Method in class org.apache.hadoop.record.CsvRecordInput
 
endRecord(Record, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
 
endRecord(String) - Method in interface org.apache.hadoop.record.RecordInput
Check the mark for end of the serialized record.
endRecord(Record, String) - Method in interface org.apache.hadoop.record.RecordOutput
Mark the end of a serialized record.
endRecord(String) - Method in class org.apache.hadoop.record.XmlRecordInput
 
endRecord(Record, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
 
endVector(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
 
endVector(ArrayList, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
 
endVector(String) - Method in class org.apache.hadoop.record.CsvRecordInput
 
endVector(ArrayList, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
 
endVector(String) - Method in interface org.apache.hadoop.record.RecordInput
Check the mark for end of the serialized vector.
endVector(ArrayList, String) - Method in interface org.apache.hadoop.record.RecordOutput
Mark the end of a serialized vector.
endVector(String) - Method in class org.apache.hadoop.record.XmlRecordInput
 
endVector(ArrayList, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
 
ensureInflated() - Method in class org.apache.hadoop.io.CompressedWritable
Must be called by all methods which access fields to ensure that the data has been uncompressed.
entry() - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner
Get an entry to access the key and value.
entrySet() - Method in class org.apache.hadoop.io.MapWritable
entrySet() - Method in class org.apache.hadoop.io.SortedMapWritable
env_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
Environment - Class in org.apache.hadoop.contrib.failmon
This class provides various methods for interaction with the configuration and the operating system environment.
Environment() - Constructor for class org.apache.hadoop.contrib.failmon.Environment
 
Environment - Class in org.apache.hadoop.streaming
This is a class used to get the current environment on the host machines running the map/reduce.
Environment() - Constructor for class org.apache.hadoop.streaming.Environment
 
eof - Variable in class org.apache.hadoop.io.compress.DecompressorStream
 
EOF - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
eol - Variable in exception org.apache.hadoop.record.compiler.generated.ParseException
The end of line string for this machine.
equals(Object) - Method in class org.apache.hadoop.contrib.index.mapred.Shard
 
equals(Object) - Method in class org.apache.hadoop.examples.MultiFileWordCount.WordOffset
 
equals(Object) - Method in class org.apache.hadoop.examples.SecondarySort.IntPair
 
equals(Object) - Method in class org.apache.hadoop.fs.FileChecksum
Return true if both the algorithms and the values are the same.
equals(Object) - Method in class org.apache.hadoop.fs.FileStatus
Compare if this object is equal to another object
equals(Object) - Method in class org.apache.hadoop.fs.Path
 
equals(Object) - Method in class org.apache.hadoop.fs.permission.FsPermission
equals(Object) - Method in class org.apache.hadoop.io.BinaryComparable
Return true if bytes from getBytes() match.
equals(Object) - Method in class org.apache.hadoop.io.BooleanWritable
 
equals(Object) - Method in class org.apache.hadoop.io.BytesWritable
Are the two byte sequences equal?
equals(Object) - Method in class org.apache.hadoop.io.ByteWritable
Returns true iff o is a ByteWritable with the same value.
equals(Object) - Method in class org.apache.hadoop.io.DoubleWritable
Returns true iff o is a DoubleWritable with the same value.
equals(Object) - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner.Entry
Compare whether this and other point to the same key value.
equals(Object) - Method in class org.apache.hadoop.io.file.tfile.Utils.Version
 
equals(Object) - Method in class org.apache.hadoop.io.FloatWritable
Returns true iff o is a FloatWritable with the same value.
equals(Object) - Method in class org.apache.hadoop.io.IntWritable
Returns true iff o is a IntWritable with the same value.
equals(Object) - Method in class org.apache.hadoop.io.LongWritable
Returns true iff o is a LongWritable with the same value.
equals(Object) - Method in class org.apache.hadoop.io.MD5Hash
Returns true iff o is an MD5Hash whose digest contains the same values.
equals(Object) - Method in class org.apache.hadoop.io.NullWritable
 
equals(SequenceFile.Metadata) - Method in class org.apache.hadoop.io.SequenceFile.Metadata
 
equals(Object) - Method in class org.apache.hadoop.io.SequenceFile.Sorter.SegmentDescriptor
 
equals(Object) - Method in class org.apache.hadoop.io.Text
Returns true iff o is a Text with the same contents.
equals(Object) - Method in class org.apache.hadoop.io.UTF8
Deprecated. Returns true iff o is a UTF8 with the same contents.
equals(Object) - Method in class org.apache.hadoop.io.VIntWritable
Returns true iff o is a VIntWritable with the same value.
equals(Object) - Method in class org.apache.hadoop.io.VLongWritable
Returns true iff o is a VLongWritable with the same value.
equals(Object) - Method in class org.apache.hadoop.mapred.Counters
Deprecated.  
equals(Object) - Method in class org.apache.hadoop.mapred.Counters.Group
Deprecated. Checks for (content) equality of Groups
equals(Object) - Method in class org.apache.hadoop.mapred.join.TupleWritable
equals(Object) - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
Return true iff compareTo(other) returns 0.
equals(Object) - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
 
equals(Object) - Method in class org.apache.hadoop.mapred.TaskReport
 
equals(Object) - Method in class org.apache.hadoop.mapreduce.Counter
 
equals(Object) - Method in class org.apache.hadoop.mapreduce.CounterGroup
 
equals(Object) - Method in class org.apache.hadoop.mapreduce.Counters
 
equals(Object) - Method in class org.apache.hadoop.mapreduce.ID
 
equals(Object) - Method in class org.apache.hadoop.mapreduce.JobID
 
equals(Object) - Method in class org.apache.hadoop.mapreduce.TaskAttemptID
 
equals(Object) - Method in class org.apache.hadoop.mapreduce.TaskID
 
equals(Object) - Method in class org.apache.hadoop.net.SocksSocketFactory
 
equals(Object) - Method in class org.apache.hadoop.net.StandardSocketFactory
 
equals(Object) - Method in class org.apache.hadoop.record.Buffer
 
equals(Object) - Method in class org.apache.hadoop.record.meta.FieldTypeInfo
Two FieldTypeInfos are equal if each of their fields matches.
equals(FieldTypeInfo) - Method in class org.apache.hadoop.record.meta.FieldTypeInfo
 
equals(Object) - Method in class org.apache.hadoop.record.meta.MapTypeID
Two map typeIDs are equal if their constituent elements have the same type
equals(Object) - Method in class org.apache.hadoop.record.meta.TypeID
Two base typeIDs are equal if they refer to the same type
equals(Object) - Method in class org.apache.hadoop.record.meta.VectorTypeID
Two vector typeIDs are equal if their constituent elements have the same type
equals(Object) - Method in class org.apache.hadoop.security.authorize.ConnectionPermission
 
equals(Object) - Method in class org.apache.hadoop.security.Group
 
equals(Object) - Method in class org.apache.hadoop.security.UnixUserGroupInformation
Decide if two UGIs are the same
equals(Object) - Method in class org.apache.hadoop.security.User
 
equals(Object) - Method in class org.apache.hadoop.util.bloom.Key
 
ESCAPE_CHAR - Static variable in class org.apache.hadoop.util.StringUtils
 
escapeHTML(String) - Static method in class org.apache.hadoop.util.StringUtils
Escapes HTML Special characters present in the string.
escapeString(String) - Static method in class org.apache.hadoop.util.StringUtils
Escape commas in the string using the default escape char
escapeString(String, char, char) - Static method in class org.apache.hadoop.util.StringUtils
Escape charToEscape in the string with the escape char escapeChar
escapeString(String, char, char[]) - Static method in class org.apache.hadoop.util.StringUtils
 
estimate(int, long, JobConf) - Static method in class org.apache.hadoop.examples.PiEstimator
Run a map/reduce job for estimating Pi.
EventCounter - Class in org.apache.hadoop.metrics.jvm
A log4J Appender that simply counts logging events in three levels: fatal, error and warn.
EventCounter() - Constructor for class org.apache.hadoop.metrics.jvm.EventCounter
 
EventRecord - Class in org.apache.hadoop.contrib.failmon
Objects of this class represent metrics collected for a specific hardware source.
EventRecord(String, Object[], Calendar, String, String, String, String) - Constructor for class org.apache.hadoop.contrib.failmon.EventRecord
Create the EventRecord given the most common properties among different metric types.
EventRecord() - Constructor for class org.apache.hadoop.contrib.failmon.EventRecord
Create the EventRecord with no fields other than "invalid" as the hostname.
ExampleDriver - Class in org.apache.hadoop.examples
A description of an example program based on its class and a human-readable description.
ExampleDriver() - Constructor for class org.apache.hadoop.examples.ExampleDriver
 
execCommand(String...) - Static method in class org.apache.hadoop.util.Shell
Static method to execute a shell command.
execCommand(Map<String, String>, String...) - Static method in class org.apache.hadoop.util.Shell
Static method to execute a shell command.
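
A minimal sketch of Shell.execCommand; the command itself is just an example.

    import org.apache.hadoop.util.Shell;

    public class ShellExample {
      public static void main(String[] args) throws Exception {
        // Runs the command and returns everything it wrote to standard output.
        String uname = Shell.execCommand("uname", "-a");
        System.out.println(uname.trim());
      }
    }
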
execute() - Method in class org.apache.hadoop.record.compiler.ant.RccTask
Invoke the Hadoop record compiler on each record definition file
execute() - Method in class org.apache.hadoop.util.Shell.ShellCommandExecutor
Execute the shell command.
Executor - Class in org.apache.hadoop.contrib.failmon
This class executes monitoring jobs on all nodes of the cluster, on which we intend to gather failure metrics.
Executor(Configuration) - Constructor for class org.apache.hadoop.contrib.failmon.Executor
Create an instance of the class and read the configuration file to determine the set of jobs that will be run and the maximum interval for which the thread can sleep before it wakes up to execute a monitoring job on the node.
exists(Path) - Method in class org.apache.hadoop.fs.FileSystem
Check if exists.
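
A minimal sketch combining FileSystem.exists with the recursive form of delete; the path is illustrative.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CleanupExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);        // default filesystem from the configuration
        Path scratch = new Path("/tmp/scratch");     // illustrative path
        if (fs.exists(scratch)) {
          fs.delete(scratch, true);                  // 'true' removes directories recursively
        }
      }
    }
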
exitUsage(boolean) - Method in class org.apache.hadoop.streaming.StreamJob
 
ExpandBuff(boolean) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
expectedTokenSequences - Variable in exception org.apache.hadoop.record.compiler.generated.ParseException
Each entry in this array is an array of integers.
exponentialBackoffRetry(int, long, TimeUnit) - Static method in class org.apache.hadoop.io.retry.RetryPolicies
Keep trying a limited number of times, waiting a growing amount of time between attempts, and then fail by re-throwing the exception.
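
A hedged sketch of combining exponentialBackoffRetry with a retrying proxy via RetryProxy.create; the Lookup interface is hypothetical and the retry count and sleep time are example values.

    import java.util.concurrent.TimeUnit;

    import org.apache.hadoop.io.retry.RetryPolicies;
    import org.apache.hadoop.io.retry.RetryPolicy;
    import org.apache.hadoop.io.retry.RetryProxy;

    public class RetryExample {
      // Lookup is a hypothetical interface; rawImpl is an existing implementation of it.
      public static Lookup wrap(Lookup rawImpl) {
        RetryPolicy policy = RetryPolicies.exponentialBackoffRetry(5, 200, TimeUnit.MILLISECONDS);
        // Calls through the proxy are retried up to 5 times, sleeping longer between attempts.
        return (Lookup) RetryProxy.create(Lookup.class, rawImpl, policy);
      }
    }
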
expunge() - Method in class org.apache.hadoop.fs.Trash
Delete old checkpoints.

F

fail(String) - Method in class org.apache.hadoop.streaming.StreamJob
 
FAILED - Static variable in class org.apache.hadoop.mapred.jobcontrol.Job
 
FAILED - Static variable in class org.apache.hadoop.mapred.JobStatus
 
failedJobs() - Method in class org.apache.hadoop.mapred.JobTracker
 
failJob(JobInProgress) - Method in class org.apache.hadoop.mapred.JobTracker
Fail a job and inform the listeners.
failTask(TaskAttemptID) - Method in class org.apache.hadoop.mapreduce.Job
Fail indicated task attempt.
fatalError(TaskAttemptID, String) - Method in class org.apache.hadoop.mapred.TaskTracker
A child task had a fatal error.
Field() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
FIELD_SEPARATOR - Static variable in class org.apache.hadoop.contrib.failmon.LocalStore
 
FieldSelectionMapReduce<K,V> - Class in org.apache.hadoop.mapred.lib
This class implements a mapper/reducer class that can be used to perform field selections in a manner similar to unix cut.
FieldSelectionMapReduce() - Constructor for class org.apache.hadoop.mapred.lib.FieldSelectionMapReduce
 
FieldTypeInfo - Class in org.apache.hadoop.record.meta
Represents a type information for a field, which is made up of its ID (name) and its type (a TypeID object).
file - Variable in class org.apache.hadoop.fs.FSInputChecker
The file name from which data is read from
FILE_NAME_PROPERTY - Static variable in class org.apache.hadoop.metrics.file.FileContext
 
FILE_TYPES - Static variable in class org.apache.hadoop.fs.s3.INode
 
FileAlreadyExistsException - Exception in org.apache.hadoop.mapred
Used when target file already exists for any operation and is not configured to be overwritten.
FileAlreadyExistsException() - Constructor for exception org.apache.hadoop.mapred.FileAlreadyExistsException
 
FileAlreadyExistsException(String) - Constructor for exception org.apache.hadoop.mapred.FileAlreadyExistsException
 
FileChecksum - Class in org.apache.hadoop.fs
An abstract class representing file checksums for files.
FileChecksum() - Constructor for class org.apache.hadoop.fs.FileChecksum
 
FileContext - Class in org.apache.hadoop.metrics.file
Metrics context for writing metrics to a file.

This class is configured by setting ContextFactory attributes which in turn are usually configured through a properties file.

FileContext() - Constructor for class org.apache.hadoop.metrics.file.FileContext
Creates a new instance of FileContext
fileExists(String) - Method in class org.apache.hadoop.contrib.index.lucene.FileSystemDirectory
 
fileExtension(String) - Method in class org.apache.hadoop.streaming.JarBuilder
 
FileInputFormat<K,V> - Class in org.apache.hadoop.mapred
Deprecated. Use FileInputFormat instead.
FileInputFormat() - Constructor for class org.apache.hadoop.mapred.FileInputFormat
Deprecated.  
FileInputFormat<K,V> - Class in org.apache.hadoop.mapreduce.lib.input
A base class for file-based InputFormats.
FileInputFormat() - Constructor for class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
 
fileLength(String) - Method in class org.apache.hadoop.contrib.index.lucene.FileSystemDirectory
 
fileModified(String) - Method in class org.apache.hadoop.contrib.index.lucene.FileSystemDirectory
 
FileOutputCommitter - Class in org.apache.hadoop.mapred
An OutputCommitter that commits files specified in job output directory i.e.
FileOutputCommitter() - Constructor for class org.apache.hadoop.mapred.FileOutputCommitter
 
FileOutputCommitter - Class in org.apache.hadoop.mapreduce.lib.output
An OutputCommitter that commits files specified in job output directory i.e.
FileOutputCommitter(Path, TaskAttemptContext) - Constructor for class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
Create a file output committer
FileOutputFormat<K,V> - Class in org.apache.hadoop.mapred
A base class for OutputFormat.
FileOutputFormat() - Constructor for class org.apache.hadoop.mapred.FileOutputFormat
 
FileOutputFormat<K,V> - Class in org.apache.hadoop.mapreduce.lib.output
A base class for OutputFormats that read from FileSystems.
FileOutputFormat() - Constructor for class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
 
FileSplit - Class in org.apache.hadoop.mapred
Deprecated. Use FileSplit instead.
FileSplit(Path, long, long, JobConf) - Constructor for class org.apache.hadoop.mapred.FileSplit
Deprecated.  
FileSplit(Path, long, long, String[]) - Constructor for class org.apache.hadoop.mapred.FileSplit
Deprecated. Constructs a split with host information
FileSplit - Class in org.apache.hadoop.mapreduce.lib.input
A section of an input file.
FileSplit(Path, long, long, String[]) - Constructor for class org.apache.hadoop.mapreduce.lib.input.FileSplit
Constructs a split with host information
FileStatus - Class in org.apache.hadoop.fs
Interface that represents the client side information for a file.
FileStatus() - Constructor for class org.apache.hadoop.fs.FileStatus
 
FileStatus(long, boolean, int, long, long, Path) - Constructor for class org.apache.hadoop.fs.FileStatus
 
FileStatus(long, boolean, int, long, long, long, FsPermission, String, String, Path) - Constructor for class org.apache.hadoop.fs.FileStatus
 
FileSystem - Class in org.apache.hadoop.fs
An abstract base class for a fairly generic filesystem.
FileSystem() - Constructor for class org.apache.hadoop.fs.FileSystem
 
FileSystem.Statistics - Class in org.apache.hadoop.fs
 
FileSystem.Statistics(String) - Constructor for class org.apache.hadoop.fs.FileSystem.Statistics
 
FileSystemDirectory - Class in org.apache.hadoop.contrib.index.lucene
This class implements a Lucene Directory on top of a general FileSystem.
FileSystemDirectory(FileSystem, Path, boolean, Configuration) - Constructor for class org.apache.hadoop.contrib.index.lucene.FileSystemDirectory
Constructor
FileSystemStore - Interface in org.apache.hadoop.fs.s3
A facility for storing and retrieving INodes and Blocks.
fileURIs - Variable in class org.apache.hadoop.streaming.StreamJob
 
FileUtil - Class in org.apache.hadoop.fs
A collection of file-processing util methods
FileUtil() - Constructor for class org.apache.hadoop.fs.FileUtil
 
FileUtil.HardLink - Class in org.apache.hadoop.fs
Class for creating hardlinks.
FileUtil.HardLink() - Constructor for class org.apache.hadoop.fs.FileUtil.HardLink
 
FillBuff() - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
fillJoinCollector(K) - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
For all child RRs offering the key provided, obtain an iterator at that position in the JoinCollector.
fillJoinCollector(K) - Method in class org.apache.hadoop.mapred.join.OverrideRecordReader
Instead of filling the JoinCollector with iterators from all data sources, fill only the rightmost for this key.
Filter - Class in org.apache.hadoop.util.bloom
Defines the general behavior of a filter.
Filter() - Constructor for class org.apache.hadoop.util.bloom.Filter
 
Filter(int, int, int) - Constructor for class org.apache.hadoop.util.bloom.Filter
Constructor.
FilterContainer - Interface in org.apache.hadoop.http
A container class for javax.servlet.Filter.
FilterFileSystem - Class in org.apache.hadoop.fs
A FilterFileSystem contains some other file system, which it uses as its basic file system, possibly transforming the data along the way or providing additional functionality.
FilterFileSystem() - Constructor for class org.apache.hadoop.fs.FilterFileSystem
 
FilterFileSystem(FileSystem) - Constructor for class org.apache.hadoop.fs.FilterFileSystem
 
FilterInitializer - Class in org.apache.hadoop.http
Initialize a javax.servlet.Filter.
FilterInitializer() - Constructor for class org.apache.hadoop.http.FilterInitializer
 
filterNames - Variable in class org.apache.hadoop.http.HttpServer
 
finalize() - Method in class org.apache.hadoop.io.compress.bzip2.CBZip2OutputStream
Overridden to close the stream.
finalize() - Method in class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
 
finalKey(WritableComparable) - Method in class org.apache.hadoop.io.MapFile.Reader
Reads the final key from the file.
find(String) - Method in class org.apache.hadoop.io.Text
 
find(String, int) - Method in class org.apache.hadoop.io.Text
Finds any occurrence of what in the backing buffer, starting at position start.
findAll(String, String, int, String) - Method in class org.apache.hadoop.contrib.failmon.ShellParser
Finds all occurrences of a pattern in a piece of text and returns the matching groups.
findByte(byte[], int, int, byte) - Static method in class org.apache.hadoop.streaming.UTF8ByteArrayUtils
Deprecated. use UTF8ByteArrayUtils.findByte(byte[], int, int, byte)
findByte(byte[], int, int, byte) - Static method in class org.apache.hadoop.util.UTF8ByteArrayUtils
Find the first occurrence of the given byte b in a UTF-8 encoded string
findBytes(byte[], int, int, byte[]) - Static method in class org.apache.hadoop.streaming.UTF8ByteArrayUtils
Deprecated. use UTF8ByteArrayUtils.findBytes(byte[], int, int, byte[])
findBytes(byte[], int, int, byte[]) - Static method in class org.apache.hadoop.util.UTF8ByteArrayUtils
Find the first occurrence of the given bytes b in a UTF-8 encoded string
findCounter(Enum) - Method in class org.apache.hadoop.mapred.Counters
Deprecated. Find the counter for the given enum.
findCounter(String, String) - Method in class org.apache.hadoop.mapred.Counters
Deprecated. Find a counter given the group and the name.
findCounter(String, int, String) - Method in class org.apache.hadoop.mapred.Counters
Deprecated.  
findCounter(String, String) - Method in class org.apache.hadoop.mapreduce.CounterGroup
Internal to find a counter in a group.
findCounter(String) - Method in class org.apache.hadoop.mapreduce.CounterGroup
 
findCounter(String, String) - Method in class org.apache.hadoop.mapreduce.Counters
 
findCounter(Enum<?>) - Method in class org.apache.hadoop.mapreduce.Counters
Find the counter for the given enum.
findInClasspath(String) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
findInClasspath(String, ClassLoader) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
findNext(String, char, char, int, StringBuilder) - Static method in class org.apache.hadoop.util.StringUtils
Finds the first occurrence of the separator character ignoring the escaped separators starting from the index.
findNthByte(byte[], int, int, byte, int) - Static method in class org.apache.hadoop.streaming.UTF8ByteArrayUtils
Deprecated. use UTF8ByteArrayUtils.findNthByte(byte[], int, int, byte, int)
findNthByte(byte[], byte, int) - Static method in class org.apache.hadoop.streaming.UTF8ByteArrayUtils
Deprecated. use UTF8ByteArrayUtils.findNthByte(byte[], byte, int)
findNthByte(byte[], int, int, byte, int) - Static method in class org.apache.hadoop.util.UTF8ByteArrayUtils
Find the nth occurrence of the given byte b in a UTF-8 encoded string
findNthByte(byte[], byte, int) - Static method in class org.apache.hadoop.util.UTF8ByteArrayUtils
Find the nth occurrence of the given byte b in a UTF-8 encoded string
findPattern(String, String, int) - Method in class org.apache.hadoop.contrib.failmon.ShellParser
Find the first occurrence of a pattern in a piece of text and return a specific group.
findPort - Variable in class org.apache.hadoop.http.HttpServer
 
findSeparator(byte[], int, int, byte) - Static method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
 
findTab(byte[], int, int) - Static method in class org.apache.hadoop.streaming.StreamKeyValUtil
Find the first occurrence of a tab in a UTF-8 encoded string
findTab(byte[]) - Static method in class org.apache.hadoop.streaming.StreamKeyValUtil
Find the first occurrence of a tab in a UTF-8 encoded string
findTab(byte[], int, int) - Static method in class org.apache.hadoop.streaming.UTF8ByteArrayUtils
Deprecated. use StreamKeyValUtil.findTab(byte[], int, int)
findTab(byte[]) - Static method in class org.apache.hadoop.streaming.UTF8ByteArrayUtils
Deprecated. use StreamKeyValUtil.findTab(byte[])
finish() - Method in class org.apache.hadoop.io.compress.BlockCompressorStream
 
finish() - Method in class org.apache.hadoop.io.compress.bzip2.BZip2DummyCompressor
 
finish() - Method in class org.apache.hadoop.io.compress.bzip2.CBZip2OutputStream
 
finish() - Method in class org.apache.hadoop.io.compress.CompressionOutputStream
Finishes writing compressed data to the output stream without closing the underlying stream.
finish() - Method in interface org.apache.hadoop.io.compress.Compressor
When called, indicates that compression should end with the current contents of the input buffer.
finish() - Method in class org.apache.hadoop.io.compress.CompressorStream
 
finish() - Method in class org.apache.hadoop.io.compress.GzipCodec.GzipOutputStream
 
finish() - Method in class org.apache.hadoop.io.compress.zlib.ZlibCompressor
 
finished() - Method in class org.apache.hadoop.io.compress.bzip2.BZip2DummyCompressor
 
finished() - Method in class org.apache.hadoop.io.compress.bzip2.BZip2DummyDecompressor
 
finished() - Method in interface org.apache.hadoop.io.compress.Compressor
Returns true if the end of the compressed data output stream has been reached.
finished() - Method in interface org.apache.hadoop.io.compress.Decompressor
Returns true if the end of the compressed data output stream has been reached.
finished() - Method in class org.apache.hadoop.io.compress.zlib.ZlibCompressor
 
finished() - Method in class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
 
firstKey() - Method in class org.apache.hadoop.io.SortedMapWritable
fix(FileSystem, Path, Class<? extends Writable>, Class<? extends Writable>, boolean, Configuration) - Static method in class org.apache.hadoop.io.MapFile
This method attempts to fix a corrupt MapFile by re-creating its index.
FLOAT - Static variable in class org.apache.hadoop.record.meta.TypeID.RIOType
 
FLOAT_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
FloatTypeID - Static variable in class org.apache.hadoop.record.meta.TypeID
 
FloatWritable - Class in org.apache.hadoop.io
A WritableComparable for floats.
FloatWritable() - Constructor for class org.apache.hadoop.io.FloatWritable
 
FloatWritable(float) - Constructor for class org.apache.hadoop.io.FloatWritable
 
FloatWritable.Comparator - Class in org.apache.hadoop.io
A Comparator optimized for FloatWritable.
FloatWritable.Comparator() - Constructor for class org.apache.hadoop.io.FloatWritable.Comparator
 
flush() - Method in class org.apache.hadoop.io.compress.bzip2.CBZip2OutputStream
 
flush() - Method in class org.apache.hadoop.io.compress.CompressionOutputStream
 
flush() - Method in class org.apache.hadoop.io.compress.GzipCodec.GzipOutputStream
 
flush() - Method in class org.apache.hadoop.mapred.TaskLogAppender
 
flush() - Method in class org.apache.hadoop.metrics.file.FileContext
Flushes the output writer, forcing updates to disk.
flush() - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Called each period after all records have been emitted, this method does nothing.
flush() - Method in class org.apache.hadoop.metrics.spi.CompositeContext
 
flushBuffer() - Method in class org.apache.hadoop.fs.FSOutputSummer
 
flushBuffer(boolean) - Method in class org.apache.hadoop.fs.FSOutputSummer
 
formatBytes(long) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
formatBytes2(long) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
formatPercent(double, int) - Static method in class org.apache.hadoop.util.StringUtils
Format a percentage for presentation to the user.
formatTime(long) - Static method in class org.apache.hadoop.util.StringUtils
Given the time in long milliseconds, returns a String in the format Xhrs, Ymins, Z sec.
formatTimeDiff(long, long) - Static method in class org.apache.hadoop.util.StringUtils
Given a finish and start time in long milliseconds, returns a String in the format Xhrs, Ymins, Z sec, for the time difference between two times.
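A small sketch combining the StringUtils formatting helpers above; the timestamps and percentage are made up for illustration.

    import org.apache.hadoop.util.StringUtils;

    public class FormatExample {
      public static void main(String[] args) {
        long start = System.currentTimeMillis();
        long finish = start + 3723000L;   // pretend the job ran ~1hr 2min 3sec
        // Difference between the two timestamps in the "Xhrs, Ymins, Zsec" style.
        System.out.println(StringUtils.formatTimeDiff(finish, start));
        // A percentage rendered with two decimal places.
        System.out.println(StringUtils.formatPercent(0.4567, 2));
      }
    }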
forName(String) - Static method in class org.apache.hadoop.mapred.JobID
Deprecated. Construct a JobId object from given string
forName(String) - Static method in class org.apache.hadoop.mapred.TaskAttemptID
Deprecated. Construct a TaskAttemptID object from given string
forName(String) - Static method in class org.apache.hadoop.mapred.TaskID
Deprecated.  
forName(String) - Static method in class org.apache.hadoop.mapreduce.JobID
Construct a JobId object from given string
forName(String) - Static method in class org.apache.hadoop.mapreduce.TaskAttemptID
Construct a TaskAttemptID object from given string
forName(String) - Static method in class org.apache.hadoop.mapreduce.TaskID
Construct a TaskID object from given string
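A hedged sketch of round-tripping identifiers through the forName factories above; the job and attempt strings below are illustrative examples of the usual id layout, not real jobs.

    import org.apache.hadoop.mapreduce.JobID;
    import org.apache.hadoop.mapreduce.TaskAttemptID;

    public class ForNameExample {
      public static void main(String[] args) {
        // Parse stringified ids back into typed objects (illustrative ids).
        JobID job = JobID.forName("job_200707121733_0003");
        TaskAttemptID attempt =
            TaskAttemptID.forName("attempt_200707121733_0003_m_000005_0");
        // The enclosing task id can be recovered from the attempt,
        // and toString() reproduces the canonical string form.
        System.out.println(job + " / " + attempt.getTaskID() + " / " + attempt);
      }
    }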
fourRotations - Static variable in class org.apache.hadoop.examples.dancing.Pentomino
Are all 4 rotations unique?
fromEscapedCompactString(String) - Static method in class org.apache.hadoop.mapred.Counters
Deprecated. Convert a stringified counter representation into a counter object.
fromShort(short) - Method in class org.apache.hadoop.fs.permission.FsPermission
 
fromString(String) - Method in class org.apache.hadoop.io.DefaultStringifier
 
fromString(String) - Method in interface org.apache.hadoop.io.Stringifier
Restores the object from its string representation.
fs - Variable in class org.apache.hadoop.fs.FilterFileSystem
 
fs - Variable in class org.apache.hadoop.fs.FsShell
 
fs - Variable in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
 
FsAction - Enum in org.apache.hadoop.fs.permission
File system actions, e.g.
FSDataInputStream - Class in org.apache.hadoop.fs
Utility that wraps a FSInputStream in a DataInputStream and buffers input through a BufferedInputStream.
FSDataInputStream(InputStream) - Constructor for class org.apache.hadoop.fs.FSDataInputStream
 
FSDataOutputStream - Class in org.apache.hadoop.fs
Utility that wraps a OutputStream in a DataOutputStream, buffers output through a BufferedOutputStream and creates a checksum file.
FSDataOutputStream(OutputStream) - Constructor for class org.apache.hadoop.fs.FSDataOutputStream
Deprecated. 
FSDataOutputStream(OutputStream, FileSystem.Statistics) - Constructor for class org.apache.hadoop.fs.FSDataOutputStream
 
FSDataOutputStream(OutputStream, FileSystem.Statistics, long) - Constructor for class org.apache.hadoop.fs.FSDataOutputStream
 
FSError - Error in org.apache.hadoop.fs
Thrown for unexpected filesystem errors, presumed to reflect disk errors in the native filesystem.
fsError(TaskAttemptID, String) - Method in class org.apache.hadoop.mapred.TaskTracker
A child task had a local filesystem error.
FSInputChecker - Class in org.apache.hadoop.fs
This is a generic input stream for verifying checksums for data before it is read by a user.
FSInputChecker(Path, int) - Constructor for class org.apache.hadoop.fs.FSInputChecker
Constructor
FSInputChecker(Path, int, boolean, Checksum, int, int) - Constructor for class org.apache.hadoop.fs.FSInputChecker
Constructor
FSInputStream - Class in org.apache.hadoop.fs
FSInputStream is a generic old InputStream with a little bit of RAF-style seek ability.
FSInputStream() - Constructor for class org.apache.hadoop.fs.FSInputStream
 
FSOutputSummer - Class in org.apache.hadoop.fs
This is a generic output stream for generating checksums for data before it is written to the underlying stream
FSOutputSummer(Checksum, int, int) - Constructor for class org.apache.hadoop.fs.FSOutputSummer
 
FsPermission - Class in org.apache.hadoop.fs.permission
A class for file/directory permissions.
FsPermission(FsAction, FsAction, FsAction) - Constructor for class org.apache.hadoop.fs.permission.FsPermission
Construct by the given FsAction.
FsPermission(short) - Constructor for class org.apache.hadoop.fs.permission.FsPermission
Construct by the given mode.
FsPermission(FsPermission) - Constructor for class org.apache.hadoop.fs.permission.FsPermission
Copy constructor
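A brief sketch of building an FsPermission from the (user, group, other) FsAction constructor and applying it; the target path is hypothetical.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.permission.FsAction;
    import org.apache.hadoop.fs.permission.FsPermission;

    public class PermissionExample {
      public static void main(String[] args) throws Exception {
        // rwxr-x--- built from the FsAction triple constructor.
        FsPermission perm =
            new FsPermission(FsAction.ALL, FsAction.READ_EXECUTE, FsAction.NONE);
        FileSystem fs = FileSystem.get(new Configuration());
        // Apply it to a hypothetical path.
        fs.setPermission(new Path("/tmp/example-dir"), perm);
      }
    }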
FsShell - Class in org.apache.hadoop.fs
Provide command line access to a FileSystem.
FsShell() - Constructor for class org.apache.hadoop.fs.FsShell
 
FsShell(Configuration) - Constructor for class org.apache.hadoop.fs.FsShell
 
FsUrlStreamHandlerFactory - Class in org.apache.hadoop.fs
Factory for URL stream handlers.
FsUrlStreamHandlerFactory() - Constructor for class org.apache.hadoop.fs.FsUrlStreamHandlerFactory
 
FsUrlStreamHandlerFactory(Configuration) - Constructor for class org.apache.hadoop.fs.FsUrlStreamHandlerFactory
 
FTPException - Exception in org.apache.hadoop.fs.ftp
A class to wrap a Throwable into a Runtime Exception.
FTPException(String) - Constructor for exception org.apache.hadoop.fs.ftp.FTPException
 
FTPException(Throwable) - Constructor for exception org.apache.hadoop.fs.ftp.FTPException
 
FTPException(String, Throwable) - Constructor for exception org.apache.hadoop.fs.ftp.FTPException
 
FTPFileSystem - Class in org.apache.hadoop.fs.ftp
A FileSystem backed by an FTP client provided by Apache Commons Net.
FTPFileSystem() - Constructor for class org.apache.hadoop.fs.ftp.FTPFileSystem
 
FTPInputStream - Class in org.apache.hadoop.fs.ftp
 
FTPInputStream(InputStream, FTPClient, FileSystem.Statistics) - Constructor for class org.apache.hadoop.fs.ftp.FTPInputStream
 
fullyDelete(File) - Static method in class org.apache.hadoop.fs.FileUtil
Delete a directory and all its contents.
fullyDelete(FileSystem, Path) - Static method in class org.apache.hadoop.fs.FileUtil
Deprecated. Use FileSystem.delete(Path, boolean)

G

G_SIZE - Static variable in interface org.apache.hadoop.io.compress.bzip2.BZip2Constants
 
GangliaContext - Class in org.apache.hadoop.metrics.ganglia
Context for sending metrics to Ganglia.
GangliaContext() - Constructor for class org.apache.hadoop.metrics.ganglia.GangliaContext
Creates a new instance of GangliaContext
gcd(int, int) - Static method in class org.apache.hadoop.contrib.failmon.Environment
Determines the greatest common divisor (GCD) of two integers.
gcd(int[]) - Static method in class org.apache.hadoop.contrib.failmon.Environment
Determines the greatest common divisor (GCD) of a list of integers.
genCode(String, String, ArrayList<String>) - Method in class org.apache.hadoop.record.compiler.JFile
Generate record code in given language.
generateActualKey(K, V) - Method in class org.apache.hadoop.mapred.lib.MultipleOutputFormat
Generate the actual key from the given key/value.
generateActualValue(K, V) - Method in class org.apache.hadoop.mapred.lib.MultipleOutputFormat
Generate the actual value from the given key and value.
generateEntry(String, String, Text) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
 
generateFileNameForKeyValue(K, V, String) - Method in class org.apache.hadoop.mapred.lib.MultipleOutputFormat
Generate the file output file name based on the given key and the leaf file name.
generateGroupKey(TaggedMapOutput) - Method in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
Generate a map output key.
generateInputTag(String) - Method in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
Determine the source tag based on the input file name.
generateKeyValPairs(Object, Object) - Method in class org.apache.hadoop.examples.AggregateWordCount.WordCountPlugInClass
 
generateKeyValPairs(Object, Object) - Method in class org.apache.hadoop.examples.AggregateWordHistogram.AggregateWordHistogramPlugin
Parse the given value, generate an aggregation-id/value pair per word.
generateKeyValPairs(Object, Object) - Method in class org.apache.hadoop.mapred.lib.aggregate.UserDefinedValueAggregatorDescriptor
Generate a list of aggregation-id/value pairs for the given key/value pairs by delegating the invocation to the real object.
generateKeyValPairs(Object, Object) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
Generate 1 or 2 aggregation-id/value pairs for the given key/value pair.
generateKeyValPairs(Object, Object) - Method in interface org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorDescriptor
Generate a list of aggregation-id/value pairs for the given key/value pair.
generateLeafFileName(String) - Method in class org.apache.hadoop.mapred.lib.MultipleOutputFormat
Generate the leaf name for the output file name.
generateParseException() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
generateTaggedMapOutput(Object) - Method in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
Generate a tagged map output value.
generateValueAggregator(String) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
 
generationFromSegmentsFileName(String) - Static method in class org.apache.hadoop.contrib.index.lucene.LuceneUtil
Parse the generation off the segments file name and return it.
GenericOptionsParser - Class in org.apache.hadoop.util
GenericOptionsParser is a utility to parse command line arguments generic to the Hadoop framework.
GenericOptionsParser(Options, String[]) - Constructor for class org.apache.hadoop.util.GenericOptionsParser
Create an options parser with the given options to parse the args.
GenericOptionsParser(String[]) - Constructor for class org.apache.hadoop.util.GenericOptionsParser
Create an options parser to parse the args.
GenericOptionsParser(Configuration, String[]) - Constructor for class org.apache.hadoop.util.GenericOptionsParser
Create a GenericOptionsParser to parse only the generic Hadoop arguments.
GenericOptionsParser(Configuration, Options, String[]) - Constructor for class org.apache.hadoop.util.GenericOptionsParser
Create a GenericOptionsParser to parse given options as well as generic Hadoop options.
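A minimal sketch of the usual GenericOptionsParser pattern: let it consume the generic Hadoop flags (-D, -fs, -jt, -files, ...) and keep the rest for the application; getRemainingArgs() is assumed from the class's usual contract.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.util.GenericOptionsParser;

    public class ParseExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Generic options such as -D key=value are folded into conf.
        GenericOptionsParser parser = new GenericOptionsParser(conf, args);
        // Whatever was not a generic option is left for the application itself.
        String[] toolArgs = parser.getRemainingArgs();
        System.out.println("application args: " + toolArgs.length);
      }
    }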
GenericsUtil - Class in org.apache.hadoop.util
Contains utility methods for dealing with Java Generics.
GenericsUtil() - Constructor for class org.apache.hadoop.util.GenericsUtil
 
GenericWritable - Class in org.apache.hadoop.io
A wrapper for Writable instances.
GenericWritable() - Constructor for class org.apache.hadoop.io.GenericWritable
 
get(String) - Method in class org.apache.hadoop.conf.Configuration
Get the value of the name property, null if no such property exists.
get(String, String) - Method in class org.apache.hadoop.conf.Configuration
Get the value of the name property.
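A small sketch of reading configuration values with and without defaults; the property names are invented for illustration.

    import org.apache.hadoop.conf.Configuration;

    public class ConfGetExample {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.set("example.greeting", "hello");          // illustrative property
        // get(name) returns null when the property is absent.
        String missing = conf.get("example.not.set");
        // get(name, default) falls back to the supplied default instead.
        String greeting = conf.get("example.greeting", "hi");
        boolean verbose = conf.getBoolean("example.verbose", false);
        System.out.println(greeting + " " + missing + " " + verbose);
      }
    }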
get(String) - Method in class org.apache.hadoop.contrib.failmon.EventRecord
Get the value of a property of the EventRecord.
get(String) - Method in class org.apache.hadoop.contrib.failmon.SerializedRecord
Get the value of a property of the EventRecord.
get(Configuration) - Static method in class org.apache.hadoop.fs.FileSystem
Returns the configured filesystem implementation.
get(URI, Configuration) - Static method in class org.apache.hadoop.fs.FileSystem
Returns the FileSystem for this URI's scheme and authority.
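A hedged sketch of obtaining FileSystem handles through the two factory methods above; the HDFS URI is a placeholder.

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class FileSystemGetExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // The filesystem named by the configuration's default filesystem setting.
        FileSystem defaultFs = FileSystem.get(conf);
        // A filesystem chosen explicitly by URI scheme and authority (placeholder host).
        FileSystem hdfs =
            FileSystem.get(new URI("hdfs://namenode.example.com:9000/"), conf);
        System.out.println(defaultFs.exists(new Path("/")) + " " + hdfs.getUri());
      }
    }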
get(long, Writable) - Method in class org.apache.hadoop.io.ArrayFile.Reader
Return the nth value in the file.
get() - Method in class org.apache.hadoop.io.ArrayWritable
 
get(WritableComparable, Writable) - Method in class org.apache.hadoop.io.BloomMapFile.Reader
Fast version of the MapFile.Reader.get(WritableComparable, Writable) method.
get() - Method in class org.apache.hadoop.io.BooleanWritable
Returns the value of the BooleanWritable
get() - Method in class org.apache.hadoop.io.BytesWritable
Deprecated. Use BytesWritable.getBytes() instead.
get() - Method in class org.apache.hadoop.io.ByteWritable
Return the value of this ByteWritable.
get() - Method in class org.apache.hadoop.io.DoubleWritable
 
get(BytesWritable, BytesWritable) - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner.Entry
Copy the key and value in one shot into BytesWritables.
get() - Method in class org.apache.hadoop.io.FloatWritable
Return the value of this FloatWritable.
get() - Method in class org.apache.hadoop.io.GenericWritable
Return the wrapped instance.
get() - Method in class org.apache.hadoop.io.IntWritable
Return the value of this IntWritable.
get() - Method in class org.apache.hadoop.io.LongWritable
Return the value of this LongWritable.
get(WritableComparable, Writable) - Method in class org.apache.hadoop.io.MapFile.Reader
Return the value for the named key, or null if none exists.
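A sketch of looking keys up in a MapFile, assuming a map file of Text keys and IntWritable values already exists at a hypothetical path.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.MapFile;
    import org.apache.hadoop.io.Text;

    public class MapFileGetExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // Hypothetical MapFile directory written earlier with Text/IntWritable pairs.
        MapFile.Reader reader = new MapFile.Reader(fs, "/data/wordcounts.map", conf);
        try {
          IntWritable value = new IntWritable();
          // get() fills 'value' and returns it, or returns null if the key is absent.
          if (reader.get(new Text("hadoop"), value) != null) {
            System.out.println("hadoop -> " + value.get());
          }
          // getClosest() returns the nearest matching key and fills the value.
          IntWritable nearest = new IntWritable();
          System.out.println("closest key: " + reader.getClosest(new Text("hadoo"), nearest));
        } finally {
          reader.close();
        }
      }
    }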
get(Object) - Method in class org.apache.hadoop.io.MapWritable
get() - Static method in class org.apache.hadoop.io.NullWritable
Returns the single instance of this class.
get() - Method in class org.apache.hadoop.io.ObjectWritable
Return the instance, or null if none.
get(Text) - Method in class org.apache.hadoop.io.SequenceFile.Metadata
 
get(WritableComparable) - Method in class org.apache.hadoop.io.SetFile.Reader
Read the matching key from a set into key.
get(Object) - Method in class org.apache.hadoop.io.SortedMapWritable
get() - Method in class org.apache.hadoop.io.TwoDArrayWritable
 
get() - Method in class org.apache.hadoop.io.VIntWritable
Return the value of this VIntWritable.
get() - Method in class org.apache.hadoop.io.VLongWritable
Return the value of this VLongWritable.
get(Class<? extends WritableComparable>) - Static method in class org.apache.hadoop.io.WritableComparator
Get a comparator for a WritableComparable implementation.
get() - Static method in class org.apache.hadoop.ipc.Server
Returns the server instance called under or null.
get(int) - Method in class org.apache.hadoop.mapred.join.CompositeInputSplit
Get ith child InputSplit.
get(int) - Method in class org.apache.hadoop.mapred.join.TupleWritable
Get ith Writable from Tuple.
get() - Method in class org.apache.hadoop.metrics.util.MetricsIntValue
Get value
get() - Method in class org.apache.hadoop.metrics.util.MetricsLongValue
Get value
get(String) - Method in class org.apache.hadoop.metrics.util.MetricsRegistry
 
get(DataInput) - Static method in class org.apache.hadoop.record.BinaryRecordInput
Get a thread-local record input for the supplied DataInput.
get(DataOutput) - Static method in class org.apache.hadoop.record.BinaryRecordOutput
Get a thread-local record output for the supplied DataOutput.
get() - Method in class org.apache.hadoop.record.Buffer
Get the data from the Buffer.
get() - Method in class org.apache.hadoop.util.Progress
Returns the overall progress of the root.
getAbsolutePath(String) - Method in class org.apache.hadoop.streaming.PathFinder
Returns the full path name of this file if it is listed in the path
getAccessKey() - Method in class org.apache.hadoop.fs.s3.S3Credentials
 
getAccessTime() - Method in class org.apache.hadoop.fs.FileStatus
Get the access time of the file.
getActions() - Method in class org.apache.hadoop.security.authorize.ConnectionPermission
 
getActiveTrackerNames() - Method in class org.apache.hadoop.mapred.ClusterStatus
Get the names of task trackers in the cluster.
getAddress(Configuration) - Static method in class org.apache.hadoop.mapred.JobTracker
 
getAlgorithmName() - Method in class org.apache.hadoop.fs.FileChecksum
The checksum algorithm name
getAlgorithmName() - Method in class org.apache.hadoop.fs.MD5MD5CRC32FileChecksum
The checksum algorithm name
getAllJobs() - Method in class org.apache.hadoop.mapred.JobClient
Get the jobs that are submitted.
getAllJobs() - Method in class org.apache.hadoop.mapred.JobTracker
 
getAllStaticResolutions() - Static method in class org.apache.hadoop.net.NetUtils
This is used to get all the resolutions that were added using NetUtils.addStaticResolution(String, String).
getAllStatistics() - Static method in class org.apache.hadoop.fs.FileSystem
Return the FileSystem classes that have Statistics
getAllTasks() - Method in class org.apache.hadoop.mapred.JobHistory.JobInfo
Returns all map and reduce tasks.
getApproxChkSumLength(long) - Static method in class org.apache.hadoop.fs.ChecksumFileSystem
 
getArchiveClassPaths(Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Get the archive entries in classpath as an array of Path
getArchiveTimestamps(Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Get the timestamps of the archives
getAssignedJobID() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
 
getAssignedTracker(TaskAttemptID) - Method in class org.apache.hadoop.mapred.JobTracker
Get tracker name for a given task id.
getAttemptsToStartSkipping(Configuration) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
Get the number of Task attempts AFTER which skip mode will be kicked off.
getAttribute(String) - Method in class org.apache.hadoop.http.HttpServer
Get the value in the webapp context.
getAttribute(String) - Method in class org.apache.hadoop.metrics.ContextFactory
Returns the value of the named attribute, or null if there is no attribute of that name.
getAttribute(String) - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Convenience method for subclasses to access factory attributes.
getAttribute(String) - Method in class org.apache.hadoop.metrics.util.MetricsDynamicMBeanBase
 
getAttributeNames() - Method in class org.apache.hadoop.metrics.ContextFactory
Returns the names of all the factory's attributes.
getAttributes(String[]) - Method in class org.apache.hadoop.metrics.util.MetricsDynamicMBeanBase
 
getAttributeTable(String) - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Returns an attribute-value map derived from the factory attributes by finding all factory attributes that begin with contextName.tableName.
getAutoIncrMapperProcCount(Configuration) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
Get the flag which if set to true, SkipBadRecords.COUNTER_MAP_PROCESSED_RECORDS is incremented by MapRunner after invoking the map function.
getAutoIncrReducerProcCount(Configuration) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
Get the flag which if set to true, SkipBadRecords.COUNTER_REDUCE_PROCESSED_GROUPS is incremented by framework after invoking the reduce function.
getAvailable() - Method in class org.apache.hadoop.fs.DF
 
getBasePathInJarOut(String) - Method in class org.apache.hadoop.streaming.JarBuilder
 
getBaseRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.lib.MultipleOutputFormat
 
getBaseRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.lib.MultipleSequenceFileOutputFormat
 
getBaseRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.lib.MultipleTextOutputFormat
 
getBeginColumn() - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
getBeginLine() - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
getBlacklistedTrackerNames() - Method in class org.apache.hadoop.mapred.ClusterStatus
Get the names of blacklisted task trackers in the cluster.
getBlacklistedTrackers() - Method in class org.apache.hadoop.mapred.ClusterStatus
Get the number of blacklisted task trackers in the cluster.
getBlockIndex(BlockLocation[], long) - Method in class org.apache.hadoop.mapred.FileInputFormat
Deprecated.  
getBlockIndex(BlockLocation[], long) - Method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
 
getBlocks() - Method in class org.apache.hadoop.fs.s3.INode
 
getBlockSize() - Method in class org.apache.hadoop.fs.FileStatus
Get the block size of the file.
getBlockSize(Path) - Method in class org.apache.hadoop.fs.FileSystem
Deprecated. Use getFileStatus() instead
getBlockSize() - Method in class org.apache.hadoop.io.compress.bzip2.CBZip2OutputStream
Returns the blocksize parameter specified at construction time.
getBloomFilter() - Method in class org.apache.hadoop.io.BloomMapFile.Reader
Retrieve the Bloom filter used by this instance of the Reader.
getBoolean(String, boolean) - Method in class org.apache.hadoop.conf.Configuration
Get the value of the name property as a boolean.
getBoundAntProperty(String, String) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
getBuildVersion() - Method in class org.apache.hadoop.mapred.JobTracker
 
getBuildVersion() - Static method in class org.apache.hadoop.util.VersionInfo
Returns the buildVersion which includes version, revision, user and date.
getBytes() - Method in class org.apache.hadoop.fs.FileChecksum
The value of the checksum in bytes
getBytes() - Method in class org.apache.hadoop.fs.MD5MD5CRC32FileChecksum
The value of the checksum in bytes
getBytes() - Method in class org.apache.hadoop.io.BinaryComparable
Return representative byte array for this instance.
getBytes() - Method in class org.apache.hadoop.io.BytesWritable
Get the data from the BytesWritable.
getBytes() - Method in class org.apache.hadoop.io.Text
Returns the raw bytes; however, only data up to Text.getLength() is valid.
getBytes() - Method in class org.apache.hadoop.io.UTF8
Deprecated. The raw bytes.
getBytes(String) - Static method in class org.apache.hadoop.io.UTF8
Deprecated. Convert a string to a UTF-8 encoded byte array.
getBytes() - Method in class org.apache.hadoop.util.bloom.Key
 
getBytesPerChecksum() - Method in class org.apache.hadoop.util.DataChecksum
 
getBytesPerSum() - Method in class org.apache.hadoop.fs.ChecksumFileSystem
Return the bytes Per Checksum
getBytesRead() - Method in class org.apache.hadoop.fs.FileSystem.Statistics
Get the total number of bytes read
getBytesRead() - Method in class org.apache.hadoop.io.compress.bzip2.BZip2DummyCompressor
 
getBytesRead() - Method in interface org.apache.hadoop.io.compress.Compressor
Return number of uncompressed bytes input so far.
getBytesRead() - Method in class org.apache.hadoop.io.compress.zlib.ZlibCompressor
Returns the total number of uncompressed bytes input so far.
getBytesRead() - Method in class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
Returns the total number of uncompressed bytes input so far.
getBytesWritten() - Method in class org.apache.hadoop.fs.FileSystem.Statistics
Get the total number of bytes written
getBytesWritten() - Method in class org.apache.hadoop.io.compress.bzip2.BZip2DummyCompressor
 
getBytesWritten() - Method in interface org.apache.hadoop.io.compress.Compressor
Return number of compressed bytes output so far.
getBytesWritten() - Method in class org.apache.hadoop.io.compress.zlib.ZlibCompressor
Returns the total number of compressed bytes output so far.
getBytesWritten() - Method in class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
Returns the total number of compressed bytes output so far.
getCacheArchives(Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Get cache archives set in the Configuration
getCacheFiles(Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Get cache files set in the Configuration
getCallQueueLen() - Method in interface org.apache.hadoop.ipc.metrics.RpcMgtMBean
The number of rpc calls in the queue.
getCallQueueLen() - Method in class org.apache.hadoop.ipc.Server
The number of rpc calls in the queue.
getCapacity() - Method in class org.apache.hadoop.fs.DF
 
getCapacity() - Method in class org.apache.hadoop.io.BytesWritable
Get the capacity, which is the maximum size that could be handled without resizing the backing storage.
getCapacity() - Method in class org.apache.hadoop.record.Buffer
Get the capacity, which is the maximum count that could be handled without resizing the backing storage.
getCategory(List<List<Pentomino.ColumnName>>) - Method in class org.apache.hadoop.examples.dancing.Pentomino
Find whether the solution has the x in the upper left quadrant, the x-midline, the y-midline or in the center.
getChannel() - Method in class org.apache.hadoop.net.SocketInputStream
Returns underlying channel used by inputstream.
getChannel() - Method in class org.apache.hadoop.net.SocketOutputStream
Returns underlying channel used by this stream.
getChecksumFile(Path) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
Return the name of the checksum file associated with a file.
getChecksumFileLength(Path, long) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
Return the length of the checksum file given the size of the actual file.
getChecksumHeaderSize() - Static method in class org.apache.hadoop.util.DataChecksum
 
getChecksumLength(long, int) - Static method in class org.apache.hadoop.fs.ChecksumFileSystem
Calculates the length of the checksum file in bytes.
getChecksumSize() - Method in class org.apache.hadoop.util.DataChecksum
 
getChecksumType() - Method in class org.apache.hadoop.util.DataChecksum
 
getChunkPosition(long) - Method in class org.apache.hadoop.fs.FSInputChecker
Return position of beginning of chunk containing pos.
getClass(String, Class<?>) - Method in class org.apache.hadoop.conf.Configuration
Get the value of the name property as a Class.
getClass(String, Class<? extends U>, Class<U>) - Method in class org.apache.hadoop.conf.Configuration
Get the value of the name property as a Class implementing the interface specified by xface.
getClass(byte) - Method in class org.apache.hadoop.io.AbstractMapWritable
 
getClass(String, Configuration) - Static method in class org.apache.hadoop.io.WritableName
Return the class for a name.
getClass(T) - Static method in class org.apache.hadoop.util.GenericsUtil
Returns the Class object (of type Class<T>) of the argument of type T.
getClass(T) - Static method in class org.apache.hadoop.util.ReflectionUtils
Return the correctly-typed Class of the given object.
getClassByName(String) - Method in class org.apache.hadoop.conf.Configuration
Load a class by name.
getClassByName(String) - Static method in class org.apache.hadoop.contrib.utils.join.DataJoinJob
 
getClasses(String, Class<?>...) - Method in class org.apache.hadoop.conf.Configuration
Get the value of the name property as an array of Class.
getClassLoader() - Method in class org.apache.hadoop.conf.Configuration
Get the ClassLoader for this job.
getClassName() - Method in exception org.apache.hadoop.ipc.RemoteException
 
getCleanupTaskReports(JobID) - Method in class org.apache.hadoop.mapred.JobClient
Get the information of the current state of the cleanup tasks of a job.
getCleanupTaskReports(JobID) - Method in class org.apache.hadoop.mapred.JobTracker
 
getClientVersion() - Method in exception org.apache.hadoop.ipc.RPC.VersionMismatch
Get the client's preferred version
getClosest(WritableComparable, Writable) - Method in class org.apache.hadoop.io.MapFile.Reader
Finds the record that is the closest match to the specified key.
getClosest(WritableComparable, Writable, boolean) - Method in class org.apache.hadoop.io.MapFile.Reader
Finds the record that is the closest match to the specified key.
getClusterNick() - Method in class org.apache.hadoop.streaming.StreamJob
Deprecated. 
getClusterStatus() - Method in class org.apache.hadoop.mapred.JobClient
Get status information about the Map-Reduce cluster.
getClusterStatus(boolean) - Method in class org.apache.hadoop.mapred.JobClient
Get status information about the Map-Reduce cluster.
getClusterStatus() - Method in class org.apache.hadoop.mapred.JobTracker
Deprecated. use JobTracker.getClusterStatus(boolean)
getClusterStatus(boolean) - Method in class org.apache.hadoop.mapred.JobTracker
 
getCodec(Path) - Method in class org.apache.hadoop.io.compress.CompressionCodecFactory
Find the relevant compression codec for the given file based on its filename suffix.
getCodecClasses(Configuration) - Static method in class org.apache.hadoop.io.compress.CompressionCodecFactory
Get the list of codecs listed in the configuration
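A sketch of picking a codec by filename suffix via CompressionCodecFactory and reading through it; the input path is hypothetical.

    import java.io.BufferedReader;
    import java.io.InputStream;
    import java.io.InputStreamReader;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.CompressionCodecFactory;

    public class CodecLookupExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path input = new Path("/logs/events.gz");            // hypothetical input
        CompressionCodecFactory factory = new CompressionCodecFactory(conf);
        // Null means no codec matched the filename suffix; read the stream as-is.
        CompressionCodec codec = factory.getCodec(input);
        InputStream in = (codec == null)
            ? fs.open(input)
            : codec.createInputStream(fs.open(input));
        BufferedReader reader = new BufferedReader(new InputStreamReader(in));
        System.out.println(reader.readLine());
        reader.close();
      }
    }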
getCollector(String, Reporter) - Method in class org.apache.hadoop.mapred.lib.MultipleOutputs
Gets the output collector for a named output.
getCollector(String, String, Reporter) - Method in class org.apache.hadoop.mapred.lib.MultipleOutputs
Gets the output collector for a multi named output.
getColumnName(int) - Method in class org.apache.hadoop.examples.dancing.DancingLinks
Get the name of a given column as a string
getCombinerClass() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Get the user-defined combiner class used to combine map-outputs before being sent to the reducers.
getCombinerClass() - Method in class org.apache.hadoop.mapreduce.JobContext
Get the combiner class for the job.
getCombinerOutput() - Method in class org.apache.hadoop.mapred.lib.aggregate.DoubleValueSum
 
getCombinerOutput() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMax
 
getCombinerOutput() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMin
 
getCombinerOutput() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueSum
 
getCombinerOutput() - Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMax
 
getCombinerOutput() - Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMin
 
getCombinerOutput() - Method in class org.apache.hadoop.mapred.lib.aggregate.UniqValueCount
 
getCombinerOutput() - Method in interface org.apache.hadoop.mapred.lib.aggregate.ValueAggregator
 
getCombinerOutput() - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueHistogram
 
getCommandLine() - Method in class org.apache.hadoop.util.GenericOptionsParser
Returns the commons-cli CommandLine object to process the parsed arguments.
getCommandName() - Method in class org.apache.hadoop.fs.shell.Command
Return the command's name excluding the leading character -
getCommandName() - Method in class org.apache.hadoop.fs.shell.Count
 
getComparator() - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader
Get an instance of the RawComparator that is constructed based on the string comparator representation.
getComparator() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
Return comparator defining the ordering for RecordReaders in this composite.
getComparatorName() - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader
Get the string representation of the comparator.
getCompressedData() - Method in class org.apache.hadoop.io.compress.BlockDecompressorStream
 
getCompressedData() - Method in class org.apache.hadoop.io.compress.DecompressorStream
 
getCompressionCodec() - Method in class org.apache.hadoop.io.SequenceFile.Reader
Returns the compression codec of data in this file.
getCompressionCodec() - Method in class org.apache.hadoop.io.SequenceFile.Writer
Returns the compression codec of data in this file.
getCompressionType(Configuration) - Static method in class org.apache.hadoop.io.SequenceFile
Deprecated. Use SequenceFileOutputFormat.getOutputCompressionType(org.apache.hadoop.mapred.JobConf) to get SequenceFile.CompressionType for job-outputs.
getCompressMapOutput() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Should the outputs of the maps be compressed?
getCompressor(CompressionCodec) - Static method in class org.apache.hadoop.io.compress.CodecPool
Get a Compressor for the given CompressionCodec from the pool or a new one.
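A sketch of borrowing a Compressor from CodecPool and returning it when done; the codec choice and output path are illustrative.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.compress.CodecPool;
    import org.apache.hadoop.io.compress.CompressionOutputStream;
    import org.apache.hadoop.io.compress.Compressor;
    import org.apache.hadoop.io.compress.DefaultCodec;
    import org.apache.hadoop.util.ReflectionUtils;

    public class CodecPoolExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        DefaultCodec codec = ReflectionUtils.newInstance(DefaultCodec.class, conf);
        // Borrow a Compressor rather than allocating a fresh (often native) one.
        Compressor compressor = CodecPool.getCompressor(codec);
        try {
          CompressionOutputStream out =
              codec.createOutputStream(fs.create(new Path("/tmp/out.deflate")), compressor);
          out.write("hello".getBytes("UTF-8"));
          out.finish();   // flush compressed data without closing the underlying stream
          out.close();
        } finally {
          CodecPool.returnCompressor(compressor);   // hand the compressor back to the pool
        }
      }
    }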
getCompressorType() - Method in class org.apache.hadoop.io.compress.BZip2Codec
This functionality is currently not supported.
getCompressorType() - Method in interface org.apache.hadoop.io.compress.CompressionCodec
Get the type of Compressor needed by this CompressionCodec.
getCompressorType() - Method in class org.apache.hadoop.io.compress.DefaultCodec
 
getCompressorType() - Method in class org.apache.hadoop.io.compress.GzipCodec
 
getCompressOutput(JobConf) - Static method in class org.apache.hadoop.mapred.FileOutputFormat
Is the job output compressed?
getCompressOutput(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
Is the job output compressed?
getConf() - Method in interface org.apache.hadoop.conf.Configurable
Return the configuration used by this object.
getConf() - Method in class org.apache.hadoop.conf.Configured
 
getConf() - Method in class org.apache.hadoop.fs.FilterFileSystem
 
getConf() - Method in class org.apache.hadoop.io.AbstractMapWritable
 
getConf() - Method in class org.apache.hadoop.io.compress.DefaultCodec
 
getConf() - Method in class org.apache.hadoop.io.GenericWritable
 
getConf() - Method in class org.apache.hadoop.io.ObjectWritable
 
getConf() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
Return the configuration used by this object.
getConf() - Method in class org.apache.hadoop.mapred.lib.InputSampler
 
getConf() - Method in class org.apache.hadoop.mapred.SequenceFileInputFilter.FilterBase
 
getConf() - Method in class org.apache.hadoop.net.ScriptBasedMapping
 
getConf() - Method in class org.apache.hadoop.net.SocksSocketFactory
 
getConf() - Method in class org.apache.hadoop.security.authorize.ConfiguredPolicy
 
getConf() - Method in class org.apache.hadoop.streaming.StreamJob
 
getConfiguration() - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
Get the underlying configuration object.
getConfiguration() - Method in class org.apache.hadoop.mapreduce.JobContext
Return the configuration for the job.
getConfiguration() - Method in class org.apache.hadoop.util.GenericOptionsParser
Get the modified configuration
getConfResourceAsInputStream(String) - Method in class org.apache.hadoop.conf.Configuration
Get an input stream attached to the configuration resource with the given name.
getConfResourceAsReader(String) - Method in class org.apache.hadoop.conf.Configuration
Get a Reader attached to the configuration resource with the given name.
getConnectAddress(Server) - Static method in class org.apache.hadoop.net.NetUtils
Returns InetSocketAddress that a client can use to connect to the server.
getContentSummary(Path) - Method in class org.apache.hadoop.fs.FileSystem
Return the ContentSummary of a given Path.
getContext(String, String) - Method in class org.apache.hadoop.metrics.ContextFactory
Returns the named MetricsContext instance, constructing it if necessary using the factory's current configuration attributes.
getContext(String) - Method in class org.apache.hadoop.metrics.ContextFactory
 
getContext(String) - Static method in class org.apache.hadoop.metrics.MetricsUtil
 
getContext(String, String) - Static method in class org.apache.hadoop.metrics.MetricsUtil
Utility method to return the named context.
getContext() - Method in class org.apache.hadoop.streaming.PipeMapRed
 
getContextFactory() - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Returns the factory by which this context was created.
getContextName() - Method in interface org.apache.hadoop.metrics.MetricsContext
Returns the context name.
getContextName() - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Returns the context name.
getCount() - Method in class org.apache.hadoop.record.Buffer
Get the current count of the buffer.
getCounter() - Method in class org.apache.hadoop.mapred.Counters.Counter
Deprecated. What is the current value of this counter?
getCounter(Enum) - Method in class org.apache.hadoop.mapred.Counters
Deprecated. Returns current value of the specified counter, or 0 if the counter does not exist.
getCounter(String) - Method in class org.apache.hadoop.mapred.Counters.Group
Deprecated. Returns the value of the specified counter, or 0 if the counter does not exist.
getCounter(int, String) - Method in class org.apache.hadoop.mapred.Counters.Group
Deprecated. use Counters.Group.getCounter(String) instead
getCounter(Enum<?>) - Method in interface org.apache.hadoop.mapred.Reporter
Get the Counters.Counter of the given group with the given name.
getCounter(String, String) - Method in interface org.apache.hadoop.mapred.Reporter
Get the Counters.Counter of the given group with the given name.
getCounter(Enum<?>) - Method in class org.apache.hadoop.mapreduce.StatusReporter
 
getCounter(String, String) - Method in class org.apache.hadoop.mapreduce.StatusReporter
 
getCounter(Enum<?>) - Method in class org.apache.hadoop.mapreduce.TaskInputOutputContext
 
getCounter(String, String) - Method in class org.apache.hadoop.mapreduce.TaskInputOutputContext
 
getCounterForName(String) - Method in class org.apache.hadoop.mapred.Counters.Group
Deprecated. Get the counter for the given name and create it if it doesn't exist.
getCounters() - Method in interface org.apache.hadoop.mapred.RunningJob
Gets the counters for this job.
getCounters() - Method in class org.apache.hadoop.mapred.TaskReport
A table of counters.
getCounters() - Method in class org.apache.hadoop.mapreduce.Job
Gets the counters for this job.
getCountersEnabled(JobConf) - Static method in class org.apache.hadoop.mapred.lib.MultipleOutputs
Returns if the counters for the named outputs are enabled or not.
getCountQuery() - Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat
Returns the query for getting the total number of rows, subclasses can override this for custom behaviour.
getCumulativeVmem() - Method in class org.apache.hadoop.util.ProcfsBasedProcessTree
Get the cumulative virtual memory used by all the processes in the process-tree.
getCumulativeVmem(int) - Method in class org.apache.hadoop.util.ProcfsBasedProcessTree
Get the cumulative virtual memory used by all the processes in the process-tree that are older than the passed in age.
getCurrentIntervalValue() - Method in class org.apache.hadoop.metrics.util.MetricsTimeVaryingInt
The Value at the current interval
getCurrentIntervalValue() - Method in class org.apache.hadoop.metrics.util.MetricsTimeVaryingLong
The Value at the current interval
getCurrentKey() - Method in class org.apache.hadoop.mapreduce.lib.input.LineRecordReader
 
getCurrentKey() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader
 
getCurrentKey() - Method in class org.apache.hadoop.mapreduce.MapContext
 
getCurrentKey() - Method in class org.apache.hadoop.mapreduce.RecordReader
Get the current key
getCurrentKey() - Method in class org.apache.hadoop.mapreduce.ReduceContext
 
getCurrentKey() - Method in class org.apache.hadoop.mapreduce.TaskInputOutputContext
Get the current key.
getCurrentSegmentGeneration(Directory) - Static method in class org.apache.hadoop.contrib.index.lucene.LuceneUtil
Get the generation (N) of the current segments_N file in the directory.
getCurrentSegmentGeneration(String[]) - Static method in class org.apache.hadoop.contrib.index.lucene.LuceneUtil
Get the generation (N) of the current segments_N file from a list of files.
getCurrentSplit(JobConf) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
getCurrentStatus() - Method in class org.apache.hadoop.mapred.TaskReport
The current status
getCurrentTrashDir() - Method in class org.apache.hadoop.fs.FsShell
Returns the Trash object associated with this shell.
getCurrentUGI() - Static method in class org.apache.hadoop.security.UserGroupInformation
 
getCurrentValue(Writable) - Method in class org.apache.hadoop.io.SequenceFile.Reader
Get the 'value' corresponding to the last read 'key'.
getCurrentValue(Object) - Method in class org.apache.hadoop.io.SequenceFile.Reader
Get the 'value' corresponding to the last read 'key'.
getCurrentValue(V) - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
 
getCurrentValue() - Method in class org.apache.hadoop.mapreduce.lib.input.LineRecordReader
 
getCurrentValue() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader
 
getCurrentValue() - Method in class org.apache.hadoop.mapreduce.MapContext
 
getCurrentValue() - Method in class org.apache.hadoop.mapreduce.RecordReader
Get the current value.
getCurrentValue() - Method in class org.apache.hadoop.mapreduce.ReduceContext
 
getCurrentValue() - Method in class org.apache.hadoop.mapreduce.TaskInputOutputContext
Get the current value.
getData() - Method in class org.apache.hadoop.contrib.utils.join.TaggedMapOutput
 
getData() - Method in class org.apache.hadoop.io.DataInputBuffer
 
getData() - Method in class org.apache.hadoop.io.DataOutputBuffer
Returns the current contents of the buffer.
getData() - Method in class org.apache.hadoop.io.OutputBuffer
Returns the current contents of the buffer.
getDate() - Static method in class org.apache.hadoop.util.VersionInfo
The date that Hadoop was compiled.
getDeclaredClass() - Method in class org.apache.hadoop.io.ObjectWritable
Return the class this is meant to be.
getDecompressor(CompressionCodec) - Static method in class org.apache.hadoop.io.compress.CodecPool
Get a Decompressor for the given CompressionCodec from the pool or a new one.
getDecompressorType() - Method in class org.apache.hadoop.io.compress.BZip2Codec
This functionality is currently not supported.
getDecompressorType() - Method in interface org.apache.hadoop.io.compress.CompressionCodec
Get the type of Decompressor needed by this CompressionCodec.
getDecompressorType() - Method in class org.apache.hadoop.io.compress.DefaultCodec
 
getDecompressorType() - Method in class org.apache.hadoop.io.compress.GzipCodec
 
getDefault() - Static method in class org.apache.hadoop.fs.permission.FsPermission
Get the default permission.
getDefaultBlockSize() - Method in class org.apache.hadoop.fs.FileSystem
Return the number of bytes that large input files should optimally be split into to minimize I/O time.
getDefaultBlockSize() - Method in class org.apache.hadoop.fs.FilterFileSystem
Return the number of bytes that large input files should optimally be split into to minimize I/O time.
getDefaultBlockSize() - Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
 
getDefaultExtension() - Method in class org.apache.hadoop.io.compress.BZip2Codec
.bz2 is recognized as the default extension for compressed BZip2 files
getDefaultExtension() - Method in interface org.apache.hadoop.io.compress.CompressionCodec
Get the default filename extension for this kind of compression.
getDefaultExtension() - Method in class org.apache.hadoop.io.compress.DefaultCodec
 
getDefaultExtension() - Method in class org.apache.hadoop.io.compress.GzipCodec
 
getDefaultHost(String, String) - Static method in class org.apache.hadoop.net.DNS
Returns the default (first) host name associated by the provided nameserver with the address bound to the specified network interface
getDefaultHost(String) - Static method in class org.apache.hadoop.net.DNS
Returns the default (first) host name associated by the default nameserver with the address bound to the specified network interface
getDefaultIP(String) - Static method in class org.apache.hadoop.net.DNS
Returns the first available IP address associated with the provided network interface
getDefaultMaps() - Method in class org.apache.hadoop.mapred.JobClient
Get status information about the max available Maps in the cluster.
getDefaultReduces() - Method in class org.apache.hadoop.mapred.JobClient
Get status information about the max available Reduces in the cluster.
getDefaultReplication() - Method in class org.apache.hadoop.fs.FileSystem
Get the default replication.
getDefaultReplication() - Method in class org.apache.hadoop.fs.FilterFileSystem
Get the default replication.
getDefaultReplication() - Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
 
getDefaultSocketFactory(Configuration) - Static method in class org.apache.hadoop.net.NetUtils
Get the default socket factory as specified by the configuration parameter hadoop.rpc.socket.factory.default
getDefaultUri(Configuration) - Static method in class org.apache.hadoop.fs.FileSystem
Get the default filesystem URI from a configuration.
getDefaultWorkFile(TaskAttemptContext, String) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
Get the default path and filename for the output format.
getDelegate() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
Obtain an iterator over the child RRs apropos of the value type ultimately emitted from this join.
getDelegate() - Method in class org.apache.hadoop.mapred.join.JoinRecordReader
Return an iterator wrapping the JoinCollector.
getDelegate() - Method in class org.apache.hadoop.mapred.join.MultiFilterRecordReader
Return an iterator returning a single value from the tuple.
getDependingJobs() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
 
getDescription() - Method in class org.apache.hadoop.metrics.util.MetricsBase
 
getDeserializer(Class<Serializable>) - Method in class org.apache.hadoop.io.serializer.JavaSerialization
 
getDeserializer(Class<T>) - Method in interface org.apache.hadoop.io.serializer.Serialization
 
getDeserializer(Class<T>) - Method in class org.apache.hadoop.io.serializer.SerializationFactory
 
getDeserializer(Class<Writable>) - Method in class org.apache.hadoop.io.serializer.WritableSerialization
 
getDiagnostics() - Method in class org.apache.hadoop.mapred.TaskReport
A list of error messages.
getDigest() - Method in class org.apache.hadoop.io.MD5Hash
Returns the digest bytes.
getDirectory() - Method in class org.apache.hadoop.contrib.index.mapred.IntermediateForm
Get the ram directory of the intermediate form.
getDirectory() - Method in class org.apache.hadoop.contrib.index.mapred.Shard
Get the directory where this shard resides.
getDirectoryCount() - Method in class org.apache.hadoop.fs.ContentSummary
 
getDirPath() - Method in class org.apache.hadoop.fs.DF
 
getDirPath() - Method in class org.apache.hadoop.fs.DU
 
getDisplayName() - Method in class org.apache.hadoop.mapred.Counters.Group
Deprecated. Returns localized name of the group.
getDisplayName() - Method in class org.apache.hadoop.mapreduce.Counter
Get the name of the counter.
getDisplayName() - Method in class org.apache.hadoop.mapreduce.CounterGroup
Get the display name of the group.
getDistance(Node, Node) - Method in class org.apache.hadoop.net.NetworkTopology
Return the distance between two nodes. It is assumed that the distance from a node to its parent is 1. The distance between two nodes is calculated by summing their distances to their closest common ancestor.
getDistributionPolicyClass() - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
Get the distribution policy class.
getDocument() - Method in class org.apache.hadoop.contrib.index.mapred.DocumentAndOp
Get the document.
getDocumentAnalyzerClass() - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
Get the analyzer class.
getDoubleValue(Object) - Method in class org.apache.hadoop.contrib.utils.join.JobBase
 
getDU(File) - Static method in class org.apache.hadoop.fs.FileUtil
Takes an input dir and returns the du on that local directory.
getElementTypeID() - Method in class org.apache.hadoop.record.meta.VectorTypeID
 
getEmptier() - Method in class org.apache.hadoop.fs.Trash
Return a Runnable that periodically empties the trash of all users, intended to be run by the superuser.
getEnd() - Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat.DBInputSplit
 
getEndColumn() - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
getEndLine() - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
getEntry(MapFile.Reader[], Partitioner<K, V>, K, V) - Static method in class org.apache.hadoop.mapred.MapFileOutputFormat
Get an entry from output generated by this class.
getEntryComparator() - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader
Get a Comparator object to compare Entries.
getEntryCount() - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader
Get the number of key-value pair entries in TFile.
getError() - Static method in class org.apache.hadoop.metrics.jvm.EventCounter
 
getEventId() - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
Returns event Id.
getExceptions() - Method in exception org.apache.hadoop.io.MultipleIOException
 
getExcludedHosts() - Method in class org.apache.hadoop.util.HostsFileReader
 
getExecString() - Method in class org.apache.hadoop.fs.DF
 
getExecString() - Method in class org.apache.hadoop.fs.DU
 
getExecString() - Method in class org.apache.hadoop.util.Shell
return an array containing the command name & its parameters
getExecString() - Method in class org.apache.hadoop.util.Shell.ShellCommandExecutor
 
getExecutable(JobConf) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
Get the URI of the application's executable.
getExitCode() - Method in exception org.apache.hadoop.util.Shell.ExitCodeException
 
getExitCode() - Method in class org.apache.hadoop.util.Shell
get the exit code
getFactor() - Method in class org.apache.hadoop.io.SequenceFile.Sorter
Get the number of streams to merge at once.
getFactory(Class) - Static method in class org.apache.hadoop.io.WritableFactories
Define a factory for a class.
getFactory() - Static method in class org.apache.hadoop.metrics.ContextFactory
Returns the singleton ContextFactory instance, constructing it if necessary.
getFailedJobs() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
 
getFatal() - Static method in class org.apache.hadoop.metrics.jvm.EventCounter
 
getFieldID() - Method in class org.apache.hadoop.record.meta.FieldTypeInfo
get the field's id (name)
getFieldTypeInfos() - Method in class org.apache.hadoop.record.meta.RecordTypeInfo
Return a collection of field type infos
getFieldTypeInfos() - Method in class org.apache.hadoop.record.meta.StructTypeID
 
getFile(String, String) - Method in class org.apache.hadoop.conf.Configuration
Get a local file name under a directory named in dirsProp with the given path.
getFileBlockLocations(FileStatus, long, long) - Method in class org.apache.hadoop.fs.FileSystem
Return an array containing hostnames, offset and size of portions of the given file.
getFileBlockLocations(FileStatus, long, long) - Method in class org.apache.hadoop.fs.FilterFileSystem
 
getFileBlockLocations(FileStatus, long, long) - Method in class org.apache.hadoop.fs.HarFileSystem
get block locations from the underlying fs
getFileBlockLocations(FileStatus, long, long) - Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
Return null if the file doesn't exist; otherwise, get the locations of the various chunks of the file from KFS.
getFileChecksum(Path) - Method in class org.apache.hadoop.fs.FileSystem
Get the checksum of a file.
getFileChecksum(Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
Get the checksum of a file.
getFileClassPaths(Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Get the file entries in classpath as an array of Path
getFileCount() - Method in class org.apache.hadoop.fs.ContentSummary
 
getFileName() - Method in class org.apache.hadoop.metrics.file.FileContext
Returns the configured file name, or null.
getFiles(PathFilter) - Method in class org.apache.hadoop.fs.InMemoryFileSystem
Deprecated.  
getFileStatus(Path) - Method in class org.apache.hadoop.fs.FileSystem
Return a file status object that represents the path.
getFileStatus(Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
Get file status.
getFileStatus(Path) - Method in class org.apache.hadoop.fs.ftp.FTPFileSystem
 
getFileStatus(Path) - Method in class org.apache.hadoop.fs.HarFileSystem
return the filestatus of files in har archive.
getFileStatus(Path) - Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
 
getFileStatus(Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
getFileStatus(Path) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
FileStatus for S3 file systems.
getFileStatus(Path) - Method in class org.apache.hadoop.fs.s3native.NativeS3FileSystem
 
getFilesystem() - Method in class org.apache.hadoop.fs.DF
 
getFileSystem(Configuration) - Method in class org.apache.hadoop.fs.Path
Return the FileSystem that owns this Path.
getFilesystemName() - Method in class org.apache.hadoop.mapred.JobTracker
Grab the local fs name
getFileTimestamps(Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Get the timestamps of the files
getFileType() - Method in class org.apache.hadoop.fs.s3.INode
 
getFinalSync(JobConf) - Static method in class org.apache.hadoop.examples.terasort.TeraOutputFormat
Does the user want a final sync at close?
getFinishTime() - Method in class org.apache.hadoop.mapred.TaskReport
Get finish time of task.
getFirst() - Method in class org.apache.hadoop.examples.SecondarySort.IntPair
 
getFirstKey() - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader
Get the first key in the TFile.
getFlippable() - Method in class org.apache.hadoop.examples.dancing.Pentomino.Piece
 
getFloat(String, float) - Method in class org.apache.hadoop.conf.Configuration
Get the value of the name property as a float.
getFormatMinSplitSize() - Method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
Get the lower bound on split size imposed by the format.
getFormatMinSplitSize() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat
 
getFormattedTimeWithDiff(DateFormat, long, long) - Static method in class org.apache.hadoop.util.StringUtils
Formats time in ms and appends difference (finishTime - startTime) as returned by formatTimeDiff().
getFs() - Method in class org.apache.hadoop.mapred.JobClient
Get a filesystem handle.
getFSSize() - Method in class org.apache.hadoop.fs.InMemoryFileSystem
Deprecated.  
getGeneration() - Method in class org.apache.hadoop.contrib.index.mapred.Shard
Get the generation of the Lucene instance.
getGET_PERMISSION_COMMAND() - Static method in class org.apache.hadoop.util.Shell
Return a Unix command to get permission information.
getGroup() - Method in class org.apache.hadoop.fs.FileStatus
Get the group associated with the file.
getGroup(String) - Method in class org.apache.hadoop.mapred.Counters
Deprecated. Returns the named counter group, or an empty group if there is none with the specified name.
getGroup(String) - Method in class org.apache.hadoop.mapreduce.Counters
Returns the named counter group, or an empty group if there is none with the specified name.
getGroupAction() - Method in class org.apache.hadoop.fs.permission.FsPermission
Return group FsAction.
getGroupingComparator() - Method in class org.apache.hadoop.mapreduce.JobContext
Get the user defined RawComparator comparator for grouping keys of inputs to the reduce.
getGroupName() - Method in class org.apache.hadoop.fs.permission.PermissionStatus
Return group name
getGroupNames() - Method in class org.apache.hadoop.mapred.Counters
Deprecated. Returns the names of all counter classes.
getGroupNames() - Method in class org.apache.hadoop.mapreduce.Counters
Returns the names of all counter classes.
getGroupNames() - Method in class org.apache.hadoop.security.UnixUserGroupInformation
Return an array of group names
getGroupNames() - Method in class org.apache.hadoop.security.UserGroupInformation
Get the names of the groups that the user belongs to.
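A small sketch of reading the caller's user and group names through the security entries above, assuming the 0.20-era static getCurrentUGI() accessor.

    import org.apache.hadoop.security.UserGroupInformation;

    public class WhoAmIExample {
      public static void main(String[] args) {
        // The UGI of the current caller, as set up by the framework
        // (may be null outside a configured Hadoop environment).
        UserGroupInformation ugi = UserGroupInformation.getCurrentUGI();
        if (ugi != null) {
          System.out.println("user:  " + ugi.getUserName());
          for (String group : ugi.getGroupNames()) {
            System.out.println("group: " + group);
          }
        }
      }
    }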
getGroups() - Method in class org.apache.hadoop.security.SecurityUtil.AccessControlList
 
getGROUPS_COMMAND() - Static method in class org.apache.hadoop.util.Shell
a Unix command to get the current user's groups list
getHadoopClientHome() - Method in class org.apache.hadoop.streaming.StreamJob
 
getHarHash(Path) - Static method in class org.apache.hadoop.fs.HarFileSystem
The hash of the path p inside the filesystem
getHarVersion() - Method in class org.apache.hadoop.fs.HarFileSystem
 
getHashType(Configuration) - Static method in class org.apache.hadoop.util.hash.Hash
This utility method converts the name of the configured hash type to a symbolic constant.
getHeader(boolean) - Static method in class org.apache.hadoop.fs.ContentSummary
Return the header of the output.
getHeader() - Method in class org.apache.hadoop.util.DataChecksum
 
getHomeDirectory() - Method in class org.apache.hadoop.fs.FileSystem
Return the current user's home directory in this filesystem.
getHomeDirectory() - Method in class org.apache.hadoop.fs.FilterFileSystem
 
getHomeDirectory() - Method in class org.apache.hadoop.fs.ftp.FTPFileSystem
 
getHomeDirectory() - Method in class org.apache.hadoop.fs.HarFileSystem
return the top level archive path.
getHomeDirectory() - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
getHost() - Method in class org.apache.hadoop.streaming.Environment
 
getHostname() - Static method in class org.apache.hadoop.util.StringUtils
Return hostname without throwing exception.
getHosts() - Method in class org.apache.hadoop.fs.BlockLocation
Get the list of hosts (hostname) hosting this block
getHosts(String, String) - Static method in class org.apache.hadoop.net.DNS
Returns all the host names associated by the provided nameserver with the address bound to the specified network interface
getHosts(String) - Static method in class org.apache.hadoop.net.DNS
Returns all the host names associated by the default nameserver with the address bound to the specified network interface
getHosts() - Method in class org.apache.hadoop.util.HostsFileReader
 
getId() - Method in class org.apache.hadoop.fs.s3.Block
 
getId(Class) - Method in class org.apache.hadoop.io.AbstractMapWritable
 
getID() - Method in interface org.apache.hadoop.mapred.RunningJob
Get the job identifier.
getId() - Method in class org.apache.hadoop.mapreduce.ID
returns the int which represents the identifier
GetImage() - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
getIndexFile(String) - Static method in class org.apache.hadoop.mapred.TaskLog
 
getIndexFile(String, boolean) - Static method in class org.apache.hadoop.mapred.TaskLog
 
getIndexInputFormatClass() - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
Get the index input format class.
getIndexInterval() - Method in class org.apache.hadoop.io.MapFile.Writer
The number of entries that are added before an index entry is added.
getIndexMaxFieldLength() - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
Get the max field length for a Lucene instance.
getIndexMaxNumSegments() - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
Get the max number of segments for a Lucene instance.
getIndexShards() - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
Get the string representation of a number of shards.
getIndexShards(IndexUpdateConfiguration) - Static method in class org.apache.hadoop.contrib.index.mapred.Shard
 
getIndexUpdaterClass() - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
Get the index updater class.
getIndexUseCompoundFile() - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
Check whether to use the compound file format for a Lucene instance.
getInfo() - Method in class org.apache.hadoop.contrib.failmon.CPUParser
Return a String with information about this class
getInfo() - Method in class org.apache.hadoop.contrib.failmon.HadoopLogParser
Return a String with information about this class
getInfo() - Method in interface org.apache.hadoop.contrib.failmon.Monitored
Return a String with information about the implementing class
getInfo() - Method in class org.apache.hadoop.contrib.failmon.NICParser
Return a String with information about this class
getInfo() - Method in class org.apache.hadoop.contrib.failmon.SensorsParser
Return a String with information about this class
getInfo() - Method in class org.apache.hadoop.contrib.failmon.SMARTParser
Return a String with information about this class
getInfo() - Method in class org.apache.hadoop.contrib.failmon.SystemLogParser
Return a String with information about this class
getInfo() - Static method in class org.apache.hadoop.metrics.jvm.EventCounter
 
getInfoPort() - Method in class org.apache.hadoop.mapred.JobTracker
 
getInputFileBasedOutputFileName(JobConf, String) - Method in class org.apache.hadoop.mapred.lib.MultipleOutputFormat
Generate the output file name based on a given name and the input file name.
getInputFormat() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Get the InputFormat implementation for the map-reduce job, defaults to TextInputFormat if not specified explicitly.
getInputFormatClass() - Method in class org.apache.hadoop.mapreduce.JobContext
Get the InputFormat class for the job.
getInputPathFilter(JobConf) - Static method in class org.apache.hadoop.mapred.FileInputFormat
Deprecated. Get a PathFilter instance of the filter set for the input paths.
getInputPathFilter(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
Get a PathFilter instance of the filter set for the input paths.
getInputPaths(JobConf) - Static method in class org.apache.hadoop.mapred.FileInputFormat
Deprecated. Get the list of input Paths for the map-reduce job.
getInputPaths(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
Get the list of input Paths for the map-reduce job.
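As a hedged sketch of the old-API variant of this getter, the snippet below sets two placeholder input directories on a JobConf and reads them back with FileInputFormat.getInputPaths(JobConf).

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.JobConf;

    public class InputPathsExample {
      public static void main(String[] args) {
        JobConf job = new JobConf();
        // The input directories are placeholders.
        FileInputFormat.setInputPaths(job, new Path("/data/in1"), new Path("/data/in2"));
        for (Path p : FileInputFormat.getInputPaths(job)) {
          System.out.println("input: " + p);
        }
      }
    }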
getInputSplit() - Method in interface org.apache.hadoop.mapred.Reporter
Get the InputSplit object for a map.
getInputSplit() - Method in class org.apache.hadoop.mapreduce.MapContext
Get the input split for this map.
getInputStream(Socket) - Static method in class org.apache.hadoop.net.NetUtils
Same as getInputStream(socket, socket.getSoTimeout()); see NetUtils.getInputStream(Socket, long), which returns an InputStream for the socket.
getInputStream(Socket, long) - Static method in class org.apache.hadoop.net.NetUtils
Returns InputStream for the socket.
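A minimal sketch of the two overloads, assuming "socket" is an already connected java.net.Socket; the 30-second timeout is arbitrary.

    import java.io.InputStream;
    import java.net.Socket;
    import org.apache.hadoop.net.NetUtils;

    public class SocketStreamExample {
      static InputStream open(Socket socket) throws java.io.IOException {
        // getInputStream(socket) would use socket.getSoTimeout();
        // the long overload takes an explicit read timeout in milliseconds.
        return NetUtils.getInputStream(socket, 30000L);
      }
    }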
getInstance(int) - Static method in class org.apache.hadoop.util.hash.Hash
Get a singleton instance of hash function of a given type.
getInstance(Configuration) - Static method in class org.apache.hadoop.util.hash.Hash
Get a singleton instance of hash function of a type defined in the configuration.
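An illustrative sketch of obtaining hash implementations; Hash.MURMUR_HASH is assumed to be one of the hash-type constants on Hash, while getInstance(Configuration) resolves the type named in the configuration instead.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.util.hash.Hash;

    public class HashExample {
      public static void main(String[] args) {
        byte[] data = "hadoop".getBytes();
        Hash murmur = Hash.getInstance(Hash.MURMUR_HASH);        // assumed type constant
        Hash configured = Hash.getInstance(new Configuration()); // type from configuration
        System.out.println(murmur.hash(data) + " / " + configured.hash(data));
      }
    }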
getInstance() - Static method in class org.apache.hadoop.util.hash.JenkinsHash
 
getInstance() - Static method in class org.apache.hadoop.util.hash.MurmurHash
 
getInstrumentationClass(Configuration) - Static method in class org.apache.hadoop.mapred.JobTracker
 
getInstrumentationClass(Configuration) - Static method in class org.apache.hadoop.mapred.TaskTracker
 
getInt(String, int) - Method in class org.apache.hadoop.conf.Configuration
Get the value of the name property as an int.
getInterfaceName() - Method in exception org.apache.hadoop.ipc.RPC.VersionMismatch
Get the interface name
getInterval(ArrayList<MonitorJob>) - Static method in class org.apache.hadoop.contrib.failmon.Environment
Determines the minimum interval at which the executor thread needs to wake up to execute jobs.
getIOSortMB() - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
Get the IO sort space in MB.
getIPs(String) - Static method in class org.apache.hadoop.net.DNS
Returns all the IPs associated with the provided interface, if any, in textual form.
getIsJavaMapper(JobConf) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
Check whether the job is using a Java Mapper.
getIsJavaRecordReader(JobConf) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
Check whether the job is using a Java RecordReader
getIsJavaRecordWriter(JobConf) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
Will the reduce use a Java RecordWriter?
getIsJavaReducer(JobConf) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
Check whether the job is using a Java Reducer.
getJar() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Get the user jar for the map-reduce job.
getJar() - Method in class org.apache.hadoop.mapreduce.Job
Get the pathname of the job's jar.
getJar() - Method in class org.apache.hadoop.mapreduce.JobContext
Get the pathname of the job's jar.
getJob(JobID) - Method in class org.apache.hadoop.mapred.JobClient
Get an RunningJob object to track an ongoing job.
getJob(String) - Method in class org.apache.hadoop.mapred.JobClient
Deprecated. Applications should rather use JobClient.getJob(JobID).
getJob(JobID) - Method in class org.apache.hadoop.mapred.JobTracker
 
getJob() - Method in class org.apache.hadoop.mapred.lib.CombineFileSplit
 
getJobClient() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
 
getJobClient() - Method in class org.apache.hadoop.mapred.TaskTracker
The connection to the JobTracker, used by the TaskRunner for locating remote files.
getJobConf() - Method in class org.apache.hadoop.mapred.JobContext
Deprecated. Get the job Configuration
getJobConf() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
 
getJobConf() - Method in class org.apache.hadoop.mapred.TaskAttemptContext
Deprecated.  
getJobCounters(JobID) - Method in class org.apache.hadoop.mapred.JobTracker
 
getJobEndNotificationURI() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Get the uri to be invoked in-order to send a notification after the job has completed (success/failure).
getJobFile() - Method in class org.apache.hadoop.mapred.JobProfile
Get the configuration file for the job.
getJobFile() - Method in interface org.apache.hadoop.mapred.RunningJob
Get the path of the submitted job configuration.
getJobHistoryFileName(JobConf, JobID) - Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
Recover the job history filename from the history folder.
getJobHistoryLogLocation(String) - Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
Get the job history file path given the history filename
getJobHistoryLogLocationForUser(String, JobConf) - Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
Get the user job history file path
getJobID() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
 
getJobID() - Method in class org.apache.hadoop.mapred.JobProfile
Get the job id.
getJobId() - Method in class org.apache.hadoop.mapred.JobProfile
Deprecated. use getJobID() instead
getJobId() - Method in class org.apache.hadoop.mapred.JobStatus
Deprecated. use getJobID instead
getJobID() - Method in class org.apache.hadoop.mapred.JobStatus
 
getJobID() - Method in interface org.apache.hadoop.mapred.RunningJob
Deprecated. This method is deprecated and will be removed. Applications should rather use RunningJob.getID().
getJobID() - Method in class org.apache.hadoop.mapred.TaskAttemptID
Deprecated.  
getJobID() - Method in class org.apache.hadoop.mapred.TaskID
Deprecated.  
getJobID() - Method in class org.apache.hadoop.mapreduce.JobContext
Get the unique ID for the job.
getJobID() - Method in class org.apache.hadoop.mapreduce.TaskAttemptID
Returns the JobID object that this task attempt belongs to
getJobID() - Method in class org.apache.hadoop.mapreduce.TaskID
Returns the JobID object that this tip belongs to
getJobIDsPattern(String, Integer) - Static method in class org.apache.hadoop.mapred.JobID
Deprecated. 
getJobLocalDir() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Get job-specific shared directory for use as scratch space
getJobName() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Get the user-specified job name.
getJobName() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
 
getJobName() - Method in class org.apache.hadoop.mapred.JobProfile
Get the user-specified job name.
getJobName() - Method in interface org.apache.hadoop.mapred.RunningJob
Get the name of the job.
getJobName() - Method in class org.apache.hadoop.mapreduce.JobContext
Get the user-specified job name.
getJobPriority() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Get the JobPriority for this job.
getJobPriority() - Method in class org.apache.hadoop.mapred.JobStatus
Return the priority of the job
getJobProfile(JobID) - Method in class org.apache.hadoop.mapred.JobTracker
 
getJobs() - Static method in class org.apache.hadoop.contrib.failmon.Environment
Scans the configuration file to determine which monitoring utilities are available in the system.
getJobsFromQueue(String) - Method in class org.apache.hadoop.mapred.JobClient
Gets all the jobs which were added to a particular Job Queue
getJobsFromQueue(String) - Method in class org.apache.hadoop.mapred.JobTracker
 
getJobState() - Method in interface org.apache.hadoop.mapred.RunningJob
Returns the current state of the Job.
getJobStatus(JobID) - Method in class org.apache.hadoop.mapred.JobTracker
 
getJobTrackerHostPort() - Method in class org.apache.hadoop.streaming.StreamJob
 
getJobTrackerMachine() - Method in class org.apache.hadoop.mapred.JobTracker
 
getJobTrackerState() - Method in class org.apache.hadoop.mapred.ClusterStatus
Get the current state of the JobTracker, as JobTracker.State
getJtIdentifier() - Method in class org.apache.hadoop.mapreduce.JobID
 
getJvmManagerInstance() - Method in class org.apache.hadoop.mapred.TaskTracker
 
getKeepCommandFile(JobConf) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
Does the user want to keep the command file for debugging? If this is true, pipes will write a copy of the command data to a file in the task directory named "downlink.data", which may be used to run the C++ program under the debugger.
getKeepFailedTaskFiles() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Should the temporary files for failed tasks be kept?
getKeepTaskFilesPattern() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Get the regular expression that is matched against the task names to see if we need to keep the files.
getKey(BytesWritable) - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner.Entry
Copy the key into BytesWritable.
getKey(byte[]) - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner.Entry
Copy the key into user supplied buffer.
getKey(byte[], int) - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner.Entry
Copy the key into user supplied buffer.
getKey() - Method in interface org.apache.hadoop.io.SequenceFile.Sorter.RawKeyValueIterator
Gets the current raw key
getKey() - Method in class org.apache.hadoop.io.SequenceFile.Sorter.SegmentDescriptor
Returns the stored rawKey
getKey() - Method in interface org.apache.hadoop.mapred.RawKeyValueIterator
Gets the current raw key.
getKeyClass() - Method in class org.apache.hadoop.io.MapFile.Reader
Returns the class of keys in this file.
getKeyClass() - Method in class org.apache.hadoop.io.SequenceFile.Reader
Returns the class of keys in this file.
getKeyClass() - Method in class org.apache.hadoop.io.SequenceFile.Writer
Returns the class of keys in this file.
getKeyClass() - Method in class org.apache.hadoop.io.WritableComparator
Returns the WritableComparable implementation class.
getKeyClass() - Method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
 
getKeyClass() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
The class of key that must be passed to SequenceFileRecordReader.next(Object, Object)..
getKeyClassName() - Method in class org.apache.hadoop.io.SequenceFile.Reader
Returns the name of the key class.
getKeyClassName() - Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
Retrieve the name of the key class for this SequenceFile.
getKeyFieldComparatorOption() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Get the KeyFieldBasedComparator options
getKeyFieldPartitionerOption() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Get the KeyFieldBasedPartitioner options
getKeyLength() - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner.Entry
Get the length of the key.
getKeyList() - Method in class org.apache.hadoop.metrics.util.MetricsRegistry
 
getKeyNear(long) - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader
Get a sample key that is within a block whose starting offset is greater than or equal to the specified offset.
getKeyStream() - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner.Entry
Streaming access to the key.
getKeyTypeID() - Method in class org.apache.hadoop.record.meta.MapTypeID
get the TypeID of the map's key element
getLastKey() - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader
Get the last key in the TFile.
getLen() - Method in class org.apache.hadoop.fs.FileStatus
 
getLength() - Method in class org.apache.hadoop.examples.SleepJob.EmptySplit
 
getLength() - Method in class org.apache.hadoop.fs.BlockLocation
Get the length of the block
getLength() - Method in class org.apache.hadoop.fs.ContentSummary
 
getLength() - Method in class org.apache.hadoop.fs.FileChecksum
The length of the checksum in bytes
getLength(Path) - Method in class org.apache.hadoop.fs.FileSystem
Deprecated. Use getFileStatus() instead
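Since this deprecated method points at getFileStatus(), a brief sketch of the replacement pattern; the path is a placeholder.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class FileStatusExample {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        FileStatus status = fs.getFileStatus(new Path("/tmp/example.txt")); // placeholder path
        System.out.println(status.getLen() + " bytes, owner " + status.getOwner()
            + ", group " + status.getGroup()
            + ", modified " + status.getModificationTime());
      }
    }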
getLength(Path) - Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
Deprecated. 
getLength() - Method in class org.apache.hadoop.fs.MD5MD5CRC32FileChecksum
The length of the checksum in bytes
getLength() - Method in class org.apache.hadoop.fs.s3.Block
 
getLength() - Method in class org.apache.hadoop.io.BinaryComparable
Return n such that bytes 0..n-1 from getBytes() are valid.
getLength() - Method in class org.apache.hadoop.io.BytesWritable
Get the current size of the buffer.
getLength() - Method in class org.apache.hadoop.io.DataInputBuffer
Returns the length of the input.
getLength() - Method in class org.apache.hadoop.io.DataOutputBuffer
Returns the length of the valid data currently in the buffer.
getLength() - Method in class org.apache.hadoop.io.InputBuffer
Returns the length of the input.
getLength() - Method in class org.apache.hadoop.io.OutputBuffer
Returns the length of the valid data currently in the buffer.
getLength() - Method in class org.apache.hadoop.io.SequenceFile.Writer
Returns the current length of the output file.
getLength() - Method in class org.apache.hadoop.io.Text
Returns the number of bytes in the byte array
getLength() - Method in class org.apache.hadoop.io.UTF8
Deprecated. The number of bytes in the encoded string.
getLength() - Method in class org.apache.hadoop.mapred.FileSplit
Deprecated. The number of bytes in the file to process.
getLength() - Method in interface org.apache.hadoop.mapred.InputSplit
Deprecated. Get the total number of bytes in the data of the InputSplit.
getLength() - Method in class org.apache.hadoop.mapred.join.CompositeInputSplit
Return the aggregate length of all child InputSplits currently added.
getLength(int) - Method in class org.apache.hadoop.mapred.join.CompositeInputSplit
Get the length of ith child InputSplit.
getLength() - Method in class org.apache.hadoop.mapred.lib.CombineFileSplit
 
getLength(int) - Method in class org.apache.hadoop.mapred.lib.CombineFileSplit
Returns the length of the ith Path
getLength() - Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat.DBInputSplit
 
getLength() - Method in class org.apache.hadoop.mapreduce.InputSplit
Get the size of the split, so that the input splits can be sorted by size.
getLength() - Method in class org.apache.hadoop.mapreduce.lib.input.FileSplit
The number of bytes in the file to process.
getLengths() - Method in class org.apache.hadoop.mapred.lib.CombineFileSplit
Returns an array containing the lengths of the files in the split
getLevel() - Method in interface org.apache.hadoop.net.Node
Return this node's level in the tree.
getLevel() - Method in class org.apache.hadoop.net.NodeBase
Return this node's level in the tree.
getLibJars(Configuration) - Static method in class org.apache.hadoop.util.GenericOptionsParser
If libjars are set in the conf, parse the libjars.
getLinkCount(File) - Static method in class org.apache.hadoop.fs.FileUtil.HardLink
Retrieves the number of links to the specified file.
getListenerAddress() - Method in class org.apache.hadoop.ipc.Server
Return the socket (ip+port) on which the RPC server is listening to.
getLoadNativeLibraries(Configuration) - Method in class org.apache.hadoop.util.NativeCodeLoader
Return whether native hadoop libraries, if present, can be used for this job.
getLocal(Configuration) - Static method in class org.apache.hadoop.fs.FileSystem
Get the local file system.
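A one-line sketch of getLocal(Configuration), which returns a LocalFileSystem backed by the local disk.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.LocalFileSystem;

    public class LocalFsExample {
      public static void main(String[] args) throws Exception {
        LocalFileSystem local = FileSystem.getLocal(new Configuration());
        System.out.println("local home: " + local.getHomeDirectory());
      }
    }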
getLocalAnalysisClass() - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
Get the local analysis class.
getLocalCache(URI, Configuration, Path, FileStatus, boolean, long, Path) - Static method in class org.apache.hadoop.filecache.DistributedCache
Get the locally cached file or archive; it could either be previously cached (and valid) or copy it from the FileSystem now.
getLocalCache(URI, Configuration, Path, FileStatus, boolean, long, Path, boolean) - Static method in class org.apache.hadoop.filecache.DistributedCache
Get the locally cached file or archive; it could either be previously cached (and valid) or copy it from the FileSystem now.
getLocalCache(URI, Configuration, Path, boolean, long, Path) - Static method in class org.apache.hadoop.filecache.DistributedCache
Get the locally cached file or archive; it could either be previously cached (and valid) or copy it from the FileSystem now.
getLocalCacheArchives(Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Return the path array of the localized caches
getLocalCacheFiles(Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Return the path array of the localized files
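A hedged sketch of looking up localized cache entries from inside a task; "conf" is assumed to be the job configuration handed to the task.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.filecache.DistributedCache;
    import org.apache.hadoop.fs.Path;

    public class CacheLookupExample {
      static void listCache(Configuration conf) throws IOException {
        Path[] files = DistributedCache.getLocalCacheFiles(conf);       // may be null if none
        Path[] archives = DistributedCache.getLocalCacheArchives(conf); // may be null if none
        if (files != null) {
          for (Path p : files) {
            System.out.println("cached file: " + p);
          }
        }
        if (archives != null) {
          System.out.println(archives.length + " cached archive(s)");
        }
      }
    }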
getLocalDirs() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated.  
getLocalJobFilePath(JobID) - Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
Get the path of the locally stored job file
getLocalJobFilePath(JobID) - Static method in class org.apache.hadoop.mapred.JobTracker
Get the localized job file path on the job trackers local file system
getLocalPath(String, String) - Method in class org.apache.hadoop.conf.Configuration
Get a local file under a directory named by dirsProp with the given path.
getLocalPath(String) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Constructs a local file name.
getLocalPathForWrite(String, Configuration) - Method in class org.apache.hadoop.fs.LocalDirAllocator
Get a path from the local FS.
getLocalPathForWrite(String, long, Configuration) - Method in class org.apache.hadoop.fs.LocalDirAllocator
Get a path from the local FS.
getLocalPathToRead(String, Configuration) - Method in class org.apache.hadoop.fs.LocalDirAllocator
Get a path from the local FS for reading.
getLocation(int) - Method in class org.apache.hadoop.mapred.join.CompositeInputSplit
getLocations from ith InputSplit.
getLocations() - Method in class org.apache.hadoop.examples.SleepJob.EmptySplit
 
getLocations() - Method in class org.apache.hadoop.mapred.FileSplit
Deprecated.  
getLocations() - Method in interface org.apache.hadoop.mapred.InputSplit
Deprecated. Get the list of hostnames where the input split is located.
getLocations() - Method in class org.apache.hadoop.mapred.join.CompositeInputSplit
Collect a set of hosts from all child InputSplits.
getLocations() - Method in class org.apache.hadoop.mapred.lib.CombineFileSplit
Returns all the Paths where this input-split resides
getLocations() - Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat.DBInputSplit
Get the list of hostnames where the input split is located.
getLocations() - Method in class org.apache.hadoop.mapred.MultiFileSplit
Deprecated.  
getLocations() - Method in class org.apache.hadoop.mapreduce.InputSplit
Get the list of nodes by name where the data for the split would be local.
getLocations() - Method in class org.apache.hadoop.mapreduce.lib.input.FileSplit
 
getLong(String, long) - Method in class org.apache.hadoop.conf.Configuration
Get the value of the name property as a long.
getLongValue(Object) - Method in class org.apache.hadoop.contrib.utils.join.JobBase
 
getMajor() - Method in class org.apache.hadoop.io.file.tfile.Utils.Version
Get the major version.
getMap() - Method in class org.apache.hadoop.contrib.failmon.EventRecord
Return the HashMap of properties of the EventRecord.
getMapCompletionEvents(JobID, int, int, TaskAttemptID) - Method in class org.apache.hadoop.mapred.TaskTracker
 
getMapDebugScript() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Get the map task's debug script.
getMapOutputCompressorClass(Class<? extends CompressionCodec>) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Get the CompressionCodec for compressing the map outputs.
getMapOutputKeyClass() - Static method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateMapper
Get the map output key class.
getMapOutputKeyClass() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Get the key class for the map output data.
getMapOutputKeyClass() - Method in class org.apache.hadoop.mapreduce.JobContext
Get the key class for the map output data.
getMapOutputValueClass() - Static method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateMapper
Get the map output value class.
getMapOutputValueClass() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Get the value class for the map output data.
getMapOutputValueClass() - Method in class org.apache.hadoop.mapreduce.JobContext
Get the value class for the map output data.
getMapper() - Method in class org.apache.hadoop.mapred.MapRunner
 
getMapperClass() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Get the Mapper class for the job.
getMapperClass() - Method in class org.apache.hadoop.mapreduce.JobContext
Get the Mapper class for the job.
getMapperClass(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper
Get the application's mapper class.
getMapperMaxSkipRecords(Configuration) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
Get the number of acceptable skip records surrounding the bad record PER bad record in mapper.
getMapredJobID() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
Deprecated. use Job.getAssignedJobID() instead
getMapredTempDir() - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
Get the Map/Reduce temp directory.
getMapRunnerClass() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Get the MapRunnable class for the job.
getMapSpeculativeExecution() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Should speculative execution be used for this job for map tasks? Defaults to true.
getMapTaskReports(JobID) - Method in class org.apache.hadoop.mapred.JobClient
Get the information of the current state of the map tasks of a job.
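An illustrative sketch of polling map-task reports through JobClient; the job id string is a placeholder for a real submitted job.

    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.JobID;
    import org.apache.hadoop.mapred.RunningJob;
    import org.apache.hadoop.mapred.TaskReport;

    public class MapReportExample {
      public static void main(String[] args) throws Exception {
        JobClient client = new JobClient(new JobConf());
        JobID id = JobID.forName("job_200901010000_0001"); // placeholder job id
        for (TaskReport report : client.getMapTaskReports(id)) {
          System.out.println(report.getTaskID() + ": " + report.getProgress());
        }
        RunningJob running = client.getJob(id);
        if (running != null) {
          System.out.println("job state: " + running.getJobState());
        }
      }
    }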
getMapTaskReports(String) - Method in class org.apache.hadoop.mapred.JobClient
Deprecated. Applications should rather use JobClient.getMapTaskReports(JobID)
getMapTaskReports(JobID) - Method in class org.apache.hadoop.mapred.JobTracker
 
getMapTasks() - Method in class org.apache.hadoop.mapred.ClusterStatus
Get the number of currently running map tasks in the cluster.
getMaxDepth(int) - Static method in class org.apache.hadoop.util.QuickSort
Deepest recursion before giving up and doing a heapsort.
getMaxMapAttempts() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Get the configured number of maximum attempts that will be made to run a map task, as specified by the mapred.map.max.attempts property.
getMaxMapTaskFailuresPercent() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Get the maximum percentage of map tasks that can fail without the job being aborted.
getMaxMapTasks() - Method in class org.apache.hadoop.mapred.ClusterStatus
Get the maximum capacity for running map tasks in the cluster.
getMaxMemory() - Method in class org.apache.hadoop.mapred.ClusterStatus
Get the maximum configured heap memory that can be used by the JobTracker
getMaxPhysicalMemoryForTask() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. This variable is deprecated and no longer in use.
getMaxReduceAttempts() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Get the configured number of maximum attempts that will be made to run a reduce task, as specified by the mapred.reduce.max.attempts property.
getMaxReduceTaskFailuresPercent() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Get the maximum percentage of reduce tasks that can fail without the job being aborted.
getMaxReduceTasks() - Method in class org.apache.hadoop.mapred.ClusterStatus
Get the maximum capacity for running reduce tasks in the cluster.
getMaxSplitSize(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
Get the maximum split size.
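A hedged sketch of bounding split sizes with the new-API FileInputFormat; the setter names and the 64 MB / 256 MB values are assumptions chosen for illustration.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

    public class SplitSizeExample {
      public static void main(String[] args) throws Exception {
        Job job = new Job(new Configuration());
        FileInputFormat.setMinInputSplitSize(job, 64L * 1024 * 1024);  // assumed setter
        FileInputFormat.setMaxInputSplitSize(job, 256L * 1024 * 1024); // assumed setter
        System.out.println(FileInputFormat.getMinSplitSize(job) + " / "
            + FileInputFormat.getMaxSplitSize(job));
      }
    }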
getMaxTaskFailuresPerTracker() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Expert: Get the maximum number of failures of a given job per tasktracker.
getMaxTime() - Method in class org.apache.hadoop.metrics.util.MetricsTimeVaryingRate
The max time for a single operation since the last reset MetricsTimeVaryingRate.resetMinMax()
getMaxVirtualMemoryForTask() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Use JobConf.getMemoryForMapTask() and JobConf.getMemoryForReduceTask()
getMBeanInfo() - Method in class org.apache.hadoop.metrics.util.MetricsDynamicMBeanBase
 
getMD5Hash(String) - Static method in class org.apache.hadoop.contrib.failmon.Anonymizer
Create the MD5 digest of an input text.
getMemory() - Method in class org.apache.hadoop.io.SequenceFile.Sorter
Get the total amount of buffer memory, in bytes.
getMemoryCalculatorPlugin(Class<? extends MemoryCalculatorPlugin>, Configuration) - Static method in class org.apache.hadoop.util.MemoryCalculatorPlugin
Get the MemoryCalculatorPlugin from the class name and configure it.
getMemoryForMapTask() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated.  
getMemoryForReduceTask() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated.  
getMessage() - Method in exception org.apache.hadoop.mapred.InvalidInputException
Get a summary message of the problems found.
getMessage() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
 
getMessage() - Method in exception org.apache.hadoop.mapreduce.lib.input.InvalidInputException
Get a summary message of the problems found.
getMessage() - Method in exception org.apache.hadoop.record.compiler.generated.ParseException
This method has the standard behavior when this object has been created using the standard constructors.
getMessage() - Method in error org.apache.hadoop.record.compiler.generated.TokenMgrError
You can also modify the body of this method to customize your error messages.
getMetaBlock(String) - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader
Stream access to a meta block.
getMetadata() - Method in class org.apache.hadoop.io.SequenceFile.Metadata
 
getMetadata() - Method in class org.apache.hadoop.io.SequenceFile.Reader
Returns the metadata object of the file
getMetric(String) - Method in class org.apache.hadoop.metrics.spi.OutputRecord
Returns the metric object which can be a Float, Integer, Short or Byte.
getMetricNames() - Method in class org.apache.hadoop.metrics.spi.OutputRecord
Returns the set of metric names.
getMetricsList() - Method in class org.apache.hadoop.metrics.util.MetricsRegistry
 
getMinor() - Method in class org.apache.hadoop.io.file.tfile.Utils.Version
Get the minor version.
getMinSplitSize(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
Get the minimum split size
getMinTime() - Method in class org.apache.hadoop.metrics.util.MetricsTimeVaryingRate
The min time for a single operation since the last reset MetricsTimeVaryingRate.resetMinMax()
getModificationTime() - Method in class org.apache.hadoop.fs.FileStatus
Get the modification time of the file.
getMount() - Method in class org.apache.hadoop.fs.DF
 
getName() - Method in class org.apache.hadoop.examples.dancing.Pentomino.Piece
 
getName() - Method in class org.apache.hadoop.fs.FileSystem
Deprecated. Call getUri() instead.
getName() - Method in class org.apache.hadoop.fs.FilterFileSystem
Deprecated. Call getUri() instead.
getName() - Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
Deprecated. 
getName() - Method in class org.apache.hadoop.fs.Path
Returns the final component of this path.
getName() - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
getName(Class) - Static method in class org.apache.hadoop.io.WritableName
Return the name for a class.
getName() - Method in class org.apache.hadoop.mapred.Counters.Group
Deprecated. Returns raw name of the group.
getName() - Method in class org.apache.hadoop.mapreduce.Counter
 
getName() - Method in class org.apache.hadoop.mapreduce.CounterGroup
Get the internal name of the group
getName() - Method in class org.apache.hadoop.metrics.util.MetricsBase
 
getName() - Method in interface org.apache.hadoop.net.Node
Return this node's name
getName() - Method in class org.apache.hadoop.net.NodeBase
Return this node's name
getName() - Method in class org.apache.hadoop.record.meta.RecordTypeInfo
return the name of the record
getName() - Method in class org.apache.hadoop.security.Group
 
getName() - Method in class org.apache.hadoop.security.UnixUserGroupInformation
 
getName() - Method in class org.apache.hadoop.security.User
 
getNamed(String, Configuration) - Static method in class org.apache.hadoop.fs.FileSystem
Deprecated. Call get(URI, Configuration) instead.
getNamedOutputFormatClass(JobConf, String) - Static method in class org.apache.hadoop.mapred.lib.MultipleOutputs
Returns the named output OutputFormat.
getNamedOutputKeyClass(JobConf, String) - Static method in class org.apache.hadoop.mapred.lib.MultipleOutputs
Returns the key class for a named output.
getNamedOutputs() - Method in class org.apache.hadoop.mapred.lib.MultipleOutputs
Returns iterator with the defined name outputs.
getNamedOutputsList(JobConf) - Static method in class org.apache.hadoop.mapred.lib.MultipleOutputs
Returns list of channel names.
getNamedOutputValueClass(JobConf, String) - Static method in class org.apache.hadoop.mapred.lib.MultipleOutputs
Returns the value class for a named output.
getNames() - Method in class org.apache.hadoop.fs.BlockLocation
Get the list of names (hostname:port) hosting this block
getNestedStructTypeInfo(String) - Method in class org.apache.hadoop.record.meta.RecordTypeInfo
Return the type info of a nested record.
getNetworkLocation() - Method in interface org.apache.hadoop.net.Node
Return the string representation of this node's network location
getNetworkLocation() - Method in class org.apache.hadoop.net.NodeBase
Return this node's network location
getNewJobId() - Method in class org.apache.hadoop.mapred.JobTracker
Allocates a new JobId string.
getNext() - Method in class org.apache.hadoop.contrib.failmon.LogParser
Continue parsing the log file until a valid log entry is identified.
getNextHeartbeatInterval() - Method in class org.apache.hadoop.mapred.JobTracker
Calculates next heartbeat interval using cluster size.
getNextToken() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
getNextToken() - Method in class org.apache.hadoop.record.compiler.generated.RccTokenManager
 
getNode(String) - Method in class org.apache.hadoop.mapred.JobTracker
Return the Node in the network topology that corresponds to the hostname
getNode() - Method in class org.apache.hadoop.mapred.join.Parser.NodeToken
 
getNode() - Method in class org.apache.hadoop.mapred.join.Parser.Token
 
getNode(String) - Method in class org.apache.hadoop.net.NetworkTopology
Given a string representation of a node, return its reference
getNodesAtMaxLevel() - Method in class org.apache.hadoop.mapred.JobTracker
Returns a collection of nodes at the max level
getNullContext(String) - Static method in class org.apache.hadoop.metrics.ContextFactory
Returns a "null" context - one which does nothing.
getNum() - Method in class org.apache.hadoop.mapred.join.Parser.NumToken
 
getNum() - Method in class org.apache.hadoop.mapred.join.Parser.Token
 
getNumber() - Method in class org.apache.hadoop.metrics.spi.MetricValue
 
getNumberColumns() - Method in class org.apache.hadoop.examples.dancing.DancingLinks
Get the number of columns.
getNumberOfThreads(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper
The number of threads in the thread pool that will run the map function.
getNumberOfUniqueHosts() - Method in class org.apache.hadoop.mapred.JobTracker
 
getNumBytesInSum() - Method in class org.apache.hadoop.util.DataChecksum
 
getNumFiles(PathFilter) - Method in class org.apache.hadoop.fs.InMemoryFileSystem
Deprecated.  
getNumMapTasks() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Get the configured number of map tasks for this job.
getNumOfLeaves() - Method in class org.apache.hadoop.net.NetworkTopology
Return the total number of nodes
getNumOfRacks() - Method in class org.apache.hadoop.net.NetworkTopology
Return the total number of racks
getNumOpenConnections() - Method in interface org.apache.hadoop.ipc.metrics.RpcMgtMBean
The number of open RPC connections
getNumOpenConnections() - Method in class org.apache.hadoop.ipc.Server
The number of open RPC connections
getNumPaths() - Method in class org.apache.hadoop.mapred.lib.CombineFileSplit
Returns the number of Paths in the split
getNumReduceTasks() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Get the configured number of reduce tasks for this job.
getNumReduceTasks() - Method in class org.apache.hadoop.mapreduce.JobContext
Get the configured number of reduce tasks for this job.
getNumResolvedTaskTrackers() - Method in class org.apache.hadoop.mapred.JobTracker
 
getNumTaskCacheLevels() - Method in class org.apache.hadoop.mapred.JobTracker
 
getNumTasksToExecutePerJvm() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Get the number of tasks that a spawned JVM should execute
getOffset() - Method in class org.apache.hadoop.fs.BlockLocation
Get the start offset of file associated with this block
getOffset(int) - Method in class org.apache.hadoop.mapred.lib.CombineFileSplit
Returns the start offset of the ith Path
getOp() - Method in class org.apache.hadoop.contrib.index.example.LineDocTextAndOp
Get the type of the operation.
getOp() - Method in class org.apache.hadoop.contrib.index.mapred.DocumentAndOp
Get the type of operation.
getOpt(String) - Method in class org.apache.hadoop.fs.shell.CommandFormat
Return whether the option is set.
getOtherAction() - Method in class org.apache.hadoop.fs.permission.FsPermission
Return other FsAction.
getOutput() - Method in class org.apache.hadoop.util.Shell.ShellCommandExecutor
Get the output of the shell command.
getOutputCommitter() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Get the OutputCommitter implementation for the map-reduce job, defaults to FileOutputCommitter if not specified explicitly.
getOutputCommitter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
 
getOutputCommitter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.NullOutputFormat
 
getOutputCommitter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.OutputFormat
Get the output committer for this output format.
getOutputCommitter() - Method in class org.apache.hadoop.mapreduce.TaskInputOutputContext
 
getOutputCompressionType(JobConf) - Static method in class org.apache.hadoop.mapred.SequenceFileOutputFormat
Deprecated. Get the SequenceFile.CompressionType for the output SequenceFile.
getOutputCompressionType(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat
Get the SequenceFile.CompressionType for the output SequenceFile.
getOutputCompressorClass(JobConf, Class<? extends CompressionCodec>) - Static method in class org.apache.hadoop.mapred.FileOutputFormat
Get the CompressionCodec for compressing the job outputs.
getOutputCompressorClass(JobContext, Class<? extends CompressionCodec>) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
Get the CompressionCodec for compressing the job outputs.
getOutputFormat() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Get the OutputFormat implementation for the map-reduce job, defaults to TextOutputFormat if not specified explicitly.
getOutputFormatClass() - Method in class org.apache.hadoop.mapreduce.JobContext
Get the OutputFormat class for the job.
getOutputKeyClass() - Static method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateReducer
Get the reduce output key class.
getOutputKeyClass() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Get the key class for the job output data.
getOutputKeyClass() - Method in class org.apache.hadoop.mapreduce.JobContext
Get the key class for the job output data.
getOutputKeyComparator() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Get the RawComparator comparator used to compare keys.
getOutputPath(JobConf) - Static method in class org.apache.hadoop.mapred.FileOutputFormat
Get the Path to the output directory for the map-reduce job.
getOutputPath(JobContext) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
Get the Path to the output directory for the map-reduce job.
getOutputStream(Socket) - Static method in class org.apache.hadoop.net.NetUtils
Same as getOutputStream(socket, 0).
getOutputStream(Socket, long) - Static method in class org.apache.hadoop.net.NetUtils
Returns OutputStream for the socket.
getOutputValueClass() - Static method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateReducer
Get the reduce output value class.
getOutputValueClass() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Get the value class for job outputs.
getOutputValueClass() - Method in class org.apache.hadoop.mapreduce.JobContext
Get the value class for job outputs.
getOutputValueGroupingComparator() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Get the user defined WritableComparable comparator for grouping keys of inputs to the reduce.
getOwner() - Method in class org.apache.hadoop.fs.FileStatus
Get the owner of the file.
getParameter(ServletRequest, String) - Static method in class org.apache.hadoop.util.ServletUtil
Get a parameter from a ServletRequest.
getParent() - Method in class org.apache.hadoop.fs.Path
Returns the parent of a path or null if at root.
getParent() - Method in interface org.apache.hadoop.net.Node
Return this node's parent
getParent() - Method in class org.apache.hadoop.net.NodeBase
Return this node's parent
getParentNode(Node, int) - Static method in class org.apache.hadoop.mapred.JobTracker
 
getPartition(Shard, IntermediateForm, int) - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdatePartitioner
 
getPartition(SecondarySort.IntPair, IntWritable, int) - Method in class org.apache.hadoop.examples.SecondarySort.FirstPartitioner
 
getPartition(IntWritable, NullWritable, int) - Method in class org.apache.hadoop.examples.SleepJob
 
getPartition(K2, V2, int) - Method in class org.apache.hadoop.mapred.lib.HashPartitioner
Deprecated. Use Object.hashCode() to partition.
getPartition(K2, V2, int) - Method in class org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner
 
getPartition(int, int) - Method in class org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner
 
getPartition(K, V, int) - Method in class org.apache.hadoop.mapred.lib.TotalOrderPartitioner
 
getPartition(K2, V2, int) - Method in interface org.apache.hadoop.mapred.Partitioner
Deprecated. Get the partition number for a given key (hence record) given the total number of partitions, i.e. the number of reduce tasks for the job.
getPartition(K, V, int) - Method in class org.apache.hadoop.mapreduce.lib.partition.HashPartitioner
Use Object.hashCode() to partition.
getPartition(KEY, VALUE, int) - Method in class org.apache.hadoop.mapreduce.Partitioner
Get the partition number for a given key (hence record) given the total number of partitions, i.e. the number of reduce tasks for the job.
getPartitionerClass() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Get the Partitioner used to partition Mapper-outputs to be sent to the Reducers.
getPartitionerClass() - Method in class org.apache.hadoop.mapreduce.JobContext
Get the Partitioner class for the job.
getPartitionFile(JobConf) - Static method in class org.apache.hadoop.mapred.lib.TotalOrderPartitioner
Get the path to the SequenceFile storing the sorted partition keyset.
getPath() - Method in class org.apache.hadoop.fs.FileStatus
 
getPath() - Method in class org.apache.hadoop.mapred.FileSplit
Deprecated. The file containing this split's data.
getPath(int) - Method in class org.apache.hadoop.mapred.lib.CombineFileSplit
Returns the ith Path
getPath() - Method in class org.apache.hadoop.mapreduce.lib.input.FileSplit
The file containing this split's data.
getPath(Node) - Static method in class org.apache.hadoop.net.NodeBase
Return this node's path
getPathForCustomFile(JobConf, String) - Static method in class org.apache.hadoop.mapred.FileOutputFormat
Helper function to generate a Path for a file that is unique for the task within the job output directory.
getPathForWorkFile(TaskInputOutputContext<?, ?, ?, ?>, String, String) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
Helper function to generate a Path for a file that is unique for the task within the job output directory.
getPaths() - Method in class org.apache.hadoop.mapred.lib.CombineFileSplit
Returns all the Paths in the split
getPercentUsed() - Method in class org.apache.hadoop.fs.DF
 
getPercentUsed() - Method in class org.apache.hadoop.fs.InMemoryFileSystem
Deprecated.  
getPeriod() - Method in interface org.apache.hadoop.metrics.MetricsContext
Returns the timer period.
getPeriod() - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Returns the timer period.
getPermission() - Method in class org.apache.hadoop.fs.FileStatus
Get FsPermission associated with the file.
getPermission() - Method in class org.apache.hadoop.fs.permission.PermissionStatus
Return permission
getPermission() - Method in class org.apache.hadoop.security.authorize.Service
Get the Permission required to access the service.
getPermissions(ProtectionDomain) - Method in class org.apache.hadoop.security.authorize.ConfiguredPolicy
 
getPhysicalMemorySize() - Method in class org.apache.hadoop.util.LinuxMemoryCalculatorPlugin
Obtain the total size of the physical memory present in the system.
getPhysicalMemorySize() - Method in class org.apache.hadoop.util.MemoryCalculatorPlugin
Obtain the total size of the physical memory present in the system.
getPidFromPidFile(String) - Static method in class org.apache.hadoop.util.ProcfsBasedProcessTree
Get PID from a pid-file.
getPlatformName() - Static method in class org.apache.hadoop.util.PlatformName
Get the complete platform as per the java-vm.
getPolicy() - Static method in class org.apache.hadoop.security.SecurityUtil
Get the current global security policy for Hadoop.
getPort() - Method in class org.apache.hadoop.http.HttpServer
Get the port that the server is on
getPos() - Method in class org.apache.hadoop.contrib.index.example.LineDocRecordReader
 
getPos() - Method in class org.apache.hadoop.examples.MultiFileWordCount.MultiFileLineRecordReader
 
getPos() - Method in class org.apache.hadoop.fs.BufferedFSInputStream
 
getPos() - Method in exception org.apache.hadoop.fs.ChecksumException
 
getPos() - Method in class org.apache.hadoop.fs.FSDataInputStream
 
getPos() - Method in class org.apache.hadoop.fs.FSDataOutputStream
 
getPos() - Method in class org.apache.hadoop.fs.FSInputChecker
 
getPos() - Method in class org.apache.hadoop.fs.FSInputStream
Return the current offset from the start of the file
getPos() - Method in class org.apache.hadoop.fs.ftp.FTPInputStream
 
getPos() - Method in interface org.apache.hadoop.fs.Seekable
Return the current offset from the start of the file
getPos() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
Unsupported (returns zero in all cases).
getPos() - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
Request position from proxied RR.
getPos() - Method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
 
getPos() - Method in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
return the amount of data processed
getPos() - Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat.DBRecordReader
Returns the current position in the input.
getPos() - Method in class org.apache.hadoop.mapred.LineRecordReader
Deprecated.  
getPos() - Method in interface org.apache.hadoop.mapred.RecordReader
Returns the current position in the input.
getPos() - Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
 
getPos() - Method in class org.apache.hadoop.mapred.SequenceFileAsTextRecordReader
 
getPos() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
 
getPos() - Method in class org.apache.hadoop.streaming.StreamBaseRecordReader
Returns the current position in the input.
getPosition() - Method in class org.apache.hadoop.io.DataInputBuffer
Returns the current position in the input.
getPosition() - Method in class org.apache.hadoop.io.InputBuffer
Returns the current position in the input.
getPosition() - Method in class org.apache.hadoop.io.SequenceFile.Reader
Return the current byte position in the input file.
getPreviousIntervalAverageTime() - Method in class org.apache.hadoop.metrics.util.MetricsTimeVaryingRate
The average rate of an operation in the previous interval
getPreviousIntervalNumOps() - Method in class org.apache.hadoop.metrics.util.MetricsTimeVaryingRate
The number of operations in the previous interval
getPreviousIntervalValue() - Method in class org.apache.hadoop.metrics.util.MetricsTimeVaryingInt
The Value at the Previous interval
getPreviousIntervalValue() - Method in class org.apache.hadoop.metrics.util.MetricsTimeVaryingLong
The Value at the Previous interval
getProblems() - Method in exception org.apache.hadoop.mapred.InvalidInputException
Get the complete list of the problems reported.
getProblems() - Method in exception org.apache.hadoop.mapreduce.lib.input.InvalidInputException
Get the complete list of the problems reported.
getProcess() - Method in class org.apache.hadoop.util.Shell
get the current sub-process executing the given command
getProcessTree() - Method in class org.apache.hadoop.util.ProcfsBasedProcessTree
Get the process-tree with latest state.
getProfileEnabled() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Get whether the task profiling is enabled.
getProfileParams() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Get the profiler configuration arguments.
getProfileTaskRange(boolean) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Get the range of maps or reduces to profile.
getProgress() - Method in class org.apache.hadoop.contrib.index.example.LineDocRecordReader
 
getProgress() - Method in class org.apache.hadoop.examples.MultiFileWordCount.MultiFileLineRecordReader
 
getProgress() - Method in interface org.apache.hadoop.io.SequenceFile.Sorter.RawKeyValueIterator
Gets the Progress object; this has a float (0.0 - 1.0) indicating the bytes processed by the iterator so far
getProgress() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
Report progress as the minimum of all child RR progress.
getProgress() - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
Request progress from proxied RR.
getProgress() - Method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
 
getProgress() - Method in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
return progress based on the amount of data processed so far.
getProgress() - Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat.DBRecordReader
How much of the input has the RecordReader consumed, i.e. how much has been processed so far?
getProgress() - Method in class org.apache.hadoop.mapred.LineRecordReader
Deprecated. Get the progress within the split
getProgress() - Method in interface org.apache.hadoop.mapred.RawKeyValueIterator
Gets the Progress object; this has a float (0.0 - 1.0) indicating the bytes processed by the iterator so far
getProgress() - Method in interface org.apache.hadoop.mapred.RecordReader
How much of the input has the RecordReader consumed, i.e. how much has been processed so far?
getProgress() - Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
Return the progress within the input split
getProgress() - Method in class org.apache.hadoop.mapred.SequenceFileAsTextRecordReader
 
getProgress() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
Return the progress within the input split
getProgress() - Method in class org.apache.hadoop.mapred.TaskReport
The amount completed, between zero and one.
getProgress() - Method in class org.apache.hadoop.mapreduce.lib.input.LineRecordReader
Get the progress within the split
getProgress() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader
Return the progress within the input split
getProgress() - Method in class org.apache.hadoop.mapreduce.RecordReader
The current progress of the record reader through its data.
getProgress() - Method in class org.apache.hadoop.streaming.StreamBaseRecordReader
 
getProgressible() - Method in class org.apache.hadoop.mapred.JobContext
Deprecated. Get the progress mechanism for reporting progress.
getProgressible() - Method in class org.apache.hadoop.mapred.TaskAttemptContext
Deprecated.  
getProperty(String) - Static method in class org.apache.hadoop.contrib.failmon.Environment
Fetches the value of a property from the configuration file.
getProtocolVersion(String, long) - Method in interface org.apache.hadoop.ipc.VersionedProtocol
Return protocol version corresponding to protocol interface.
getProtocolVersion(String, long) - Method in class org.apache.hadoop.mapred.JobTracker
 
getProtocolVersion(String, long) - Method in class org.apache.hadoop.mapred.TaskTracker
 
getProxy(Class<?>, long, InetSocketAddress, Configuration, SocketFactory) - Static method in class org.apache.hadoop.ipc.RPC
Construct a client-side proxy object that implements the named protocol, talking to a server at the named address.
getProxy(Class<?>, long, InetSocketAddress, UserGroupInformation, Configuration, SocketFactory) - Static method in class org.apache.hadoop.ipc.RPC
Construct a client-side proxy object that implements the named protocol, talking to a server at the named address.
getProxy(Class<?>, long, InetSocketAddress, Configuration) - Static method in class org.apache.hadoop.ipc.RPC
Construct a client-side proxy object with the default SocketFactory
getQueueInfo(String) - Method in class org.apache.hadoop.mapred.JobClient
Gets the queue information associated to a particular Job Queue
getQueueInfo(String) - Method in class org.apache.hadoop.mapred.JobTracker
 
getQueueManager() - Method in class org.apache.hadoop.mapred.JobTracker
Return the QueueManager associated with the JobTracker.
getQueueName() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Return the name of the queue to which this job is submitted.
getQueueName() - Method in class org.apache.hadoop.mapred.JobProfile
Get the name of the queue to which the job is submitted.
getQueueName() - Method in class org.apache.hadoop.mapred.JobQueueInfo
Get the queue name from JobQueueInfo
getQueues() - Method in class org.apache.hadoop.mapred.JobClient
Return an array of queue information objects about all the Job Queues configured.
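A minimal sketch of walking the configured queues and the jobs submitted to each; it assumes a reachable JobTracker configured in the default JobConf.

    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.JobQueueInfo;
    import org.apache.hadoop.mapred.JobStatus;

    public class QueueListExample {
      public static void main(String[] args) throws Exception {
        JobClient client = new JobClient(new JobConf());
        for (JobQueueInfo queue : client.getQueues()) {
          JobStatus[] jobs = client.getJobsFromQueue(queue.getQueueName());
          System.out.println(queue.getQueueName() + ": "
              + (jobs == null ? 0 : jobs.length) + " job(s)");
        }
      }
    }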
getQueues() - Method in class org.apache.hadoop.mapred.JobTracker
 
getQuota() - Method in class org.apache.hadoop.fs.ContentSummary
Return the directory quota
getRange(String, String) - Method in class org.apache.hadoop.conf.Configuration
Parse the given attribute as a set of integer ranges
getRaw(String) - Method in class org.apache.hadoop.conf.Configuration
Get the value of the name property, without doing variable expansion.
getRaw() - Method in class org.apache.hadoop.fs.LocalFileSystem
 
getRawFileSystem() - Method in class org.apache.hadoop.fs.ChecksumFileSystem
get the raw file system
getReader() - Method in class org.apache.hadoop.contrib.failmon.LogParser
Return the BufferedReader that reads the log file.
getReaders(FileSystem, Path, Configuration) - Static method in class org.apache.hadoop.mapred.MapFileOutputFormat
Open the output generated by this format.
getReaders(Configuration, Path) - Static method in class org.apache.hadoop.mapred.SequenceFileOutputFormat
Deprecated. Open the output generated by this format.
getReadyJobs() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
 
getRealTaskLogFileLocation(TaskAttemptID, TaskLog.LogName) - Static method in class org.apache.hadoop.mapred.TaskLog
 
getRecordName() - Method in interface org.apache.hadoop.metrics.MetricsRecord
Returns the record name.
getRecordName() - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Returns the record name.
getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.contrib.index.example.LineDocInputFormat
 
getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.examples.MultiFileWordCount.MyInputFormat
 
getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.examples.SleepJob.SleepInputFormat
 
getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.examples.terasort.TeraInputFormat
 
getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.FileInputFormat
Deprecated.  
getRecordReader(InputSplit, JobConf, Reporter) - Method in interface org.apache.hadoop.mapred.InputFormat
Deprecated. Get the RecordReader for the given InputSplit.
getRecordReader(InputSplit, JobConf, Reporter) - Method in interface org.apache.hadoop.mapred.join.ComposableInputFormat
 
getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.join.CompositeInputFormat
Construct a CompositeRecordReader for the children of this InputFormat as defined in the init expression.
getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.KeyValueTextInputFormat
 
getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.lib.CombineFileInputFormat
This is not implemented yet.
getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat
Get the RecordReader for the given InputSplit.
getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.lib.DelegatingInputFormat
 
getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.lib.NLineInputFormat
 
getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.MultiFileInputFormat
Deprecated.  
getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat
 
getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.SequenceFileAsTextInputFormat
 
getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.SequenceFileInputFilter
Create a record reader for the given split
getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.SequenceFileInputFormat
Deprecated.  
getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.mapred.TextInputFormat
Deprecated.  
getRecordReader(InputSplit, JobConf, Reporter) - Method in class org.apache.hadoop.streaming.StreamInputFormat
 
getRecordReaderQueue() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
Return sorted list of RecordReaders for this composite.
getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateOutputFormat
 
getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.examples.terasort.TeraOutputFormat
 
getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.FileOutputFormat
 
getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.lib.db.DBOutputFormat
Get the RecordWriter for the given job.
getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.lib.MultipleOutputFormat
Create a composite record writer that can write key/value data to different output files
getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.lib.NullOutputFormat
Deprecated.  
getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.MapFileOutputFormat
 
getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in interface org.apache.hadoop.mapred.OutputFormat
Deprecated. Get the RecordWriter for the given job.
getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat
 
getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.SequenceFileOutputFormat
Deprecated.  
getRecordWriter(FileSystem, JobConf, String, Progressable) - Method in class org.apache.hadoop.mapred.TextOutputFormat
Deprecated.  
getRecordWriter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
 
getRecordWriter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.NullOutputFormat
 
getRecordWriter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat
 
getRecordWriter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.TextOutputFormat
 
getRecordWriter(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.OutputFormat
Get the RecordWriter for the given task.
getRecoveryDuration() - Method in class org.apache.hadoop.mapred.JobTracker
How long the jobtracker took to recover from restart.
getReduceDebugScript() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Get the reduce task's debug Script
getReducerClass() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Get the Reducer class for the job.
getReducerClass() - Method in class org.apache.hadoop.mapreduce.JobContext
Get the Reducer class for the job.
getReducerMaxSkipGroups(Configuration) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
Get the number of acceptable skip groups surrounding the bad group PER bad group in reducer.
getReduceSpeculativeExecution() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Should speculative execution be used for this job for reduce tasks? Defaults to true.
getReduceTaskReports(JobID) - Method in class org.apache.hadoop.mapred.JobClient
Get the information of the current state of the reduce tasks of a job.
getReduceTaskReports(String) - Method in class org.apache.hadoop.mapred.JobClient
Deprecated. Applications should rather use JobClient.getReduceTaskReports(JobID)
getReduceTaskReports(JobID) - Method in class org.apache.hadoop.mapred.JobTracker
 
getReduceTasks() - Method in class org.apache.hadoop.mapred.ClusterStatus
Get the number of currently running reduce tasks in the cluster.
getRemainingArgs() - Method in class org.apache.hadoop.util.GenericOptionsParser
Returns an array of Strings containing only application-specific arguments.
getRemoteAddress() - Static method in class org.apache.hadoop.ipc.Server
Returns remote address as a string when invoked inside an RPC.
getRemoteIp() - Static method in class org.apache.hadoop.ipc.Server
Returns the remote side IP address when invoked inside an RPC; returns null in case of an error.
getReplication() - Method in class org.apache.hadoop.fs.FileStatus
Get the replication factor of a file.
getReplication(Path) - Method in class org.apache.hadoop.fs.FileSystem
Deprecated. Use getFileStatus() instead
getReplication(Path) - Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
Deprecated. 
getReport() - Method in class org.apache.hadoop.contrib.utils.join.JobBase
log the counters
getReport() - Method in class org.apache.hadoop.mapred.lib.aggregate.DoubleValueSum
 
getReport() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMax
 
getReport() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMin
 
getReport() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueSum
 
getReport() - Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMax
 
getReport() - Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMin
 
getReport() - Method in class org.apache.hadoop.mapred.lib.aggregate.UniqValueCount
 
getReport() - Method in interface org.apache.hadoop.mapred.lib.aggregate.ValueAggregator
 
getReport() - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueHistogram
 
getReportDetails() - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueHistogram
 
getReportItems() - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueHistogram
 
getResource(String) - Method in class org.apache.hadoop.conf.Configuration
Get the URL for the named resource.
getResult() - Method in class org.apache.hadoop.examples.Sort
Get the last job that was run using this instance.
getRevision() - Static method in class org.apache.hadoop.util.VersionInfo
Get the subversion revision number for the root directory
getRotations() - Method in class org.apache.hadoop.examples.dancing.Pentomino.Piece
 
getRpcOpsAvgProcessingTime() - Method in interface org.apache.hadoop.ipc.metrics.RpcMgtMBean
Average time for RPC Operations in last interval
getRpcOpsAvgProcessingTimeMax() - Method in interface org.apache.hadoop.ipc.metrics.RpcMgtMBean
The Maximum RPC Operation Processing Time since reset was called
getRpcOpsAvgProcessingTimeMin() - Method in interface org.apache.hadoop.ipc.metrics.RpcMgtMBean
The Minimum RPC Operation Processing Time since reset was called
getRpcOpsAvgQueueTime() - Method in interface org.apache.hadoop.ipc.metrics.RpcMgtMBean
The Average RPC Operation Queued Time in the last interval
getRpcOpsAvgQueueTimeMax() - Method in interface org.apache.hadoop.ipc.metrics.RpcMgtMBean
The Maximum RPC Operation Queued Time since reset was called
getRpcOpsAvgQueueTimeMin() - Method in interface org.apache.hadoop.ipc.metrics.RpcMgtMBean
The Minimum RPC Operation Queued Time since reset was called
getRpcOpsNumber() - Method in interface org.apache.hadoop.ipc.metrics.RpcMgtMBean
Number of RPC Operations in the last interval
getRunnable() - Method in class org.apache.hadoop.util.Daemon
 
getRunningJobs() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
 
getRunningJobs() - Method in class org.apache.hadoop.mapred.JobTracker
Version that is called from a timer thread, and therefore needs to be careful to synchronize.
getRunningTaskAttempts() - Method in class org.apache.hadoop.mapred.TaskReport
Get the running task attempt IDs for this task
getRunState() - Method in class org.apache.hadoop.mapred.JobStatus
 
getSample(InputFormat<K, V>, JobConf) - Method in class org.apache.hadoop.mapred.lib.InputSampler.IntervalSampler
For each split sampled, emit when the ratio of the number of records retained to the total record count is less than the specified frequency.
getSample(InputFormat<K, V>, JobConf) - Method in class org.apache.hadoop.mapred.lib.InputSampler.RandomSampler
Randomize the split order, then take the specified number of keys from each split sampled, where each key is selected with the specified probability and possibly replaced by a subsequently selected key when the quota of keys from that split is satisfied.
getSample(InputFormat<K, V>, JobConf) - Method in interface org.apache.hadoop.mapred.lib.InputSampler.Sampler
For a given job, collect and return a subset of the keys from the input data.
getSample(InputFormat<K, V>, JobConf) - Method in class org.apache.hadoop.mapred.lib.InputSampler.SplitSampler
From each split sampled, take the first numSamples / numSplits records.
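A minimal sketch of driving one of these samplers from a driver program; the input path is made up, and the RandomSampler arguments (keep roughly 1% of keys, up to 1000 samples) are illustrative values only.

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.InputFormat;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.TextInputFormat;
    import org.apache.hadoop.mapred.lib.InputSampler;

    public class SamplerSketch {
      public static void main(String[] args) throws Exception {
        JobConf job = new JobConf(SamplerSketch.class);
        job.setInputFormat(TextInputFormat.class);
        FileInputFormat.setInputPaths(job, new Path("/user/example/input"));  // made-up path
        // Keep each key with probability 0.01, collecting at most 1000 samples.
        InputSampler.Sampler<LongWritable, Text> sampler =
            new InputSampler.RandomSampler<LongWritable, Text>(0.01, 1000);
        @SuppressWarnings("unchecked")
        InputFormat<LongWritable, Text> inf =
            (InputFormat<LongWritable, Text>) job.getInputFormat();
        Object[] sample = sampler.getSample(inf, job);
        System.out.println("sampled " + sample.length + " keys");
      }
    }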
getSchedulingInfo() - Method in class org.apache.hadoop.mapred.JobQueueInfo
Gets the scheduling information associated with a particular job queue.
getSchedulingInfo() - Method in class org.apache.hadoop.mapred.JobStatus
Gets the scheduling information associated with a particular job.
getScheme() - Method in class org.apache.hadoop.fs.FileSystem.Statistics
Get the uri scheme associated with this statistics object.
getSecond() - Method in class org.apache.hadoop.examples.SecondarySort.IntPair
 
getSecretAccessKey() - Method in class org.apache.hadoop.fs.s3.S3Credentials
 
getSelectQuery() - Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat.DBRecordReader
Returns the query for selecting the records, subclasses can override this for custom behaviour.
getSequenceFileOutputKeyClass(JobConf) - Static method in class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat
Get the key class for the SequenceFile
getSequenceFileOutputValueClass(JobConf) - Static method in class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat
Get the value class for the SequenceFile
getSerialization(Class<T>) - Method in class org.apache.hadoop.io.serializer.SerializationFactory
 
getSerializedLength() - Method in class org.apache.hadoop.fs.s3.INode
 
getSerializer(Class<Serializable>) - Method in class org.apache.hadoop.io.serializer.JavaSerialization
 
getSerializer(Class<T>) - Method in interface org.apache.hadoop.io.serializer.Serialization
 
getSerializer(Class<T>) - Method in class org.apache.hadoop.io.serializer.SerializationFactory
 
getSerializer(Class<Writable>) - Method in class org.apache.hadoop.io.serializer.WritableSerialization
 
getServer(Object, String, int, Configuration) - Static method in class org.apache.hadoop.ipc.RPC
Construct a server for a protocol implementation instance listening on a port and address.
getServer(Object, String, int, int, boolean, Configuration) - Static method in class org.apache.hadoop.ipc.RPC
Construct a server for a protocol implementation instance listening on a port and address.
getServerAddress(Configuration, String, String, String) - Static method in class org.apache.hadoop.net.NetUtils
Deprecated. 
getServerVersion() - Method in exception org.apache.hadoop.ipc.RPC.VersionMismatch
Get the version that the server agreed to.
getServiceKey() - Method in class org.apache.hadoop.security.authorize.Service
Get the configuration key for the service.
getServices() - Method in class org.apache.hadoop.mapred.MapReducePolicyProvider
 
getServices() - Method in class org.apache.hadoop.security.authorize.PolicyProvider
Get the Service definitions from the PolicyProvider.
getSessionId() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Get the user-specified session identifier.
getSetupTaskReports(JobID) - Method in class org.apache.hadoop.mapred.JobClient
Get the information of the current state of the setup tasks of a job.
getSetupTaskReports(JobID) - Method in class org.apache.hadoop.mapred.JobTracker
 
getShape(boolean, int) - Method in class org.apache.hadoop.examples.dancing.Pentomino.Piece
 
getSize() - Method in class org.apache.hadoop.io.BytesWritable
Deprecated. Use BytesWritable.getLength() instead.
getSize() - Method in interface org.apache.hadoop.io.SequenceFile.ValueBytes
Size of stored data.
getSize() - Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat.WritableValueBytes
 
getSkipOutputPath(Configuration) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
Get the directory to which skipped records are written.
getSocketFactory(Configuration, Class<?>) - Static method in class org.apache.hadoop.net.NetUtils
Get the socket factory for the given class according to its configuration parameter hadoop.rpc.socket.factory.class.<ClassName>.
getSocketFactoryFromProperty(Configuration, String) - Static method in class org.apache.hadoop.net.NetUtils
Get the socket factory corresponding to the given proxy URI.
getSortComparator() - Method in class org.apache.hadoop.mapreduce.JobContext
Get the RawComparator comparator used to compare keys.
getSpace(int) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
getSpaceConsumed() - Method in class org.apache.hadoop.fs.ContentSummary
Returns (disk) space consumed
getSpaceQuota() - Method in class org.apache.hadoop.fs.ContentSummary
Returns (disk) space quota
getSpeculativeExecution() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Should speculative execution be used for this job? Defaults to true.
getSplitHosts(BlockLocation[], long, long, NetworkTopology) - Method in class org.apache.hadoop.mapred.FileInputFormat
Deprecated. This function identifies and returns the hosts that contribute most for a given split.
getSplits(int) - Method in class org.apache.hadoop.examples.dancing.Pentomino
Generate a list of prefixes to a given depth
getSplits(JobConf, int) - Method in class org.apache.hadoop.examples.SleepJob.SleepInputFormat
 
getSplits(JobConf, int) - Method in class org.apache.hadoop.examples.terasort.TeraInputFormat
 
getSplits(JobConf, int) - Method in class org.apache.hadoop.mapred.FileInputFormat
Deprecated. Splits files returned by FileInputFormat.listStatus(JobConf) when they're too big.
getSplits(JobConf, int) - Method in interface org.apache.hadoop.mapred.InputFormat
Deprecated. Logically split the set of input files for the job.
getSplits(JobConf, int) - Method in class org.apache.hadoop.mapred.join.CompositeInputFormat
Build a CompositeInputSplit from the child InputFormats by assigning the ith split from each child to the ith composite split.
getSplits(JobConf, int) - Method in class org.apache.hadoop.mapred.lib.CombineFileInputFormat
 
getSplits(JobConf, int) - Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat
Logically split the set of input files for the job.
getSplits(JobConf, int) - Method in class org.apache.hadoop.mapred.lib.DelegatingInputFormat
 
getSplits(JobConf, int) - Method in class org.apache.hadoop.mapred.lib.NLineInputFormat
Logically splits the set of input files for the job, splits N lines of the input as one split.
getSplits(JobConf, int) - Method in class org.apache.hadoop.mapred.MultiFileInputFormat
Deprecated.  
getSplits(JobContext) - Method in class org.apache.hadoop.mapreduce.InputFormat
Logically split the set of input files for the job.
getSplits(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
Generate the list of files and make them into FileSplits.
getStackTrace() - Method in exception org.apache.hadoop.security.authorize.AuthorizationException
 
getStart() - Method in class org.apache.hadoop.mapred.FileSplit
Deprecated. The position of the first byte in the file to process.
getStart() - Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat.DBInputSplit
 
getStart() - Method in class org.apache.hadoop.mapreduce.lib.input.FileSplit
The position of the first byte in the file to process.
getStartOffsets() - Method in class org.apache.hadoop.mapred.lib.CombineFileSplit
Returns an array containing the start offsets of the files in the split
getStartTime() - Method in class org.apache.hadoop.mapred.JobStatus
 
getStartTime() - Method in class org.apache.hadoop.mapred.JobTracker
 
getStartTime() - Method in class org.apache.hadoop.mapred.TaskReport
Get start time of task.
getState(String) - Static method in class org.apache.hadoop.contrib.failmon.PersistentState
Read and return the state of parsing for a particular log file.
getState() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
 
getState() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
 
getState() - Method in class org.apache.hadoop.mapred.TaskReport
The most recent state, reported by a Reporter.
getStaticResolution(String) - Static method in class org.apache.hadoop.net.NetUtils
Retrieves the resolved name for the passed host.
getStatistics() - Static method in class org.apache.hadoop.fs.FileSystem
Deprecated. use FileSystem.getAllStatistics() instead
getStatistics(String, Class<? extends FileSystem>) - Static method in class org.apache.hadoop.fs.FileSystem
Get the statistics for a particular file system
getStatus() - Method in class org.apache.hadoop.mapreduce.TaskAttemptContext
Get the last set status message.
getStr() - Method in class org.apache.hadoop.mapred.join.Parser.StrToken
 
getStr() - Method in class org.apache.hadoop.mapred.join.Parser.Token
 
getStringCollection(String) - Method in class org.apache.hadoop.conf.Configuration
Get the comma delimited values of the name property as a collection of Strings.
getStringCollection(String) - Static method in class org.apache.hadoop.util.StringUtils
Returns a collection of strings.
getStrings(String) - Method in class org.apache.hadoop.conf.Configuration
Get the comma delimited values of the name property as an array of Strings.
getStrings(String, String...) - Method in class org.apache.hadoop.conf.Configuration
Get the comma delimited values of the name property as an array of Strings.
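A small sketch of the two Configuration.getStrings overloads listed above; the property names my.hosts and my.racks are invented for illustration.

    import org.apache.hadoop.conf.Configuration;

    public class ConfStringsSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.set("my.hosts", "node1.example.com,node2.example.com,node3.example.com");
        String[] hosts = conf.getStrings("my.hosts");           // comma-delimited value split into an array
        String[] racks = conf.getStrings("my.racks", "rack0");  // property unset, so the default is returned
        System.out.println(hosts.length + " hosts, default rack: " + racks[0]);
      }
    }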
getStrings(String) - Static method in class org.apache.hadoop.util.StringUtils
Returns an arraylist of strings.
getSubject(UserGroupInformation) - Static method in class org.apache.hadoop.security.SecurityUtil
Get the Subject for the user identified by ugi.
getSuccessfulJobs() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
 
getSuccessfulTaskAttempt() - Method in class org.apache.hadoop.mapred.TaskReport
Get the attempt ID that took this task to completion
GetSuffix(int) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
getSum() - Method in class org.apache.hadoop.mapred.lib.aggregate.DoubleValueSum
 
getSum() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueSum
 
getSupportedCompressionAlgorithms() - Static method in class org.apache.hadoop.io.file.tfile.TFile
Get names of supported compression algorithms.
getSymlink(Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
This method checks to see if symlinks are to be created for the localized cache files in the current working directory
getSystemDir() - Method in class org.apache.hadoop.mapred.JobClient
Grab the jobtracker system directory path where job-specific files are to be placed.
getSystemDir() - Method in class org.apache.hadoop.mapred.JobTracker
 
getTabSize(int) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
getTag() - Method in class org.apache.hadoop.contrib.utils.join.TaggedMapOutput
 
getTag(String) - Method in class org.apache.hadoop.metrics.spi.OutputRecord
Returns a tag object, which can be a String, Integer, Short or Byte.
getTagNames() - Method in class org.apache.hadoop.metrics.spi.OutputRecord
Returns the set of tag names
getTask(JVMId) - Method in class org.apache.hadoop.mapred.TaskTracker
Called upon startup by the child process, to fetch Task data.
getTaskAttemptID() - Method in class org.apache.hadoop.mapred.TaskAttemptContext
Deprecated. Get the taskAttemptID.
getTaskAttemptId() - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
Returns task id.
getTaskAttemptID() - Method in class org.apache.hadoop.mapreduce.TaskAttemptContext
Get the unique name for this task attempt.
getTaskAttemptIDsPattern(String, Integer, Boolean, Integer, Integer) - Static method in class org.apache.hadoop.mapred.TaskAttemptID
Deprecated. 
getTaskAttempts() - Method in class org.apache.hadoop.mapred.JobHistory.Task
Returns all task attempts for this task.
getTaskCompletionEvents(JobID, int, int) - Method in class org.apache.hadoop.mapred.JobTracker
 
getTaskCompletionEvents(int) - Method in interface org.apache.hadoop.mapred.RunningJob
Get events indicating completion (success/failure) of component tasks.
getTaskCompletionEvents(int) - Method in class org.apache.hadoop.mapreduce.Job
Get events indicating completion (success/failure) of component tasks.
getTaskDiagnostics(TaskAttemptID) - Method in class org.apache.hadoop.mapred.JobTracker
Get the diagnostics for a given task
getTaskDiagnostics(TaskAttemptID) - Method in interface org.apache.hadoop.mapred.RunningJob
Gets the diagnostic messages for a given task attempt.
getTaskID() - Method in class org.apache.hadoop.mapred.TaskAttemptID
Deprecated.  
getTaskId() - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
Deprecated. use TaskCompletionEvent.getTaskAttemptId() instead.
getTaskId() - Method in class org.apache.hadoop.mapred.TaskLogAppender
Getter/Setter methods for log4j.
getTaskId() - Method in class org.apache.hadoop.mapred.TaskReport
Deprecated. use TaskReport.getTaskID() instead
getTaskID() - Method in class org.apache.hadoop.mapred.TaskReport
The id of the task.
getTaskID() - Method in class org.apache.hadoop.mapreduce.TaskAttemptID
Returns the TaskID object that this task attempt belongs to
getTaskIDsPattern(String, Integer, Boolean, Integer) - Static method in class org.apache.hadoop.mapred.TaskID
Deprecated. 
getTaskInfo(JobConf) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
getTaskLogFile(TaskAttemptID, TaskLog.LogName) - Static method in class org.apache.hadoop.mapred.TaskLog
 
getTaskLogLength(JobConf) - Static method in class org.apache.hadoop.mapred.TaskLog
Get the desired maximum length of task's logs.
getTaskLogsUrl(JobHistory.TaskAttempt) - Static method in class org.apache.hadoop.mapred.JobHistory
Return the TaskLogsUrl of a particular TaskAttempt
getTaskLogUrl(String, String, String) - Static method in class org.apache.hadoop.mapred.TaskLogServlet
Construct the taskLogUrl
getTaskMemoryManager() - Method in class org.apache.hadoop.mapred.TaskTracker
 
getTaskOutputFilter(JobConf) - Static method in class org.apache.hadoop.mapred.JobClient
Get the task output filter out of the JobConf.
getTaskOutputFilter() - Method in class org.apache.hadoop.mapred.JobClient
Deprecated. 
getTaskOutputPath(JobConf, String) - Static method in class org.apache.hadoop.mapred.FileOutputFormat
Helper function to create the task's temporary output directory and return the path to the task's output file.
getTaskRunTime() - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
Returns time (in millisec) the task took to complete.
getTaskStatus() - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
Returns enum Status.SUCCESS or Status.FAILURE.
getTaskTracker(String) - Method in class org.apache.hadoop.mapred.JobTracker
 
getTaskTrackerHttp() - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
http location of the tasktracker where this task ran.
getTaskTrackerInstrumentation() - Method in class org.apache.hadoop.mapred.TaskTracker
 
getTaskTrackerReportAddress() - Method in class org.apache.hadoop.mapred.TaskTracker
Return the port to which the tasktracker is bound.
getTaskTrackers() - Method in class org.apache.hadoop.mapred.ClusterStatus
Get the number of task trackers in the cluster.
getTerm() - Method in class org.apache.hadoop.contrib.index.mapred.DocumentAndOp
Get the term.
getText() - Method in class org.apache.hadoop.contrib.index.example.LineDocTextAndOp
Get the text that represents a document.
getText() - Method in class org.apache.hadoop.contrib.index.mapred.DocumentID
The text of the document id.
getTimestamp(Configuration, URI) - Static method in class org.apache.hadoop.filecache.DistributedCache
Returns mtime of a given cache file on hdfs.
getTip(TaskID) - Method in class org.apache.hadoop.mapred.JobTracker
Returns specified TaskInProgress, or null.
getToken(int) - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
getTopologyPaths() - Method in class org.apache.hadoop.fs.BlockLocation
Get the list of network topology paths for each of the hosts.
getTotalLogFileSize() - Method in class org.apache.hadoop.mapred.TaskLogAppender
 
getTotalSubmissions() - Method in class org.apache.hadoop.mapred.JobTracker
 
getTrackerIdentifier() - Method in class org.apache.hadoop.mapred.JobTracker
Get the unique identifier (ie.
getTrackerPort() - Method in class org.apache.hadoop.mapred.JobTracker
 
getTrackingURL() - Method in interface org.apache.hadoop.mapred.RunningJob
Get the URL where some job progress information will be displayed.
getTrackingURL() - Method in class org.apache.hadoop.mapreduce.Job
Get the URL where some job progress information will be displayed.
getTTExpiryInterval() - Method in class org.apache.hadoop.mapred.ClusterStatus
Get the tasktracker expiry interval for the cluster
getType() - Method in class org.apache.hadoop.mapred.join.Parser.Token
 
getTypeID() - Method in class org.apache.hadoop.record.meta.FieldTypeInfo
get the field's TypeID object
getTypes() - Method in class org.apache.hadoop.io.GenericWritable
Return all classes that may be wrapped.
getTypeVal() - Method in class org.apache.hadoop.record.meta.TypeID
Get the type value.
getUlimitMemoryCommand(Configuration) - Static method in class org.apache.hadoop.util.Shell
Get the Unix command for setting the maximum virtual memory available to a given child process.
getUMask(Configuration) - Static method in class org.apache.hadoop.fs.permission.FsPermission
Get the user file creation mask (umask)
getUniqueFile(TaskAttemptContext, String, String) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
Generate a unique filename, based on the task id, name, and extension
getUniqueItems() - Method in class org.apache.hadoop.mapred.lib.aggregate.UniqValueCount
 
getUniqueName(JobConf, String) - Static method in class org.apache.hadoop.mapred.FileOutputFormat
Helper function to generate a name that is unique for the task.
getUri() - Method in class org.apache.hadoop.fs.FileSystem
Returns a URI whose scheme and authority identify this FileSystem.
getUri() - Method in class org.apache.hadoop.fs.FilterFileSystem
Returns a URI whose scheme and authority identify this FileSystem.
getUri() - Method in class org.apache.hadoop.fs.ftp.FTPFileSystem
 
getUri() - Method in class org.apache.hadoop.fs.HarFileSystem
Returns the uri of this filesystem.
getUri() - Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
 
getUri() - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
getUri() - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
getUri() - Method in class org.apache.hadoop.fs.s3native.NativeS3FileSystem
 
getURIs(String, String) - Method in class org.apache.hadoop.streaming.StreamJob
get the uris of all the files/caches
getURL() - Method in class org.apache.hadoop.mapred.JobProfile
Get the link to the web-ui for details of the job.
getUrl() - Static method in class org.apache.hadoop.util.VersionInfo
Get the subversion URL for the root Hadoop directory.
getUsed() - Method in class org.apache.hadoop.fs.DF
 
getUsed() - Method in class org.apache.hadoop.fs.DU
 
getUsed() - Method in class org.apache.hadoop.fs.FileSystem
Return the total size of all files in the filesystem.
getUsedMemory() - Method in class org.apache.hadoop.mapred.ClusterStatus
Get the total heap memory used by the JobTracker
getUseNewMapper() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Should the framework use the new context-object code for running the mapper?
getUseNewReducer() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Should the framework use the new context-object code for running the reducer?
getUser() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Get the reported username for this job.
getUser() - Method in class org.apache.hadoop.mapred.JobProfile
Get the user id.
getUser() - Static method in class org.apache.hadoop.util.VersionInfo
The user that compiled Hadoop.
getUserAction() - Method in class org.apache.hadoop.fs.permission.FsPermission
Return user FsAction.
getUserName() - Method in class org.apache.hadoop.fs.permission.PermissionStatus
Return user name
getUserName(JobConf) - Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
Get the user name from the job conf
getUsername() - Method in class org.apache.hadoop.mapred.JobStatus
 
getUserName() - Method in class org.apache.hadoop.security.UnixUserGroupInformation
Return the user's name
getUserName() - Method in class org.apache.hadoop.security.UserGroupInformation
Get username
getUsers() - Method in class org.apache.hadoop.security.SecurityUtil.AccessControlList
 
getVal() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMax
 
getVal() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMin
 
getVal() - Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMax
 
getVal() - Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMin
 
getValue(BytesWritable) - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner.Entry
Copy the value into BytesWritable.
getValue(byte[]) - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner.Entry
Copy value into user-supplied buffer.
getValue(byte[], int) - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner.Entry
Copy value into user-supplied buffer.
getValue() - Method in interface org.apache.hadoop.io.SequenceFile.Sorter.RawKeyValueIterator
Gets the current raw value
getValue() - Method in interface org.apache.hadoop.mapred.RawKeyValueIterator
Gets the current raw value.
getValue() - Method in class org.apache.hadoop.mapreduce.Counter
What is the current value of this counter?
getValue() - Method in class org.apache.hadoop.util.DataChecksum
 
getValueClass() - Method in class org.apache.hadoop.io.ArrayWritable
 
getValueClass() - Method in class org.apache.hadoop.io.MapFile.Reader
Returns the class of values in this file.
getValueClass() - Method in class org.apache.hadoop.io.SequenceFile.Reader
Returns the class of values in this file.
getValueClass() - Method in class org.apache.hadoop.io.SequenceFile.Writer
Returns the class of values in this file.
getValueClass() - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
The class of value that must be passed to SequenceFileRecordReader.next(Object, Object).
getValueClassName() - Method in class org.apache.hadoop.io.SequenceFile.Reader
Returns the name of the value class.
getValueClassName() - Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
Retrieve the name of the value class for this SequenceFile.
getValueLength() - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner.Entry
Get the length of the value.
getValues() - Method in class org.apache.hadoop.mapreduce.ReduceContext
Iterate through the values for the current key, reusing the same value object, which is stored in the context.
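Because the iterable returned here reuses a single value object, a reducer should consume each value as it iterates rather than keep references to it. A minimal new-API reducer sketch illustrating that pattern:

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;

    // The Iterable handed to reduce() comes from the context's getValues(); every
    // iteration overwrites the same IntWritable, so sum as you go.
    public class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
      private final IntWritable result = new IntWritable();

      @Override
      protected void reduce(Text key, Iterable<IntWritable> values, Context context)
          throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) {
          sum += v.get();
        }
        result.set(sum);
        context.write(key, result);
      }
    }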
getValueStream() - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner.Entry
Stream access to value.
getValueTypeID() - Method in class org.apache.hadoop.record.meta.MapTypeID
get the TypeID of the map's value element
getVectorSize() - Method in class org.apache.hadoop.util.bloom.BloomFilter
 
getVersion() - Method in class org.apache.hadoop.contrib.index.mapred.Shard
Get the version number of the entire index.
getVersion() - Method in interface org.apache.hadoop.fs.s3.FileSystemStore
 
getVersion() - Method in class org.apache.hadoop.io.VersionedWritable
Return the version number of the current implementation.
getVersion() - Static method in class org.apache.hadoop.util.VersionInfo
Get the Hadoop version.
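The static VersionInfo accessors listed in this index (getVersion, getRevision, getUrl, getUser) can be combined into a one-line build report, as in this small sketch:

    import org.apache.hadoop.util.VersionInfo;

    public class ShowVersion {
      public static void main(String[] args) {
        System.out.println("Hadoop " + VersionInfo.getVersion()
            + ", revision " + VersionInfo.getRevision()
            + ", from " + VersionInfo.getUrl()
            + ", compiled by " + VersionInfo.getUser());
      }
    }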
getVIntSize(long) - Static method in class org.apache.hadoop.io.WritableUtils
Get the encoded length of an integer stored in a variable-length format
getVIntSize(long) - Static method in class org.apache.hadoop.record.Utils
Get the encoded length of an integer stored in a variable-length format
getVirtualMemorySize() - Method in class org.apache.hadoop.util.LinuxMemoryCalculatorPlugin
Obtain the total size of the virtual memory present in the system.
getVirtualMemorySize() - Method in class org.apache.hadoop.util.MemoryCalculatorPlugin
Obtain the total size of the virtual memory present in the system.
getWaitingJobs() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
 
getWarn() - Static method in class org.apache.hadoop.metrics.jvm.EventCounter
 
getWebAppsPath() - Method in class org.apache.hadoop.http.HttpServer
Get the pathname to the webapps files.
getWeight() - Method in class org.apache.hadoop.util.bloom.Key
 
getWorkingDirectory() - Method in class org.apache.hadoop.fs.FileSystem
Get the current working directory for the given file system
getWorkingDirectory() - Method in class org.apache.hadoop.fs.FilterFileSystem
Get the current working directory for the given file system
getWorkingDirectory() - Method in class org.apache.hadoop.fs.ftp.FTPFileSystem
 
getWorkingDirectory() - Method in class org.apache.hadoop.fs.HarFileSystem
Return the top level archive.
getWorkingDirectory() - Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
 
getWorkingDirectory() - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
getWorkingDirectory() - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
getWorkingDirectory() - Method in class org.apache.hadoop.fs.s3native.NativeS3FileSystem
 
getWorkingDirectory() - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Get the current working directory for the default file system.
getWorkingDirectory() - Method in class org.apache.hadoop.mapreduce.JobContext
Get the current working directory for the default file system.
getWorkOutputPath(JobConf) - Static method in class org.apache.hadoop.mapred.FileOutputFormat
Get the Path to the task's temporary output directory for the map-reduce job

Tasks' Side-Effect Files
getWorkOutputPath(TaskInputOutputContext<?, ?, ?, ?>) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
Get the Path to the task's temporary output directory for the map-reduce job

Tasks' Side-Effect Files
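A minimal sketch of using the old-API getWorkOutputPath from inside a task to create a side-effect file; the file name extra-output.txt is arbitrary. Files written here are promoted to the job output directory only when the task commits successfully.

    import java.io.IOException;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobConf;

    public class SideEffectSketch {
      public static void writeSideFile(JobConf conf) throws IOException {
        // Task-scoped temporary output directory for this attempt.
        Path workDir = FileOutputFormat.getWorkOutputPath(conf);
        Path sideFile = new Path(workDir, "extra-output.txt");  // illustrative name
        FileSystem fs = sideFile.getFileSystem(conf);
        FSDataOutputStream out = fs.create(sideFile);
        out.writeUTF("side-effect data");
        out.close();
      }
    }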
getWorkPath() - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
Get the directory that the task should write results into
getWrappedStream() - Method in class org.apache.hadoop.fs.FSDataOutputStream
 
getZlibCompressor(Configuration) - Static method in class org.apache.hadoop.io.compress.zlib.ZlibFactory
Return the appropriate implementation of the zlib compressor.
getZlibCompressorType(Configuration) - Static method in class org.apache.hadoop.io.compress.zlib.ZlibFactory
Return the appropriate type of the zlib compressor.
getZlibDecompressor(Configuration) - Static method in class org.apache.hadoop.io.compress.zlib.ZlibFactory
Return the appropriate implementation of the zlib decompressor.
getZlibDecompressorType(Configuration) - Static method in class org.apache.hadoop.io.compress.zlib.ZlibFactory
Return the appropriate type of the zlib decompressor.
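A short sketch of obtaining zlib (de)compressors through ZlibFactory; whether a native or pure-Java implementation comes back depends on whether the native library loaded, which the printed class names will show.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.compress.Compressor;
    import org.apache.hadoop.io.compress.Decompressor;
    import org.apache.hadoop.io.compress.zlib.ZlibFactory;

    public class ZlibSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // The factory picks the implementation appropriate for this configuration.
        Compressor comp = ZlibFactory.getZlibCompressor(conf);
        Decompressor decomp = ZlibFactory.getZlibDecompressor(conf);
        System.out.println(comp.getClass().getName() + " / " + decomp.getClass().getName());
      }
    }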
globStatus(Path) - Method in class org.apache.hadoop.fs.FileSystem
Return all the files that match filePattern and are not checksum files.
globStatus(Path, PathFilter) - Method in class org.apache.hadoop.fs.FileSystem
Return an array of FileStatus objects whose path names match pathPattern and is accepted by the user-supplied path filter.
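A small sketch of listing files with globStatus; the glob pattern and paths are invented for illustration.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class GlobSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // Checksum files are excluded automatically; guard against a null result
        // when nothing matches the pattern.
        FileStatus[] matches = fs.globStatus(new Path("/logs/2009-*/part-*"));
        if (matches != null) {
          for (FileStatus st : matches) {
            System.out.println(st.getPath() + " " + st.getLen());
          }
        }
      }
    }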
go() - Method in class org.apache.hadoop.streaming.StreamJob
Deprecated. use StreamJob.run(String[]) instead.
goodClassOrNull(Configuration, String, String) - Static method in class org.apache.hadoop.streaming.StreamUtil
It may seem strange to silently switch behaviour when a String is not a classname; the reason is simplified usage.
GREATER_ICOST - Static variable in class org.apache.hadoop.io.compress.bzip2.CBZip2OutputStream
This constant is accessible by subclasses for historical purposes.
Grep - Class in org.apache.hadoop.examples
 
Group - Class in org.apache.hadoop.security
A group to which a user belongs.
Group(String) - Constructor for class org.apache.hadoop.security.Group
Create a new Group with the given groupname.
GT_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
GzipCodec - Class in org.apache.hadoop.io.compress
This class creates gzip compressors/decompressors.
GzipCodec() - Constructor for class org.apache.hadoop.io.compress.GzipCodec
 
GzipCodec.GzipInputStream - Class in org.apache.hadoop.io.compress
 
GzipCodec.GzipInputStream(InputStream) - Constructor for class org.apache.hadoop.io.compress.GzipCodec.GzipInputStream
 
GzipCodec.GzipInputStream(DecompressorStream) - Constructor for class org.apache.hadoop.io.compress.GzipCodec.GzipInputStream
Allow subclasses to directly set the inflater stream.
GzipCodec.GzipOutputStream - Class in org.apache.hadoop.io.compress
A bridge that wraps around a DeflaterOutputStream to make it a CompressionOutputStream.
GzipCodec.GzipOutputStream(OutputStream) - Constructor for class org.apache.hadoop.io.compress.GzipCodec.GzipOutputStream
 
GzipCodec.GzipOutputStream(CompressorStream) - Constructor for class org.apache.hadoop.io.compress.GzipCodec.GzipOutputStream
Allow subclasses to supply a different stream type here.
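A short sketch of writing a gzip-compressed file through GzipCodec; the output path is arbitrary, and the codec is instantiated via ReflectionUtils so it picks up the configuration.

    import java.io.OutputStream;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.compress.CompressionOutputStream;
    import org.apache.hadoop.io.compress.GzipCodec;
    import org.apache.hadoop.util.ReflectionUtils;

    public class GzipWriteSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // newInstance() hands the Configuration to the codec before use.
        GzipCodec codec = ReflectionUtils.newInstance(GzipCodec.class, conf);
        FileSystem fs = FileSystem.get(conf);
        OutputStream raw = fs.create(new Path("/tmp/example.gz"));  // illustrative path
        CompressionOutputStream out = codec.createOutputStream(raw);
        out.write("hello, gzip\n".getBytes("UTF-8"));
        out.close();
      }
    }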


H

HADOOP_POLICY_FILE - Static variable in class org.apache.hadoop.security.authorize.ConfiguredPolicy
 
HadoopLogParser - Class in org.apache.hadoop.contrib.failmon
An object of this class parses a Hadoop log file to create appropriate EventRecords.
HadoopLogParser(String) - Constructor for class org.apache.hadoop.contrib.failmon.HadoopLogParser
Create a new parser object and try to find the hostname of the node that generated the log
HadoopStreaming - Class in org.apache.hadoop.streaming
The main entrypoint.
HadoopStreaming() - Constructor for class org.apache.hadoop.streaming.HadoopStreaming
 
HadoopVersionAnnotation - Annotation Type in org.apache.hadoop
A package attribute that captures the version of Hadoop that was compiled.
halfDigest() - Method in class org.apache.hadoop.io.MD5Hash
Construct a half-sized version of this MD5.
handle(JobHistory.RecordTypes, Map<JobHistory.Keys, String>) - Method in interface org.apache.hadoop.mapred.JobHistory.Listener
Callback method for history parser.
HarFileSystem - Class in org.apache.hadoop.fs
This is an implementation of the Hadoop Archive Filesystem.
HarFileSystem() - Constructor for class org.apache.hadoop.fs.HarFileSystem
Public constructor for HarFileSystem.
HarFileSystem(FileSystem) - Constructor for class org.apache.hadoop.fs.HarFileSystem
Constructor to create a HarFileSystem with an underlying filesystem.
has(int) - Method in class org.apache.hadoop.mapred.join.TupleWritable
Return true if tuple has an element at the position provided.
hash - Variable in class org.apache.hadoop.util.bloom.Filter
The hash function used to map a key to several positions in the vector.
hash(Key) - Method in class org.apache.hadoop.util.bloom.HashFunction
Hashes a specified key into several integers.
Hash - Class in org.apache.hadoop.util.hash
This class represents a common API for hashing functions.
Hash() - Constructor for class org.apache.hadoop.util.hash.Hash
 
hash(byte[]) - Method in class org.apache.hadoop.util.hash.Hash
Calculate a hash using all bytes from the input argument, and a seed of -1.
hash(byte[], int) - Method in class org.apache.hadoop.util.hash.Hash
Calculate a hash using all bytes from the input argument, and a provided seed value.
hash(byte[], int, int) - Method in class org.apache.hadoop.util.hash.Hash
Calculate a hash using bytes from 0 to length, and the provided seed value
hash(byte[], int, int) - Method in class org.apache.hadoop.util.hash.JenkinsHash
taken from hashlittle() -- hash a variable-length key into a 32-bit value
hash(byte[], int, int) - Method in class org.apache.hadoop.util.hash.MurmurHash
 
HASH_COUNT - Static variable in class org.apache.hadoop.io.BloomMapFile
 
hashBytes(byte[], int) - Static method in class org.apache.hadoop.io.WritableComparator
Compute hash for binary data.
hashCode() - Method in class org.apache.hadoop.contrib.index.mapred.DocumentID
 
hashCode() - Method in class org.apache.hadoop.contrib.index.mapred.Shard
 
hashCode() - Method in class org.apache.hadoop.examples.MultiFileWordCount.WordOffset
 
hashCode() - Method in class org.apache.hadoop.examples.SecondarySort.IntPair
 
hashCode() - Method in class org.apache.hadoop.fs.FileChecksum
hashCode() - Method in class org.apache.hadoop.fs.FileStatus
Returns a hash code value for the object, which is defined as the hash code of the path name.
hashCode() - Method in class org.apache.hadoop.fs.Path
 
hashCode() - Method in class org.apache.hadoop.fs.permission.FsPermission
hashCode() - Method in class org.apache.hadoop.io.BinaryComparable
Return a hash of the bytes returned from getBytes().
hashCode() - Method in class org.apache.hadoop.io.BooleanWritable
 
hashCode() - Method in class org.apache.hadoop.io.BytesWritable
 
hashCode() - Method in class org.apache.hadoop.io.ByteWritable
 
hashCode() - Method in class org.apache.hadoop.io.DoubleWritable
 
hashCode() - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner.Entry
 
hashCode() - Method in class org.apache.hadoop.io.file.tfile.Utils.Version
 
hashCode() - Method in class org.apache.hadoop.io.FloatWritable
 
hashCode() - Method in class org.apache.hadoop.io.IntWritable
 
hashCode() - Method in class org.apache.hadoop.io.LongWritable
 
hashCode() - Method in class org.apache.hadoop.io.MD5Hash
Returns a hash code value for this object.
hashCode() - Method in class org.apache.hadoop.io.NullWritable
 
hashCode() - Method in class org.apache.hadoop.io.SequenceFile.Metadata
 
hashCode() - Method in class org.apache.hadoop.io.SequenceFile.Sorter.SegmentDescriptor
 
hashCode() - Method in class org.apache.hadoop.io.Text
 
hashCode() - Method in class org.apache.hadoop.io.UTF8
Deprecated.  
hashCode() - Method in class org.apache.hadoop.io.VIntWritable
 
hashCode() - Method in class org.apache.hadoop.io.VLongWritable
 
hashCode() - Method in class org.apache.hadoop.mapred.Counters.Group
Deprecated.  
hashCode() - Method in class org.apache.hadoop.mapred.Counters
Deprecated.  
hashCode() - Method in class org.apache.hadoop.mapred.join.TupleWritable
 
hashCode() - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
 
hashCode(byte[], int, int, int) - Method in class org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner
 
hashCode() - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
 
hashCode() - Method in class org.apache.hadoop.mapred.TaskReport
 
hashCode() - Method in class org.apache.hadoop.mapreduce.Counter
 
hashCode() - Method in class org.apache.hadoop.mapreduce.CounterGroup
 
hashCode() - Method in class org.apache.hadoop.mapreduce.Counters
 
hashCode() - Method in class org.apache.hadoop.mapreduce.ID
 
hashCode() - Method in class org.apache.hadoop.mapreduce.JobID
 
hashCode() - Method in class org.apache.hadoop.mapreduce.TaskAttemptID
 
hashCode() - Method in class org.apache.hadoop.mapreduce.TaskID
 
hashCode() - Method in class org.apache.hadoop.net.SocksSocketFactory
 
hashCode() - Method in class org.apache.hadoop.net.StandardSocketFactory
 
hashCode() - Method in class org.apache.hadoop.record.Buffer
 
hashCode() - Method in class org.apache.hadoop.record.meta.FieldTypeInfo
We use a basic hashcode implementation, since this class will likely not be used as a hashmap key
hashCode() - Method in class org.apache.hadoop.record.meta.MapTypeID
We use a basic hashcode implementation, since this class will likely not be used as a hashmap key
hashCode() - Method in class org.apache.hadoop.record.meta.TypeID
We use a basic hashcode implementation, since this class will likely not be used as a hashmap key
hashCode() - Method in class org.apache.hadoop.record.meta.VectorTypeID
We use a basic hashcode implementation, since this class will likely not be used as a hashmap key
hashCode() - Method in class org.apache.hadoop.security.authorize.ConnectionPermission
 
hashCode() - Method in class org.apache.hadoop.security.Group
 
hashCode() - Method in class org.apache.hadoop.security.UnixUserGroupInformation
Returns a hash code for this UGI.
hashCode() - Method in class org.apache.hadoop.security.User
 
hashCode() - Method in class org.apache.hadoop.util.bloom.Key
 
HashFunction - Class in org.apache.hadoop.util.bloom
Implements a hash object that returns a certain number of hashed values.
HashFunction(int, int, int) - Constructor for class org.apache.hadoop.util.bloom.HashFunction
Constructor.
HashingDistributionPolicy - Class in org.apache.hadoop.contrib.index.example
Choose a shard for each insert or delete based on document id hashing.
HashingDistributionPolicy() - Constructor for class org.apache.hadoop.contrib.index.example.HashingDistributionPolicy
 
HashPartitioner<K2,V2> - Class in org.apache.hadoop.mapred.lib
Deprecated. Use HashPartitioner instead.
HashPartitioner() - Constructor for class org.apache.hadoop.mapred.lib.HashPartitioner
Deprecated.  
HashPartitioner<K,V> - Class in org.apache.hadoop.mapreduce.lib.partition
Partition keys by their Object.hashCode().
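The hash-partitioning idea is simple enough to sketch: mask the sign bit of the key's hashCode and take it modulo the number of reduce tasks. The class below is an illustrative stand-in written against the new-API Partitioner, not the shipped HashPartitioner source.

    import org.apache.hadoop.mapreduce.Partitioner;

    // Minimal sketch of hash-based partitioning: non-negative hash, modulo reducer count.
    public class MyHashPartitioner<K, V> extends Partitioner<K, V> {
      @Override
      public int getPartition(K key, V value, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
      }
    }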
HashPartitioner() - Constructor for class org.apache.hadoop.mapreduce.lib.partition.HashPartitioner
 
hashType - Variable in class org.apache.hadoop.util.bloom.Filter
Type of hashing function to use.
hasNext() - Method in class org.apache.hadoop.contrib.utils.join.ArrayListBackedIterator
 
hasNext() - Method in class org.apache.hadoop.mapred.join.ArrayListBackedIterator
 
hasNext() - Method in interface org.apache.hadoop.mapred.join.ComposableRecordReader
Returns true if the stream is not empty, but provides no guarantee that a call to next(K,V) will succeed.
hasNext() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
Return true if it is possible that this could emit more values.
hasNext() - Method in class org.apache.hadoop.mapred.join.JoinRecordReader.JoinDelegationIterator
 
hasNext() - Method in class org.apache.hadoop.mapred.join.MultiFilterRecordReader.MultiFilterDelegationIterator
 
hasNext() - Method in class org.apache.hadoop.mapred.join.ResetableIterator.EMPTY
 
hasNext() - Method in interface org.apache.hadoop.mapred.join.ResetableIterator
True if a call to next may return a value.
hasNext() - Method in class org.apache.hadoop.mapred.join.StreamBackedIterator
 
hasNext() - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
Return true if the RR, including the k,v pair stored in this object, is exhausted.
hasNext() - Method in class org.apache.hadoop.mapreduce.ReduceContext.ValueIterator
 
hasRecovered() - Method in class org.apache.hadoop.mapred.JobTracker
Whether the JT has recovered upon restart
hasRestarted() - Method in class org.apache.hadoop.mapred.JobTracker
Whether the JT has restarted
hasSimpleInputSpecs_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
hbMakeCodeLengths(char[], int[], int, int) - Static method in class org.apache.hadoop.io.compress.bzip2.CBZip2OutputStream
This method is accessible by subclasses for historical purposes.
HDFSMerger - Class in org.apache.hadoop.contrib.failmon
 
HDFSMerger() - Constructor for class org.apache.hadoop.contrib.failmon.HDFSMerger
 
HEADER - Static variable in class org.apache.hadoop.ipc.Server
The first four bytes of Hadoop RPC connections
HEADER_LEN - Static variable in class org.apache.hadoop.util.DataChecksum
 
headMap(WritableComparable) - Method in class org.apache.hadoop.io.SortedMapWritable
HeapSort - Class in org.apache.hadoop.util
An implementation of the core algorithm of HeapSort.
HeapSort() - Constructor for class org.apache.hadoop.util.HeapSort
 
heartbeat(TaskTrackerStatus, boolean, boolean, boolean, short) - Method in class org.apache.hadoop.mapred.JobTracker
The periodic heartbeat mechanism between the TaskTracker and the JobTracker.
height - Variable in class org.apache.hadoop.examples.dancing.Pentomino
 
height - Static variable in class org.apache.hadoop.mapred.TaskGraphServlet
height of the graph w/o margins
hexchars - Static variable in class org.apache.hadoop.record.Utils
 
hexStringToByte(String) - Static method in class org.apache.hadoop.util.StringUtils
Given a hexstring this will return the byte array corresponding to the string
HostsFileReader - Class in org.apache.hadoop.util
 
HostsFileReader(String, String) - Constructor for class org.apache.hadoop.util.HostsFileReader
 
HTML_TAIL - Static variable in class org.apache.hadoop.util.ServletUtil
 
htmlFooter() - Static method in class org.apache.hadoop.util.ServletUtil
HTML footer to be added in the jsps.
HttpServer - Class in org.apache.hadoop.http
Create a Jetty embedded server to answer http requests.
HttpServer(String, String, int, boolean) - Constructor for class org.apache.hadoop.http.HttpServer
Same as this(name, bindAddress, port, findPort, null);
HttpServer(String, String, int, boolean, Configuration) - Constructor for class org.apache.hadoop.http.HttpServer
Create a status server on the given port.
HttpServer.StackServlet - Class in org.apache.hadoop.http
A very simple servlet to serve up a text representation of the current stack traces.
HttpServer.StackServlet() - Constructor for class org.apache.hadoop.http.HttpServer.StackServlet
 
humanReadableInt(long) - Static method in class org.apache.hadoop.util.StringUtils
Given an integer, return a string that is in an approximate, but human readable format.

I

ID - Class in org.apache.hadoop.mapred
Deprecated. 
ID(int) - Constructor for class org.apache.hadoop.mapred.ID
Deprecated. constructs an ID object from the given int
ID() - Constructor for class org.apache.hadoop.mapred.ID
Deprecated.  
id() - Method in interface org.apache.hadoop.mapred.join.ComposableRecordReader
Return the position in the collector this class occupies.
id() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
Return the position in the collector this class occupies.
id - Variable in class org.apache.hadoop.mapred.join.Parser.Node
 
id() - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
Return the position in the collector this class occupies.
ID - Class in org.apache.hadoop.mapreduce
A general identifier, which internally stores the id as an integer.
ID(int) - Constructor for class org.apache.hadoop.mapreduce.ID
constructs an ID object from the given int
ID() - Constructor for class org.apache.hadoop.mapreduce.ID
 
id - Variable in class org.apache.hadoop.mapreduce.ID
 
ident - Variable in class org.apache.hadoop.mapred.join.Parser.Node
 
IDENT_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
IdentityLocalAnalysis - Class in org.apache.hadoop.contrib.index.example
Identity local analysis maps inputs directly into outputs.
IdentityLocalAnalysis() - Constructor for class org.apache.hadoop.contrib.index.example.IdentityLocalAnalysis
 
IdentityMapper<K,V> - Class in org.apache.hadoop.mapred.lib
Deprecated. Use Mapper instead.
IdentityMapper() - Constructor for class org.apache.hadoop.mapred.lib.IdentityMapper
Deprecated.  
IdentityReducer<K,V> - Class in org.apache.hadoop.mapred.lib
Deprecated. Use Reducer instead.
IdentityReducer() - Constructor for class org.apache.hadoop.mapred.lib.IdentityReducer
Deprecated.  
idFormat - Static variable in class org.apache.hadoop.mapreduce.JobID
 
idFormat - Static variable in class org.apache.hadoop.mapreduce.TaskID
 
IDistributionPolicy - Interface in org.apache.hadoop.contrib.index.mapred
A distribution policy decides, given a document with a document id, which one shard the request should be sent to if the request is an insert, and which shard(s) the request should be sent to if the request is a delete.
idWithinJob() - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
 
idx - Variable in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
 
ifExists(String, Configuration) - Method in class org.apache.hadoop.fs.LocalDirAllocator
We search through all the configured dirs for the file's existence and return true when we find it.
ifmt(double) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
IIndexUpdater - Interface in org.apache.hadoop.contrib.index.mapred
A class implementing an index updater interface should create a Map/Reduce job configuration and run the Map/Reduce job to analyze documents and update Lucene instances in parallel.
ILocalAnalysis<K extends WritableComparable,V extends Writable> - Interface in org.apache.hadoop.contrib.index.mapred
Application specific local analysis.
image - Variable in class org.apache.hadoop.record.compiler.generated.Token
The string image of the token.
implies(FsAction) - Method in enum org.apache.hadoop.fs.permission.FsAction
Return true if this action implies that action.
implies(ProtectionDomain, Permission) - Method in class org.apache.hadoop.security.authorize.ConfiguredPolicy
 
implies(Permission) - Method in class org.apache.hadoop.security.authorize.ConnectionPermission
 
in - Variable in class org.apache.hadoop.io.compress.CompressionInputStream
The underlying input stream, from which compressed data is read.
inBuf - Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
inc(int) - Method in class org.apache.hadoop.metrics.util.MetricsTimeVaryingInt
Increment the metric by the given value.
inc() - Method in class org.apache.hadoop.metrics.util.MetricsTimeVaryingInt
Increment the metric by one.
inc(long) - Method in class org.apache.hadoop.metrics.util.MetricsTimeVaryingLong
Increment the metric by the given value.
inc() - Method in class org.apache.hadoop.metrics.util.MetricsTimeVaryingLong
Increment the metric by one.
inc(int, long) - Method in class org.apache.hadoop.metrics.util.MetricsTimeVaryingRate
Increment the metrics for numOps operations
inc(long) - Method in class org.apache.hadoop.metrics.util.MetricsTimeVaryingRate
Increment the metrics for one operation
incDfsUsed(long) - Method in class org.apache.hadoop.fs.DU
Increase how much disk space we use.
Include() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
INCLUDE_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
incr() - Method in interface org.apache.hadoop.record.Index
 
incrAllCounters(Counters) - Method in class org.apache.hadoop.mapred.Counters
Deprecated. Increments multiple counters by their amounts in another Counters instance.
incrAllCounters(CounterGroup) - Method in class org.apache.hadoop.mapreduce.CounterGroup
 
incrAllCounters(Counters) - Method in class org.apache.hadoop.mapreduce.Counters
Increments multiple counters by their amounts in another Counters instance.
incrCounter(Enum, long) - Method in class org.apache.hadoop.mapred.Counters
Deprecated. Increments the specified counter by the specified amount, creating it if it didn't already exist.
incrCounter(String, String, long) - Method in class org.apache.hadoop.mapred.Counters
Deprecated. Increments the specified counter by the specified amount, creating it if it didn't already exist.
incrCounter(Enum<?>, long) - Method in interface org.apache.hadoop.mapred.Reporter
Increments the counter identified by the key, which can be of any Enum type, by the specified amount.
incrCounter(String, String, long) - Method in interface org.apache.hadoop.mapred.Reporter
Increments the counter identified by the group and counter name by the specified amount.
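As an illustration, a hedged sketch of updating counters from inside an old-API mapper; the mapper class, enum and counter names are made up for this example:

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;

    public class CountingMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, IntWritable> {

      enum MyCounters { EMPTY_LINES }   // hypothetical counter enum

      public void map(LongWritable key, Text value,
                      OutputCollector<Text, IntWritable> out, Reporter reporter)
          throws IOException {
        if (value.toString().trim().length() == 0) {
          reporter.incrCounter(MyCounters.EMPTY_LINES, 1);    // enum form
          reporter.incrCounter("MyGroup", "EmptyLines", 1);   // group/name form
        }
      }
    }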
increment(long) - Method in class org.apache.hadoop.mapreduce.Counter
Increment this counter by the given value
INCREMENT - Static variable in class org.apache.hadoop.metrics.spi.MetricValue
 
incrementBytesRead(long) - Method in class org.apache.hadoop.fs.FileSystem.Statistics
Increment the bytes read in the statistics
incrementBytesWritten(long) - Method in class org.apache.hadoop.fs.FileSystem.Statistics
Increment the bytes written in the statistics
incrementWeight(double) - Method in class org.apache.hadoop.util.bloom.Key
Increments the weight of this key with a specified value.
incrementWeight() - Method in class org.apache.hadoop.util.bloom.Key
Increments the weight of this key by one.
incrMetric(String, int) - Method in interface org.apache.hadoop.metrics.MetricsRecord
Increments the named metric by the specified value.
incrMetric(String, long) - Method in interface org.apache.hadoop.metrics.MetricsRecord
Increments the named metric by the specified value.
incrMetric(String, short) - Method in interface org.apache.hadoop.metrics.MetricsRecord
Increments the named metric by the specified value.
incrMetric(String, byte) - Method in interface org.apache.hadoop.metrics.MetricsRecord
Increments the named metric by the specified value.
incrMetric(String, float) - Method in interface org.apache.hadoop.metrics.MetricsRecord
Increments the named metric by the specified value.
incrMetric(String, int) - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Increments the named metric by the specified value.
incrMetric(String, long) - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Increments the named metric by the specified value.
incrMetric(String, short) - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Increments the named metric by the specified value.
incrMetric(String, byte) - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Increments the named metric by the specified value.
incrMetric(String, float) - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Increments the named metric by the specified value.
Index - Interface in org.apache.hadoop.record
Interface that acts as an iterator for deserializing maps.
INDEX_FILE_NAME - Static variable in class org.apache.hadoop.io.MapFile
The name of the index file.
IndexedSortable - Interface in org.apache.hadoop.util
Interface for collections capable of being sorted by IndexedSorter algorithms.
IndexedSorter - Interface in org.apache.hadoop.util
Interface for sort algorithms accepting IndexedSortable items.
IndexUpdateCombiner - Class in org.apache.hadoop.contrib.index.mapred
This combiner combines multiple intermediate forms into one intermediate form.
IndexUpdateCombiner() - Constructor for class org.apache.hadoop.contrib.index.mapred.IndexUpdateCombiner
 
IndexUpdateConfiguration - Class in org.apache.hadoop.contrib.index.mapred
This class provides the getters and the setters to a number of parameters.
IndexUpdateConfiguration(Configuration) - Constructor for class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
Constructor
IndexUpdateMapper<K extends WritableComparable,V extends Writable> - Class in org.apache.hadoop.contrib.index.mapred
This class applies local analysis on a key-value pair and then converts the resulting docid-operation pair to a shard-and-intermediate-form pair.
IndexUpdateMapper() - Constructor for class org.apache.hadoop.contrib.index.mapred.IndexUpdateMapper
 
IndexUpdateOutputFormat - Class in org.apache.hadoop.contrib.index.mapred
The record writer of this output format simply puts a message in an output path when a shard update is done.
IndexUpdateOutputFormat() - Constructor for class org.apache.hadoop.contrib.index.mapred.IndexUpdateOutputFormat
 
IndexUpdatePartitioner - Class in org.apache.hadoop.contrib.index.mapred
This partitioner class puts the values of the same key - in this case the same shard - in the same partition.
IndexUpdatePartitioner() - Constructor for class org.apache.hadoop.contrib.index.mapred.IndexUpdatePartitioner
 
IndexUpdater - Class in org.apache.hadoop.contrib.index.mapred
An implementation of an index updater interface which creates a Map/Reduce job configuration and runs the Map/Reduce job to analyze documents and update Lucene instances in parallel.
IndexUpdater() - Constructor for class org.apache.hadoop.contrib.index.mapred.IndexUpdater
 
IndexUpdateReducer - Class in org.apache.hadoop.contrib.index.mapred
This reducer applies the changes for a shard to that shard.
IndexUpdateReducer() - Constructor for class org.apache.hadoop.contrib.index.mapred.IndexUpdateReducer
 
init(Shard[]) - Method in class org.apache.hadoop.contrib.index.example.HashingDistributionPolicy
 
init(Shard[]) - Method in class org.apache.hadoop.contrib.index.example.RoundRobinDistributionPolicy
 
init(Shard[]) - Method in interface org.apache.hadoop.contrib.index.mapred.IDistributionPolicy
Initialization.
init() - Method in class org.apache.hadoop.fs.FsShell
 
init(JobConf) - Method in class org.apache.hadoop.mapred.JobClient
Connect to the default JobTracker.
init(JobConf, String, long) - Static method in class org.apache.hadoop.mapred.JobHistory
Initialize JobHistory files.
init(String, ContextFactory) - Method in class org.apache.hadoop.metrics.file.FileContext
 
init(String, ContextFactory) - Method in class org.apache.hadoop.metrics.ganglia.GangliaContext
 
init(String, String) - Static method in class org.apache.hadoop.metrics.jvm.JvmMetrics
 
init(String, String, String) - Static method in class org.apache.hadoop.metrics.jvm.JvmMetrics
 
init(String, ContextFactory) - Method in interface org.apache.hadoop.metrics.MetricsContext
Initialize this context.
init(String, ContextFactory) - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Initializes the context.
init(String, ContextFactory) - Method in class org.apache.hadoop.metrics.spi.CompositeContext
 
init(String, ContextFactory) - Method in class org.apache.hadoop.metrics.spi.NullContextWithUpdateThread
 
init() - Method in class org.apache.hadoop.streaming.StreamJob
 
init() - Method in class org.apache.hadoop.streaming.StreamXmlRecordReader
 
initHTML(ServletResponse, String) - Static method in class org.apache.hadoop.util.ServletUtil
Initial HTML header
initialize(URI, Configuration) - Method in class org.apache.hadoop.fs.FileSystem
Called after a new FileSystem instance is constructed.
initialize(URI, Configuration) - Method in class org.apache.hadoop.fs.FilterFileSystem
Called after a new FileSystem instance is constructed.
initialize(URI, Configuration) - Method in class org.apache.hadoop.fs.ftp.FTPFileSystem
 
initialize(URI, Configuration) - Method in class org.apache.hadoop.fs.HarFileSystem
Initialize a Har filesystem per har archive.
initialize(URI, Configuration) - Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
 
initialize(URI, Configuration) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
initialize(URI, Configuration) - Method in interface org.apache.hadoop.fs.s3.FileSystemStore
 
initialize(URI) - Method in class org.apache.hadoop.fs.s3.MigrationTool
 
initialize(URI, Configuration) - Method in class org.apache.hadoop.fs.s3.S3Credentials
 
initialize(URI, Configuration) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
initialize(URI, Configuration) - Method in class org.apache.hadoop.fs.s3native.NativeS3FileSystem
 
initialize(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.LineRecordReader
 
initialize(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader
 
initialize(InputSplit, TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.RecordReader
Called once at initialization.
initialize(int) - Method in class org.apache.hadoop.util.PriorityQueue
Subclass constructors must call this.
initializePieces() - Method in class org.apache.hadoop.examples.dancing.OneSidedPentomino
Define the one sided pieces.
initializePieces() - Method in class org.apache.hadoop.examples.dancing.Pentomino
Fill in the pieces list.
initJob(JobInProgress) - Method in class org.apache.hadoop.mapred.JobTracker
 
initNextRecordReader() - Method in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
Get the record reader for the next chunk in this CombineFileSplit.
InMemoryFileSystem - Class in org.apache.hadoop.fs
Deprecated. 
InMemoryFileSystem() - Constructor for class org.apache.hadoop.fs.InMemoryFileSystem
Deprecated.  
InMemoryFileSystem(URI, Configuration) - Constructor for class org.apache.hadoop.fs.InMemoryFileSystem
Deprecated.  
InnerJoinRecordReader<K extends WritableComparable> - Class in org.apache.hadoop.mapred.join
Full inner join.
INode - Class in org.apache.hadoop.fs.s3
Holds file metadata including type (regular file, or directory), and the list of blocks that are pointers to the data.
INode(INode.FileType, Block[]) - Constructor for class org.apache.hadoop.fs.s3.INode
 
inodeExists(Path) - Method in interface org.apache.hadoop.fs.s3.FileSystemStore
 
Input() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
INPUT_CLASS_PROPERTY - Static variable in class org.apache.hadoop.mapred.lib.db.DBConfiguration
Class name implementing DBWritable which will hold input tuples
INPUT_CONDITIONS_PROPERTY - Static variable in class org.apache.hadoop.mapred.lib.db.DBConfiguration
WHERE clause in the input SELECT statement
INPUT_COUNT_QUERY - Static variable in class org.apache.hadoop.mapred.lib.db.DBConfiguration
Input query to get the count of records
INPUT_FIELD_NAMES_PROPERTY - Static variable in class org.apache.hadoop.mapred.lib.db.DBConfiguration
Field names in the Input table
INPUT_FORMAT_CLASS_ATTR - Static variable in class org.apache.hadoop.mapreduce.JobContext
 
INPUT_ORDER_BY_PROPERTY - Static variable in class org.apache.hadoop.mapred.lib.db.DBConfiguration
ORDER BY clause in the input SELECT statement
INPUT_QUERY - Static variable in class org.apache.hadoop.mapred.lib.db.DBConfiguration
Whole input query, excluding LIMIT...OFFSET
input_stream - Variable in class org.apache.hadoop.record.compiler.generated.RccTokenManager
 
INPUT_TABLE_NAME_PROPERTY - Static variable in class org.apache.hadoop.mapred.lib.db.DBConfiguration
Input table name
InputBuffer - Class in org.apache.hadoop.io
A reusable InputStream implementation that reads from an in-memory buffer.
InputBuffer() - Constructor for class org.apache.hadoop.io.InputBuffer
Constructs a new empty buffer.
inputFile - Variable in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
 
inputFile - Variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
 
InputFormat<K,V> - Interface in org.apache.hadoop.mapred
Deprecated. Use InputFormat instead.
InputFormat<K,V> - Class in org.apache.hadoop.mapreduce
InputFormat describes the input-specification for a Map-Reduce job.
InputFormat() - Constructor for class org.apache.hadoop.mapreduce.InputFormat
 
inputFormatSpec_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
InputSampler<K,V> - Class in org.apache.hadoop.mapred.lib
Utility for collecting samples and writing a partition file for TotalOrderPartitioner.
InputSampler(JobConf) - Constructor for class org.apache.hadoop.mapred.lib.InputSampler
 
InputSampler.IntervalSampler<K,V> - Class in org.apache.hadoop.mapred.lib
Sample from s splits at regular intervals.
InputSampler.IntervalSampler(double) - Constructor for class org.apache.hadoop.mapred.lib.InputSampler.IntervalSampler
Create a new IntervalSampler sampling all splits.
InputSampler.IntervalSampler(double, int) - Constructor for class org.apache.hadoop.mapred.lib.InputSampler.IntervalSampler
Create a new IntervalSampler.
InputSampler.RandomSampler<K,V> - Class in org.apache.hadoop.mapred.lib
Sample from random points in the input.
InputSampler.RandomSampler(double, int) - Constructor for class org.apache.hadoop.mapred.lib.InputSampler.RandomSampler
Create a new RandomSampler sampling all splits.
InputSampler.RandomSampler(double, int, int) - Constructor for class org.apache.hadoop.mapred.lib.InputSampler.RandomSampler
Create a new RandomSampler.
InputSampler.Sampler<K,V> - Interface in org.apache.hadoop.mapred.lib
Interface to sample using an InputFormat.
InputSampler.SplitSampler<K,V> - Class in org.apache.hadoop.mapred.lib
Samples the first n records from s splits.
InputSampler.SplitSampler(int) - Constructor for class org.apache.hadoop.mapred.lib.InputSampler.SplitSampler
Create a SplitSampler sampling all splits.
InputSampler.SplitSampler(int, int) - Constructor for class org.apache.hadoop.mapred.lib.InputSampler.SplitSampler
Create a new SplitSampler.
inputSpecs_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
InputSplit - Interface in org.apache.hadoop.mapred
Deprecated. Use InputSplit instead.
InputSplit - Class in org.apache.hadoop.mapreduce
InputSplit represents the data to be processed by an individual Mapper.
InputSplit() - Constructor for class org.apache.hadoop.mapreduce.InputSplit
 
inputStream - Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
inputTag - Variable in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
 
inReaderSpec_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
insert(EventRecord) - Method in class org.apache.hadoop.contrib.failmon.LocalStore
Insert an EventRecord to the local storage, after it gets serialized and anonymized.
insert(EventRecord[]) - Method in class org.apache.hadoop.contrib.failmon.LocalStore
Insert an array of EventRecords to the local storage, after they get serialized and anonymized.
INSERT - Static variable in class org.apache.hadoop.contrib.index.mapred.DocumentAndOp.Op
 
insert(T) - Method in class org.apache.hadoop.util.PriorityQueue
Adds element to the PriorityQueue in log(size) time if either the PriorityQueue is not full, or not lessThan(element, top()).
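A minimal sketch of how such a queue is typically subclassed (the class here is invented for illustration): the constructor calls initialize() and lessThan() defines the ordering.

    import org.apache.hadoop.util.PriorityQueue;

    public class LongMinQueue extends PriorityQueue<Long> {
      public LongMinQueue(int maxSize) {
        initialize(maxSize);             // subclass constructors must call this
      }
      protected boolean lessThan(Object a, Object b) {
        return (Long) a < (Long) b;      // smaller values sort toward the top
      }
    }

    // usage sketch: insert(x) adds x while the queue is not full, or replaces the
    // current top() when x is not lessThan(top()); pop() removes the least element.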
instances - Static variable in class org.apache.hadoop.contrib.failmon.Executor
 
INT - Static variable in class org.apache.hadoop.record.meta.TypeID.RIOType
 
INT_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
IntermediateForm - Class in org.apache.hadoop.contrib.index.mapred
An intermediate form for one or more parsed Lucene documents and/or delete terms.
IntermediateForm() - Constructor for class org.apache.hadoop.contrib.index.mapred.IntermediateForm
Constructor
IntSumReducer<Key> - Class in org.apache.hadoop.mapreduce.lib.reduce
 
IntSumReducer() - Constructor for class org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer
 
IntTypeID - Static variable in class org.apache.hadoop.record.meta.TypeID
 
IntWritable - Class in org.apache.hadoop.io
A WritableComparable for ints.
IntWritable() - Constructor for class org.apache.hadoop.io.IntWritable
 
IntWritable(int) - Constructor for class org.apache.hadoop.io.IntWritable
 
IntWritable.Comparator - Class in org.apache.hadoop.io
A Comparator optimized for IntWritable.
IntWritable.Comparator() - Constructor for class org.apache.hadoop.io.IntWritable.Comparator
 
INVALID_HASH - Static variable in class org.apache.hadoop.util.hash.Hash
Constant to denote invalid hash type.
InvalidFileTypeException - Exception in org.apache.hadoop.mapred
Used when file type differs from the desired file type.
InvalidFileTypeException() - Constructor for exception org.apache.hadoop.mapred.InvalidFileTypeException
 
InvalidFileTypeException(String) - Constructor for exception org.apache.hadoop.mapred.InvalidFileTypeException
 
InvalidInputException - Exception in org.apache.hadoop.mapred
This class wraps a list of problems with the input, so that the user can get a list of problems together instead of finding and fixing them one by one.
InvalidInputException(List<IOException>) - Constructor for exception org.apache.hadoop.mapred.InvalidInputException
Create the exception with the given list.
InvalidInputException - Exception in org.apache.hadoop.mapreduce.lib.input
This class wraps a list of problems with the input, so that the user can get a list of problems together instead of finding and fixing them one by one.
InvalidInputException(List<IOException>) - Constructor for exception org.apache.hadoop.mapreduce.lib.input.InvalidInputException
Create the exception with the given list.
InvalidJobConfException - Exception in org.apache.hadoop.mapred
This exception is thrown when the jobconf is missing some mandatory attributes, or the value of some attributes is invalid.
InvalidJobConfException() - Constructor for exception org.apache.hadoop.mapred.InvalidJobConfException
 
InvalidJobConfException(String) - Constructor for exception org.apache.hadoop.mapred.InvalidJobConfException
 
InverseMapper<K,V> - Class in org.apache.hadoop.mapred.lib
Deprecated. Use InverseMapper instead.
InverseMapper() - Constructor for class org.apache.hadoop.mapred.lib.InverseMapper
Deprecated.  
InverseMapper<K,V> - Class in org.apache.hadoop.mapreduce.lib.map
A Mapper that swaps keys and values.
InverseMapper() - Constructor for class org.apache.hadoop.mapreduce.lib.map.InverseMapper
 
invoke(String, Object[], String[]) - Method in class org.apache.hadoop.metrics.util.MetricsDynamicMBeanBase
 
IOUtils - Class in org.apache.hadoop.io
A utility class for I/O related functionality.
IOUtils() - Constructor for class org.apache.hadoop.io.IOUtils
 
IOUtils.NullOutputStream - Class in org.apache.hadoop.io
/dev/null of OutputStreams.
IOUtils.NullOutputStream() - Constructor for class org.apache.hadoop.io.IOUtils.NullOutputStream
 
isAbsolute() - Method in class org.apache.hadoop.fs.Path
True if the directory of this path is absolute.
isAbsolute() - Method in class org.apache.hadoop.metrics.spi.MetricValue
 
isAlive() - Method in class org.apache.hadoop.util.ProcfsBasedProcessTree
Is the process-tree alive? Currently we care only about the status of the root-process.
isAvailable() - Static method in class org.apache.hadoop.util.ProcfsBasedProcessTree
Checks if the ProcfsBasedProcessTree is available on this system.
isBlacklisted(String) - Method in class org.apache.hadoop.mapred.JobTracker
Whether the tracker is blacklisted or not
isBlockCompressed() - Method in class org.apache.hadoop.io.SequenceFile.Reader
Returns true if records are block-compressed.
isChecksumFile(Path) - Static method in class org.apache.hadoop.fs.ChecksumFileSystem
Return true iff file is a checksum file name.
isComplete() - Method in interface org.apache.hadoop.mapred.RunningJob
Check if the job is finished or not.
isComplete() - Method in class org.apache.hadoop.mapreduce.Job
Check if the job is finished or not.
isCompleted() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
 
isCompressed() - Method in class org.apache.hadoop.io.SequenceFile.Reader
Returns true if values are compressed.
isContextValid(String) - Static method in class org.apache.hadoop.fs.LocalDirAllocator
Method to check whether a context is valid
isCygwin() - Static method in class org.apache.hadoop.streaming.StreamUtil
 
isDir() - Method in class org.apache.hadoop.fs.FileStatus
Is this a directory?
isDirectory(Path) - Method in class org.apache.hadoop.fs.FileSystem
Deprecated. Use getFileStatus() instead
isDirectory(Path) - Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
Deprecated. 
isDirectory() - Method in class org.apache.hadoop.fs.s3.INode
 
isDisableHistory() - Static method in class org.apache.hadoop.mapred.JobHistory
Returns history disable status.
isEmpty() - Method in class org.apache.hadoop.io.MapWritable
isEmpty() - Method in class org.apache.hadoop.io.SortedMapWritable
isFile(Path) - Method in class org.apache.hadoop.fs.FileSystem
True iff the named path is a regular file.
isFile(Path) - Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
Deprecated. 
isFile() - Method in class org.apache.hadoop.fs.s3.INode
 
isFile(Path) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
isIdle() - Method in class org.apache.hadoop.mapred.TaskTracker
Is this task tracker idle?
isIncluded(int) - Method in class org.apache.hadoop.conf.Configuration.IntegerRanges
Is the given value in the set of ranges
isIncrement() - Method in class org.apache.hadoop.metrics.spi.MetricValue
 
isJobComplete() - Method in class org.apache.hadoop.mapred.JobStatus
Returns true if the status is for a completed job.
isJobDirValid(Path, FileSystem) - Static method in class org.apache.hadoop.mapred.JobClient
Checks if the job directory is clean and has all the required components for (re) starting the job
isLocalHadoop() - Method in class org.apache.hadoop.streaming.StreamJob
 
isLocalJobTracker(JobConf) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
isMap() - Method in class org.apache.hadoop.mapreduce.TaskAttemptID
Returns whether this TaskAttemptID is a map ID
isMap() - Method in class org.apache.hadoop.mapreduce.TaskID
Returns whether this TaskID is a map ID
isMapTask() - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
 
isMonitoring() - Method in interface org.apache.hadoop.metrics.MetricsContext
Returns true if monitoring is currently in progress.
isMonitoring() - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Returns true if monitoring is currently in progress.
isMonitoring() - Method in class org.apache.hadoop.metrics.spi.CompositeContext
Return true if all subcontexts are monitoring.
isMultiNamedOutput(JobConf, String) - Static method in class org.apache.hadoop.mapred.lib.MultipleOutputs
Returns whether a named output is multiple.
isNativeCodeLoaded() - Static method in class org.apache.hadoop.util.NativeCodeLoader
Check if native-hadoop code is loaded for this platform.
isNativeZlibLoaded(Configuration) - Static method in class org.apache.hadoop.io.compress.zlib.ZlibFactory
Check if native-zlib code is loaded & initialized correctly and can be loaded for this job.
isNegativeVInt(byte) - Static method in class org.apache.hadoop.io.WritableUtils
Given the first byte of a vint/vlong, determine the sign
IsolationRunner - Class in org.apache.hadoop.mapred
 
IsolationRunner() - Constructor for class org.apache.hadoop.mapred.IsolationRunner
 
isOnSameRack(Node, Node) - Method in class org.apache.hadoop.net.NetworkTopology
Check if two nodes are on the same rack
isOpen() - Method in class org.apache.hadoop.net.SocketInputStream
 
isOpen() - Method in class org.apache.hadoop.net.SocketOutputStream
 
isReady() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
 
isSegmentsFile(String) - Static method in class org.apache.hadoop.contrib.index.lucene.LuceneUtil
Check if the file is a segments_N file
isSegmentsGenFile(String) - Static method in class org.apache.hadoop.contrib.index.lucene.LuceneUtil
Check if the file is the segments.gen file
isSorted() - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader
Is the TFile sorted?
isSplitable(FileSystem, Path) - Method in class org.apache.hadoop.mapred.FileInputFormat
Deprecated. Is the given filename splitable? Usually true, but if the file is stream compressed, it will not be.
isSplitable(FileSystem, Path) - Method in class org.apache.hadoop.mapred.KeyValueTextInputFormat
 
isSplitable(FileSystem, Path) - Method in class org.apache.hadoop.mapred.TextInputFormat
Deprecated.  
isSplitable(JobContext, Path) - Method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
Is the given filename splitable? Usually true, but if the file is stream compressed, it will not be.
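For example, a hedged sketch of a subclass that disables splitting entirely, so each file is processed by a single mapper (the class name is illustrative only):

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.JobContext;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

    public class WholeFileTextInputFormat extends TextInputFormat {
      @Override
      protected boolean isSplitable(JobContext context, Path file) {
        return false;   // never split, even for plain uncompressed text
      }
    }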
isSplitable(JobContext, Path) - Method in class org.apache.hadoop.mapreduce.lib.input.TextInputFormat
 
isSuccessful() - Method in interface org.apache.hadoop.mapred.RunningJob
Check if the job completed successfully.
isSuccessful() - Method in class org.apache.hadoop.mapreduce.Job
Check if the job completed successfully.
isTaskMemoryManagerEnabled() - Method in class org.apache.hadoop.mapred.TaskTracker
Is the TaskMemoryManager Enabled on this system?
isValid() - Method in class org.apache.hadoop.contrib.failmon.EventRecord
Check if the EventRecord is a valid one, i.e., whether it represents meaningful metric values.
isValid() - Method in class org.apache.hadoop.contrib.failmon.SerializedRecord
Check if the SerializedRecord is a valid one, i.e., whether it represents meaningful metric values.
isValueLengthKnown() - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner.Entry
Check whether it is safe to call getValueLength().
iterator() - Method in class org.apache.hadoop.conf.Configuration
Get an Iterator to go through the list of String key-value pairs in the configuration.
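For example, a short sketch that dumps every key/value pair currently set (the class name is hypothetical):

    import java.util.Map;
    import org.apache.hadoop.conf.Configuration;

    public class ConfDump {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        for (Map.Entry<String, String> e : conf) {   // Configuration is Iterable
          System.out.println(e.getKey() + " = " + e.getValue());
        }
      }
    }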
iterator() - Method in class org.apache.hadoop.mapred.Counters.Group
Deprecated.  
iterator() - Method in class org.apache.hadoop.mapred.Counters
Deprecated.  
iterator() - Method in class org.apache.hadoop.mapred.join.TupleWritable
Return an iterator over the elements in this tuple.
iterator() - Method in class org.apache.hadoop.mapreduce.CounterGroup
 
iterator() - Method in class org.apache.hadoop.mapreduce.Counters
 
iterator() - Method in class org.apache.hadoop.mapreduce.ReduceContext.ValueIterable
 
iterator() - Method in class org.apache.hadoop.util.CyclicIteration

J

jar_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
JarBuilder - Class in org.apache.hadoop.streaming
This class is the main class for generating job.jar for Hadoop Streaming jobs.
JarBuilder() - Constructor for class org.apache.hadoop.streaming.JarBuilder
 
JavaSerialization - Class in org.apache.hadoop.io.serializer
An experimental Serialization for Java Serializable classes.
JavaSerialization() - Constructor for class org.apache.hadoop.io.serializer.JavaSerialization
 
JavaSerializationComparator<T extends Serializable & Comparable<T>> - Class in org.apache.hadoop.io.serializer
A RawComparator that uses a JavaSerialization Deserializer to deserialize objects that are then compared via their Comparable interfaces.
JavaSerializationComparator() - Constructor for class org.apache.hadoop.io.serializer.JavaSerializationComparator
 
JBoolean - Class in org.apache.hadoop.record.compiler
 
JBoolean() - Constructor for class org.apache.hadoop.record.compiler.JBoolean
Creates a new instance of JBoolean
JBuffer - Class in org.apache.hadoop.record.compiler
Code generator for "buffer" type.
JBuffer() - Constructor for class org.apache.hadoop.record.compiler.JBuffer
Creates a new instance of JBuffer
JByte - Class in org.apache.hadoop.record.compiler
Code generator for "byte" type.
JByte() - Constructor for class org.apache.hadoop.record.compiler.JByte
 
jc - Variable in class org.apache.hadoop.mapred.join.CompositeRecordReader
 
jc - Variable in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
 
jc_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
JDouble - Class in org.apache.hadoop.record.compiler
 
JDouble() - Constructor for class org.apache.hadoop.record.compiler.JDouble
Creates a new instance of JDouble
JENKINS_HASH - Static variable in class org.apache.hadoop.util.hash.Hash
Constant to denote JenkinsHash.
JenkinsHash - Class in org.apache.hadoop.util.hash
Produces 32-bit hash for hash table lookup.
JenkinsHash() - Constructor for class org.apache.hadoop.util.hash.JenkinsHash
 
JField<T> - Class in org.apache.hadoop.record.compiler
A thin wrapper around a record field.
JField(String, T) - Constructor for class org.apache.hadoop.record.compiler.JField
Creates a new instance of JField
JFile - Class in org.apache.hadoop.record.compiler
Container for the Hadoop Record DDL.
JFile(String, ArrayList<JFile>, ArrayList<JRecord>) - Constructor for class org.apache.hadoop.record.compiler.JFile
Creates a new instance of JFile
JFloat - Class in org.apache.hadoop.record.compiler
 
JFloat() - Constructor for class org.apache.hadoop.record.compiler.JFloat
Creates a new instance of JFloat
JInt - Class in org.apache.hadoop.record.compiler
Code generator for "int" type
JInt() - Constructor for class org.apache.hadoop.record.compiler.JInt
Creates a new instance of JInt
jj_nt - Variable in class org.apache.hadoop.record.compiler.generated.Rcc
 
jjFillToken() - Method in class org.apache.hadoop.record.compiler.generated.RccTokenManager
 
jjnewLexState - Static variable in class org.apache.hadoop.record.compiler.generated.RccTokenManager
 
jjstrLiteralImages - Static variable in class org.apache.hadoop.record.compiler.generated.RccTokenManager
 
JLong - Class in org.apache.hadoop.record.compiler
Code generator for "long" type
JLong() - Constructor for class org.apache.hadoop.record.compiler.JLong
Creates a new instance of JLong
JMap - Class in org.apache.hadoop.record.compiler
 
JMap(JType, JType) - Constructor for class org.apache.hadoop.record.compiler.JMap
Creates a new instance of JMap
job - Variable in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
 
job - Variable in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
 
Job - Class in org.apache.hadoop.mapred.jobcontrol
This class encapsulates a MapReduce job and its dependency.
Job(JobConf, ArrayList<Job>) - Constructor for class org.apache.hadoop.mapred.jobcontrol.Job
Construct a job.
Job(JobConf) - Constructor for class org.apache.hadoop.mapred.jobcontrol.Job
Construct a job.
Job - Class in org.apache.hadoop.mapreduce
The job submitter's view of the Job.
Job() - Constructor for class org.apache.hadoop.mapreduce.Job
 
Job(Configuration) - Constructor for class org.apache.hadoop.mapreduce.Job
 
Job(Configuration, String) - Constructor for class org.apache.hadoop.mapreduce.Job
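As an illustration, a hedged sketch of a minimal driver built around this constructor; it relies on the default (identity) mapper and reducer, and the input/output paths are placeholders:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class IdentityDriver {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "identity copy");    // Job(Configuration, String)
        job.setJarByClass(IdentityDriver.class);
        job.setOutputKeyClass(LongWritable.class);   // TextInputFormat's key type
        job.setOutputValueClass(Text.class);         // TextInputFormat's value type
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }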
 
JOB - Static variable in class org.apache.hadoop.mapreduce.JobID
 
Job.JobState - Enum in org.apache.hadoop.mapreduce
 
JOB_NAME_TRIM_LENGTH - Static variable in class org.apache.hadoop.mapred.JobHistory
 
JobBase - Class in org.apache.hadoop.contrib.utils.join
A common base class implementing some statistics-collecting mechanisms that are commonly used in a typical map/reduce job.
JobBase() - Constructor for class org.apache.hadoop.contrib.utils.join.JobBase
 
JobClient - Class in org.apache.hadoop.mapred
JobClient is the primary interface for the user-job to interact with the JobTracker.
JobClient() - Constructor for class org.apache.hadoop.mapred.JobClient
Create a job client.
JobClient(JobConf) - Constructor for class org.apache.hadoop.mapred.JobClient
Build a job client with the given JobConf, and connect to the default JobTracker.
JobClient(InetSocketAddress, Configuration) - Constructor for class org.apache.hadoop.mapred.JobClient
Build a job client, connect to the indicated job tracker.
JobClient.TaskStatusFilter - Enum in org.apache.hadoop.mapred
 
JobConf - Class in org.apache.hadoop.mapred
Deprecated. Use Configuration instead
JobConf() - Constructor for class org.apache.hadoop.mapred.JobConf
Deprecated. Construct a map/reduce job configuration.
JobConf(Class) - Constructor for class org.apache.hadoop.mapred.JobConf
Deprecated. Construct a map/reduce job configuration.
JobConf(Configuration) - Constructor for class org.apache.hadoop.mapred.JobConf
Deprecated. Construct a map/reduce job configuration.
JobConf(Configuration, Class) - Constructor for class org.apache.hadoop.mapred.JobConf
Deprecated. Construct a map/reduce job configuration.
JobConf(String) - Constructor for class org.apache.hadoop.mapred.JobConf
Deprecated. Construct a map/reduce configuration.
JobConf(Path) - Constructor for class org.apache.hadoop.mapred.JobConf
Deprecated. Construct a map/reduce configuration.
JobConf(boolean) - Constructor for class org.apache.hadoop.mapred.JobConf
Deprecated. A new map/reduce configuration where the behavior of reading from the default resources can be turned off.
jobConf_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
JobConfigurable - Interface in org.apache.hadoop.mapred
Deprecated. 
JobContext - Class in org.apache.hadoop.mapred
Deprecated. Use JobContext instead.
JobContext - Class in org.apache.hadoop.mapreduce
A read-only view of the job that is provided to the tasks while they are running.
JobContext(Configuration, JobID) - Constructor for class org.apache.hadoop.mapreduce.JobContext
 
JobControl - Class in org.apache.hadoop.mapred.jobcontrol
This class encapsulates a set of MapReduce jobs and their dependencies.
JobControl(String) - Constructor for class org.apache.hadoop.mapred.jobcontrol.JobControl
Construct a job control for a group of jobs.
JobEndNotifier - Class in org.apache.hadoop.mapred
 
JobEndNotifier() - Constructor for class org.apache.hadoop.mapred.JobEndNotifier
 
JobHistory - Class in org.apache.hadoop.mapred
Provides methods for writing to and reading from job history.
JobHistory() - Constructor for class org.apache.hadoop.mapred.JobHistory
 
JobHistory.HistoryCleaner - Class in org.apache.hadoop.mapred
Delete history files older than one month.
JobHistory.HistoryCleaner() - Constructor for class org.apache.hadoop.mapred.JobHistory.HistoryCleaner
 
JobHistory.JobInfo - Class in org.apache.hadoop.mapred
Helper class for logging or reading back events related to job start, finish or failure.
JobHistory.JobInfo(String) - Constructor for class org.apache.hadoop.mapred.JobHistory.JobInfo
Create new JobInfo
JobHistory.Keys - Enum in org.apache.hadoop.mapred
Job history files contain key="value" pairs, where keys belong to this enum.
JobHistory.Listener - Interface in org.apache.hadoop.mapred
Callback interface for reading back log events from JobHistory.
JobHistory.MapAttempt - Class in org.apache.hadoop.mapred
Helper class for logging or reading back events related to start, finish or failure of a Map Attempt on a node.
JobHistory.MapAttempt() - Constructor for class org.apache.hadoop.mapred.JobHistory.MapAttempt
 
JobHistory.RecordTypes - Enum in org.apache.hadoop.mapred
Record types are identifiers for each line of log in history files.
JobHistory.ReduceAttempt - Class in org.apache.hadoop.mapred
Helper class for logging or reading back events related to start, finish or failure of a Reduce Attempt on a node.
JobHistory.ReduceAttempt() - Constructor for class org.apache.hadoop.mapred.JobHistory.ReduceAttempt
 
JobHistory.Task - Class in org.apache.hadoop.mapred
Helper class for logging or reading back events related to Task's start, finish or failure.
JobHistory.Task() - Constructor for class org.apache.hadoop.mapred.JobHistory.Task
 
JobHistory.TaskAttempt - Class in org.apache.hadoop.mapred
Base class for Map and Reduce TaskAttempts.
JobHistory.TaskAttempt() - Constructor for class org.apache.hadoop.mapred.JobHistory.TaskAttempt
 
JobHistory.Values - Enum in org.apache.hadoop.mapred
This enum contains some of the values commonly used by history log events.
JobID - Class in org.apache.hadoop.mapred
Deprecated. 
JobID(String, int) - Constructor for class org.apache.hadoop.mapred.JobID
Deprecated. Constructs a JobID object
JobID() - Constructor for class org.apache.hadoop.mapred.JobID
Deprecated.  
JobID - Class in org.apache.hadoop.mapreduce
JobID represents the immutable and unique identifier for the job.
JobID(String, int) - Constructor for class org.apache.hadoop.mapreduce.JobID
Constructs a JobID object
JobID() - Constructor for class org.apache.hadoop.mapreduce.JobID
 
jobId_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
jobInfo() - Method in class org.apache.hadoop.streaming.StreamJob
 
JobPriority - Enum in org.apache.hadoop.mapred
Used to describe the priority of the running job.
JobProfile - Class in org.apache.hadoop.mapred
A JobProfile is a MapReduce primitive.
JobProfile() - Constructor for class org.apache.hadoop.mapred.JobProfile
Construct an empty JobProfile.
JobProfile(String, JobID, String, String, String) - Constructor for class org.apache.hadoop.mapred.JobProfile
Construct a JobProfile from the userid, jobid, job config-file, job-details url and job name.
JobProfile(String, JobID, String, String, String, String) - Constructor for class org.apache.hadoop.mapred.JobProfile
Construct a JobProfile from the userid, jobid, job config-file, job-details url and job name.
JobProfile(String, String, String, String, String) - Constructor for class org.apache.hadoop.mapred.JobProfile
Deprecated. use JobProfile(String, JobID, String, String, String) instead
JobQueueInfo - Class in org.apache.hadoop.mapred
Class that contains the information regarding the Job Queues which are maintained by the Hadoop Map/Reduce framework.
JobQueueInfo() - Constructor for class org.apache.hadoop.mapred.JobQueueInfo
Default constructor for Job Queue Info.
JobQueueInfo(String, String) - Constructor for class org.apache.hadoop.mapred.JobQueueInfo
Construct a new JobQueueInfo object using the queue name and the scheduling information passed.
JobStatus - Class in org.apache.hadoop.mapred
Describes the current status of a job.
JobStatus() - Constructor for class org.apache.hadoop.mapred.JobStatus
 
JobStatus(JobID, float, float, float, int) - Constructor for class org.apache.hadoop.mapred.JobStatus
Create a job status object for a given jobid.
JobStatus(JobID, float, float, int) - Constructor for class org.apache.hadoop.mapred.JobStatus
Create a job status object for a given jobid.
JobStatus(JobID, float, float, float, int, JobPriority) - Constructor for class org.apache.hadoop.mapred.JobStatus
Create a job status object for a given jobid.
JobStatus(JobID, float, float, float, float, int, JobPriority) - Constructor for class org.apache.hadoop.mapred.JobStatus
Create a job status object for a given jobid.
jobsToComplete() - Method in class org.apache.hadoop.mapred.JobClient
Get the jobs that are not completed and not failed.
jobsToComplete() - Method in class org.apache.hadoop.mapred.JobTracker
 
jobSubmit(JobConf) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
Submit a job to the Map-Reduce framework.
JobTracker - Class in org.apache.hadoop.mapred
JobTracker is the central location for submitting and tracking MR jobs in a network environment.
JobTracker.IllegalStateException - Exception in org.apache.hadoop.mapred
A client tried to submit a job before the Job Tracker was ready.
JobTracker.IllegalStateException(String) - Constructor for exception org.apache.hadoop.mapred.JobTracker.IllegalStateException
 
JobTracker.State - Enum in org.apache.hadoop.mapred
 
Join - Class in org.apache.hadoop.examples
This is the trivial map/reduce program that does absolutely nothing other than use the framework to fragment and sort the input values.
Join() - Constructor for class org.apache.hadoop.examples.Join
 
join() - Method in class org.apache.hadoop.http.HttpServer
 
join() - Method in class org.apache.hadoop.ipc.Server
Wait for the server to be stopped.
JoinRecordReader<K extends WritableComparable> - Class in org.apache.hadoop.mapred.join
Base class for Composite joins returning Tuples of arbitrary Writables.
JoinRecordReader(int, JobConf, int, Class<? extends WritableComparator>) - Constructor for class org.apache.hadoop.mapred.join.JoinRecordReader
 
JoinRecordReader.JoinDelegationIterator - Class in org.apache.hadoop.mapred.join
Since the JoinCollector is effecting our operation, we need only provide an iterator proxy wrapping its operation.
JoinRecordReader.JoinDelegationIterator() - Constructor for class org.apache.hadoop.mapred.join.JoinRecordReader.JoinDelegationIterator
 
JRecord - Class in org.apache.hadoop.record.compiler
 
JRecord(String, ArrayList<JField<JType>>) - Constructor for class org.apache.hadoop.record.compiler.JRecord
Creates a new instance of JRecord
JString - Class in org.apache.hadoop.record.compiler
 
JString() - Constructor for class org.apache.hadoop.record.compiler.JString
Creates a new instance of JString
JType - Class in org.apache.hadoop.record.compiler
Abstract Base class for all types supported by Hadoop Record I/O.
JType() - Constructor for class org.apache.hadoop.record.compiler.JType
 
JVector - Class in org.apache.hadoop.record.compiler
 
JVector(JType) - Constructor for class org.apache.hadoop.record.compiler.JVector
Creates a new instance of JVector
JvmMetrics - Class in org.apache.hadoop.metrics.jvm
Singleton class which reports Java Virtual Machine metrics to the metrics API.

K

key() - Method in class org.apache.hadoop.io.ArrayFile.Reader
Returns the key associated with the most recent call to ArrayFile.Reader.seek(long), ArrayFile.Reader.next(Writable), or ArrayFile.Reader.get(long,Writable).
key() - Method in interface org.apache.hadoop.mapred.join.ComposableRecordReader
Return the key this RecordReader would supply on a call to next(K,V)
key(K) - Method in interface org.apache.hadoop.mapred.join.ComposableRecordReader
Clone the key at the head of this RecordReader into the object provided.
key() - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
Return the key for the current join or the value at the top of the RecordReader heap.
key(K) - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
Clone the key at the top of this RR into the given object.
key() - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
Return the key at the head of this RR.
key(K) - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
Clone the key at the head of this RR into the object supplied.
Key - Class in org.apache.hadoop.util.bloom
The general behavior of a key that must be stored in a filter.
Key() - Constructor for class org.apache.hadoop.util.bloom.Key
default constructor - use with readFields
Key(byte[]) - Constructor for class org.apache.hadoop.util.bloom.Key
Constructor.
Key(byte[], double) - Constructor for class org.apache.hadoop.util.bloom.Key
Constructor.
KeyFieldBasedComparator<K,V> - Class in org.apache.hadoop.mapred.lib
This comparator implementation provides a subset of the features provided by the Unix/GNU Sort.
KeyFieldBasedComparator() - Constructor for class org.apache.hadoop.mapred.lib.KeyFieldBasedComparator
 
KeyFieldBasedPartitioner<K2,V2> - Class in org.apache.hadoop.mapred.lib
Defines a way to partition keys based on certain key fields (also see KeyFieldBasedComparator).
KeyFieldBasedPartitioner() - Constructor for class org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner
 
keySerializer - Variable in class org.apache.hadoop.io.SequenceFile.Writer
 
keySet() - Method in class org.apache.hadoop.io.MapWritable
keySet() - Method in class org.apache.hadoop.io.SortedMapWritable
KeyValueLineRecordReader - Class in org.apache.hadoop.mapred
This class treats a line in the input as a key/value pair separated by a separator character.
KeyValueLineRecordReader(Configuration, FileSplit) - Constructor for class org.apache.hadoop.mapred.KeyValueLineRecordReader
 
KeyValueTextInputFormat - Class in org.apache.hadoop.mapred
An InputFormat for plain text files.
KeyValueTextInputFormat() - Constructor for class org.apache.hadoop.mapred.KeyValueTextInputFormat
 
kids - Variable in class org.apache.hadoop.mapred.join.CompositeRecordReader
 
KILLED - Static variable in class org.apache.hadoop.mapred.JobStatus
 
killJob(JobID) - Method in class org.apache.hadoop.mapred.JobTracker
 
killJob() - Method in interface org.apache.hadoop.mapred.RunningJob
Kill the running job.
killJob() - Method in class org.apache.hadoop.mapreduce.Job
Kill the running job.
killTask(TaskAttemptID, boolean) - Method in class org.apache.hadoop.mapred.JobTracker
Mark a Task to be killed
killTask(TaskAttemptID, boolean) - Method in interface org.apache.hadoop.mapred.RunningJob
Kill indicated task attempt.
killTask(String, boolean) - Method in interface org.apache.hadoop.mapred.RunningJob
Deprecated. Applications should rather use RunningJob.killTask(TaskAttemptID, boolean)
killTask(TaskAttemptID) - Method in class org.apache.hadoop.mapreduce.Job
Kill indicated task attempt.
kind - Variable in class org.apache.hadoop.record.compiler.generated.Token
An integer that describes the kind of this token.
KosmosFileSystem - Class in org.apache.hadoop.fs.kfs
A FileSystem backed by KFS.
KosmosFileSystem() - Constructor for class org.apache.hadoop.fs.kfs.KosmosFileSystem
 

L

largestNumOfValues - Variable in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
 
lastKey() - Method in class org.apache.hadoop.io.SortedMapWritable
LBRACE_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
LENGTH - Static variable in class org.apache.hadoop.fs.MD5MD5CRC32FileChecksum
 
LESSER_ICOST - Static variable in class org.apache.hadoop.io.compress.bzip2.CBZip2OutputStream
This constant is accessible by subclasses for historical purposes.
lessThan(Object, Object) - Method in class org.apache.hadoop.util.PriorityQueue
Determines the ordering of objects in this priority queue.
level - Variable in class org.apache.hadoop.net.NodeBase
 
LexicalError(boolean, int, int, int, String, char) - Static method in error org.apache.hadoop.record.compiler.generated.TokenMgrError
Returns a detailed message for the Error when it is thrown by the token manager to indicate a lexical error.
lexStateNames - Static variable in class org.apache.hadoop.record.compiler.generated.RccTokenManager
 
limitDecimalTo2(double) - Static method in class org.apache.hadoop.fs.FsShell
Deprecated. Consider using StringUtils.limitDecimalTo2(double) instead.
limitDecimalTo2(double) - Static method in class org.apache.hadoop.util.StringUtils
 
line - Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
LineDocInputFormat - Class in org.apache.hadoop.contrib.index.example
An InputFormat for LineDoc for plain text files where each line is a doc.
LineDocInputFormat() - Constructor for class org.apache.hadoop.contrib.index.example.LineDocInputFormat
 
LineDocLocalAnalysis - Class in org.apache.hadoop.contrib.index.example
Convert LineDocTextAndOp to DocumentAndOp as required by ILocalAnalysis.
LineDocLocalAnalysis() - Constructor for class org.apache.hadoop.contrib.index.example.LineDocLocalAnalysis
 
LineDocRecordReader - Class in org.apache.hadoop.contrib.index.example
A simple RecordReader for LineDoc for plain text files where each line is a doc.
LineDocRecordReader(Configuration, FileSplit) - Constructor for class org.apache.hadoop.contrib.index.example.LineDocRecordReader
Constructor
LineDocTextAndOp - Class in org.apache.hadoop.contrib.index.example
This class represents an operation.
LineDocTextAndOp() - Constructor for class org.apache.hadoop.contrib.index.example.LineDocTextAndOp
Constructor
LineReader - Class in org.apache.hadoop.util
A class that provides a line reader from an input stream.
LineReader(InputStream) - Constructor for class org.apache.hadoop.util.LineReader
Create a line reader that reads from the given stream using the default buffer-size (64k).
LineReader(InputStream, int) - Constructor for class org.apache.hadoop.util.LineReader
Create a line reader that reads from the given stream using the given buffer-size.
LineReader(InputStream, Configuration) - Constructor for class org.apache.hadoop.util.LineReader
Create a line reader that reads from the given stream using the io.file.buffer.size specified in the given Configuration.
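For example, a minimal sketch of reading a file line by line (the path argument is a placeholder; the class name is hypothetical):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.util.LineReader;

    public class LineReaderDemo {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        LineReader in = new LineReader(fs.open(new Path(args[0])), conf);
        Text line = new Text();
        try {
          while (in.readLine(line) > 0) {   // returns 0 at end of stream
            System.out.println(line);
          }
        } finally {
          in.close();
        }
      }
    }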
LineRecordReader - Class in org.apache.hadoop.mapred
Deprecated. Use LineRecordReader instead.
LineRecordReader(Configuration, FileSplit) - Constructor for class org.apache.hadoop.mapred.LineRecordReader
Deprecated.  
LineRecordReader(InputStream, long, long, int) - Constructor for class org.apache.hadoop.mapred.LineRecordReader
Deprecated.  
LineRecordReader(InputStream, long, long, Configuration) - Constructor for class org.apache.hadoop.mapred.LineRecordReader
Deprecated.  
LineRecordReader - Class in org.apache.hadoop.mapreduce.lib.input
Treats keys as offset in file and value as line.
LineRecordReader() - Constructor for class org.apache.hadoop.mapreduce.lib.input.LineRecordReader
 
LineRecordReader.LineReader - Class in org.apache.hadoop.mapred
Deprecated. Use LineReader instead.
LineRecordReader.LineReader(InputStream, Configuration) - Constructor for class org.apache.hadoop.mapred.LineRecordReader.LineReader
Deprecated.  
LINK_URI - Static variable in class org.apache.hadoop.streaming.StreamJob
 
LinuxMemoryCalculatorPlugin - Class in org.apache.hadoop.util
Plugin to calculate virtual and physical memories on Linux systems.
LinuxMemoryCalculatorPlugin() - Constructor for class org.apache.hadoop.util.LinuxMemoryCalculatorPlugin
 
list() - Method in class org.apache.hadoop.contrib.index.lucene.FileSystemDirectory
 
listDeepSubPaths(Path) - Method in interface org.apache.hadoop.fs.s3.FileSystemStore
 
listener - Variable in class org.apache.hadoop.http.HttpServer
 
listJobConfProperties() - Method in class org.apache.hadoop.streaming.StreamJob
Prints out the jobconf properties on stdout when verbose is specified.
listStatus(Path) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
List the statuses of the files/directories in the given path if the path is a directory.
listStatus(Path) - Method in class org.apache.hadoop.fs.FileSystem
List the statuses of the files/directories in the given path if the path is a directory.
listStatus(Path, PathFilter) - Method in class org.apache.hadoop.fs.FileSystem
Filter files/directories in the given path using the user-supplied path filter.
listStatus(Path[]) - Method in class org.apache.hadoop.fs.FileSystem
Filter files/directories in the given list of paths using default path filter.
listStatus(Path[], PathFilter) - Method in class org.apache.hadoop.fs.FileSystem
Filter files/directories in the given list of paths using user-supplied path filter.
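For example, a short sketch that lists only the ".log" files in a directory (the directory argument is a placeholder):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.PathFilter;

    public class ListLogs {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        FileStatus[] logs = fs.listStatus(new Path(args[0]), new PathFilter() {
          public boolean accept(Path p) {
            return p.getName().endsWith(".log");
          }
        });
        for (FileStatus s : logs) {
          System.out.println(s.getPath());
        }
      }
    }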
listStatus(Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
List files in a directory.
listStatus(Path) - Method in class org.apache.hadoop.fs.ftp.FTPFileSystem
 
listStatus(Path) - Method in class org.apache.hadoop.fs.HarFileSystem
listStatus returns the children of a directory after looking up the index files.
listStatus(Path) - Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
 
listStatus(Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
listStatus(Path) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
listStatus(Path) - Method in class org.apache.hadoop.fs.s3native.NativeS3FileSystem
If f is a file, this method will make a single call to S3.
listStatus(JobConf) - Method in class org.apache.hadoop.mapred.FileInputFormat
Deprecated. List input directories.
listStatus(JobConf) - Method in class org.apache.hadoop.mapred.SequenceFileInputFormat
Deprecated.  
listStatus(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
List input directories.
listStatus(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat
 
listSubPaths(Path) - Method in interface org.apache.hadoop.fs.s3.FileSystemStore
 
ljustify(String, int) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
load(Configuration, String, Class<K>) - Static method in class org.apache.hadoop.io.DefaultStringifier
Restores the object from the configuration.
loadArray(Configuration, String, Class<K>) - Static method in class org.apache.hadoop.io.DefaultStringifier
Restores the array of objects from the configuration.
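As an illustration, a hedged sketch of storing a Writable in the configuration and loading it back (the key name is arbitrary; store() is assumed as the counterpart to load()):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.DefaultStringifier;
    import org.apache.hadoop.io.Text;

    public class StringifierDemo {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        DefaultStringifier.store(conf, new Text("hello"), "my.stored.text");
        Text restored = DefaultStringifier.load(conf, "my.stored.text", Text.class);
        System.out.println(restored);   // hello
      }
    }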
LocalDirAllocator - Class in org.apache.hadoop.fs
An implementation of a round-robin scheme for disk allocation for creating files.
LocalDirAllocator(String) - Constructor for class org.apache.hadoop.fs.LocalDirAllocator
Create an allocator object
LocalFileSystem - Class in org.apache.hadoop.fs
Implement the FileSystem API for the checksummed local filesystem.
LocalFileSystem() - Constructor for class org.apache.hadoop.fs.LocalFileSystem
 
LocalFileSystem(FileSystem) - Constructor for class org.apache.hadoop.fs.LocalFileSystem
 
localHadoop_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
localizeBin(String) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
localRunnerNotification(JobConf, JobStatus) - Static method in class org.apache.hadoop.mapred.JobEndNotifier
 
LocalStore - Class in org.apache.hadoop.contrib.failmon
This class takes care of the temporary local storage of gathered metrics before they get uploaded into HDFS.
LocalStore() - Constructor for class org.apache.hadoop.contrib.failmon.LocalStore
Create an instance of the class and read the configuration file to determine some output parameters.
location - Variable in class org.apache.hadoop.net.NodeBase
 
lock(Path, boolean) - Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
Deprecated. 
LOG - Static variable in class org.apache.hadoop.contrib.failmon.Environment
 
LOG - Static variable in class org.apache.hadoop.contrib.index.main.UpdateIndex
 
LOG - Static variable in class org.apache.hadoop.contrib.index.mapred.IndexUpdater
 
LOG - Static variable in class org.apache.hadoop.contrib.utils.join.JobBase
 
LOG - Static variable in class org.apache.hadoop.fs.FileSystem
 
LOG - Static variable in class org.apache.hadoop.fs.FSInputChecker
 
LOG - Static variable in class org.apache.hadoop.fs.ftp.FTPFileSystem
 
LOG - Static variable in class org.apache.hadoop.fs.s3native.NativeS3FileSystem
 
LOG - Static variable in class org.apache.hadoop.http.HttpServer
 
LOG - Static variable in class org.apache.hadoop.io.compress.CompressionCodecFactory
 
LOG - Static variable in class org.apache.hadoop.ipc.Client
 
LOG - Static variable in class org.apache.hadoop.ipc.Server
 
log(Log) - Method in class org.apache.hadoop.mapred.Counters
Deprecated. Logs the current counter values.
LOG - Static variable in class org.apache.hadoop.mapred.FileInputFormat
Deprecated.  
LOG - Static variable in class org.apache.hadoop.mapred.FileOutputCommitter
 
LOG - Static variable in class org.apache.hadoop.mapred.JobHistory
 
LOG - Static variable in class org.apache.hadoop.mapred.JobTracker
 
LOG - Static variable in class org.apache.hadoop.mapred.lib.FieldSelectionMapReduce
 
LOG - Static variable in class org.apache.hadoop.mapred.pipes.Submitter
 
LOG - Static variable in class org.apache.hadoop.mapred.TaskTracker
 
LOG - Static variable in class org.apache.hadoop.metrics.MetricsUtil
 
LOG - Static variable in class org.apache.hadoop.net.NetworkTopology
 
LOG - Static variable in class org.apache.hadoop.security.UserGroupInformation
 
LOG - Static variable in class org.apache.hadoop.streaming.PipeMapRed
 
LOG - Static variable in class org.apache.hadoop.streaming.StreamBaseRecordReader
 
LOG - Static variable in class org.apache.hadoop.streaming.StreamJob
 
LOG - Static variable in class org.apache.hadoop.util.Shell
 
logFailed(JobID, long, int, int) - Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
Logs job failed event.
logFailed(TaskAttemptID, long, String, String) - Static method in class org.apache.hadoop.mapred.JobHistory.MapAttempt
Deprecated. Use JobHistory.MapAttempt.logFailed(TaskAttemptID, long, String, String, String)
logFailed(TaskAttemptID, long, String, String, String) - Static method in class org.apache.hadoop.mapred.JobHistory.MapAttempt
Log task attempt failed event.
logFailed(TaskAttemptID, long, String, String) - Static method in class org.apache.hadoop.mapred.JobHistory.ReduceAttempt
Deprecated. Use JobHistory.ReduceAttempt.logFailed(TaskAttemptID, long, String, String, String)
logFailed(TaskAttemptID, long, String, String, String) - Static method in class org.apache.hadoop.mapred.JobHistory.ReduceAttempt
Log failed reduce task attempt.
logFailed(TaskID, String, long, String) - Static method in class org.apache.hadoop.mapred.JobHistory.Task
Log job failed event.
logFailed(TaskID, String, long, String, TaskAttemptID) - Static method in class org.apache.hadoop.mapred.JobHistory.Task
 
logFinished(JobID, long, int, int, int, int, Counters) - Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
Log job finished.
logFinished(TaskAttemptID, long, String) - Static method in class org.apache.hadoop.mapred.JobHistory.MapAttempt
Deprecated. Use JobHistory.MapAttempt.logFinished(TaskAttemptID, long, String, String, String, Counters)
logFinished(TaskAttemptID, long, String, String, String, Counters) - Static method in class org.apache.hadoop.mapred.JobHistory.MapAttempt
Log finish time of map task attempt.
logFinished(TaskAttemptID, long, long, long, String) - Static method in class org.apache.hadoop.mapred.JobHistory.ReduceAttempt
Deprecated. Use JobHistory.ReduceAttempt.logFinished(TaskAttemptID, long, long, long, String, String, String, Counters)
logFinished(TaskAttemptID, long, long, long, String, String, String, Counters) - Static method in class org.apache.hadoop.mapred.JobHistory.ReduceAttempt
Log finished event of this task.
logFinished(TaskID, String, long, Counters) - Static method in class org.apache.hadoop.mapred.JobHistory.Task
Log finish time of task.
login() - Static method in class org.apache.hadoop.security.UnixUserGroupInformation
Get current user's name and the names of all its groups from Unix.
login(Configuration) - Static method in class org.apache.hadoop.security.UnixUserGroupInformation
Equivalent to login(conf, false).
login(Configuration, boolean) - Static method in class org.apache.hadoop.security.UnixUserGroupInformation
Get a user's name and its group names from the given configuration; if it is not defined in the configuration, get the current user's information from Unix.
login(Configuration) - Static method in class org.apache.hadoop.security.UserGroupInformation
Login and return a UserGroupInformation object.
logInfo(String) - Static method in class org.apache.hadoop.contrib.failmon.Environment
 
logInited(JobID, long, int, int) - Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
Logs launch time of job.
logJobInfo(JobID, long, long, int) - Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
Deprecated. Use JobHistory.JobInfo.logJobInfo(JobID, long, long) instead.
logJobInfo(JobID, long, long) - Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
 
logJobPriority(JobID, JobPriority) - Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
Log job's priority.
logKilled(JobID, long, int, int) - Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
Logs job killed event.
logKilled(TaskAttemptID, long, String, String) - Static method in class org.apache.hadoop.mapred.JobHistory.MapAttempt
Deprecated. Use JobHistory.MapAttempt.logKilled(TaskAttemptID, long, String, String, String)
logKilled(TaskAttemptID, long, String, String, String) - Static method in class org.apache.hadoop.mapred.JobHistory.MapAttempt
Log task attempt killed event.
logKilled(TaskAttemptID, long, String, String) - Static method in class org.apache.hadoop.mapred.JobHistory.ReduceAttempt
Deprecated. Use JobHistory.ReduceAttempt.logKilled(TaskAttemptID, long, String, String, String)
logKilled(TaskAttemptID, long, String, String, String) - Static method in class org.apache.hadoop.mapred.JobHistory.ReduceAttempt
Log killed reduce task attempt.
LogLevel - Class in org.apache.hadoop.log
Change log level in runtime.
LogLevel() - Constructor for class org.apache.hadoop.log.LogLevel
 
LogLevel.Servlet - Class in org.apache.hadoop.log
A servlet implementation
LogLevel.Servlet() - Constructor for class org.apache.hadoop.log.LogLevel.Servlet
 
LogParser - Class in org.apache.hadoop.contrib.failmon
This class represents objects that provide log parsing functionality.
LogParser(String) - Constructor for class org.apache.hadoop.contrib.failmon.LogParser
Create a parser that will read from the specified log file.
logSpec() - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJobBase
 
logStarted(JobID, long, int, int) - Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
Deprecated. Use JobHistory.JobInfo.logInited(JobID, long, int, int) and JobHistory.JobInfo.logStarted(JobID)
logStarted(JobID) - Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
Logs job as running
logStarted(TaskAttemptID, long, String) - Static method in class org.apache.hadoop.mapred.JobHistory.MapAttempt
Deprecated. Use JobHistory.MapAttempt.logStarted(TaskAttemptID, long, String, int, String)
logStarted(TaskAttemptID, long, String, int, String) - Static method in class org.apache.hadoop.mapred.JobHistory.MapAttempt
Log start time of this map task attempt.
logStarted(TaskAttemptID, long, String) - Static method in class org.apache.hadoop.mapred.JobHistory.ReduceAttempt
Deprecated. Use JobHistory.ReduceAttempt.logStarted(TaskAttemptID, long, String, int, String)
logStarted(TaskAttemptID, long, String, int, String) - Static method in class org.apache.hadoop.mapred.JobHistory.ReduceAttempt
Log start time of Reduce task attempt.
logStarted(TaskID, String, long, String) - Static method in class org.apache.hadoop.mapred.JobHistory.Task
Log start time of task (TIP).
logSubmitted(JobID, JobConf, String, long) - Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
Deprecated. Use JobHistory.JobInfo.logSubmitted(JobID, JobConf, String, long, boolean) instead.
logSubmitted(JobID, JobConf, String, long, boolean) - Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
 
logThreadInfo(Log, String, long) - Static method in class org.apache.hadoop.util.ReflectionUtils
Log the current thread stacks at INFO level.
logUpdates(TaskID, long) - Static method in class org.apache.hadoop.mapred.JobHistory.Task
Update the finish time of task.
LONG - Static variable in class org.apache.hadoop.record.meta.TypeID.RIOType
 
LONG_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
LONG_VALUE_MAX - Static variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
 
LONG_VALUE_MIN - Static variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
 
LONG_VALUE_SUM - Static variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
 
LongSumReducer<K> - Class in org.apache.hadoop.mapred.lib
Deprecated. Use LongSumReducer instead.
LongSumReducer() - Constructor for class org.apache.hadoop.mapred.lib.LongSumReducer
Deprecated.  
LongSumReducer<KEY> - Class in org.apache.hadoop.mapreduce.lib.reduce
 
LongSumReducer() - Constructor for class org.apache.hadoop.mapreduce.lib.reduce.LongSumReducer
 
LongTypeID - Static variable in class org.apache.hadoop.record.meta.TypeID
 
LongValueMax - Class in org.apache.hadoop.mapred.lib.aggregate
This class implements a value aggregator that maintains the maximum of a sequence of long values.
LongValueMax() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.LongValueMax
the default constructor
LongValueMin - Class in org.apache.hadoop.mapred.lib.aggregate
This class implements a value aggregator that maintains the minimum of a sequence of long values.
LongValueMin() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.LongValueMin
the default constructor
LongValueSum - Class in org.apache.hadoop.mapred.lib.aggregate
This class implements a value aggregator that sums up a sequence of long values.
LongValueSum() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.LongValueSum
the default constructor
LongWritable - Class in org.apache.hadoop.io
A WritableComparable for longs.
LongWritable() - Constructor for class org.apache.hadoop.io.LongWritable
 
LongWritable(long) - Constructor for class org.apache.hadoop.io.LongWritable
 
LongWritable.Comparator - Class in org.apache.hadoop.io
A Comparator optimized for LongWritable.
LongWritable.Comparator() - Constructor for class org.apache.hadoop.io.LongWritable.Comparator
 
LongWritable.DecreasingComparator - Class in org.apache.hadoop.io
A decreasing Comparator optimized for LongWritable.
LongWritable.DecreasingComparator() - Constructor for class org.apache.hadoop.io.LongWritable.DecreasingComparator
 
lowerBound(byte[]) - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner
Move the cursor to the first entry whose key is greater than or equal to the input key.
lowerBound(byte[], int, int) - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner
Move the cursor to the first entry whose key is greater than or equal to the input key.
lowerBound(List<? extends T>, T, Comparator<? super T>) - Static method in class org.apache.hadoop.io.file.tfile.Utils
Lower bound binary search.
lowerBound(List<? extends Comparable<? super T>>, T) - Static method in class org.apache.hadoop.io.file.tfile.Utils
Lower bound binary search.
LT_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
LuceneUtil - Class in org.apache.hadoop.contrib.index.lucene
This class copies some methods from Lucene's SegmentInfos since that class is not public.
LuceneUtil() - Constructor for class org.apache.hadoop.contrib.index.lucene.LuceneUtil
 

M

main(String[]) - Static method in class org.apache.hadoop.conf.Configuration
For debugging.
main(String[]) - Static method in class org.apache.hadoop.contrib.failmon.Continuous
 
main(String[]) - Static method in class org.apache.hadoop.contrib.failmon.HDFSMerger
 
main(String[]) - Static method in class org.apache.hadoop.contrib.failmon.OfflineAnonymizer
 
main(String[]) - Static method in class org.apache.hadoop.contrib.failmon.RunOnce
 
main(String[]) - Static method in class org.apache.hadoop.contrib.index.main.UpdateIndex
The main() method
main(String[]) - Static method in class org.apache.hadoop.contrib.utils.join.DataJoinJob
 
main(String[]) - Static method in class org.apache.hadoop.examples.AggregateWordCount
The main driver for word count map/reduce program.
main(String[]) - Static method in class org.apache.hadoop.examples.AggregateWordHistogram
The main driver for word count map/reduce program.
main(String[]) - Static method in class org.apache.hadoop.examples.dancing.DistributedPentomino
Launch the solver on a 9x10 board with the one-sided pentominoes.
main(String[]) - Static method in class org.apache.hadoop.examples.dancing.OneSidedPentomino
Solve the 3x30 puzzle.
main(String[]) - Static method in class org.apache.hadoop.examples.dancing.Pentomino
Solve the 6x10 pentomino puzzle.
main(String[]) - Static method in class org.apache.hadoop.examples.dancing.Sudoku
Solves a set of sudoku puzzles.
main(String[]) - Static method in class org.apache.hadoop.examples.DBCountPageView
 
main(String[]) - Static method in class org.apache.hadoop.examples.ExampleDriver
 
main(String[]) - Static method in class org.apache.hadoop.examples.Grep
 
main(String[]) - Static method in class org.apache.hadoop.examples.Join
 
main(String[]) - Static method in class org.apache.hadoop.examples.MultiFileWordCount
 
main(String[]) - Static method in class org.apache.hadoop.examples.PiEstimator
main method for running it as a stand-alone command.
main(String[]) - Static method in class org.apache.hadoop.examples.RandomTextWriter
 
main(String[]) - Static method in class org.apache.hadoop.examples.RandomWriter
 
main(String[]) - Static method in class org.apache.hadoop.examples.SecondarySort
 
main(String[]) - Static method in class org.apache.hadoop.examples.SleepJob
 
main(String[]) - Static method in class org.apache.hadoop.examples.Sort
 
main(String[]) - Static method in class org.apache.hadoop.examples.terasort.TeraGen
 
main(String[]) - Static method in class org.apache.hadoop.examples.terasort.TeraSort
 
main(String[]) - Static method in class org.apache.hadoop.examples.terasort.TeraValidate
 
main(String[]) - Static method in class org.apache.hadoop.examples.WordCount
 
main(String[]) - Static method in class org.apache.hadoop.fs.DF
 
main(String[]) - Static method in class org.apache.hadoop.fs.DU
 
main(String[]) - Static method in class org.apache.hadoop.fs.FsShell
main() has some simple utility methods
main(String[]) - Static method in class org.apache.hadoop.fs.s3.MigrationTool
 
main(String[]) - Static method in class org.apache.hadoop.fs.Trash
Run an emptier.
main(String[]) - Static method in class org.apache.hadoop.io.compress.CompressionCodecFactory
A little test program.
main(String[]) - Static method in class org.apache.hadoop.io.file.tfile.TFile
Dumping the TFile information.
main(String[]) - Static method in class org.apache.hadoop.io.MapFile
 
main(String[]) - Static method in class org.apache.hadoop.log.LogLevel
A command line implementation
main(String[]) - Static method in class org.apache.hadoop.mapred.IsolationRunner
Run a single task
main(String[]) - Static method in class org.apache.hadoop.mapred.JobClient
 
main(String[]) - Static method in class org.apache.hadoop.mapred.JobTracker
Start the JobTracker process.
main(String[]) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
create and run an Aggregate based map/reduce job.
main(String[]) - Static method in class org.apache.hadoop.mapred.lib.InputSampler
 
main(String[]) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
Submit a pipes job based on the command line arguments.
main(String[]) - Static method in class org.apache.hadoop.mapred.TaskTracker
Start the TaskTracker, point toward the indicated JobTracker
main(String[]) - Static method in class org.apache.hadoop.mapred.tools.MRAdmin
 
main(String[]) - Static method in class org.apache.hadoop.record.compiler.generated.Rcc
 
main(String[]) - Static method in class org.apache.hadoop.streaming.HadoopStreaming
 
main(String[]) - Static method in class org.apache.hadoop.streaming.JarBuilder
Test program
main(String[]) - Static method in class org.apache.hadoop.streaming.PathFinder
 
main(String[]) - Static method in class org.apache.hadoop.util.hash.JenkinsHash
Compute the hash of the specified file
main(String[]) - Static method in class org.apache.hadoop.util.LinuxMemoryCalculatorPlugin
Test the LinuxMemoryCalculatorPlugin
main(String[]) - Static method in class org.apache.hadoop.util.PlatformName
 
main(String[]) - Static method in class org.apache.hadoop.util.PrintJarMainClass
 
main(String[]) - Static method in class org.apache.hadoop.util.RunJar
Run a Hadoop job jar.
main(String[]) - Static method in class org.apache.hadoop.util.VersionInfo
 
makeCompactString() - Method in class org.apache.hadoop.mapred.Counters
Deprecated. Convert a counters object into a single line that is easy to parse.
makeComparator(String) - Static method in class org.apache.hadoop.io.file.tfile.TFile
Make a raw comparator from a string name.
makeEscapedCompactString() - Method in class org.apache.hadoop.mapred.Counters.Counter
Deprecated. Returns the compact stringified version of the counter in the format [(actual-name)(display-name)(value)]
makeEscapedCompactString() - Method in class org.apache.hadoop.mapred.Counters.Group
Deprecated. Returns the compact stringified version of the group in the format {(actual-name)(display-name)(value)[][][]} where [] are compact strings for the counters within.
makeEscapedCompactString() - Method in class org.apache.hadoop.mapred.Counters
Deprecated. Represent the counter in a textual format that can be converted back to its object form
makeJavaCommand(Class, String[]) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
makeLock(String) - Method in class org.apache.hadoop.contrib.index.lucene.FileSystemDirectory
 
makeQualified(Path) - Method in class org.apache.hadoop.fs.FileSystem
Make sure that a path specifies a FileSystem.
makeQualified(Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
Make sure that a path specifies a FileSystem.
makeQualified(Path) - Method in class org.apache.hadoop.fs.HarFileSystem
 
makeQualified(FileSystem) - Method in class org.apache.hadoop.fs.Path
Returns a qualified path object.
makeRelative(URI, Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
 
makeShellPath(String) - Static method in class org.apache.hadoop.fs.FileUtil
Convert an OS-native filename to a path that works for the shell.
makeShellPath(File) - Static method in class org.apache.hadoop.fs.FileUtil
Convert an OS-native filename to a path that works for the shell.
makeShellPath(File, boolean) - Static method in class org.apache.hadoop.fs.FileUtil
Convert an OS-native filename to a path that works for the shell.
map(DocumentID, DocumentAndOp, OutputCollector<DocumentID, DocumentAndOp>, Reporter) - Method in class org.apache.hadoop.contrib.index.example.IdentityLocalAnalysis
 
map(DocumentID, LineDocTextAndOp, OutputCollector<DocumentID, DocumentAndOp>, Reporter) - Method in class org.apache.hadoop.contrib.index.example.LineDocLocalAnalysis
 
map(K, V, OutputCollector<Shard, IntermediateForm>, Reporter) - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateMapper
Map a key-value pair to a shard-and-intermediate form pair.
map(Object, Object, OutputCollector, Reporter) - Method in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
 
map(Object, Object, OutputCollector, Reporter) - Method in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
 
map(WritableComparable, Text, OutputCollector<Text, Text>, Reporter) - Method in class org.apache.hadoop.examples.dancing.DistributedPentomino.PentMap
Break the prefix string into moves (a sequence of integer row ids that will be selected for each column in order).
map(MultiFileWordCount.WordOffset, Text, OutputCollector<Text, IntWritable>, Reporter) - Method in class org.apache.hadoop.examples.MultiFileWordCount.MapClass
 
map(LongWritable, LongWritable, OutputCollector<BooleanWritable, LongWritable>, Reporter) - Method in class org.apache.hadoop.examples.PiEstimator.PiMapper
Map method.
map(LongWritable, Text, Mapper<LongWritable, Text, SecondarySort.IntPair, IntWritable>.Context) - Method in class org.apache.hadoop.examples.SecondarySort.MapClass
 
map(IntWritable, IntWritable, OutputCollector<IntWritable, NullWritable>, Reporter) - Method in class org.apache.hadoop.examples.SleepJob
 
map(LongWritable, NullWritable, OutputCollector<Text, Text>, Reporter) - Method in class org.apache.hadoop.examples.terasort.TeraGen.SortGenMapper
 
map(Object, Text, Mapper<Object, Text, Text, IntWritable>.Context) - Method in class org.apache.hadoop.examples.WordCount.TokenizerMapper
 
map(K1, V1, OutputCollector<Text, Text>, Reporter) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorCombiner
Do nothing.
map(K1, V1, OutputCollector<Text, Text>, Reporter) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorMapper
the map function.
map(K1, V1, OutputCollector<Text, Text>, Reporter) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorReducer
Do nothing.
map(Object, Object, OutputCollector, Reporter) - Method in class org.apache.hadoop.mapred.lib.ChainMapper
Chains the map(...) methods of the Mappers in the chain.
map(K1, V1, OutputCollector<K2, V2>, Reporter) - Method in class org.apache.hadoop.mapred.lib.DelegatingMapper
 
map(K, V, OutputCollector<Text, Text>, Reporter) - Method in class org.apache.hadoop.mapred.lib.FieldSelectionMapReduce
The identity function.
map(K, V, OutputCollector<K, V>, Reporter) - Method in class org.apache.hadoop.mapred.lib.IdentityMapper
Deprecated. The identity function.
map(K, V, OutputCollector<V, K>, Reporter) - Method in class org.apache.hadoop.mapred.lib.InverseMapper
Deprecated. The inverse function.
map(K, Text, OutputCollector<Text, LongWritable>, Reporter) - Method in class org.apache.hadoop.mapred.lib.RegexMapper
 
map(K, Text, OutputCollector<Text, LongWritable>, Reporter) - Method in class org.apache.hadoop.mapred.lib.TokenCountMapper
Deprecated.  
map(K1, V1, OutputCollector<K2, V2>, Reporter) - Method in interface org.apache.hadoop.mapred.Mapper
Deprecated. Maps a single input key/value pair into an intermediate key/value pair.
map(K, V, Mapper<K, V, V, K>.Context) - Method in class org.apache.hadoop.mapreduce.lib.map.InverseMapper
The inverse function.
map(Object, Text, Mapper<Object, Text, Text, IntWritable>.Context) - Method in class org.apache.hadoop.mapreduce.lib.map.TokenCounterMapper
 
map(KEYIN, VALUEIN, Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT>.Context) - Method in class org.apache.hadoop.mapreduce.Mapper
Called once for each key/value pair in the input split.
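A minimal new-API Mapper sketch showing this method being overridden; the class name and the emitted line-length values are illustrative.

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class LineLengthMapper
        extends Mapper<LongWritable, Text, Text, IntWritable> {
      @Override
      protected void map(LongWritable offset, Text line, Context context)
          throws IOException, InterruptedException {
        // Invoked once per input record; emit the line keyed by its own text.
        context.write(line, new IntWritable(line.getLength()));
      }
    }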
Map() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
MAP - Static variable in class org.apache.hadoop.record.meta.TypeID.RIOType
 
map(Object, Object, OutputCollector, Reporter) - Method in class org.apache.hadoop.streaming.PipeMapper
 
MAP_CLASS_ATTR - Static variable in class org.apache.hadoop.mapreduce.JobContext
 
MAP_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
mapCmd_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
MapContext<KEYIN,VALUEIN,KEYOUT,VALUEOUT> - Class in org.apache.hadoop.mapreduce
The context that is given to the Mapper.
MapContext(Configuration, TaskAttemptID, RecordReader<KEYIN, VALUEIN>, RecordWriter<KEYOUT, VALUEOUT>, OutputCommitter, StatusReporter, InputSplit) - Constructor for class org.apache.hadoop.mapreduce.MapContext
 
mapDebugSpec_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
MapFile - Class in org.apache.hadoop.io
A file-based map from keys to values.
MapFile() - Constructor for class org.apache.hadoop.io.MapFile
 
MapFile.Reader - Class in org.apache.hadoop.io
Provide access to an existing map.
MapFile.Reader(FileSystem, String, Configuration) - Constructor for class org.apache.hadoop.io.MapFile.Reader
Construct a map reader for the named map.
MapFile.Reader(FileSystem, String, WritableComparator, Configuration) - Constructor for class org.apache.hadoop.io.MapFile.Reader
Construct a map reader for the named map using the named comparator.
MapFile.Reader(FileSystem, String, WritableComparator, Configuration, boolean) - Constructor for class org.apache.hadoop.io.MapFile.Reader
Hook to allow subclasses to defer opening streams until further initialization is complete.
MapFile.Writer - Class in org.apache.hadoop.io
Writes a new map.
MapFile.Writer(Configuration, FileSystem, String, Class<? extends WritableComparable>, Class) - Constructor for class org.apache.hadoop.io.MapFile.Writer
Create the named map for keys of the named class.
MapFile.Writer(Configuration, FileSystem, String, Class<? extends WritableComparable>, Class, SequenceFile.CompressionType, Progressable) - Constructor for class org.apache.hadoop.io.MapFile.Writer
Create the named map for keys of the named class.
MapFile.Writer(Configuration, FileSystem, String, Class<? extends WritableComparable>, Class, SequenceFile.CompressionType, CompressionCodec, Progressable) - Constructor for class org.apache.hadoop.io.MapFile.Writer
Create the named map for keys of the named class.
MapFile.Writer(Configuration, FileSystem, String, Class<? extends WritableComparable>, Class, SequenceFile.CompressionType) - Constructor for class org.apache.hadoop.io.MapFile.Writer
Create the named map for keys of the named class.
MapFile.Writer(Configuration, FileSystem, String, WritableComparator, Class) - Constructor for class org.apache.hadoop.io.MapFile.Writer
Create the named map using the named key comparator.
MapFile.Writer(Configuration, FileSystem, String, WritableComparator, Class, SequenceFile.CompressionType) - Constructor for class org.apache.hadoop.io.MapFile.Writer
Create the named map using the named key comparator.
MapFile.Writer(Configuration, FileSystem, String, WritableComparator, Class, SequenceFile.CompressionType, Progressable) - Constructor for class org.apache.hadoop.io.MapFile.Writer
Create the named map using the named key comparator.
MapFile.Writer(Configuration, FileSystem, String, WritableComparator, Class, SequenceFile.CompressionType, CompressionCodec, Progressable) - Constructor for class org.apache.hadoop.io.MapFile.Writer
Create the named map using the named key comparator.
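A minimal write-then-read sketch for MapFile; the directory "/tmp/example.map" is illustrative, and keys must be appended in sorted order.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.MapFile;
    import org.apache.hadoop.io.Text;

    public class MapFileExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.getLocal(conf);
        String dir = "/tmp/example.map";

        MapFile.Writer writer =
            new MapFile.Writer(conf, fs, dir, Text.class, IntWritable.class);
        writer.append(new Text("alpha"), new IntWritable(1));  // keys in sorted order
        writer.append(new Text("beta"), new IntWritable(2));
        writer.close();

        MapFile.Reader reader = new MapFile.Reader(fs, dir, conf);
        IntWritable value = new IntWritable();
        reader.get(new Text("beta"), value);   // random lookup via the index
        System.out.println(value);             // prints 2
        reader.close();
      }
    }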
MapFileOutputFormat - Class in org.apache.hadoop.mapred
An OutputFormat that writes MapFiles.
MapFileOutputFormat() - Constructor for class org.apache.hadoop.mapred.MapFileOutputFormat
 
mapOutputLost(TaskAttemptID, String) - Method in class org.apache.hadoop.mapred.TaskTracker
A completed map task's output has been lost.
Mapper<K1,V1,K2,V2> - Interface in org.apache.hadoop.mapred
Deprecated. Use Mapper instead.
Mapper<KEYIN,VALUEIN,KEYOUT,VALUEOUT> - Class in org.apache.hadoop.mapreduce
Maps input key/value pairs to a set of intermediate key/value pairs.
Mapper() - Constructor for class org.apache.hadoop.mapreduce.Mapper
 
Mapper.Context - Class in org.apache.hadoop.mapreduce
 
Mapper.Context(Configuration, TaskAttemptID, RecordReader<KEYIN, VALUEIN>, RecordWriter<KEYOUT, VALUEOUT>, OutputCommitter, StatusReporter, InputSplit) - Constructor for class org.apache.hadoop.mapreduce.Mapper.Context
 
mapProgress() - Method in class org.apache.hadoop.mapred.JobStatus
 
mapProgress() - Method in interface org.apache.hadoop.mapred.RunningJob
Get the progress of the job's map-tasks, as a float between 0.0 and 1.0.
mapProgress() - Method in class org.apache.hadoop.mapreduce.Job
Get the progress of the job's map-tasks, as a float between 0.0 and 1.0.
MAPRED_TASK_DEFAULT_MAXVMEM_PROPERTY - Static variable in class org.apache.hadoop.mapred.JobConf
Deprecated.  
MAPRED_TASK_MAXPMEM_PROPERTY - Static variable in class org.apache.hadoop.mapred.JobConf
Deprecated.  
MAPRED_TASK_MAXVMEM_PROPERTY - Static variable in class org.apache.hadoop.mapred.JobConf
Deprecated.  
mapRedFinished() - Method in class org.apache.hadoop.streaming.PipeMapRed
 
MapReduceBase - Class in org.apache.hadoop.mapred
Deprecated. 
MapReduceBase() - Constructor for class org.apache.hadoop.mapred.MapReduceBase
Deprecated.  
MapReducePolicyProvider - Class in org.apache.hadoop.mapred
PolicyProvider for Map-Reduce protocols.
MapReducePolicyProvider() - Constructor for class org.apache.hadoop.mapred.MapReducePolicyProvider
 
MapRunnable<K1,V1,K2,V2> - Interface in org.apache.hadoop.mapred
Deprecated. Use Mapper instead.
MapRunner<K1,V1,K2,V2> - Class in org.apache.hadoop.mapred
Default MapRunnable implementation.
MapRunner() - Constructor for class org.apache.hadoop.mapred.MapRunner
 
MapTypeID - Class in org.apache.hadoop.record.meta
Represents typeID for a Map
MapTypeID(TypeID, TypeID) - Constructor for class org.apache.hadoop.record.meta.MapTypeID
 
MapWritable - Class in org.apache.hadoop.io
A Writable Map.
MapWritable() - Constructor for class org.apache.hadoop.io.MapWritable
Default constructor.
MapWritable(MapWritable) - Constructor for class org.apache.hadoop.io.MapWritable
Copy constructor.
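A small sketch of MapWritable carrying mixed Writable keys and values; the entries are made up.

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.MapWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.Writable;

    public class MapWritableExample {
      public static void main(String[] args) {
        MapWritable map = new MapWritable();
        map.put(new Text("count"), new IntWritable(42));
        map.put(new Text("label"), new Text("example"));
        // MapWritable implements java.util.Map<Writable, Writable>.
        Writable w = map.get(new Text("count"));
        System.out.println(w);  // prints 42
      }
    }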
mark(int) - Method in class org.apache.hadoop.fs.FSInputChecker
 
mark(int) - Method in class org.apache.hadoop.fs.ftp.FTPInputStream
 
mark(int) - Method in class org.apache.hadoop.io.compress.DecompressorStream
 
markSupported() - Method in class org.apache.hadoop.fs.FSInputChecker
 
markSupported() - Method in class org.apache.hadoop.fs.ftp.FTPInputStream
 
markSupported() - Method in class org.apache.hadoop.io.compress.DecompressorStream
 
matches(String) - Static method in class org.apache.hadoop.fs.shell.Count
Check if a command is the count command
MAX_ALPHA_SIZE - Static variable in interface org.apache.hadoop.io.compress.bzip2.BZip2Constants
 
MAX_BLOCKSIZE - Static variable in class org.apache.hadoop.io.compress.bzip2.CBZip2OutputStream
The maximum supported blocksize == 9.
MAX_CODE_LEN - Static variable in interface org.apache.hadoop.io.compress.bzip2.BZip2Constants
 
MAX_OUTPUT_LENGTH - Static variable in class org.apache.hadoop.contrib.failmon.Environment
 
MAX_SELECTORS - Static variable in interface org.apache.hadoop.io.compress.bzip2.BZip2Constants
 
MAXIMUM_FP - Static variable in interface org.apache.hadoop.util.bloom.RemoveScheme
MaximumFP Selection.
maxNextCharInd - Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
MBeanUtil - Class in org.apache.hadoop.metrics.util
This util class provides a method to register an MBean using our standard naming convention, as described in the documentation for MBeanUtil.registerMBean(String, String, Object).
MBeanUtil() - Constructor for class org.apache.hadoop.metrics.util.MBeanUtil
 
MD5_LEN - Static variable in class org.apache.hadoop.io.MD5Hash
 
MD5_LEN - Static variable in class org.apache.hadoop.mapred.SequenceFileInputFilter.MD5Filter
 
MD5Hash - Class in org.apache.hadoop.io
A Writable for MD5 hash values.
MD5Hash() - Constructor for class org.apache.hadoop.io.MD5Hash
Constructs an MD5Hash.
MD5Hash(String) - Constructor for class org.apache.hadoop.io.MD5Hash
Constructs an MD5Hash from a hex string.
MD5Hash(byte[]) - Constructor for class org.apache.hadoop.io.MD5Hash
Constructs an MD5Hash with a specified value.
MD5Hash.Comparator - Class in org.apache.hadoop.io
A WritableComparator optimized for MD5Hash keys.
MD5Hash.Comparator() - Constructor for class org.apache.hadoop.io.MD5Hash.Comparator
 
MD5MD5CRC32FileChecksum - Class in org.apache.hadoop.fs
MD5 of MD5 of CRC32.
MD5MD5CRC32FileChecksum() - Constructor for class org.apache.hadoop.fs.MD5MD5CRC32FileChecksum
Same as this(0, 0, null)
MD5MD5CRC32FileChecksum(int, long, MD5Hash) - Constructor for class org.apache.hadoop.fs.MD5MD5CRC32FileChecksum
Create a MD5FileChecksum
membershipTest(Key) - Method in class org.apache.hadoop.util.bloom.BloomFilter
 
membershipTest(Key) - Method in class org.apache.hadoop.util.bloom.CountingBloomFilter
 
membershipTest(Key) - Method in class org.apache.hadoop.util.bloom.DynamicBloomFilter
 
membershipTest(Key) - Method in class org.apache.hadoop.util.bloom.Filter
Determines whether a specified key belongs to this filter.
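A minimal sketch of filter construction and membership testing; the vector size and hash count are illustrative, and a positive answer can be a false positive while a negative answer is definitive.

    import org.apache.hadoop.util.bloom.BloomFilter;
    import org.apache.hadoop.util.bloom.Key;
    import org.apache.hadoop.util.hash.Hash;

    public class BloomExample {
      public static void main(String[] args) {
        // 1024-bit vector, 4 hash functions, MurmurHash.
        BloomFilter filter = new BloomFilter(1024, 4, Hash.MURMUR_HASH);
        filter.add(new Key("alpha".getBytes()));
        System.out.println(filter.membershipTest(new Key("alpha".getBytes())));  // true
        System.out.println(filter.membershipTest(new Key("gamma".getBytes())));  // usually false
      }
    }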
MemoryCalculatorPlugin - Class in org.apache.hadoop.util
Plugin to calculate virtual and physical memories on the system.
MemoryCalculatorPlugin() - Constructor for class org.apache.hadoop.util.MemoryCalculatorPlugin
 
merge(List<SequenceFile.Sorter.SegmentDescriptor>, Path) - Method in class org.apache.hadoop.io.SequenceFile.Sorter
Merges the list of segments of type SegmentDescriptor
merge(Path[], boolean, Path) - Method in class org.apache.hadoop.io.SequenceFile.Sorter
Merges the contents of files passed in Path[] using a max factor value that is already set
merge(Path[], boolean, int, Path) - Method in class org.apache.hadoop.io.SequenceFile.Sorter
Merges the contents of files passed in Path[]
merge(Path[], Path, boolean) - Method in class org.apache.hadoop.io.SequenceFile.Sorter
Merges the contents of files passed in Path[]
merge(Path[], Path) - Method in class org.apache.hadoop.io.SequenceFile.Sorter
Merge the provided files.
merge(List, List, String) - Method in class org.apache.hadoop.streaming.JarBuilder
 
MergeSort - Class in org.apache.hadoop.util
An implementation of the core algorithm of MergeSort.
MergeSort(Comparator<IntWritable>) - Constructor for class org.apache.hadoop.util.MergeSort
 
mergeSort(int[], int[], int, int) - Method in class org.apache.hadoop.util.MergeSort
 
MetaBlockAlreadyExists - Exception in org.apache.hadoop.io.file.tfile
Exception - Meta Block with the same name already exists.
MetaBlockDoesNotExist - Exception in org.apache.hadoop.io.file.tfile
Exception - No such Meta Block with the given name.
MetricsBase - Class in org.apache.hadoop.metrics.util
This is the base class for all metrics.
MetricsBase(String) - Constructor for class org.apache.hadoop.metrics.util.MetricsBase
 
MetricsBase(String, String) - Constructor for class org.apache.hadoop.metrics.util.MetricsBase
 
MetricsContext - Interface in org.apache.hadoop.metrics
The main interface to the metrics package.
MetricsDynamicMBeanBase - Class in org.apache.hadoop.metrics.util
This abstract base class facilitates creating dynamic mbeans automatically from metrics.
MetricsDynamicMBeanBase(MetricsRegistry, String) - Constructor for class org.apache.hadoop.metrics.util.MetricsDynamicMBeanBase
 
MetricsException - Exception in org.apache.hadoop.metrics
General-purpose, unchecked metrics exception.
MetricsException() - Constructor for exception org.apache.hadoop.metrics.MetricsException
Creates a new instance of MetricsException
MetricsException(String) - Constructor for exception org.apache.hadoop.metrics.MetricsException
Creates a new instance of MetricsException
MetricsIntValue - Class in org.apache.hadoop.metrics.util
The MetricsIntValue class is for a metric that is not time varied but changes only when it is set.
MetricsIntValue(String, MetricsRegistry, String) - Constructor for class org.apache.hadoop.metrics.util.MetricsIntValue
Constructor - create a new metric
MetricsIntValue(String, MetricsRegistry) - Constructor for class org.apache.hadoop.metrics.util.MetricsIntValue
Constructor - create a new metric
MetricsLongValue - Class in org.apache.hadoop.metrics.util
The MetricsLongValue class is for a metric that is not time varied but changes only when it is set.
MetricsLongValue(String, MetricsRegistry, String) - Constructor for class org.apache.hadoop.metrics.util.MetricsLongValue
Constructor - create a new metric
MetricsLongValue(String, MetricsRegistry) - Constructor for class org.apache.hadoop.metrics.util.MetricsLongValue
Constructor - create a new metric
MetricsRecord - Interface in org.apache.hadoop.metrics
A named and optionally tagged set of records to be sent to the metrics system.
MetricsRecordImpl - Class in org.apache.hadoop.metrics.spi
An implementation of MetricsRecord.
MetricsRecordImpl(String, AbstractMetricsContext) - Constructor for class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Creates a new instance of FileRecord
MetricsRegistry - Class in org.apache.hadoop.metrics.util
This is the registry for metrics.
MetricsRegistry() - Constructor for class org.apache.hadoop.metrics.util.MetricsRegistry
 
MetricsTimeVaryingInt - Class in org.apache.hadoop.metrics.util
The MetricsTimeVaryingInt class is for a metric that naturally varies over time (e.g.
MetricsTimeVaryingInt(String, MetricsRegistry, String) - Constructor for class org.apache.hadoop.metrics.util.MetricsTimeVaryingInt
Constructor - create a new metric
MetricsTimeVaryingInt(String, MetricsRegistry) - Constructor for class org.apache.hadoop.metrics.util.MetricsTimeVaryingInt
Constructor - create a new metric
MetricsTimeVaryingLong - Class in org.apache.hadoop.metrics.util
The MetricsTimeVaryingLong class is for a metric that naturally varies over time (e.g.
MetricsTimeVaryingLong(String, MetricsRegistry, String) - Constructor for class org.apache.hadoop.metrics.util.MetricsTimeVaryingLong
Constructor - create a new metric
MetricsTimeVaryingLong(String, MetricsRegistry) - Constructor for class org.apache.hadoop.metrics.util.MetricsTimeVaryingLong
Constructor - create a new metric
MetricsTimeVaryingRate - Class in org.apache.hadoop.metrics.util
The MetricsTimeVaryingRate class is for a rate based metric that naturally varies over time (e.g.
MetricsTimeVaryingRate(String, MetricsRegistry, String) - Constructor for class org.apache.hadoop.metrics.util.MetricsTimeVaryingRate
Constructor - create a new metric
MetricsTimeVaryingRate(String, MetricsRegistry) - Constructor for class org.apache.hadoop.metrics.util.MetricsTimeVaryingRate
Constructor - create a new metric
MetricsUtil - Class in org.apache.hadoop.metrics
Utility class to simplify creation and reporting of hadoop metrics.
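A minimal sketch of creating and updating a metrics record through this utility; the context name "example" and record name "requests" are made up.

    import org.apache.hadoop.metrics.MetricsContext;
    import org.apache.hadoop.metrics.MetricsRecord;
    import org.apache.hadoop.metrics.MetricsUtil;

    public class MetricsExample {
      public static void main(String[] args) {
        MetricsContext context = MetricsUtil.getContext("example");
        MetricsRecord record = MetricsUtil.createRecord(context, "requests");
        record.setMetric("count", 1);
        record.update();  // queue the values for the context's next emission
      }
    }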
MetricValue - Class in org.apache.hadoop.metrics.spi
A Number that is either an absolute or an incremental amount.
MetricValue(Number, boolean) - Constructor for class org.apache.hadoop.metrics.spi.MetricValue
Creates a new instance of MetricValue
midKey() - Method in class org.apache.hadoop.io.MapFile.Reader
Get the key at approximately the middle of the file.
MigrationTool - Class in org.apache.hadoop.fs.s3
This class is a tool for migrating data from an older to a newer version of an S3 filesystem.
MigrationTool() - Constructor for class org.apache.hadoop.fs.s3.MigrationTool
 
MIN_BLOCKSIZE - Static variable in class org.apache.hadoop.io.compress.bzip2.CBZip2OutputStream
The minimum supported blocksize == 1.
MIN_INTERVAL - Static variable in class org.apache.hadoop.contrib.failmon.Environment
 
MIN_INTERVAL - Static variable in class org.apache.hadoop.contrib.failmon.Executor
 
MINIMUM_FN - Static variable in interface org.apache.hadoop.util.bloom.RemoveScheme
MinimumFN Selection.
minRecWrittenToEnableSkip_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
mkdirs(Path) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
 
mkdirs(FileSystem, Path, FsPermission) - Static method in class org.apache.hadoop.fs.FileSystem
Create a directory with the provided permission. The permission of the directory is set to the provided permission as in setPermission, not permission&~umask.
mkdirs(Path) - Method in class org.apache.hadoop.fs.FileSystem
Call FileSystem.mkdirs(Path, FsPermission) with default permission.
mkdirs(Path, FsPermission) - Method in class org.apache.hadoop.fs.FileSystem
Make the given file and all non-existent parents into directories.
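A short sketch of creating a directory tree with an explicit permission; the path is illustrative.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.permission.FsPermission;

    public class MkdirsExample {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // Creates any missing parent directories as well; returns true on success.
        boolean ok = fs.mkdirs(new Path("/tmp/example/output"),
                               new FsPermission((short) 0755));
        System.out.println(ok);
      }
    }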
mkdirs(Path, FsPermission) - Method in class org.apache.hadoop.fs.FilterFileSystem
Make the given file and all non-existent parents into directories.
mkdirs(Path, FsPermission) - Method in class org.apache.hadoop.fs.ftp.FTPFileSystem
 
mkdirs(Path, FsPermission) - Method in class org.apache.hadoop.fs.HarFileSystem
not implemented.
mkdirs(Path, FsPermission) - Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
 
mkdirs(Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
Creates the specified directory hierarchy.
mkdirs(Path, FsPermission) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
Make the given file and all non-existent parents into directories.
mkdirs(Path, FsPermission) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
mkdirs(Path, FsPermission) - Method in class org.apache.hadoop.fs.s3native.NativeS3FileSystem
 
mkdirsWithExistsCheck(File) - Static method in class org.apache.hadoop.util.DiskChecker
The semantics of the mkdirsWithExistsCheck method differ from those of the mkdirs method provided in Sun's java.io.File class in the following way: while creating the non-existent parent directories, this method checks for the existence of those directories if the mkdir fails at any point (since that directory might have just been created by some other process).
modifFmt - Static variable in class org.apache.hadoop.fs.FsShell
 
Module() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
MODULE_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
ModuleName() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
monitor() - Method in class org.apache.hadoop.contrib.failmon.CPUParser
Invokes query() to do the parsing and handles parsing errors.
monitor(LocalStore) - Method in class org.apache.hadoop.contrib.failmon.LogParser
Insert all EventRecords that can be extracted for the represented hardware component into a LocalStore.
monitor() - Method in class org.apache.hadoop.contrib.failmon.LogParser
Get an array of all EventRecords that can be extracted for the represented hardware component.
monitor() - Method in interface org.apache.hadoop.contrib.failmon.Monitored
Get an array of all EventRecords that can be extracted for the represented hardware component.
monitor(LocalStore) - Method in interface org.apache.hadoop.contrib.failmon.Monitored
Inserts all EventRecords that can be extracted for the represented hardware component into a LocalStore.
monitor() - Method in class org.apache.hadoop.contrib.failmon.NICParser
Invokes query() to do the parsing and handles parsing errors for each one of the NICs specified in the configuration.
monitor() - Method in class org.apache.hadoop.contrib.failmon.SensorsParser
Invokes query() to do the parsing and handles parsing errors.
monitor(LocalStore) - Method in class org.apache.hadoop.contrib.failmon.ShellParser
Insert all EventRecords that can be extracted for the represented hardware component into a LocalStore.
monitor() - Method in class org.apache.hadoop.contrib.failmon.ShellParser
 
monitor() - Method in class org.apache.hadoop.contrib.failmon.SMARTParser
Invokes query() to do the parsing and handles parsing errors for each one of the disks specified in the configuration.
monitorAndPrintJob(JobConf, RunningJob) - Method in class org.apache.hadoop.mapred.JobClient
Monitor a job and print status in real-time as progress is made and tasks fail.
Monitored - Interface in org.apache.hadoop.contrib.failmon
Represents objects that monitor specific hardware resources and can query them to get EventRecords describing the state of these resources.
MonitorJob - Class in org.apache.hadoop.contrib.failmon
This class is a wrapper for a monitoring job.
MonitorJob(Monitored, String, int) - Constructor for class org.apache.hadoop.contrib.failmon.MonitorJob
 
moveFromLocalFile(Path[], Path) - Method in class org.apache.hadoop.fs.FileSystem
The src files are on the local disk.
moveFromLocalFile(Path, Path) - Method in class org.apache.hadoop.fs.FileSystem
The src file is on the local disk.
moveFromLocalFile(Path, Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
moveToLocalFile(Path, Path) - Method in class org.apache.hadoop.fs.FileSystem
The src file is under FS, and the dst is on the local disk.
moveToTrash(Path) - Method in class org.apache.hadoop.fs.Trash
Move a file or directory to the current trash directory.
MR_CLIENTTRACE_FORMAT - Static variable in class org.apache.hadoop.mapred.TaskTracker
 
MRAdmin - Class in org.apache.hadoop.mapred.tools
Administrative access to Hadoop Map-Reduce.
MRAdmin() - Constructor for class org.apache.hadoop.mapred.tools.MRAdmin
 
MRAdmin(Configuration) - Constructor for class org.apache.hadoop.mapred.tools.MRAdmin
 
msg(String) - Method in class org.apache.hadoop.streaming.StreamJob
 
MultiFileInputFormat<K,V> - Class in org.apache.hadoop.mapred
Deprecated. Use CombineFileInputFormat instead
MultiFileInputFormat() - Constructor for class org.apache.hadoop.mapred.MultiFileInputFormat
Deprecated.  
MultiFileSplit - Class in org.apache.hadoop.mapred
Deprecated. Use CombineFileSplit instead
MultiFileSplit(JobConf, Path[], long[]) - Constructor for class org.apache.hadoop.mapred.MultiFileSplit
Deprecated.  
MultiFileWordCount - Class in org.apache.hadoop.examples
MultiFileWordCount is an example to demonstrate the usage of MultiFileInputFormat.
MultiFileWordCount() - Constructor for class org.apache.hadoop.examples.MultiFileWordCount
 
MultiFileWordCount.MapClass - Class in org.apache.hadoop.examples
This Mapper is similar to the one in WordCount.MapClass.
MultiFileWordCount.MapClass() - Constructor for class org.apache.hadoop.examples.MultiFileWordCount.MapClass
 
MultiFileWordCount.MultiFileLineRecordReader - Class in org.apache.hadoop.examples
RecordReader is responsible for extracting records from the InputSplit.
MultiFileWordCount.MultiFileLineRecordReader(Configuration, MultiFileSplit) - Constructor for class org.apache.hadoop.examples.MultiFileWordCount.MultiFileLineRecordReader
 
MultiFileWordCount.MyInputFormat - Class in org.apache.hadoop.examples
To use MultiFileInputFormat, one should extend it to return a (custom) RecordReader.
MultiFileWordCount.MyInputFormat() - Constructor for class org.apache.hadoop.examples.MultiFileWordCount.MyInputFormat
 
MultiFileWordCount.WordOffset - Class in org.apache.hadoop.examples
This record keeps <filename,offset> pairs.
MultiFileWordCount.WordOffset() - Constructor for class org.apache.hadoop.examples.MultiFileWordCount.WordOffset
 
MultiFilterRecordReader<K extends WritableComparable,V extends Writable> - Class in org.apache.hadoop.mapred.join
Base class for Composite join returning values derived from multiple sources, but generally not tuples.
MultiFilterRecordReader(int, JobConf, int, Class<? extends WritableComparator>) - Constructor for class org.apache.hadoop.mapred.join.MultiFilterRecordReader
 
MultiFilterRecordReader.MultiFilterDelegationIterator - Class in org.apache.hadoop.mapred.join
Proxy the JoinCollector, but include callback to emit.
MultiFilterRecordReader.MultiFilterDelegationIterator() - Constructor for class org.apache.hadoop.mapred.join.MultiFilterRecordReader.MultiFilterDelegationIterator
 
MultipleInputs - Class in org.apache.hadoop.mapred.lib
This class supports MapReduce jobs that have multiple input paths with a different InputFormat and Mapper for each path
MultipleInputs() - Constructor for class org.apache.hadoop.mapred.lib.MultipleInputs
 
MultipleIOException - Exception in org.apache.hadoop.io
Encapsulate a list of IOException into an IOException
MultipleOutputFormat<K,V> - Class in org.apache.hadoop.mapred.lib
This abstract class extends the FileOutputFormat, allowing the output data to be written to different output files.
MultipleOutputFormat() - Constructor for class org.apache.hadoop.mapred.lib.MultipleOutputFormat
 
MultipleOutputs - Class in org.apache.hadoop.mapred.lib
The MultipleOutputs class simplifies writing to additional outputs other than the job default output via the OutputCollector passed to the map() and reduce() methods of the Mapper and Reducer implementations.
MultipleOutputs(JobConf) - Constructor for class org.apache.hadoop.mapred.lib.MultipleOutputs
Creates and initializes multiple named outputs support; it should be instantiated in the Mapper/Reducer configure method.
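A sketch of the old-API usage: a named output is declared at job-setup time and then written to from a task; the output name "audit" is illustrative.

    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.TextOutputFormat;
    import org.apache.hadoop.mapred.lib.MultipleOutputs;

    public class NamedOutputSetup {
      public static void configure(JobConf job) {
        // Declare an additional output alongside the job's default output.
        MultipleOutputs.addNamedOutput(job, "audit",
            TextOutputFormat.class, Text.class, Text.class);
      }
      // Inside a Mapper/Reducer configured with this JobConf:
      //   MultipleOutputs mos = new MultipleOutputs(job);   // in configure()
      //   mos.getCollector("audit", reporter).collect(key, value);
      //   mos.close();                                      // in close()
    }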
MultipleSequenceFileOutputFormat<K,V> - Class in org.apache.hadoop.mapred.lib
This class extends the MultipleOutputFormat, allowing the output data to be written to different output files in sequence file output format.
MultipleSequenceFileOutputFormat() - Constructor for class org.apache.hadoop.mapred.lib.MultipleSequenceFileOutputFormat
 
MultipleTextOutputFormat<K,V> - Class in org.apache.hadoop.mapred.lib
This class extends the MultipleOutputFormat, allowing the output data to be written to different output files in Text output format.
MultipleTextOutputFormat() - Constructor for class org.apache.hadoop.mapred.lib.MultipleTextOutputFormat
 
MultithreadedMapper<K1,V1,K2,V2> - Class in org.apache.hadoop.mapreduce.lib.map
Multithreaded implementation of org.apache.hadoop.mapreduce.Mapper.
MultithreadedMapper() - Constructor for class org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper
 
MultithreadedMapRunner<K1,V1,K2,V2> - Class in org.apache.hadoop.mapred.lib
Multithreaded implementation of org.apache.hadoop.mapred.MapRunnable.
MultithreadedMapRunner() - Constructor for class org.apache.hadoop.mapred.lib.MultithreadedMapRunner
 
MURMUR_HASH - Static variable in class org.apache.hadoop.util.hash.Hash
Constant to denote MurmurHash.
MurmurHash - Class in org.apache.hadoop.util.hash
This is a very fast, non-cryptographic hash suitable for general hash-based lookup.
MurmurHash() - Constructor for class org.apache.hadoop.util.hash.MurmurHash
 

N

N_GROUPS - Static variable in interface org.apache.hadoop.io.compress.bzip2.BZip2Constants
 
N_ITERS - Static variable in interface org.apache.hadoop.io.compress.bzip2.BZip2Constants
 
NAME - Static variable in class org.apache.hadoop.fs.shell.Count
 
name - Variable in class org.apache.hadoop.net.NodeBase
 
NativeCodeLoader - Class in org.apache.hadoop.util
A helper to load the native hadoop code i.e.
NativeCodeLoader() - Constructor for class org.apache.hadoop.util.NativeCodeLoader
 
NativeS3FileSystem - Class in org.apache.hadoop.fs.s3native
A FileSystem for reading and writing files stored on Amazon S3.
NativeS3FileSystem() - Constructor for class org.apache.hadoop.fs.s3native.NativeS3FileSystem
 
NativeS3FileSystem(NativeFileSystemStore) - Constructor for class org.apache.hadoop.fs.s3native.NativeS3FileSystem
 
nbHash - Variable in class org.apache.hadoop.util.bloom.Filter
The number of hash functions to consider.
needChecksum() - Method in class org.apache.hadoop.fs.FSInputChecker
Return true if there is a need for checksum verification
needsDictionary() - Method in class org.apache.hadoop.io.compress.bzip2.BZip2DummyDecompressor
 
needsDictionary() - Method in interface org.apache.hadoop.io.compress.Decompressor
Returns true if a preset dictionary is needed for decompression.
needsDictionary() - Method in class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
 
needsInput() - Method in class org.apache.hadoop.io.compress.bzip2.BZip2DummyCompressor
 
needsInput() - Method in class org.apache.hadoop.io.compress.bzip2.BZip2DummyDecompressor
 
needsInput() - Method in interface org.apache.hadoop.io.compress.Compressor
Returns true if the input data buffer is empty and setInput() should be called to provide more input.
needsInput() - Method in interface org.apache.hadoop.io.compress.Decompressor
Returns true if the input data buffer is empty and setInput() should be called to provide more input.
needsInput() - Method in class org.apache.hadoop.io.compress.zlib.ZlibCompressor
 
needsInput() - Method in class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
 
needsTaskCommit(TaskAttemptContext) - Method in class org.apache.hadoop.mapred.FileOutputCommitter
 
needsTaskCommit(TaskAttemptContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
Deprecated. Check whether task needs a commit
needsTaskCommit(TaskAttemptContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
Deprecated. This method implements the new interface by calling the old method.
needsTaskCommit(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
Did this task write any files in the work directory?
needsTaskCommit(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.OutputCommitter
Check whether task needs a commit
NetUtils - Class in org.apache.hadoop.net
 
NetUtils() - Constructor for class org.apache.hadoop.net.NetUtils
 
NetworkTopology - Class in org.apache.hadoop.net
The class represents a cluster of computers with a tree hierarchical network topology.
NetworkTopology() - Constructor for class org.apache.hadoop.net.NetworkTopology
 
newDataChecksum(int, int) - Static method in class org.apache.hadoop.util.DataChecksum
 
newDataChecksum(byte[], int) - Static method in class org.apache.hadoop.util.DataChecksum
Creates a DataChecksum from HEADER_LEN bytes from arr[offset].
newDataChecksum(DataInputStream) - Static method in class org.apache.hadoop.util.DataChecksum
Constructs a DataChecksum by reading HEADER_LEN bytes from the input stream in.
newInstance(Class<? extends Writable>, Configuration) - Static method in class org.apache.hadoop.io.WritableFactories
Create a new instance of a class with a defined factory.
newInstance(Class<? extends Writable>) - Static method in class org.apache.hadoop.io.WritableFactories
Create a new instance of a class with a defined factory.
newInstance() - Method in interface org.apache.hadoop.io.WritableFactory
Return a new instance.
newInstance(Class<T>, Configuration) - Static method in class org.apache.hadoop.util.ReflectionUtils
Create an object for the given class and initialize it from conf
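A minimal sketch of reflective instantiation from a configuration; the property name "example.text.impl" is made up. newInstance also passes the Configuration to the new object when it implements Configurable.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.util.ReflectionUtils;

    public class NewInstanceExample {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Resolve a class from the configuration, falling back to Text.
        Class<? extends Text> clazz =
            conf.getClass("example.text.impl", Text.class, Text.class);
        Text t = ReflectionUtils.newInstance(clazz, conf);
        System.out.println(t.getClass().getName());
      }
    }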
newKey() - Method in class org.apache.hadoop.io.WritableComparator
Construct a new WritableComparable instance.
newRecord(String) - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Subclasses should override this if they subclass MetricsRecordImpl.
newRecord(String) - Method in class org.apache.hadoop.metrics.spi.CompositeContext
 
newToken(int) - Static method in class org.apache.hadoop.record.compiler.generated.Token
Returns a new Token object, by default.
next(DocumentID, LineDocTextAndOp) - Method in class org.apache.hadoop.contrib.index.example.LineDocRecordReader
 
next() - Method in class org.apache.hadoop.contrib.utils.join.ArrayListBackedIterator
 
next(MultiFileWordCount.WordOffset, Text) - Method in class org.apache.hadoop.examples.MultiFileWordCount.MultiFileLineRecordReader
 
next(Writable) - Method in class org.apache.hadoop.io.ArrayFile.Reader
Read and return the next value in the file.
next(WritableComparable, Writable) - Method in class org.apache.hadoop.io.MapFile.Reader
Read the next key/value pair in the map into key and val.
next(Writable) - Method in class org.apache.hadoop.io.SequenceFile.Reader
Read the next key in the file into key, skipping its value.
next(Writable, Writable) - Method in class org.apache.hadoop.io.SequenceFile.Reader
Read the next key/value pair in the file into key and val.
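A minimal scan sketch using the reusable key/value objects; the path "/tmp/example.seq" is illustrative and the key/value classes must match the file's metadata.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    public class SeqFileScan {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.getLocal(conf);
        SequenceFile.Reader reader =
            new SequenceFile.Reader(fs, new Path("/tmp/example.seq"), conf);
        Text key = new Text();
        IntWritable value = new IntWritable();
        while (reader.next(key, value)) {   // returns false at end of file
          System.out.println(key + "\t" + value);
        }
        reader.close();
      }
    }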
next(DataOutputBuffer) - Method in class org.apache.hadoop.io.SequenceFile.Reader
Deprecated. Call SequenceFile.Reader.nextRaw(DataOutputBuffer,SequenceFile.ValueBytes).
next(Object) - Method in class org.apache.hadoop.io.SequenceFile.Reader
Read the next key in the file, skipping its value.
next() - Method in interface org.apache.hadoop.io.SequenceFile.Sorter.RawKeyValueIterator
Sets up the current key and value (for getKey and getValue)
next(WritableComparable) - Method in class org.apache.hadoop.io.SetFile.Reader
Read the next key in a set into key.
next(X) - Method in class org.apache.hadoop.mapred.join.ArrayListBackedIterator
 
next(TupleWritable) - Method in class org.apache.hadoop.mapred.join.JoinRecordReader.JoinDelegationIterator
 
next(K, TupleWritable) - Method in class org.apache.hadoop.mapred.join.JoinRecordReader
Emit the next set of key, value pairs as defined by the child RecordReaders and operation associated with this composite RR.
next(V) - Method in class org.apache.hadoop.mapred.join.MultiFilterRecordReader.MultiFilterDelegationIterator
 
next(K, V) - Method in class org.apache.hadoop.mapred.join.MultiFilterRecordReader
Reads the next key/value pair from the input for processing.
next(U) - Method in class org.apache.hadoop.mapred.join.ResetableIterator.EMPTY
 
next(T) - Method in interface org.apache.hadoop.mapred.join.ResetableIterator
Assign next value to actual.
next(X) - Method in class org.apache.hadoop.mapred.join.StreamBackedIterator
 
next() - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
Read the next k,v pair into the head of this object; return true iff the RR and this are exhausted.
next(K, U) - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
Write key-value pair at the head of this stream to the objects provided; get next key-value pair from proxied RR.
next(Text, Text) - Method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
Read key/value pair in a line.
next(K, V) - Method in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
 
next(LongWritable, T) - Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat.DBRecordReader
Reads the next key/value pair from the input for processing.
next(LongWritable, Text) - Method in class org.apache.hadoop.mapred.LineRecordReader
Deprecated. Read a line.
next() - Method in interface org.apache.hadoop.mapred.RawKeyValueIterator
Sets up the current key and value (for getKey and getValue).
next(K, V) - Method in interface org.apache.hadoop.mapred.RecordReader
Reads the next key/value pair from the input for processing.
next(BytesWritable, BytesWritable) - Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
Read raw bytes from a SequenceFile.
next(Text, Text) - Method in class org.apache.hadoop.mapred.SequenceFileAsTextRecordReader
Read key/value pair in a line.
next(K, V) - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
 
next(K) - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
 
next() - Method in class org.apache.hadoop.mapreduce.ReduceContext.ValueIterator
 
next - Variable in class org.apache.hadoop.record.compiler.generated.Token
A reference to the next regular (non-special) token from the input stream.
next(Text, Text) - Method in class org.apache.hadoop.streaming.StreamBaseRecordReader
Read a record.
next(Text, Text) - Method in class org.apache.hadoop.streaming.StreamXmlRecordReader
 
nextKey() - Method in class org.apache.hadoop.mapreduce.ReduceContext
Start processing next unique key.
nextKeyValue() - Method in class org.apache.hadoop.mapreduce.lib.input.LineRecordReader
 
nextKeyValue() - Method in class org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader
 
nextKeyValue() - Method in class org.apache.hadoop.mapreduce.MapContext
 
nextKeyValue() - Method in class org.apache.hadoop.mapreduce.RecordReader
Read the next key, value pair.
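
The new-API counterpart pairs nextKeyValue() with getCurrentKey()/getCurrentValue() rather than filling caller-supplied objects. A small sketch of that loop, assuming an already-initialized reader of LongWritable/Text records (the type parameters and the method name drain are illustrative; normally the framework drives this loop):

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.RecordReader;

    public class ReaderLoop {
      // Drains an already-initialized reader.
      static void drain(RecordReader<LongWritable, Text> reader)
          throws IOException, InterruptedException {
        while (reader.nextKeyValue()) {
          LongWritable key = reader.getCurrentKey();
          Text value = reader.getCurrentValue();
          // process (key, value) here
        }
        reader.close();
      }
    }
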
nextKeyValue() - Method in class org.apache.hadoop.mapreduce.ReduceContext
Advance to the next key/value pair.
nextKeyValue() - Method in class org.apache.hadoop.mapreduce.TaskInputOutputContext
Advance to the next key, value pair, returning false if at end.
nextRaw(DataOutputBuffer, SequenceFile.ValueBytes) - Method in class org.apache.hadoop.io.SequenceFile.Reader
Read 'raw' records.
nextRawKey(DataOutputBuffer) - Method in class org.apache.hadoop.io.SequenceFile.Reader
Read 'raw' keys.
nextRawKey() - Method in class org.apache.hadoop.io.SequenceFile.Sorter.SegmentDescriptor
Fills up the rawKey object with the key returned by the Reader
nextRawValue(SequenceFile.ValueBytes) - Method in class org.apache.hadoop.io.SequenceFile.Reader
Read 'raw' values.
nextRawValue(SequenceFile.ValueBytes) - Method in class org.apache.hadoop.io.SequenceFile.Sorter.SegmentDescriptor
Fills up the passed rawValue with the value corresponding to the key read earlier
NICParser - Class in org.apache.hadoop.contrib.failmon
Objects of this class parse the output of ifconfig to gather information about the Network Interface Cards present in the system.
NICParser() - Constructor for class org.apache.hadoop.contrib.failmon.NICParser
Constructs a NICParser and reads the list of NICs to query
NLineInputFormat - Class in org.apache.hadoop.mapred.lib
NLineInputFormat which splits N lines of input as one split.
NLineInputFormat() - Constructor for class org.apache.hadoop.mapred.lib.NLineInputFormat
 
NO_DESCRIPTION - Static variable in class org.apache.hadoop.metrics.util.MetricsBase
 
Node - Interface in org.apache.hadoop.net
The interface defines a node in a network topology.
NodeBase - Class in org.apache.hadoop.net
A base class that implements interface Node
NodeBase() - Constructor for class org.apache.hadoop.net.NodeBase
Default constructor
NodeBase(String) - Constructor for class org.apache.hadoop.net.NodeBase
Construct a node from its path
NodeBase(String, String) - Constructor for class org.apache.hadoop.net.NodeBase
Construct a node from its name and its location
NodeBase(String, String, Node, int) - Constructor for class org.apache.hadoop.net.NodeBase
Construct a node from its name and its location
normalize(String) - Static method in class org.apache.hadoop.net.NodeBase
Normalize a path
normalizeHostName(String) - Static method in class org.apache.hadoop.net.NetUtils
Given a string representation of a host, return its IP address in textual representation.
normalizeHostNames(Collection<String>) - Static method in class org.apache.hadoop.net.NetUtils
Given a collection of string representations of hosts, return a list of the corresponding IP addresses in textual representation.
normalizeMemoryConfigValue(long) - Static method in class org.apache.hadoop.mapred.JobConf
Deprecated. Normalize the negative values in configuration
normalizePath(String) - Static method in class org.apache.hadoop.contrib.index.mapred.Shard
 
not() - Method in enum org.apache.hadoop.fs.permission.FsAction
NOT operation.
not() - Method in class org.apache.hadoop.util.bloom.BloomFilter
 
not() - Method in class org.apache.hadoop.util.bloom.CountingBloomFilter
 
not() - Method in class org.apache.hadoop.util.bloom.DynamicBloomFilter
 
not() - Method in class org.apache.hadoop.util.bloom.Filter
Performs a logical NOT on this filter.
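
The not()/or()/and() operations above combine filters bit-wise, while membershipTest(Key) is the usual query. A small sketch with org.apache.hadoop.util.bloom.BloomFilter, where the vector size and hash count are illustrative rather than tuned values:

    import org.apache.hadoop.util.bloom.BloomFilter;
    import org.apache.hadoop.util.bloom.Key;
    import org.apache.hadoop.util.hash.Hash;

    public class BloomSketch {
      public static void main(String[] args) {
        // 2^16 bits and 4 hash functions: illustrative sizing only
        BloomFilter filter = new BloomFilter(1 << 16, 4, Hash.MURMUR_HASH);
        filter.add(new Key("alpha".getBytes()));
        filter.add(new Key("beta".getBytes()));

        System.out.println(filter.membershipTest(new Key("alpha".getBytes())));  // true
        System.out.println(filter.membershipTest(new Key("gamma".getBytes())));  // probably false

        filter.not();   // flips every bit of the filter in place
      }
    }
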
NULL - Static variable in interface org.apache.hadoop.mapred.Reporter
A constant of Reporter type that does nothing.
NullContext - Class in org.apache.hadoop.metrics.spi
Null metrics context: a metrics context which does nothing.
NullContext() - Constructor for class org.apache.hadoop.metrics.spi.NullContext
Creates a new instance of NullContext
NullContextWithUpdateThread - Class in org.apache.hadoop.metrics.spi
A null context which runs a periodic update thread once monitoring is started.
NullContextWithUpdateThread() - Constructor for class org.apache.hadoop.metrics.spi.NullContextWithUpdateThread
Creates a new instance of NullContextWithUpdateThread
NullOutputFormat<K,V> - Class in org.apache.hadoop.mapred.lib
Deprecated. Use NullOutputFormat instead.
NullOutputFormat() - Constructor for class org.apache.hadoop.mapred.lib.NullOutputFormat
Deprecated.  
NullOutputFormat<K,V> - Class in org.apache.hadoop.mapreduce.lib.output
Consume all outputs and put them in /dev/null.
NullOutputFormat() - Constructor for class org.apache.hadoop.mapreduce.lib.output.NullOutputFormat
 
NullWritable - Class in org.apache.hadoop.io
Singleton Writable with no data.
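
Because NullWritable is a singleton, it is obtained with get() rather than constructed, and it typically stands in for the half of a key/value pair that carries no information. A hedged sketch (the collector and method name are illustrative):

    import java.io.IOException;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.OutputCollector;

    public class NullKeyEmit {
      // Emits value-only records; the collector is assumed to come from a mapper.
      static void emit(OutputCollector<NullWritable, Text> output, String line)
          throws IOException {
        output.collect(NullWritable.get(), new Text(line));
      }
    }
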
NullWritable.Comparator - Class in org.apache.hadoop.io
A Comparator "optimized" for NullWritable.
NullWritable.Comparator() - Constructor for class org.apache.hadoop.io.NullWritable.Comparator
 
NUM_OF_VALUES_FIELD - Static variable in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
 
NUM_OVERSHOOT_BYTES - Static variable in interface org.apache.hadoop.io.compress.bzip2.BZip2Constants
 
numOfValues - Variable in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
 
numOpenConnections - Variable in class org.apache.hadoop.ipc.metrics.RpcMetrics
 
numReduceTasksSpec_ - Variable in class org.apache.hadoop.streaming.StreamJob
 

O

ObjectWritable - Class in org.apache.hadoop.io
A polymorphic Writable that writes an instance with its class name.
ObjectWritable() - Constructor for class org.apache.hadoop.io.ObjectWritable
 
ObjectWritable(Object) - Constructor for class org.apache.hadoop.io.ObjectWritable
 
ObjectWritable(Class, Object) - Constructor for class org.apache.hadoop.io.ObjectWritable
 
offerService() - Method in class org.apache.hadoop.mapred.JobTracker
Run forever
OfflineAnonymizer - Class in org.apache.hadoop.contrib.failmon
This class can be used to anonymize logs independently of Hadoop and the Executor.
OfflineAnonymizer(OfflineAnonymizer.LogType, String) - Constructor for class org.apache.hadoop.contrib.failmon.OfflineAnonymizer
Creates an OfflineAnonymizer for a specific log file.
OfflineAnonymizer.LogType - Enum in org.apache.hadoop.contrib.failmon
 
offset() - Method in class org.apache.hadoop.io.file.tfile.ByteArray
 
offset() - Method in interface org.apache.hadoop.io.file.tfile.RawComparable
Get the offset of the first byte in the byte array.
ONE - Static variable in interface org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorDescriptor
 
oneRotation - Static variable in class org.apache.hadoop.examples.dancing.Pentomino
Is the piece fixed under rotation?
OneSidedPentomino - Class in org.apache.hadoop.examples.dancing
Of the "normal" 12 pentominos, 6 of them have distinct shapes when flipped.
OneSidedPentomino() - Constructor for class org.apache.hadoop.examples.dancing.OneSidedPentomino
 
OneSidedPentomino(int, int) - Constructor for class org.apache.hadoop.examples.dancing.OneSidedPentomino
 
open(Path, int) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
Opens an FSDataInputStream at the indicated Path.
open(Path, int) - Method in class org.apache.hadoop.fs.FileSystem
Opens an FSDataInputStream at the indicated Path.
open(Path) - Method in class org.apache.hadoop.fs.FileSystem
Opens an FSDataInputStream at the indicated Path.
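
All of the open(Path, int) overloads return an FSDataInputStream, which also supports the positional reads described under PositionedReadable later in this index. A minimal sketch, assuming the file path comes from the command line and the 4096-byte buffer size is arbitrary:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class OpenSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);            // configured default filesystem
        FSDataInputStream in = fs.open(new Path(args[0]), 4096);
        try {
          byte[] buf = new byte[128];
          int n = in.read(0L, buf, 0, buf.length);       // positional read; stream position unchanged
          System.out.println("read " + n + " bytes from offset 0");
        } finally {
          in.close();
        }
      }
    }
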
open(Path, int) - Method in class org.apache.hadoop.fs.FilterFileSystem
Opens an FSDataInputStream at the indicated Path.
open(Path, int) - Method in class org.apache.hadoop.fs.ftp.FTPFileSystem
 
open(Path, int) - Method in class org.apache.hadoop.fs.HarFileSystem
Returns a har input stream which fakes end of file.
open(Path, int) - Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
 
open(Path, int) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
open(Path, int) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
open(Path, int) - Method in class org.apache.hadoop.fs.s3native.NativeS3FileSystem
 
open(FileSystem, String, WritableComparator, Configuration) - Method in class org.apache.hadoop.io.MapFile.Reader
 
open(InputStream) - Method in interface org.apache.hadoop.io.serializer.Deserializer
Prepare the deserializer for reading.
open(OutputStream) - Method in interface org.apache.hadoop.io.serializer.Serializer
Prepare the serializer for writing.
openFile(FileSystem, Path, int, long) - Method in class org.apache.hadoop.io.SequenceFile.Reader
Override this method to specialize the type of FSDataInputStream returned.
openInput(String) - Method in class org.apache.hadoop.contrib.index.lucene.FileSystemDirectory
 
openInput(String, int) - Method in class org.apache.hadoop.contrib.index.lucene.FileSystemDirectory
 
or(FsAction) - Method in enum org.apache.hadoop.fs.permission.FsAction
OR operation.
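
FsAction values compose bit-wise, and three of them build an FsPermission. A small sketch (the chosen actions are illustrative):

    import org.apache.hadoop.fs.permission.FsAction;
    import org.apache.hadoop.fs.permission.FsPermission;

    public class PermSketch {
      public static void main(String[] args) {
        FsAction rw = FsAction.READ.or(FsAction.WRITE);      // READ_WRITE
        FsAction none = rw.and(rw.not());                    // NONE
        FsPermission perm = new FsPermission(rw, FsAction.READ, none);
        System.out.println(perm);                            // roughly "rw-r-----"
      }
    }
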
or(Filter) - Method in class org.apache.hadoop.util.bloom.BloomFilter
 
or(Filter) - Method in class org.apache.hadoop.util.bloom.CountingBloomFilter
 
or(Filter) - Method in class org.apache.hadoop.util.bloom.DynamicBloomFilter
 
or(Filter) - Method in class org.apache.hadoop.util.bloom.Filter
Performs a logical OR between this filter and a specified filter.
org.apache.hadoop - package org.apache.hadoop
 
org.apache.hadoop.conf - package org.apache.hadoop.conf
Configuration of system parameters.
org.apache.hadoop.contrib.failmon - package org.apache.hadoop.contrib.failmon
 
org.apache.hadoop.contrib.index.example - package org.apache.hadoop.contrib.index.example
 
org.apache.hadoop.contrib.index.lucene - package org.apache.hadoop.contrib.index.lucene
 
org.apache.hadoop.contrib.index.main - package org.apache.hadoop.contrib.index.main
 
org.apache.hadoop.contrib.index.mapred - package org.apache.hadoop.contrib.index.mapred
 
org.apache.hadoop.contrib.utils.join - package org.apache.hadoop.contrib.utils.join
 
org.apache.hadoop.examples - package org.apache.hadoop.examples
Hadoop example code.
org.apache.hadoop.examples.dancing - package org.apache.hadoop.examples.dancing
This package is a distributed implementation of Knuth's dancing links algorithm that can run under Hadoop.
org.apache.hadoop.examples.terasort - package org.apache.hadoop.examples.terasort
This package consists of 3 map/reduce applications for Hadoop to compete in the annual terabyte sort competition.
org.apache.hadoop.filecache - package org.apache.hadoop.filecache
 
org.apache.hadoop.fs - package org.apache.hadoop.fs
An abstract file system API.
org.apache.hadoop.fs.ftp - package org.apache.hadoop.fs.ftp
 
org.apache.hadoop.fs.kfs - package org.apache.hadoop.fs.kfs
A client for the Kosmos filesystem (KFS)
org.apache.hadoop.fs.permission - package org.apache.hadoop.fs.permission
 
org.apache.hadoop.fs.s3 - package org.apache.hadoop.fs.s3
A distributed, block-based implementation of FileSystem that uses Amazon S3 as a backing store.
org.apache.hadoop.fs.s3native - package org.apache.hadoop.fs.s3native
A distributed implementation of FileSystem for reading and writing files on Amazon S3.
org.apache.hadoop.fs.shell - package org.apache.hadoop.fs.shell
 
org.apache.hadoop.http - package org.apache.hadoop.http
 
org.apache.hadoop.io - package org.apache.hadoop.io
Generic i/o code for use when reading and writing data to the network, to databases, and to files.
org.apache.hadoop.io.compress - package org.apache.hadoop.io.compress
 
org.apache.hadoop.io.compress.bzip2 - package org.apache.hadoop.io.compress.bzip2
 
org.apache.hadoop.io.compress.zlib - package org.apache.hadoop.io.compress.zlib
 
org.apache.hadoop.io.file.tfile - package org.apache.hadoop.io.file.tfile
 
org.apache.hadoop.io.retry - package org.apache.hadoop.io.retry
A mechanism for selectively retrying methods that throw exceptions under certain circumstances.
org.apache.hadoop.io.serializer - package org.apache.hadoop.io.serializer
This package provides a mechanism for using different serialization frameworks in Hadoop.
org.apache.hadoop.ipc - package org.apache.hadoop.ipc
Tools to help define network clients and servers.
org.apache.hadoop.ipc.metrics - package org.apache.hadoop.ipc.metrics
 
org.apache.hadoop.log - package org.apache.hadoop.log
 
org.apache.hadoop.mapred - package org.apache.hadoop.mapred
A software framework for easily writing applications that process vast amounts of data (multi-terabyte data-sets) in parallel on large clusters (thousands of nodes) of commodity hardware in a reliable, fault-tolerant manner.
org.apache.hadoop.mapred.jobcontrol - package org.apache.hadoop.mapred.jobcontrol
Utilities for managing dependent jobs.
org.apache.hadoop.mapred.join - package org.apache.hadoop.mapred.join
Given a set of sorted datasets keyed with the same class and yielding equal partitions, it is possible to effect a join of those datasets prior to the map.
org.apache.hadoop.mapred.lib - package org.apache.hadoop.mapred.lib
Library of generally useful mappers, reducers, and partitioners.
org.apache.hadoop.mapred.lib.aggregate - package org.apache.hadoop.mapred.lib.aggregate
Classes for performing various counting and aggregations.
org.apache.hadoop.mapred.lib.db - package org.apache.hadoop.mapred.lib.db
A library to read records from a database as input to a map/reduce job, and write the output records back to the database.
org.apache.hadoop.mapred.pipes - package org.apache.hadoop.mapred.pipes
Hadoop Pipes allows C++ code to use Hadoop DFS and map/reduce.
org.apache.hadoop.mapred.tools - package org.apache.hadoop.mapred.tools
 
org.apache.hadoop.mapreduce - package org.apache.hadoop.mapreduce
 
org.apache.hadoop.mapreduce.lib.input - package org.apache.hadoop.mapreduce.lib.input
 
org.apache.hadoop.mapreduce.lib.map - package org.apache.hadoop.mapreduce.lib.map
 
org.apache.hadoop.mapreduce.lib.output - package org.apache.hadoop.mapreduce.lib.output
 
org.apache.hadoop.mapreduce.lib.partition - package org.apache.hadoop.mapreduce.lib.partition
 
org.apache.hadoop.mapreduce.lib.reduce - package org.apache.hadoop.mapreduce.lib.reduce
 
org.apache.hadoop.metrics - package org.apache.hadoop.metrics
This package defines an API for reporting performance metric information.
org.apache.hadoop.metrics.file - package org.apache.hadoop.metrics.file
Implementation of the metrics package that writes the metrics to a file.
org.apache.hadoop.metrics.ganglia - package org.apache.hadoop.metrics.ganglia
Implementation of the metrics package that sends metric data to Ganglia.
org.apache.hadoop.metrics.jvm - package org.apache.hadoop.metrics.jvm
 
org.apache.hadoop.metrics.spi - package org.apache.hadoop.metrics.spi
The Service Provider Interface for the Metrics API.
org.apache.hadoop.metrics.util - package org.apache.hadoop.metrics.util
 
org.apache.hadoop.net - package org.apache.hadoop.net
Network-related classes.
org.apache.hadoop.record - package org.apache.hadoop.record
Hadoop record I/O contains classes and a record description language translator for simplifying serialization and deserialization of records in a language-neutral manner.
org.apache.hadoop.record.compiler - package org.apache.hadoop.record.compiler
This package contains classes needed for code generation from the hadoop record compiler.
org.apache.hadoop.record.compiler.ant - package org.apache.hadoop.record.compiler.ant
 
org.apache.hadoop.record.compiler.generated - package org.apache.hadoop.record.compiler.generated
This package contains code generated by JavaCC from the Hadoop record syntax file rcc.jj.
org.apache.hadoop.record.meta - package org.apache.hadoop.record.meta
 
org.apache.hadoop.security - package org.apache.hadoop.security
 
org.apache.hadoop.security.authorize - package org.apache.hadoop.security.authorize
 
org.apache.hadoop.streaming - package org.apache.hadoop.streaming
Hadoop Streaming is a utility which allows users to create and run Map-Reduce jobs with any executables (e.g. Unix shell utilities) as the mapper and/or the reducer.
org.apache.hadoop.util - package org.apache.hadoop.util
Common utilities.
org.apache.hadoop.util.bloom - package org.apache.hadoop.util.bloom
 
org.apache.hadoop.util.hash - package org.apache.hadoop.util.hash
 
out - Variable in class org.apache.hadoop.io.compress.CompressionOutputStream
The output stream to be compressed.
out - Variable in class org.apache.hadoop.mapred.TextOutputFormat.LineRecordWriter
Deprecated.  
out - Variable in class org.apache.hadoop.mapreduce.lib.output.TextOutputFormat.LineRecordWriter
 
OuterJoinRecordReader<K extends WritableComparable> - Class in org.apache.hadoop.mapred.join
Full outer join.
outerrThreadsThrowable - Variable in class org.apache.hadoop.streaming.PipeMapRed
 
output_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
OUTPUT_FIELD_NAMES_PROPERTY - Static variable in class org.apache.hadoop.mapred.lib.db.DBConfiguration
Field names in the Output table
OUTPUT_FORMAT_CLASS_ATTR - Static variable in class org.apache.hadoop.mapreduce.JobContext
 
OUTPUT_TABLE_NAME_PROPERTY - Static variable in class org.apache.hadoop.mapred.lib.db.DBConfiguration
Output table name
OutputBuffer - Class in org.apache.hadoop.io
A reusable OutputStream implementation that writes to an in-memory buffer.
OutputBuffer() - Constructor for class org.apache.hadoop.io.OutputBuffer
Constructs a new empty buffer.
OutputCollector<K,V> - Interface in org.apache.hadoop.mapred
Collects the <key, value> pairs output by Mappers and Reducers.
OutputCommitter - Class in org.apache.hadoop.mapred
Deprecated. Use OutputCommitter instead.
OutputCommitter() - Constructor for class org.apache.hadoop.mapred.OutputCommitter
Deprecated.  
OutputCommitter - Class in org.apache.hadoop.mapreduce
OutputCommitter describes the commit of task output for a Map-Reduce job.
OutputCommitter() - Constructor for class org.apache.hadoop.mapreduce.OutputCommitter
 
OutputFormat<K,V> - Interface in org.apache.hadoop.mapred
Deprecated. Use OutputFormat instead.
OutputFormat<K,V> - Class in org.apache.hadoop.mapreduce
OutputFormat describes the output-specification for a Map-Reduce job.
OutputFormat() - Constructor for class org.apache.hadoop.mapreduce.OutputFormat
 
outputFormatSpec_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
OutputLogFilter - Class in org.apache.hadoop.mapred
This class filters log files from the given directory; it does not accept paths containing _logs.
OutputLogFilter() - Constructor for class org.apache.hadoop.mapred.OutputLogFilter
 
OutputRecord - Class in org.apache.hadoop.metrics.spi
Represents a record of metric data to be sent to a metrics system.
outputSingleNode_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
OverrideRecordReader<K extends WritableComparable,V extends Writable> - Class in org.apache.hadoop.mapred.join
Prefer the "rightmost" data source for this key.

P

pack(SerializedRecord) - Static method in class org.apache.hadoop.contrib.failmon.LocalStore
Pack a SerializedRecord into an array of bytes
packageFiles_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
packageJobJar() - Method in class org.apache.hadoop.streaming.StreamJob
 
parent - Variable in class org.apache.hadoop.net.NodeBase
 
parse(String[], int) - Method in class org.apache.hadoop.fs.shell.CommandFormat
Parse parameters starting from the given position
parse(String, int) - Static method in class org.apache.hadoop.metrics.spi.Util
Parses a space and/or comma separated sequence of server specifications of the form hostname or hostname:port.
parseDate(String, String) - Method in class org.apache.hadoop.contrib.failmon.HadoopLogParser
Parse a date found in the Hadoop log.
parseDate(String, String) - Method in class org.apache.hadoop.contrib.failmon.LogParser
Parse a date found in Hadoop log file.
parseDate(String, String) - Method in class org.apache.hadoop.contrib.failmon.SystemLogParser
Parse a date found in the system log.
ParseException - Exception in org.apache.hadoop.record.compiler.generated
This exception is thrown when parse errors are encountered.
ParseException(Token, int[][], String[]) - Constructor for exception org.apache.hadoop.record.compiler.generated.ParseException
This constructor is used by the method "generateParseException" in the generated parser.
ParseException() - Constructor for exception org.apache.hadoop.record.compiler.generated.ParseException
The following constructors are for use by you for whatever purpose you can think of.
ParseException(String) - Constructor for exception org.apache.hadoop.record.compiler.generated.ParseException
 
parseExecResult(BufferedReader) - Method in class org.apache.hadoop.fs.DF
 
parseExecResult(BufferedReader) - Method in class org.apache.hadoop.fs.DU
 
parseExecResult(BufferedReader) - Method in class org.apache.hadoop.util.Shell
Parse the execution result
parseExecResult(BufferedReader) - Method in class org.apache.hadoop.util.Shell.ShellCommandExecutor
 
parseHashType(String) - Static method in class org.apache.hadoop.util.hash.Hash
This utility method converts the String representation of a hash function name to a symbolic constant.
parseHistoryFromFS(String, JobHistory.Listener, FileSystem) - Static method in class org.apache.hadoop.mapred.JobHistory
Parses history file and invokes Listener.handle() for each line of history.
parseJobTasks(String, JobHistory.JobInfo, FileSystem) - Static method in class org.apache.hadoop.mapred.DefaultJobHistoryParser
Populates a JobInfo object from the job's history log file.
parseLine(String) - Method in class org.apache.hadoop.contrib.failmon.HadoopLogParser
Parses one line of the log.
parseLine(String) - Method in class org.apache.hadoop.contrib.failmon.LogParser
Parses one line of the log.
parseLine(String) - Method in class org.apache.hadoop.contrib.failmon.SystemLogParser
Parses one line of the log.
Parser - Class in org.apache.hadoop.mapred.join
Very simple shift-reduce parser for join expressions.
Parser() - Constructor for class org.apache.hadoop.mapred.join.Parser
 
Parser.Node - Class in org.apache.hadoop.mapred.join
 
Parser.Node(String) - Constructor for class org.apache.hadoop.mapred.join.Parser.Node
 
Parser.NodeToken - Class in org.apache.hadoop.mapred.join
 
Parser.NumToken - Class in org.apache.hadoop.mapred.join
 
Parser.NumToken(double) - Constructor for class org.apache.hadoop.mapred.join.Parser.NumToken
 
Parser.StrToken - Class in org.apache.hadoop.mapred.join
 
Parser.StrToken(Parser.TType, String) - Constructor for class org.apache.hadoop.mapred.join.Parser.StrToken
 
Parser.Token - Class in org.apache.hadoop.mapred.join
Tagged-union type for tokens from the join expression.
Parser.TType - Enum in org.apache.hadoop.mapred.join
 
Partitioner<K2,V2> - Interface in org.apache.hadoop.mapred
Deprecated. Use Partitioner instead.
Partitioner<KEY,VALUE> - Class in org.apache.hadoop.mapreduce
Partitions the key space.
Partitioner() - Constructor for class org.apache.hadoop.mapreduce.Partitioner
 
PARTITIONER_CLASS_ATTR - Static variable in class org.apache.hadoop.mapreduce.JobContext
 
partitionerSpec_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
PASSWORD_PROPERTY - Static variable in class org.apache.hadoop.mapred.lib.db.DBConfiguration
Password to access the database
Path - Class in org.apache.hadoop.fs
Names a file or directory in a FileSystem.
Path(String, String) - Constructor for class org.apache.hadoop.fs.Path
Resolve a child path against a parent path.
Path(Path, String) - Constructor for class org.apache.hadoop.fs.Path
Resolve a child path against a parent path.
Path(String, Path) - Constructor for class org.apache.hadoop.fs.Path
Resolve a child path against a parent path.
Path(Path, Path) - Constructor for class org.apache.hadoop.fs.Path
Resolve a child path against a parent path.
Path(String) - Constructor for class org.apache.hadoop.fs.Path
Construct a path from a String.
Path(String, String, String) - Constructor for class org.apache.hadoop.fs.Path
Construct a Path from components.
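
The two-argument constructors above resolve a child path against a parent. A short sketch, with an illustrative HDFS URI:

    import org.apache.hadoop.fs.Path;

    public class PathSketch {
      public static void main(String[] args) {
        Path parent = new Path("hdfs://namenode:9000/user/alice");   // illustrative URI
        Path child = new Path(parent, "logs/2009-01-01");            // resolved against parent
        System.out.println(child);              // hdfs://namenode:9000/user/alice/logs/2009-01-01
        System.out.println(child.getParent());  // hdfs://namenode:9000/user/alice/logs
      }
    }
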
PATH_SEPARATOR - Static variable in class org.apache.hadoop.net.NodeBase
 
PATH_SEPARATOR_STR - Static variable in class org.apache.hadoop.net.NodeBase
 
PathFilter - Interface in org.apache.hadoop.fs
 
PathFinder - Class in org.apache.hadoop.streaming
Maps a relative pathname to an absolute pathname using the PATH environment variable.
PathFinder() - Constructor for class org.apache.hadoop.streaming.PathFinder
Construct a PathFinder object using the path from java.class.path
PathFinder(String) - Constructor for class org.apache.hadoop.streaming.PathFinder
Construct a PathFinder object using the path from the specified system environment variable.
pathToFile(Path) - Method in class org.apache.hadoop.fs.LocalFileSystem
Convert a path to a File.
pathToFile(Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
Convert a path to a File.
Pentomino - Class in org.apache.hadoop.examples.dancing
 
Pentomino(int, int) - Constructor for class org.apache.hadoop.examples.dancing.Pentomino
Create the model for a given pentomino set of pieces and board size.
Pentomino() - Constructor for class org.apache.hadoop.examples.dancing.Pentomino
Create the object without initialization.
Pentomino.ColumnName - Interface in org.apache.hadoop.examples.dancing
This interface is just a marker for the types I expect to get back as column names.
Pentomino.Piece - Class in org.apache.hadoop.examples.dancing
Maintain information about a puzzle piece.
Pentomino.Piece(String, String, boolean, int[]) - Constructor for class org.apache.hadoop.examples.dancing.Pentomino.Piece
 
Pentomino.SolutionCategory - Enum in org.apache.hadoop.examples.dancing
 
percentageGraph(int, int) - Static method in class org.apache.hadoop.util.ServletUtil
Generates a percentage graph and returns its HTML representation as a string.
percentageGraph(float, int) - Static method in class org.apache.hadoop.util.ServletUtil
Generates a percentage graph and returns its HTML representation as a string.
PERIOD_PROPERTY - Static variable in class org.apache.hadoop.metrics.file.FileContext
 
PermissionStatus - Class in org.apache.hadoop.fs.permission
Store permission related information.
PermissionStatus(String, String, FsPermission) - Constructor for class org.apache.hadoop.fs.permission.PermissionStatus
Constructor
PersistentState - Class in org.apache.hadoop.contrib.failmon
This class takes care of the information that needs to be persistently stored locally on nodes.
PersistentState() - Constructor for class org.apache.hadoop.contrib.failmon.PersistentState
 
phase() - Method in class org.apache.hadoop.util.Progress
Returns the current sub-node executing.
pieces - Variable in class org.apache.hadoop.examples.dancing.Pentomino
 
PiEstimator - Class in org.apache.hadoop.examples
A Map-reduce program to estimate the value of Pi using a quasi-Monte Carlo method.
PiEstimator() - Constructor for class org.apache.hadoop.examples.PiEstimator
 
PiEstimator.PiMapper - Class in org.apache.hadoop.examples
Mapper class for Pi estimation.
PiEstimator.PiMapper() - Constructor for class org.apache.hadoop.examples.PiEstimator.PiMapper
 
PiEstimator.PiReducer - Class in org.apache.hadoop.examples
Reducer class for Pi estimation.
PiEstimator.PiReducer() - Constructor for class org.apache.hadoop.examples.PiEstimator.PiReducer
 
ping(TaskAttemptID) - Method in class org.apache.hadoop.mapred.TaskTracker
Child checking to see if we're alive.
PipeMapper - Class in org.apache.hadoop.streaming
A generic Mapper bridge.
PipeMapper() - Constructor for class org.apache.hadoop.streaming.PipeMapper
 
PipeMapRed - Class in org.apache.hadoop.streaming
Shared functionality for PipeMapper, PipeReducer.
PipeMapRed() - Constructor for class org.apache.hadoop.streaming.PipeMapRed
 
PipeMapRunner<K1,V1,K2,V2> - Class in org.apache.hadoop.streaming
 
PipeMapRunner() - Constructor for class org.apache.hadoop.streaming.PipeMapRunner
 
PipeReducer - Class in org.apache.hadoop.streaming
A generic Reducer bridge.
PipeReducer() - Constructor for class org.apache.hadoop.streaming.PipeReducer
 
PlatformName - Class in org.apache.hadoop.util
A helper class for getting build-info of the java-vm.
PlatformName() - Constructor for class org.apache.hadoop.util.PlatformName
 
POLICY_PROVIDER_CONFIG - Static variable in class org.apache.hadoop.security.authorize.PolicyProvider
Configuration key for the PolicyProvider implementation.
PolicyProvider - Class in org.apache.hadoop.security.authorize
PolicyProvider provides the Service definitions to the security Policy in effect for Hadoop.
PolicyProvider() - Constructor for class org.apache.hadoop.security.authorize.PolicyProvider
 
pop() - Method in class org.apache.hadoop.util.PriorityQueue
Removes and returns the least element of the PriorityQueue in log(size) time.
PositionedReadable - Interface in org.apache.hadoop.fs
Stream that permits positional reading.
PREP - Static variable in class org.apache.hadoop.mapred.JobStatus
 
prepare(String) - Static method in class org.apache.hadoop.contrib.failmon.Environment
Initializes structures needed by other methods.
prepareAppendKey(int) - Method in class org.apache.hadoop.io.file.tfile.TFile.Writer
Obtain an output stream for writing a key into TFile.
prepareAppendValue(int) - Method in class org.apache.hadoop.io.file.tfile.TFile.Writer
Obtain an output stream for writing a value into TFile.
prepareMetaBlock(String, String) - Method in class org.apache.hadoop.io.file.tfile.TFile.Writer
Obtain an output stream for creating a meta block.
prepareMetaBlock(String) - Method in class org.apache.hadoop.io.file.tfile.TFile.Writer
Obtain an output stream for creating a meta block.
prependPathComponent(String) - Method in class org.apache.hadoop.streaming.PathFinder
Prepends the specified component to the path list.
preserveInput(boolean) - Method in class org.apache.hadoop.io.SequenceFile.Sorter.SegmentDescriptor
Whether to delete the files when no longer needed
prevCharIsCR - Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
prevCharIsLF - Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
printGenericCommandUsage(PrintStream) - Static method in class org.apache.hadoop.util.GenericOptionsParser
Print the usage message for generic command-line options supported.
printGenericCommandUsage(PrintStream) - Static method in class org.apache.hadoop.util.ToolRunner
Prints generic command-line arguments and usage information.
PrintJarMainClass - Class in org.apache.hadoop.util
A micro-application that prints the main class name out of a jar file.
PrintJarMainClass() - Constructor for class org.apache.hadoop.util.PrintJarMainClass
 
printStackTrace() - Method in exception org.apache.hadoop.security.authorize.AuthorizationException
 
printStackTrace(PrintStream) - Method in exception org.apache.hadoop.security.authorize.AuthorizationException
 
printStackTrace(PrintWriter) - Method in exception org.apache.hadoop.security.authorize.AuthorizationException
 
printStatistics() - Static method in class org.apache.hadoop.fs.FileSystem
 
printThreadInfo(PrintWriter, String) - Static method in class org.apache.hadoop.util.ReflectionUtils
Print all of the thread's information and stack traces.
PriorityQueue<T> - Class in org.apache.hadoop.util
A PriorityQueue maintains a partial ordering of its elements such that the least element can always be found in constant time.
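
PriorityQueue is abstract: a subclass supplies the ordering and sizes the heap. A minimal sketch, assuming the Lucene-style contract of this class (a protected lessThan(Object, Object) ordering plus a one-time initialize(maxSize) call):

    import org.apache.hadoop.util.PriorityQueue;

    // A minimal min-heap of longs; the class name and capacity are illustrative.
    public class LongQueue extends PriorityQueue<Long> {
      public LongQueue(int maxSize) {
        initialize(maxSize);                   // allocate the heap once
      }
      protected boolean lessThan(Object a, Object b) {
        return (Long) a < (Long) b;            // smallest value surfaces first
      }

      public static void main(String[] args) {
        LongQueue q = new LongQueue(8);
        q.put(42L);
        q.put(7L);
        q.put(19L);
        System.out.println(q.pop());           // 7
        System.out.println(q.pop());           // 19
      }
    }
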
PriorityQueue() - Constructor for class org.apache.hadoop.util.PriorityQueue
 
probablyHasKey(WritableComparable) - Method in class org.apache.hadoop.io.BloomMapFile.Reader
Checks if this MapFile has the indicated key.
process(IntermediateForm) - Method in class org.apache.hadoop.contrib.index.lucene.ShardWriter
Process an intermediate form by carrying out, on the Lucene instance of the shard, the deletes and the inserts (a ram index) in the form.
process(DocumentAndOp, Analyzer) - Method in class org.apache.hadoop.contrib.index.mapred.IntermediateForm
This method is used by the index update mapper and processes a document operation into the current intermediate form.
process(IntermediateForm) - Method in class org.apache.hadoop.contrib.index.mapred.IntermediateForm
This method is used by the index update combiner and processes an intermediate form into the current intermediate form.
processDeleteOnExit() - Method in class org.apache.hadoop.fs.FileSystem
Delete all files that were marked as delete-on-exit.
ProcfsBasedProcessTree - Class in org.apache.hadoop.util
A Proc file-system based ProcessTree.
ProcfsBasedProcessTree(String) - Constructor for class org.apache.hadoop.util.ProcfsBasedProcessTree
 
ProcfsBasedProcessTree(String, String) - Constructor for class org.apache.hadoop.util.ProcfsBasedProcessTree
 
ProgramDriver - Class in org.apache.hadoop.util
A driver that is used to run programs added to it
ProgramDriver() - Constructor for class org.apache.hadoop.util.ProgramDriver
 
progress - Variable in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
 
progress() - Method in class org.apache.hadoop.mapred.TaskAttemptContext
Deprecated.  
progress() - Method in class org.apache.hadoop.mapreduce.StatusReporter
 
progress() - Method in class org.apache.hadoop.mapreduce.TaskAttemptContext
Report progress.
progress() - Method in class org.apache.hadoop.mapreduce.TaskInputOutputContext
 
Progress - Class in org.apache.hadoop.util
Utility to assist with generation of progress reports.
Progress() - Constructor for class org.apache.hadoop.util.Progress
Creates a new root node.
progress() - Method in interface org.apache.hadoop.util.Progressable
Report progress to the Hadoop framework.
Progressable - Interface in org.apache.hadoop.util
A facility for reporting progress.
pseudoSortByDistance(Node, Node[]) - Method in class org.apache.hadoop.net.NetworkTopology
Sort the nodes array by their distances to the reader: the method linearly scans the array and, if a local node is found, swaps it with the first element of the array.
purge() - Method in interface org.apache.hadoop.fs.s3.FileSystemStore
Delete everything.
purgeCache(Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Clear the entire contents of the cache and delete the backing files.
pushMetric(MetricsRecord) - Method in class org.apache.hadoop.metrics.util.MetricsBase
 
pushMetric(MetricsRecord) - Method in class org.apache.hadoop.metrics.util.MetricsIntValue
Push the metric to the metrics record.
pushMetric(MetricsRecord) - Method in class org.apache.hadoop.metrics.util.MetricsLongValue
Push the metric to the metrics record.
pushMetric(MetricsRecord) - Method in class org.apache.hadoop.metrics.util.MetricsTimeVaryingInt
Push the delta metrics to the metrics record.
pushMetric(MetricsRecord) - Method in class org.apache.hadoop.metrics.util.MetricsTimeVaryingLong
Push the delta metrics to the metrics record.
pushMetric(MetricsRecord) - Method in class org.apache.hadoop.metrics.util.MetricsTimeVaryingRate
Push the delta metrics to the metrics record.
put(Writable, Writable) - Method in class org.apache.hadoop.io.MapWritable
put(WritableComparable, Writable) - Method in class org.apache.hadoop.io.SortedMapWritable
put(T) - Method in class org.apache.hadoop.util.PriorityQueue
Adds an Object to a PriorityQueue in log(size) time.
putAll(Map<? extends Writable, ? extends Writable>) - Method in class org.apache.hadoop.io.MapWritable
putAll(Map<? extends WritableComparable, ? extends Writable>) - Method in class org.apache.hadoop.io.SortedMapWritable

Q

QSORT_STACK_SIZE - Static variable in class org.apache.hadoop.io.compress.bzip2.CBZip2OutputStream
This constant is accessible by subclasses for historical purposes.
quarterDigest() - Method in class org.apache.hadoop.io.MD5Hash
Return a 32-bit digest of the MD5.
query(String) - Method in class org.apache.hadoop.contrib.failmon.CPUParser
Reads and parses /proc/cpuinfo and creates an appropriate EventRecord that holds the desirable information.
query(String) - Method in class org.apache.hadoop.contrib.failmon.NICParser
Reads and parses the output of ifconfig for a specified NIC and creates an appropriate EventRecord that holds the desirable information for it.
query(String) - Method in class org.apache.hadoop.contrib.failmon.SensorsParser
Reads and parses the output of the 'sensors' command and creates an appropriate EventRecord that holds the desirable information.
query(String) - Method in class org.apache.hadoop.contrib.failmon.ShellParser
 
query(String) - Method in class org.apache.hadoop.contrib.failmon.SMARTParser
Reads and parses the output of smartctl for a specified disk and creates an appropriate EventRecord that holds the desirable information for it.
QuickSort - Class in org.apache.hadoop.util
An implementation of the core algorithm of QuickSort.
QuickSort() - Constructor for class org.apache.hadoop.util.QuickSort
 

R

RAMDirectoryUtil - Class in org.apache.hadoop.contrib.index.lucene
A utility class which writes an index in a RAM directory to a DataOutput, and reads an index from a DataInput into a RAM directory.
RAMDirectoryUtil() - Constructor for class org.apache.hadoop.contrib.index.lucene.RAMDirectoryUtil
 
RANDOM - Static variable in interface org.apache.hadoop.util.bloom.RemoveScheme
Random selection.
RandomTextWriter - Class in org.apache.hadoop.examples
This program uses map/reduce to just run a distributed job where there is no interaction between the tasks and each task writes a large unsorted random sequence of words.
RandomTextWriter() - Constructor for class org.apache.hadoop.examples.RandomTextWriter
 
RandomWriter - Class in org.apache.hadoop.examples
This program uses map/reduce to just run a distributed job where there is no interaction between the tasks and each task writes a large unsorted random binary sequence file of BytesWritable.
RandomWriter() - Constructor for class org.apache.hadoop.examples.RandomWriter
 
RATIO - Static variable in interface org.apache.hadoop.util.bloom.RemoveScheme
Ratio Selection.
RawComparable - Interface in org.apache.hadoop.io.file.tfile
Interface for objects that can be compared through RawComparator.
RawComparator<T> - Interface in org.apache.hadoop.io
A Comparator that operates directly on byte representations of objects.
RawKeyValueIterator - Interface in org.apache.hadoop.mapred
RawKeyValueIterator is an iterator used to iterate over the raw keys and values during sort/merge of intermediate data.
RawLocalFileSystem - Class in org.apache.hadoop.fs
Implement the FileSystem API for the raw local filesystem.
RawLocalFileSystem() - Constructor for class org.apache.hadoop.fs.RawLocalFileSystem
 
rawMapping - Variable in class org.apache.hadoop.net.CachedDNSToSwitchMapping
 
RBRACE_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
Rcc - Class in org.apache.hadoop.record.compiler.generated
 
Rcc(InputStream) - Constructor for class org.apache.hadoop.record.compiler.generated.Rcc
 
Rcc(InputStream, String) - Constructor for class org.apache.hadoop.record.compiler.generated.Rcc
 
Rcc(Reader) - Constructor for class org.apache.hadoop.record.compiler.generated.Rcc
 
Rcc(RccTokenManager) - Constructor for class org.apache.hadoop.record.compiler.generated.Rcc
 
RccConstants - Interface in org.apache.hadoop.record.compiler.generated
 
RccTask - Class in org.apache.hadoop.record.compiler.ant
Hadoop record compiler ant Task
RccTask() - Constructor for class org.apache.hadoop.record.compiler.ant.RccTask
Creates a new instance of RccTask
RccTokenManager - Class in org.apache.hadoop.record.compiler.generated
 
RccTokenManager(SimpleCharStream) - Constructor for class org.apache.hadoop.record.compiler.generated.RccTokenManager
 
RccTokenManager(SimpleCharStream, int) - Constructor for class org.apache.hadoop.record.compiler.generated.RccTokenManager
 
read(long, byte[], int, int) - Method in class org.apache.hadoop.fs.BufferedFSInputStream
 
read(long, byte[], int, int) - Method in class org.apache.hadoop.fs.FSDataInputStream
 
read() - Method in class org.apache.hadoop.fs.FSInputChecker
Read one checksum-verified byte
read(byte[], int, int) - Method in class org.apache.hadoop.fs.FSInputChecker
Read checksum verified bytes from this byte-input stream into the specified byte array, starting at the given offset.
read(long, byte[], int, int) - Method in class org.apache.hadoop.fs.FSInputStream
 
read() - Method in class org.apache.hadoop.fs.ftp.FTPInputStream
 
read(byte[], int, int) - Method in class org.apache.hadoop.fs.ftp.FTPInputStream
 
read(DataInput) - Static method in class org.apache.hadoop.fs.permission.FsPermission
Create and initialize a FsPermission from DataInput.
read(DataInput) - Static method in class org.apache.hadoop.fs.permission.PermissionStatus
Create and initialize a PermissionStatus from DataInput.
read(long, byte[], int, int) - Method in interface org.apache.hadoop.fs.PositionedReadable
Read up to the specified number of bytes from a given position within a file, and return the number of bytes read.
read() - Method in class org.apache.hadoop.io.compress.bzip2.CBZip2InputStream
 
read(byte[], int, int) - Method in class org.apache.hadoop.io.compress.bzip2.CBZip2InputStream
 
read(byte[], int, int) - Method in class org.apache.hadoop.io.compress.CompressionInputStream
Read bytes from the stream.
read() - Method in class org.apache.hadoop.io.compress.DecompressorStream
 
read(byte[], int, int) - Method in class org.apache.hadoop.io.compress.DecompressorStream
 
read() - Method in class org.apache.hadoop.io.compress.GzipCodec.GzipInputStream
 
read(byte[], int, int) - Method in class org.apache.hadoop.io.compress.GzipCodec.GzipInputStream
 
read(DataInput) - Static method in class org.apache.hadoop.io.MD5Hash
Constructs, reads and returns an instance.
read(DataInput) - Static method in class org.apache.hadoop.mapred.JobID
Deprecated. 
read(DataInput) - Static method in class org.apache.hadoop.mapred.TaskAttemptID
Deprecated. 
read(DataInput) - Static method in class org.apache.hadoop.mapred.TaskID
Deprecated. 
read() - Method in class org.apache.hadoop.net.SocketInputStream
 
read(byte[], int, int) - Method in class org.apache.hadoop.net.SocketInputStream
 
read(ByteBuffer) - Method in class org.apache.hadoop.net.SocketInputStream
 
readBool(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
 
readBool(String) - Method in class org.apache.hadoop.record.CsvRecordInput
 
readBool(String) - Method in interface org.apache.hadoop.record.RecordInput
Read a boolean from serialized record.
readBool(String) - Method in class org.apache.hadoop.record.XmlRecordInput
 
readBuffer(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
 
readBuffer(String) - Method in class org.apache.hadoop.record.CsvRecordInput
 
readBuffer(String) - Method in interface org.apache.hadoop.record.RecordInput
Read byte array from serialized record.
readBuffer(String) - Method in class org.apache.hadoop.record.XmlRecordInput
 
readByte(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
 
readByte(String) - Method in class org.apache.hadoop.record.CsvRecordInput
 
readByte(String) - Method in interface org.apache.hadoop.record.RecordInput
Read a byte from serialized record.
readByte(String) - Method in class org.apache.hadoop.record.XmlRecordInput
 
readChar() - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
readChunk(long, byte[], int, int, byte[]) - Method in class org.apache.hadoop.fs.FSInputChecker
Reads the next chunk of checksummed data into buf at offset, and the corresponding checksum into checksum.
readCompressedByteArray(DataInput) - Static method in class org.apache.hadoop.io.WritableUtils
 
readCompressedString(DataInput) - Static method in class org.apache.hadoop.io.WritableUtils
 
readCompressedStringArray(DataInput) - Static method in class org.apache.hadoop.io.WritableUtils
 
readDouble(byte[], int) - Static method in class org.apache.hadoop.io.WritableComparator
Parse a double from a byte array.
readDouble(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
 
readDouble(String) - Method in class org.apache.hadoop.record.CsvRecordInput
 
readDouble(String) - Method in interface org.apache.hadoop.record.RecordInput
Read a double-precision number from serialized record.
readDouble(byte[], int) - Static method in class org.apache.hadoop.record.Utils
Parse a double from a byte array.
readDouble(String) - Method in class org.apache.hadoop.record.XmlRecordInput
 
readEnum(DataInput, Class<T>) - Static method in class org.apache.hadoop.io.WritableUtils
Read an Enum value from DataInput; Enums are read and written using their String values.
readFields(DataInput) - Method in class org.apache.hadoop.conf.Configuration
 
readFields(DataInput) - Method in class org.apache.hadoop.contrib.index.example.LineDocTextAndOp
 
readFields(DataInput) - Method in class org.apache.hadoop.contrib.index.mapred.DocumentAndOp
 
readFields(DataInput) - Method in class org.apache.hadoop.contrib.index.mapred.DocumentID
 
readFields(DataInput) - Method in class org.apache.hadoop.contrib.index.mapred.IntermediateForm
 
readFields(DataInput) - Method in class org.apache.hadoop.contrib.index.mapred.Shard
 
readFields(DataInput) - Method in class org.apache.hadoop.examples.MultiFileWordCount.WordOffset
 
readFields(DataInput) - Method in class org.apache.hadoop.examples.SecondarySort.IntPair
Read the two integers.
readFields(DataInput) - Method in class org.apache.hadoop.examples.SleepJob.EmptySplit
 
readFields(DataInput) - Method in class org.apache.hadoop.fs.BlockLocation
Implement readFields of Writable
readFields(DataInput) - Method in class org.apache.hadoop.fs.ContentSummary
Deserialize the fields of this object from in.
readFields(DataInput) - Method in class org.apache.hadoop.fs.FileStatus
 
readFields(DataInput) - Method in class org.apache.hadoop.fs.MD5MD5CRC32FileChecksum
Deserialize the fields of this object from in.
readFields(DataInput) - Method in class org.apache.hadoop.fs.permission.FsPermission
Deserialize the fields of this object from in.
readFields(DataInput) - Method in class org.apache.hadoop.fs.permission.PermissionStatus
Deserialize the fields of this object from in.
readFields(DataInput) - Method in class org.apache.hadoop.io.AbstractMapWritable
Deserialize the fields of this object from in.
readFields(DataInput) - Method in class org.apache.hadoop.io.ArrayWritable
 
readFields(DataInput) - Method in class org.apache.hadoop.io.BooleanWritable
 
readFields(DataInput) - Method in class org.apache.hadoop.io.BytesWritable
 
readFields(DataInput) - Method in class org.apache.hadoop.io.ByteWritable
 
readFields(DataInput) - Method in class org.apache.hadoop.io.CompressedWritable
 
readFields(DataInput) - Method in class org.apache.hadoop.io.DoubleWritable
 
readFields(DataInput) - Method in class org.apache.hadoop.io.FloatWritable
 
readFields(DataInput) - Method in class org.apache.hadoop.io.GenericWritable
 
readFields(DataInput) - Method in class org.apache.hadoop.io.IntWritable
 
readFields(DataInput) - Method in class org.apache.hadoop.io.LongWritable
 
readFields(DataInput) - Method in class org.apache.hadoop.io.MapWritable
Deserialize the fields of this object from in.
readFields(DataInput) - Method in class org.apache.hadoop.io.MD5Hash
 
readFields(DataInput) - Method in class org.apache.hadoop.io.NullWritable
 
readFields(DataInput) - Method in class org.apache.hadoop.io.ObjectWritable
 
readFields(DataInput) - Method in class org.apache.hadoop.io.SequenceFile.Metadata
 
readFields(DataInput) - Method in class org.apache.hadoop.io.SortedMapWritable
Deserialize the fields of this object from in.
readFields(DataInput) - Method in class org.apache.hadoop.io.Text
Deserialize the fields of this object from in.
readFields(DataInput) - Method in class org.apache.hadoop.io.TwoDArrayWritable
 
readFields(DataInput) - Method in class org.apache.hadoop.io.UTF8
Deprecated.  
readFields(DataInput) - Method in class org.apache.hadoop.io.VersionedWritable
 
readFields(DataInput) - Method in class org.apache.hadoop.io.VIntWritable
 
readFields(DataInput) - Method in class org.apache.hadoop.io.VLongWritable
 
readFields(DataInput) - Method in interface org.apache.hadoop.io.Writable
Deserialize the fields of this object from in.
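
A Writable's readFields(DataInput) must consume exactly what its write(DataOutput) produced, in the same order. A minimal custom Writable as a sketch (the class and field names are illustrative):

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import org.apache.hadoop.io.Writable;

    public class PointWritable implements Writable {
      private int x;
      private int y;

      public void write(DataOutput out) throws IOException {
        out.writeInt(x);
        out.writeInt(y);
      }

      public void readFields(DataInput in) throws IOException {
        x = in.readInt();        // same order as write()
        y = in.readInt();
      }
    }
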
readFields(DataInput) - Method in class org.apache.hadoop.mapred.ClusterStatus
 
readFields(DataInput) - Method in class org.apache.hadoop.mapred.Counters.Group
Deprecated.  
readFields(DataInput) - Method in class org.apache.hadoop.mapred.Counters
Deprecated. Read a set of groups.
readFields(DataInput) - Method in class org.apache.hadoop.mapred.FileSplit
Deprecated.  
readFields(DataInput) - Method in class org.apache.hadoop.mapred.JobProfile
 
readFields(DataInput) - Method in class org.apache.hadoop.mapred.JobQueueInfo
 
readFields(DataInput) - Method in class org.apache.hadoop.mapred.JobStatus
 
readFields(DataInput) - Method in class org.apache.hadoop.mapred.join.CompositeInputSplit
Deserialize the fields of this object from in.
readFields(DataInput) - Method in class org.apache.hadoop.mapred.join.TupleWritable
Deserialize the fields of this object from in.
readFields(DataInput) - Method in class org.apache.hadoop.mapred.lib.CombineFileSplit
 
readFields(DataInput) - Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat.DBInputSplit
Deserialize the fields of this object from in.
readFields(DataInput) - Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat.NullDBWritable
 
readFields(ResultSet) - Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat.NullDBWritable
 
readFields(ResultSet) - Method in interface org.apache.hadoop.mapred.lib.db.DBWritable
Reads the fields of the object from the ResultSet.
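
DBWritable mirrors Writable but against JDBC: readFields(ResultSet) populates the object from the current row, while write(PreparedStatement) binds its fields for an insert. A sketch for a hypothetical (id INTEGER, name VARCHAR) table:

    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import org.apache.hadoop.mapred.lib.db.DBWritable;

    public class UserRecord implements DBWritable {
      private int id;
      private String name;

      public void readFields(ResultSet resultSet) throws SQLException {
        id = resultSet.getInt(1);          // column order matches the input query
        name = resultSet.getString(2);
      }

      public void write(PreparedStatement statement) throws SQLException {
        statement.setInt(1, id);           // placeholder order matches the insert statement
        statement.setString(2, name);
      }
    }

In practice a value class used with DBInputFormat also implements Writable; that half is omitted here for brevity.
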
readFields(DataInput) - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
 
readFields(DataInput) - Method in class org.apache.hadoop.mapred.TaskReport
 
readFields(DataInput) - Method in class org.apache.hadoop.mapreduce.Counter
Read the binary representation of the counter
readFields(DataInput) - Method in class org.apache.hadoop.mapreduce.CounterGroup
 
readFields(DataInput) - Method in class org.apache.hadoop.mapreduce.Counters
Read a set of groups.
readFields(DataInput) - Method in class org.apache.hadoop.mapreduce.ID
 
readFields(DataInput) - Method in class org.apache.hadoop.mapreduce.JobID
 
readFields(DataInput) - Method in class org.apache.hadoop.mapreduce.lib.input.FileSplit
 
readFields(DataInput) - Method in class org.apache.hadoop.mapreduce.TaskAttemptID
 
readFields(DataInput) - Method in class org.apache.hadoop.mapreduce.TaskID
 
readFields(DataInput) - Method in class org.apache.hadoop.record.Record
 
readFields(DataInput) - Method in class org.apache.hadoop.security.UnixUserGroupInformation
Deserialize this object. First check if this is a UGI in the string format.
readFields(DataInput) - Method in class org.apache.hadoop.util.bloom.BloomFilter
 
readFields(DataInput) - Method in class org.apache.hadoop.util.bloom.CountingBloomFilter
 
readFields(DataInput) - Method in class org.apache.hadoop.util.bloom.DynamicBloomFilter
 
readFields(DataInput) - Method in class org.apache.hadoop.util.bloom.Filter
 
readFields(DataInput) - Method in class org.apache.hadoop.util.bloom.Key
 
readFields(DataInput) - Method in class org.apache.hadoop.util.bloom.RetouchedBloomFilter
 
readFieldsCompressed(DataInput) - Method in class org.apache.hadoop.io.CompressedWritable
Subclasses implement this instead of CompressedWritable.readFields(DataInput).
readFloat(byte[], int) - Static method in class org.apache.hadoop.io.WritableComparator
Parse a float from a byte array.
readFloat(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
 
readFloat(String) - Method in class org.apache.hadoop.record.CsvRecordInput
 
readFloat(String) - Method in interface org.apache.hadoop.record.RecordInput
Read a single-precision float from serialized record.
readFloat(byte[], int) - Static method in class org.apache.hadoop.record.Utils
Parse a float from a byte array.
readFloat(String) - Method in class org.apache.hadoop.record.XmlRecordInput
 
readFrom(Configuration) - Static method in class org.apache.hadoop.security.UserGroupInformation
Read a UserGroupInformation from conf
readFromConf(Configuration, String) - Static method in class org.apache.hadoop.security.UnixUserGroupInformation
Read a UGI from the given conf. The object is expected to be stored under the property name attr as a comma-separated string that starts with the user name followed by group names.
readFully(long, byte[], int, int) - Method in class org.apache.hadoop.fs.BufferedFSInputStream
 
readFully(long, byte[]) - Method in class org.apache.hadoop.fs.BufferedFSInputStream
 
readFully(long, byte[], int, int) - Method in class org.apache.hadoop.fs.FSDataInputStream
 
readFully(long, byte[]) - Method in class org.apache.hadoop.fs.FSDataInputStream
 
readFully(InputStream, byte[], int, int) - Static method in class org.apache.hadoop.fs.FSInputChecker
A utility function that tries to read up to len bytes from stm
readFully(long, byte[], int, int) - Method in class org.apache.hadoop.fs.FSInputStream
 
readFully(long, byte[]) - Method in class org.apache.hadoop.fs.FSInputStream
 
readFully(long, byte[], int, int) - Method in interface org.apache.hadoop.fs.PositionedReadable
Read the specified number of bytes, from a given position within a file.
readFully(long, byte[]) - Method in interface org.apache.hadoop.fs.PositionedReadable
Read a number of bytes equal to the length of the buffer, from a given position within a file.
readFully(InputStream, byte[], int, int) - Static method in class org.apache.hadoop.io.IOUtils
Reads len bytes in a loop.
readInt(byte[], int) - Static method in class org.apache.hadoop.io.WritableComparator
Parse an integer from a byte array.
readInt(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
 
readInt(String) - Method in class org.apache.hadoop.record.CsvRecordInput
 
readInt(String) - Method in interface org.apache.hadoop.record.RecordInput
Read an integer from serialized record.
readInt(String) - Method in class org.apache.hadoop.record.XmlRecordInput
 
readLine(LineReader, Text) - Static method in class org.apache.hadoop.streaming.StreamKeyValUtil
Read a UTF-8 encoded line from a data input stream.
readLine(LineReader, Text) - Static method in class org.apache.hadoop.streaming.UTF8ByteArrayUtils
Deprecated. use StreamKeyValUtil.readLine(LineReader, Text)
readLine(Text, int, int) - Method in class org.apache.hadoop.util.LineReader
Read one line from the InputStream into the given Text.
readLine(Text, int) - Method in class org.apache.hadoop.util.LineReader
Read from the InputStream into the given Text.
readLine(Text) - Method in class org.apache.hadoop.util.LineReader
Read from the InputStream into the given Text.
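
LineReader wraps any InputStream and fills a reusable Text; readLine returns the number of bytes consumed, so 0 signals end of stream. A small sketch reading a local file (the use of FileInputStream is illustrative):

    import java.io.FileInputStream;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.util.LineReader;

    public class LineReaderSketch {
      public static void main(String[] args) throws Exception {
        LineReader lines = new LineReader(new FileInputStream(args[0]),
                                          new Configuration());
        Text line = new Text();                  // reused for every line
        while (lines.readLine(line) > 0) {       // bytes consumed; 0 at end of stream
          System.out.println(line);
        }
        lines.close();
      }
    }
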
readLong(byte[], int) - Static method in class org.apache.hadoop.io.WritableComparator
Parse a long from a byte array.
readLong(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
 
readLong(String) - Method in class org.apache.hadoop.record.CsvRecordInput
 
readLong(String) - Method in interface org.apache.hadoop.record.RecordInput
Read a long integer from serialized record.
readLong(String) - Method in class org.apache.hadoop.record.XmlRecordInput
 
readObject(DataInput, Configuration) - Static method in class org.apache.hadoop.io.ObjectWritable
Read a Writable, String, primitive type, or an array of the preceding.
readObject(DataInput, ObjectWritable, Configuration) - Static method in class org.apache.hadoop.io.ObjectWritable
Read a Writable, String, primitive type, or an array of the preceding.
readRAMFiles(DataInput, RAMDirectory) - Static method in class org.apache.hadoop.contrib.index.lucene.RAMDirectoryUtil
Read a number of files from a data input to a ram directory.
readState(String) - Static method in class org.apache.hadoop.contrib.failmon.PersistentState
Read the state of parsing for all open log files from a property file.
readString(DataInput) - Static method in class org.apache.hadoop.io.file.tfile.Utils
Read a String as a VInt n, followed by n Bytes in Text format.
readString(DataInput) - Static method in class org.apache.hadoop.io.Text
Read a UTF8 encoded string from in
readString(DataInput) - Static method in class org.apache.hadoop.io.UTF8
Deprecated. Read a UTF-8 encoded string.
readString(DataInput) - Static method in class org.apache.hadoop.io.WritableUtils
 
readString(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
 
readString(String) - Method in class org.apache.hadoop.record.CsvRecordInput
 
readString(String) - Method in interface org.apache.hadoop.record.RecordInput
Read a UTF-8 encoded string from serialized record.
readString(String) - Method in class org.apache.hadoop.record.XmlRecordInput
 
readStringArray(DataInput) - Static method in class org.apache.hadoop.io.WritableUtils
 
readUnsignedShort(byte[], int) - Static method in class org.apache.hadoop.io.WritableComparator
Parse an unsigned short from a byte array.
readVInt(DataInput) - Static method in class org.apache.hadoop.io.file.tfile.Utils
Decode a variable-length encoded integer.
readVInt(byte[], int) - Static method in class org.apache.hadoop.io.WritableComparator
Reads a zero-compressed encoded integer from a byte array and returns it.
readVInt(DataInput) - Static method in class org.apache.hadoop.io.WritableUtils
Reads a zero-compressed encoded integer from input stream and returns it.
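A small round-trip sketch (the value 12345 is arbitrary) showing how a zero-compressed integer written with WritableUtils.writeVInt is read back with readVInt:

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;
    import org.apache.hadoop.io.WritableUtils;

    public class VIntDemo {
      public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        WritableUtils.writeVInt(out, 12345);   // small values occupy few bytes
        DataInputStream in =
            new DataInputStream(new ByteArrayInputStream(bytes.toByteArray()));
        System.out.println(WritableUtils.readVInt(in));  // prints 12345
      }
    }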
readVInt(byte[], int) - Static method in class org.apache.hadoop.record.Utils
Reads a zero-compressed encoded integer from a byte array and returns it.
readVInt(DataInput) - Static method in class org.apache.hadoop.record.Utils
Reads a zero-compressed encoded integer from a stream and returns it.
readVLong(DataInput) - Static method in class org.apache.hadoop.io.file.tfile.Utils
Decode a variable-length encoded long.
readVLong(byte[], int) - Static method in class org.apache.hadoop.io.WritableComparator
Reads a zero-compressed encoded long from a byte array and returns it.
readVLong(DataInput) - Static method in class org.apache.hadoop.io.WritableUtils
Reads a zero-compressed encoded long from input stream and returns it.
readVLong(byte[], int) - Static method in class org.apache.hadoop.record.Utils
Reads a zero-compressed encoded long from a byte array and returns it.
readVLong(DataInput) - Static method in class org.apache.hadoop.record.Utils
Reads a zero-compressed encoded long from a stream and returns it.
READY - Static variable in class org.apache.hadoop.mapred.jobcontrol.Job
 
Record() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
Record - Class in org.apache.hadoop.record
Abstract class that is extended by generated classes.
Record() - Constructor for class org.apache.hadoop.record.Record
 
RECORD_INPUT - Static variable in class org.apache.hadoop.record.compiler.Consts
 
RECORD_OUTPUT - Static variable in class org.apache.hadoop.record.compiler.Consts
 
RECORD_SEPARATOR - Static variable in class org.apache.hadoop.contrib.failmon.LocalStore
 
RECORD_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
RecordComparator - Class in org.apache.hadoop.record
A raw record comparator base class
RecordComparator(Class<? extends WritableComparable>) - Constructor for class org.apache.hadoop.record.RecordComparator
Construct a raw Record comparison implementation.
RecordInput - Interface in org.apache.hadoop.record
Interface that all the Deserializers have to implement.
RecordList() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
RecordOutput - Interface in org.apache.hadoop.record
Interface that all the serializers have to implement.
RecordReader<K,V> - Interface in org.apache.hadoop.mapred
RecordReader reads <key, value> pairs from an InputSplit.
RecordReader<KEYIN,VALUEIN> - Class in org.apache.hadoop.mapreduce
The record reader breaks the data into key/value pairs for input to the Mapper.
RecordReader() - Constructor for class org.apache.hadoop.mapreduce.RecordReader
 
RecordTypeInfo - Class in org.apache.hadoop.record.meta
A record's Type Information object which can read/write itself.
RecordTypeInfo() - Constructor for class org.apache.hadoop.record.meta.RecordTypeInfo
Create an empty RecordTypeInfo object.
RecordTypeInfo(String) - Constructor for class org.apache.hadoop.record.meta.RecordTypeInfo
Create a RecordTypeInfo object representing a record with the given name
RecordWriter<K,V> - Interface in org.apache.hadoop.mapred
RecordWriter writes the output <key, value> pairs to an output file.
RecordWriter<K,V> - Class in org.apache.hadoop.mapreduce
RecordWriter writes the output <key, value> pairs to an output file.
RecordWriter() - Constructor for class org.apache.hadoop.mapreduce.RecordWriter
 
recoverJobHistoryFile(JobConf, Path) - Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
Selects one of the two files generated as a part of recovery.
redCmd_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
reduce(Shard, Iterator<IntermediateForm>, OutputCollector<Shard, IntermediateForm>, Reporter) - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateCombiner
 
reduce(Shard, Iterator<IntermediateForm>, OutputCollector<Shard, Text>, Reporter) - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateReducer
 
reduce(Object, Iterator, OutputCollector, Reporter) - Method in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
 
reduce(Object, Iterator, OutputCollector, Reporter) - Method in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
 
reduce(BooleanWritable, Iterator<LongWritable>, OutputCollector<WritableComparable<?>, Writable>, Reporter) - Method in class org.apache.hadoop.examples.PiEstimator.PiReducer
Accumulate number of points inside/outside results from the mappers.
reduce(SecondarySort.IntPair, Iterable<IntWritable>, Reducer<SecondarySort.IntPair, IntWritable, Text, IntWritable>.Context) - Method in class org.apache.hadoop.examples.SecondarySort.Reduce
 
reduce(IntWritable, Iterator<NullWritable>, OutputCollector<NullWritable, NullWritable>, Reporter) - Method in class org.apache.hadoop.examples.SleepJob
 
reduce(Text, Iterable<IntWritable>, Reducer<Text, IntWritable, Text, IntWritable>.Context) - Method in class org.apache.hadoop.examples.WordCount.IntSumReducer
 
reduce(Text, Iterator<Text>, OutputCollector<Text, Text>, Reporter) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorCombiner
Combines values for a given key.
reduce(Text, Iterator<Text>, OutputCollector<Text, Text>, Reporter) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorMapper
Do nothing.
reduce(Text, Iterator<Text>, OutputCollector<Text, Text>, Reporter) - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorReducer
 
reduce(Object, Iterator, OutputCollector, Reporter) - Method in class org.apache.hadoop.mapred.lib.ChainReducer
Chains the reduce(...) method of the Reducer with the map(...) methods of the Mappers in the chain.
reduce(Text, Iterator<Text>, OutputCollector<Text, Text>, Reporter) - Method in class org.apache.hadoop.mapred.lib.FieldSelectionMapReduce
 
reduce(K, Iterator<V>, OutputCollector<K, V>, Reporter) - Method in class org.apache.hadoop.mapred.lib.IdentityReducer
Deprecated. Writes all keys and values directly to output.
reduce(K, Iterator<LongWritable>, OutputCollector<K, LongWritable>, Reporter) - Method in class org.apache.hadoop.mapred.lib.LongSumReducer
Deprecated.  
reduce(K2, Iterator<V2>, OutputCollector<K3, V3>, Reporter) - Method in interface org.apache.hadoop.mapred.Reducer
Deprecated. Reduces values for a given key.
reduce(Key, Iterable<IntWritable>, Reducer<Key, IntWritable, Key, IntWritable>.Context) - Method in class org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer
 
reduce(KEY, Iterable<LongWritable>, Reducer<KEY, LongWritable, KEY, LongWritable>.Context) - Method in class org.apache.hadoop.mapreduce.lib.reduce.LongSumReducer
 
reduce(KEYIN, Iterable<VALUEIN>, Reducer<KEYIN, VALUEIN, KEYOUT, VALUEOUT>.Context) - Method in class org.apache.hadoop.mapreduce.Reducer
This method is called once for each key.
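For example, a minimal Reducer subclass (SumReducer is a hypothetical name; the Text/LongWritable types are only an illustration) that overrides this per-key method to sum its values:

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;

    // Hypothetical example: emits the sum of the LongWritable values seen
    // for each Text key.
    public class SumReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
      @Override
      protected void reduce(Text key, Iterable<LongWritable> values, Context context)
          throws IOException, InterruptedException {
        long sum = 0;
        for (LongWritable value : values) {
          sum += value.get();
        }
        context.write(key, new LongWritable(sum));
      }
    }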
reduce(Object, Iterator, OutputCollector, Reporter) - Method in class org.apache.hadoop.streaming.PipeReducer
 
REDUCE_CLASS_ATTR - Static variable in class org.apache.hadoop.mapreduce.JobContext
 
ReduceContext<KEYIN,VALUEIN,KEYOUT,VALUEOUT> - Class in org.apache.hadoop.mapreduce
The context passed to the Reducer.
ReduceContext(Configuration, TaskAttemptID, RawKeyValueIterator, Counter, RecordWriter<KEYOUT, VALUEOUT>, OutputCommitter, StatusReporter, RawComparator<KEYIN>, Class<KEYIN>, Class<VALUEIN>) - Constructor for class org.apache.hadoop.mapreduce.ReduceContext
 
ReduceContext.ValueIterable - Class in org.apache.hadoop.mapreduce
 
ReduceContext.ValueIterable() - Constructor for class org.apache.hadoop.mapreduce.ReduceContext.ValueIterable
 
ReduceContext.ValueIterator - Class in org.apache.hadoop.mapreduce
 
ReduceContext.ValueIterator() - Constructor for class org.apache.hadoop.mapreduce.ReduceContext.ValueIterator
 
reduceDebugSpec_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
reduceProgress() - Method in class org.apache.hadoop.mapred.JobStatus
 
reduceProgress() - Method in interface org.apache.hadoop.mapred.RunningJob
Get the progress of the job's reduce-tasks, as a float between 0.0 and 1.0.
reduceProgress() - Method in class org.apache.hadoop.mapreduce.Job
Get the progress of the job's reduce-tasks, as a float between 0.0 and 1.0.
Reducer<K2,V2,K3,V3> - Interface in org.apache.hadoop.mapred
Deprecated. Use Reducer instead.
Reducer<KEYIN,VALUEIN,KEYOUT,VALUEOUT> - Class in org.apache.hadoop.mapreduce
Reduces a set of intermediate values which share a key to a smaller set of values.
Reducer() - Constructor for class org.apache.hadoop.mapreduce.Reducer
 
Reducer.Context - Class in org.apache.hadoop.mapreduce
 
Reducer.Context(Configuration, TaskAttemptID, RawKeyValueIterator, Counter, RecordWriter<KEYOUT, VALUEOUT>, OutputCommitter, StatusReporter, RawComparator<KEYIN>, Class<KEYIN>, Class<VALUEIN>) - Constructor for class org.apache.hadoop.mapreduce.Reducer.Context
 
ReflectionUtils - Class in org.apache.hadoop.util
General reflection utils
ReflectionUtils() - Constructor for class org.apache.hadoop.util.ReflectionUtils
 
refresh() - Method in class org.apache.hadoop.security.authorize.ConfiguredPolicy
 
refresh() - Method in class org.apache.hadoop.util.HostsFileReader
 
RefreshAuthorizationPolicyProtocol - Interface in org.apache.hadoop.security.authorize
Protocol which is used to refresh the authorization policy in use currently.
refreshServiceAcl() - Method in class org.apache.hadoop.mapred.JobTracker
 
refreshServiceAcl() - Method in interface org.apache.hadoop.security.authorize.RefreshAuthorizationPolicyProtocol
Refresh the service-level authorization policy in-effect.
RegexMapper<K> - Class in org.apache.hadoop.mapred.lib
A Mapper that extracts text matching a regular expression.
RegexMapper() - Constructor for class org.apache.hadoop.mapred.lib.RegexMapper
 
regexpEscape(String) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
registerMBean(String, String, Object) - Static method in class org.apache.hadoop.metrics.util.MBeanUtil
Register the MBean using our standard MBeanName format "hadoop:service=<serviceName>,name=<nameName>", where <serviceName> and <nameName> are the supplied parameters.
registerNotification(JobConf, JobStatus) - Static method in class org.apache.hadoop.mapred.JobEndNotifier
 
registerUpdater(Updater) - Method in interface org.apache.hadoop.metrics.MetricsContext
Registers a callback to be called at regular time intervals, as determined by the implementation-class specific configuration.
registerUpdater(Updater) - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Registers a callback to be called at time intervals determined by the configuration.
registerUpdater(Updater) - Method in class org.apache.hadoop.metrics.spi.CompositeContext
 
registry - Variable in class org.apache.hadoop.ipc.metrics.RpcMetrics
 
ReInit(InputStream) - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
ReInit(InputStream, String) - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
ReInit(Reader) - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
ReInit(RccTokenManager) - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
ReInit(SimpleCharStream) - Method in class org.apache.hadoop.record.compiler.generated.RccTokenManager
 
ReInit(SimpleCharStream, int) - Method in class org.apache.hadoop.record.compiler.generated.RccTokenManager
 
ReInit(Reader, int, int, int) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
ReInit(Reader, int, int) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
ReInit(Reader) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
ReInit(InputStream, String, int, int, int) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
ReInit(InputStream, int, int, int) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
ReInit(InputStream, String) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
ReInit(InputStream) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
ReInit(InputStream, String, int, int) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
ReInit(InputStream, int, int) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
release(Path) - Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
Deprecated. 
releaseCache(URI, Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
This is the opposite of getLocalCache.
reloadConfiguration() - Method in class org.apache.hadoop.conf.Configuration
Reload configuration from previously added resources.
RemoteException - Exception in org.apache.hadoop.ipc
 
RemoteException(String, String) - Constructor for exception org.apache.hadoop.ipc.RemoteException
 
remove() - Method in class org.apache.hadoop.contrib.utils.join.ArrayListBackedIterator
 
remove(Object) - Method in class org.apache.hadoop.io.MapWritable
remove(Object) - Method in class org.apache.hadoop.io.SortedMapWritable
remove() - Method in class org.apache.hadoop.mapreduce.ReduceContext.ValueIterator
 
remove() - Method in interface org.apache.hadoop.metrics.MetricsRecord
Removes, from the buffered data table, all rows having tags that equal the tags that have been set on this record.
remove(MetricsRecordImpl) - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Called by MetricsRecordImpl.remove().
remove() - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Removes the row, if it exists, in the buffered data table having tags that equal the tags that have been set on this record.
remove(MetricsRecordImpl) - Method in class org.apache.hadoop.metrics.spi.NullContext
Do-nothing version of remove
remove(MetricsRecordImpl) - Method in class org.apache.hadoop.metrics.spi.NullContextWithUpdateThread
Do-nothing version of remove
remove(Node) - Method in class org.apache.hadoop.net.NetworkTopology
Remove a node; update the node counter and rack counter if necessary.
removeAttribute(String) - Method in class org.apache.hadoop.metrics.ContextFactory
Removes the named attribute if it exists.
removeJobInProgressListener(JobInProgressListener) - Method in class org.apache.hadoop.mapred.JobTracker
 
RemoveScheme - Interface in org.apache.hadoop.util.bloom
Defines the different remove scheme for retouched Bloom filters.
removeSuffix(String, String) - Static method in class org.apache.hadoop.io.compress.CompressionCodecFactory
Removes a suffix from a filename, if it has it.
removeTag(String) - Method in interface org.apache.hadoop.metrics.MetricsRecord
Removes any tag of the specified name.
removeTag(String) - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Removes any tag of the specified name.
rename(Path, Path) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
Rename files/dirs
rename(Path, Path) - Method in class org.apache.hadoop.fs.FileSystem
Renames Path src to Path dst.
rename(Path, Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
Renames Path src to Path dst.
rename(Path, Path) - Method in class org.apache.hadoop.fs.ftp.FTPFileSystem
 
rename(Path, Path) - Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
 
rename(Path, Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
rename(Path, Path) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
rename(Path, Path) - Method in class org.apache.hadoop.fs.s3native.NativeS3FileSystem
 
rename(FileSystem, String, String) - Static method in class org.apache.hadoop.io.MapFile
Renames an existing map directory.
renameFile(String, String) - Method in class org.apache.hadoop.contrib.index.lucene.FileSystemDirectory
 
replaceFile(File, File) - Static method in class org.apache.hadoop.fs.FileUtil
Move the src file to the name specified by target.
replay(X) - Method in class org.apache.hadoop.mapred.join.ArrayListBackedIterator
 
replay(TupleWritable) - Method in class org.apache.hadoop.mapred.join.JoinRecordReader.JoinDelegationIterator
 
replay(V) - Method in class org.apache.hadoop.mapred.join.MultiFilterRecordReader.MultiFilterDelegationIterator
 
replay(U) - Method in class org.apache.hadoop.mapred.join.ResetableIterator.EMPTY
 
replay(T) - Method in interface org.apache.hadoop.mapred.join.ResetableIterator
Assign last value returned to actual.
replay(X) - Method in class org.apache.hadoop.mapred.join.StreamBackedIterator
 
report() - Method in class org.apache.hadoop.contrib.utils.join.JobBase
log the counters
reportChecksumFailure(Path, FSDataInputStream, long, FSDataInputStream, long) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
Report a checksum error to the file system.
reportChecksumFailure(Path, FSDataInputStream, long, FSDataInputStream, long) - Method in class org.apache.hadoop.fs.LocalFileSystem
Moves files to a bad file directory on the same device, so that their storage will not be reused.
reportDiagnosticInfo(TaskAttemptID, String) - Method in class org.apache.hadoop.mapred.TaskTracker
Called when the task dies before completion, and we want to report back diagnostic info
reporter - Variable in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
 
reporter - Variable in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
 
reporter - Variable in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
 
Reporter - Interface in org.apache.hadoop.mapred
A facility for Map-Reduce applications to report progress and update counters, status information etc.
reporter - Variable in class org.apache.hadoop.mapreduce.ReduceContext
 
reportNextRecordRange(TaskAttemptID, SortedRanges.Range) - Method in class org.apache.hadoop.mapred.TaskTracker
 
reportTaskTrackerError(String, String, String) - Method in class org.apache.hadoop.mapred.JobTracker
 
requiresLayout() - Method in class org.apache.hadoop.metrics.jvm.EventCounter
 
reserveSpaceWithCheckSum(Path, long) - Method in class org.apache.hadoop.fs.InMemoryFileSystem
Deprecated. Register a file with its size.
reset() - Method in class org.apache.hadoop.contrib.failmon.MonitorJob
 
reset() - Method in class org.apache.hadoop.contrib.utils.join.ArrayListBackedIterator
 
reset() - Method in interface org.apache.hadoop.contrib.utils.join.ResetableIterator
 
reset() - Method in class org.apache.hadoop.fs.FileSystem.Statistics
Reset the counts of bytes to 0.
reset() - Method in class org.apache.hadoop.fs.FSInputChecker
 
reset() - Method in class org.apache.hadoop.fs.ftp.FTPInputStream
 
reset() - Method in class org.apache.hadoop.io.compress.bzip2.BZip2DummyCompressor
 
reset() - Method in class org.apache.hadoop.io.compress.bzip2.BZip2DummyDecompressor
 
reset() - Method in interface org.apache.hadoop.io.compress.Compressor
Resets compressor so that a new set of input data can be processed.
reset() - Method in interface org.apache.hadoop.io.compress.Decompressor
Resets decompressor so that a new set of input data can be processed.
reset() - Method in class org.apache.hadoop.io.compress.DecompressorStream
 
reset() - Method in class org.apache.hadoop.io.compress.zlib.ZlibCompressor
 
reset() - Method in class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
 
reset(byte[], int) - Method in class org.apache.hadoop.io.DataInputBuffer
Resets the data that the buffer reads.
reset(byte[], int, int) - Method in class org.apache.hadoop.io.DataInputBuffer
Resets the data that the buffer reads.
reset() - Method in class org.apache.hadoop.io.DataOutputBuffer
Resets the buffer to empty.
reset(byte[], int) - Method in class org.apache.hadoop.io.InputBuffer
Resets the data that the buffer reads.
reset(byte[], int, int) - Method in class org.apache.hadoop.io.InputBuffer
Resets the data that the buffer reads.
reset() - Method in class org.apache.hadoop.io.MapFile.Reader
Re-positions the reader before its first key.
reset() - Method in class org.apache.hadoop.io.OutputBuffer
Resets the buffer to empty.
reset() - Method in class org.apache.hadoop.mapred.join.ArrayListBackedIterator
 
reset() - Method in class org.apache.hadoop.mapred.join.JoinRecordReader.JoinDelegationIterator
 
reset() - Method in class org.apache.hadoop.mapred.join.MultiFilterRecordReader.MultiFilterDelegationIterator
 
reset() - Method in class org.apache.hadoop.mapred.join.ResetableIterator.EMPTY
 
reset() - Method in interface org.apache.hadoop.mapred.join.ResetableIterator
Set iterator to return to the start of its range.
reset() - Method in class org.apache.hadoop.mapred.join.StreamBackedIterator
 
reset() - Method in class org.apache.hadoop.mapred.lib.aggregate.DoubleValueSum
reset the aggregator
reset() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMax
reset the aggregator
reset() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMin
reset the aggregator
reset() - Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueSum
reset the aggregator
reset() - Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMax
reset the aggregator
reset() - Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMin
reset the aggregator
reset() - Method in class org.apache.hadoop.mapred.lib.aggregate.UniqValueCount
reset the aggregator
reset() - Method in interface org.apache.hadoop.mapred.lib.aggregate.ValueAggregator
reset the aggregator
reset() - Method in class org.apache.hadoop.mapred.lib.aggregate.ValueHistogram
reset the aggregator
reset(BytesWritable) - Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat.WritableValueBytes
 
reset() - Method in class org.apache.hadoop.record.Buffer
Reset the buffer to 0 size
reset() - Method in class org.apache.hadoop.util.DataChecksum
 
ResetableIterator - Interface in org.apache.hadoop.contrib.utils.join
This defines an iterator interface that will help the reducer class re-group its input by source tags.
ResetableIterator<T extends Writable> - Interface in org.apache.hadoop.mapred.join
This defines an interface to a stateful Iterator that can replay elements added to it directly.
ResetableIterator.EMPTY<U extends Writable> - Class in org.apache.hadoop.mapred.join
 
ResetableIterator.EMPTY() - Constructor for class org.apache.hadoop.mapred.join.ResetableIterator.EMPTY
 
resetAllMinMax() - Method in interface org.apache.hadoop.ipc.metrics.RpcMgtMBean
Reset all min max times
resetChecksumChunk(int) - Method in class org.apache.hadoop.fs.FSOutputSummer
Resets existing buffer with a new one of the specified size.
resetMinMax() - Method in class org.apache.hadoop.metrics.util.MetricsTimeVaryingRate
Reset the min max values
resetState() - Method in class org.apache.hadoop.io.compress.BlockDecompressorStream
 
resetState() - Method in class org.apache.hadoop.io.compress.CompressionInputStream
Reset the decompressor to its initial state and discard any buffered data, as the underlying stream may have been repositioned.
resetState() - Method in class org.apache.hadoop.io.compress.CompressionOutputStream
Reset the compression to the initial state.
resetState() - Method in class org.apache.hadoop.io.compress.CompressorStream
 
resetState() - Method in class org.apache.hadoop.io.compress.DecompressorStream
 
resetState() - Method in class org.apache.hadoop.io.compress.GzipCodec.GzipInputStream
 
resetState() - Method in class org.apache.hadoop.io.compress.GzipCodec.GzipOutputStream
 
resolve(List<String>) - Method in class org.apache.hadoop.net.CachedDNSToSwitchMapping
 
resolve(List<String>) - Method in interface org.apache.hadoop.net.DNSToSwitchMapping
Resolves a list of DNS-names/IP-addresses and returns back a list of switch information (network paths).
resolveAndAddToTopology(String) - Method in class org.apache.hadoop.mapred.JobTracker
 
resume() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
resume the suspended thread
RetouchedBloomFilter - Class in org.apache.hadoop.util.bloom
Implements a retouched Bloom filter, as defined in the CoNEXT 2006 paper.
RetouchedBloomFilter() - Constructor for class org.apache.hadoop.util.bloom.RetouchedBloomFilter
Default constructor - use with readFields
RetouchedBloomFilter(int, int, int) - Constructor for class org.apache.hadoop.util.bloom.RetouchedBloomFilter
Constructor
retrieveBlock(Block, long) - Method in interface org.apache.hadoop.fs.s3.FileSystemStore
 
retrieveINode(Path) - Method in interface org.apache.hadoop.fs.s3.FileSystemStore
 
RETRY_FOREVER - Static variable in class org.apache.hadoop.io.retry.RetryPolicies
Keep trying forever.
retryByException(RetryPolicy, Map<Class<? extends Exception>, RetryPolicy>) - Static method in class org.apache.hadoop.io.retry.RetryPolicies
Set a default policy with some explicit handlers for specific exceptions.
retryByRemoteException(RetryPolicy, Map<Class<? extends Exception>, RetryPolicy>) - Static method in class org.apache.hadoop.io.retry.RetryPolicies
A retry policy for RemoteException. Set a default policy with some explicit handlers for specific exceptions.
RetryPolicies - Class in org.apache.hadoop.io.retry
A collection of useful implementations of RetryPolicy.
RetryPolicies() - Constructor for class org.apache.hadoop.io.retry.RetryPolicies
 
RetryPolicy - Interface in org.apache.hadoop.io.retry
Specifies a policy for retrying method failures.
RetryProxy - Class in org.apache.hadoop.io.retry
A factory for creating retry proxies.
RetryProxy() - Constructor for class org.apache.hadoop.io.retry.RetryProxy
 
retryUpToMaximumCountWithFixedSleep(int, long, TimeUnit) - Static method in class org.apache.hadoop.io.retry.RetryPolicies
Keep trying a limited number of times, waiting a fixed time between attempts, and then fail by re-throwing the exception.
retryUpToMaximumCountWithProportionalSleep(int, long, TimeUnit) - Static method in class org.apache.hadoop.io.retry.RetryPolicies
Keep trying a limited number of times, waiting a growing amount of time between attempts, and then fail by re-throwing the exception.
retryUpToMaximumTimeWithFixedSleep(long, long, TimeUnit) - Static method in class org.apache.hadoop.io.retry.RetryPolicies
Keep trying for a maximum time, waiting a fixed time between attempts, and then fail by re-throwing the exception.
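Putting the RetryPolicies and RetryProxy entries together, a hedged sketch (the Flaky interface and its failing implementation are hypothetical) of wrapping an object so that failed calls are retried:

    import java.util.concurrent.TimeUnit;
    import org.apache.hadoop.io.retry.RetryPolicies;
    import org.apache.hadoop.io.retry.RetryPolicy;
    import org.apache.hadoop.io.retry.RetryProxy;

    public class RetryDemo {
      /** Hypothetical interface whose method may fail transiently. */
      public interface Flaky {
        String fetch() throws Exception;
      }

      public static void main(String[] args) throws Exception {
        Flaky raw = new Flaky() {
          private int calls = 0;
          public String fetch() throws Exception {
            if (++calls < 3) {
              throw new Exception("transient failure " + calls);
            }
            return "ok after " + calls + " calls";
          }
        };
        // Retry up to 5 times, sleeping 2 seconds between attempts.
        RetryPolicy policy =
            RetryPolicies.retryUpToMaximumCountWithFixedSleep(5, 2, TimeUnit.SECONDS);
        Flaky reliable = (Flaky) RetryProxy.create(Flaky.class, raw, policy);
        System.out.println(reliable.fetch());
      }
    }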
returnCompressor(Compressor) - Static method in class org.apache.hadoop.io.compress.CodecPool
Return the Compressor to the pool.
returnDecompressor(Decompressor) - Static method in class org.apache.hadoop.io.compress.CodecPool
Return the Decompressor to the pool.
reverseDns(InetAddress, String) - Static method in class org.apache.hadoop.net.DNS
Returns the hostname associated with the specified IP address by the provided nameserver.
rewind() - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner
Rewind to the first entry in the scanner.
RIO_PREFIX - Static variable in class org.apache.hadoop.record.compiler.Consts
 
rjustify(String, int) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
rNums - Static variable in interface org.apache.hadoop.io.compress.bzip2.BZip2Constants
This array really shouldn't be here.
ROOT - Static variable in class org.apache.hadoop.net.NodeBase
 
RoundRobinDistributionPolicy - Class in org.apache.hadoop.contrib.index.example
Choose a shard for each insert in a round-robin fashion.
RoundRobinDistributionPolicy() - Constructor for class org.apache.hadoop.contrib.index.example.RoundRobinDistributionPolicy
 
RPC - Class in org.apache.hadoop.ipc
A simple RPC mechanism.
RPC.Server - Class in org.apache.hadoop.ipc
An RPC Server.
RPC.Server(Object, Configuration, String, int) - Constructor for class org.apache.hadoop.ipc.RPC.Server
Construct an RPC server.
RPC.Server(Object, Configuration, String, int, int, boolean) - Constructor for class org.apache.hadoop.ipc.RPC.Server
Construct an RPC server.
RPC.VersionMismatch - Exception in org.apache.hadoop.ipc
A version mismatch for the RPC protocol.
RPC.VersionMismatch(String, long, long) - Constructor for exception org.apache.hadoop.ipc.RPC.VersionMismatch
Create a version mismatch exception
RpcActivityMBean - Class in org.apache.hadoop.ipc.metrics
This is the JMX MBean for reporting the RPC layer Activity.
RpcActivityMBean(MetricsRegistry, String, String) - Constructor for class org.apache.hadoop.ipc.metrics.RpcActivityMBean
 
RpcMetrics - Class in org.apache.hadoop.ipc.metrics
This class is for maintaining the various RPC statistics and publishing them through the metrics interfaces.
RpcMetrics(String, String, Server) - Constructor for class org.apache.hadoop.ipc.metrics.RpcMetrics
 
rpcMetrics - Variable in class org.apache.hadoop.ipc.Server
 
RpcMgtMBean - Interface in org.apache.hadoop.ipc.metrics
This is the JMX management interface for the RPC layer.
rpcProcessingTime - Variable in class org.apache.hadoop.ipc.metrics.RpcMetrics
 
rpcQueueTime - Variable in class org.apache.hadoop.ipc.metrics.RpcMetrics
The metrics variables are public: they can be set directly by calling their set/inc methods, and they can also be read directly.
rrClass - Variable in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
 
rrConstructor - Variable in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
 
rrCstrMap - Static variable in class org.apache.hadoop.mapred.join.Parser.Node
 
RTI_FILTER - Static variable in class org.apache.hadoop.record.compiler.Consts
 
RTI_FILTER_FIELDS - Static variable in class org.apache.hadoop.record.compiler.Consts
 
RTI_VAR - Static variable in class org.apache.hadoop.record.compiler.Consts
 
run() - Method in class org.apache.hadoop.contrib.failmon.Executor
 
run(Configuration, Path[], Path, int, Shard[]) - Method in interface org.apache.hadoop.contrib.index.mapred.IIndexUpdater
Create a Map/Reduce job configuration and run the Map/Reduce job to analyze documents and update Lucene instances in parallel.
run(Configuration, Path[], Path, int, Shard[]) - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdater
 
run(String[]) - Method in class org.apache.hadoop.examples.dancing.DistributedPentomino
 
run(String[]) - Method in class org.apache.hadoop.examples.DBCountPageView
 
run(String[]) - Method in class org.apache.hadoop.examples.Grep
 
run(String[]) - Method in class org.apache.hadoop.examples.Join
The main driver for the join program.
run(String[]) - Method in class org.apache.hadoop.examples.MultiFileWordCount
 
run(String[]) - Method in class org.apache.hadoop.examples.PiEstimator
Parse arguments and then runs a map/reduce job.
run(String[]) - Method in class org.apache.hadoop.examples.RandomTextWriter
This is the main routine for launching a distributed random write job.
run(String[]) - Method in class org.apache.hadoop.examples.RandomWriter
This is the main routine for launching a distributed random write job.
run(int, int, long, int, long, int) - Method in class org.apache.hadoop.examples.SleepJob
 
run(String[]) - Method in class org.apache.hadoop.examples.SleepJob
 
run(String[]) - Method in class org.apache.hadoop.examples.Sort
The main driver for the sort program.
run(String[]) - Method in class org.apache.hadoop.examples.terasort.TeraGen
 
run(String[]) - Method in class org.apache.hadoop.examples.terasort.TeraSort
 
run(String[]) - Method in class org.apache.hadoop.examples.terasort.TeraValidate
 
run(String[]) - Method in class org.apache.hadoop.fs.FsShell
run
run(String[]) - Method in class org.apache.hadoop.fs.s3.MigrationTool
 
run(Path) - Method in class org.apache.hadoop.fs.shell.Command
Execute the command on the input path
run(Path) - Method in class org.apache.hadoop.fs.shell.Count
 
run(String[]) - Method in class org.apache.hadoop.mapred.JobClient
 
run() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
The main loop for the thread.
run() - Method in class org.apache.hadoop.mapred.JobHistory.HistoryCleaner
Cleans up history data.
run(String[]) - Method in class org.apache.hadoop.mapred.lib.InputSampler
Driver for InputSampler from the command line.
run(RecordReader<K1, V1>, OutputCollector<K2, V2>, Reporter) - Method in class org.apache.hadoop.mapred.lib.MultithreadedMapRunner
 
run(RecordReader<K1, V1>, OutputCollector<K2, V2>, Reporter) - Method in interface org.apache.hadoop.mapred.MapRunnable
Deprecated. Start mapping input <key, value> pairs.
run(RecordReader<K1, V1>, OutputCollector<K2, V2>, Reporter) - Method in class org.apache.hadoop.mapred.MapRunner
 
run(String[]) - Method in class org.apache.hadoop.mapred.pipes.Submitter
 
run() - Method in class org.apache.hadoop.mapred.TaskTracker
The server retry loop.
run(String[]) - Method in class org.apache.hadoop.mapred.tools.MRAdmin
 
run(Mapper<K1, V1, K2, V2>.Context) - Method in class org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper
Run the application's maps using a thread pool.
run(Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT>.Context) - Method in class org.apache.hadoop.mapreduce.Mapper
Expert users can override this method for more complete control over the execution of the Mapper.
run(Reducer<KEYIN, VALUEIN, KEYOUT, VALUEOUT>.Context) - Method in class org.apache.hadoop.mapreduce.Reducer
Advanced application writers can use the Reducer.run(org.apache.hadoop.mapreduce.Reducer.Context) method to control how the reduce task works.
run(RecordReader<K1, V1>, OutputCollector<K2, V2>, Reporter) - Method in class org.apache.hadoop.streaming.PipeMapRunner
 
run(String[]) - Method in class org.apache.hadoop.streaming.StreamJob
 
run() - Method in class org.apache.hadoop.util.Shell
check to see if a command needs to be executed and execute if needed
run(String[]) - Method in interface org.apache.hadoop.util.Tool
Execute the command with the given arguments.
run(Configuration, Tool, String[]) - Static method in class org.apache.hadoop.util.ToolRunner
Runs the given Tool by Tool.run(String[]), after parsing with the given generic arguments.
run(Tool, String[]) - Static method in class org.apache.hadoop.util.ToolRunner
Runs the Tool with its Configuration.
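A minimal sketch of the Tool/ToolRunner pattern (EchoTool is a hypothetical name): ToolRunner parses the generic Hadoop options (-D, -fs, -jt, ...) and passes the remaining arguments to run:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.conf.Configured;
    import org.apache.hadoop.util.Tool;
    import org.apache.hadoop.util.ToolRunner;

    // Hypothetical Tool that just prints the arguments left over after the
    // generic options have been consumed.
    public class EchoTool extends Configured implements Tool {
      public int run(String[] args) throws Exception {
        for (String arg : args) {
          System.out.println(arg);
        }
        return 0;
      }

      public static void main(String[] args) throws Exception {
        int exitCode = ToolRunner.run(new Configuration(), new EchoTool(), args);
        System.exit(exitCode);
      }
    }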
RUNA - Static variable in interface org.apache.hadoop.io.compress.bzip2.BZip2Constants
 
runAll() - Method in class org.apache.hadoop.fs.shell.Command
For each source path, execute the command
RUNB - Static variable in interface org.apache.hadoop.io.compress.bzip2.BZip2Constants
 
runCommand(String[]) - Static method in class org.apache.hadoop.contrib.failmon.Environment
Runs a shell command in the system and provides a StringBuffer with the output of the command.
runCommand(String) - Static method in class org.apache.hadoop.contrib.failmon.Environment
Runs a shell command in the system and provides a StringBuffer with the output of the command.
RunJar - Class in org.apache.hadoop.util
Run a Hadoop job jar.
RunJar() - Constructor for class org.apache.hadoop.util.RunJar
 
runJob(JobConf) - Static method in class org.apache.hadoop.contrib.utils.join.DataJoinJob
Submit/run a map/reduce job.
runJob(JobConf) - Static method in class org.apache.hadoop.mapred.JobClient
Utility that submits a job, then polls for progress until the job is complete.
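As an illustration of this old-API submission path, a hedged sketch that relies on the identity map/reduce defaults (the input and output paths are placeholders taken from args):

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class SubmitDemo {
      public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(SubmitDemo.class);
        conf.setJobName("identity-pass-through");
        conf.setOutputKeyClass(LongWritable.class);
        conf.setOutputValueClass(Text.class);
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        JobClient.runJob(conf);   // blocks, polling progress until completion
      }
    }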
runJob(JobConf) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
Submit a job to the map/reduce cluster.
RUNNING - Static variable in class org.apache.hadoop.mapred.jobcontrol.Job
 
RUNNING - Static variable in class org.apache.hadoop.mapred.JobStatus
 
running_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
RunningJob - Interface in org.apache.hadoop.mapred
RunningJob is the user-interface to query for details on a running Map-Reduce job.
runningJobs() - Method in class org.apache.hadoop.mapred.JobTracker
 
RunOnce - Class in org.apache.hadoop.contrib.failmon
Runs a set of monitoring jobs once for the local node.
RunOnce(String) - Constructor for class org.apache.hadoop.contrib.failmon.RunOnce
 

S

S3Credentials - Class in org.apache.hadoop.fs.s3
Extracts AWS credentials from the filesystem URI or configuration.
S3Credentials() - Constructor for class org.apache.hadoop.fs.s3.S3Credentials
 
S3Exception - Exception in org.apache.hadoop.fs.s3
Thrown if there is a problem communicating with Amazon S3.
S3Exception(Throwable) - Constructor for exception org.apache.hadoop.fs.s3.S3Exception
 
S3FileSystem - Class in org.apache.hadoop.fs.s3
A block-based FileSystem backed by Amazon S3.
S3FileSystem() - Constructor for class org.apache.hadoop.fs.s3.S3FileSystem
 
S3FileSystem(FileSystemStore) - Constructor for class org.apache.hadoop.fs.s3.S3FileSystem
 
S3FileSystemException - Exception in org.apache.hadoop.fs.s3
Thrown when there is a fatal exception while using S3FileSystem.
S3FileSystemException(String) - Constructor for exception org.apache.hadoop.fs.s3.S3FileSystemException
 
safeGetCanonicalPath(File) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
saveToConf(Configuration, String, UnixUserGroupInformation) - Static method in class org.apache.hadoop.security.UnixUserGroupInformation
Store the given ugi as a comma-separated string in conf as the property attr. The string starts with the user name, followed by the default group names and other group names.
ScriptBasedMapping - Class in org.apache.hadoop.net
This class implements the DNSToSwitchMapping interface using a script configured via topology.script.file.name.
ScriptBasedMapping() - Constructor for class org.apache.hadoop.net.ScriptBasedMapping
 
ScriptBasedMapping(Configuration) - Constructor for class org.apache.hadoop.net.ScriptBasedMapping
 
SecondarySort - Class in org.apache.hadoop.examples
This is an example Hadoop Map/Reduce application.
SecondarySort() - Constructor for class org.apache.hadoop.examples.SecondarySort
 
SecondarySort.FirstGroupingComparator - Class in org.apache.hadoop.examples
Compare only the first part of the pair, so that reduce is called once for each value of the first part.
SecondarySort.FirstGroupingComparator() - Constructor for class org.apache.hadoop.examples.SecondarySort.FirstGroupingComparator
 
SecondarySort.FirstPartitioner - Class in org.apache.hadoop.examples
Partition based on the first part of the pair.
SecondarySort.FirstPartitioner() - Constructor for class org.apache.hadoop.examples.SecondarySort.FirstPartitioner
 
SecondarySort.IntPair - Class in org.apache.hadoop.examples
Define a pair of integers that are writable.
SecondarySort.IntPair() - Constructor for class org.apache.hadoop.examples.SecondarySort.IntPair
 
SecondarySort.IntPair.Comparator - Class in org.apache.hadoop.examples
A Comparator that compares serialized IntPair.
SecondarySort.IntPair.Comparator() - Constructor for class org.apache.hadoop.examples.SecondarySort.IntPair.Comparator
 
SecondarySort.MapClass - Class in org.apache.hadoop.examples
Read two integers from each line and generate a key, value pair as ((left, right), right).
SecondarySort.MapClass() - Constructor for class org.apache.hadoop.examples.SecondarySort.MapClass
 
SecondarySort.Reduce - Class in org.apache.hadoop.examples
A reducer class that just emits the sum of the input values.
SecondarySort.Reduce() - Constructor for class org.apache.hadoop.examples.SecondarySort.Reduce
 
SecurityUtil - Class in org.apache.hadoop.security
 
SecurityUtil() - Constructor for class org.apache.hadoop.security.SecurityUtil
 
SecurityUtil.AccessControlList - Class in org.apache.hadoop.security
Class representing a configured access control list.
SecurityUtil.AccessControlList(String) - Constructor for class org.apache.hadoop.security.SecurityUtil.AccessControlList
Construct a new ACL from a String representation of the same.
seek(long) - Method in class org.apache.hadoop.fs.BufferedFSInputStream
 
seek(long) - Method in class org.apache.hadoop.fs.FSDataInputStream
 
seek(long) - Method in class org.apache.hadoop.fs.FSInputChecker
Seek to the given position in the stream.
seek(long) - Method in class org.apache.hadoop.fs.FSInputStream
Seek to the given offset from the start of the file.
seek(long) - Method in class org.apache.hadoop.fs.ftp.FTPInputStream
 
seek(long) - Method in interface org.apache.hadoop.fs.Seekable
Seek to the given offset from the start of the file.
seek(long) - Method in class org.apache.hadoop.io.ArrayFile.Reader
Positions the reader before its nth value.
seek(WritableComparable) - Method in class org.apache.hadoop.io.MapFile.Reader
Positions the reader at the named key, or if none such exists, at the first entry after the named key.
seek(long) - Method in class org.apache.hadoop.io.SequenceFile.Reader
Set the current byte position in the input file.
seek(WritableComparable) - Method in class org.apache.hadoop.io.SetFile.Reader
 
seek(long) - Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
 
Seekable - Interface in org.apache.hadoop.fs
Stream that permits seeking.
seekNextRecordBoundary() - Method in class org.apache.hadoop.streaming.StreamBaseRecordReader
Implementation should seek forward in_ to the first byte of the next record.
seekNextRecordBoundary() - Method in class org.apache.hadoop.streaming.StreamXmlRecordReader
 
seekTo(byte[]) - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner
Move the cursor to the first entry whose key is greater than or equal to the input key.
seekTo(byte[], int, int) - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner
Move the cursor to the first entry whose key is greater than or equal to the input key.
seekToEnd() - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner
Seek to the end of the scanner.
seekToNewSource(long) - Method in class org.apache.hadoop.fs.BufferedFSInputStream
 
seekToNewSource(long) - Method in class org.apache.hadoop.fs.FSDataInputStream
 
seekToNewSource(long) - Method in class org.apache.hadoop.fs.FSInputStream
Seeks a different copy of the data.
seekToNewSource(long) - Method in class org.apache.hadoop.fs.ftp.FTPInputStream
 
seekToNewSource(long) - Method in interface org.apache.hadoop.fs.Seekable
Seeks a different copy of the data.
seenPrimary_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
selectiveClearing(Key, short) - Method in class org.apache.hadoop.util.bloom.RetouchedBloomFilter
Performs the selective clearing for a given key.
SEMICOLON_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
SensorsParser - Class in org.apache.hadoop.contrib.failmon
Objects of this class parse the output of the lm-sensors utility to gather information about fan speed, temperatures for cpus and motherboard etc.
SensorsParser() - Constructor for class org.apache.hadoop.contrib.failmon.SensorsParser
 
SEPARATOR - Static variable in class org.apache.hadoop.fs.Path
The directory separator, a slash.
SEPARATOR - Static variable in class org.apache.hadoop.mapreduce.ID
 
SEPARATOR_CHAR - Static variable in class org.apache.hadoop.fs.Path
 
SequenceFile - Class in org.apache.hadoop.io
SequenceFiles are flat files consisting of binary key/value pairs.
SequenceFile.CompressionType - Enum in org.apache.hadoop.io
The compression type used to compress key/value pairs in the SequenceFile.
SequenceFile.Metadata - Class in org.apache.hadoop.io
The class encapsulating the metadata of a file.
SequenceFile.Metadata() - Constructor for class org.apache.hadoop.io.SequenceFile.Metadata
 
SequenceFile.Metadata(TreeMap<Text, Text>) - Constructor for class org.apache.hadoop.io.SequenceFile.Metadata
 
SequenceFile.Reader - Class in org.apache.hadoop.io
Reads key/value pairs from a sequence-format file.
SequenceFile.Reader(FileSystem, Path, Configuration) - Constructor for class org.apache.hadoop.io.SequenceFile.Reader
Open the named file.
SequenceFile.Sorter - Class in org.apache.hadoop.io
Sorts key/value pairs in a sequence-format file.
SequenceFile.Sorter(FileSystem, Class<? extends WritableComparable>, Class, Configuration) - Constructor for class org.apache.hadoop.io.SequenceFile.Sorter
Sort and merge files containing the named classes.
SequenceFile.Sorter(FileSystem, RawComparator, Class, Class, Configuration) - Constructor for class org.apache.hadoop.io.SequenceFile.Sorter
Sort and merge using an arbitrary RawComparator.
SequenceFile.Sorter.RawKeyValueIterator - Interface in org.apache.hadoop.io
The interface to iterate over raw keys/values of SequenceFiles.
SequenceFile.Sorter.SegmentDescriptor - Class in org.apache.hadoop.io
This class defines a merge segment.
SequenceFile.Sorter.SegmentDescriptor(long, long, Path) - Constructor for class org.apache.hadoop.io.SequenceFile.Sorter.SegmentDescriptor
Constructs a segment
SequenceFile.ValueBytes - Interface in org.apache.hadoop.io
The interface to 'raw' values of SequenceFiles.
SequenceFile.Writer - Class in org.apache.hadoop.io
Write key/value pairs to a sequence-format file.
SequenceFile.Writer(FileSystem, Configuration, Path, Class, Class) - Constructor for class org.apache.hadoop.io.SequenceFile.Writer
Create the named file.
SequenceFile.Writer(FileSystem, Configuration, Path, Class, Class, Progressable, SequenceFile.Metadata) - Constructor for class org.apache.hadoop.io.SequenceFile.Writer
Create the named file with write-progress reporter.
SequenceFile.Writer(FileSystem, Configuration, Path, Class, Class, int, short, long, Progressable, SequenceFile.Metadata) - Constructor for class org.apache.hadoop.io.SequenceFile.Writer
Create the named file with write-progress reporter.
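Tying the SequenceFile.Writer and SequenceFile.Reader entries above together, a hedged sketch (the /tmp/demo.seq path and the IntWritable/Text types are arbitrary) of writing a few key/value pairs and reading them back:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    public class SeqFileDemo {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path file = new Path("/tmp/demo.seq");

        // Write two records to the sequence file.
        SequenceFile.Writer writer =
            new SequenceFile.Writer(fs, conf, file, IntWritable.class, Text.class);
        try {
          writer.append(new IntWritable(1), new Text("one"));
          writer.append(new IntWritable(2), new Text("two"));
        } finally {
          writer.close();
        }

        // Read them back in order.
        SequenceFile.Reader reader = new SequenceFile.Reader(fs, file, conf);
        try {
          IntWritable key = new IntWritable();
          Text value = new Text();
          while (reader.next(key, value)) {
            System.out.println(key + "\t" + value);
          }
        } finally {
          reader.close();
        }
      }
    }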
SequenceFileAsBinaryInputFormat - Class in org.apache.hadoop.mapred
InputFormat reading keys, values from SequenceFiles in binary (raw) format.
SequenceFileAsBinaryInputFormat() - Constructor for class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat
 
SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader - Class in org.apache.hadoop.mapred
Read records from a SequenceFile as binary (raw) bytes.
SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader(Configuration, FileSplit) - Constructor for class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
 
SequenceFileAsBinaryOutputFormat - Class in org.apache.hadoop.mapred
An OutputFormat that writes keys, values to SequenceFiles in binary (raw) format.
SequenceFileAsBinaryOutputFormat() - Constructor for class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat
 
SequenceFileAsBinaryOutputFormat.WritableValueBytes - Class in org.apache.hadoop.mapred
Inner class used for appendRaw
SequenceFileAsBinaryOutputFormat.WritableValueBytes() - Constructor for class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat.WritableValueBytes
 
SequenceFileAsBinaryOutputFormat.WritableValueBytes(BytesWritable) - Constructor for class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat.WritableValueBytes
 
SequenceFileAsTextInputFormat - Class in org.apache.hadoop.mapred
This class is similar to SequenceFileInputFormat, except it generates SequenceFileAsTextRecordReader which converts the input keys and values to their String forms by calling toString() method.
SequenceFileAsTextInputFormat() - Constructor for class org.apache.hadoop.mapred.SequenceFileAsTextInputFormat
 
SequenceFileAsTextRecordReader - Class in org.apache.hadoop.mapred
This class converts the input keys and values to their String forms by calling toString() method.
SequenceFileAsTextRecordReader(Configuration, FileSplit) - Constructor for class org.apache.hadoop.mapred.SequenceFileAsTextRecordReader
 
SequenceFileInputFilter<K,V> - Class in org.apache.hadoop.mapred
A class that allows a map/red job to work on a sample of sequence files.
SequenceFileInputFilter() - Constructor for class org.apache.hadoop.mapred.SequenceFileInputFilter
 
SequenceFileInputFilter.Filter - Interface in org.apache.hadoop.mapred
filter interface
SequenceFileInputFilter.FilterBase - Class in org.apache.hadoop.mapred
base class for Filters
SequenceFileInputFilter.FilterBase() - Constructor for class org.apache.hadoop.mapred.SequenceFileInputFilter.FilterBase
 
SequenceFileInputFilter.MD5Filter - Class in org.apache.hadoop.mapred
This class returns a set of records by examining the MD5 digest of its key against a filtering frequency f.
SequenceFileInputFilter.MD5Filter() - Constructor for class org.apache.hadoop.mapred.SequenceFileInputFilter.MD5Filter
 
SequenceFileInputFilter.PercentFilter - Class in org.apache.hadoop.mapred
This class returns a percentage of records. The percentage is determined by a filtering frequency f, using the criterion record# % f == 0.
SequenceFileInputFilter.PercentFilter() - Constructor for class org.apache.hadoop.mapred.SequenceFileInputFilter.PercentFilter
 
SequenceFileInputFilter.RegexFilter - Class in org.apache.hadoop.mapred
Filters records by matching the key against a regex.
SequenceFileInputFilter.RegexFilter() - Constructor for class org.apache.hadoop.mapred.SequenceFileInputFilter.RegexFilter
 
SequenceFileInputFormat<K,V> - Class in org.apache.hadoop.mapred
Deprecated. Use SequenceFileInputFormat instead.
SequenceFileInputFormat() - Constructor for class org.apache.hadoop.mapred.SequenceFileInputFormat
Deprecated.  
SequenceFileInputFormat<K,V> - Class in org.apache.hadoop.mapreduce.lib.input
An InputFormat for SequenceFiles.
SequenceFileInputFormat() - Constructor for class org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat
 
SequenceFileOutputFormat<K,V> - Class in org.apache.hadoop.mapred
Deprecated. Use SequenceFileOutputFormat instead.
SequenceFileOutputFormat() - Constructor for class org.apache.hadoop.mapred.SequenceFileOutputFormat
Deprecated.  
SequenceFileOutputFormat<K,V> - Class in org.apache.hadoop.mapreduce.lib.output
An OutputFormat that writes SequenceFiles.
SequenceFileOutputFormat() - Constructor for class org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat
 
SequenceFileRecordReader<K,V> - Class in org.apache.hadoop.mapred
A RecordReader for SequenceFiles.
SequenceFileRecordReader(Configuration, FileSplit) - Constructor for class org.apache.hadoop.mapred.SequenceFileRecordReader
 
SequenceFileRecordReader<K,V> - Class in org.apache.hadoop.mapreduce.lib.input
A RecordReader for SequenceFiles.
SequenceFileRecordReader() - Constructor for class org.apache.hadoop.mapreduce.lib.input.SequenceFileRecordReader
 
Serialization<T> - Interface in org.apache.hadoop.io.serializer
Encapsulates a Serializer/Deserializer pair.
SerializationFactory - Class in org.apache.hadoop.io.serializer
A factory for Serializations.
SerializationFactory(Configuration) - Constructor for class org.apache.hadoop.io.serializer.SerializationFactory
Serializations are found by reading the io.serializations property from conf, which is a comma-delimited list of classnames.
serialize() - Method in class org.apache.hadoop.fs.s3.INode
 
serialize(T) - Method in interface org.apache.hadoop.io.serializer.Serializer
Serialize t to the underlying output stream.
serialize(RecordOutput, String) - Method in class org.apache.hadoop.record.meta.RecordTypeInfo
Serialize the type information for a record
serialize(RecordOutput, String) - Method in class org.apache.hadoop.record.Record
Serialize a record with a tag (usually the field name)
serialize(RecordOutput) - Method in class org.apache.hadoop.record.Record
Serialize a record without a tag
SerializedRecord - Class in org.apache.hadoop.contrib.failmon
Objects of this class hold the serialized representations of EventRecords.
SerializedRecord(EventRecord) - Constructor for class org.apache.hadoop.contrib.failmon.SerializedRecord
Create the SerializedRecord given an EventRecord.
Serializer<T> - Interface in org.apache.hadoop.io.serializer
Provides a facility for serializing objects of type T to an OutputStream.
Server - Class in org.apache.hadoop.ipc
An abstract IPC service.
Server(String, int, Class<? extends Writable>, int, Configuration) - Constructor for class org.apache.hadoop.ipc.Server
 
Server(String, int, Class<? extends Writable>, int, Configuration, String) - Constructor for class org.apache.hadoop.ipc.Server
Constructs a server listening on the named port and address.
Service - Class in org.apache.hadoop.security.authorize
An abstract definition of service as related to Service Level Authorization for Hadoop.
Service(String, Class<?>) - Constructor for class org.apache.hadoop.security.authorize.Service
 
SERVICE_AUTHORIZATION_CONFIG - Static variable in class org.apache.hadoop.security.authorize.ServiceAuthorizationManager
Configuration key for controlling service-level authorization for Hadoop.
ServiceAuthorizationManager - Class in org.apache.hadoop.security.authorize
An authorization manager which handles service-level authorization for incoming service requests.
ServiceAuthorizationManager() - Constructor for class org.apache.hadoop.security.authorize.ServiceAuthorizationManager
 
ServletUtil - Class in org.apache.hadoop.util
 
ServletUtil() - Constructor for class org.apache.hadoop.util.ServletUtil
 
set(String, String) - Method in class org.apache.hadoop.conf.Configuration
Set the value of the name property.
set(String, Object) - Method in class org.apache.hadoop.contrib.failmon.EventRecord
Set the value of a property of the EventRecord.
set(String, String) - Method in class org.apache.hadoop.contrib.failmon.SerializedRecord
Set the value of a property of the EventRecord.
set(int, int) - Method in class org.apache.hadoop.examples.SecondarySort.IntPair
Set the left and right values.
set(boolean, Checksum, int, int) - Method in class org.apache.hadoop.fs.FSInputChecker
Set the checksum related parameters
set(Writable[]) - Method in class org.apache.hadoop.io.ArrayWritable
 
set(boolean) - Method in class org.apache.hadoop.io.BooleanWritable
Set the value of the BooleanWritable
set(BytesWritable) - Method in class org.apache.hadoop.io.BytesWritable
Set the BytesWritable to the contents of the given newData.
set(byte[], int, int) - Method in class org.apache.hadoop.io.BytesWritable
Set the value to a copy of the given byte range
set(byte) - Method in class org.apache.hadoop.io.ByteWritable
Set the value of this ByteWritable.
set(double) - Method in class org.apache.hadoop.io.DoubleWritable
 
set(float) - Method in class org.apache.hadoop.io.FloatWritable
Set the value of this FloatWritable.
set(Writable) - Method in class org.apache.hadoop.io.GenericWritable
Set the instance that is wrapped.
set(int) - Method in class org.apache.hadoop.io.IntWritable
Set the value of this IntWritable.
set(long) - Method in class org.apache.hadoop.io.LongWritable
Set the value of this LongWritable.
set(MD5Hash) - Method in class org.apache.hadoop.io.MD5Hash
Copy the contents of another instance into this instance.
set(Object) - Method in class org.apache.hadoop.io.ObjectWritable
Reset the instance.
set(Text, Text) - Method in class org.apache.hadoop.io.SequenceFile.Metadata
 
set(String) - Method in class org.apache.hadoop.io.Text
Set to contain the contents of a string.
set(byte[]) - Method in class org.apache.hadoop.io.Text
Set to a UTF-8 byte array
set(Text) - Method in class org.apache.hadoop.io.Text
copy a text.
set(byte[], int, int) - Method in class org.apache.hadoop.io.Text
Set the Text to a range of bytes
set(Writable[][]) - Method in class org.apache.hadoop.io.TwoDArrayWritable
 
set(String) - Method in class org.apache.hadoop.io.UTF8
Deprecated. Set to contain the contents of a string.
set(UTF8) - Method in class org.apache.hadoop.io.UTF8
Deprecated. Set to contain the contents of a string.
set(int) - Method in class org.apache.hadoop.io.VIntWritable
Set the value of this VIntWritable.
set(long) - Method in class org.apache.hadoop.io.VLongWritable
Set the value of this VLongWritable.
set(int) - Method in class org.apache.hadoop.metrics.util.MetricsIntValue
Set the value
set(long) - Method in class org.apache.hadoop.metrics.util.MetricsLongValue
Set the value
set(byte[]) - Method in class org.apache.hadoop.record.Buffer
Use the specified bytes array as underlying sequence.
set(byte[], double) - Method in class org.apache.hadoop.util.bloom.Key
 
set(float) - Method in class org.apache.hadoop.util.Progress
Called during execution on a leaf node to set its progress.
SET_GROUP_COMMAND - Static variable in class org.apache.hadoop.util.Shell
 
SET_OWNER_COMMAND - Static variable in class org.apache.hadoop.util.Shell
a Unix command to set owner
SET_PERMISSION_COMMAND - Static variable in class org.apache.hadoop.util.Shell
a Unix command to set permission
setAggregatorDescriptors(JobConf, Class<? extends ValueAggregatorDescriptor>[]) - Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
 
setArchiveTimestamps(Configuration, String) - Static method in class org.apache.hadoop.filecache.DistributedCache
Set the timestamps of the archives to be localized, used to check that the localized copies are up to date.
setAssignedJobID(JobID) - Method in class org.apache.hadoop.mapred.jobcontrol.Job
Set the mapred ID for this job as assigned by the mapred framework.
setAttemptsToStartSkipping(Configuration, int) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
Set the number of Task attempts AFTER which skip mode will be kicked off.
setAttribute(String, Object) - Method in class org.apache.hadoop.http.HttpServer
Set a value in the webapp context.
setAttribute(String, Object) - Method in class org.apache.hadoop.metrics.ContextFactory
Sets the named factory attribute to the specified value, creating it if it did not already exist.
setAttribute(Attribute) - Method in class org.apache.hadoop.metrics.util.MetricsDynamicMBeanBase
 
setAttributes(AttributeList) - Method in class org.apache.hadoop.metrics.util.MetricsDynamicMBeanBase
 
setAutoIncrMapperProcCount(Configuration, boolean) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
Set the flag which if set to true, SkipBadRecords.COUNTER_MAP_PROCESSED_RECORDS is incremented by MapRunner after invoking the map function.
setAutoIncrReducerProcCount(Configuration, boolean) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
Set the flag which if set to true, SkipBadRecords.COUNTER_REDUCE_PROCESSED_GROUPS is incremented by framework after invoking the reduce function.
setBoolean(String, boolean) - Method in class org.apache.hadoop.conf.Configuration
Set the value of the name property to a boolean.
setBooleanIfUnset(String, boolean) - Method in class org.apache.hadoop.conf.Configuration
Set the given property, if it is currently unset.
setCacheArchives(URI[], Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Set the configuration with the given set of archives
setCacheFiles(URI[], Configuration) - Static method in class org.apache.hadoop.filecache.DistributedCache
Set the configuration with the given set of files
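A hedged sketch of registering cache entries with the two setters above; the URIs are placeholders and must already exist in HDFS:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.filecache.DistributedCache;

    public class CacheSetupExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Replace the whole list of files/archives to be localized on each task node.
        DistributedCache.setCacheFiles(new URI[] { new URI("/user/example/lookup.txt") }, conf);
        DistributedCache.setCacheArchives(new URI[] { new URI("/user/example/dict.zip") }, conf);
      }
    }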
setCapacity(int) - Method in class org.apache.hadoop.io.BytesWritable
Change the capacity of the backing storage.
setCapacity(int) - Method in class org.apache.hadoop.record.Buffer
Change the capacity of the backing storage.
setClass(String, Class<?>, Class<?>) - Method in class org.apache.hadoop.conf.Configuration
Set the value of the name property to the name of a theClass implementing the given interface xface.
setClassLoader(ClassLoader) - Method in class org.apache.hadoop.conf.Configuration
Set the class loader that will be used to load the various objects.
setCodecClasses(Configuration, List<Class>) - Static method in class org.apache.hadoop.io.compress.CompressionCodecFactory
Sets a list of codec classes in the configuration.
setCombinerClass(Class<? extends Reducer>) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Set the user-defined combiner class used to combine map-outputs before being sent to the reducers.
setCombinerClass(Class<? extends Reducer>) - Method in class org.apache.hadoop.mapreduce.Job
Set the combiner class for the job.
setCompressionType(Configuration, SequenceFile.CompressionType) - Static method in class org.apache.hadoop.io.SequenceFile
Deprecated. Use one of the many SequenceFile.createWriter methods to specify the SequenceFile.CompressionType while creating the SequenceFile, or SequenceFileOutputFormat.setOutputCompressionType(org.apache.hadoop.mapred.JobConf, org.apache.hadoop.io.SequenceFile.CompressionType) to specify the SequenceFile.CompressionType for job-outputs.
setCompressMapOutput(boolean) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Should the map outputs be compressed before transfer? Uses the SequenceFile compression.
setCompressOutput(JobConf, boolean) - Static method in class org.apache.hadoop.mapred.FileOutputFormat
Set whether the output of the job is compressed.
setCompressOutput(Job, boolean) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
Set whether the output of the job is compressed.
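For example, pairing setCompressOutput with setOutputCompressorClass (indexed further down) turns on gzip-compressed job output in the new API. A sketch, assuming a Job instance is already being configured:

    import org.apache.hadoop.io.compress.GzipCodec;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class OutputCompression {
      static void enableGzipOutput(Job job) {
        FileOutputFormat.setCompressOutput(job, true);                   // compress job output
        FileOutputFormat.setOutputCompressorClass(job, GzipCodec.class); // choose the codec
      }
    }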
setConf(Configuration) - Method in interface org.apache.hadoop.conf.Configurable
Set the configuration to be used by this object.
setConf(Configuration) - Method in class org.apache.hadoop.conf.Configured
 
setConf(Configuration) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
 
setConf(Configuration) - Method in class org.apache.hadoop.io.AbstractMapWritable
 
setConf(Configuration) - Method in class org.apache.hadoop.io.compress.DefaultCodec
 
setConf(Configuration) - Method in class org.apache.hadoop.io.GenericWritable
 
setConf(Configuration) - Method in class org.apache.hadoop.io.ObjectWritable
 
setConf(Configuration) - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
Set the configuration to be used by this object.
setConf(Configuration) - Method in class org.apache.hadoop.mapred.lib.InputSampler
 
setConf(Configuration) - Method in class org.apache.hadoop.mapred.SequenceFileInputFilter.MD5Filter
Configure the filter according to the configuration.
setConf(Configuration) - Method in class org.apache.hadoop.mapred.SequenceFileInputFilter.PercentFilter
Configure the filter by checking the configuration.
setConf(Configuration) - Method in class org.apache.hadoop.mapred.SequenceFileInputFilter.RegexFilter
Configure the filter by checking the configuration.
setConf(Configuration) - Method in class org.apache.hadoop.net.ScriptBasedMapping
 
setConf(Configuration) - Method in class org.apache.hadoop.net.SocksSocketFactory
 
setConf(Configuration) - Method in class org.apache.hadoop.security.authorize.ConfiguredPolicy
 
setConf(Configuration) - Method in class org.apache.hadoop.streaming.StreamJob
 
setConf(Object, Configuration) - Static method in class org.apache.hadoop.util.ReflectionUtils
Check and set 'configuration' if necessary.
setContentionTracing(boolean) - Static method in class org.apache.hadoop.util.ReflectionUtils
 
setCountersEnabled(JobConf, boolean) - Static method in class org.apache.hadoop.mapred.lib.MultipleOutputs
Enables or disables counters for the named outputs.
setCurrentUGI(UserGroupInformation) - Static method in class org.apache.hadoop.security.UserGroupInformation
Deprecated. Use UserGroupInformation.setCurrentUser(UserGroupInformation)
setCurrentUser(UserGroupInformation) - Static method in class org.apache.hadoop.security.UserGroupInformation
Set the UserGroupInformation for the current thread. WARNING: this method should be used only in test cases and other exceptional cases!
setDebugStream(PrintStream) - Method in class org.apache.hadoop.record.compiler.generated.RccTokenManager
 
setDefaultUri(Configuration, URI) - Static method in class org.apache.hadoop.fs.FileSystem
Set the default filesystem URI in a configuration.
setDefaultUri(Configuration, String) - Static method in class org.apache.hadoop.fs.FileSystem
Set the default filesystem URI in a configuration.
setDelete(Term) - Method in class org.apache.hadoop.contrib.index.mapred.DocumentAndOp
Set the instance to be a delete operation.
setDestdir(File) - Method in class org.apache.hadoop.record.compiler.ant.RccTask
Sets the directory where output files will be generated.
setDictionary(byte[], int, int) - Method in class org.apache.hadoop.io.compress.bzip2.BZip2DummyCompressor
 
setDictionary(byte[], int, int) - Method in class org.apache.hadoop.io.compress.bzip2.BZip2DummyDecompressor
 
setDictionary(byte[], int, int) - Method in interface org.apache.hadoop.io.compress.Compressor
Sets preset dictionary for compression.
setDictionary(byte[], int, int) - Method in interface org.apache.hadoop.io.compress.Decompressor
Sets preset dictionary for decompression.
setDictionary(byte[], int, int) - Method in class org.apache.hadoop.io.compress.zlib.ZlibCompressor
 
setDictionary(byte[], int, int) - Method in class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
 
setDigest(String) - Method in class org.apache.hadoop.io.MD5Hash
Sets the digest value from a hex string.
setDisableHistory(boolean) - Static method in class org.apache.hadoop.mapred.JobHistory
Enable/disable history logging.
setDisplayName(String) - Method in class org.apache.hadoop.mapred.Counters.Counter
Deprecated.  
setDisplayName(String) - Method in class org.apache.hadoop.mapred.Counters.Group
Deprecated. Set the display name
setDisplayName(String) - Method in class org.apache.hadoop.mapreduce.Counter
Deprecated. 
setDistributionPolicyClass(Class<? extends IDistributionPolicy>) - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
Set the distribution policy class.
setDocumentAnalyzerClass(Class<? extends Analyzer>) - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
Set the analyzer class.
setDoubleValue(Object, double) - Method in class org.apache.hadoop.contrib.utils.join.JobBase
Set the given counter to the given value
setEnvironment(Map<String, String>) - Method in class org.apache.hadoop.util.Shell
set the environment for the command
setEventId(int) - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
Set the event ID.
setExcludesFile(String) - Method in class org.apache.hadoop.util.HostsFileReader
 
setExecutable(JobConf, String) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
Set the URI for the application's executable.
setFactor(int) - Method in class org.apache.hadoop.io.SequenceFile.Sorter
Set the number of streams to merge at once.
setFactory(Class, WritableFactory) - Static method in class org.apache.hadoop.io.WritableFactories
Define a factory for a class.
setFailonerror(boolean) - Method in class org.apache.hadoop.record.compiler.ant.RccTask
Given multiple files (via fileset), set the error handling behavior
SetFile - Class in org.apache.hadoop.io
A file-based set of keys.
SetFile() - Constructor for class org.apache.hadoop.io.SetFile
 
setFile(File) - Method in class org.apache.hadoop.record.compiler.ant.RccTask
Sets the record definition file attribute
SetFile.Reader - Class in org.apache.hadoop.io
Provide access to an existing set file.
SetFile.Reader(FileSystem, String, Configuration) - Constructor for class org.apache.hadoop.io.SetFile.Reader
Construct a set reader for the named set.
SetFile.Reader(FileSystem, String, WritableComparator, Configuration) - Constructor for class org.apache.hadoop.io.SetFile.Reader
Construct a set reader for the named set using the named comparator.
SetFile.Writer - Class in org.apache.hadoop.io
Write a new set file.
SetFile.Writer(FileSystem, String, Class<? extends WritableComparable>) - Constructor for class org.apache.hadoop.io.SetFile.Writer
Deprecated. pass a Configuration too
SetFile.Writer(Configuration, FileSystem, String, Class<? extends WritableComparable>, SequenceFile.CompressionType) - Constructor for class org.apache.hadoop.io.SetFile.Writer
Create a set naming the element class and compression type.
SetFile.Writer(Configuration, FileSystem, String, WritableComparator, SequenceFile.CompressionType) - Constructor for class org.apache.hadoop.io.SetFile.Writer
Create a set naming the element comparator and compression type.
setFileTimestamps(Configuration, String) - Static method in class org.apache.hadoop.filecache.DistributedCache
Set the timestamps of the files to be localized, used to check that the localized copies are up to date.
setFilterClass(Configuration, Class) - Static method in class org.apache.hadoop.mapred.SequenceFileInputFilter
set the filter class
setFinalSync(JobConf, boolean) - Static method in class org.apache.hadoop.examples.terasort.TeraOutputFormat
Set the requirement for a final sync before the stream is closed.
setFloat(String, float) - Method in class org.apache.hadoop.conf.Configuration
Set the value of the name property to a float.
setFormat(JobConf) - Method in class org.apache.hadoop.mapred.join.CompositeInputFormat
Interpret a given string as a composite expression.
setFrequency(Configuration, int) - Static method in class org.apache.hadoop.mapred.SequenceFileInputFilter.MD5Filter
Set the filtering frequency in the configuration.
setFrequency(Configuration, int) - Static method in class org.apache.hadoop.mapred.SequenceFileInputFilter.PercentFilter
Set the frequency and store it in the configuration.
setGroup(String) - Method in class org.apache.hadoop.fs.FileStatus
Sets group.
setGroupingComparatorClass(Class<? extends RawComparator>) - Method in class org.apache.hadoop.mapreduce.Job
Define the comparator that controls which keys are grouped together for a single call to Reducer.reduce(Object, Iterable, org.apache.hadoop.mapreduce.Reducer.Context)
setHosts(String[]) - Method in class org.apache.hadoop.fs.BlockLocation
Set the hosts hosting this block
setID(int) - Method in class org.apache.hadoop.mapred.join.Parser.Node
 
setIfUnset(String, String) - Method in class org.apache.hadoop.conf.Configuration
Sets a property if it is currently unset.
setIncludesFile(String) - Method in class org.apache.hadoop.util.HostsFileReader
 
setIndexInputFormatClass(Class<? extends InputFormat>) - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
Set the index input format class.
setIndexInterval(int) - Method in class org.apache.hadoop.io.MapFile.Writer
Sets the index interval.
setIndexInterval(Configuration, int) - Static method in class org.apache.hadoop.io.MapFile.Writer
Sets the index interval and stores it in conf
setIndexMaxFieldLength(int) - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
Set the max field length for a Lucene instance.
setIndexMaxNumSegments(int) - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
Set the max number of segments for a Lucene instance.
setIndexShards(String) - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
Set the string representation of a number of shards.
setIndexShards(IndexUpdateConfiguration, Shard[]) - Static method in class org.apache.hadoop.contrib.index.mapred.Shard
 
setIndexUpdaterClass(Class<? extends IIndexUpdater>) - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
Set the index updater class.
setIndexUseCompoundFile(boolean) - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
Set whether to use the compound file format for a Lucene instance.
setInput(byte[], int, int) - Method in class org.apache.hadoop.io.compress.bzip2.BZip2DummyCompressor
 
setInput(byte[], int, int) - Method in class org.apache.hadoop.io.compress.bzip2.BZip2DummyDecompressor
 
setInput(byte[], int, int) - Method in interface org.apache.hadoop.io.compress.Compressor
Sets input data for compression.
setInput(byte[], int, int) - Method in interface org.apache.hadoop.io.compress.Decompressor
Sets input data for decompression.
setInput(byte[], int, int) - Method in class org.apache.hadoop.io.compress.zlib.ZlibCompressor
 
setInput(byte[], int, int) - Method in class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
 
setInput(JobConf, Class<? extends DBWritable>, String, String, String, String...) - Static method in class org.apache.hadoop.mapred.lib.db.DBInputFormat
Initializes the map-part of the job with the appropriate input settings.
setInput(JobConf, Class<? extends DBWritable>, String, String) - Static method in class org.apache.hadoop.mapred.lib.db.DBInputFormat
Initializes the map-part of the job with the appropriate input settings.
setInputFormat(Class<? extends InputFormat>) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Set the InputFormat implementation for the map-reduce job.
setInputFormatClass(Class<? extends InputFormat>) - Method in class org.apache.hadoop.mapreduce.Job
Set the InputFormat for the job.
setInputPathFilter(JobConf, Class<? extends PathFilter>) - Static method in class org.apache.hadoop.mapred.FileInputFormat
Deprecated. Set a PathFilter to be applied to the input paths for the map-reduce job.
setInputPathFilter(Job, Class<? extends PathFilter>) - Static method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
Set a PathFilter to be applied to the input paths for the map-reduce job.
setInputPaths(JobConf, String) - Static method in class org.apache.hadoop.mapred.FileInputFormat
Deprecated. Sets the given comma separated paths as the list of inputs for the map-reduce job.
setInputPaths(JobConf, Path...) - Static method in class org.apache.hadoop.mapred.FileInputFormat
Deprecated. Set the array of Paths as the list of inputs for the map-reduce job.
setInputPaths(Job, String) - Static method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
Sets the given comma separated paths as the list of inputs for the map-reduce job.
setInputPaths(Job, Path...) - Static method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
Set the array of Paths as the list of inputs for the map-reduce job.
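The two setInputPaths variants above either parse a comma-separated string or take explicit Path objects; setMinInputSplitSize and setMaxInputSplitSize (indexed below) bound the split sizes derived from those inputs. A sketch with placeholder paths:

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

    public class InputSetup {
      static void configureInputs(Job job) throws java.io.IOException {
        FileInputFormat.setInputPaths(job, new Path("/data/2009/01"), new Path("/data/2009/02"));
        FileInputFormat.setMaxInputSplitSize(job, 128L * 1024 * 1024); // cap each split at 128 MB
        FileInputFormat.setMinInputSplitSize(job, 16L * 1024 * 1024);  // avoid very small splits
      }
    }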
setInsert(Document) - Method in class org.apache.hadoop.contrib.index.mapred.DocumentAndOp
Set the instance to be an insert operation.
setInstrumentationClass(Configuration, Class<? extends JobTrackerInstrumentation>) - Static method in class org.apache.hadoop.mapred.JobTracker
 
setInstrumentationClass(Configuration, Class<? extends TaskTrackerInstrumentation>) - Static method in class org.apache.hadoop.mapred.TaskTracker
 
setInt(String, int) - Method in class org.apache.hadoop.conf.Configuration
Set the value of the name property to an int.
setIOSortMB(int) - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
Set the IO sort space in MB.
setIsJavaMapper(JobConf, boolean) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
Set whether the Mapper is written in Java.
setIsJavaRecordReader(JobConf, boolean) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
Set whether the job is using a Java RecordReader.
setIsJavaRecordWriter(JobConf, boolean) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
Set whether the job will use a Java RecordWriter.
setIsJavaReducer(JobConf, boolean) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
Set whether the Reducer is written in Java.
setJar(String) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Set the user jar for the map-reduce job.
setJarByClass(Class) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Set the job's jar file by finding an example class location.
setJarByClass(Class<?>) - Method in class org.apache.hadoop.mapreduce.Job
Set the Jar by finding where a given class came from.
setJobConf(JobConf) - Method in class org.apache.hadoop.mapred.jobcontrol.Job
Set the mapred job conf for this job.
setJobConf() - Method in class org.apache.hadoop.streaming.StreamJob
 
setJobEndNotificationURI(String) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Set the uri to be invoked in-order to send a notification after the job has completed (success/failure).
setJobID(String) - Method in class org.apache.hadoop.mapred.jobcontrol.Job
Set the job ID for this job.
setJobName(String) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Set the user-specified job name.
setJobName(String) - Method in class org.apache.hadoop.mapred.jobcontrol.Job
Set the job name for this job.
setJobName(String) - Method in class org.apache.hadoop.mapreduce.Job
Set the user-specified job name.
setJobPriority(JobPriority) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Set JobPriority for this job.
setJobPriority(JobPriority) - Method in class org.apache.hadoop.mapred.JobStatus
Set the priority of the job, defaulting to NORMAL.
setJobPriority(JobID, String) - Method in class org.apache.hadoop.mapred.JobTracker
Set the priority of a job
setJobPriority(String) - Method in interface org.apache.hadoop.mapred.RunningJob
Set the priority of a running job.
setKeepCommandFile(JobConf, boolean) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
Set whether to keep the command file for debugging
setKeepFailedTaskFiles(boolean) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Set whether the framework should keep the intermediate files for failed tasks.
setKeepTaskFilesPattern(String) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Set a regular expression for task names that should be kept.
setKeyComparator(Class<? extends WritableComparator>) - Method in class org.apache.hadoop.mapred.join.Parser.Node
 
setKeyFieldComparatorOptions(String) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Set the KeyFieldBasedComparator options used to compare keys.
setKeyFieldPartitionerOptions(String) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Set the KeyFieldBasedPartitioner options used for Partitioner
setLanguage(String) - Method in class org.apache.hadoop.record.compiler.ant.RccTask
Sets the output language option
setLength(long) - Method in class org.apache.hadoop.fs.BlockLocation
Set the length of block
setLevel(int) - Method in interface org.apache.hadoop.net.Node
Set this node's level in the tree.
setLevel(int) - Method in class org.apache.hadoop.net.NodeBase
Set this node's level in the tree
setLoadNativeLibraries(Configuration, boolean) - Method in class org.apache.hadoop.util.NativeCodeLoader
Set whether native Hadoop libraries, if present, can be used for this job.
setLocalAnalysisClass(Class<? extends ILocalAnalysis>) - Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
Set the local analysis class.
setLocalArchives(Configuration, String) - Static method in class org.apache.hadoop.filecache.DistributedCache
Set the conf to contain the location for localized archives
setLocalFiles(Configuration, String) - Static method in class org.apache.hadoop.filecache.DistributedCache
Set the conf to contain the location for localized files
setLong(String, long) - Method in class org.apache.hadoop.conf.Configuration
Set the value of the name property to a long.
setLongValue(Object, long) - Method in class org.apache.hadoop.contrib.utils.join.JobBase
Set the given counter to the given value
setMapDebugScript(String) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Set the debug script to run when the map tasks fail.
setMapOutputCompressorClass(Class<? extends CompressionCodec>) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Set the given class as the CompressionCodec for the map outputs.
setMapOutputKeyClass(Class<?>) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Set the key class for the map output data.
setMapOutputKeyClass(Class<?>) - Method in class org.apache.hadoop.mapreduce.Job
Set the key class for the map output data.
setMapOutputValueClass(Class<?>) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Set the value class for the map output data.
setMapOutputValueClass(Class<?>) - Method in class org.apache.hadoop.mapreduce.Job
Set the value class for the map output data.
setMapperClass(Class<? extends Mapper>) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Set the Mapper class for the job.
setMapperClass(Class<? extends Mapper>) - Method in class org.apache.hadoop.mapreduce.Job
Set the Mapper for the job.
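Taken together, the Job setters in this section configure a complete job with the new (org.apache.hadoop.mapreduce) API. A minimal word-count-style sketch; the mapper/reducer bodies and command-line arguments are illustrative, not part of this index:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {
      public static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();
        protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
          for (String tok : value.toString().split("\\s+")) {
            if (tok.length() > 0) { word.set(tok); context.write(word, ONE); }
          }
        }
      }

      public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable v : values) sum += v.get();
          context.write(key, new IntWritable(sum));
        }
      }

      public static void main(String[] args) throws Exception {
        Job job = new Job(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);       // ship the jar containing these classes
        job.setMapperClass(TokenMapper.class);
        job.setCombinerClass(SumReducer.class);   // safe here: summing is associative
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }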
setMapperClass(Job, Class<? extends Mapper<K1, V1, K2, V2>>) - Static method in class org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper
Set the application's mapper class.
setMapperMaxSkipRecords(Configuration, long) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
Set the number of acceptable skip records surrounding the bad record PER bad record in mapper.
setMapredJobID(String) - Method in class org.apache.hadoop.mapred.jobcontrol.Job
Deprecated. use Job.setAssignedJobID(JobID) instead
setMapRunnerClass(Class<? extends MapRunnable>) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Expert: Set the MapRunnable class for the job.
setMapSpeculativeExecution(boolean) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Turn speculative execution on or off for this job for map tasks.
SETMASK - Static variable in class org.apache.hadoop.io.compress.bzip2.CBZip2OutputStream
This constant is accessible by subclasses for historical purposes.
setMaxInputSplitSize(Job, long) - Static method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
Set the maximum split size
setMaxItems(long) - Method in class org.apache.hadoop.mapred.lib.aggregate.UniqValueCount
Set the limit on the number of unique values
setMaxMapAttempts(int) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Expert: Set the number of maximum attempts that will be made to run a map task.
setMaxMapTaskFailuresPercent(int) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Expert: Set the maximum percentage of map tasks that can fail without the job being aborted.
setMaxPhysicalMemoryForTask(long) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. 
setMaxReduceAttempts(int) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Expert: Set the number of maximum attempts that will be made to run a reduce task.
setMaxReduceTaskFailuresPercent(int) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Set the maximum percentage of reduce tasks that can fail without the job being aborted.
setMaxSplitSize(long) - Method in class org.apache.hadoop.mapred.lib.CombineFileInputFormat
Specify the maximum size (in bytes) of each split.
setMaxTaskFailuresPerTracker(int) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Set the maximum number of failures of a given job per tasktracker.
setMaxVirtualMemoryForTask(long) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Use JobConf.setMemoryForMapTask(long mem) and Use JobConf.setMemoryForReduceTask(long mem)
setMemory(int) - Method in class org.apache.hadoop.io.SequenceFile.Sorter
Set the total amount of buffer memory, in bytes.
setMemoryForMapTask(long) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated.  
setMemoryForReduceTask(long) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated.  
setMessage(String) - Method in class org.apache.hadoop.mapred.jobcontrol.Job
Set the message for this job.
setMetric(String, int) - Method in interface org.apache.hadoop.metrics.MetricsRecord
Sets the named metric to the specified value.
setMetric(String, long) - Method in interface org.apache.hadoop.metrics.MetricsRecord
Sets the named metric to the specified value.
setMetric(String, short) - Method in interface org.apache.hadoop.metrics.MetricsRecord
Sets the named metric to the specified value.
setMetric(String, byte) - Method in interface org.apache.hadoop.metrics.MetricsRecord
Sets the named metric to the specified value.
setMetric(String, float) - Method in interface org.apache.hadoop.metrics.MetricsRecord
Sets the named metric to the specified value.
setMetric(String, int) - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Sets the named metric to the specified value.
setMetric(String, long) - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Sets the named metric to the specified value.
setMetric(String, short) - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Sets the named metric to the specified value.
setMetric(String, byte) - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Sets the named metric to the specified value.
setMetric(String, float) - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Sets the named metric to the specified value.
setMinInputSplitSize(Job, long) - Static method in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
Set the minimum input split size
setMinSplitSize(long) - Method in class org.apache.hadoop.mapred.FileInputFormat
Deprecated.  
setMinSplitSizeNode(long) - Method in class org.apache.hadoop.mapred.lib.CombineFileInputFormat
Specify the minimum size (in bytes) of each split per node.
setMinSplitSizeRack(long) - Method in class org.apache.hadoop.mapred.lib.CombineFileInputFormat
Specify the minimum size (in bytes) of each split per rack.
setName(Class, String) - Static method in class org.apache.hadoop.io.WritableName
Set the name that a class should be known as to something other than the class name.
setName(String) - Method in class org.apache.hadoop.record.meta.RecordTypeInfo
set the name of the record
setNames(String[]) - Method in class org.apache.hadoop.fs.BlockLocation
Set the names (host:port) hosting this block
setNetworkLocation(String) - Method in interface org.apache.hadoop.net.Node
Set the node's network location
setNetworkLocation(String) - Method in class org.apache.hadoop.net.NodeBase
Set this node's network location
setNetworkProperties() - Method in class org.apache.hadoop.contrib.failmon.LogParser
 
setNumberOfThreads(Job, int) - Static method in class org.apache.hadoop.mapreduce.lib.map.MultithreadedMapper
Set the number of threads in the pool for running maps.
setNumMapTasks(int) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Set the number of map tasks for this job.
setNumReduceTasks(int) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Set the requisite number of reduce tasks for this job.
setNumReduceTasks(int) - Method in class org.apache.hadoop.mapreduce.Job
Set the number of reduce tasks for the job.
setNumTasksToExecutePerJvm(int) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Sets the number of tasks that a spawned task JVM should run before it exits
setOffset(long) - Method in class org.apache.hadoop.fs.BlockLocation
Set the start offset of file associated with this block
setOp(DocumentAndOp.Op) - Method in class org.apache.hadoop.contrib.index.example.LineDocTextAndOp
Set the type of the operation.
setOutput(JobConf, String, String...) - Static method in class org.apache.hadoop.mapred.lib.db.DBOutputFormat
Initializes the reduce-part of the job with the appropriate output settings
setOutputCommitter(Class<? extends OutputCommitter>) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Set the OutputCommitter implementation for the map-reduce job.
setOutputCompressionType(JobConf, SequenceFile.CompressionType) - Static method in class org.apache.hadoop.mapred.SequenceFileOutputFormat
Deprecated. Set the SequenceFile.CompressionType for the output SequenceFile.
setOutputCompressionType(Job, SequenceFile.CompressionType) - Static method in class org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat
Set the SequenceFile.CompressionType for the output SequenceFile.
setOutputCompressorClass(JobConf, Class<? extends CompressionCodec>) - Static method in class org.apache.hadoop.mapred.FileOutputFormat
Set the CompressionCodec to be used to compress job outputs.
setOutputCompressorClass(Job, Class<? extends CompressionCodec>) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
Set the CompressionCodec to be used to compress job outputs.
setOutputFormat(Class<? extends OutputFormat>) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Set the OutputFormat implementation for the map-reduce job.
setOutputFormatClass(Class<? extends OutputFormat>) - Method in class org.apache.hadoop.mapreduce.Job
Set the OutputFormat for the job.
setOutputKeyClass(Class<?>) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Set the key class for the job output data.
setOutputKeyClass(Class<?>) - Method in class org.apache.hadoop.mapreduce.Job
Set the key class for the job output data.
setOutputKeyComparatorClass(Class<? extends RawComparator>) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Set the RawComparator comparator used to compare keys.
setOutputPath(JobConf, Path) - Static method in class org.apache.hadoop.mapred.FileOutputFormat
Set the Path of the output directory for the map-reduce job.
setOutputPath(Job, Path) - Static method in class org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
Set the Path of the output directory for the map-reduce job.
setOutputValueClass(Class<?>) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Set the value class for job outputs.
setOutputValueClass(Class<?>) - Method in class org.apache.hadoop.mapreduce.Job
Set the value class for job outputs.
setOutputValueGroupingComparator(Class<? extends RawComparator>) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Set the user defined RawComparator comparator for grouping keys in the input to the reduce.
setOwner(String) - Method in class org.apache.hadoop.fs.FileStatus
Sets owner.
setOwner(Path, String, String) - Method in class org.apache.hadoop.fs.FileSystem
Set owner of a path (i.e. a file or a directory).
setOwner(Path, String, String) - Method in class org.apache.hadoop.fs.FilterFileSystem
Set owner of a path (i.e. a file or a directory).
setOwner(Path, String, String) - Method in class org.apache.hadoop.fs.HarFileSystem
not implemented.
setOwner(Path, String, String) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
Use the command chown to set owner.
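The FileSystem metadata setters in this index (setOwner above, setPermission, setReplication, and setTimes below) all modify an existing path. A sketch, assuming the path exists and the caller has the necessary privileges:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.permission.FsPermission;

    public class FsMetadataExample {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path p = new Path("/user/example/data.txt");           // placeholder path
        fs.setOwner(p, "alice", "analysts");                   // setOwner(Path, String, String)
        fs.setPermission(p, new FsPermission((short) 0644));   // setPermission(Path, FsPermission)
        fs.setReplication(p, (short) 2);                       // setReplication(Path, short)
      }
    }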
setParent(Node) - Method in interface org.apache.hadoop.net.Node
Set this node's parent
setParent(Node) - Method in class org.apache.hadoop.net.NodeBase
Set this node's parent
setPartitionerClass(Class<? extends Partitioner>) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Set the Partitioner class used to partition Mapper-outputs to be sent to the Reducers.
setPartitionerClass(Class<? extends Partitioner>) - Method in class org.apache.hadoop.mapreduce.Job
Set the Partitioner for the job.
setPartitionFile(JobConf, Path) - Static method in class org.apache.hadoop.mapred.lib.TotalOrderPartitioner
Set the path to the SequenceFile storing the sorted partition keyset.
setPattern(Configuration, String) - Static method in class org.apache.hadoop.mapred.SequenceFileInputFilter.RegexFilter
Define the filtering regex and stores it in conf
setPeriod(int) - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Sets the timer period
setPermission(FsPermission) - Method in class org.apache.hadoop.fs.FileStatus
Sets permission.
setPermission(Path, FsPermission) - Method in class org.apache.hadoop.fs.FileSystem
Set permission of a path.
setPermission(Path, FsPermission) - Method in class org.apache.hadoop.fs.FilterFileSystem
Set permission of a path.
setPermission(Path, FsPermission) - Method in class org.apache.hadoop.fs.HarFileSystem
Not implemented.
setPermission(Path, FsPermission) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
Use the command chmod to set permission.
setPingInterval(Configuration, int) - Static method in class org.apache.hadoop.ipc.Client
Set the ping interval value in the configuration.
setPolicy(Policy) - Static method in class org.apache.hadoop.security.SecurityUtil
Set the global security policy for Hadoop.
setPrinter(DancingLinks.SolutionAcceptor<Pentomino.ColumnName>) - Method in class org.apache.hadoop.examples.dancing.Pentomino
Set the printer for the puzzle.
setProfileEnabled(boolean) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Set whether the system should collect profiler information for some of the tasks in this job. The information is stored in the user log directory.
setProfileParams(String) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Set the profiler configuration arguments.
setProfileTaskRange(boolean, String) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Set the ranges of maps or reduces to profile.
setProgressable(Progressable) - Method in class org.apache.hadoop.io.SequenceFile.Sorter
Set the progressable object in order to report progress.
setProperty(String, String) - Static method in class org.apache.hadoop.contrib.failmon.Environment
Sets the value of a property in the configuration file.
setQueueName(String) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Set the name of the queue to which this job should be submitted.
setQueueName(String) - Method in class org.apache.hadoop.mapred.JobQueueInfo
Set the queue name of the JobQueueInfo
setQuietMode(boolean) - Method in class org.apache.hadoop.conf.Configuration
Set the quietness-mode.
setReduceDebugScript(String) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Set the debug script to run when the reduce tasks fail.
setReducer(JobConf, Class<? extends Reducer<K1, V1, K2, V2>>, Class<? extends K1>, Class<? extends V1>, Class<? extends K2>, Class<? extends V2>, boolean, JobConf) - Static method in class org.apache.hadoop.mapred.lib.ChainReducer
Sets the Reducer class to the chain job's JobConf.
setReducerClass(Class<? extends Reducer>) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Set the Reducer class for the job.
setReducerClass(Class<? extends Reducer>) - Method in class org.apache.hadoop.mapreduce.Job
Set the Reducer for the job.
setReducerMaxSkipGroups(Configuration, long) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
Set the number of acceptable skip groups surrounding the bad group PER bad group in reducer.
setReduceSpeculativeExecution(boolean) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Turn speculative execution on or off for this job for reduce tasks.
setReplication(Path, short) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
Set replication for an existing file.
setReplication(Path, short) - Method in class org.apache.hadoop.fs.FileSystem
Set replication for an existing file.
setReplication(Path, short) - Method in class org.apache.hadoop.fs.FilterFileSystem
Set replication for an existing file.
setReplication(Path, short) - Method in class org.apache.hadoop.fs.HarFileSystem
Not implemented.
setReplication(Path, short) - Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
 
setRunningTaskAttempts(Collection<TaskAttemptID>) - Method in class org.apache.hadoop.mapred.TaskReport
Set the running attempt(s) of the task.
setRunState(int) - Method in class org.apache.hadoop.mapred.JobStatus
Change the current run state of the job.
setSchedulingInfo(String) - Method in class org.apache.hadoop.mapred.JobQueueInfo
Set the scheduling information associated with a particular job queue.
setSchedulingInfo(String) - Method in class org.apache.hadoop.mapred.JobStatus
Used to set the scheduling information associated with a particular job.
setSequenceFileOutputKeyClass(JobConf, Class<?>) - Static method in class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat
Set the key class for the SequenceFile
setSequenceFileOutputValueClass(JobConf, Class<?>) - Static method in class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat
Set the value class for the SequenceFile
setSessionId(String) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Set the user-specified session identifier.
setSigKillInterval(long) - Method in class org.apache.hadoop.util.ProcfsBasedProcessTree
 
setSize(int) - Method in class org.apache.hadoop.io.BytesWritable
Change the size of the buffer.
setSkipOutputPath(JobConf, Path) - Static method in class org.apache.hadoop.mapred.SkipBadRecords
Set the directory to which skipped records are written.
setSocketSendBufSize(int) - Method in class org.apache.hadoop.ipc.Server
Sets the socket buffer size used for responding to RPCs
setSortComparatorClass(Class<? extends RawComparator>) - Method in class org.apache.hadoop.mapreduce.Job
Define the comparator that controls how the keys are sorted before they are passed to the Reducer.
setSpeculativeExecution(boolean) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Turn speculative execution on or off for this job.
setState(ParseState) - Static method in class org.apache.hadoop.contrib.failmon.PersistentState
Set the state of parsing for a particular log file.
setState(int) - Method in class org.apache.hadoop.mapred.jobcontrol.Job
Set the state for this job.
setStatus(String) - Method in interface org.apache.hadoop.mapred.Reporter
Set the status description for the task.
setStatus(String) - Method in class org.apache.hadoop.mapreduce.StatusReporter
 
setStatus(String) - Method in class org.apache.hadoop.mapreduce.TaskAttemptContext
Set the current status of the task to the given string.
setStatus(String) - Method in class org.apache.hadoop.mapreduce.TaskInputOutputContext
 
setStatus(String) - Method in class org.apache.hadoop.util.Progress
 
setStrings(String, String...) - Method in class org.apache.hadoop.conf.Configuration
Set the array of string values for the name property as comma-delimited values.
setSuccessfulAttempt(TaskAttemptID) - Method in class org.apache.hadoop.mapred.TaskReport
Set the successful attempt ID of the task.
setTabSize(int) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
setTag(Text) - Method in class org.apache.hadoop.contrib.utils.join.TaggedMapOutput
 
setTag(String, String) - Method in interface org.apache.hadoop.metrics.MetricsRecord
Sets the named tag to the specified value.
setTag(String, int) - Method in interface org.apache.hadoop.metrics.MetricsRecord
Sets the named tag to the specified value.
setTag(String, long) - Method in interface org.apache.hadoop.metrics.MetricsRecord
Sets the named tag to the specified value.
setTag(String, short) - Method in interface org.apache.hadoop.metrics.MetricsRecord
Sets the named tag to the specified value.
setTag(String, byte) - Method in interface org.apache.hadoop.metrics.MetricsRecord
Sets the named tag to the specified value.
setTag(String, String) - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Sets the named tag to the specified value.
setTag(String, int) - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Sets the named tag to the specified value.
setTag(String, long) - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Sets the named tag to the specified value.
setTag(String, short) - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Sets the named tag to the specified value.
setTag(String, byte) - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Sets the named tag to the specified value.
setTaskId(String) - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
Deprecated. use TaskCompletionEvent.setTaskID(TaskAttemptID) instead.
setTaskID(TaskAttemptID) - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
Sets task id.
setTaskId(String) - Method in class org.apache.hadoop.mapred.TaskLogAppender
 
setTaskOutputFilter(JobClient.TaskStatusFilter) - Method in class org.apache.hadoop.mapred.JobClient
Deprecated. 
setTaskOutputFilter(JobConf, JobClient.TaskStatusFilter) - Static method in class org.apache.hadoop.mapred.JobClient
Modify the JobConf to set the task output filter.
setTaskRunTime(int) - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
Set the task completion time
setTaskStatus(TaskCompletionEvent.Status) - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
Set task status.
setTaskTrackerHttp(String) - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
Set task tracker http location.
setThreads(int, int) - Method in class org.apache.hadoop.http.HttpServer
Set the min, max number of worker threads (simultaneous connections).
setTimes(Path, long, long) - Method in class org.apache.hadoop.fs.FileSystem
Set access time of a file
setTopologyPaths(String[]) - Method in class org.apache.hadoop.fs.BlockLocation
Set the network topology paths of the hosts
setTotalLogFileSize(long) - Method in class org.apache.hadoop.mapred.TaskLogAppender
 
setUMask(Configuration, FsPermission) - Static method in class org.apache.hadoop.fs.permission.FsPermission
Set the user file creation mask (umask)
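setUMask stores the user file-creation mask in the configuration so that file creation through FileSystem can apply it to default permissions. A short sketch; the 027 mask is just an example:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.permission.FsPermission;

    public class UmaskExample {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        FsPermission.setUMask(conf, new FsPermission((short) 027));  // e.g. 0666 & ~0027 = 0640 for files
        System.out.println(FsPermission.getUMask(conf));
      }
    }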
setup(Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT>.Context) - Method in class org.apache.hadoop.mapreduce.Mapper
Called once at the beginning of the task.
setup(Reducer<KEYIN, VALUEIN, KEYOUT, VALUEOUT>.Context) - Method in class org.apache.hadoop.mapreduce.Reducer
Called once at the start of the task.
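setup is the standard hook for once-per-task initialization in the new API, typically used to read job parameters before the first map or reduce call. A sketch; the property name is illustrative:

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class FilterMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
      private int minLength;

      protected void setup(Context context) {
        // Runs once per task, before any map() calls.
        minLength = context.getConfiguration().getInt("example.min.length", 0);
      }

      protected void map(LongWritable key, Text value, Context context)
          throws IOException, InterruptedException {
        if (value.getLength() >= minLength) {
          context.write(value, key);   // emit lines that pass the length filter
        }
      }
    }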
setUpdate(Document, Term) - Method in class org.apache.hadoop.contrib.index.mapred.DocumentAndOp
Set the instance to be an update operation.
setupJob(JobContext) - Method in class org.apache.hadoop.mapred.FileOutputCommitter
 
setupJob(JobContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
Deprecated. For the framework to setup the job output during initialization
setupJob(JobContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
Deprecated. This method implements the new interface by calling the old method.
setupJob(JobContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
Create the temporary directory that is the root of all of the task work directories.
setupJob(JobContext) - Method in class org.apache.hadoop.mapreduce.OutputCommitter
For the framework to setup the job output during initialization
setupJobConf(int, int, long, int, long, int) - Method in class org.apache.hadoop.examples.SleepJob
 
setupProgress() - Method in class org.apache.hadoop.mapred.JobStatus
 
setupProgress() - Method in interface org.apache.hadoop.mapred.RunningJob
Get the progress of the job's setup-tasks, as a float between 0.0 and 1.0.
setupTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapred.FileOutputCommitter
 
setupTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
Deprecated. Sets up output for the task.
setupTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapred.OutputCommitter
Deprecated. This method implements the new interface by calling the old method.
setupTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
No task setup required.
setupTask(TaskAttemptContext) - Method in class org.apache.hadoop.mapreduce.OutputCommitter
Sets up output for the task.
setUseNewMapper(boolean) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Set whether the framework should use the new api for the mapper.
setUseNewReducer(boolean) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Set whether the framework should use the new api for the reducer.
setUser(String) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Set the reported username for this job.
setVerbose(boolean) - Method in class org.apache.hadoop.streaming.JarBuilder
 
setVerifyChecksum(boolean) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
Set whether to verify checksum.
setVerifyChecksum(boolean) - Method in class org.apache.hadoop.fs.FileSystem
Set the verify checksum flag.
setVerifyChecksum(boolean) - Method in class org.apache.hadoop.fs.FilterFileSystem
Set the verify checksum flag.
setWorkingDirectory(Path) - Method in class org.apache.hadoop.fs.FileSystem
Set the current working directory for the given file system.
setWorkingDirectory(Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
Set the current working directory for the given file system.
setWorkingDirectory(Path) - Method in class org.apache.hadoop.fs.ftp.FTPFileSystem
 
setWorkingDirectory(Path) - Method in class org.apache.hadoop.fs.HarFileSystem
 
setWorkingDirectory(Path) - Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
 
setWorkingDirectory(Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
Set the working directory to the given directory.
setWorkingDirectory(Path) - Method in class org.apache.hadoop.fs.s3.S3FileSystem
 
setWorkingDirectory(Path) - Method in class org.apache.hadoop.fs.s3native.NativeS3FileSystem
Set the working directory to the given directory.
setWorkingDirectory(Path) - Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Set the current working directory for the default file system.
setWorkingDirectory(Path) - Method in class org.apache.hadoop.mapreduce.Job
Set the current working directory for the default file system.
setWorkingDirectory(File) - Method in class org.apache.hadoop.util.Shell
set the working directory
Shard - Class in org.apache.hadoop.contrib.index.mapred
This class represents the metadata of a shard.
Shard() - Constructor for class org.apache.hadoop.contrib.index.mapred.Shard
Constructor.
Shard(long, String, long) - Constructor for class org.apache.hadoop.contrib.index.mapred.Shard
Construct a shard from a version number, a directory, and a generation number.
Shard(Shard) - Constructor for class org.apache.hadoop.contrib.index.mapred.Shard
Construct using a shard object.
ShardWriter - Class in org.apache.hadoop.contrib.index.lucene
The initial version of an index is stored in the perm dir.
ShardWriter(FileSystem, Shard, String, IndexUpdateConfiguration) - Constructor for class org.apache.hadoop.contrib.index.lucene.ShardWriter
Constructor
Shell - Class in org.apache.hadoop.util
A base class for running a Unix command.
Shell() - Constructor for class org.apache.hadoop.util.Shell
 
Shell(long) - Constructor for class org.apache.hadoop.util.Shell
 
Shell.ExitCodeException - Exception in org.apache.hadoop.util
This is an IOException with exit code added.
Shell.ExitCodeException(int, String) - Constructor for exception org.apache.hadoop.util.Shell.ExitCodeException
 
Shell.ShellCommandExecutor - Class in org.apache.hadoop.util
A simple shell command executor.
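Shell.ShellCommandExecutor wraps a command array, an optional working directory, and an optional environment map (the constructors listed just below). A sketch of running a command and capturing its output:

    import org.apache.hadoop.util.Shell.ShellCommandExecutor;

    public class ShellExample {
      public static void main(String[] args) throws Exception {
        ShellCommandExecutor exec = new ShellCommandExecutor(new String[] { "ls", "-l" });
        exec.execute();                        // runs the command; a non-zero exit raises Shell.ExitCodeException
        System.out.println(exec.getOutput());  // captured standard output
      }
    }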
Shell.ShellCommandExecutor(String[]) - Constructor for class org.apache.hadoop.util.Shell.ShellCommandExecutor
 
Shell.ShellCommandExecutor(String[], File) - Constructor for class org.apache.hadoop.util.Shell.ShellCommandExecutor
 
Shell.ShellCommandExecutor(String[], File, Map<String, String>) - Constructor for class org.apache.hadoop.util.Shell.ShellCommandExecutor
 
ShellParser - Class in org.apache.hadoop.contrib.failmon
Objects of this class parse the output of system command-line utilities that can give information about the state of various hardware components in the system.
ShellParser() - Constructor for class org.apache.hadoop.contrib.failmon.ShellParser
 
shippedCanonFiles_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
shouldPreserveInput() - Method in class org.apache.hadoop.io.SequenceFile.Sorter.SegmentDescriptor
 
shouldRetry(Exception, int) - Method in interface org.apache.hadoop.io.retry.RetryPolicy
Determines whether the framework should retry a method for the given exception, and the number of retries that have been made for that operation so far.
shuffleError(TaskAttemptID, String) - Method in class org.apache.hadoop.mapred.TaskTracker
A reduce-task failed to shuffle the map-outputs.
shutdown() - Method in class org.apache.hadoop.fs.DU
Shut down the refreshing thread.
shutdown() - Method in class org.apache.hadoop.ipc.metrics.RpcActivityMBean
 
shutdown() - Method in class org.apache.hadoop.ipc.metrics.RpcMetrics
 
shutdown() - Method in class org.apache.hadoop.mapred.TaskTracker
 
SimpleCharStream - Class in org.apache.hadoop.record.compiler.generated
An implementation of interface CharStream, where the stream is assumed to contain only ASCII characters (without unicode processing).
SimpleCharStream(Reader, int, int, int) - Constructor for class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
SimpleCharStream(Reader, int, int) - Constructor for class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
SimpleCharStream(Reader) - Constructor for class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
SimpleCharStream(InputStream, String, int, int, int) - Constructor for class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
SimpleCharStream(InputStream, int, int, int) - Constructor for class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
SimpleCharStream(InputStream, String, int, int) - Constructor for class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
SimpleCharStream(InputStream, int, int) - Constructor for class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
SimpleCharStream(InputStream, String) - Constructor for class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
SimpleCharStream(InputStream) - Constructor for class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
simpleHostname(String) - Static method in class org.apache.hadoop.util.StringUtils
Given a full hostname, return the word up to the first dot.
size() - Method in class org.apache.hadoop.conf.Configuration
Return the number of keys in the configuration.
size() - Method in class org.apache.hadoop.io.file.tfile.ByteArray
 
size() - Method in interface org.apache.hadoop.io.file.tfile.RawComparable
Get the size of the byte range in the byte array.
size() - Static method in class org.apache.hadoop.io.file.tfile.Utils.Version
Get the size of the serialized Version object.
size() - Method in class org.apache.hadoop.io.MapWritable
size() - Method in class org.apache.hadoop.io.SortedMapWritable
size() - Method in class org.apache.hadoop.mapred.Counters.Group
Deprecated. Returns the number of counters in this group.
size() - Method in class org.apache.hadoop.mapred.Counters
Deprecated. Returns the total number of counters, by summing the number of counters in each group.
size() - Method in class org.apache.hadoop.mapred.join.TupleWritable
The number of children in this Tuple.
size() - Method in class org.apache.hadoop.mapreduce.CounterGroup
Returns the number of counters in this group.
size() - Method in class org.apache.hadoop.metrics.util.MetricsRegistry
 
size() - Method in class org.apache.hadoop.util.PriorityQueue
Returns the number of elements currently stored in the PriorityQueue.
SIZE_OF_INTEGER - Static variable in class org.apache.hadoop.util.DataChecksum
 
skip(long) - Method in class org.apache.hadoop.fs.BufferedFSInputStream
 
skip(long) - Method in class org.apache.hadoop.fs.FSInputChecker
Skips over and discards n bytes of data from the input stream.
skip(long) - Method in class org.apache.hadoop.io.compress.DecompressorStream
 
skip(long) - Method in class org.apache.hadoop.io.compress.GzipCodec.GzipInputStream
 
skip(DataInput) - Static method in class org.apache.hadoop.io.Text
Skips over one Text in the input.
skip(DataInput) - Static method in class org.apache.hadoop.io.UTF8
Deprecated. Skips over one UTF8 in the input.
skip(K) - Method in interface org.apache.hadoop.mapred.join.ComposableRecordReader
Skip key-value pairs with keys less than or equal to the key provided.
skip(K) - Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
Pass skip key to child RRs.
skip(K) - Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
Skip key-value pairs with keys less than or equal to the key provided.
skip(RecordInput, String, TypeID) - Static method in class org.apache.hadoop.record.meta.Utils
read/skip bytes from stream based on a type
SkipBadRecords - Class in org.apache.hadoop.mapred
Utility class for skip bad records functionality.
SkipBadRecords() - Constructor for class org.apache.hadoop.mapred.SkipBadRecords
 
skipCompressedByteArray(DataInput) - Static method in class org.apache.hadoop.io.WritableUtils
 
skipFully(InputStream, long) - Static method in class org.apache.hadoop.io.IOUtils
Similar to readFully().
skipFully(DataInput, int) - Static method in class org.apache.hadoop.io.WritableUtils
Skip len number of bytes in the input stream.
SleepJob - Class in org.apache.hadoop.examples
Dummy class for testing the MR framework.
SleepJob() - Constructor for class org.apache.hadoop.examples.SleepJob
 
SleepJob.EmptySplit - Class in org.apache.hadoop.examples
 
SleepJob.EmptySplit() - Constructor for class org.apache.hadoop.examples.SleepJob.EmptySplit
 
SleepJob.SleepInputFormat - Class in org.apache.hadoop.examples
 
SleepJob.SleepInputFormat() - Constructor for class org.apache.hadoop.examples.SleepJob.SleepInputFormat
 
SMALL_THRESH - Static variable in class org.apache.hadoop.io.compress.bzip2.CBZip2OutputStream
This constant is accessible by subclasses for historical purposes.
SMARTParser - Class in org.apache.hadoop.contrib.failmon
Objects of this class parse the output of smartmontools to gather information about the state of disks in the system.
SMARTParser() - Constructor for class org.apache.hadoop.contrib.failmon.SMARTParser
Constructs a SMARTParser and reads the list of disk devices to query
SocketInputStream - Class in org.apache.hadoop.net
This implements an input stream that can have a timeout while reading.
SocketInputStream(ReadableByteChannel, long) - Constructor for class org.apache.hadoop.net.SocketInputStream
Create a new input stream with the given timeout.
SocketInputStream(Socket, long) - Constructor for class org.apache.hadoop.net.SocketInputStream
Same as SocketInputStream(socket.getChannel(), timeout):

Create a new input stream with the given timeout.
SocketInputStream(Socket) - Constructor for class org.apache.hadoop.net.SocketInputStream
Same as SocketInputStream(socket.getChannel(), socket.getSoTimeout()):

Create a new input stream with the given timeout.
SocketOutputStream - Class in org.apache.hadoop.net
This implements an output stream that can have a timeout while writing.
SocketOutputStream(WritableByteChannel, long) - Constructor for class org.apache.hadoop.net.SocketOutputStream
Create a new output stream with the given timeout.
SocketOutputStream(Socket, long) - Constructor for class org.apache.hadoop.net.SocketOutputStream
Same as SocketOutputStream(socket.getChannel(), timeout):

Create a new output stream with the given timeout.
SocksSocketFactory - Class in org.apache.hadoop.net
Specialized SocketFactory to create sockets with a SOCKS proxy
SocksSocketFactory() - Constructor for class org.apache.hadoop.net.SocksSocketFactory
Default empty constructor (for use with the reflection API).
SocksSocketFactory(Proxy) - Constructor for class org.apache.hadoop.net.SocksSocketFactory
Constructor with a supplied Proxy
solution(List<List<ColumnName>>) - Method in interface org.apache.hadoop.examples.dancing.DancingLinks.SolutionAcceptor
A callback to return a solution to the application.
solve(int[], DancingLinks.SolutionAcceptor<ColumnName>) - Method in class org.apache.hadoop.examples.dancing.DancingLinks
Given a prefix, find solutions under it.
solve(DancingLinks.SolutionAcceptor<ColumnName>) - Method in class org.apache.hadoop.examples.dancing.DancingLinks
Solve a complete problem
solve(int[]) - Method in class org.apache.hadoop.examples.dancing.Pentomino
Find all of the solutions that start with the given prefix.
solve() - Method in class org.apache.hadoop.examples.dancing.Pentomino
Find all of the solutions to the puzzle.
solve() - Method in class org.apache.hadoop.examples.dancing.Sudoku
 
Sort<K,V> - Class in org.apache.hadoop.examples
This is the trivial map/reduce program that does absolutely nothing other than use the framework to fragment and sort the input values.
Sort() - Constructor for class org.apache.hadoop.examples.Sort
 
sort(Path[], Path, boolean) - Method in class org.apache.hadoop.io.SequenceFile.Sorter
Perform a file sort from a set of input files into an output file.
sort(Path, Path) - Method in class org.apache.hadoop.io.SequenceFile.Sorter
The backwards compatible interface to sort.
sort(IndexedSortable, int, int) - Method in class org.apache.hadoop.util.HeapSort
Sort the given range of items using heap sort.
sort(IndexedSortable, int, int, Progressable) - Method in class org.apache.hadoop.util.HeapSort
Same as IndexedSorter.sort(IndexedSortable,int,int), but indicate progress periodically.
sort(IndexedSortable, int, int) - Method in interface org.apache.hadoop.util.IndexedSorter
Sort the items accessed through the given IndexedSortable over the given range of logical indices.
sort(IndexedSortable, int, int, Progressable) - Method in interface org.apache.hadoop.util.IndexedSorter
Same as IndexedSorter.sort(IndexedSortable,int,int), but indicate progress periodically.
sort(IndexedSortable, int, int) - Method in class org.apache.hadoop.util.QuickSort
Sort the given range of items using quick sort.
sort(IndexedSortable, int, int, Progressable) - Method in class org.apache.hadoop.util.QuickSort
Same as IndexedSorter.sort(IndexedSortable,int,int), but indicate progress periodically.
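A minimal sketch, not taken from the Hadoop sources, of how the IndexedSorter implementations above are used: anything that exposes compare and swap by logical index can be sorted. The class and array names are made up for illustration.

    import java.util.Arrays;
    import org.apache.hadoop.util.IndexedSortable;
    import org.apache.hadoop.util.QuickSort;

    // Sketch: sort a plain int[] through the IndexedSortable callbacks.
    public class IntArraySortable implements IndexedSortable {
      private final int[] data;                       // hypothetical backing array
      public IntArraySortable(int[] data) { this.data = data; }

      public int compare(int i, int j) {              // compare items at logical indices i and j
        return (data[i] < data[j]) ? -1 : ((data[i] == data[j]) ? 0 : 1);
      }

      public void swap(int i, int j) {                // swap items at logical indices i and j
        int tmp = data[i]; data[i] = data[j]; data[j] = tmp;
      }

      public static void main(String[] args) {
        int[] values = { 5, 1, 4, 2, 3 };
        new QuickSort().sort(new IntArraySortable(values), 0, values.length);
        System.out.println(Arrays.toString(values));  // [1, 2, 3, 4, 5]
      }
    }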
sortAndIterate(Path[], Path, boolean) - Method in class org.apache.hadoop.io.SequenceFile.Sorter
Perform a file sort from a set of input files and return an iterator.
SortedMapWritable - Class in org.apache.hadoop.io
A Writable SortedMap.
SortedMapWritable() - Constructor for class org.apache.hadoop.io.SortedMapWritable
default constructor.
SortedMapWritable(SortedMapWritable) - Constructor for class org.apache.hadoop.io.SortedMapWritable
Copy constructor.
SOURCE_TAGS_FIELD - Static variable in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
 
specialConstructor - Variable in exception org.apache.hadoop.record.compiler.generated.ParseException
This variable determines which constructor was used to create this object and thereby affects the semantics of the "getMessage" method (see below).
specialToken - Variable in class org.apache.hadoop.record.compiler.generated.Token
This field is used to access special tokens that occur prior to this token, but after the immediately preceding regular (non-special) token.
split(int) - Method in class org.apache.hadoop.examples.dancing.DancingLinks
Generate a list of row choices to cover the first moves.
split - Variable in class org.apache.hadoop.mapred.lib.CombineFileRecordReader
 
split(String) - Static method in class org.apache.hadoop.util.StringUtils
Split a string using the default separator
split(String, char, char) - Static method in class org.apache.hadoop.util.StringUtils
Split a string using the given separator
splitKeyVal(byte[], int, int, Text, Text, int, int) - Static method in class org.apache.hadoop.streaming.StreamKeyValUtil
Split a UTF-8 byte array into key and value, assuming that the delimiter is at splitpos.
splitKeyVal(byte[], int, int, Text, Text, int) - Static method in class org.apache.hadoop.streaming.StreamKeyValUtil
Split a UTF-8 byte array into key and value, assuming that the delimiter is at splitpos.
splitKeyVal(byte[], Text, Text, int, int) - Static method in class org.apache.hadoop.streaming.StreamKeyValUtil
Split a UTF-8 byte array into key and value, assuming that the delimiter is at splitpos.
splitKeyVal(byte[], Text, Text, int) - Static method in class org.apache.hadoop.streaming.StreamKeyValUtil
Split a UTF-8 byte array into key and value, assuming that the delimiter is at splitpos.
splitKeyVal(byte[], int, int, Text, Text, int, int) - Static method in class org.apache.hadoop.streaming.UTF8ByteArrayUtils
Deprecated. use StreamKeyValUtil.splitKeyVal(byte[], int, int, Text, Text, int, int)
splitKeyVal(byte[], int, int, Text, Text, int) - Static method in class org.apache.hadoop.streaming.UTF8ByteArrayUtils
Deprecated. use StreamKeyValUtil.splitKeyVal(byte[], int, int, Text, Text, int)
splitKeyVal(byte[], Text, Text, int, int) - Static method in class org.apache.hadoop.streaming.UTF8ByteArrayUtils
Deprecated. use StreamKeyValUtil.splitKeyVal(byte[], Text, Text, int, int)
splitKeyVal(byte[], Text, Text, int) - Static method in class org.apache.hadoop.streaming.UTF8ByteArrayUtils
Deprecated. use StreamKeyValUtil.splitKeyVal(byte[], Text, Text, int)
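A minimal sketch of the StreamKeyValUtil.splitKeyVal(byte[], Text, Text, int) variant listed above. The input line, the tab-scanning loop, and the class name are illustrative assumptions, and the method is assumed to throw IOException.

    import java.io.IOException;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.streaming.StreamKeyValUtil;

    // Sketch: split "key\tvalue" at a delimiter position we locate ourselves.
    public class SplitKeyValExample {
      public static void main(String[] args) throws IOException {
        byte[] line = "user42\tsome value".getBytes("UTF-8");
        int splitPos = -1;
        for (int i = 0; i < line.length; i++) {       // find the tab by hand
          if (line[i] == '\t') { splitPos = i; break; }
        }
        Text key = new Text();
        Text value = new Text();
        StreamKeyValUtil.splitKeyVal(line, key, value, splitPos);
        System.out.println(key + " -> " + value);     // user42 -> some value
      }
    }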
StandardSocketFactory - Class in org.apache.hadoop.net
Specialized SocketFactory to create standard (direct) sockets, without a proxy.
StandardSocketFactory() - Constructor for class org.apache.hadoop.net.StandardSocketFactory
Default empty constructor (for use with the reflection API).
start() - Method in class org.apache.hadoop.fs.DU
Start the disk usage checking thread.
start() - Method in class org.apache.hadoop.http.HttpServer
Start the server.
start() - Method in class org.apache.hadoop.ipc.Server
Starts the service.
startLocalOutput(Path, Path) - Method in class org.apache.hadoop.fs.ChecksumFileSystem
 
startLocalOutput(Path, Path) - Method in class org.apache.hadoop.fs.FileSystem
Returns a local File that the user can write output to.
startLocalOutput(Path, Path) - Method in class org.apache.hadoop.fs.FilterFileSystem
Returns a local File that the user can write output to.
startLocalOutput(Path, Path) - Method in class org.apache.hadoop.fs.HarFileSystem
not implemented.
startLocalOutput(Path, Path) - Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
 
startLocalOutput(Path, Path) - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
startMap(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
 
startMap(TreeMap, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
 
startMap(String) - Method in class org.apache.hadoop.record.CsvRecordInput
 
startMap(TreeMap, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
 
startMap(String) - Method in interface org.apache.hadoop.record.RecordInput
Check the mark for start of the serialized map.
startMap(TreeMap, String) - Method in interface org.apache.hadoop.record.RecordOutput
Mark the start of a map to be serialized.
startMap(String) - Method in class org.apache.hadoop.record.XmlRecordInput
 
startMap(TreeMap, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
 
startMonitoring() - Method in class org.apache.hadoop.metrics.file.FileContext
Starts or restarts monitoring by opening, in append mode, the file specified by the fileName attribute, if specified.
startMonitoring() - Method in interface org.apache.hadoop.metrics.MetricsContext
Starts or restarts monitoring, the emitting of metrics records as they are updated.
startMonitoring() - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Starts or restarts monitoring, the emitting of metrics records.
startMonitoring() - Method in class org.apache.hadoop.metrics.spi.CompositeContext
 
startMonitoring() - Method in class org.apache.hadoop.metrics.spi.NullContext
Do-nothing version of startMonitoring
startNextPhase() - Method in class org.apache.hadoop.util.Progress
Called during execution to move to the next phase at this level in the tree.
startNotifier() - Static method in class org.apache.hadoop.mapred.JobEndNotifier
 
startRecord(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
 
startRecord(Record, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
 
startRecord(String) - Method in class org.apache.hadoop.record.CsvRecordInput
 
startRecord(Record, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
 
startRecord(String) - Method in interface org.apache.hadoop.record.RecordInput
Check the mark for start of the serialized record.
startRecord(Record, String) - Method in interface org.apache.hadoop.record.RecordOutput
Mark the start of a record to be serialized.
startRecord(String) - Method in class org.apache.hadoop.record.XmlRecordInput
 
startRecord(Record, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
 
startTracker(JobConf) - Static method in class org.apache.hadoop.mapred.JobTracker
Start the JobTracker with given configuration.
startTracker(JobConf, String) - Static method in class org.apache.hadoop.mapred.JobTracker
 
startupShutdownMessage(Class<?>, String[], Log) - Static method in class org.apache.hadoop.util.StringUtils
Print a log message for starting up and shutting down
startVector(String) - Method in class org.apache.hadoop.record.BinaryRecordInput
 
startVector(ArrayList, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
 
startVector(String) - Method in class org.apache.hadoop.record.CsvRecordInput
 
startVector(ArrayList, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
 
startVector(String) - Method in interface org.apache.hadoop.record.RecordInput
Check the mark for start of the serialized vector.
startVector(ArrayList, String) - Method in interface org.apache.hadoop.record.RecordOutput
Mark the start of a vector to be serialized.
startVector(String) - Method in class org.apache.hadoop.record.XmlRecordInput
 
startVector(ArrayList, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
 
stat2Paths(FileStatus[]) - Static method in class org.apache.hadoop.fs.FileUtil
convert an array of FileStatus to an array of Path
stat2Paths(FileStatus[], Path) - Static method in class org.apache.hadoop.fs.FileUtil
convert an array of FileStatus to an array of Path.
staticFlag - Static variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
statistics - Variable in class org.apache.hadoop.fs.FileSystem
The statistics for this file system.
StatusReporter - Class in org.apache.hadoop.mapreduce
 
StatusReporter() - Constructor for class org.apache.hadoop.mapreduce.StatusReporter
 
statusUpdate(TaskAttemptID, TaskStatus) - Method in class org.apache.hadoop.mapred.TaskTracker
Called periodically to report Task progress, from 0.0 to 1.0.
stop() - Method in class org.apache.hadoop.http.HttpServer
stop the server
stop() - Method in class org.apache.hadoop.ipc.Client
Stop all threads related to this client.
stop() - Method in class org.apache.hadoop.ipc.Server
Stops the service.
stop() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
set the thread state to STOPPING so that the thread will stop when it wakes up.
stopMonitoring() - Method in class org.apache.hadoop.metrics.file.FileContext
Stops monitoring, closing the file.
stopMonitoring() - Method in interface org.apache.hadoop.metrics.MetricsContext
Stops monitoring.
stopMonitoring() - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Stops monitoring.
stopMonitoring() - Method in class org.apache.hadoop.metrics.spi.CompositeContext
 
stopNotifier() - Static method in class org.apache.hadoop.mapred.JobEndNotifier
 
stopProxy(VersionedProtocol) - Static method in class org.apache.hadoop.ipc.RPC
Stop this proxy and release its invoker's resource
stopTracker() - Method in class org.apache.hadoop.mapred.JobTracker
 
store(Configuration, K, String) - Static method in class org.apache.hadoop.io.DefaultStringifier
Stores the item in the configuration with the given keyName.
storeArray(Configuration, K[], String) - Static method in class org.apache.hadoop.io.DefaultStringifier
Stores the array of items in the configuration with the given keyName.
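A minimal sketch of store(Configuration, K, String) above, assuming the companion DefaultStringifier.load method and a made-up configuration key; it relies on the default Writable serialization being enabled in the Configuration.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.DefaultStringifier;
    import org.apache.hadoop.io.Text;

    // Sketch: round-trip a Writable through the job Configuration.
    public class StoreExample {
      public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        DefaultStringifier.store(conf, new Text("hello"), "my.custom.key");   // key name is made up
        Text restored = DefaultStringifier.load(conf, "my.custom.key", Text.class);
        System.out.println(restored);   // hello
      }
    }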
storeBlock(Block, File) - Method in interface org.apache.hadoop.fs.s3.FileSystemStore
 
storeINode(Path, INode) - Method in interface org.apache.hadoop.fs.s3.FileSystemStore
 
StreamBackedIterator<X extends Writable> - Class in org.apache.hadoop.mapred.join
This class provides an implementation of ResetableIterator.
StreamBackedIterator() - Constructor for class org.apache.hadoop.mapred.join.StreamBackedIterator
 
StreamBaseRecordReader - Class in org.apache.hadoop.streaming
Shared functionality for hadoopStreaming formats.
StreamBaseRecordReader(FSDataInputStream, FileSplit, Reporter, JobConf, FileSystem) - Constructor for class org.apache.hadoop.streaming.StreamBaseRecordReader
 
StreamInputFormat - Class in org.apache.hadoop.streaming
An input format that selects a RecordReader based on a JobConf property.
StreamInputFormat() - Constructor for class org.apache.hadoop.streaming.StreamInputFormat
 
StreamJob - Class in org.apache.hadoop.streaming
All the client-side work happens here.
StreamJob(String[], boolean) - Constructor for class org.apache.hadoop.streaming.StreamJob
Deprecated. use StreamJob() with ToolRunner or set the Configuration using StreamJob.setConf(Configuration) and run with StreamJob.run(String[]).
StreamJob() - Constructor for class org.apache.hadoop.streaming.StreamJob
 
StreamKeyValUtil - Class in org.apache.hadoop.streaming
 
StreamKeyValUtil() - Constructor for class org.apache.hadoop.streaming.StreamKeyValUtil
 
StreamUtil - Class in org.apache.hadoop.streaming
Utilities not available elsewhere in Hadoop.
StreamUtil() - Constructor for class org.apache.hadoop.streaming.StreamUtil
 
StreamXmlRecordReader - Class in org.apache.hadoop.streaming
A way to interpret XML fragments as Mapper input records.
StreamXmlRecordReader(FSDataInputStream, FileSplit, Reporter, JobConf, FileSystem) - Constructor for class org.apache.hadoop.streaming.StreamXmlRecordReader
 
STRING - Static variable in class org.apache.hadoop.record.meta.TypeID.RIOType
 
string2long(String) - Static method in enum org.apache.hadoop.util.StringUtils.TraditionalBinaryPrefix
Convert a string to long.
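A minimal sketch of string2long(String) above; the input string and the expected result are assumptions based on the kilo prefix being 1024.

    import org.apache.hadoop.util.StringUtils;

    // Sketch: parse a string with a traditional binary prefix into a byte count.
    public class PrefixExample {
      public static void main(String[] args) {
        long bytes = StringUtils.TraditionalBinaryPrefix.string2long("128k");
        System.out.println(bytes);   // expected 128 * 1024 = 131072
      }
    }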
STRING_VALUE_MAX - Static variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
 
STRING_VALUE_MIN - Static variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
 
Stringifier<T> - Interface in org.apache.hadoop.io
Stringifier interface offers two methods to convert an object to a string representation and restore the object given its string representation.
stringifyException(Throwable) - Static method in class org.apache.hadoop.util.StringUtils
Make a string representation of the exception.
stringifySolution(int, int, List<List<Pentomino.ColumnName>>) - Static method in class org.apache.hadoop.examples.dancing.Pentomino
Convert a solution to the puzzle returned by the model into a string that represents the placement of the pieces onto the board.
stringToPath(String[]) - Static method in class org.apache.hadoop.util.StringUtils
 
stringToURI(String[]) - Static method in class org.apache.hadoop.util.StringUtils
 
StringTypeID - Static variable in class org.apache.hadoop.record.meta.TypeID
 
StringUtils - Class in org.apache.hadoop.util
General string utils
StringUtils() - Constructor for class org.apache.hadoop.util.StringUtils
 
StringUtils.TraditionalBinaryPrefix - Enum in org.apache.hadoop.util
The traditional binary prefixes, kilo, mega, ..., exa, which can be represented by a 64-bit integer.
StringValueMax - Class in org.apache.hadoop.mapred.lib.aggregate
This class implements a value aggregator that maintains the biggest of a sequence of strings.
StringValueMax() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.StringValueMax
the default constructor
StringValueMin - Class in org.apache.hadoop.mapred.lib.aggregate
This class implements a value aggregator that maintains the smallest of a sequence of strings.
StringValueMin() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.StringValueMin
the default constructor
STRUCT - Static variable in class org.apache.hadoop.record.meta.TypeID.RIOType
 
StructTypeID - Class in org.apache.hadoop.record.meta
Represents typeID for a struct
StructTypeID(RecordTypeInfo) - Constructor for class org.apache.hadoop.record.meta.StructTypeID
Create a StructTypeID based on the RecordTypeInfo of some record
subMap(WritableComparable, WritableComparable) - Method in class org.apache.hadoop.io.SortedMapWritable
submit() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
Submit this job to mapred.
submit() - Method in class org.apache.hadoop.mapreduce.Job
Submit the job to the cluster and return immediately.
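A minimal sketch of submitting a job with the new org.apache.hadoop.mapreduce API and returning immediately; the job name and the input and output paths are made up, and TokenCounterMapper / IntSumReducer are used only as convenient stand-ins.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.map.TokenCounterMapper;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer;

    // Sketch: configure a word-count style job and submit it without waiting.
    public class SubmitExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "word count");        // hypothetical job name
        job.setJarByClass(SubmitExample.class);
        job.setMapperClass(TokenCounterMapper.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path("/tmp/in"));      // hypothetical paths
        FileOutputFormat.setOutputPath(job, new Path("/tmp/out"));
        job.submit();                                 // returns immediately; poll job.isComplete() later
      }
    }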
submitAndMonitorJob() - Method in class org.apache.hadoop.streaming.StreamJob
 
submitJob(String) - Method in class org.apache.hadoop.mapred.JobClient
Submit a job to the MR system.
submitJob(JobConf) - Method in class org.apache.hadoop.mapred.JobClient
Submit a job to the MR system.
submitJob(JobID) - Method in class org.apache.hadoop.mapred.JobTracker
JobTracker.submitJob() kicks off a new job.
submitJob(JobConf) - Static method in class org.apache.hadoop.mapred.pipes.Submitter
Deprecated. Use Submitter.runJob(JobConf)
submitJobInternal(JobConf) - Method in class org.apache.hadoop.mapred.JobClient
Internal method for submitting jobs to the system.
Submitter - Class in org.apache.hadoop.mapred.pipes
The main entry point and job submitter.
Submitter() - Constructor for class org.apache.hadoop.mapred.pipes.Submitter
 
Submitter(Configuration) - Constructor for class org.apache.hadoop.mapred.pipes.Submitter
 
SUCCEEDED - Static variable in class org.apache.hadoop.mapred.JobStatus
 
SUCCESS - Static variable in class org.apache.hadoop.mapred.jobcontrol.Job
 
Sudoku - Class in org.apache.hadoop.examples.dancing
This class uses the dancing links algorithm from Knuth to solve sudoku puzzles.
Sudoku(InputStream) - Constructor for class org.apache.hadoop.examples.dancing.Sudoku
Set up a puzzle board to the given size.
Sudoku.ColumnName - Interface in org.apache.hadoop.examples.dancing
This interface is a marker class for the columns created for the Sudoku solver.
suffix(String) - Method in class org.apache.hadoop.fs.Path
Adds a suffix to the final name in the path.
sum(Counters, Counters) - Static method in class org.apache.hadoop.mapred.Counters
Deprecated. Convenience method for computing the sum of two sets of counters.
suspend() - Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
suspend the running thread
swap(int, int) - Method in interface org.apache.hadoop.util.IndexedSortable
Swap items at the given addresses.
SwitchTo(int) - Method in class org.apache.hadoop.record.compiler.generated.RccTokenManager
 
SYMBOL - Variable in enum org.apache.hadoop.fs.permission.FsAction
Symbolic representation
symbol - Variable in enum org.apache.hadoop.util.StringUtils.TraditionalBinaryPrefix
 
symLink(String, String) - Static method in class org.apache.hadoop.fs.FileUtil
Create a soft link between a src and destination only on a local disk.
sync() - Method in class org.apache.hadoop.fs.FSDataOutputStream
Synchronize all buffer with the underlying devices.
sync() - Method in interface org.apache.hadoop.fs.Syncable
Synchronize all buffer with the underlying devices.
sync(long) - Method in class org.apache.hadoop.io.SequenceFile.Reader
Seek to the next sync mark past a given position.
sync() - Method in class org.apache.hadoop.io.SequenceFile.Writer
create a sync point
SYNC_INTERVAL - Static variable in class org.apache.hadoop.io.SequenceFile
The number of bytes between sync points.
Syncable - Interface in org.apache.hadoop.fs
This interface declare the sync() operation.
syncLogs(TaskAttemptID, TaskAttemptID) - Static method in class org.apache.hadoop.mapred.TaskLog
 
syncLogs(TaskAttemptID, TaskAttemptID, boolean) - Static method in class org.apache.hadoop.mapred.TaskLog
 
syncSeen() - Method in class org.apache.hadoop.io.SequenceFile.Reader
Returns true iff the previous call to next passed a sync mark.
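A minimal sketch tying together SequenceFile.Reader.sync(long) and next(); the file path, the 1 MB offset, and the Text/IntWritable record types are assumptions about a pre-existing file.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    // Sketch: jump into the middle of a sequence file and resume at the next sync mark.
    public class SyncExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path file = new Path("/tmp/data.seq");        // hypothetical file with Text/IntWritable records
        SequenceFile.Reader reader = new SequenceFile.Reader(fs, file, conf);
        reader.sync(1024 * 1024);                     // seek to the first sync mark past 1 MB
        Text key = new Text();
        IntWritable value = new IntWritable();
        while (reader.next(key, value)) {
          System.out.println(key + "\t" + value);
        }
        reader.close();
      }
    }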
SystemLogParser - Class in org.apache.hadoop.contrib.failmon
An object of this class parses a Unix system log file to create appropriate EventRecords.
SystemLogParser(String) - Constructor for class org.apache.hadoop.contrib.failmon.SystemLogParser
Create a new parser object.

T

tabSize - Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
tag - Variable in class org.apache.hadoop.contrib.utils.join.TaggedMapOutput
 
TAG - Static variable in class org.apache.hadoop.record.compiler.Consts
 
TaggedMapOutput - Class in org.apache.hadoop.contrib.utils.join
This abstract class serves as the base class for the values that flow from the mappers to the reducers in a data join job.
TaggedMapOutput() - Constructor for class org.apache.hadoop.contrib.utils.join.TaggedMapOutput
 
tailMap(WritableComparable) - Method in class org.apache.hadoop.io.SortedMapWritable
TASK - Static variable in class org.apache.hadoop.mapreduce.TaskID
 
TaskAttemptContext - Class in org.apache.hadoop.mapred
Deprecated. Use TaskAttemptContext instead.
TaskAttemptContext - Class in org.apache.hadoop.mapreduce
The context for task attempts.
TaskAttemptContext(Configuration, TaskAttemptID) - Constructor for class org.apache.hadoop.mapreduce.TaskAttemptContext
 
TaskAttemptID - Class in org.apache.hadoop.mapred
Deprecated. 
TaskAttemptID(TaskID, int) - Constructor for class org.apache.hadoop.mapred.TaskAttemptID
Deprecated. Constructs a TaskAttemptID object from given TaskID.
TaskAttemptID(String, int, boolean, int, int) - Constructor for class org.apache.hadoop.mapred.TaskAttemptID
Deprecated. Constructs a TaskId object from given parts.
TaskAttemptID() - Constructor for class org.apache.hadoop.mapred.TaskAttemptID
Deprecated.  
TaskAttemptID - Class in org.apache.hadoop.mapreduce
TaskAttemptID represents the immutable and unique identifier for a task attempt.
TaskAttemptID(TaskID, int) - Constructor for class org.apache.hadoop.mapreduce.TaskAttemptID
Constructs a TaskAttemptID object from given TaskID.
TaskAttemptID(String, int, boolean, int, int) - Constructor for class org.apache.hadoop.mapreduce.TaskAttemptID
Constructs a TaskId object from given parts.
TaskAttemptID() - Constructor for class org.apache.hadoop.mapreduce.TaskAttemptID
 
TaskCompletionEvent - Class in org.apache.hadoop.mapred
This is used to track task completion events on job tracker.
TaskCompletionEvent() - Constructor for class org.apache.hadoop.mapred.TaskCompletionEvent
Default constructor for Writable.
TaskCompletionEvent(int, TaskAttemptID, int, boolean, TaskCompletionEvent.Status, String) - Constructor for class org.apache.hadoop.mapred.TaskCompletionEvent
Constructor.
TaskCompletionEvent.Status - Enum in org.apache.hadoop.mapred
 
TaskGraphServlet - Class in org.apache.hadoop.mapred
The servlet that outputs svg graphics for map / reduce task statuses
TaskGraphServlet() - Constructor for class org.apache.hadoop.mapred.TaskGraphServlet
 
TaskID - Class in org.apache.hadoop.mapred
Deprecated. 
TaskID(JobID, boolean, int) - Constructor for class org.apache.hadoop.mapred.TaskID
Deprecated. Constructs a TaskID object from given JobID.
TaskID(String, int, boolean, int) - Constructor for class org.apache.hadoop.mapred.TaskID
Deprecated. Constructs a TaskInProgressId object from given parts.
TaskID() - Constructor for class org.apache.hadoop.mapred.TaskID
Deprecated.  
TaskID - Class in org.apache.hadoop.mapreduce
TaskID represents the immutable and unique identifier for a Map or Reduce Task.
TaskID(JobID, boolean, int) - Constructor for class org.apache.hadoop.mapreduce.TaskID
Constructs a TaskID object from given JobID.
TaskID(String, int, boolean, int) - Constructor for class org.apache.hadoop.mapreduce.TaskID
Constructs a TaskInProgressId object from given parts.
TaskID() - Constructor for class org.apache.hadoop.mapreduce.TaskID
 
TaskInputOutputContext<KEYIN,VALUEIN,KEYOUT,VALUEOUT> - Class in org.apache.hadoop.mapreduce
A context object that allows input and output from the task.
TaskInputOutputContext(Configuration, TaskAttemptID, RecordWriter<KEYOUT, VALUEOUT>, OutputCommitter, StatusReporter) - Constructor for class org.apache.hadoop.mapreduce.TaskInputOutputContext
 
TaskLog - Class in org.apache.hadoop.mapred
A simple logger to handle the task-specific user logs.
TaskLog() - Constructor for class org.apache.hadoop.mapred.TaskLog
 
TaskLog.LogName - Enum in org.apache.hadoop.mapred
The filter for userlogs.
TaskLogAppender - Class in org.apache.hadoop.mapred
A simple log4j-appender for the task child's map-reduce system logs.
TaskLogAppender() - Constructor for class org.apache.hadoop.mapred.TaskLogAppender
 
TaskLogServlet - Class in org.apache.hadoop.mapred
A servlet that is run by the TaskTrackers to provide the task logs via http.
TaskLogServlet() - Constructor for class org.apache.hadoop.mapred.TaskLogServlet
 
TaskReport - Class in org.apache.hadoop.mapred
A report on the state of a task.
TaskReport() - Constructor for class org.apache.hadoop.mapred.TaskReport
 
TaskTracker - Class in org.apache.hadoop.mapred
TaskTracker is a process that starts and tracks MR Tasks in a networked environment.
TaskTracker(JobConf) - Constructor for class org.apache.hadoop.mapred.TaskTracker
Start with the local machine name, and the default JobTracker
TaskTracker.MapOutputServlet - Class in org.apache.hadoop.mapred
This class is used in TaskTracker's Jetty to serve the map outputs to other nodes.
TaskTracker.MapOutputServlet() - Constructor for class org.apache.hadoop.mapred.TaskTracker.MapOutputServlet
 
taskTrackerNames() - Method in class org.apache.hadoop.mapred.JobTracker
Get the active and blacklisted task tracker names in the cluster.
taskTrackers() - Method in class org.apache.hadoop.mapred.JobTracker
Get all the task trackers in the cluster
TEMP_DIR_NAME - Static variable in class org.apache.hadoop.mapred.FileOutputCommitter
Temporary directory name
TEMP_DIR_NAME - Static variable in class org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
Temporary directory name
TeraGen - Class in org.apache.hadoop.examples.terasort
Generate the official terasort input data set.
TeraGen() - Constructor for class org.apache.hadoop.examples.terasort.TeraGen
 
TeraGen.SortGenMapper - Class in org.apache.hadoop.examples.terasort
The Mapper class that given a row number, will generate the appropriate output line.
TeraGen.SortGenMapper() - Constructor for class org.apache.hadoop.examples.terasort.TeraGen.SortGenMapper
 
TeraInputFormat - Class in org.apache.hadoop.examples.terasort
An input format that reads the first 10 characters of each line as the key and the rest of the line as the value.
TeraInputFormat() - Constructor for class org.apache.hadoop.examples.terasort.TeraInputFormat
 
TeraOutputFormat - Class in org.apache.hadoop.examples.terasort
A streamlined text output format that writes key, value, and "\r\n".
TeraOutputFormat() - Constructor for class org.apache.hadoop.examples.terasort.TeraOutputFormat
 
TeraSort - Class in org.apache.hadoop.examples.terasort
Generates the sampled split points, launches the job, and waits for it to finish.
TeraSort() - Constructor for class org.apache.hadoop.examples.terasort.TeraSort
 
TeraValidate - Class in org.apache.hadoop.examples.terasort
Generate one mapper per file that checks to make sure the keys are sorted within each file.
TeraValidate() - Constructor for class org.apache.hadoop.examples.terasort.TeraValidate
 
Text - Class in org.apache.hadoop.io
This class stores text using standard UTF8 encoding.
Text() - Constructor for class org.apache.hadoop.io.Text
 
Text(String) - Constructor for class org.apache.hadoop.io.Text
Construct from a string.
Text(Text) - Constructor for class org.apache.hadoop.io.Text
Construct from another text.
Text(byte[]) - Constructor for class org.apache.hadoop.io.Text
Construct from a byte array.
Text.Comparator - Class in org.apache.hadoop.io
A WritableComparator optimized for Text keys.
Text.Comparator() - Constructor for class org.apache.hadoop.io.Text.Comparator
 
TextInputFormat - Class in org.apache.hadoop.mapred
Deprecated. Use TextInputFormat instead.
TextInputFormat() - Constructor for class org.apache.hadoop.mapred.TextInputFormat
Deprecated.  
TextInputFormat - Class in org.apache.hadoop.mapreduce.lib.input
An InputFormat for plain text files.
TextInputFormat() - Constructor for class org.apache.hadoop.mapreduce.lib.input.TextInputFormat
 
TextOutputFormat<K,V> - Class in org.apache.hadoop.mapred
Deprecated. Use TextOutputFormat instead.
TextOutputFormat() - Constructor for class org.apache.hadoop.mapred.TextOutputFormat
Deprecated.  
TextOutputFormat<K,V> - Class in org.apache.hadoop.mapreduce.lib.output
An OutputFormat that writes plain text files.
TextOutputFormat() - Constructor for class org.apache.hadoop.mapreduce.lib.output.TextOutputFormat
 
TextOutputFormat.LineRecordWriter<K,V> - Class in org.apache.hadoop.mapred
Deprecated.  
TextOutputFormat.LineRecordWriter(DataOutputStream, String) - Constructor for class org.apache.hadoop.mapred.TextOutputFormat.LineRecordWriter
Deprecated.  
TextOutputFormat.LineRecordWriter(DataOutputStream) - Constructor for class org.apache.hadoop.mapred.TextOutputFormat.LineRecordWriter
Deprecated.  
TextOutputFormat.LineRecordWriter<K,V> - Class in org.apache.hadoop.mapreduce.lib.output
 
TextOutputFormat.LineRecordWriter(DataOutputStream, String) - Constructor for class org.apache.hadoop.mapreduce.lib.output.TextOutputFormat.LineRecordWriter
 
TextOutputFormat.LineRecordWriter(DataOutputStream) - Constructor for class org.apache.hadoop.mapreduce.lib.output.TextOutputFormat.LineRecordWriter
 
TFile - Class in org.apache.hadoop.io.file.tfile
A TFile is a container of key-value pairs.
TFile.Reader - Class in org.apache.hadoop.io.file.tfile
TFile Reader.
TFile.Reader(FSDataInputStream, long, Configuration) - Constructor for class org.apache.hadoop.io.file.tfile.TFile.Reader
Constructor
TFile.Reader.Scanner - Class in org.apache.hadoop.io.file.tfile
The TFile Scanner.
TFile.Reader.Scanner(TFile.Reader, long, long) - Constructor for class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner
Constructor
TFile.Reader.Scanner(TFile.Reader, RawComparable, RawComparable) - Constructor for class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner
Constructor
TFile.Reader.Scanner.Entry - Class in org.apache.hadoop.io.file.tfile
Entry to a <Key, Value> pair.
TFile.Reader.Scanner.Entry() - Constructor for class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner.Entry
 
TFile.Writer - Class in org.apache.hadoop.io.file.tfile
TFile Writer.
TFile.Writer(FSDataOutputStream, int, String, String, Configuration) - Constructor for class org.apache.hadoop.io.file.tfile.TFile.Writer
Constructor
TIPStatus - Enum in org.apache.hadoop.mapred
The states of a TaskInProgress as seen by the JobTracker.
toArray() - Method in class org.apache.hadoop.io.ArrayWritable
 
toArray() - Method in class org.apache.hadoop.io.TwoDArrayWritable
 
toArray(Class<T>, List<T>) - Static method in class org.apache.hadoop.util.GenericsUtil
Converts the given List<T> to an array of T[].
toArray(List<T>) - Static method in class org.apache.hadoop.util.GenericsUtil
Converts the given List<T> to an array of T[].
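A minimal sketch of toArray(Class<T>, List<T>) above; the list contents are arbitrary.

    import java.util.Arrays;
    import java.util.List;
    import org.apache.hadoop.util.GenericsUtil;

    // Sketch: turn a List<String> into a String[] despite type erasure.
    public class ToArrayExample {
      public static void main(String[] args) {
        List<String> names = Arrays.asList("alpha", "beta");
        String[] arr = GenericsUtil.toArray(String.class, names);
        System.out.println(arr.length);   // 2
      }
    }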
toByteArray(Writable...) - Static method in class org.apache.hadoop.io.WritableUtils
Convert writables to a byte array
token - Variable in class org.apache.hadoop.record.compiler.generated.Rcc
 
Token - Class in org.apache.hadoop.record.compiler.generated
Describes the input token stream.
Token() - Constructor for class org.apache.hadoop.record.compiler.generated.Token
 
token_source - Variable in class org.apache.hadoop.record.compiler.generated.Rcc
 
TokenCounterMapper - Class in org.apache.hadoop.mapreduce.lib.map
Tokenize the input values and emit each word with a count of 1.
TokenCounterMapper() - Constructor for class org.apache.hadoop.mapreduce.lib.map.TokenCounterMapper
 
TokenCountMapper<K> - Class in org.apache.hadoop.mapred.lib
Deprecated. Use TokenCounterMapper instead.
TokenCountMapper() - Constructor for class org.apache.hadoop.mapred.lib.TokenCountMapper
Deprecated.  
tokenImage - Variable in exception org.apache.hadoop.record.compiler.generated.ParseException
This is a reference to the "tokenImage" array of the generated parser within which the parse error occurred.
tokenImage - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
TokenMgrError - Error in org.apache.hadoop.record.compiler.generated
 
TokenMgrError() - Constructor for error org.apache.hadoop.record.compiler.generated.TokenMgrError
 
TokenMgrError(String, int) - Constructor for error org.apache.hadoop.record.compiler.generated.TokenMgrError
 
TokenMgrError(boolean, int, int, int, String, char, int) - Constructor for error org.apache.hadoop.record.compiler.generated.TokenMgrError
 
toMap() - Method in class org.apache.hadoop.streaming.Environment
 
Tool - Interface in org.apache.hadoop.util
A tool interface that supports handling of generic command-line options.
ToolRunner - Class in org.apache.hadoop.util
A utility to help run Tools.
ToolRunner() - Constructor for class org.apache.hadoop.util.ToolRunner
 
top() - Method in class org.apache.hadoop.util.PriorityQueue
Returns the least element of the PriorityQueue in constant time.
toShort() - Method in class org.apache.hadoop.fs.permission.FsPermission
Encode the object to a short.
toString() - Method in class org.apache.hadoop.conf.Configuration.IntegerRanges
 
toString() - Method in class org.apache.hadoop.conf.Configuration
 
toString() - Method in class org.apache.hadoop.contrib.failmon.EventRecord
Creates and returns a string representation of the object.
toString() - Method in class org.apache.hadoop.contrib.failmon.SerializedRecord
Creates and returns a string representation of the object.
toString() - Method in class org.apache.hadoop.contrib.index.example.LineDocTextAndOp
 
toString() - Method in class org.apache.hadoop.contrib.index.lucene.FileSystemDirectory
 
toString() - Method in class org.apache.hadoop.contrib.index.lucene.ShardWriter
 
toString() - Method in class org.apache.hadoop.contrib.index.mapred.DocumentAndOp.Op
 
toString() - Method in class org.apache.hadoop.contrib.index.mapred.DocumentAndOp
 
toString() - Method in class org.apache.hadoop.contrib.index.mapred.DocumentID
 
toString() - Method in class org.apache.hadoop.contrib.index.mapred.IntermediateForm
 
toString() - Method in class org.apache.hadoop.contrib.index.mapred.Shard
 
toString() - Method in class org.apache.hadoop.fs.BlockLocation
 
toString() - Method in class org.apache.hadoop.fs.ContentSummary
toString(boolean) - Method in class org.apache.hadoop.fs.ContentSummary
Return the string representation of the object in the output format.
toString() - Method in class org.apache.hadoop.fs.DF
 
toString() - Method in class org.apache.hadoop.fs.DU
 
toString() - Method in class org.apache.hadoop.fs.FileSystem.Statistics
 
toString() - Method in class org.apache.hadoop.fs.MD5MD5CRC32FileChecksum
toString() - Method in class org.apache.hadoop.fs.Path
 
toString() - Method in class org.apache.hadoop.fs.permission.FsPermission
toString() - Method in class org.apache.hadoop.fs.permission.PermissionStatus
toString() - Method in class org.apache.hadoop.fs.RawLocalFileSystem
 
toString() - Method in class org.apache.hadoop.fs.s3.Block
 
toString() - Method in class org.apache.hadoop.io.BooleanWritable
 
toString() - Method in class org.apache.hadoop.io.BytesWritable
Generate the stream of bytes as hex pairs separated by ' '.
toString() - Method in class org.apache.hadoop.io.ByteWritable
 
toString() - Method in class org.apache.hadoop.io.compress.CompressionCodecFactory
Print the extension map out as a string.
toString(T) - Method in class org.apache.hadoop.io.DefaultStringifier
 
toString() - Method in class org.apache.hadoop.io.DoubleWritable
 
toString() - Method in class org.apache.hadoop.io.file.tfile.Utils.Version
Return a string representation of the version.
toString() - Method in class org.apache.hadoop.io.FloatWritable
 
toString() - Method in class org.apache.hadoop.io.GenericWritable
 
toString() - Method in class org.apache.hadoop.io.IntWritable
 
toString() - Method in class org.apache.hadoop.io.LongWritable
 
toString() - Method in class org.apache.hadoop.io.MD5Hash
Returns a string representation of this object.
toString() - Method in class org.apache.hadoop.io.NullWritable
 
toString() - Method in class org.apache.hadoop.io.ObjectWritable
 
toString() - Method in class org.apache.hadoop.io.SequenceFile.Metadata
 
toString() - Method in class org.apache.hadoop.io.SequenceFile.Reader
Returns the name of the file.
toString(T) - Method in interface org.apache.hadoop.io.Stringifier
Converts the object to a string representation
toString() - Method in class org.apache.hadoop.io.Text
Convert text back to string
toString() - Method in class org.apache.hadoop.io.UTF8
Deprecated. Convert to a String.
toString() - Method in exception org.apache.hadoop.io.VersionMismatchException
Returns a string representation of this object.
toString() - Method in class org.apache.hadoop.io.VIntWritable
 
toString() - Method in class org.apache.hadoop.io.VLongWritable
 
toString() - Method in class org.apache.hadoop.mapred.Counters
Deprecated. Return textual representation of the counter values.
toString() - Method in class org.apache.hadoop.mapred.FileSplit
Deprecated.  
toString() - Method in class org.apache.hadoop.mapred.jobcontrol.Job
 
toString() - Method in class org.apache.hadoop.mapred.join.TupleWritable
Convert Tuple to String as in the following.
toString() - Method in class org.apache.hadoop.mapred.lib.aggregate.UserDefinedValueAggregatorDescriptor
 
toString() - Method in class org.apache.hadoop.mapred.lib.CombineFileSplit
 
toString() - Method in class org.apache.hadoop.mapred.MultiFileSplit
Deprecated.  
toString() - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
 
toString() - Method in enum org.apache.hadoop.mapred.TaskLog.LogName
 
toString() - Method in class org.apache.hadoop.mapreduce.Counters
Return textual representation of the counter values.
toString() - Method in class org.apache.hadoop.mapreduce.ID
 
toString() - Method in class org.apache.hadoop.mapreduce.JobID
 
toString() - Method in class org.apache.hadoop.mapreduce.lib.input.FileSplit
 
toString() - Method in class org.apache.hadoop.mapreduce.TaskAttemptID
 
toString() - Method in class org.apache.hadoop.mapreduce.TaskID
 
toString() - Method in class org.apache.hadoop.net.NetworkTopology
convert a network tree to a string
toString() - Method in class org.apache.hadoop.net.NodeBase
Return this node's string representation
toString() - Method in class org.apache.hadoop.record.Buffer
 
toString(String) - Method in class org.apache.hadoop.record.Buffer
Convert the byte buffer to a string using a specific character encoding.
toString() - Method in class org.apache.hadoop.record.compiler.CodeBuffer
 
toString() - Method in class org.apache.hadoop.record.compiler.generated.Token
Returns the image.
toString() - Method in class org.apache.hadoop.record.Record
 
toString() - Method in class org.apache.hadoop.security.authorize.ConnectionPermission
 
toString() - Method in class org.apache.hadoop.security.Group
 
toString() - Method in class org.apache.hadoop.security.UnixUserGroupInformation
Convert this object to a string
toString() - Method in class org.apache.hadoop.security.User
 
toString() - Method in class org.apache.hadoop.util.bloom.BloomFilter
 
toString() - Method in class org.apache.hadoop.util.bloom.CountingBloomFilter
 
toString() - Method in class org.apache.hadoop.util.bloom.DynamicBloomFilter
 
toString() - Method in class org.apache.hadoop.util.ProcfsBasedProcessTree
Returns a string printing PIDs of process present in the ProcfsBasedProcessTree.
toString() - Method in class org.apache.hadoop.util.Progress
 
toString() - Method in class org.apache.hadoop.util.Shell.ShellCommandExecutor
Returns the commands of this instance.
toStrings() - Method in class org.apache.hadoop.io.ArrayWritable
 
TotalOrderPartitioner<K extends WritableComparable,V> - Class in org.apache.hadoop.mapred.lib
Partitioner effecting a total order by reading split points from an externally generated source.
TotalOrderPartitioner() - Constructor for class org.apache.hadoop.mapred.lib.TotalOrderPartitioner
 
touch(File) - Static method in class org.apache.hadoop.streaming.StreamUtil
 
touchFile(String) - Method in class org.apache.hadoop.contrib.index.lucene.FileSystemDirectory
 
toUri() - Method in class org.apache.hadoop.fs.Path
Convert this to a URI.
transferToFully(FileChannel, long, int) - Method in class org.apache.hadoop.net.SocketOutputStream
Transfers data from FileChannel using FileChannel.transferTo(long, long, WritableByteChannel).
transform(InputStream, InputStream, Writer) - Static method in class org.apache.hadoop.util.XMLUtils
Transform input xml given a stylesheet.
Trash - Class in org.apache.hadoop.fs
Provides a trash feature.
Trash(Configuration) - Constructor for class org.apache.hadoop.fs.Trash
Construct a trash can accessor.
Trash(FileSystem, Configuration) - Constructor for class org.apache.hadoop.fs.Trash
Construct a trash can accessor for the FileSystem provided.
truncate() - Method in class org.apache.hadoop.record.Buffer
Change the capacity of the backing store to be the same as the current count of buffer.
TRY_ONCE_DONT_FAIL - Static variable in class org.apache.hadoop.io.retry.RetryPolicies
Try once, and fail silently for void methods, or by re-throwing the exception for non-void methods.
TRY_ONCE_THEN_FAIL - Static variable in class org.apache.hadoop.io.retry.RetryPolicies
Try once, and fail by re-throwing the exception.
TupleWritable - Class in org.apache.hadoop.mapred.join
Writable type storing multiple Writables.
TupleWritable() - Constructor for class org.apache.hadoop.mapred.join.TupleWritable
Create an empty tuple with no allocated storage for writables.
TupleWritable(Writable[]) - Constructor for class org.apache.hadoop.mapred.join.TupleWritable
Initialize tuple with storage; unknown whether any of them contain "written" values.
TwoDArrayWritable - Class in org.apache.hadoop.io
A Writable for 2D arrays containing a matrix of instances of a class.
TwoDArrayWritable(Class) - Constructor for class org.apache.hadoop.io.TwoDArrayWritable
 
TwoDArrayWritable(Class, Writable[][]) - Constructor for class org.apache.hadoop.io.TwoDArrayWritable
 
twoRotations - Static variable in class org.apache.hadoop.examples.dancing.Pentomino
Is the piece identical if rotated 180 degrees?
Type() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
TYPE_SEPARATOR - Static variable in interface org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorDescriptor
 
TypeID - Class in org.apache.hadoop.record.meta
Represents typeID for basic types.
TypeID.RIOType - Class in org.apache.hadoop.record.meta
constants representing the IDL types we support
TypeID.RIOType() - Constructor for class org.apache.hadoop.record.meta.TypeID.RIOType
 
typeVal - Variable in class org.apache.hadoop.record.meta.TypeID
 

U

UGI_PROPERTY_NAME - Static variable in class org.apache.hadoop.security.UnixUserGroupInformation
 
UMASK_LABEL - Static variable in class org.apache.hadoop.fs.permission.FsPermission
umask property label
uncompressedValSerializer - Variable in class org.apache.hadoop.io.SequenceFile.Writer
 
unEscapeString(String) - Static method in class org.apache.hadoop.util.StringUtils
Unescape commas in the string using the default escape char
unEscapeString(String, char, char) - Static method in class org.apache.hadoop.util.StringUtils
Unescape charToEscape in the string with the escape char escapeChar
unEscapeString(String, char, char[]) - Static method in class org.apache.hadoop.util.StringUtils
 
UNIQ_VALUE_COUNT - Static variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
 
UniqValueCount - Class in org.apache.hadoop.mapred.lib.aggregate
This class implements a value aggregator that dedupes a sequence of objects.
UniqValueCount() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.UniqValueCount
the default constructor
UniqValueCount(long) - Constructor for class org.apache.hadoop.mapred.lib.aggregate.UniqValueCount
constructor
UnixUserGroupInformation - Class in org.apache.hadoop.security
An implementation of UserGroupInformation in the Unix system
UnixUserGroupInformation() - Constructor for class org.apache.hadoop.security.UnixUserGroupInformation
Default constructor
UnixUserGroupInformation(String, String[]) - Constructor for class org.apache.hadoop.security.UnixUserGroupInformation
Constructor with parameters user name and its group names.
UnixUserGroupInformation(String[]) - Constructor for class org.apache.hadoop.security.UnixUserGroupInformation
Constructor with parameter user/group names
unJar(File, File) - Static method in class org.apache.hadoop.util.RunJar
Unpack a jar file into a directory.
unregisterMBean(ObjectName) - Static method in class org.apache.hadoop.metrics.util.MBeanUtil
 
unregisterUpdater(Updater) - Method in interface org.apache.hadoop.metrics.MetricsContext
Removes a callback, if it exists.
unregisterUpdater(Updater) - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Removes a callback, if it exists.
unregisterUpdater(Updater) - Method in class org.apache.hadoop.metrics.spi.CompositeContext
 
unTar(File, File) - Static method in class org.apache.hadoop.fs.FileUtil
Given a tar File as input, it will untar the file in the untar directory passed as the second parameter. This utility will untar ".tar" files and ".tar.gz"/"tgz" files.
unwrapRemoteException(Class<?>...) - Method in exception org.apache.hadoop.ipc.RemoteException
If this remote exception wraps up one of the lookupTypes then return this exception.
unwrapRemoteException() - Method in exception org.apache.hadoop.ipc.RemoteException
Instantiate and return the exception wrapped up by this remote exception.
unZip(File, File) - Static method in class org.apache.hadoop.fs.FileUtil
Given a zip File as input, it will unzip the file in the unzip directory passed as the second parameter.
UPDATE - Static variable in class org.apache.hadoop.contrib.index.mapred.DocumentAndOp.Op
 
update() - Method in interface org.apache.hadoop.metrics.MetricsRecord
Updates the table of buffered data which is to be sent periodically.
update(MetricsRecordImpl) - Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Called by MetricsRecordImpl.update().
update() - Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Updates the table of buffered data which is to be sent periodically.
update(MetricsRecordImpl) - Method in class org.apache.hadoop.metrics.spi.NullContext
Do-nothing version of update
update(MetricsRecordImpl) - Method in class org.apache.hadoop.metrics.spi.NullContextWithUpdateThread
Do-nothing version of update
update(byte[], int, int) - Method in class org.apache.hadoop.util.DataChecksum
 
update(int) - Method in class org.apache.hadoop.util.DataChecksum
 
updateFileNames(String, String) - Method in class org.apache.hadoop.util.HostsFileReader
 
UpdateIndex - Class in org.apache.hadoop.contrib.index.main
A distributed "index" is partitioned into "shards".
UpdateIndex() - Constructor for class org.apache.hadoop.contrib.index.main.UpdateIndex
 
UpdateLineColumn(char) - Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
 
Updater - Interface in org.apache.hadoop.metrics
Call-back interface.
updateState(String, String, long) - Static method in class org.apache.hadoop.contrib.failmon.PersistentState
Update the state of parsing for a particular log file.
upload() - Method in class org.apache.hadoop.contrib.failmon.LocalStore
Upload the local file store into HDFS, after compressing it.
UPLOAD_INTERVAL - Static variable in class org.apache.hadoop.contrib.failmon.LocalStore
 
UPPER_LIMIT_ON_TASK_VMEM_PROPERTY - Static variable in class org.apache.hadoop.mapred.JobConf
Deprecated.  
upperBound(byte[]) - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner
Move the cursor to the first entry whose key is strictly greater than the input key.
upperBound(byte[], int, int) - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner
Move the cursor to the first entry whose key is strictly greater than the input key.
upperBound(List<? extends T>, T, Comparator<? super T>) - Static method in class org.apache.hadoop.io.file.tfile.Utils
Upper bound binary search.
upperBound(List<? extends Comparable<? super T>>, T) - Static method in class org.apache.hadoop.io.file.tfile.Utils
Upper bound binary search.
uriToString(URI[]) - Static method in class org.apache.hadoop.util.StringUtils
 
URL_PROPERTY - Static variable in class org.apache.hadoop.mapred.lib.db.DBConfiguration
JDBC Database access URL
USAGE - Static variable in class org.apache.hadoop.fs.shell.Count
 
usage() - Static method in class org.apache.hadoop.record.compiler.generated.Rcc
 
USAGES - Static variable in class org.apache.hadoop.log.LogLevel
 
User - Class in org.apache.hadoop.security
The username of a user.
User(String) - Constructor for class org.apache.hadoop.security.User
Create a new User with the given username.
USER_NAME_COMMAND - Static variable in class org.apache.hadoop.util.Shell
a Unix command to get the current user's name
UserDefinedValueAggregatorDescriptor - Class in org.apache.hadoop.mapred.lib.aggregate
This class implements a wrapper for a user defined value aggregator descriptor.
UserDefinedValueAggregatorDescriptor(String, JobConf) - Constructor for class org.apache.hadoop.mapred.lib.aggregate.UserDefinedValueAggregatorDescriptor
 
UserGroupInformation - Class in org.apache.hadoop.security
A Writable abstract class for storing user and group information.
UserGroupInformation() - Constructor for class org.apache.hadoop.security.UserGroupInformation
 
USERNAME_PROPERTY - Static variable in class org.apache.hadoop.mapred.lib.db.DBConfiguration
User name to access the database
USTRING_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
UTF8 - Class in org.apache.hadoop.io
Deprecated. replaced by Text
UTF8() - Constructor for class org.apache.hadoop.io.UTF8
Deprecated.  
UTF8(String) - Constructor for class org.apache.hadoop.io.UTF8
Deprecated. Construct from a given string.
UTF8(UTF8) - Constructor for class org.apache.hadoop.io.UTF8
Deprecated. Construct from a given string.
UTF8.Comparator - Class in org.apache.hadoop.io
Deprecated. A WritableComparator optimized for UTF8 keys.
UTF8.Comparator() - Constructor for class org.apache.hadoop.io.UTF8.Comparator
Deprecated.  
UTF8ByteArrayUtils - Class in org.apache.hadoop.streaming
Deprecated. use UTF8ByteArrayUtils and StreamKeyValUtil instead
UTF8ByteArrayUtils() - Constructor for class org.apache.hadoop.streaming.UTF8ByteArrayUtils
Deprecated.  
UTF8ByteArrayUtils - Class in org.apache.hadoop.util
 
UTF8ByteArrayUtils() - Constructor for class org.apache.hadoop.util.UTF8ByteArrayUtils
 
utf8Length(String) - Static method in class org.apache.hadoop.io.Text
For the given string, returns the number of UTF-8 bytes required to encode the string.
Util - Class in org.apache.hadoop.metrics.spi
Static utility methods
Utils - Class in org.apache.hadoop.io.file.tfile
Supporting Utility classes used by TFile, and shared by users of TFile.
Utils - Class in org.apache.hadoop.record.meta
Various utility functions for the Hadoop record I/O platform.
Utils - Class in org.apache.hadoop.record
Various utility functions for the Hadoop record I/O runtime.
Utils.Version - Class in org.apache.hadoop.io.file.tfile
A generic Version class.
Utils.Version(DataInput) - Constructor for class org.apache.hadoop.io.file.tfile.Utils.Version
Construct the Version object by reading from the input stream.
Utils.Version(short, short) - Constructor for class org.apache.hadoop.io.file.tfile.Utils.Version
Constructor.

V

validateUTF8(byte[]) - Static method in class org.apache.hadoop.io.Text
Check if a byte array contains valid utf-8
validateUTF8(byte[], int, int) - Static method in class org.apache.hadoop.io.Text
Check to see if a byte array is valid utf-8
value - Variable in enum org.apache.hadoop.util.StringUtils.TraditionalBinaryPrefix
 
VALUE_HISTOGRAM - Static variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
 
ValueAggregator - Interface in org.apache.hadoop.mapred.lib.aggregate
This interface defines the minimal protocol for value aggregators.
ValueAggregatorBaseDescriptor - Class in org.apache.hadoop.mapred.lib.aggregate
This class implements the common functionality of the subclasses of the ValueAggregatorDescriptor class.
ValueAggregatorBaseDescriptor() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
 
ValueAggregatorCombiner<K1 extends WritableComparable,V1 extends Writable> - Class in org.apache.hadoop.mapred.lib.aggregate
This class implements the generic combiner of Aggregate.
ValueAggregatorCombiner() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorCombiner
 
ValueAggregatorDescriptor - Interface in org.apache.hadoop.mapred.lib.aggregate
This interface defines the contract a value aggregator descriptor must support.
ValueAggregatorJob - Class in org.apache.hadoop.mapred.lib.aggregate
This is the main class for creating a map/reduce job using Aggregate framework.
ValueAggregatorJob() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
 
ValueAggregatorJobBase<K1 extends WritableComparable,V1 extends Writable> - Class in org.apache.hadoop.mapred.lib.aggregate
This abstract class implements some common functionality of the generic mapper, reducer and combiner classes of Aggregate.
ValueAggregatorJobBase() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJobBase
 
ValueAggregatorMapper<K1 extends WritableComparable,V1 extends Writable> - Class in org.apache.hadoop.mapred.lib.aggregate
This class implements the generic mapper of Aggregate.
ValueAggregatorMapper() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorMapper
 
ValueAggregatorReducer<K1 extends WritableComparable,V1 extends Writable> - Class in org.apache.hadoop.mapred.lib.aggregate
This class implements the generic reducer of Aggregate.
ValueAggregatorReducer() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorReducer
 
ValueHistogram - Class in org.apache.hadoop.mapred.lib.aggregate
This class implements a value aggregator that computes the histogram of a sequence of strings.
ValueHistogram() - Constructor for class org.apache.hadoop.mapred.lib.aggregate.ValueHistogram
 
valueOf(String) - Static method in enum org.apache.hadoop.contrib.failmon.OfflineAnonymizer.LogType
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.examples.dancing.Pentomino.SolutionCategory
Returns the enum constant of this type with the specified name.
valueOf(Attributes) - Static method in class org.apache.hadoop.fs.MD5MD5CRC32FileChecksum
Return the object represented in the attributes.
valueOf(String) - Static method in enum org.apache.hadoop.fs.permission.FsAction
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in class org.apache.hadoop.fs.permission.FsPermission
Create a FsPermission from a Unix symbolic permission string
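
A small illustrative sketch of FsPermission.valueOf; the 10-character ls-style string used here is an assumption about the expected input format.

    import org.apache.hadoop.fs.permission.FsPermission;

    public class PermissionSketch {
      public static void main(String[] args) {
        FsPermission perm = FsPermission.valueOf("-rwxr-x---");
        System.out.println(perm);            // symbolic form, e.g. rwxr-x---
        System.out.println(perm.toShort());  // numeric form
      }
    }
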
valueOf(String) - Static method in enum org.apache.hadoop.io.compress.zlib.ZlibCompressor.CompressionHeader
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.io.compress.zlib.ZlibCompressor.CompressionLevel
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.io.compress.zlib.ZlibCompressor.CompressionStrategy
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.io.compress.zlib.ZlibDecompressor.CompressionHeader
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.io.SequenceFile.CompressionType
Returns the enum constant of this type with the specified name.
valueOf(Attributes) - Static method in exception org.apache.hadoop.ipc.RemoteException
Create RemoteException from attributes
valueOf(String) - Static method in enum org.apache.hadoop.mapred.JobClient.TaskStatusFilter
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.mapred.JobHistory.Keys
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.mapred.JobHistory.RecordTypes
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.mapred.JobHistory.Values
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.mapred.JobPriority
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.mapred.JobTracker.State
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.mapred.join.Parser.TType
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.mapred.TaskCompletionEvent.Status
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.mapred.TaskLog.LogName
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.mapred.TIPStatus
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.mapreduce.Job.JobState
Returns the enum constant of this type with the specified name.
valueOf(String) - Static method in enum org.apache.hadoop.util.StringUtils.TraditionalBinaryPrefix
Returns the enum constant of this type with the specified name.
valueOf(char) - Static method in enum org.apache.hadoop.util.StringUtils.TraditionalBinaryPrefix
 
values() - Static method in enum org.apache.hadoop.contrib.failmon.OfflineAnonymizer.LogType
Returns an array containing the constants of this enum type, in the order they are declared.
values() - Static method in enum org.apache.hadoop.examples.dancing.Pentomino.SolutionCategory
Returns an array containing the constants of this enum type, in the order they are declared.
values() - Static method in enum org.apache.hadoop.fs.permission.FsAction
Returns an array containing the constants of this enum type, in the order they are declared.
values() - Static method in enum org.apache.hadoop.io.compress.zlib.ZlibCompressor.CompressionHeader
Returns an array containing the constants of this enum type, in the order they are declared.
values() - Static method in enum org.apache.hadoop.io.compress.zlib.ZlibCompressor.CompressionLevel
Returns an array containing the constants of this enum type, in the order they are declared.
values() - Static method in enum org.apache.hadoop.io.compress.zlib.ZlibCompressor.CompressionStrategy
Returns an array containing the constants of this enum type, in the order they are declared.
values() - Static method in enum org.apache.hadoop.io.compress.zlib.ZlibDecompressor.CompressionHeader
Returns an array containing the constants of this enum type, in the order they are declared.
values() - Method in class org.apache.hadoop.io.MapWritable
values() - Static method in enum org.apache.hadoop.io.SequenceFile.CompressionType
Returns an array containing the constants of this enum type, in the order they are declared.
values() - Method in class org.apache.hadoop.io.SortedMapWritable
values() - Static method in enum org.apache.hadoop.mapred.JobClient.TaskStatusFilter
Returns an array containing the constants of this enum type, in the order they are declared.
values() - Static method in enum org.apache.hadoop.mapred.JobHistory.Keys
Returns an array containing the constants of this enum type, in the order they are declared.
values() - Static method in enum org.apache.hadoop.mapred.JobHistory.RecordTypes
Returns an array containing the constants of this enum type, in the order they are declared.
values() - Static method in enum org.apache.hadoop.mapred.JobHistory.Values
Returns an array containing the constants of this enum type, in the order they are declared.
values() - Static method in enum org.apache.hadoop.mapred.JobPriority
Returns an array containing the constants of this enum type, in the order they are declared.
values() - Static method in enum org.apache.hadoop.mapred.JobTracker.State
Returns an array containing the constants of this enum type, in the order they are declared.
values() - Static method in enum org.apache.hadoop.mapred.join.Parser.TType
Returns an array containing the constants of this enum type, in the order they are declared.
values() - Static method in enum org.apache.hadoop.mapred.TaskCompletionEvent.Status
Returns an array containing the constants of this enum type, in the order they are declared.
values() - Static method in enum org.apache.hadoop.mapred.TaskLog.LogName
Returns an array containing the constants of this enum type, in the order they are declared.
values() - Static method in enum org.apache.hadoop.mapred.TIPStatus
Returns an array containing the constants of this enum type, in the order they are declared.
values() - Static method in enum org.apache.hadoop.mapreduce.Job.JobState
Returns an array containing the constants of this enum type, in the order they are declared.
values() - Static method in enum org.apache.hadoop.util.StringUtils.TraditionalBinaryPrefix
Returns an array containing the constants of this enum type, in the order they are declared.
Vector() - Method in class org.apache.hadoop.record.compiler.generated.Rcc
 
VECTOR - Static variable in class org.apache.hadoop.record.meta.TypeID.RIOType
 
VECTOR_TKN - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
vectorSize - Variable in class org.apache.hadoop.util.bloom.Filter
The vector size of this filter.
VectorTypeID - Class in org.apache.hadoop.record.meta
Represents typeID for vector.
VectorTypeID(TypeID) - Constructor for class org.apache.hadoop.record.meta.VectorTypeID
 
verbose - Variable in class org.apache.hadoop.streaming.JarBuilder
 
verbose_ - Variable in class org.apache.hadoop.streaming.StreamJob
 
VERSION - Static variable in class org.apache.hadoop.fs.HarFileSystem
 
VersionedProtocol - Interface in org.apache.hadoop.ipc
Superclass of all protocols that use Hadoop RPC.
VersionedWritable - Class in org.apache.hadoop.io
A base class for Writables that provides version checking.
VersionedWritable() - Constructor for class org.apache.hadoop.io.VersionedWritable
 
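
A hedged sketch of a VersionedWritable subclass; the class and field names are made up. The pattern is to return a version byte from getVersion() and to call super.write()/super.readFields() so the version is written and checked (a mismatch raises VersionMismatchException).

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import org.apache.hadoop.io.VersionedWritable;

    public class ExampleVersionedRecord extends VersionedWritable {
      private static final byte VERSION = 1;
      private long value;

      public byte getVersion() { return VERSION; }

      public void write(DataOutput out) throws IOException {
        super.write(out);      // writes the version byte first
        out.writeLong(value);
      }

      public void readFields(DataInput in) throws IOException {
        super.readFields(in);  // reads and verifies the version byte
        value = in.readLong();
      }
    }
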
versionID - Static variable in interface org.apache.hadoop.security.authorize.RefreshAuthorizationPolicyProtocol
Version 1: Initial version
VersionInfo - Class in org.apache.hadoop.util
This class finds the package info for Hadoop and the HadoopVersionAnnotation information.
VersionInfo() - Constructor for class org.apache.hadoop.util.VersionInfo
 
VersionMismatchException - Exception in org.apache.hadoop.fs.s3
Thrown when Hadoop cannot read the version of the data stored in S3FileSystem.
VersionMismatchException(String, String) - Constructor for exception org.apache.hadoop.fs.s3.VersionMismatchException
 
VersionMismatchException - Exception in org.apache.hadoop.io
Thrown by VersionedWritable.readFields(DataInput) when the version of an object being read does not match the current implementation version as returned by VersionedWritable.getVersion().
VersionMismatchException(byte, byte) - Constructor for exception org.apache.hadoop.io.VersionMismatchException
 
VIntWritable - Class in org.apache.hadoop.io
A WritableComparable for integer values stored in variable-length format.
VIntWritable() - Constructor for class org.apache.hadoop.io.VIntWritable
 
VIntWritable(int) - Constructor for class org.apache.hadoop.io.VIntWritable
 
VLongWritable - Class in org.apache.hadoop.io
A WritableComparable for longs in a variable-length format.
VLongWritable() - Constructor for class org.apache.hadoop.io.VLongWritable
 
VLongWritable(long) - Constructor for class org.apache.hadoop.io.VLongWritable
 

W

waitForCompletion() - Method in interface org.apache.hadoop.mapred.RunningJob
Blocks until the job is complete.
waitForCompletion(boolean) - Method in class org.apache.hadoop.mapreduce.Job
Submit the job to the cluster and wait for it to finish.
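
A sketch of submitting a job with the org.apache.hadoop.mapreduce API and blocking on completion; the mapper/reducer classes and paths are placeholders, not part of this index.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class SubmitSketch {
      public static void main(String[] args) throws Exception {
        Job job = new Job(new Configuration(), "example");
        job.setJarByClass(SubmitSketch.class);
        // job.setMapperClass(...); job.setReducerClass(...);  // app-specific
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);  // true = report progress
      }
    }
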
waitForProxy(Class, long, InetSocketAddress, Configuration) - Static method in class org.apache.hadoop.ipc.RPC
 
waitForReadable() - Method in class org.apache.hadoop.net.SocketInputStream
Waits for the underlying channel to be ready for reading.
waitForWritable() - Method in class org.apache.hadoop.net.SocketOutputStream
Waits for the underlying channel to be ready for writing.
WAITING - Static variable in class org.apache.hadoop.mapred.jobcontrol.Job
 
webAppContext - Variable in class org.apache.hadoop.http.HttpServer
 
webServer - Variable in class org.apache.hadoop.http.HttpServer
 
width - Variable in class org.apache.hadoop.examples.dancing.Pentomino
 
width - Static variable in class org.apache.hadoop.mapred.TaskGraphServlet
Width of the graph w/o margins
WILDCARD_ACL_VALUE - Static variable in class org.apache.hadoop.security.SecurityUtil.AccessControlList
 
windowBits() - Method in enum org.apache.hadoop.io.compress.zlib.ZlibCompressor.CompressionHeader
 
windowBits() - Method in enum org.apache.hadoop.io.compress.zlib.ZlibDecompressor.CompressionHeader
 
WINDOWS - Static variable in class org.apache.hadoop.util.Shell
Set to true on Windows platforms
WithinMultiLineComment - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
WithinOneLineComment - Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
 
WordCount - Class in org.apache.hadoop.examples
 
WordCount() - Constructor for class org.apache.hadoop.examples.WordCount
 
WordCount.IntSumReducer - Class in org.apache.hadoop.examples
 
WordCount.IntSumReducer() - Constructor for class org.apache.hadoop.examples.WordCount.IntSumReducer
 
WordCount.TokenizerMapper - Class in org.apache.hadoop.examples
 
WordCount.TokenizerMapper() - Constructor for class org.apache.hadoop.examples.WordCount.TokenizerMapper
 
WORK_FACTOR - Static variable in class org.apache.hadoop.io.compress.bzip2.CBZip2OutputStream
This constant is accessible by subclasses for historical purposes.
WrappedRecordReader<K extends WritableComparable,U extends Writable> - Class in org.apache.hadoop.mapred.join
Proxy class for a RecordReader participating in the join framework.
Writable - Interface in org.apache.hadoop.io
A serializable object which implements a simple, efficient, serialization protocol, based on DataInput and DataOutput.
WritableComparable<T> - Interface in org.apache.hadoop.io
A Writable which is also Comparable.
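
A minimal custom WritableComparable illustrating the contract described above: write() and readFields() must serialize the same fields in the same order. Class and field names are illustrative only.

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import org.apache.hadoop.io.WritableComparable;

    public class LongPairWritable implements WritableComparable<LongPairWritable> {
      private long first;
      private long second;

      public void set(long f, long s) { first = f; second = s; }

      public void write(DataOutput out) throws IOException {
        out.writeLong(first);
        out.writeLong(second);
      }

      public void readFields(DataInput in) throws IOException {
        first = in.readLong();
        second = in.readLong();
      }

      public int compareTo(LongPairWritable o) {
        int cmp = Long.valueOf(first).compareTo(o.first);
        return cmp != 0 ? cmp : Long.valueOf(second).compareTo(o.second);
      }
    }
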
WritableComparator - Class in org.apache.hadoop.io
A Comparator for WritableComparables.
WritableComparator(Class<? extends WritableComparable>) - Constructor for class org.apache.hadoop.io.WritableComparator
Construct for a WritableComparable implementation.
WritableComparator(Class<? extends WritableComparable>, boolean) - Constructor for class org.apache.hadoop.io.WritableComparator
 
WritableFactories - Class in org.apache.hadoop.io
Factories for non-public writables.
WritableFactory - Interface in org.apache.hadoop.io
A factory for a class of Writable.
WritableName - Class in org.apache.hadoop.io
Utility to permit renaming of Writable implementation classes without invalidating files that contain their class name.
WritableSerialization - Class in org.apache.hadoop.io.serializer
A Serialization for Writables that delegates to Writable.write(java.io.DataOutput) and Writable.readFields(java.io.DataInput).
WritableSerialization() - Constructor for class org.apache.hadoop.io.serializer.WritableSerialization
 
WritableUtils - Class in org.apache.hadoop.io
 
WritableUtils() - Constructor for class org.apache.hadoop.io.WritableUtils
 
write(DataOutput) - Method in class org.apache.hadoop.conf.Configuration
 
write(DataOutput) - Method in class org.apache.hadoop.contrib.index.example.LineDocTextAndOp
 
write(DataOutput) - Method in class org.apache.hadoop.contrib.index.mapred.DocumentAndOp
 
write(DataOutput) - Method in class org.apache.hadoop.contrib.index.mapred.DocumentID
 
write(DataOutput) - Method in class org.apache.hadoop.contrib.index.mapred.IntermediateForm
 
write(DataOutput) - Method in class org.apache.hadoop.contrib.index.mapred.Shard
 
write(DataOutput) - Method in class org.apache.hadoop.examples.MultiFileWordCount.WordOffset
 
write(DataOutput) - Method in class org.apache.hadoop.examples.SecondarySort.IntPair
 
write(DataOutput) - Method in class org.apache.hadoop.examples.SleepJob.EmptySplit
 
write(DataOutput) - Method in class org.apache.hadoop.fs.BlockLocation
Implement write of Writable
write(DataOutput) - Method in class org.apache.hadoop.fs.ContentSummary
Serialize the fields of this object to out.
write(DataOutput) - Method in class org.apache.hadoop.fs.FileStatus
 
write(int) - Method in class org.apache.hadoop.fs.FSOutputSummer
Write one byte
write(byte[], int, int) - Method in class org.apache.hadoop.fs.FSOutputSummer
Writes len bytes from the specified byte array starting at offset off and generate a checksum for each data chunk.
write(DataOutput) - Method in class org.apache.hadoop.fs.MD5MD5CRC32FileChecksum
Serialize the fields of this object to out.
write(XMLOutputter, MD5MD5CRC32FileChecksum) - Static method in class org.apache.hadoop.fs.MD5MD5CRC32FileChecksum
Write that object to xml output.
write(DataOutput) - Method in class org.apache.hadoop.fs.permission.FsPermission
Serialize the fields of this object to out.
write(DataOutput) - Method in class org.apache.hadoop.fs.permission.PermissionStatus
Serialize the fields of this object to out.
write(DataOutput, String, String, FsPermission) - Static method in class org.apache.hadoop.fs.permission.PermissionStatus
Serialize a PermissionStatus from its base components.
write(DataOutput) - Method in class org.apache.hadoop.io.AbstractMapWritable
Serialize the fields of this object to out.
write(DataOutput) - Method in class org.apache.hadoop.io.ArrayWritable
 
write(DataOutput) - Method in class org.apache.hadoop.io.BooleanWritable
 
write(DataOutput) - Method in class org.apache.hadoop.io.BytesWritable
 
write(DataOutput) - Method in class org.apache.hadoop.io.ByteWritable
 
write(byte[], int, int) - Method in class org.apache.hadoop.io.compress.BlockCompressorStream
Write the data provided to the compression codec, compressing no more than the buffer size less the compression overhead as specified during construction for each block.
write(int) - Method in class org.apache.hadoop.io.compress.bzip2.CBZip2OutputStream
 
write(byte[], int, int) - Method in class org.apache.hadoop.io.compress.bzip2.CBZip2OutputStream
 
write(byte[], int, int) - Method in class org.apache.hadoop.io.compress.CompressionOutputStream
Write compressed bytes to the stream.
write(byte[], int, int) - Method in class org.apache.hadoop.io.compress.CompressorStream
 
write(int) - Method in class org.apache.hadoop.io.compress.CompressorStream
 
write(int) - Method in class org.apache.hadoop.io.compress.GzipCodec.GzipOutputStream
 
write(byte[], int, int) - Method in class org.apache.hadoop.io.compress.GzipCodec.GzipOutputStream
 
write(DataOutput) - Method in class org.apache.hadoop.io.CompressedWritable
 
write(DataInput, int) - Method in class org.apache.hadoop.io.DataOutputBuffer
Writes bytes from a DataInput directly into the buffer.
write(DataOutput) - Method in class org.apache.hadoop.io.DoubleWritable
 
write(DataOutput) - Method in class org.apache.hadoop.io.file.tfile.Utils.Version
Write the object to a DataOutput.
write(DataOutput) - Method in class org.apache.hadoop.io.FloatWritable
 
write(DataOutput) - Method in class org.apache.hadoop.io.GenericWritable
 
write(DataOutput) - Method in class org.apache.hadoop.io.IntWritable
 
write(byte[], int, int) - Method in class org.apache.hadoop.io.IOUtils.NullOutputStream
 
write(int) - Method in class org.apache.hadoop.io.IOUtils.NullOutputStream
 
write(DataOutput) - Method in class org.apache.hadoop.io.LongWritable
 
write(DataOutput) - Method in class org.apache.hadoop.io.MapWritable
Serialize the fields of this object to out.
write(DataOutput) - Method in class org.apache.hadoop.io.MD5Hash
 
write(DataOutput) - Method in class org.apache.hadoop.io.NullWritable
 
write(DataOutput) - Method in class org.apache.hadoop.io.ObjectWritable
 
write(InputStream, int) - Method in class org.apache.hadoop.io.OutputBuffer
Writes bytes from a InputStream directly into the buffer.
write(DataOutput) - Method in class org.apache.hadoop.io.SequenceFile.Metadata
 
write(DataOutput) - Method in class org.apache.hadoop.io.SortedMapWritable
Serialize the fields of this object to out.
write(DataOutput) - Method in class org.apache.hadoop.io.Text
Serialize: write this object to out; the length uses zero-compressed encoding.
write(DataOutput) - Method in class org.apache.hadoop.io.TwoDArrayWritable
 
write(DataOutput) - Method in class org.apache.hadoop.io.UTF8
Deprecated.  
write(DataOutput) - Method in class org.apache.hadoop.io.VersionedWritable
 
write(DataOutput) - Method in class org.apache.hadoop.io.VIntWritable
 
write(DataOutput) - Method in class org.apache.hadoop.io.VLongWritable
 
write(DataOutput) - Method in interface org.apache.hadoop.io.Writable
Serialize the fields of this object to out.
write(DataOutput) - Method in class org.apache.hadoop.mapred.ClusterStatus
 
write(DataOutput) - Method in class org.apache.hadoop.mapred.Counters.Group
Deprecated.  
write(DataOutput) - Method in class org.apache.hadoop.mapred.Counters
Deprecated. Write the set of groups.
write(DataOutput) - Method in class org.apache.hadoop.mapred.FileSplit
Deprecated.  
write(DataOutput) - Method in class org.apache.hadoop.mapred.JobProfile
 
write(DataOutput) - Method in class org.apache.hadoop.mapred.JobQueueInfo
 
write(DataOutput) - Method in class org.apache.hadoop.mapred.JobStatus
 
write(DataOutput) - Method in class org.apache.hadoop.mapred.join.CompositeInputSplit
Write splits in the following format.
write(DataOutput) - Method in class org.apache.hadoop.mapred.join.TupleWritable
Writes each Writable to out.
write(DataOutput) - Method in class org.apache.hadoop.mapred.lib.CombineFileSplit
 
write(DataOutput) - Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat.DBInputSplit
Serialize the fields of this object to out.
write(DataOutput) - Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat.NullDBWritable
 
write(PreparedStatement) - Method in class org.apache.hadoop.mapred.lib.db.DBInputFormat.NullDBWritable
 
write(K, V) - Method in class org.apache.hadoop.mapred.lib.db.DBOutputFormat.DBRecordWriter
Writes a key/value pair.
write(PreparedStatement) - Method in interface org.apache.hadoop.mapred.lib.db.DBWritable
Sets the fields of the object in the PreparedStatement.
write(K, V) - Method in interface org.apache.hadoop.mapred.RecordWriter
Writes a key/value pair.
write(DataOutput) - Method in class org.apache.hadoop.mapred.TaskCompletionEvent
 
write(DataOutput) - Method in class org.apache.hadoop.mapred.TaskReport
 
write(K, V) - Method in class org.apache.hadoop.mapred.TextOutputFormat.LineRecordWriter
Deprecated.  
write(DataOutput) - Method in class org.apache.hadoop.mapreduce.Counter
Write the binary representation of the counter
write(DataOutput) - Method in class org.apache.hadoop.mapreduce.CounterGroup
 
write(DataOutput) - Method in class org.apache.hadoop.mapreduce.Counters
Write the set of groups.
write(DataOutput) - Method in class org.apache.hadoop.mapreduce.ID
 
write(DataOutput) - Method in class org.apache.hadoop.mapreduce.JobID
 
write(DataOutput) - Method in class org.apache.hadoop.mapreduce.lib.input.FileSplit
 
write(K, V) - Method in class org.apache.hadoop.mapreduce.lib.output.TextOutputFormat.LineRecordWriter
 
write(K, V) - Method in class org.apache.hadoop.mapreduce.RecordWriter
Writes a key/value pair.
write(DataOutput) - Method in class org.apache.hadoop.mapreduce.TaskAttemptID
 
write(DataOutput) - Method in class org.apache.hadoop.mapreduce.TaskID
 
write(KEYOUT, VALUEOUT) - Method in class org.apache.hadoop.mapreduce.TaskInputOutputContext
Generate an output key/value pair.
write(int) - Method in class org.apache.hadoop.net.SocketOutputStream
 
write(byte[], int, int) - Method in class org.apache.hadoop.net.SocketOutputStream
 
write(ByteBuffer) - Method in class org.apache.hadoop.net.SocketOutputStream
 
write(DataOutput) - Method in class org.apache.hadoop.record.Record
 
write(DataOutput) - Method in class org.apache.hadoop.security.UnixUserGroupInformation
Serialize this object: first write a string marking that this is a UGI in the string format, then write this object's serialized form to the given data output.
write(DataOutput) - Method in class org.apache.hadoop.util.bloom.BloomFilter
 
write(DataOutput) - Method in class org.apache.hadoop.util.bloom.CountingBloomFilter
 
write(DataOutput) - Method in class org.apache.hadoop.util.bloom.DynamicBloomFilter
 
write(DataOutput) - Method in class org.apache.hadoop.util.bloom.Filter
 
write(DataOutput) - Method in class org.apache.hadoop.util.bloom.Key
 
write(DataOutput) - Method in class org.apache.hadoop.util.bloom.RetouchedBloomFilter
 
writeBool(boolean, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
 
writeBool(boolean, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
 
writeBool(boolean, String) - Method in interface org.apache.hadoop.record.RecordOutput
Write a boolean to serialized record.
writeBool(boolean, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
 
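
A hedged sketch of the record I/O RecordOutput methods indexed here; the same writeBool/writeInt/writeString calls work against BinaryRecordOutput, CsvRecordOutput and XmlRecordOutput. Records generated by rcc normally drive these calls from their serialize() method; writing primitives directly, as below, is only for illustration.

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import org.apache.hadoop.record.CsvRecordOutput;

    public class RecordOutputSketch {
      public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        CsvRecordOutput out = new CsvRecordOutput(bytes);
        out.writeBool(true, "enabled");       // second argument is a field tag
        out.writeInt(42, "count");
        out.writeString("hello", "message");
        System.out.println(bytes.toString()); // comma-separated text form
      }
    }
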
writeBuffer(Buffer, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
 
writeBuffer(Buffer, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
 
writeBuffer(Buffer, String) - Method in interface org.apache.hadoop.record.RecordOutput
Write a buffer to serialized record.
writeBuffer(Buffer, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
 
writeByte(byte, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
 
writeByte(byte, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
 
writeByte(byte, String) - Method in interface org.apache.hadoop.record.RecordOutput
Write a byte to serialized record.
writeByte(byte, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
 
writeChunk(byte[], int, int, byte[]) - Method in class org.apache.hadoop.fs.FSOutputSummer
 
writeCompressed(DataOutput) - Method in class org.apache.hadoop.io.CompressedWritable
Subclasses implement this instead of CompressedWritable.write(DataOutput).
writeCompressedByteArray(DataOutput, byte[]) - Static method in class org.apache.hadoop.io.WritableUtils
 
writeCompressedBytes(DataOutputStream) - Method in interface org.apache.hadoop.io.SequenceFile.ValueBytes
Write compressed bytes to outStream.
writeCompressedBytes(DataOutputStream) - Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat.WritableValueBytes
 
writeCompressedString(DataOutput, String) - Static method in class org.apache.hadoop.io.WritableUtils
 
writeCompressedStringArray(DataOutput, String[]) - Static method in class org.apache.hadoop.io.WritableUtils
 
writeDouble(double, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
 
writeDouble(double, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
 
writeDouble(double, String) - Method in interface org.apache.hadoop.record.RecordOutput
Write a double precision floating point number to serialized record.
writeDouble(double, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
 
writeEnum(DataOutput, Enum<?>) - Static method in class org.apache.hadoop.io.WritableUtils
Writes the String value of an enum to DataOutput.
writeFile(SequenceFile.Sorter.RawKeyValueIterator, SequenceFile.Writer) - Method in class org.apache.hadoop.io.SequenceFile.Sorter
Writes records from RawKeyValueIterator into a file represented by the passed writer
writeFloat(float, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
 
writeFloat(float, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
 
writeFloat(float, String) - Method in interface org.apache.hadoop.record.RecordOutput
Write a single-precision float to serialized record.
writeFloat(float, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
 
writeHeader(DataOutputStream) - Method in class org.apache.hadoop.util.DataChecksum
Writes the checksum header to the output stream out.
writeInt(int, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
 
writeInt(int, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
 
writeInt(int, String) - Method in interface org.apache.hadoop.record.RecordOutput
Write an integer to serialized record.
writeInt(int, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
 
writeKey(OutputStream) - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner.Entry
Write the key to the output stream.
writeLong(long, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
 
writeLong(long, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
 
writeLong(long, String) - Method in interface org.apache.hadoop.record.RecordOutput
Write a long integer to serialized record.
writeLong(long, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
 
writeObject(DataOutput, Object, Class, Configuration) - Static method in class org.apache.hadoop.io.ObjectWritable
Write a Writable, String, primitive type, or an array of the preceding.
writePartitionFile(JobConf, Path) - Static method in class org.apache.hadoop.examples.terasort.TeraInputFormat
Use the input splits to take samples of the input and generate sample keys.
writePartitionFile(JobConf, InputSampler.Sampler<K, V>) - Static method in class org.apache.hadoop.mapred.lib.InputSampler
Write a partition file for the given job, using the Sampler provided.
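
A sketch of total-order partitioning with InputSampler (old mapred API); the sampling frequency, sample count and partition-file path are illustrative assumptions.

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.lib.InputSampler;
    import org.apache.hadoop.mapred.lib.TotalOrderPartitioner;

    public class SamplerSketch {
      public static void configure(JobConf job) throws Exception {
        job.setPartitionerClass(TotalOrderPartitioner.class);
        TotalOrderPartitioner.setPartitionFile(job, new Path("/tmp/_partitions"));
        // Sample roughly 10% of records, up to 10000 samples, to pick boundaries.
        InputSampler.Sampler<Text, Text> sampler =
            new InputSampler.RandomSampler<Text, Text>(0.1, 10000);
        InputSampler.writePartitionFile(job, sampler);
      }
    }
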
writeRAMFiles(DataOutput, RAMDirectory, String[]) - Static method in class org.apache.hadoop.contrib.index.lucene.RAMDirectoryUtil
Write a number of files from a ram directory to a data output.
writeState(String) - Static method in class org.apache.hadoop.contrib.failmon.PersistentState
Write the state of parsing for all open log files to a property file on disk.
writeString(DataOutput, String) - Static method in class org.apache.hadoop.io.file.tfile.Utils
Write a String as a VInt n, followed by n Bytes as in Text format.
writeString(DataOutput, String) - Static method in class org.apache.hadoop.io.Text
Write a UTF8 encoded string to out
writeString(DataOutput, String) - Static method in class org.apache.hadoop.io.UTF8
Deprecated. Write a UTF-8 encoded string.
writeString(DataOutput, String) - Static method in class org.apache.hadoop.io.WritableUtils
 
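
A small round-trip sketch for the string helpers indexed above: Text.writeString writes a vint length followed by UTF-8 bytes, and Text.readString reads it back (WritableUtils.writeString/readString use their own framing).

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;
    import org.apache.hadoop.io.Text;

    public class StringIoSketch {
      public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        Text.writeString(out, "hello, world");   // vint length + UTF-8 bytes
        out.close();

        DataInputStream in =
            new DataInputStream(new ByteArrayInputStream(bytes.toByteArray()));
        System.out.println(Text.readString(in)); // prints hello, world
      }
    }
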
writeString(String, String) - Method in class org.apache.hadoop.record.BinaryRecordOutput
 
writeString(String, String) - Method in class org.apache.hadoop.record.CsvRecordOutput
 
writeString(String, String) - Method in interface org.apache.hadoop.record.RecordOutput
Write a unicode string to serialized record.
writeString(String, String) - Method in class org.apache.hadoop.record.XmlRecordOutput
 
writeStringArray(DataOutput, String[]) - Static method in class org.apache.hadoop.io.WritableUtils
 
writeTo(OutputStream) - Method in class org.apache.hadoop.io.DataOutputBuffer
Write to a file stream
writeUncompressedBytes(DataOutputStream) - Method in interface org.apache.hadoop.io.SequenceFile.ValueBytes
Writes the uncompressed bytes to the outStream.
writeUncompressedBytes(DataOutputStream) - Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat.WritableValueBytes
 
writeValue(OutputStream) - Method in class org.apache.hadoop.io.file.tfile.TFile.Reader.Scanner.Entry
Write the value to the output stream.
writeValue(DataOutputStream, boolean) - Method in class org.apache.hadoop.util.DataChecksum
Writes the current checksum to the stream.
writeValue(byte[], int, boolean) - Method in class org.apache.hadoop.util.DataChecksum
Writes the current checksum to a buffer.
writeVInt(DataOutput, int) - Static method in class org.apache.hadoop.io.file.tfile.Utils
Encode an integer into a variable-length format.
writeVInt(DataOutput, int) - Static method in class org.apache.hadoop.io.WritableUtils
Serializes an integer to a binary stream with zero-compressed encoding.
writeVInt(DataOutput, int) - Static method in class org.apache.hadoop.record.Utils
Serializes an int to a binary stream with zero-compressed encoding.
writeVLong(DataOutput, long) - Static method in class org.apache.hadoop.io.file.tfile.Utils
Encode a long integer into a variable-length format.
writeVLong(DataOutput, long) - Static method in class org.apache.hadoop.io.WritableUtils
Serializes a long to a binary stream with zero-compressed encoding.
writeVLong(DataOutput, long) - Static method in class org.apache.hadoop.record.Utils
Serializes a long to a binary stream with zero-compressed encoding.
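
A round-trip sketch of the zero-compressed vint/vlong encoding: small magnitudes take a single byte, larger values take up to 5 (int) or 9 (long) bytes.

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;
    import org.apache.hadoop.io.WritableUtils;

    public class VIntSketch {
      public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        WritableUtils.writeVInt(out, 42);              // one byte on the wire
        WritableUtils.writeVLong(out, 1234567890123L);
        out.close();

        DataInputStream in =
            new DataInputStream(new ByteArrayInputStream(bytes.toByteArray()));
        System.out.println(WritableUtils.readVInt(in));   // 42
        System.out.println(WritableUtils.readVLong(in));  // 1234567890123
      }
    }
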
writeXml(OutputStream) - Method in class org.apache.hadoop.conf.Configuration
Write out the non-default properties in this configuration to the given OutputStream.
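
A sketch of dumping a configuration as XML; the property name is a made-up example.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;

    public class ConfDumpSketch {
      public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        conf.set("example.setting", "value");  // hypothetical property
        conf.writeXml(System.out);             // <configuration>...</configuration>
      }
    }
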
writeXml(String, XMLOutputter) - Method in exception org.apache.hadoop.ipc.RemoteException
Write the object to XML format

X

xmargin - Static variable in class org.apache.hadoop.mapred.TaskGraphServlet
margin space on x axis
XmlRecordInput - Class in org.apache.hadoop.record
XML Deserializer.
XmlRecordInput(InputStream) - Constructor for class org.apache.hadoop.record.XmlRecordInput
Creates a new instance of XmlRecordInput
XmlRecordOutput - Class in org.apache.hadoop.record
XML Serializer.
XmlRecordOutput(OutputStream) - Constructor for class org.apache.hadoop.record.XmlRecordOutput
Creates a new instance of XmlRecordOutput
XMLUtils - Class in org.apache.hadoop.util
General xml utilities.
XMLUtils() - Constructor for class org.apache.hadoop.util.XMLUtils
 
xor(Filter) - Method in class org.apache.hadoop.util.bloom.BloomFilter
 
xor(Filter) - Method in class org.apache.hadoop.util.bloom.CountingBloomFilter
 
xor(Filter) - Method in class org.apache.hadoop.util.bloom.DynamicBloomFilter
 
xor(Filter) - Method in class org.apache.hadoop.util.bloom.Filter
Performs a logical XOR between this filter and a specified filter.
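
A sketch of the bloom filter API indexed above; the sizing parameters are arbitrary illustrative values.

    import org.apache.hadoop.util.bloom.BloomFilter;
    import org.apache.hadoop.util.bloom.Key;
    import org.apache.hadoop.util.hash.Hash;

    public class BloomSketch {
      public static void main(String[] args) {
        // 1024-bit vector, 5 hash functions, Murmur hashing.
        BloomFilter filter = new BloomFilter(1024, 5, Hash.MURMUR_HASH);
        filter.add(new Key("alpha".getBytes()));
        System.out.println(filter.membershipTest(new Key("alpha".getBytes()))); // true
        System.out.println(filter.membershipTest(new Key("beta".getBytes())));  // false, with high probability

        // xor() combines two filters built with identical size/hash settings.
        BloomFilter other = new BloomFilter(1024, 5, Hash.MURMUR_HASH);
        other.add(new Key("beta".getBytes()));
        filter.xor(other);
      }
    }
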

Y

ymargin - Static variable in class org.apache.hadoop.mapred.TaskGraphServlet
margin space on y axis

Z

zipCompress(String) - Static method in class org.apache.hadoop.contrib.failmon.LocalStore
Compress a text file using the ZIP compression algorithm.
ZlibCompressor - Class in org.apache.hadoop.io.compress.zlib
A Compressor based on the popular zlib compression algorithm.
ZlibCompressor(ZlibCompressor.CompressionLevel, ZlibCompressor.CompressionStrategy, ZlibCompressor.CompressionHeader, int) - Constructor for class org.apache.hadoop.io.compress.zlib.ZlibCompressor
Creates a new compressor using the specified compression level.
ZlibCompressor() - Constructor for class org.apache.hadoop.io.compress.zlib.ZlibCompressor
Creates a new compressor with the default compression level.
ZlibCompressor.CompressionHeader - Enum in org.apache.hadoop.io.compress.zlib
The type of header for compressed data.
ZlibCompressor.CompressionLevel - Enum in org.apache.hadoop.io.compress.zlib
The compression level for zlib library.
ZlibCompressor.CompressionStrategy - Enum in org.apache.hadoop.io.compress.zlib
The compression strategy for the zlib library.
ZlibDecompressor - Class in org.apache.hadoop.io.compress.zlib
A Decompressor based on the popular zlib compression algorithm.
ZlibDecompressor(ZlibDecompressor.CompressionHeader, int) - Constructor for class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
Creates a new decompressor.
ZlibDecompressor() - Constructor for class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
 
ZlibDecompressor.CompressionHeader - Enum in org.apache.hadoop.io.compress.zlib
The headers to detect from compressed data.
ZlibFactory - Class in org.apache.hadoop.io.compress.zlib
A collection of factories to create the right zlib/gzip compressor/decompressor instances.
ZlibFactory() - Constructor for class org.apache.hadoop.io.compress.zlib.ZlibFactory
 
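
A hedged sketch exercising the zlib classes indexed here: DefaultCodec obtains its compressor/decompressor through ZlibFactory, so it is a convenient entry point. The sample data is a placeholder.

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.compress.CompressionOutputStream;
    import org.apache.hadoop.io.compress.DefaultCodec;
    import org.apache.hadoop.io.compress.zlib.ZlibFactory;

    public class ZlibSketch {
      public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        System.out.println("native zlib loaded: " + ZlibFactory.isNativeZlibLoaded(conf));

        DefaultCodec codec = new DefaultCodec();
        codec.setConf(conf);
        ByteArrayOutputStream compressed = new ByteArrayOutputStream();
        CompressionOutputStream out = codec.createOutputStream(compressed);
        out.write("some repetitive data some repetitive data".getBytes());
        out.close();
        System.out.println("compressed to " + compressed.size() + " bytes");
      }
    }
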

Copyright © 2009 The Apache Software Foundation