List of usage examples: subclasses of org.apache.hadoop.mapred.FileOutputCommitter
From source file com.ibm.bi.dml.runtime.matrix.data.MultipleOutputCommitter.java
public class MultipleOutputCommitter extends FileOutputCommitter {
    // maintain the map of matrix index to its final output dir
    // private HashMap<Byte, String> outputmap = new HashMap<Byte, String>();
    private String[] outputs;
From source file com.liveramp.hank.hadoop.DomainBuilderOutputCommitter.java
public class DomainBuilderOutputCommitter extends FileOutputCommitter {
    private static final Logger LOG = LoggerFactory.getLogger(DomainBuilderOutputCommitter.class);
    // TODO: Make these configurable
    private static final int N_THREADS = 10;
From source file com.rapleaf.hank.hadoop.DomainBuilderOutputCommitter.java
public class DomainBuilderOutputCommitter extends FileOutputCommitter {
    private static final Logger LOG = Logger.getLogger(DomainBuilderOutputCommitter.class);
    // Note: setupJob(), commitJob() and cleanupJob() should get called automatically by the MapReduce
    // framework in subsequent versions of Hadoop.
From source file eu.stratosphere.hadoopcompatibility.FileOutputCommitterWrapper.java
/**
 * Hadoop 1.2.1 {@link org.apache.hadoop.mapred.FileOutputCommitter} takes {@link org.apache.hadoop.mapred.JobContext}
 * as an input parameter. However, the JobContext class is package-private there, while in Hadoop 2.2.0 it is public.
 * This class takes {@link org.apache.hadoop.mapred.JobConf} as input instead of JobContext in order to set up and commit tasks.
 */
public class FileOutputCommitterWrapper extends FileOutputCommitter implements Serializable {
From source file eu.stratosphere.hadoopcompatibility.mapred.record.datatypes.HadoopFileOutputCommitter.java
/**
 * Hadoop 1.2.1 {@link org.apache.hadoop.mapred.FileOutputCommitter} takes {@link org.apache.hadoop.mapred.JobContext}
 * as an input parameter. However, the JobContext class is package-private there, while in Hadoop 2.2.0 it is public.
 * This class takes {@link org.apache.hadoop.mapred.JobConf} as input instead of JobContext in order to set up and commit tasks.
 */
public class HadoopFileOutputCommitter extends FileOutputCommitter implements Serializable {
From source file org.apache.carbondata.hadoop.api.CarbonTableOutputCommitter.java
public class CarbonTableOutputCommitter extends FileOutputCommitter {
    private SegmentManager segmentManager;
    private Segment newSegment;

    @Override
    public void setupJob(JobContext context) throws IOException {
From source file org.apache.flink.hadoopcompatibility.mapred.record.datatypes.HadoopFileOutputCommitter.java
/**
 * Hadoop 1.2.1 {@link org.apache.hadoop.mapred.FileOutputCommitter} takes {@link org.apache.hadoop.mapred.JobContext}
 * as an input parameter. However, the JobContext class is package-private there, while in Hadoop 2.2.0 it is public.
 * This class takes {@link org.apache.hadoop.mapred.JobConf} as input instead of JobContext in order to set up and commit tasks.
 */
public class HadoopFileOutputCommitter extends FileOutputCommitter implements Serializable {
From source file org.apache.parquet.hadoop.mapred.MapredParquetOutputCommitter.java
/**
 * Adapter for supporting ParquetOutputCommitter in the mapred API.
 *
 * @author Tianshuo Deng
 */
From source file org.apache.sysml.runtime.matrix.data.MultipleOutputCommitter.java
public class MultipleOutputCommitter extends FileOutputCommitter {
    // maintain the map of matrix index to its destination output dir
    // private HashMap<Byte, String> outputmap = new HashMap<Byte, String>();
    private String[] outputs;

    @Override
From source file org.commoncrawl.mapred.ec2.parser.OutputCommitter.java
/**
 * A clone of portions of the FileOutputCommitter in Hadoop, used to help
 * isolate issues we were having with running the job on EMR using the S3N
 * FileSystem.
 *
 * @author rana
 */
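The subclasses listed above all build on the same commit protocol: each task writes its output into a temporary attempt directory, and commitTask promotes those files into the final output directory once the task succeeds. A minimal, self-contained sketch of that promote step is shown below; CommitSketch and its commitTask method are illustrative names, and plain java.nio.file stands in for Hadoop's FileSystem API purely so the sketch runs standalone.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class CommitSketch {

    // Promote every file written under tempDir into finalDir, then
    // remove the (now empty) temporary attempt directory. This mirrors
    // the rename-on-commit behavior of FileOutputCommitter.commitTask.
    static void commitTask(Path tempDir, Path finalDir) throws IOException {
        Files.createDirectories(finalDir);
        try (DirectoryStream<Path> entries = Files.newDirectoryStream(tempDir)) {
            for (Path entry : entries) {
                Files.move(entry, finalDir.resolve(entry.getFileName()),
                           StandardCopyOption.REPLACE_EXISTING);
            }
        }
        Files.delete(tempDir);
    }

    public static void main(String[] args) throws IOException {
        // Simulate a task attempt: write a part file into a temp dir,
        // then commit it into the job's final output dir.
        Path temp = Files.createTempDirectory("attempt_0001_");
        Path out = Files.createTempDirectory("job_output_");
        Files.writeString(temp.resolve("part-00000"), "task output data");

        commitTask(temp, out);

        System.out.println(Files.exists(out.resolve("part-00000")));
        System.out.println(Files.exists(temp));
    }
}
```

Subclasses such as MultipleOutputCommitter above override this step to redirect each file to a per-output destination directory instead of a single final location.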