Example usage for org.apache.hadoop.mapreduce OutputFormat subclass-usage

Introduction

This page lists usage examples for subclasses of org.apache.hadoop.mapreduce.OutputFormat.

Usage

From source file andromache.hadoop.CassandraOutputFormat.java

/**
 * The <code>ColumnFamilyOutputFormat</code> acts as a Hadoop-specific
 * OutputFormat that allows reduce tasks to store keys (and corresponding
 * values) as Cassandra rows (and respective columns).
 * <p/>

From source file cn.edu.hfut.dmic.webcollector.fetcher.FetcherOutputFormat.java

/**
 *
 * @author hu
 */
public class FetcherOutputFormat extends OutputFormat<Text, Writable> {

From source file co.cask.cdap.internal.app.runtime.batch.dataset.AbstractBatchWritableOutputFormat.java

/**
 * An abstract base implementation of {@link OutputFormat} for writing to {@link BatchWritable} from a batch job.
 *
 * @param <KEY> type of the key
 * @param <VALUE> type of the value
 */

From source file co.cask.cdap.internal.app.runtime.batch.dataset.DataSetOutputFormat.java

/**
 * An {@link OutputFormat} for writing into a dataset.
 * @param <KEY> Type of key.
 * @param <VALUE> Type of value.
 */
public final class DataSetOutputFormat<KEY, VALUE> extends OutputFormat<KEY, VALUE> {

From source file co.cask.cdap.internal.app.runtime.batch.dataset.output.MultipleOutputsMainOutputWrapper.java

/**
 * OutputFormat that wraps a root OutputFormat and provides an OutputFormatCommitter that delegates to multiple
 * preconfigured OutputFormatCommitters.
 *
 * @param <K> Type of key
 * @param <V> Type of value

From source file co.cask.cdap.internal.app.runtime.batch.dataset.UnsupportedOutputFormat.java

/**
 * OutputFormat that allows instantiation of the RecordWriter, but throws {@link UnsupportedOperationException}
 * upon any attempts to write to it.
 *
 * All other operations are no-ops.
 *

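The pattern described above — a writer that can be instantiated but rejects every write — can be sketched in plain Java. Note that SimpleRecordWriter and UnsupportedRecordWriter below are hypothetical stand-ins for Hadoop's RecordWriter hierarchy, used here so the sketch compiles without Hadoop on the classpath; they are not part of the CDAP source shown above.

```java
// Hypothetical stand-in for Hadoop's RecordWriter, for illustration only.
interface SimpleRecordWriter<K, V> {
    void write(K key, V value);
    void close();
}

// Sketch of the UnsupportedOutputFormat idea: instantiation succeeds,
// but any attempt to write throws UnsupportedOperationException.
class UnsupportedRecordWriter<K, V> implements SimpleRecordWriter<K, V> {
    @Override
    public void write(K key, V value) {
        throw new UnsupportedOperationException(
            "Writing is not supported by this output format");
    }

    @Override
    public void close() {
        // no-op, mirroring "all other operations are no-ops"
    }
}
```

This fail-fast style is useful when a job's framework requires some OutputFormat to be configured, but the job is expected to produce no direct output.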
From source file co.cask.cdap.internal.app.runtime.spark.dataset.SparkDatasetOutputFormat.java

/**
 * An {@link OutputFormat} for writing into a dataset.
 *
 * @param <KEY>   Type of key.
 * @param <VALUE> Type of value.
 *                TODO: Refactor this OutputFormat and MapReduce OutputFormat

From source file co.nubetech.apache.hadoop.DBOutputFormat.java

/**
 * An OutputFormat that sends the reduce output to a SQL table.
 * <p>
 * {@link DBOutputFormat} accepts &lt;key,value&gt; pairs, where key has a type
 * extending DBWritable. Returned {@link RecordWriter} writes <b>only the
 * key</b> to the database with a batch SQL query.
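The batch SQL query mentioned above is a parameterized INSERT statement with one placeholder per column. The helper below is a hypothetical sketch of how such a statement can be assembled; the class and method names are illustrative, not DBOutputFormat's actual API.

```java
// Hypothetical sketch: build a parameterized batch INSERT of the kind
// a DBOutputFormat-style writer executes for each record batch.
class BatchInsertQueryBuilder {
    static String constructQuery(String table, String[] fieldNames) {
        StringBuilder sb = new StringBuilder("INSERT INTO ").append(table);
        sb.append(" (");
        for (int i = 0; i < fieldNames.length; i++) {
            sb.append(fieldNames[i]);
            if (i != fieldNames.length - 1) sb.append(",");
        }
        // One '?' placeholder per column; values are bound later
        // via PreparedStatement and flushed with executeBatch().
        sb.append(") VALUES (");
        for (int i = 0; i < fieldNames.length; i++) {
            sb.append("?");
            if (i != fieldNames.length - 1) sb.append(",");
        }
        sb.append(");");
        return sb.toString();
    }
}
```

For example, a table `employees` with columns `id` and `name` would yield `INSERT INTO employees (id,name) VALUES (?,?);`, which the writer can reuse for every record in the batch.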

From source file co.nubetech.hiho.mapreduce.lib.db.apache.DBOutputFormat.java

/**
 * An OutputFormat that sends the reduce output to a SQL table.
 * <p> 
 * {@link DBOutputFormat} accepts &lt;key,value&gt; pairs, where 
 * key has a type extending DBWritable. Returned {@link RecordWriter} 
 * writes <b>only the key</b> to the database with a batch SQL query.  

From source file com.abel.hwfs.custom.output.SetSizeDBOutputFormat.java

/**
 * An OutputFormat that sends the reduce output to a SQL table.
 * <p>
 * {@link MyDBOutputFormat} accepts &lt;key,value&gt; pairs, where
 * key has a type extending DBWritable. Returned {@link RecordWriter}
 * writes <b>only the key</b> to the database with a batch SQL query.