Example usage for org.apache.hadoop.mapreduce OutputFormat: subclass usage

List of usage examples for subclasses of org.apache.hadoop.mapreduce OutputFormat

Introduction

On this page you can find usage examples for subclasses of org.apache.hadoop.mapreduce OutputFormat.

Usage

From source file org.apache.accumulo.hadoop.mapreduce.AccumuloOutputFormat.java

/**
 * This class allows MapReduce jobs to use Accumulo as the sink for data. This {@link OutputFormat}
 * accepts keys and values of type {@link Text} (for a table name) and {@link Mutation} from the Map
 * and Reduce functions. Configured with fluent API using {@link AccumuloOutputFormat#configure()}.
 * Here is an example with all possible options:
 *
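
The excerpt is cut off before the example its javadoc promises. A minimal sketch of the fluent configuration it describes is below; the builder methods (clientProperties, defaultTable, store) and all connection values are assumptions based on the configure() API named above and may differ between Accumulo releases.

import java.util.Properties;
import org.apache.accumulo.core.client.Accumulo;
import org.apache.accumulo.core.data.Mutation;
import org.apache.accumulo.hadoop.mapreduce.AccumuloOutputFormat;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;

public class AccumuloSinkJob {                              // hypothetical driver class
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance();
        job.setOutputFormatClass(AccumuloOutputFormat.class);
        job.setOutputKeyClass(Text.class);                  // key = destination table name
        job.setOutputValueClass(Mutation.class);            // value = row mutation to apply

        // Assumed fluent configuration; instance, zookeeper and credential values are placeholders.
        Properties clientProps = Accumulo.newClientProperties()
                .to("myInstance", "zk1:2181")
                .as("user", "passwd")
                .build();
        AccumuloOutputFormat.configure()
                .clientProperties(clientProps)
                .defaultTable("mytable")
                .store(job);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}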

From source file org.apache.beam.sdk.io.hadoop.format.EmployeeOutputFormat.java

/**
 * This is a valid OutputFormat for writing employee data, available in the form of {@code
 * List<KV>}. {@linkplain EmployeeOutputFormat} is used to test the {@linkplain HadoopFormatIO }
 * sink.
 */
public class EmployeeOutputFormat extends OutputFormat<Text, Employee> {
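
EmployeeOutputFormat is a test sink, but it has to satisfy the same contract as every other class on this page. The following hypothetical subclass (not the Beam test class itself) is a minimal, compilable sketch of that contract: a RecordWriter that receives the key/value pairs, an output-spec check, and an OutputCommitter.

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.OutputCommitter;
import org.apache.hadoop.mapreduce.OutputFormat;
import org.apache.hadoop.mapreduce.RecordWriter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;

public class NoOpOutputFormat extends OutputFormat<Text, Text> {

    @Override
    public RecordWriter<Text, Text> getRecordWriter(TaskAttemptContext context) {
        return new RecordWriter<Text, Text>() {
            @Override public void write(Text key, Text value) { /* deliver the pair to the sink */ }
            @Override public void close(TaskAttemptContext ctx) { /* flush and release resources */ }
        };
    }

    @Override
    public void checkOutputSpecs(JobContext context) { /* validate configuration before the job runs */ }

    @Override
    public OutputCommitter getOutputCommitter(TaskAttemptContext context) {
        return new OutputCommitter() {   // no-op committer; a real sink commits task output here
            @Override public void setupJob(JobContext jobContext) { }
            @Override public void setupTask(TaskAttemptContext taskContext) { }
            @Override public boolean needsTaskCommit(TaskAttemptContext taskContext) { return false; }
            @Override public void commitTask(TaskAttemptContext taskContext) { }
            @Override public void abortTask(TaskAttemptContext taskContext) { }
        };
    }
}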

From source file org.apache.blur.mapreduce.lib.BlurOutputFormat.java

/**
 * {@link BlurOutputFormat} is used to index data and deliver the indexes to
 * the proper Blur table for searching. A typical usage of this class would be
 * as follows.<br/>
 * <br/>
 * 
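
The "typical usage" the javadoc refers to is not included in the excerpt. A hedged sketch of the job wiring, using only standard Hadoop job APIs, is shown below; the Blur-specific setup (table descriptor, shard count and so on) is deliberately omitted because it is not visible above, and the input path and job name are placeholders.

import org.apache.blur.mapreduce.lib.BlurOutputFormat;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public class BlurIndexJob {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "blur-index");
        job.setJarByClass(BlurIndexJob.class);
        job.setInputFormatClass(TextInputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));

        // BlurOutputFormat indexes the job's output and delivers the indexes to the Blur table;
        // the table-specific configuration calls are omitted here.
        job.setOutputFormatClass(BlurOutputFormat.class);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}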

From source file org.apache.cassandra.hadoop.AbstractBulkOutputFormat.java

public abstract class AbstractBulkOutputFormat<K, V> extends OutputFormat<K, V>
        implements org.apache.hadoop.mapred.OutputFormat<K, V> {
    @Override
    public void checkOutputSpecs(JobContext context) {
        checkOutputSpecs(HadoopCompat.getConfiguration(context));
    }
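
The fragment above funnels the new-API checkOutputSpecs(JobContext) into a Configuration-based overload. The pattern it illustrates, one class serving both the org.apache.hadoop.mapreduce and org.apache.hadoop.mapred OutputFormat contracts, can be sketched as follows; DualApiOutputFormat is a hypothetical name, and the Cassandra class goes through a HadoopCompat shim rather than calling getConfiguration() directly.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.OutputFormat;

public abstract class DualApiOutputFormat<K, V> extends OutputFormat<K, V>
        implements org.apache.hadoop.mapred.OutputFormat<K, V> {

    /** Shared validation; subclasses only ever look at the Configuration. */
    protected abstract void checkOutputSpecs(Configuration conf) throws IOException;

    @Override   // new (mapreduce) API entry point
    public void checkOutputSpecs(JobContext context) throws IOException {
        checkOutputSpecs(context.getConfiguration());
    }

    @Override   // old (mapred) API entry point; JobConf extends Configuration
    public void checkOutputSpecs(FileSystem ignored, JobConf job) throws IOException {
        checkOutputSpecs(job);
    }
}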

From source file org.apache.cassandra.hadoop.AbstractColumnFamilyOutputFormat.java

/**
 * The <code>ColumnFamilyOutputFormat</code> acts as a Hadoop-specific
 * OutputFormat that allows reduce tasks to store keys (and corresponding
 * values) as Cassandra rows (and respective columns) in a given
 * ColumnFamily.
 *
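
A sketch of how a job is typically pointed at ColumnFamilyOutputFormat. The ConfigHelper setters and all keyspace, column family, host and partitioner values below are assumptions drawn from the Cassandra Hadoop examples, not from the excerpt above, and their names can vary between Cassandra releases.

import org.apache.cassandra.hadoop.ColumnFamilyOutputFormat;
import org.apache.cassandra.hadoop.ConfigHelper;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class CassandraSinkJob {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "cassandra-sink");
        job.setOutputFormatClass(ColumnFamilyOutputFormat.class);

        // Assumed helper methods; reducers then emit a row key plus the mutations for that row.
        Configuration conf = job.getConfiguration();
        ConfigHelper.setOutputColumnFamily(conf, "my_keyspace", "my_column_family");
        ConfigHelper.setOutputInitialAddress(conf, "127.0.0.1");
        ConfigHelper.setOutputPartitioner(conf, "Murmur3Partitioner");

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}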

From source file org.apache.cassandra.hadoop.BulkOutputFormat.java

@Deprecated
public class BulkOutputFormat extends OutputFormat<ByteBuffer, List<Mutation>>
        implements org.apache.hadoop.mapred.OutputFormat<ByteBuffer, List<Mutation>> {
    /** Fills the deprecated OutputFormat interface for streaming. */
    @Deprecated
    public BulkRecordWriter getRecordWriter(org.apache.hadoop.fs.FileSystem filesystem,

From source file org.apache.cassandra.hadoop.ColumnFamilyOutputFormat.java

/**
 * The <code>ColumnFamilyOutputFormat</code> acts as a Hadoop-specific
 * OutputFormat that allows reduce tasks to store keys (and corresponding
 * values) as Cassandra rows (and respective columns) in a given
 * ColumnFamily.
 *

From source file org.apache.cassandra.hadoop.cql3.CqlBulkOutputFormat.java

/**
 * The <code>CqlBulkOutputFormat</code> acts as a Hadoop-specific
 * OutputFormat that allows reduce tasks to store keys (and corresponding
 * bound variable values) as CQL rows (and respective columns) in a given
 * table.
 *

From source file org.apache.cassandra.hadoop.cql3.CqlOutputFormat.java

/**
 * The <code>CqlOutputFormat</code> acts as a Hadoop-specific
 * OutputFormat that allows reduce tasks to store keys (and corresponding
 * bound variable values) as CQL rows (and respective columns) in a given
 * table.
 *
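
A hedged sketch of wiring a reduce stage to CqlOutputFormat. The prepared UPDATE statement, the keyspace/table names, and the CqlConfigHelper.setOutputCql / ConfigHelper.setOutputColumnFamily calls are recalled from the Cassandra Hadoop word-count examples rather than quoted from the file above, so treat them as assumptions.

import org.apache.cassandra.hadoop.ConfigHelper;
import org.apache.cassandra.hadoop.cql3.CqlConfigHelper;
import org.apache.cassandra.hadoop.cql3.CqlOutputFormat;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class CqlSinkJob {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "cql-sink");
        job.setOutputFormatClass(CqlOutputFormat.class);

        Configuration conf = job.getConfiguration();
        // Assumed helpers; the '?' markers are filled from the reducer's output values
        // (the "bound variable values" the javadoc above mentions).
        ConfigHelper.setOutputColumnFamily(conf, "my_keyspace", "my_table");
        ConfigHelper.setOutputInitialAddress(conf, "127.0.0.1");
        CqlConfigHelper.setOutputCql(conf, "UPDATE my_keyspace.my_table SET total = ?");

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}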

From source file org.apache.cassandra.hadoop2.AbstractColumnFamilyOutputFormat.java

/**
 * The <code>ColumnFamilyOutputFormat</code> acts as a Hadoop-specific
 * OutputFormat that allows reduce tasks to store keys (and corresponding
 * values) as Cassandra rows (and respective columns) in a given ColumnFamily.
 *
 * <p>