Example usage for org.apache.hadoop.mapreduce OutputFormat (subclass usage)

Introduction

On this page you can find examples of classes that subclass org.apache.hadoop.mapreduce.OutputFormat, collected from open-source projects.

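Every class listed below extends org.apache.hadoop.mapreduce.OutputFormat and therefore implements the same three-method contract: getRecordWriter, checkOutputSpecs, and getOutputCommitter. As a reference point, here is a minimal, self-contained skeleton; MyOutputFormat and its no-op sink are illustrative, not taken from any of the projects below.

import java.io.IOException;

import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.OutputCommitter;
import org.apache.hadoop.mapreduce.OutputFormat;
import org.apache.hadoop.mapreduce.RecordWriter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

public class MyOutputFormat<K, V> extends OutputFormat<K, V> {

    @Override
    public RecordWriter<K, V> getRecordWriter(TaskAttemptContext context)
            throws IOException, InterruptedException {
        // Hand back a writer that pushes each (key, value) pair to the sink.
        return new RecordWriter<K, V>() {
            @Override
            public void write(K key, V value) {
                // write the pair to the underlying store (no-op here)
            }

            @Override
            public void close(TaskAttemptContext context) {
                // flush buffers and release connections
            }
        };
    }

    @Override
    public void checkOutputSpecs(JobContext context) throws IOException {
        // Validate the job configuration up front, e.g. that a target table is set.
    }

    @Override
    public OutputCommitter getOutputCommitter(TaskAttemptContext context)
            throws IOException, InterruptedException {
        // Table-backed formats often write directly and need no commit step.
        return new NullOutputFormat<K, V>().getOutputCommitter(context);
    }
}
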
Usage

From source file org.hypertable.hadoop.mapreduce.OutputFormat.java

/**
 * Write Map/Reduce output to a table in Hypertable.
 *
 * Change this to read from configs at some point.
 * Key is not used but output value must be a KeyWritable
 */

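A minimal job-wiring sketch for this format, assuming an existing Hadoop Configuration conf; the table-selection property name is an assumption, since the excerpt only says table selection should move into the configs.

Job job = Job.getInstance(conf, "hypertable writer");
job.setOutputFormatClass(org.hypertable.hadoop.mapreduce.OutputFormat.class);
// Per the Javadoc: the key is ignored, the value must be a KeyWritable
// (Hypertable's writable from the same package).
job.setOutputValueClass(KeyWritable.class);
// Assumed property name for selecting the target table; verify against
// your Hypertable version before relying on it.
job.getConfiguration().set("hypertable.mapreduce.output.table", "my_table");
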
From source file org.hypertable.hadoop.mapreduce.SerializedCellsOutputFormat.java

/**
 * Write Map/Reduce output to a table in Hypertable.
 *
 * TODO: For now we assume ThriftBroker is running on localhost on default port (15867).
 * Change this to read from configs at some point.
 * Key is not used
 */

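The wiring mirrors the sketch above, swapping in this class; the value class for serialized cells is not visible in the excerpt, so it is omitted.

// Same job wiring as the previous sketch, but with this format.
job.setOutputFormatClass(org.hypertable.hadoop.mapreduce.SerializedCellsOutputFormat.class);
// Per the TODO above, the ThriftBroker is expected at localhost:15867.
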
From source file org.kiji.schema.mapreduce.KijiTableOutputFormat.java

/**
 * Used to write {@link KijiMutation}s to Kiji tables. Use the
 * {@link KijiTableOutputFormat#setOptions(Job, String, String)}
 * method to configure this output format for use in a mapreduce job.
 *
 * To setup a job to use KijiTableOutputFormat:
 */

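The excerpt cuts off before its setup example, so here is a minimal sketch built only from the setOptions(Job, String, String) signature quoted above; the meaning of the two String arguments (instance name and table name) is an assumption.

Job job = Job.getInstance(conf, "kiji writer");
// Assumption: the two String arguments are the Kiji instance name and the
// target table name; verify against the KijiTableOutputFormat Javadoc.
KijiTableOutputFormat.setOptions(job, "default", "users");
job.setOutputFormatClass(KijiTableOutputFormat.class);
// Values emitted by the job are KijiMutations, per the class Javadoc.
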
From source file org.kitesdk.data.mapreduce.DatasetKeyOutputFormat.java

/**
 * A MapReduce {@code OutputFormat} for writing to a {@link Dataset}.
 *
 * Since a {@code Dataset} only contains entities (not key/value pairs), this output
 * format ignores the value.
 */

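Kite exposes a builder through DatasetKeyOutputFormat.configure(Job); a minimal sketch, with an illustrative dataset URI.

Job job = Job.getInstance(conf, "kite writer");
// Builder-style configuration; the dataset URI below is illustrative.
DatasetKeyOutputFormat.configure(job).writeTo("dataset:hdfs:/datasets/events");
// Entities travel as keys; the value is ignored, per the Javadoc above.
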
From source file org.kududb.mapreduce.KuduTableOutputFormat.java

/**
 * <p>
 * Use {@link
 * KuduTableMapReduceUtil.TableOutputFormatConfigurator}
 * to correctly setup this output format, then {@link
 * KuduTableMapReduceUtil#getTableFromContext(org.apache.hadoop.mapreduce.TaskInputOutputContext)}
 */

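A sketch of the configurator named in the Javadoc; the constructor argument order shown here (job, table name, master addresses) and the fluent configure() call are assumptions to be checked against your Kudu release.

Job job = Job.getInstance(conf, "kudu writer");
// Assumed argument order and builder usage; verify against
// KuduTableMapReduceUtil in your Kudu version.
new KuduTableMapReduceUtil.TableOutputFormatConfigurator(job, "my_table", "kudu-master:7051")
        .configure();
// Inside a task, the Javadoc above points to
// KuduTableMapReduceUtil.getTableFromContext(context) for obtaining the table.
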
From source file org.locationtech.geomesa.jobs.interop.mapreduce.GeoMesaOutputFormat.java

/**
 * Output format for writing simple features to GeoMesa. The key will be ignored. SimpleFeatureTypes
 * will be created in GeoMesa as needed based on the simple features passed.
 *
 * Configure using the static methods.
 */

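"Configure using the static methods" suggests wiring along these lines; the configureDataStore method name and the Accumulo parameter keys below are assumptions drawn from typical GeoMesa data-store parameters.

Map<String, String> params = new HashMap<>();
params.put("instanceId", "myInstance");   // illustrative Accumulo connection params
params.put("zookeepers", "zoo1:2181");
params.put("user", "myUser");
params.put("password", "myPassword");
params.put("tableName", "geomesa_catalog");
// Static configuration method; the exact name should be checked against the class.
GeoMesaOutputFormat.configureDataStore(job, params);
job.setOutputFormatClass(GeoMesaOutputFormat.class);
// The key is ignored; values are SimpleFeatures, per the Javadoc above.
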
From source file org.mrgeo.data.accumulo.output.image.AccumuloMrsImagePyramidOutputFormat.java

public class AccumuloMrsImagePyramidOutputFormat extends OutputFormat<TileIdWritable, RasterWritable> {
    private int zoomLevel = -1;     // zoom level being written; -1 until configured
    private String table = null;    // target Accumulo table name

    private String username = null; // Accumulo connection credentials
    private String password = null;

From source file org.mrgeo.data.accumulo.output.image.AccumuloMrsPyramidOutputFormat.java

public class AccumuloMrsPyramidOutputFormat extends OutputFormat<TileIdWritable, RasterWritable> {
    private static final Logger log = LoggerFactory.getLogger(AccumuloMrsPyramidOutputFormat.class);
    private static boolean outputInfoSet = false; // whether output info has been set (note: static)
    private static Job job;                       // job reference held statically for configuration
    private int zoomLevel = -1;                   // zoom level being written; -1 until configured
    private String table = null;                  // target Accumulo table name

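Both MrGeo classes fix the key/value types in their declarations, so the generic job wiring follows directly from the class signatures; how the table, zoom level, and credentials are injected is not visible in these excerpts.

Job job = Job.getInstance(conf, "mrgeo pyramid writer");
job.setOutputFormatClass(AccumuloMrsPyramidOutputFormat.class);
// Types fixed by OutputFormat<TileIdWritable, RasterWritable> above.
job.setOutputKeyClass(TileIdWritable.class);
job.setOutputValueClass(RasterWritable.class);
// Table, zoom level, and credentials are configured elsewhere (not shown).
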
From source file org.pingles.cascading.cassandra.hadoop.ColumnFamilyOutputFormat.java

/**
 * The <code>ColumnFamilyOutputFormat</code> acts as a Hadoop-specific
 * OutputFormat that allows reduce tasks to store keys (and corresponding
 * values) as Cassandra rows (and respective columns) in a given
 * ColumnFamily.
 */

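A wiring sketch; the ConfigHelper calls below follow upstream Cassandra's org.apache.cassandra.hadoop conventions and are an assumption about how this Cascading-oriented variant is configured.

Job job = Job.getInstance(conf, "cassandra writer");
job.setOutputFormatClass(ColumnFamilyOutputFormat.class);
// Assumed configuration, borrowed from org.apache.cassandra.hadoop.ConfigHelper;
// check the cascading-cassandra documentation for the actual mechanism.
ConfigHelper.setOutputColumnFamily(job.getConfiguration(), "my_keyspace", "my_cf");
ConfigHelper.setOutputInitialAddress(job.getConfiguration(), "127.0.0.1");
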
From source file org.schedoscope.export.jdbc.outputformat.JdbcOutputFormat.java

/**
 * The JDBC output format is responsible for writing data into a database over
 * a JDBC connection.
 *
 * @param <K>
 *            The key class.
 */
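
A hedged wiring sketch; the excerpt does not show Schedoscope's configuration entry point, so the connection properties below are purely illustrative placeholders.

Job job = Job.getInstance(conf, "jdbc export");
job.setOutputFormatClass(JdbcOutputFormat.class);
// Purely illustrative property names; consult the Schedoscope export
// documentation for the real configuration mechanism.
job.getConfiguration().set("jdbc.connection.string", "jdbc:postgresql://localhost/exports");
job.getConfiguration().set("jdbc.user", "export_user");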