Example usage for org.apache.hadoop.mapreduce RecordReader subclass-usage

Introduction

On this page you can find usage examples for subclasses of org.apache.hadoop.mapreduce.RecordReader.
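
Every example below fills in the same small set of overrides that the abstract RecordReader class defines: initialize, nextKeyValue, getCurrentKey, getCurrentValue, getProgress and close. As a baseline, here is a minimal self-contained sketch of that contract; the class name and its fixed records are invented for illustration and do not come from any of the projects listed here.

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;

/** Illustrative only: serves a fixed list of records so the required overrides stand out. */
public class FixedRecordsReader extends RecordReader<LongWritable, Text> {

    private static final String[] RECORDS = { "alpha", "beta", "gamma" };

    private int index = -1;
    private final LongWritable key = new LongWritable();
    private final Text value = new Text();

    @Override
    public void initialize(InputSplit split, TaskAttemptContext context) {
        // Real readers open the file, scan or stream described by the split here.
    }

    @Override
    public boolean nextKeyValue() {
        if (index + 1 >= RECORDS.length) {
            return false;               // no more records in this split
        }
        index++;
        key.set(index);                 // key: record number
        value.set(RECORDS[index]);      // value: record payload
        return true;
    }

    @Override
    public LongWritable getCurrentKey() { return key; }

    @Override
    public Text getCurrentValue() { return value; }

    @Override
    public float getProgress() {
        return (index + 1) / (float) RECORDS.length;
    }

    @Override
    public void close() {
        // Release any streams or connections opened in initialize().
    }
}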

Usage

From source file org.apache.mahout.text.WholeFileRecordReader.java

/**
 * RecordReader used with the MultipleTextFileInputFormat class to read full files as
 * k/v pairs and groups of files as single input splits.
 */
public class WholeFileRecordReader extends RecordReader<IntWritable, BytesWritable> {
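
The whole-file idea is straightforward to sketch with nothing but stock Hadoop classes. The reader below is not Mahout's implementation: it assumes exactly one file per FileSplit and skips the multi-file grouping that WholeFileRecordReader supports, but it shows the same pattern of emitting one key/value pair per file.

import java.io.IOException;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

/** Illustrative reader: returns the split's whole file as a single BytesWritable value. */
public class SingleFileRecordReader extends RecordReader<IntWritable, BytesWritable> {

    private final IntWritable key = new IntWritable(0);
    private final BytesWritable value = new BytesWritable();
    private boolean consumed = false;

    @Override
    public void initialize(InputSplit genericSplit, TaskAttemptContext context) throws IOException {
        FileSplit split = (FileSplit) genericSplit;
        Path path = split.getPath();
        FileSystem fs = path.getFileSystem(context.getConfiguration());
        byte[] contents = new byte[(int) split.getLength()];   // assumes the file fits in memory
        FSDataInputStream in = fs.open(path);
        try {
            IOUtils.readFully(in, contents, 0, contents.length);
            value.set(contents, 0, contents.length);
        } finally {
            IOUtils.closeStream(in);
        }
    }

    @Override
    public boolean nextKeyValue() {
        if (consumed) {
            return false;
        }
        consumed = true;        // exactly one key/value pair per file
        return true;
    }

    @Override
    public IntWritable getCurrentKey() { return key; }

    @Override
    public BytesWritable getCurrentValue() { return value; }

    @Override
    public float getProgress() { return consumed ? 1.0f : 0.0f; }

    @Override
    public void close() { }
}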

From source file org.apache.mnemonic.hadoop.mapreduce.MneMapreduceRecordReader.java

/**
 * This record reader implements the org.apache.hadoop.mapreduce API.
 *
 * @param <V>
 *          the type of the data item
 */

From source file org.apache.orc.mapreduce.OrcMapreduceRecordReader.java

/**
 * This record reader implements the org.apache.hadoop.mapreduce API.
 * It is in the org.apache.orc.mapred package to share implementation with
 * the mapred API record reader.
 * @param <V> the root type of the file
 */
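
This reader is not constructed by hand; the ORC input format (org.apache.orc.mapreduce.OrcInputFormat) creates it for each split. Assuming the usual pairing of NullWritable keys with one OrcStruct per row, a mapper consuming its output could look like the sketch below; the mapper itself and its output types are invented for the example.

import java.io.IOException;

import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.orc.mapred.OrcStruct;

/** Illustrative mapper: the ORC record reader supplies NullWritable keys and one OrcStruct per row. */
public class OrcRowMapper extends Mapper<NullWritable, OrcStruct, Text, NullWritable> {

    @Override
    protected void map(NullWritable key, OrcStruct row, Context context)
            throws IOException, InterruptedException {
        // Forward the row's string form; a real job would pull typed fields out of the struct.
        context.write(new Text(row.toString()), NullWritable.get());
    }
}

The driver would set job.setInputFormatClass(OrcInputFormat.class) so the framework instantiates OrcMapreduceRecordReader behind the scenes.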

From source file org.apache.parquet.hadoop.ParquetRecordReader.java

/**
 * Reads the records from a block of a Parquet file
 *
 * @see ParquetInputFormat
 *
 * @author Julien Le Dem
 */
public class ParquetRecordReader<T> extends RecordReader<Void, T> {
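
Per the @see reference, the reader is obtained through ParquetInputFormat rather than instantiated directly. A hedged driver-side sketch, assuming the example Group object model from parquet-hadoop (the class name, job name and paths are placeholders):

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.parquet.hadoop.ParquetInputFormat;
import org.apache.parquet.hadoop.example.GroupReadSupport;

public class ParquetReadSetup {

    public static Job configure(Configuration conf, Path input) throws IOException {
        Job job = Job.getInstance(conf, "parquet-read-example");
        // ParquetInputFormat hands out a ParquetRecordReader for each split.
        job.setInputFormatClass(ParquetInputFormat.class);
        // The ReadSupport picks the materialized value type; GroupReadSupport yields Group objects.
        ParquetInputFormat.setReadSupportClass(job, GroupReadSupport.class);
        FileInputFormat.addInputPath(job, input);
        return job;
    }
}

With this configuration the mapper's input types would be Void keys and org.apache.parquet.example.data.Group values.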

From source file org.apache.phoenix.mapreduce.PhoenixRecordReader.java

/**
 * {@link RecordReader} implementation that iterates over the records.
 */
public class PhoenixRecordReader<T extends DBWritable> extends RecordReader<NullWritable, T> {

    private static final Log LOG = LogFactory.getLog(PhoenixRecordReader.class);
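
The declaration above fixes the key type to NullWritable and leaves the value type to whatever DBWritable the job is configured with. A sketch of the consuming side, where StockWritable is a hypothetical stand-in for that value class (it is not part of Phoenix):

import java.io.IOException;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.db.DBWritable;

public class PhoenixReadExample {

    /** Hypothetical DBWritable: one row of a STOCK table; not a Phoenix class. */
    public static class StockWritable implements DBWritable {
        String ticker;
        double price;

        @Override
        public void readFields(ResultSet rs) throws SQLException {
            ticker = rs.getString("TICKER");
            price = rs.getDouble("PRICE");
        }

        @Override
        public void write(PreparedStatement stmt) throws SQLException {
            stmt.setString(1, ticker);
            stmt.setDouble(2, price);
        }
    }

    /** The record reader supplies NullWritable keys; each value is one deserialized row. */
    public static class StockMapper
            extends Mapper<NullWritable, StockWritable, Text, DoubleWritable> {
        @Override
        protected void map(NullWritable key, StockWritable row, Context context)
                throws IOException, InterruptedException {
            context.write(new Text(row.ticker), new DoubleWritable(row.price));
        }
    }
}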

From source file org.apache.phoenix.pig.hadoop.PhoenixRecordReader.java

/**
 * RecordReader that processes the scan and returns a PhoenixRecord.
 * 
 */
public final class PhoenixRecordReader extends RecordReader<NullWritable, PhoenixRecord> {

From source file org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigRecordReader.java

/**
 * A wrapper around the actual RecordReader and loadfunc - this is needed for
 * two reasons
 * 1) To intercept the initialize call from hadoop and initialize the underlying
 * actual RecordReader with the right Context object - this is achieved by
 * looking up the Context corresponding to the input split this Reader is
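
The excerpt is cut off by the listing, but the wrapping idea it describes is easy to show in stripped-down form. The class below is not Pig's PigRecordReader; it is only a sketch of a reader that forwards every call to an underlying reader, with initialize as the natural place to substitute or adjust the context before delegating.

import java.io.IOException;

import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;

/** Illustrative wrapper: every call is delegated to the underlying reader. */
public class WrappingRecordReader<K, V> extends RecordReader<K, V> {

    private final RecordReader<K, V> wrapped;

    public WrappingRecordReader(RecordReader<K, V> wrapped) {
        this.wrapped = wrapped;
    }

    @Override
    public void initialize(InputSplit split, TaskAttemptContext context)
            throws IOException, InterruptedException {
        // A real wrapper (as described above) could look up or rebuild the context
        // for this split before handing it to the underlying reader.
        wrapped.initialize(split, context);
    }

    @Override
    public boolean nextKeyValue() throws IOException, InterruptedException {
        return wrapped.nextKeyValue();
    }

    @Override
    public K getCurrentKey() throws IOException, InterruptedException {
        return wrapped.getCurrentKey();
    }

    @Override
    public V getCurrentValue() throws IOException, InterruptedException {
        return wrapped.getCurrentValue();
    }

    @Override
    public float getProgress() throws IOException, InterruptedException {
        return wrapped.getProgress();
    }

    @Override
    public void close() throws IOException {
        wrapped.close();
    }
}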

From source file org.apache.pig.impl.io.BinStorageRecordReader.java

/**
 * Reads BinStorage-serialized data, returning each record as a Tuple value.
 */
public class BinStorageRecordReader extends RecordReader<Text, Tuple> {

    private long start;

From source file org.apache.pig.impl.io.InterRecordReader.java

/**
 * A record reader used to read data written using {@link InterRecordWriter}.
 * It uses the default InterSedes object for deserialization.
 */
public class InterRecordReader extends RecordReader<Text, Tuple> {

From source file org.apache.pig.impl.io.TFileRecordReader.java

/**
 * A record reader used to read data written using {@link TFileRecordWriter}.
 * It uses the default InterSedes object for deserialization.
 */
public class TFileRecordReader extends RecordReader<Text, Tuple> {
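
The Pig readers above all hand a Tuple to their caller for each record. Consuming those pairs through the mapreduce API follows the usual pattern; the mapper below is invented for illustration and simply reports how many fields each deserialized tuple carries.

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.pig.data.Tuple;

/** Illustrative mapper for a RecordReader<Text, Tuple> input format. */
public class TupleSizeMapper extends Mapper<Text, Tuple, Text, IntWritable> {

    @Override
    protected void map(Text key, Tuple value, Context context)
            throws IOException, InterruptedException {
        // Tuple.size() reports the number of fields in the deserialized record.
        context.write(new Text("fields"), new IntWritable(value.size()));
    }
}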