Example usage for org.apache.hadoop.mapreduce InputSplit: subclass usage

Introduction

On this page you can find example usage for org.apache.hadoop.mapreduce InputSplit: excerpts from projects whose classes subclass InputSplit, usually together with Writable.
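
A minimal sketch of the pattern shared by the excerpts below may help before reading them. Everything in it is hypothetical (the names ExampleInputSplit, resource, and hosts are illustrative, not taken from any project on this page): a subclass of InputSplit that also implements Writable and keeps a no-arg constructor, because the framework instantiates splits reflectively and then populates them through readFields().

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.InputSplit;

// Hypothetical example, not taken from any of the projects below.
public class ExampleInputSplit extends InputSplit implements Writable {

    private String resource;   // what this split reads
    private long length;       // size in bytes, a hint for the scheduler
    private String[] hosts;    // preferred nodes for data locality

    // The no-arg constructor is required: the framework creates the
    // split reflectively and then calls readFields() to populate it.
    public ExampleInputSplit() {
        this("", 0L, new String[0]);
    }

    public ExampleInputSplit(String resource, long length, String[] hosts) {
        this.resource = resource;
        this.length = length;
        this.hosts = hosts;
    }

    @Override
    public long getLength() {
        return length;
    }

    @Override
    public String[] getLocations() {
        return hosts;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        Text.writeString(out, resource);
        out.writeLong(length);
        out.writeInt(hosts.length);
        for (String host : hosts) {
            Text.writeString(out, host);
        }
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        resource = Text.readString(in);
        length = in.readLong();
        hosts = new String[in.readInt()];
        for (int i = 0; i < hosts.length; i++) {
            hosts[i] = Text.readString(in);
        }
    }
}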

Usage

From source file oracle.kv.hadoop.KVInputSplit.java

/**
 * @hidden
 */
public class KVInputSplit extends InputSplit implements Writable {

    private String kvStore;

From source file oracle.kv.hadoop.table.TableInputSplit.java

/**
 * Concrete implementation of the abstract InputSplit class required to perform
 * Hadoop MapReduce. A RecordReader will take instances of this class, where
 * each such instance corresponds to data stored in an Oracle NoSQL Database
 * store, and use those instances to retrieve that data when performing the
 * MapReduce job.
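
The comment above describes the split/reader contract in general terms: an InputFormat produces splits, and a RecordReader later turns each split back into records. As a sketch of that hand-off, reusing the hypothetical ExampleInputSplit from the introduction (this is not Oracle's implementation), a reader typically downcasts the generic InputSplit it receives in initialize() to its own split type:

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;

// Hypothetical reader, paired with the ExampleInputSplit sketch above.
public class ExampleRecordReader extends RecordReader<LongWritable, Text> {

    private ExampleInputSplit split;
    private long pos;
    private final LongWritable key = new LongWritable();
    private final Text value = new Text();

    @Override
    public void initialize(InputSplit split, TaskAttemptContext context)
            throws IOException, InterruptedException {
        // The framework hands back the generic InputSplit; the reader
        // downcasts to its own split type to get at the details.
        this.split = (ExampleInputSplit) split;
        this.pos = 0;
    }

    @Override
    public boolean nextKeyValue() throws IOException, InterruptedException {
        if (pos >= split.getLength()) {
            return false;
        }
        key.set(pos);
        value.set("record at " + pos);  // placeholder for a real data fetch
        pos++;
        return true;
    }

    @Override
    public LongWritable getCurrentKey() {
        return key;
    }

    @Override
    public Text getCurrentValue() {
        return value;
    }

    @Override
    public float getProgress() {
        long len = split.getLength();
        return len == 0 ? 1.0f : Math.min(1.0f, pos / (float) len);
    }

    @Override
    public void close() {
        // nothing to release in this sketch
    }
}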

From source file org.apache.accumulo.core.client.mapreduce.impl.AccumuloInputSplit.java

/**
 * Abstracts over configurations common to all InputSplits. Specifically, it leaves out methods
 * related to the number of ranges and locations per InputSplit, as those vary by implementation.
 *
 * @see org.apache.accumulo.core.client.mapreduce.RangeInputSplit
 * @see org.apache.accumulo.core.client.mapreduce.impl.BatchInputSplit
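
A sketch of the design that comment describes, with hypothetical names (BaseTableInputSplit, tableName, and instanceName are illustrative, not Accumulo's actual members): the abstract base serializes only the shared configuration and leaves range- and location-specific state to subclasses.

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.InputSplit;

// Hypothetical base class; names are illustrative only.
public abstract class BaseTableInputSplit extends InputSplit implements Writable {

    protected String tableName;     // configuration shared by all split types
    protected String instanceName;

    // Subclasses decide how ranges and locations are represented,
    // so getLocations() (and getLength()) stay abstract here.
    @Override
    public abstract String[] getLocations() throws IOException;

    @Override
    public void write(DataOutput out) throws IOException {
        // Serialize only the shared configuration; subclasses call
        // super.write(out) and then append their own fields.
        Text.writeString(out, tableName);
        Text.writeString(out, instanceName);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        tableName = Text.readString(in);
        instanceName = Text.readString(in);
    }
}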

From source file org.apache.accumulo.core.client.mapreduce.RangeInputSplit.java

/**
 * The Class RangeInputSplit. Encapsulates an Accumulo range for use in MapReduce jobs.
 */
public class RangeInputSplit extends InputSplit implements Writable {
    private Range range;
    private String[] locations;
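
Since Accumulo's Range implements Hadoop's Writable, a split like this can delegate the range's serialization and write the locations alongside it. The class below is a simplified, hypothetical reduction of that idea, not the real RangeInputSplit (which serializes considerably more state):

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.accumulo.core.data.Range;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.InputSplit;

// Hypothetical simplification of the class above, not Accumulo's code.
public class RangeSplitSketch extends InputSplit implements Writable {

    private Range range = new Range();
    private String[] locations = new String[0];

    @Override
    public long getLength() {
        // Only a scheduler hint; the real class estimates this
        // from the range's endpoint rows.
        return 0;
    }

    @Override
    public String[] getLocations() {
        return locations;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        range.write(out);  // Range is itself Writable and encodes its own keys
        out.writeInt(locations.length);
        for (String location : locations) {
            Text.writeString(out, location);
        }
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        range = new Range();
        range.readFields(in);
        locations = new String[in.readInt()];
        for (int i = 0; i < locations.length; i++) {
            locations[i] = Text.readString(in);
        }
    }
}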

From source file org.apache.accumulo.hadoopImpl.mapreduce.RangeInputSplit.java

/**
 * The Class RangeInputSplit. Encapsulates an Accumulo range for use in MapReduce jobs.
 */
public class RangeInputSplit extends InputSplit implements Writable {
    private Range range;
    private String[] locations;

From source file org.apache.bigtop.bigpetstore.generator.PetStoreTransactionInputSplit.java

/**
 * What does an `InputSplit` actually do? From the Javadocs, it looks like ...
 * absolutely nothing.
 *
 * Note: for some reason, you *have* to implement Writable, even if your methods
 * do nothing, or you will get strange and un-debuggable null pointer
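
The gotcha described in that comment comes from how the framework moves splits between processes: it constructs the split reflectively via the no-arg constructor and then calls readFields() on it, so a split that skips Writable surfaces as confusing NullPointerExceptions far from the real cause. A minimal, hypothetical sketch of the "methods that do nothing" shape the comment describes:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.InputSplit;

// Hypothetical: a split whose state is derived entirely from the job
// configuration, so serialization legitimately has nothing to write.
public class StatelessInputSplit extends InputSplit implements Writable {

    @Override
    public long getLength() {
        return 1;  // a nominal, nonzero size hint for the scheduler
    }

    @Override
    public String[] getLocations() {
        return new String[0];  // no locality preference
    }

    @Override
    public void write(DataOutput out) throws IOException {
        // Intentionally empty: there is no per-split state to ship.
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        // Intentionally empty, but the method must exist; the framework
        // calls it after constructing the split reflectively.
    }
}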

From source file org.apache.carbondata.core.datamap.DataMapDistributable.java

/**
 * Distributable class for datamap.
 */
@InterfaceAudience.Internal
public abstract class DataMapDistributable extends InputSplit implements Distributable, Serializable {

From source file org.apache.carbondata.core.datamap.dev.expr.DataMapDistributableWrapper.java

public class DataMapDistributableWrapper extends InputSplit implements Serializable {

    private String uniqueId;

    private DataMapDistributable distributable;

From source file org.apache.carbondata.hadoop.CarbonMultiBlockSplit.java

/**
 * This class wraps multiple blocks belonging to the same node into one split,
 * so the scanning task will scan multiple blocks. This is an optimization for concurrent queries.
 */
public class CarbonMultiBlockSplit extends InputSplit implements Writable {
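
A hedged sketch of the multi-block idea, with hypothetical member names (blockPaths, totalLength, and host are illustrative, not CarbonData's actual fields): the split carries several block paths for one node, reports their combined size, and points the scheduler at that node.

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.InputSplit;

// Hypothetical sketch of a split bundling co-located blocks.
public class MultiBlockSplitSketch extends InputSplit implements Writable {

    private List<String> blockPaths = new ArrayList<>();  // blocks on one node
    private long totalLength;                             // sum of block sizes
    private String host = "";                             // the common node

    @Override
    public long getLength() {
        return totalLength;  // combined size of all wrapped blocks
    }

    @Override
    public String[] getLocations() {
        return new String[] { host };  // all blocks share this node
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeInt(blockPaths.size());
        for (String path : blockPaths) {
            Text.writeString(out, path);
        }
        out.writeLong(totalLength);
        Text.writeString(out, host);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        int n = in.readInt();
        blockPaths = new ArrayList<>(n);
        for (int i = 0; i < n; i++) {
            blockPaths.add(Text.readString(in));
        }
        totalLength = in.readLong();
        host = Text.readString(in);
    }
}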

From source file org.apache.carbondata.hadoop.CarbonRawDataInputSplit.java

/**
 * Handles input splits for raw data
 */
public class CarbonRawDataInputSplit extends InputSplit implements Writable {

    long length;