Example usage for org.apache.hadoop.mapreduce Partitioner subclass-usage

Introduction

This page lists example usages of the org.apache.hadoop.mapreduce Partitioner class, collected from open-source subclasses.

Usage

From source file demo.NaturalKeyPartitioner.java

/**
 * Partitions keys based on the "natural" key of {@link StockKey} (which
 * is the symbol).
 * @author Jee Vang
 *
 */
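The excerpt above stops before the class body. The natural-key idea it describes can be sketched in plain Java (the Hadoop types are omitted so the example is self-contained; the real class extends org.apache.hadoop.mapreduce.Partitioner, and the exact fields of StockKey are an assumption here):

```java
// Sketch of the natural-key idea behind demo.NaturalKeyPartitioner:
// the composite key carries a symbol plus other fields (e.g. a timestamp),
// but only the symbol decides the partition, so every record for one stock
// reaches the same reduce task while the rest of the key stays free for
// secondary sorting. Plain Java for illustration only.
public class NaturalKeyDemo {

    // Equivalent of getPartition(): hash only the "natural" part of the key.
    static int getPartition(String symbol, long timestamp, int numPartitions) {
        // Mask the sign bit so the result is always in [0, numPartitions).
        return (symbol.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }

    public static void main(String[] args) {
        // Same symbol, different timestamps: same partition either way.
        System.out.println(getPartition("GOOG", 1L, 10));
        System.out.println(getPartition("GOOG", 2L, 10));
    }
}
```

Because the timestamp is ignored, records for one symbol are grouped on one reducer, which is the usual prerequisite for a secondary sort.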

From source file dz.lab.mapred.hbase.custom_output.CustomParitioner.java

/**
 * 
 * A blog partitioner that ensures that all blogs with the same author end up in the same reduce task
 */
public class CustomParitioner extends Partitioner<Text, BlogWritable> {

From source file edu.umd.cloud9.example.pagerank.RangePartitioner.java

/**
 * Range partitioner. In the context of graph algorithms, ensures that consecutive node ids are
 * blocked together.
 *
 * @author Jimmy Lin
 * @author Michael Schatz

From source file edu.umd.honghongie.RangePartitioner.java

/**
 * Range partitioner. In the context of graph algorithms, ensures that consecutive node ids are
 * blocked together.
 *
 * @author Jimmy Lin
 * @author Michael Schatz

From source file edu.umd.JBizz.RangePartitioner.java

/**
 * Range partitioner. In the context of graph algorithms, ensures that consecutive node ids are
 * blocked together.
 *
 * @author Jimmy Lin
 * @author Michael Schatz

From source file edu.umd.shrawanraina.RangePartitioner.java

/**
 * Range partitioner. In the context of graph algorithms, ensures that consecutive node ids are
 * blocked together.
 *
 * @author Jimmy Lin
 * @author Michael Schatz

From source file edu.umd.windmemory.RangePartitioner.java

/**
 * Range partitioner. In the context of graph algorithms, ensures that consecutive node ids are
 * blocked together.
 *
 * @author Jimmy Lin
 * @author Michael Schatz
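All five RangePartitioner excerpts above cut off before the class body. The range idea they describe, consecutive node ids blocked together, can be sketched in plain Java (the real classes extend org.apache.hadoop.mapreduce.Partitioner and read the total node count from the job Configuration; passing it as a parameter here is an assumption for illustration):

```java
// Sketch of range partitioning: rather than hashing, the id space
// [0, nodeCount) is divided into numPartitions contiguous blocks, so
// consecutive node ids land on the same reducer. Plain Java for
// illustration only.
public class RangeDemo {

    // Equivalent of getPartition(): the id's proportional position in
    // [0, nodeCount) selects the reducer. The long cast avoids overflow.
    static int getPartition(int nodeId, int nodeCount, int numPartitions) {
        return (int) (((long) nodeId * numPartitions) / nodeCount);
    }

    public static void main(String[] args) {
        // With 100 nodes and 4 reducers, ids 0-24 go to partition 0,
        // ids 25-49 to partition 1, and so on.
        System.out.println(getPartition(24, 100, 4)); // prints 0
        System.out.println(getPartition(25, 100, 4)); // prints 1
    }
}
```

Keeping adjacent node ids on one reducer preserves locality that graph algorithms such as PageRank can exploit, which is why these classes avoid the default hash partitioner.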

From source file gr.ntua.h2rdf.byteImport.MyNewTotalOrderPartitioner.java

public class MyNewTotalOrderPartitioner<VALUE> extends Partitioner<ImmutableBytesWritable, VALUE> {
    //private static Random g1= new Random();

    public static final int MAX_HBASE_ROWS = 45;
    public static final int MAX_HBASE_BUCKETS = 255;
    private static final byte[] SUBCLASS = Bytes.toBytes(new Long("8742859611446415633"));

From source file gr.ntua.h2rdf.loadTriples.TotalOrderPartitioner.java

/**
 * Partitioner effecting a total order by reading split points from
 * an externally generated source.
 * 
 * This is an identical copy of o.a.h.mapreduce.lib.partition.TotalOrderPartitioner
 * from Hadoop trunk at r910774.

From source file gr.ntua.h2rdf.sampler.TotalOrderPartitioner.java

/**
 * Partitioner effecting a total order by reading split points from
 * an externally generated source.
 * 
 * This is an identical copy of o.a.h.mapreduce.lib.partition.TotalOrderPartitioner
 * from Hadoop trunk at r910774.
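The copied TotalOrderPartitioner reads its split points from an externally generated source (a partition file produced by sampling). Its dispatch step amounts to a search over those sorted cut points; a minimal plain-Java sketch, with the split points passed in directly instead of loaded from a file (the real class also uses a trie for binary keys, a detail skipped here):

```java
import java.util.Arrays;

// Sketch of total-order partitioning: numPartitions - 1 sorted cut points
// carve the key space into numPartitions ranges, and each key goes to the
// range containing it, so the concatenated reducer outputs are globally
// sorted. Simplified illustration only.
public class TotalOrderDemo {

    static int getPartition(String key, String[] splitPoints) {
        int pos = Arrays.binarySearch(splitPoints, key);
        // When the key is absent, binarySearch returns -(insertionPoint) - 1;
        // the insertion point is exactly the partition index. A key equal to
        // a cut point goes to the partition above it.
        return pos < 0 ? -pos - 1 : pos + 1;
    }

    public static void main(String[] args) {
        String[] splits = {"g", "p"}; // 3 ranges: (-inf,g), [g,p), [p,+inf)
        System.out.println(getPartition("a", splits)); // prints 0
        System.out.println(getPartition("m", splits)); // prints 1
        System.out.println(getPartition("z", splits)); // prints 2
    }
}
```

With well-chosen (sampled) cut points each range receives a similar share of keys, which is what makes the total order cheap to impose.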