Example usage for the org.apache.hadoop.io Writable interface

Introduction

This page lists usage examples for the org.apache.hadoop.io Writable interface; each entry names the source file and shows the opening lines of a class that implements Writable.
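
For orientation, the Writable contract is small: write(DataOutput) serializes the object's fields and readFields(DataInput) repopulates them in the same order, and Hadoop reuses instances via a no-argument constructor. Below is a minimal sketch of a hand-rolled Writable; the class and field names are illustrative, not taken from any of the files listed on this page.

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;

// Illustrative Writable with one int field and one String field.
public class PointWritable implements Writable {
    private int id;
    private String label = "";

    // Hadoop instantiates Writables reflectively, so a no-arg constructor is required.
    public PointWritable() {}

    public PointWritable(int id, String label) {
        this.id = id;
        this.label = label;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeInt(id);
        out.writeUTF(label);   // fields must be read back in exactly this order
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        id = in.readInt();
        label = in.readUTF();
    }
}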

Usage

From source file com.cloudera.integration.oracle.goldengate.ldv.mapreduce.lib.FieldValueWritable.java

/**
 *
 * @author jcustenborder
 */
public class FieldValueWritable implements Writable {

From source file com.cloudera.knittingboar.sgd.ParallelOnlineLogisticRegression.java

/**
 *  Parallel Online Logistic Regression
 * 
 * Based loosely on Mahout's :
 * 
 * http://svn.apache.org/repos/asf/mahout/trunk/core/src/main/java/org/apache/

From source file com.cloudera.knittingboar.sgd.POLRModelParameters.java

/**
 * Encapsulates everything we need to know about a model and how it reads and
 * vectorizes its input. This encapsulation allows us to coherently save and
 * restore a model from a file. This also allows us to keep command line
 * arguments that affect learning in a coherent way.
 * 
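
The save/restore behavior described above typically reduces to streaming the Writable through a DataOutputStream and DataInputStream. A hedged sketch of that round trip, using a generic helper rather than the actual POLRModelParameters API:

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import org.apache.hadoop.io.Writable;

public final class WritableFiles {
    // Persist any Writable (for example, a set of model parameters) to a local file.
    public static void save(Writable w, String path) throws IOException {
        try (DataOutputStream out = new DataOutputStream(new FileOutputStream(path))) {
            w.write(out);
        }
    }

    // Repopulate an already-constructed Writable from a previously saved file.
    public static void load(Writable w, String path) throws IOException {
        try (DataInputStream in = new DataInputStream(new FileInputStream(path))) {
            w.readFields(in);
        }
    }
}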

From source file com.cloudera.recordbreaker.hive.borrowed.AvroGenericRecordWritable.java

/**
 * BORROWED FROM AVRO TRUNK.  WHEN NEW VERSION OF AVRO IS DEPLOYED, THIS CLASS
 * SHOULD BE OBSOLETED IN FAVOR OF
 * org.apache.hadoop.hive.serde2.avro.AvroGenericRecordWritable
 *
 * Wrapper around an Avro GenericRecord.  Necessary because Hive's deserializer
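
A common way to wrap a non-Writable payload such as an Avro GenericRecord is to encode it with Avro's binary encoder and length-prefix the bytes in write(). The sketch below assumes the Avro Schema is available on both sides; it is not the borrowed class's actual implementation.

import java.io.ByteArrayOutputStream;
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryDecoder;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.io.EncoderFactory;
import org.apache.hadoop.io.Writable;

// Illustrative wrapper: the schema is supplied out of band (here, via the constructor).
public class GenericRecordWritable implements Writable {
    private final Schema schema;
    private GenericRecord record;

    public GenericRecordWritable(Schema schema) {
        this.schema = schema;
    }

    public void set(GenericRecord record) { this.record = record; }
    public GenericRecord get() { return record; }

    @Override
    public void write(DataOutput out) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(buffer, null);
        new GenericDatumWriter<GenericRecord>(schema).write(record, encoder);
        encoder.flush();
        byte[] bytes = buffer.toByteArray();
        out.writeInt(bytes.length);   // length prefix so readFields knows how much to consume
        out.write(bytes);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        byte[] bytes = new byte[in.readInt()];
        in.readFully(bytes);
        BinaryDecoder decoder = DecoderFactory.get().binaryDecoder(bytes, null);
        record = new GenericDatumReader<GenericRecord>(schema).read(null, decoder);
    }
}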

From source file com.cloudera.recordbreaker.learnstructure.InferredType.java

/*********************************************************
 * InferredType is returned by TypeInference.infer() for the inferred record type
 * of a file's contents.  It has several subclasses.
 *********************************************************/
public abstract class InferredType implements Writable {
    static byte BASE_TYPE = 1;
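
The BASE_TYPE constant hints at the usual pattern for an abstract Writable with subclasses: write() emits a discriminator byte followed by the subclass fields, and a static reader dispatches on that byte. A simplified sketch of the pattern with made-up subclasses, not the real InferredType hierarchy:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;

public abstract class TaggedType implements Writable {
    static final byte INT_TYPE = 1;
    static final byte STRING_TYPE = 2;

    // Each subclass identifies itself with a single byte.
    abstract byte tag();

    abstract void writeBody(DataOutput out) throws IOException;

    // Write the tag, then the subclass payload.
    @Override
    public void write(DataOutput out) throws IOException {
        out.writeByte(tag());
        writeBody(out);
    }

    // Factory-style read: consume the tag, build the right subclass, fill its fields.
    // Note that readFields() reads only the body; readType() strips the tag first.
    public static TaggedType readType(DataInput in) throws IOException {
        byte tag = in.readByte();
        TaggedType t;
        switch (tag) {
            case INT_TYPE:    t = new IntType();    break;
            case STRING_TYPE: t = new StringType(); break;
            default: throw new IOException("Unknown type tag " + tag);
        }
        t.readFields(in);
        return t;
    }

    static class IntType extends TaggedType {
        int value;
        byte tag() { return INT_TYPE; }
        void writeBody(DataOutput out) throws IOException { out.writeInt(value); }
        public void readFields(DataInput in) throws IOException { value = in.readInt(); }
    }

    static class StringType extends TaggedType {
        String value = "";
        byte tag() { return STRING_TYPE; }
        void writeBody(DataOutput out) throws IOException { out.writeUTF(value); }
        public void readFields(DataInput in) throws IOException { value = in.readUTF(); }
    }
}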

From source file com.cloudera.recordbreaker.schemadict.SchemaStatisticalSummary.java

/********************************************
 * The SchemaStatisticalSummary object is designed to mirror the structure of an input Schema.
 * In addition to the name and type information associated with a Schema object, it keeps statistical data
 * about observed actual data values that correspond to each Schema element.  
 *
 * This class is intended to be used in the following way:
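
A summary that mirrors a Schema usually hangs a small serializable statistics node off each schema element. A hypothetical per-field node, not the actual SchemaStatisticalSummary layout, might look like this:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;

// Hypothetical per-field summary: how many values were seen, plus running min/max.
public class FieldSummary implements Writable {
    private long count;
    private double min = Double.POSITIVE_INFINITY;
    private double max = Double.NEGATIVE_INFINITY;

    public void observe(double value) {
        count++;
        min = Math.min(min, value);
        max = Math.max(max, value);
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeLong(count);
        out.writeDouble(min);
        out.writeDouble(max);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        count = in.readLong();
        min = in.readDouble();
        max = in.readDouble();
    }
}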

From source file com.cloudera.recordservice.examples.terasort.Unsigned16.java

/**
 * An unsigned 16 byte integer class that supports addition, multiplication,
 * and left shifts.
 *
 * Copied from hadoop example.
 */
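
A 16-byte unsigned integer is commonly held as two 64-bit halves, so addition needs only an unsigned carry check and Writable serialization is two writeLong calls. A rough sketch along those lines (not the copied Hadoop class itself); multiplication is omitted and shift amounts are assumed to be in the range 0-127:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;

// 128-bit unsigned value held as two 64-bit halves.
public class UInt128 implements Writable {
    private long hi;
    private long lo;

    public void add(UInt128 other) {
        long newLo = lo + other.lo;
        // Carry into the high word if the low word wrapped around (unsigned overflow).
        long carry = (Long.compareUnsigned(newLo, lo) < 0) ? 1 : 0;
        lo = newLo;
        hi = hi + other.hi + carry;
    }

    public void shiftLeft(int bits) {
        // Assumes 0 <= bits < 128.
        if (bits == 0) return;
        if (bits >= 64) {
            hi = lo << (bits - 64);
            lo = 0;
        } else {
            hi = (hi << bits) | (lo >>> (64 - bits));
            lo = lo << bits;
        }
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeLong(hi);
        out.writeLong(lo);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        hi = in.readLong();
        lo = in.readLong();
    }
}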

From source file com.cloudera.recordservice.mapreduce.RecordServiceInputSplit.java

/**
 * The InputSplit implementation that is used in conjunction with the
 * Record Service. It contains the Schema of the record as well as all the
 * information required for the Record Service Worker to execute the task.
 */
public class RecordServiceInputSplit extends InputSplit implements Writable {
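
A split of this kind has to satisfy two contracts at once: InputSplit's getLength()/getLocations() and Writable's write()/readFields(), so the framework can ship the split to whichever task executes it. A bare-bones sketch with made-up fields, not the actual RecordService schema and task payload:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.InputSplit;

public class SimpleSplit extends InputSplit implements Writable {
    private String[] hosts = new String[0];    // preferred locations for the task
    private long length;                       // approximate bytes covered by the split
    private byte[] taskPayload = new byte[0];  // opaque work description shipped to the worker

    @Override
    public long getLength() { return length; }

    @Override
    public String[] getLocations() { return hosts; }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeLong(length);
        out.writeInt(hosts.length);
        for (String h : hosts) out.writeUTF(h);
        out.writeInt(taskPayload.length);
        out.write(taskPayload);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        length = in.readLong();
        hosts = new String[in.readInt()];
        for (int i = 0; i < hosts.length; i++) hosts[i] = in.readUTF();
        taskPayload = new byte[in.readInt()];
        in.readFully(taskPayload);
    }
}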

From source file com.cloudera.recordservice.mr.RecordServiceRecord.java

public class RecordServiceRecord implements Writable {
    // Array of Writable objects. This is created once and reused.
    private Writable[] columnValObjects_;

    // The values for the current record. If column[i] is NULL,
    // columnVals_[i] is null, otherwise it is columnValObjects_[i].
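
The reuse pattern described in these comments, preallocated column Writables plus a nullable view array, can be serialized by writing a per-column null marker ahead of each value. A simplified sketch with two hypothetical column types:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;

public class ColumnarRecord implements Writable {
    // Allocated once and reused for every record, avoiding per-record garbage.
    private final Writable[] columnValObjects = { new IntWritable(), new Text() };
    // columnVals[i] is null for a NULL column, otherwise it points at columnValObjects[i].
    private final Writable[] columnVals = new Writable[columnValObjects.length];

    public void setNull(int i)       { columnVals[i] = null; }
    public void setPresent(int i)    { columnVals[i] = columnValObjects[i]; }
    public Writable getColumn(int i) { return columnVals[i]; }

    @Override
    public void write(DataOutput out) throws IOException {
        for (int i = 0; i < columnVals.length; i++) {
            boolean present = columnVals[i] != null;
            out.writeBoolean(present);              // one null marker per column
            if (present) columnVals[i].write(out);
        }
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        for (int i = 0; i < columnValObjects.length; i++) {
            if (in.readBoolean()) {
                columnValObjects[i].readFields(in); // reuse the preallocated object
                columnVals[i] = columnValObjects[i];
            } else {
                columnVals[i] = null;
            }
        }
    }
}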

From source file com.cloudera.recordservice.mr.Schema.java

/**
 * The Schema class provides metadata for records. It is a wrapper for
 * core.Schema but implements the Writable interface.
 */
public class Schema implements Writable {
    private com.cloudera.recordservice.core.Schema schema_;
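
Wrapping a class that is not itself a Writable usually means round-tripping it through a length-prefixed byte array inside write()/readFields(). The core RecordService Schema API is not shown here, so the sketch below falls back on Java serialization of an arbitrary Serializable payload purely for illustration:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import org.apache.hadoop.io.Writable;

// Illustrative wrapper: makes any Serializable payload usable where Hadoop expects a Writable.
public class SerializableWritable<T extends Serializable> implements Writable {
    private T value;

    public SerializableWritable() {}
    public SerializableWritable(T value) { this.value = value; }

    public T get() { return value; }

    @Override
    public void write(DataOutput out) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(buffer)) {
            oos.writeObject(value);
        }
        byte[] bytes = buffer.toByteArray();
        out.writeInt(bytes.length);   // length prefix so the reader knows how many bytes follow
        out.write(bytes);
    }

    @SuppressWarnings("unchecked")
    @Override
    public void readFields(DataInput in) throws IOException {
        byte[] bytes = new byte[in.readInt()];
        in.readFully(bytes);
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            value = (T) ois.readObject();
        } catch (ClassNotFoundException e) {
            throw new IOException(e);
        }
    }
}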