Example usage for org.apache.hadoop.io SequenceFile setDefaultCompressionType

Introduction

This page shows example usage of org.apache.hadoop.io SequenceFile setDefaultCompressionType.

Prototype

static public void setDefaultCompressionType(Configuration job, CompressionType val) 

Document

Set the default compression type for sequence files.
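CompressionType has three values: NONE, RECORD, and BLOCK. Before the full example below, here is a minimal, self-contained sketch (not taken from the source file on this page; the output path and the Text/IntWritable key/value types are illustrative assumptions) showing how the default compression type set on a Configuration is picked up by writers that are created without an explicit CompressionType argument:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class DefaultCompressionExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Writers created from this conf without an explicit CompressionType
        // argument will now default to BLOCK compression.
        SequenceFile.setDefaultCompressionType(conf, SequenceFile.CompressionType.BLOCK);

        Path path = new Path("/tmp/example.seq"); // hypothetical output path
        FileSystem fs = path.getFileSystem(conf);

        SequenceFile.Writer writer = SequenceFile.createWriter(fs, conf, path, Text.class, IntWritable.class);
        try {
            writer.append(new Text("alpha"), new IntWritable(1));
            writer.append(new Text("beta"), new IntWritable(2));
        } finally {
            writer.close();
        }
    }
}

The example from TweetProducer.java below sets the default to NONE instead, so its offsets file is written uncompressed.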

Usage

From source file: kafka.etl.tweet.producer.TweetProducer.java

License: Apache License

protected void generateOffsets() throws Exception {
    JobConf conf = new JobConf();
    java.util.Date date = new java.util.Date();
    conf.set("hadoop.job.ugi", _props.getProperty("hadoop.job.ugi"));
    conf.setCompressMapOutput(false);
    Calendar cal = Calendar.getInstance();
    Path outPath = new Path(_offsetsDir + Path.SEPARATOR + "1.dat");
    FileSystem fs = outPath.getFileSystem(conf);
    if (fs.exists(outPath))
        fs.delete(outPath, true); // non-deprecated delete(Path, boolean) overload, same recursive behavior

    KafkaETLRequest request = new KafkaETLRequest(_topic, "tcp://" + _uri.getHost() + ":" + _uri.getPort(), 0);

    System.out.println("Dump " + request.toString() + " to " + outPath.toUri().toString());

    byte[] bytes = request.toString().getBytes("UTF-8");
    KafkaETLKey dummyKey = new KafkaETLKey();
    // Disable compression for the offsets file by setting the default compression type to NONE.
    SequenceFile.setDefaultCompressionType(conf, SequenceFile.CompressionType.NONE);
    SequenceFile.Writer writer = SequenceFile.createWriter(fs, conf, outPath, KafkaETLKey.class,
            BytesWritable.class);
    writer.append(dummyKey, new BytesWritable(bytes));
    writer.close();
}
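
As a follow-up, here is a hedged sketch (not part of TweetProducer.java; the method name verifyOffsets is hypothetical) that reads the dummy record back with SequenceFile.Reader, reusing the same _offsetsDir path and the KafkaETLKey/BytesWritable types from the snippet above:

protected void verifyOffsets() throws Exception {
    JobConf conf = new JobConf();
    Path inPath = new Path(_offsetsDir + Path.SEPARATOR + "1.dat");
    FileSystem fs = inPath.getFileSystem(conf);

    // Open the file written above and print each request string it contains.
    SequenceFile.Reader reader = new SequenceFile.Reader(fs, inPath, conf);
    try {
        KafkaETLKey key = new KafkaETLKey();
        BytesWritable value = new BytesWritable();
        while (reader.next(key, value)) {
            String request = new String(value.getBytes(), 0, value.getLength(), "UTF-8");
            System.out.println("Read " + request + " from " + inPath.toUri());
        }
    } finally {
        reader.close();
    }
}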