Example usage for org.apache.hadoop.hdfs DistributedFileSystem getDataNodeStats


Introduction

On this page you can find an example usage of org.apache.hadoop.hdfs DistributedFileSystem getDataNodeStats.

Prototype

public DatanodeInfo[] getDataNodeStats(final DatanodeReportType type) throws IOException 
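As a hedged sketch (not part of the original page), a typical call to this method looks like the following. It assumes an HDFS configuration is on the classpath and a cluster is reachable; the DatanodeReportType import path shown (HdfsConstants) is for Hadoop 2.x and may differ in other versions:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType;

public class DataNodeStatsSketch {
    public static void main(String[] args) throws IOException {
        // Assumes core-site.xml/hdfs-site.xml are on the classpath and
        // point fs.defaultFS at an HDFS namenode.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        if (!(fs instanceof DistributedFileSystem)) {
            throw new IllegalStateException("Default file system is not HDFS: " + fs.getUri());
        }
        DistributedFileSystem dfs = (DistributedFileSystem) fs;

        // LIVE restricts the report to live datanodes; ALL, DEAD and
        // DECOMMISSIONING are the other report types.
        DatanodeInfo[] live = dfs.getDataNodeStats(DatanodeReportType.LIVE);
        for (DatanodeInfo node : live) {
            System.out.println(node.getHostName() + " " + node.getIpAddr());
        }
    }
}
```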


Usage

From source file: org.apache.hawq.pxf.service.rest.ClusterNodesResource.java

License: Apache License

/**
 * Queries the Hadoop namenode with the getDataNodeStats API. It
 * retrieves the host IP and REST port of every HDFS data node in the
 * cluster, then packs the results in JSON format and writes them to
 * the HTTP response stream. Response examples:<br>
 * <ol>
 * <li>When there are no datanodes - getDataNodeStats returns an empty array
 * <code>{"regions":[]}</code></li>
 * <li>When there are datanodes
 * <code>{"regions":[{"host":"1.2.3.1","port":50075},{"host":"1.2.3.2","port"
 * :50075}]}</code></li>
 * </ol>
 *
 * @return JSON response with nodes info
 * @throws Exception if failed to retrieve info
 */
@GET
@Path("getNodesInfo")
@Produces("application/json")
public Response read() throws Exception {
    LOG.debug("getNodesInfo started");
    StringBuilder jsonOutput = new StringBuilder("{\"regions\":[");
    try {
        /*
         * 1. Initialize the HADOOP client side API for a distributed file
         * system
         */
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        DistributedFileSystem dfs = (DistributedFileSystem) fs;

        /*
         * 2. Query the namenode for the datanodes info. Only live nodes are
         * returned - in accordance with the results returned by
         * org.apache.hadoop.hdfs.tools.DFSAdmin#report().
         */
        DatanodeInfo[] liveNodes = dfs.getDataNodeStats(DatanodeReportType.LIVE);

        /*
         * 3. Pack the datanodes info in a JSON text format and write it to
         * the HTTP output stream.
         */
        String prefix = "";
        for (DatanodeInfo node : liveNodes) {
            verifyNode(node);
            // write one node to the HTTP stream
            jsonOutput.append(prefix).append(writeNode(node));
            prefix = ",";
        }
        jsonOutput.append("]}");
        LOG.debug("getNodesInfo output: " + jsonOutput);
    } catch (NodeDataException e) {
        LOG.error("Nodes verification failed", e);
        throw e;
    } catch (ClientAbortException e) {
        LOG.error("Remote connection closed by HAWQ", e);
        throw e;
    } catch (java.io.IOException e) {
        LOG.error("Unhandled exception thrown", e);
        throw e;
    }

    return Response.ok(jsonOutput.toString(), MediaType.APPLICATION_JSON_TYPE).build();
}
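The comma-prefix idiom in step 3 (an empty prefix before the first element, then "," for the rest) is a common way to build a JSON array without a trailing separator. A minimal standalone sketch of that idiom, with made-up hosts and a hypothetical formatNode helper standing in for the resource's writeNode:

```java
public class JsonJoinSketch {
    // Hypothetical stand-in for the resource's writeNode(node) helper.
    static String formatNode(String host, int port) {
        return "{\"host\":\"" + host + "\",\"port\":" + port + "}";
    }

    static String buildRegions(String[][] nodes) {
        StringBuilder json = new StringBuilder("{\"regions\":[");
        String prefix = ""; // empty before the first element, "," afterwards
        for (String[] n : nodes) {
            json.append(prefix).append(formatNode(n[0], Integer.parseInt(n[1])));
            prefix = ",";
        }
        return json.append("]}").toString();
    }

    public static void main(String[] args) {
        String[][] nodes = { { "1.2.3.1", "50075" }, { "1.2.3.2", "50075" } };
        System.out.println(buildRegions(nodes));
        // → {"regions":[{"host":"1.2.3.1","port":50075},{"host":"1.2.3.2","port":50075}]}
    }
}
```

Note that an empty input yields `{"regions":[]}`, matching the first response example in the Javadoc above.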