Example usage for edu.stanford.nlp.math ArrayMath multiplyInPlace

Introduction

This page lists usage examples for the edu.stanford.nlp.math ArrayMath.multiplyInPlace method.

Prototype

public static void multiplyInPlace(double[] a, double b) 

Document

Scales the values in this array by b.
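As a minimal illustration of the documented semantics (every element of `a` is scaled by `b`, mutating the array in place), the following self-contained sketch reimplements the operation locally rather than depending on the Stanford CoreNLP jar; the class and method names are illustrative only:

```java
public class MultiplyInPlaceDemo {

    // Equivalent of ArrayMath.multiplyInPlace: scale each element of a
    // by b, mutating the array rather than returning a copy.
    static void multiplyInPlace(double[] a, double b) {
        for (int i = 0; i < a.length; i++) {
            a[i] *= b;
        }
    }

    public static void main(String[] args) {
        double[] dir = {1.0, -2.0, 0.5};
        multiplyInPlace(dir, 2.0);
        // dir now holds [2.0, -4.0, 1.0]
        System.out.println(java.util.Arrays.toString(dir));
    }
}
```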

Usage

From source file:cmu.arktweetnlp.impl.OWLQN.java

void mapDirByInverseHessian() {
    int count = sList.size();

    if (count != 0) {
        for (int i = count - 1; i >= 0; i--) {
            //mheilman: The program will try to divide by zero here unless there is a check 
            //that the parameters change at each iteration.  See comments in the minimize() method.
            //A roList value is the inner product of the change in the gradient 
            //and the change in parameters between the current and last iterations.  
            //See the discussion of L-BFGS in Nocedal and Wright's Numerical Optimization book 
            //(though I think that defines rho as the multiplicative inverse of what is here).
            alphas[i] = -ArrayMath.innerProduct(sList.get(i), dir) / roList.get(i);
            ArrayMath.addMultInPlace(dir, yList.get(i), alphas[i]);
        }

        double[] lastY = yList.get(count - 1);
        double yDotY = ArrayMath.innerProduct(lastY, lastY);
        double scalar = roList.get(count - 1) / yDotY;
        ArrayMath.multiplyInPlace(dir, scalar);

        for (int i = 0; i < count; i++) {
            double beta = ArrayMath.innerProduct(yList.get(i), dir) / roList.get(i);
            ArrayMath.addMultInPlace(dir, sList.get(i), -alphas[i] - beta);
        }
    }
}

From source file:dz.pfe.storm.ressources.cmu.arktweetnlp.impl.OWLQN.java

synchronized void mapDirByInverseHessian() {
    int count = sList.size();

    if (count != 0) {
        for (int i = count - 1; i >= 0; i--) {
            //mheilman: The program will try to divide by zero here unless there is a check
            //that the parameters change at each iteration.  See comments in the minimize() method.
            //A roList value is the inner product of the change in the gradient
            //and the change in parameters between the current and last iterations.
            //See the discussion of L-BFGS in Nocedal and Wright's Numerical Optimization book
            //(though I think that defines rho as the multiplicative inverse of what is here).
            alphas[i] = -ArrayMath.innerProduct(sList.get(i), dir) / roList.get(i);
            ArrayMath.addMultInPlace(dir, yList.get(i), alphas[i]);
        }

        double[] lastY = yList.get(count - 1);
        double yDotY = ArrayMath.innerProduct(lastY, lastY);
        double scalar = roList.get(count - 1) / yDotY;
        ArrayMath.multiplyInPlace(dir, scalar);

        for (int i = 0; i < count; i++) {
            double beta = ArrayMath.innerProduct(yList.get(i), dir) / roList.get(i);
            ArrayMath.addMultInPlace(dir, sList.get(i), -alphas[i] - beta);
        }
    }
}
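Both source files implement the same L-BFGS two-loop recursion, in which multiplyInPlace applies the initial inverse-Hessian scaling between the two loops. The following self-contained sketch mirrors that structure, with local helper methods standing in for the ArrayMath calls (all names here are illustrative, not from the Stanford API):

```java
import java.util.List;

public class TwoLoopDemo {

    static double dot(double[] x, double[] y) {
        double sum = 0.0;
        for (int i = 0; i < x.length; i++) sum += x[i] * y[i];
        return sum;
    }

    // x += c * y, in place (stand-in for ArrayMath.addMultInPlace)
    static void addMult(double[] x, double[] y, double c) {
        for (int i = 0; i < x.length; i++) x[i] += c * y[i];
    }

    // a *= b, element-wise (stand-in for ArrayMath.multiplyInPlace)
    static void multiplyInPlace(double[] a, double b) {
        for (int i = 0; i < a.length; i++) a[i] *= b;
    }

    // L-BFGS two-loop recursion, mirroring mapDirByInverseHessian:
    // maps `dir` through an approximate inverse Hessian built from the
    // stored parameter steps (sList), gradient changes (yList), and
    // their inner products (roList).
    static void mapDirByInverseHessian(double[] dir, List<double[]> sList,
            List<double[]> yList, List<Double> roList, double[] alphas) {
        int count = sList.size();
        if (count == 0) return;

        for (int i = count - 1; i >= 0; i--) {
            alphas[i] = -dot(sList.get(i), dir) / roList.get(i);
            addMult(dir, yList.get(i), alphas[i]);
        }

        // Initial Hessian scaling: (s . y) / (y . y) for the newest pair.
        double[] lastY = yList.get(count - 1);
        multiplyInPlace(dir, roList.get(count - 1) / dot(lastY, lastY));

        for (int i = 0; i < count; i++) {
            double beta = dot(yList.get(i), dir) / roList.get(i);
            addMult(dir, sList.get(i), -alphas[i] - beta);
        }
    }
}
```

With an empty history the direction is left unchanged; with a single stored pair where s equals y, the recursion maps the direction back to itself, matching an identity initial Hessian.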