Example usage for edu.stanford.nlp.math ArrayMath addMultInPlace

Introduction

On this page you can find example usage for edu.stanford.nlp.math ArrayMath addMultInPlace.

Prototype

public static void addMultInPlace(double[] a, double[] b, double c) 

Document

Add c times the array b to array a.
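
In other words, the call performs the element-wise update a[i] += c * b[i], modifying a in place and leaving b unchanged. The short sketch below illustrates this behavior; the class name and array values are made up purely for illustration, and only the addMultInPlace prototype shown above (plus standard Java) is assumed.

import edu.stanford.nlp.math.ArrayMath;
import java.util.Arrays;

public class AddMultInPlaceDemo {
    public static void main(String[] args) {
        double[] a = { 1.0, 2.0, 3.0 };    // destination array, updated in place
        double[] b = { 10.0, 20.0, 30.0 }; // array to be scaled and added
        double c = 0.5;                    // scale factor

        // a[i] += c * b[i] for every index i; b is not modified
        ArrayMath.addMultInPlace(a, b, c);

        System.out.println(Arrays.toString(a)); // [6.0, 12.0, 18.0]
    }
}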

Usage

From source file: cmu.arktweetnlp.impl.OWLQN.java

void mapDirByInverseHessian() {
    int count = sList.size();

    if (count != 0) {
        for (int i = count - 1; i >= 0; i--) {
            //mheilman: The program will try to divide by zero here unless there is a check 
            //that the parameters change at each iteration.  See comments in the minimize() method.
            //A roList value is the inner product of the change in the gradient 
            //and the change in parameters between the current and last iterations.  
            //See the discussion of L-BFGS in Nocedal and Wright's Numerical Optimization book 
            //(though I think that defines rho as the multiplicative inverse of what is here).
            alphas[i] = -ArrayMath.innerProduct(sList.get(i), dir) / roList.get(i);
            ArrayMath.addMultInPlace(dir, yList.get(i), alphas[i]);
        }

        double[] lastY = yList.get(count - 1);
        double yDotY = ArrayMath.innerProduct(lastY, lastY);
        double scalar = roList.get(count - 1) / yDotY;
        ArrayMath.multiplyInPlace(dir, scalar);

        for (int i = 0; i < count; i++) {
            double beta = ArrayMath.innerProduct(yList.get(i), dir) / roList.get(i);
            ArrayMath.addMultInPlace(dir, sList.get(i), -alphas[i] - beta);
        }
    }
}
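
In this two-loop L-BFGS recursion, addMultInPlace acts as a saxpy-style update: dir += alphas[i] * y in the backward pass and dir += (-alphas[i] - beta) * s in the forward pass, so each stored correction is folded into the search direction in place without allocating temporary arrays inside the loops.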

From source file: dz.pfe.storm.ressources.cmu.arktweetnlp.impl.OWLQN.java

synchronized void mapDirByInverseHessian() {
    int count = sList.size();

    if (count != 0) {
        for (int i = count - 1; i >= 0; i--) {
            //mheilman: The program will try to divide by zero here unless there is a check
            //that the parameters change at each iteration.  See comments in the minimize() method.
            //A roList value is the inner product of the change in the gradient
            //and the change in parameters between the current and last iterations.
            //See the discussion of L-BFGS in Nocedal and Wright's Numerical Optimization book
            //(though I think that defines rho as the multiplicative inverse of what is here).
            alphas[i] = -ArrayMath.innerProduct(sList.get(i), dir) / roList.get(i);
            ArrayMath.addMultInPlace(dir, yList.get(i), alphas[i]);
        }

        double[] lastY = yList.get(count - 1);
        double yDotY = ArrayMath.innerProduct(lastY, lastY);
        double scalar = roList.get(count - 1) / yDotY;
        ArrayMath.multiplyInPlace(dir, scalar);

        for (int i = 0; i < count; i++) {
            double beta = ArrayMath.innerProduct(yList.get(i), dir) / roList.get(i);
            ArrayMath.addMultInPlace(dir, sList.get(i), -alphas[i] - beta);
        }
    }
}
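
This second example is the same direction-mapping routine; the only difference is that the method is declared synchronized, so concurrent callers cannot interleave their in-place updates to the shared dir vector.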