Usage of the org.nd4j.linalg.factory.Nd4j.getBlasWrapper() method, with code examples

x33g5p2x · Reposted 2022-01-24 under: Other

This article collects Java code examples for org.nd4j.linalg.factory.Nd4j.getBlasWrapper(), showing how the method is used in practice. The examples come from selected projects on GitHub, Stack Overflow, Maven, and similar platforms, and should serve as useful references. Details of Nd4j.getBlasWrapper() are as follows:
Package path: org.nd4j.linalg.factory.Nd4j
Class name: Nd4j
Method name: getBlasWrapper

About Nd4j.getBlasWrapper

Returns the BLAS wrapper for the current backend. As the examples below show, the wrapper exposes level 1 BLAS routines directly (dot, axpy, scal, iamax), level 2 and level 3 routines via level2()/level3() (gemv, gemm), and LAPACK routines both directly (syev, geev) and via lapack() (gesvd).
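The examples below repeatedly call the wrapper's level-1 dot routine. As a minimal plain-Java sketch of what that computes (no ND4J on the classpath; the DotSketch class name is made up for illustration):

```java
// Sketch of the level-1 BLAS dot product: sum of elementwise products.
// This mirrors what Nd4j.getBlasWrapper().dot(x, y) computes for two vectors.
public class DotSketch {
    static double dot(double[] x, double[] y) {
        double sum = 0.0;
        for (int i = 0; i < x.length; i++)
            sum += x[i] * y[i];
        return sum;
    }

    public static void main(String[] args) {
        // 1*4 + 2*5 + 3*6 = 32
        System.out.println(dot(new double[]{1, 2, 3}, new double[]{4, 5, 6})); // 32.0
    }
}
```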

Code examples

Code example source: deeplearning4j/dl4j-examples

    public static void main(String[] args) {
        Nd4j.setDataType(DataBuffer.Type.DOUBLE);
        INDArray arr = Nd4j.create(300);
        double numTimes = 10000000;
        double total = 0;
        for (int i = 0; i < numTimes; i++) {
            long start = System.nanoTime();
            Nd4j.getBlasWrapper().axpy(new Integer(1), arr, arr);
            long after = System.nanoTime();
            long add = Math.abs(after - start);
            System.out.println("Took " + add);
            total += Math.abs(after - start);
        }
        System.out.println("Avg time " + (total / numTimes));
    }
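The benchmark above times axpy, which computes y := a*x + y in place, so axpy(1, arr, arr) doubles arr on every call. A minimal plain-Java sketch of the operation (the AxpySketch class is hypothetical, not part of ND4J):

```java
// Sketch of level-1 BLAS axpy: y := a*x + y, updating y in place.
public class AxpySketch {
    static void axpy(double a, double[] x, double[] y) {
        for (int i = 0; i < x.length; i++)
            y[i] += a * x[i];
    }

    public static void main(String[] args) {
        double[] x = {1, 2, 3};
        double[] y = {10, 10, 10};
        axpy(1.0, x, y);
        System.out.println(java.util.Arrays.toString(y)); // [11.0, 12.0, 13.0]
    }
}
```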

Code example source: deeplearning4j/nd4j

    @Override
    public Map<Integer, Double> labelCounts() {
        Map<Integer, Double> ret = new HashMap<>();
        if (labels == null)
            return ret;
        long nTensors = labels.tensorsAlongDimension(1);
        for (int i = 0; i < nTensors; i++) {
            INDArray row = labels.tensorAlongDimension(i, 1);
            INDArray javaRow = labels.javaTensorAlongDimension(i, 1);
            int maxIdx = Nd4j.getBlasWrapper().iamax(row);
            int maxIdxJava = Nd4j.getBlasWrapper().iamax(javaRow);
            if (maxIdx < 0)
                throw new IllegalStateException("Please check the iamax implementation for "
                        + Nd4j.getBlasWrapper().getClass().getName());
            if (ret.get(maxIdx) == null)
                ret.put(maxIdx, 1.0);
            else
                ret.put(maxIdx, ret.get(maxIdx) + 1.0);
        }
        return ret;
    }
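labelCounts() above uses iamax to find the predicted class per row: BLAS iamax returns the index of the element with the largest absolute value (first occurrence wins on ties). A plain-Java sketch, with a made-up class name:

```java
// Sketch of level-1 BLAS iamax: index of the max-absolute-value element.
public class IamaxSketch {
    static int iamax(double[] x) {
        int maxIdx = 0;
        for (int i = 1; i < x.length; i++)
            if (Math.abs(x[i]) > Math.abs(x[maxIdx]))
                maxIdx = i;
        return maxIdx;
    }

    public static void main(String[] args) {
        // -0.7 has the largest absolute value, at index 1
        System.out.println(iamax(new double[]{0.1, -0.7, 0.2})); // 1
    }
}
```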

Code example source: deeplearning4j/nd4j

    @Override
    public int outcome() {
        return Nd4j.getBlasWrapper().iamax(getLabels());
    }

Code example source: deeplearning4j/nd4j

    /**
     * Scale by 1 / norm2 of the matrix
     *
     * @param toScale the ndarray to scale
     * @return the scaled ndarray
     */
    public static INDArray unitVec(INDArray toScale) {
        double length = toScale.norm2Number().doubleValue();
        if (length > 0) {
            if (toScale.data().dataType() == (DataBuffer.Type.FLOAT))
                return Nd4j.getBlasWrapper().scal(1.0f / (float) length, toScale);
            else
                return Nd4j.getBlasWrapper().scal(1.0 / length, toScale);
        }
        return toScale;
    }
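unitVec above divides by the L2 norm via the wrapper's in-place scal routine (x := a*x). The same computation in plain Java (hypothetical UnitVecSketch class, no ND4J):

```java
// Sketch of unitVec: scale a vector by 1/norm2 using an in-place scal.
public class UnitVecSketch {
    static double[] unitVec(double[] v) {
        double norm2 = 0.0;
        for (double x : v)
            norm2 += x * x;
        norm2 = Math.sqrt(norm2);
        if (norm2 > 0) {
            double s = 1.0 / norm2;          // the scal factor
            for (int i = 0; i < v.length; i++)
                v[i] *= s;                   // in-place scal(s, v)
        }
        return v;
    }

    public static void main(String[] args) {
        // (3, 4) has norm 5, so the unit vector is approximately (0.6, 0.8)
        double[] v = unitVec(new double[]{3, 4});
        System.out.println(v[0] + " " + v[1]);
    }
}
```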

Code example source: deeplearning4j/nd4j

    /**
     * Compute generalized eigenvalues of the problem A x = L x.
     * Matrix A is modified in the process, holding eigenvectors after execution.
     *
     * @param A symmetric Matrix A. After execution, A will contain the eigenvectors as columns
     * @return a vector of eigenvalues L.
     */
    public static INDArray symmetricGeneralizedEigenvalues(INDArray A) {
        INDArray eigenvalues = Nd4j.create(A.rows());
        Nd4j.getBlasWrapper().syev('V', 'L', A, eigenvalues);
        return eigenvalues;
    }
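syev('V', 'L', A, eigenvalues) asks LAPACK for all eigenvalues (and, with job 'V', eigenvectors) of a symmetric matrix, returning eigenvalues in ascending order. For intuition, the closed form for a symmetric 2×2 matrix [[a, b], [b, c]] can be sketched in plain Java (the Syev2x2Sketch class is illustrative only, not a LAPACK replacement):

```java
// Closed-form eigenvalues of a symmetric 2x2 matrix [[a, b], [b, c]],
// returned in ascending order like LAPACK's syev.
public class Syev2x2Sketch {
    static double[] eig2x2(double a, double b, double c) {
        double mean = (a + c) / 2.0;
        double d = Math.sqrt(((a - c) / 2.0) * ((a - c) / 2.0) + b * b);
        return new double[]{mean - d, mean + d};
    }

    public static void main(String[] args) {
        // [[2, 1], [1, 2]] has eigenvalues 1 and 3
        double[] w = eig2x2(2, 1, 2);
        System.out.println(w[0] + " " + w[1]); // 1.0 3.0
    }
}
```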

Code example source: deeplearning4j/nd4j

    /** Matrix multiply: Implements c = alpha*op(a)*op(b) + beta*c where op(X) means transpose X (or not)
     * depending on setting of arguments transposeA and transposeB.<br>
     * Note that matrix c MUST be fortran order, have zero offset and have c.data().length == c.length().
     * An exception will be thrown otherwise.<br>
     * Don't use this unless you know about level 3 blas and NDArray storage orders.
     * @param a First matrix
     * @param b Second matrix
     * @param c result matrix. Used in calculation (assuming beta != 0) and result is stored in this. f order,
     *          zero offset and length == data.length only
     * @param transposeA if true: transpose matrix a before mmul
     * @param transposeB if true: transpose matrix b before mmul
     * @return result, i.e., matrix c is returned for convenience
     */
    public static INDArray gemm(INDArray a,
                                INDArray b,
                                INDArray c,
                                boolean transposeA,
                                boolean transposeB,
                                double alpha,
                                double beta) {
        getBlasWrapper().level3().gemm(a, b, c, transposeA, transposeB, alpha, beta);
        return c;
    }
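The contract documented above — c = alpha*op(a)*op(b) + beta*c with optional transposes — can be sketched as a naive triple loop over plain row-major Java arrays (GemmSketch is a made-up name; real gemm operates on column-major buffers and is far more optimized):

```java
// Naive sketch of level-3 BLAS gemm for square n x n row-major matrices:
// c = alpha*op(a)*op(b) + beta*c, where op transposes when its flag is set.
public class GemmSketch {
    static void gemm(int n, double[][] a, double[][] b, double[][] c,
                     boolean transposeA, boolean transposeB,
                     double alpha, double beta) {
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) {
                double sum = 0.0;
                for (int k = 0; k < n; k++) {
                    double av = transposeA ? a[k][i] : a[i][k];
                    double bv = transposeB ? b[j][k] : b[k][j];
                    sum += av * bv;
                }
                c[i][j] = alpha * sum + beta * c[i][j];
            }
    }

    public static void main(String[] args) {
        double[][] a = {{1, 2}, {3, 4}};
        double[][] b = {{5, 6}, {7, 8}};
        double[][] c = {{0, 0}, {0, 0}};
        gemm(2, a, b, c, false, false, 1.0, 0.0);
        // [1 2; 3 4] * [5 6; 7 8] = [19 22; 43 50]
        System.out.println(c[0][0] + " " + c[0][1] + " " + c[1][0] + " " + c[1][1]);
    }
}
```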

Code example source: deeplearning4j/nd4j

    /**
     * in place subtraction of two matrices
     *
     * @param other the second ndarray to subtract
     * @param result the result ndarray
     * @return the result of the subtraction
     */
    @Override
    public IComplexNDArray subi(INDArray other, INDArray result) {
        IComplexNDArray cOther = (IComplexNDArray) other;
        IComplexNDArray cResult = (IComplexNDArray) result;
        if (other.isScalar())
            return subi(cOther.getComplex(0), result);
        if (result == this)
            Nd4j.getBlasWrapper().axpy(Nd4j.NEG_UNIT, cOther, cResult);
        else if (result == other) {
            if (data.dataType() == (DataBuffer.Type.DOUBLE)) {
                Nd4j.getBlasWrapper().scal(Nd4j.NEG_UNIT.asDouble(), cResult);
                Nd4j.getBlasWrapper().axpy(Nd4j.UNIT, this, cResult);
            } else {
                Nd4j.getBlasWrapper().scal(Nd4j.NEG_UNIT.asFloat(), cResult);
                Nd4j.getBlasWrapper().axpy(Nd4j.UNIT, this, cResult);
            }
        } else {
            Nd4j.getBlasWrapper().copy(this, result);
            Nd4j.getBlasWrapper().axpy(Nd4j.NEG_UNIT, cOther, cResult);
        }
        return cResult;
    }

Code example source: deeplearning4j/nd4j

    /**
     * Compute generalized eigenvalues of the problem A x = L x.
     * Matrix A is modified in the process, holding eigenvectors as columns after execution.
     *
     * @param A symmetric Matrix A. After execution, A will contain the eigenvectors as columns
     * @param calculateVectors if false, A is left unmodified (the routine works on a copy) and the eigenvectors are discarded
     * @return a vector of eigenvalues L.
     */
    public static INDArray symmetricGeneralizedEigenvalues(INDArray A, boolean calculateVectors) {
        INDArray eigenvalues = Nd4j.create(A.rows());
        Nd4j.getBlasWrapper().syev('V', 'L', (calculateVectors ? A : A.dup()), eigenvalues);
        return eigenvalues;
    }

Code example source: deeplearning4j/nd4j

    @Override
    public INDArray mmul(INDArray other) {
        long[] shape = {rows(), other.columns()};
        INDArray result = createUninitialized(shape, 'f');
        if (result.isScalar())
            return Nd4j.scalar(Nd4j.getBlasWrapper().dot(this, other));
        return mmuli(other, result);
    }

Code example source: deeplearning4j/nd4j

    /**
     * Computes the eigenvalues of a general matrix.
     */
    public static IComplexNDArray eigenvalues(INDArray A) {
        assert A.rows() == A.columns();
        INDArray WR = Nd4j.create(A.rows(), A.rows());
        INDArray WI = WR.dup();
        Nd4j.getBlasWrapper().geev('N', 'N', A.dup(), WR, WI, dummy, dummy);
        return Nd4j.createComplex(WR, WI);
    }

Code example source: deeplearning4j/nd4j

    /**
     * Perform a copy matrix multiplication
     *
     * @param other the other matrix to perform matrix multiply with
     * @return the result of the matrix multiplication
     */
    @Override
    public INDArray mmul(INDArray other) {
        // FIXME: for 1D case, we probably want vector output here?
        long[] shape = {rows(), other.rank() == 1 ? 1 : other.columns()};
        INDArray result = createUninitialized(shape, 'f');
        if (result.isScalar())
            return Nd4j.scalar(Nd4j.getBlasWrapper().dot(this, other));
        return mmuli(other, result);
    }

Code example source: deeplearning4j/nd4j

    /**
     * Compute generalized eigenvalues of the problem A x = L B x.
     * The data will be unchanged, no eigenvectors returned.
     *
     * @param A symmetric Matrix A.
     * @param B symmetric Matrix B.
     * @return a vector of eigenvalues L.
     */
    public static INDArray symmetricGeneralizedEigenvalues(INDArray A, INDArray B) {
        assert A.rows() == A.columns();
        assert B.rows() == B.columns();
        INDArray W = Nd4j.create(A.rows());
        A = InvertMatrix.invert(B, false).mmuli(A);
        Nd4j.getBlasWrapper().syev('V', 'L', A, W);
        return W;
    }

Code example source: deeplearning4j/nd4j

    /**
     * Returns a column vector where each entry is the nth bilinear
     * product of the nth slices of the two tensors.
     */
    @Override
    public INDArray bilinearProducts(INDArray curr, INDArray in) {
        assert curr.shape().length == 3;
        if (in.columns() != 1) {
            throw new AssertionError("Expected a column vector");
        }
        if (in.rows() != curr.size(curr.shape().length - 1)) {
            throw new AssertionError("Number of rows in the input does not match number of columns in tensor");
        }
        if (curr.size(curr.shape().length - 2) != curr.size(curr.shape().length - 1)) {
            throw new AssertionError("Can only perform this operation on a SimpleTensor with square slices");
        }
        INDArray ret = Nd4j.create(curr.slices(), 1);
        INDArray inT = in.transpose();
        for (int i = 0; i < curr.slices(); i++) {
            INDArray slice = curr.slice(i);
            INDArray inTTimesSlice = inT.mmul(slice);
            ret.putScalar(i, Nd4j.getBlasWrapper().dot(inTTimesSlice, in));
        }
        return ret;
    }
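Each loop iteration above computes dot(inT.mmul(slice), in), i.e. the bilinear form xᵀ A x for one slice. The same value in plain Java (hypothetical BilinearSketch class, no ND4J):

```java
// Sketch of one bilinear product: x^T * A * x for a square matrix A.
public class BilinearSketch {
    static double bilinear(double[][] a, double[] x) {
        double sum = 0.0;
        for (int i = 0; i < x.length; i++)
            for (int j = 0; j < x.length; j++)
                sum += x[i] * a[i][j] * x[j];
        return sum;
    }

    public static void main(String[] args) {
        // For A = diag(1, 2) and x = (3, 4): 1*3^2 + 2*4^2 = 41
        double[][] a = {{1, 0}, {0, 2}};
        System.out.println(bilinear(a, new double[]{3, 4})); // 41.0
    }
}
```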

Code example source: deeplearning4j/nd4j

    Nd4j.getBlasWrapper().axpy(Nd4j.UNIT, cOther, cResult);
    } else if (result == other) {
        Nd4j.getBlasWrapper().axpy(Nd4j.UNIT, this, cResult);
    } else {
        INDArray resultLinear = result.linearView();

Code example source: deeplearning4j/nd4j

    Nd4j.getBlasWrapper().level2().gemv(BlasBufferUtil.getCharForTranspose(temp),
            BlasBufferUtil.getCharForTranspose(this), Nd4j.UNIT, this, otherArray, Nd4j.ZERO, temp);
    } else {
        Nd4j.getBlasWrapper().level3().gemm(BlasBufferUtil.getCharForTranspose(temp),
                BlasBufferUtil.getCharForTranspose(this), BlasBufferUtil.getCharForTranspose(other),
                Nd4j.UNIT, this, otherArray, Nd4j.ZERO, temp);
        Nd4j.getBlasWrapper().copy(temp, resultArray);
        Nd4j.getBlasWrapper().level2().gemv(BlasBufferUtil.getCharForTranspose(resultArray),
                BlasBufferUtil.getCharForTranspose(this), Nd4j.UNIT, this, otherArray, Nd4j.ZERO,
                resultArray);
        Nd4j.getBlasWrapper().level3().gemm(BlasBufferUtil.getCharForTranspose(resultArray),
                BlasBufferUtil.getCharForTranspose(this), BlasBufferUtil.getCharForTranspose(other),
                Nd4j.UNIT, this, otherArray, Nd4j.ZERO, resultArray);

Code example source: deeplearning4j/nd4j

    /**
     * Compute generalized eigenvalues of the problem A x = L B x.
     * The data will be unchanged, no eigenvectors returned unless calculateVectors is true.
     * If calculateVectors == true, A will contain a matrix with the eigenvectors as columns.
     *
     * @param A symmetric Matrix A.
     * @param B symmetric Matrix B.
     * @return a vector of eigenvalues L.
     */
    public static INDArray symmetricGeneralizedEigenvalues(INDArray A, INDArray B, boolean calculateVectors) {
        assert A.rows() == A.columns();
        assert B.rows() == B.columns();
        INDArray W = Nd4j.create(A.rows());
        if (calculateVectors)
            A.assign(InvertMatrix.invert(B, false).mmuli(A));
        else
            A = InvertMatrix.invert(B, false).mmuli(A);
        Nd4j.getBlasWrapper().syev('V', 'L', A, W);
        return W;
    }

Code example source: deeplearning4j/nd4j

    Nd4j.getBlasWrapper().lapack().gesvd(A, s, null, VT);

Code example source: deeplearning4j/nd4j

    Nd4j.getBlasWrapper().level2().gemv(ordering(), BlasBufferUtil.getCharForTranspose(other), 1.0, this, other,
            0.0, gemmResultArr);
    } else {
        Nd4j.getBlasWrapper().level3().gemm(ordering(), BlasBufferUtil.getCharForTranspose(other),
                BlasBufferUtil.getCharForTranspose(gemmResultArr), 1.0, this, other, 0.0, gemmResultArr);

Code example source: deeplearning4j/nd4j

    @Override
    public INDArray sample(int[] shape) {
        int numRows = 1;
        for (int i = 0; i < shape.length - 1; i++)
            numRows *= shape[i];
        int numCols = shape[shape.length - 1];
        val flatShape = new int[]{numRows, numCols};
        val flatRng = Nd4j.getExecutioner().exec(new GaussianDistribution(Nd4j.createUninitialized(flatShape, Nd4j.order()), 0.0, 1.0), random);
        long m = flatRng.rows();
        long n = flatRng.columns();
        val s = Nd4j.create(m < n ? m : n);
        val u = m < n ? Nd4j.create(m, n) : Nd4j.create(m, m);
        val v = Nd4j.create(n, n, 'f');
        Nd4j.getBlasWrapper().lapack().gesvd(flatRng, s, u, v);
        // FIXME: int cast
        if (gains == null) {
            if (u.rows() == numRows && u.columns() == numCols) {
                return v.get(NDArrayIndex.interval(0, numRows), NDArrayIndex.interval(0, numCols)).mul(gain).reshape(ArrayUtil.toLongArray(shape));
            } else {
                return u.get(NDArrayIndex.interval(0, numRows), NDArrayIndex.interval(0, numCols)).mul(gain).reshape(ArrayUtil.toLongArray(shape));
            }
        } else {
            throw new UnsupportedOperationException();
        }
    }

Code example source: deeplearning4j/nd4j

    INDArray VL = Nd4j.create(A.rows(), A.rows());
    Nd4j.getBlasWrapper().geev('v', 'v', A.dup(), WR, WI, VL, VR);
