Usage of the org.nd4j.linalg.factory.Nd4j.hstack() method, with code examples


This article collects a number of Java code examples for the org.nd4j.linalg.factory.Nd4j.hstack() method and shows how Nd4j.hstack() is used in practice. The examples were extracted from selected open-source projects hosted on GitHub, Stack Overflow, Maven and similar platforms, so they should serve as a useful reference. Details of the Nd4j.hstack() method:
Package path: org.nd4j.linalg.factory.Nd4j
Class name: Nd4j
Method name: hstack

Introduction to Nd4j.hstack

Concatenates two matrices horizontally. Matrices must have identical numbers of rows.
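
As a quick, self-contained illustration of that contract, here is a minimal sketch; the class name HstackDemo and the array shapes are ours and do not come from any of the projects below:

    import org.nd4j.linalg.api.ndarray.INDArray;
    import org.nd4j.linalg.factory.Nd4j;

    public class HstackDemo {
        public static void main(String[] args) {
            // Two matrices with the same number of rows (2) but different column counts
            INDArray ones = Nd4j.ones(2, 3);
            INDArray zeros = Nd4j.zeros(2, 4);

            // hstack concatenates along the column axis, so the result has shape [2, 7]
            INDArray stacked = Nd4j.hstack(ones, zeros);
            System.out.println(java.util.Arrays.toString(stacked.shape()));
            System.out.println(stacked);
        }
    }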

Code examples

Code example source: origin: deeplearning4j/nd4j

    /**
     * Adds a feature for each example on to the current feature vector
     *
     * @param toAdd the feature vector to add
     */
    @Override
    public void addFeatureVector(INDArray toAdd) {
        setFeatures(Nd4j.hstack(getFeatureMatrix(), toAdd));
    }

Code example source: origin: deeplearning4j/dl4j-examples

    INDArray hstack = Nd4j.hstack(ones, zeros);
    System.out.println("### HSTACK ####");
    System.out.println(hstack);

Code example source: origin: deeplearning4j/dl4j-examples

    INDArray hStack = Nd4j.hstack(rowVector1, rowVector2); //Horizontal stack: [1,3]+[1,3] to [1,6]
    System.out.println("\n\n\nCreating INDArrays from other INDArrays, using hstack and vstack:");
    System.out.println("vStack:\n" + vStack);
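
The excerpt above prints a vStack array that is created elsewhere in the same example. A rough reconstruction of the surrounding code might look like the following; the element values are ours, and the shape comments assume the older (pre-1.0) ND4J behaviour where Nd4j.create(double[]) yields a [1, n] row vector:

    INDArray rowVector1 = Nd4j.create(new double[]{1, 2, 3});   // row vector with 3 elements
    INDArray rowVector2 = Nd4j.create(new double[]{4, 5, 6});

    INDArray hStack = Nd4j.hstack(rowVector1, rowVector2);      // columns appended: [1,3]+[1,3] -> [1,6]
    INDArray vStack = Nd4j.vstack(rowVector1, rowVector2);      // rows stacked:     [1,3]+[1,3] -> [2,3]

    System.out.println("hStack:\n" + hStack);
    System.out.println("vStack:\n" + vStack);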

Code example source: origin: org.deeplearning4j/deeplearning4j-nn

    private INDArray constructParams() {
        //some params will be null for subsampling etc
        INDArray keepView = null;
        for (INDArray aParam : editedParams) {
            if (aParam != null) {
                if (keepView == null) {
                    keepView = aParam;
                } else {
                    keepView = Nd4j.hstack(keepView, aParam);
                }
            }
        }
        if (!appendParams.isEmpty()) {
            INDArray appendView = Nd4j.hstack(appendParams);
            return Nd4j.hstack(keepView, appendView);
        } else {
            return keepView;
        }
    }

Code example source: origin: org.nd4j/nd4j-parameter-server-node

    @Override
    public INDArray getAccumulatedResult() {
        if (aggregationWidth == 1) {
            return chunks.get((short) 0);
        } else
            return Nd4j.hstack(chunks.values());
    }
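
Note that this example passes chunks.values(), a Collection<INDArray>, rather than individual arrays; hstack accepts a collection as well as varargs. A small sketch of the same idea, with list contents of our own choosing and the usual java.util and ND4J imports assumed:

    List<INDArray> chunks = new ArrayList<>();
    chunks.add(Nd4j.ones(4, 2));    // 4 rows, 2 columns
    chunks.add(Nd4j.zeros(4, 3));   // 4 rows, 3 columns

    // The collection overload concatenates the chunks column-wise: result shape [4, 5]
    INDArray merged = Nd4j.hstack(chunks);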

Code example source: origin: org.nd4j/nd4j-parameter-server-node_2.11

    @Override
    public INDArray getAccumulatedResult() {
        if (aggregationWidth == 1) {
            return chunks.get((short) 0);
        } else
            return Nd4j.hstack(chunks.values());
    }

Code example source: origin: org.nd4j/nd4j-api

    /**
     * Adds a feature for each example on to the current feature vector
     *
     * @param toAdd the feature vector to add
     */
    @Override
    public void addFeatureVector(INDArray toAdd) {
        setFeatures(Nd4j.hstack(getFeatureMatrix(), toAdd));
    }

Code example source: origin: mccorby/FederatedAndroidTrainer

    @Override
    public FederatedDataSet getTestData() {
        Random rand = new Random(seed);
        int numSamples = N_SAMPLES / 10;
        double[] sum = new double[numSamples];
        double[] input1 = new double[numSamples];
        double[] input2 = new double[numSamples];
        for (int i = 0; i < numSamples; i++) {
            input1[i] = MIN_RANGE + (MAX_RANGE - MIN_RANGE) * rand.nextDouble();
            input2[i] = MIN_RANGE + (MAX_RANGE - MIN_RANGE) * rand.nextDouble();
            sum[i] = input1[i] + input2[i];
        }
        INDArray inputNDArray1 = Nd4j.create(input1, new int[]{numSamples, 1});
        INDArray inputNDArray2 = Nd4j.create(input2, new int[]{numSamples, 1});
        // hstack joins the two [numSamples, 1] column vectors into one [numSamples, 2] feature matrix
        INDArray inputNDArray = Nd4j.hstack(inputNDArray1, inputNDArray2);
        INDArray outPut = Nd4j.create(sum, new int[]{numSamples, 1});
        return new FederatedDataSetImpl(new DataSet(inputNDArray, outPut));
    }

Code example source: origin: mccorby/FederatedAndroidTrainer

    @Override
    public FederatedDataSet getTrainingData() {
        Random rand = new Random(seed);
        double[] sum = new double[N_SAMPLES];
        double[] input1 = new double[N_SAMPLES];
        double[] input2 = new double[N_SAMPLES];
        for (int i = 0; i < N_SAMPLES; i++) {
            input1[i] = MIN_RANGE + (MAX_RANGE - MIN_RANGE) * rand.nextDouble();
            input2[i] = MIN_RANGE + (MAX_RANGE - MIN_RANGE) * rand.nextDouble();
            sum[i] = input1[i] + input2[i];
        }
        INDArray inputNDArray1 = Nd4j.create(input1, new int[]{N_SAMPLES, 1});
        INDArray inputNDArray2 = Nd4j.create(input2, new int[]{N_SAMPLES, 1});
        INDArray inputNDArray = Nd4j.hstack(inputNDArray1, inputNDArray2);
        INDArray outPut = Nd4j.create(sum, new int[]{N_SAMPLES, 1});
        DataSet dataSet = new DataSet(inputNDArray, outPut);
        dataSet.shuffle();
        return new FederatedDataSetImpl(dataSet);
    }

Code example source: origin: neo4j-graph-analytics/ml-models

    final INDArray nodeFeatures = Nd4j.hstack(arrays);
    embedding.putRow(nodeId, nodeFeatures);

Code example source: origin: org.deeplearning4j/deeplearning4j-nn

        out = Nd4j.hstack(inputs);
        break;
    case 3:
        out = Nd4j.hstack(inputs);
        out = Nd4j.hstack(inputs);

Code example source: origin: sjsdfg/dl4j-tutorials

    private static DataSetIterator getTrainingData(int batchSize, Random rand) {
        double[] sum = new double[nSamples];
        double[] input1 = new double[nSamples];
        double[] input2 = new double[nSamples];
        for (int i = 0; i < nSamples; i++) {
            input1[i] = MIN_RANGE + (MAX_RANGE - MIN_RANGE) * rand.nextDouble();
            input2[i] = MIN_RANGE + (MAX_RANGE - MIN_RANGE) * rand.nextDouble();
            sum[i] = input1[i] + input2[i];
        }
        INDArray inputNDArray1 = Nd4j.create(input1, new int[]{nSamples, 1});
        INDArray inputNDArray2 = Nd4j.create(input2, new int[]{nSamples, 1});
        INDArray inputNDArray = Nd4j.hstack(inputNDArray1, inputNDArray2);
        INDArray outPut = Nd4j.create(sum, new int[]{nSamples, 1});
        DataSet dataSet = new DataSet(inputNDArray, outPut);
        List<DataSet> listDs = dataSet.asList();
        return new ListDataSetIterator(listDs, batchSize);
    }

Code example source: origin: neo4j-graph-analytics/ml-models

    public Embedding prune(Embedding prevEmbedding, Embedding embedding) {
        INDArray embeddingToPrune = Nd4j.hstack(prevEmbedding.getNDEmbedding(), embedding.getNDEmbedding());
        Feature[] featuresToPrune = ArrayUtils.addAll(prevEmbedding.getFeatures(), embedding.getFeatures());
        progressLogger.log("Feature Pruning: Creating features graph");
        final Graph graph = loadFeaturesGraph(embeddingToPrune, prevEmbedding.features.length);
        progressLogger.log("Feature Pruning: Created features graph");
        progressLogger.log("Feature Pruning: Finding features to keep");
        int[] featureIdsToKeep = findConnectedComponents(graph)
                .collect(Collectors.groupingBy(item -> item.setId))
                .values()
                .stream()
                .mapToInt(results -> results.stream().mapToInt(value -> (int) value.nodeId).min().getAsInt())
                .toArray();
        progressLogger.log("Feature Pruning: Found features to keep");
        progressLogger.log("Feature Pruning: Pruning embeddings");
        INDArray prunedNDEmbedding = pruneEmbedding(embeddingToPrune, featureIdsToKeep);
        progressLogger.log("Feature Pruning: Pruned embeddings");
        Feature[] prunedFeatures = new Feature[featureIdsToKeep.length];
        for (int index = 0; index < featureIdsToKeep.length; index++) {
            prunedFeatures[index] = featuresToPrune[featureIdsToKeep[index]];
        }
        return new Embedding(prunedFeatures, prunedNDEmbedding);
    }

Code example source: origin: neo4j-graph-analytics/ml-models

    @Override
    public INDArray ndOp(INDArray features, INDArray adjacencyMatrix) {
        INDArray[] maxes = new INDArray[features.columns()];
        for (int fCol = 0; fCol < features.columns(); fCol++) {
            INDArray mul = adjacencyMatrix.transpose().mulColumnVector(features.getColumn(fCol));
            maxes[fCol] = mul.max(0).transpose();
        }
        return Nd4j.hstack(maxes);
    }
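
The pattern used here, filling an INDArray[] with per-column results and merging them with a single hstack call, is a common way to assemble a feature matrix column by column. A stripped-down sketch of just that assembly step, with shapes and values of our own choosing:

    INDArray[] columns = new INDArray[3];
    for (int c = 0; c < columns.length; c++) {
        // each entry is a column vector with the same number of rows
        columns[c] = Nd4j.ones(5, 1).muli(c);
    }

    // the varargs overload also accepts a plain array: result shape [5, 3]
    INDArray featureMatrix = Nd4j.hstack(columns);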

Code example source: origin: org.deeplearning4j/deeplearning4j-datavec-iterators

        f = Nd4j.hstack(f1, f2);
    } else {

Code example source: origin: org.deeplearning4j/deeplearning4j-core

        f = Nd4j.hstack(f1, f2);
    } else {
