This article collects code examples of the Java method org.apache.spark.Accumulator.setValue() and shows how Accumulator.setValue() is used in practice. The examples are extracted from selected projects on GitHub, Stack Overflow, Maven, and similar platforms, so they should serve as a useful reference. Details of the Accumulator.setValue() method are as follows:
Package path: org.apache.spark.Accumulator
Class name: Accumulator
Method name: setValue
Method description: none provided.
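Before the collected examples, here is a minimal, self-contained sketch of how setValue() can be used with the classic (pre-AccumulatorV2) accumulator API. The class name SetValueExample, the application name, and the local master URL are illustrative assumptions and are not taken from any of the projects cited below.

import org.apache.spark.Accumulator;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class SetValueExample {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("setValue-example").setMaster("local[*]");
        JavaSparkContext jsc = new JavaSparkContext(conf);

        // Create a named Double accumulator starting at 0.0. This classic API is deprecated
        // since Spark 2.0 in favor of AccumulatorV2, but Accumulator.setValue() is still available.
        Accumulator<Double> acc = jsc.accumulator(0.0, "example-accumulator");

        // setValue overwrites the accumulator's current value on the driver.
        acc.setValue(5.0);
        System.out.println(acc.value()); // prints 5.0

        jsc.stop();
    }
}

Tasks running on executors can only add to an accumulator; setValue() is a driver-side operation, which is how the Beam examples below use it to restore accumulator state recovered from a checkpoint.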
Code example source: apache/tinkerpop
@Override
public void set(final String key, final Object value) {
    checkKeyValue(key, value);
    if (this.inExecute)
        throw Memory.Exceptions.memorySetOnlyDuringVertexProgramSetUpAndTerminate(key);
    else
        this.sparkMemory.get(key).setValue(new ObjectWritable<>(value));
}
Code example source: org.apache.spark/spark-core_2.11
floatAccum.setValue(5.0f);
assertEquals((Float) 5.0f, floatAccum.value());
Code example source: org.apache.spark/spark-core
floatAccum.setValue(5.0f);
assertEquals((Float) 5.0f, floatAccum.value());
Code example source: org.apache.spark/spark-core_2.10
floatAccum.setValue(5.0f);
assertEquals((Float) 5.0f, floatAccum.value());
Code example source: org.apache.tinkerpop/spark-gremlin
@Override
public void set(final String key, final Object value) {
    checkKeyValue(key, value);
    if (this.inExecute)
        throw Memory.Exceptions.memorySetOnlyDuringVertexProgramSetUpAndTerminate(key);
    else
        this.sparkMemory.get(key).setValue(new ObjectWritable<>(value));
}
Code example source: ai.grakn/grakn-kb
@Override
public void set(final String key, final Object value) {
    checkKeyValue(key, value);
    if (this.inExecute) {
        throw Memory.Exceptions.memorySetOnlyDuringVertexProgramSetUpAndTerminate(key);
    } else {
        this.sparkMemory.get(key).setValue(new ObjectWritable<>(value));
    }
}
Code example source: org.apache.beam/beam-runners-spark
/** Init metrics accumulator if it has not been initiated. This method is idempotent. */
public static void init(SparkPipelineOptions opts, JavaSparkContext jsc) {
  if (instance == null) {
    synchronized (MetricsAccumulator.class) {
      if (instance == null) {
        Optional<CheckpointDir> maybeCheckpointDir =
            opts.isStreaming()
                ? Optional.of(new CheckpointDir(opts.getCheckpointDir()))
                : Optional.absent();
        Accumulator<MetricsContainerStepMap> accumulator =
            jsc.sc()
                .accumulator(
                    new SparkMetricsContainerStepMap(),
                    ACCUMULATOR_NAME,
                    new MetricsAccumulatorParam());
        if (maybeCheckpointDir.isPresent()) {
          Optional<MetricsContainerStepMap> maybeRecoveredValue =
              recoverValueFromCheckpoint(jsc, maybeCheckpointDir.get());
          if (maybeRecoveredValue.isPresent()) {
            accumulator.setValue(maybeRecoveredValue.get());
          }
        }
        instance = accumulator;
      }
    }
    LOG.info("Instantiated metrics accumulator: " + instance.value());
  }
}
Code example source: org.apache.beam/beam-runners-spark
/** Init aggregators accumulator if it has not been initiated. This method is idempotent. */
public static void init(SparkPipelineOptions opts, JavaSparkContext jsc) {
  if (instance == null) {
    synchronized (AggregatorsAccumulator.class) {
      if (instance == null) {
        Optional<CheckpointDir> maybeCheckpointDir =
            opts.isStreaming()
                ? Optional.of(new CheckpointDir(opts.getCheckpointDir()))
                : Optional.absent();
        Accumulator<NamedAggregators> accumulator =
            jsc.sc().accumulator(new NamedAggregators(), ACCUMULATOR_NAME, new AggAccumParam());
        if (maybeCheckpointDir.isPresent()) {
          Optional<NamedAggregators> maybeRecoveredValue =
              recoverValueFromCheckpoint(jsc, maybeCheckpointDir.get());
          if (maybeRecoveredValue.isPresent()) {
            accumulator.setValue(maybeRecoveredValue.get());
          }
        }
        instance = accumulator;
      }
    }
    LOG.info("Instantiated aggregators accumulator: " + instance.value());
  }
}