This article collects code examples for the Java class org.apache.beam.sdk.metrics.Gauge and shows how the class is used in practice. The examples are extracted from selected projects on platforms such as GitHub, Stack Overflow, and Maven, so they are useful as references.

Details of the Gauge class:

Package: org.apache.beam.sdk.metrics
Class name: Gauge

A metric that reports the latest value out of reported values. Since metrics are collected from many workers, the value may not be the absolute last, but one of the latest values.
Code example source: org.apache.beam/beam-sdks-java-io-kafka

private void reportBacklog() {
  long splitBacklogBytes = getSplitBacklogBytes();
  if (splitBacklogBytes < 0) {
    splitBacklogBytes = UnboundedReader.BACKLOG_UNKNOWN;
  }
  backlogBytesOfSplit.set(splitBacklogBytes);
  long splitBacklogMessages = getSplitBacklogMessageCount();
  if (splitBacklogMessages < 0) {
    splitBacklogMessages = UnboundedReader.BACKLOG_UNKNOWN;
  }
  backlogElementsOfSplit.set(splitBacklogMessages);
}
Code example source: org.apache.beam/beam-sdks-java-io-kafka

MetricName bytesRead = SourceMetrics.bytesRead().getName();
MetricName bytesReadBySplit = SourceMetrics.bytesReadBySplit(splitId).getName();
MetricName backlogElementsOfSplit = SourceMetrics.backlogElementsOfSplit(splitId).getName();
MetricName backlogBytesOfSplit = SourceMetrics.backlogBytesOfSplit(splitId).getName();
Code example source: spotify/dbeam

public void exposeMetrics(long elapsedMs) {
  logger.info(String.format("jdbcavroio : Read %d rows, took %5.2f seconds",
      rowCount, elapsedMs / 1000.0));
  this.writeElapsedMs.inc(elapsedMs);
  if (rowCount > 0) {
    this.recordCount.inc((this.rowCount % countReportEvery));
    this.msPerMillionRows.set(1000000L * elapsedMs / rowCount);
    if (elapsedMs != 0) {
      this.rowsPerMinute.set((60 * 1000L) * rowCount / elapsedMs);
    }
  }
}
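The two derived rates above follow directly from the row count and elapsed time using long integer arithmetic. A quick check of the formulas with illustrative values (the numbers below are not from dbeam):

```java
public class RateMath {
    public static void main(String[] args) {
        long rowCount = 1_000_000L; // illustrative values only
        long elapsedMs = 120_000L;  // 2 minutes

        // Milliseconds needed per million rows at the observed rate.
        long msPerMillionRows = 1_000_000L * elapsedMs / rowCount;
        // Rows processed per minute.
        long rowsPerMinute = (60 * 1000L) * rowCount / elapsedMs;

        System.out.println(msPerMillionRows); // prints 120000
        System.out.println(rowsPerMinute);    // prints 500000
    }
}
```

Note the multiplication happens before the division, so the intermediate product must fit in a long; for realistic row counts and durations this is not a concern.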
Code example source: spotify/dbeam

/**
 * Increment and report counters to the Beam SDK and logs.
 * To avoid slowing down the writes, counts are reported every x 1000s of rows.
 * This exposes the job progress.
 */
public void incrementRecordCount() {
  this.rowCount++;
  if ((this.rowCount % countReportEvery) == 0) {
    this.recordCount.inc(countReportEvery);
    long elapsedMs = System.currentTimeMillis() - this.writeIterateStartTime;
    long msPerMillionRows = 1000000L * elapsedMs / rowCount;
    long rowsPerMinute = (60 * 1000L) * rowCount / elapsedMs;
    this.msPerMillionRows.set(msPerMillionRows);
    this.rowsPerMinute.set(rowsPerMinute);
    if ((this.rowCount % logEvery) == 0) {
      logger.info(String.format(
          "jdbcavroio : Fetched # %08d rows at %08d rows per minute and %08d ms per M rows",
          rowCount, rowsPerMinute, msPerMillionRows));
    }
  }
}
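The throttling pattern above, where the counter is only flushed once every countReportEvery rows, can be sketched in isolation (ThrottledReporter and its fields are hypothetical names; the real dbeam class also updates gauges and logs):

```java
// Sketch of throttled counter reporting: one batched inc() per N increments
// instead of one per row, to keep metric updates off the hot path.
class ThrottledReporter {
    private final long reportEvery;
    private long rowCount = 0;
    private long reported = 0; // total forwarded to the (hypothetical) metric backend

    ThrottledReporter(long reportEvery) {
        this.reportEvery = reportEvery;
    }

    void incrementRecordCount() {
        rowCount++;
        if (rowCount % reportEvery == 0) {
            reported += reportEvery; // flush a whole batch at once
        }
    }

    long reported() {
        return reported;
    }
}
```

With reportEvery = 1000, after 2500 increments only 2000 have been reported; the remaining 500 are flushed later by a final call such as exposeMetrics above, which is why that method adds rowCount % countReportEvery.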
Code example source: com.google.cloud.bigtable/bigtable-hbase-beam

@Override
public void run() {
  try {
    BufferedMutatorDoFn.cumulativeThrottlingSeconds.set(TimeUnit.NANOSECONDS.toSeconds(
        ResourceLimiterStats.getInstance(instanceName).getCumulativeThrottlingTimeNanos()));
  } catch (Exception e) {
    STATS_LOG.warn("Something bad happened in export stats", e);
  }
}
};
Code example source: org.apache.beam/beam-sdks-java-core

@Override
public void set(long value) {
  MetricsContainer container = MetricsEnvironment.getCurrentContainer();
  if (container != null) {
    container.getGauge(name).set(value);
  }
}
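The implementation above delegates to whatever metrics container is bound to the current context, and silently drops the update when none is. The same null-guard delegation pattern can be sketched standalone (Container and DelegatingGauge are hypothetical names standing in for MetricsContainer and the delegating Gauge):

```java
// Sketch of the null-guard delegation pattern (hypothetical types).
interface Container {
    void setGauge(String name, long value);
}

class DelegatingGauge {
    // Stands in for MetricsEnvironment.getCurrentContainer(): may be null
    // when no metrics container is attached to the current context.
    static Container currentContainer;

    final String name;

    DelegatingGauge(String name) {
        this.name = name;
    }

    void set(long value) {
        Container container = currentContainer;
        if (container != null) {
            container.setGauge(name, value); // forward to the active container
        }
        // else: drop the update silently, as the Beam implementation does
    }
}
```

This is why user code can call gauge.set() unconditionally: outside a pipeline step the call is a harmless no-op rather than a NullPointerException.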
Code example source: org.apache.beam/beam-runners-flink_2.11

@Test
public void testGauge() {
  FlinkMetricContainer.FlinkGauge flinkGauge =
      new FlinkMetricContainer.FlinkGauge(GaugeResult.empty());
  when(metricGroup.gauge(eq("namespace.name"), anyObject())).thenReturn(flinkGauge);
  FlinkMetricContainer container = new FlinkMetricContainer(runtimeContext);
  MetricsContainer step = container.getMetricsContainer("step");
  MetricName metricName = MetricName.named("namespace", "name");
  Gauge gauge = step.getGauge(metricName);
  assertThat(flinkGauge.getValue(), is(GaugeResult.empty()));
  // first set will install the mocked gauge
  container.updateMetrics("step");
  gauge.set(1);
  gauge.set(42);
  container.updateMetrics("step");
  assertThat(flinkGauge.getValue().getValue(), is(42L));
}
Code example source: org.apache.beam/beam-sdks-java-core

@SuppressWarnings("unused")
@ProcessElement
public void processElement(ProcessContext c) {
  Distribution values = Metrics.distribution(MetricsTest.class, "input");
  Gauge gauge = Metrics.gauge(MetricsTest.class, "my-gauge");
  Integer element = c.element();
  count.inc();
  values.update(element);
  gauge.set(12L);
  c.output(element);
  c.output(output2, element);
}
})
Code example source: org.apache.beam/beam-runners-spark

final long readDurationMillis = metadata.getReadDurationMillis();
if (readDurationMillis > maxReadDuration) {
  gauge.set(readDurationMillis);
}
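The Spark fragment above only publishes the gauge when a new maximum read duration is observed. A self-contained sketch of that max-tracking pattern (MaxDurationTracker is a hypothetical name, and updating maxReadDuration inside the branch is an assumption, since the original snippet is truncated):

```java
// Sketch of a max-tracking gauge: only publish when a new maximum is seen.
class MaxDurationTracker {
    private long maxReadDuration = Long.MIN_VALUE;
    private long gaugeValue = -1L; // stands in for gauge.set(...)

    void report(long readDurationMillis) {
        if (readDurationMillis > maxReadDuration) {
            maxReadDuration = readDurationMillis; // assumed: remember the new maximum
            gaugeValue = readDurationMillis;      // publish only on a new maximum
        }
    }

    long gaugeValue() {
        return gaugeValue;
    }
}
```

Because a Gauge reports only the latest value, guarding set() this way makes the reported value a running maximum rather than the most recent observation.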