Usage and code examples of the org.apache.htrace.core.Tracer.newScope() method


This article collects Java code examples of the org.apache.htrace.core.Tracer.newScope() method, showing how it is used in practice. The examples are drawn from selected open-source projects on platforms such as GitHub, Stack Overflow, and Maven, and should serve as useful references. Details of Tracer.newScope() are as follows:
Package path: org.apache.htrace.core.Tracer
Class name: Tracer
Method name: newScope

About Tracer.newScope

Create a new trace scope. If there are no scopes above the current scope, the configured samplers are applied; otherwise, a trace Span is created only if this thread is already tracing.
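Before the project excerpts, here is a minimal self-contained sketch of the basic pattern, assuming HTrace 4.x on the classpath; the tracer name, scope description, and annotation key/value are illustrative, not taken from the projects quoted below:

```java
import org.apache.htrace.core.HTraceConfiguration;
import org.apache.htrace.core.TraceScope;
import org.apache.htrace.core.Tracer;

public class NewScopeExample {
  public static void main(String[] args) {
    // Build a Tracer whose sampler always samples, so newScope()
    // starts a real span even with no parent scope on this thread.
    // AlwaysSampler is one of HTrace's built-in samplers.
    Tracer tracer = new Tracer.Builder("NewScopeExample")
        .conf(HTraceConfiguration.fromKeyValuePairs(
            "sampler.classes", "AlwaysSampler"))
        .build();

    // try-with-resources guarantees the scope is closed, which stops
    // the span's clock and detaches it from the current thread.
    try (TraceScope scope = tracer.newScope("doWork")) {
      scope.addKVAnnotation("input", "example");
      // ... traced work goes here ...
    }
    tracer.close();
  }
}
```

As the excerpts below show, production code usually guards against a null tracer, and passes a parent SpanId to newScope(description, parentId) when continuing a trace started elsewhere.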

Code examples

Code example from: apache/hbase

/**
 * Wrapper method to create new TraceScope with the given description
 * @return TraceScope or null when not tracing
 */
public static TraceScope createTrace(String description) {
 return (tracer == null) ? null : tracer.newScope(description);
}

Code example from: brianfrankcooper/YCSB

private static void initWorkload(Properties props, Thread warningthread, Workload workload, Tracer tracer) {
 try {
  try (final TraceScope span = tracer.newScope(CLIENT_WORKLOAD_INIT_SPAN)) {
   workload.init(props);
   warningthread.interrupt();
  }
 } catch (WorkloadException e) {
  e.printStackTrace();
  e.printStackTrace(System.out);
  System.exit(0);
 }
}

Code example from: apache/hbase

/**
 * Wrapper method to create new child TraceScope with the given description
 * and parent scope's spanId
 * @param span parent span
 * @return TraceScope or null when not tracing
 */
public static TraceScope createTrace(String description, Span span) {
 if(span == null) return createTrace(description);
 return (tracer == null) ? null : tracer.newScope(description, span.getSpanId());
}

Code example from: brianfrankcooper/YCSB

/**
 * Initialize any state for this DB.
 * Called once per DB instance; there is one DB instance per client thread.
 */
public void init() throws DBException {
 try (final TraceScope span = tracer.newScope(scopeStringInit)) {
  db.init();
  this.reportLatencyForEachError = Boolean.parseBoolean(getProperties().
    getProperty(REPORT_LATENCY_FOR_EACH_ERROR_PROPERTY,
      REPORT_LATENCY_FOR_EACH_ERROR_PROPERTY_DEFAULT));
  if (!reportLatencyForEachError) {
   String latencyTrackedErrorsProperty = getProperties().getProperty(LATENCY_TRACKED_ERRORS_PROPERTY, null);
   if (latencyTrackedErrorsProperty != null) {
    this.latencyTrackedErrors = new HashSet<String>(Arrays.asList(
      latencyTrackedErrorsProperty.split(",")));
   }
  }
  System.err.println("DBWrapper: report latency for each error is " +
    this.reportLatencyForEachError + " and specific error codes to track" +
    " for latency are: " + this.latencyTrackedErrors.toString());
 }
}

Code example from: brianfrankcooper/YCSB

/**
 * Cleanup any state for this DB.
 * Called once per DB instance; there is one DB instance per client thread.
 */
public void cleanup() throws DBException {
 try (final TraceScope span = tracer.newScope(scopeStringCleanup)) {
  long ist = measurements.getIntendedtartTimeNs();
  long st = System.nanoTime();
  db.cleanup();
  long en = System.nanoTime();
  measure("CLEANUP", Status.OK, ist, st, en);
 }
}

Code example from: org.apache.hadoop/hadoop-common

public FileStatus[] glob() throws IOException {
 TraceScope scope = tracer.newScope("Globber#glob");
 scope.addKVAnnotation("pattern", pathPattern.toUri().getPath());
 try {
  return doGlob();
 } finally {
  scope.close();
 }
}

Code example from: brianfrankcooper/YCSB

try (final TraceScope span = tracer.newScope(CLIENT_INIT_SPAN)) {
 int opcount;
 if (dotransactions) {

Code example from: org.apache.hadoop/hadoop-common

/**
 * Create and initialize a new instance of a FileSystem.
 * @param uri URI containing the FS schema and FS details
 * @param conf configuration to use to look for the FS instance declaration
 * and to pass to the {@link FileSystem#initialize(URI, Configuration)}.
 * @return the initialized filesystem.
 * @throws IOException problems loading or initializing the FileSystem
 */
private static FileSystem createFileSystem(URI uri, Configuration conf)
  throws IOException {
 Tracer tracer = FsTracer.get(conf);
 try(TraceScope scope = tracer.newScope("FileSystem#createFileSystem")) {
  scope.addKVAnnotation("scheme", uri.getScheme());
  Class<?> clazz = getFileSystemClass(uri.getScheme(), conf);
  FileSystem fs = (FileSystem)ReflectionUtils.newInstance(clazz, conf);
  fs.initialize(uri, conf);
  return fs;
 }
}

Code example from: brianfrankcooper/YCSB

/**
  * Delete a record from the database.
  *
  * @param table The name of the table
  * @param key The record key of the record to delete.
  * @return The result of the operation.
  */
 public Status delete(String table, String key) {
  try (final TraceScope span = tracer.newScope(scopeStringDelete)) {
   long ist = measurements.getIntendedtartTimeNs();
   long st = System.nanoTime();
   Status res = db.delete(table, key);
   long en = System.nanoTime();
   measure("DELETE", res, ist, st, en);
   measurements.reportStatus("DELETE", res);
   return res;
  }
 }
}

Code example from: brianfrankcooper/YCSB

/**
 * Read a record from the database. Each field/value pair from the result
 * will be stored in a HashMap.
 *
 * @param table The name of the table
 * @param key The record key of the record to read.
 * @param fields The list of fields to read, or null for all of them
 * @param result A HashMap of field/value pairs for the result
 * @return The result of the operation.
 */
public Status read(String table, String key, Set<String> fields,
          Map<String, ByteIterator> result) {
 try (final TraceScope span = tracer.newScope(scopeStringRead)) {
  long ist = measurements.getIntendedtartTimeNs();
  long st = System.nanoTime();
  Status res = db.read(table, key, fields, result);
  long en = System.nanoTime();
  measure("READ", res, ist, st, en);
  measurements.reportStatus("READ", res);
  return res;
 }
}

Code example from: brianfrankcooper/YCSB

/**
 * Insert a record in the database. Any field/value pairs in the specified
 * values HashMap will be written into the record with the specified
 * record key.
 *
 * @param table The name of the table
 * @param key The record key of the record to insert.
 * @param values A HashMap of field/value pairs to insert in the record
 * @return The result of the operation.
 */
public Status insert(String table, String key,
           Map<String, ByteIterator> values) {
 try (final TraceScope span = tracer.newScope(scopeStringInsert)) {
  long ist = measurements.getIntendedtartTimeNs();
  long st = System.nanoTime();
  Status res = db.insert(table, key, values);
  long en = System.nanoTime();
  measure("INSERT", res, ist, st, en);
  measurements.reportStatus("INSERT", res);
  return res;
 }
}

Code example from: brianfrankcooper/YCSB

/**
 * Update a record in the database. Any field/value pairs in the specified values HashMap will be written into the
 * record with the specified record key, overwriting any existing values with the same field name.
 *
 * @param table The name of the table
 * @param key The record key of the record to write.
 * @param values A HashMap of field/value pairs to update in the record
 * @return The result of the operation.
 */
public Status update(String table, String key,
           Map<String, ByteIterator> values) {
 try (final TraceScope span = tracer.newScope(scopeStringUpdate)) {
  long ist = measurements.getIntendedtartTimeNs();
  long st = System.nanoTime();
  Status res = db.update(table, key, values);
  long en = System.nanoTime();
  measure("UPDATE", res, ist, st, en);
  measurements.reportStatus("UPDATE", res);
  return res;
 }
}

Code example from: org.apache.hadoop/hadoop-common

Tracer tracer = Tracer.curThreadTracer();
if (tracer != null) {
 scope = tracer.newScope("Groups#fetchGroupList");
 scope.addKVAnnotation("user", user);

Code example from: brianfrankcooper/YCSB

/**
 * Perform a range scan for a set of records in the database.
 * Each field/value pair from the result will be stored in a HashMap.
 *
 * @param table The name of the table
 * @param startkey The record key of the first record to read.
 * @param recordcount The number of records to read
 * @param fields The list of fields to read, or null for all of them
 * @param result A Vector of HashMaps, where each HashMap is a set field/value pairs for one record
 * @return The result of the operation.
 */
public Status scan(String table, String startkey, int recordcount,
          Set<String> fields, Vector<HashMap<String, ByteIterator>> result) {
 try (final TraceScope span = tracer.newScope(scopeStringScan)) {
  long ist = measurements.getIntendedtartTimeNs();
  long st = System.nanoTime();
  Status res = db.scan(table, startkey, recordcount, fields, result);
  long en = System.nanoTime();
  measure("SCAN", res, ist, st, en);
  measurements.reportStatus("SCAN", res);
  return res;
 }
}

Code example from: brianfrankcooper/YCSB

int opsDone;
try (final TraceScope span = tracer.newScope(CLIENT_WORKLOAD_SPAN)) {
// ...
try (final TraceScope span = tracer.newScope(CLIENT_CLEANUP_SPAN)) {
// ...
try (final TraceScope span = tracer.newScope(CLIENT_EXPORT_MEASUREMENTS_SPAN)) {
 exportMeasurements(props, opsDone, en - st);

Code example from: org.apache.hadoop/hadoop-common

@Override
public Object invoke(Object proxy, Method method, Object[] args)
 throws Throwable {
 long startTime = 0;
 if (LOG.isDebugEnabled()) {
  startTime = Time.monotonicNow();
 }
 // if Tracing is on then start a new span for this rpc.
 // guard it in the if statement to make sure there isn't
 // any extra string manipulation.
 Tracer tracer = Tracer.curThreadTracer();
 TraceScope traceScope = null;
 if (tracer != null) {
  traceScope = tracer.newScope(RpcClientUtil.methodToTraceString(method));
 }
 ObjectWritable value;
 try {
  value = (ObjectWritable)
   client.call(RPC.RpcKind.RPC_WRITABLE, new Invocation(method, args),
    remoteId, fallbackToSimpleAuth);
 } finally {
  if (traceScope != null) traceScope.close();
 }
 if (LOG.isDebugEnabled()) {
  long callTime = Time.monotonicNow() - startTime;
  LOG.debug("Call: " + method.getName() + " " + callTime);
 }
 return value.get();
}

Code example from: org.apache.hadoop/hadoop-common

TraceScope traceScope = null;
if (tracer != null) {
 traceScope = tracer.newScope(RpcClientUtil.methodToTraceString(method));

Code example from: org.apache.hadoop/hadoop-common

throw new UnknownCommandException();
TraceScope scope = tracer.newScope(instance.getCommandName());
if (scope.getSpan() != null) {
 String args = StringUtils.join(" ", argv);

Code example from: org.apache.hadoop/hadoop-common

header.getTraceInfo().getTraceId(),
  header.getTraceInfo().getParentId());
traceScope = tracer.newScope(
  RpcClientUtil.toTraceName(rpcRequest.toString()),
  parentSpanId);

Code example from: org.apache.hadoop/hadoop-hdfs

private TraceScope continueTraceSpan(DataTransferTraceInfoProto proto,
                   String description) {
 TraceScope scope = null;
 SpanId spanId = fromProto(proto);
 if (spanId != null) {
  scope = tracer.newScope(description, spanId);
 }
 return scope;
}
