Usage of org.apache.hadoop.hbase.client.Increment.setWriteToWAL() with code examples


This article collects Java code examples of org.apache.hadoop.hbase.client.Increment.setWriteToWAL() and shows how the method is used. The examples are taken from selected open-source projects on GitHub/Stack Overflow/Maven and should be useful references. Details of Increment.setWriteToWAL() are as follows:
Package: org.apache.hadoop.hbase.client
Class: Increment
Method: setWriteToWAL

Increment.setWriteToWAL overview

Sets whether this operation should write to the WAL or not.
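For context, here is a minimal usage sketch. It assumes an HBase 0.94-era client, where setWriteToWAL is still available; later releases deprecated it in favor of setDurability(Durability.SKIP_WAL). The table, family, and qualifier names are illustrative:

// Sketch only: increment a counter without writing to the WAL.
// "conf" is an existing org.apache.hadoop.conf.Configuration.
HTable table = new HTable(conf, "counters");
Increment inc = new Increment(Bytes.toBytes("row1"));
inc.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("hits"), 1L);
// Skipping the WAL is faster, but the update is lost if the
// region server crashes before a memstore flush.
inc.setWriteToWAL(false);
table.increment(inc);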

Code examples

Source: apache/flume (also published on Maven as org.apache.flume.flume-ng-sinks/flume-ng-hbase-sink)

@Override
public Void run() throws Exception {
  for (Row r : actions) {
    if (r instanceof Put) {
      ((Put) r).setWriteToWAL(enableWal);
    }
    // Newer versions of HBase - Increment implements Row.
    if (r instanceof Increment) {
      ((Increment) r).setWriteToWAL(enableWal);
    }
  }
  table.batch(actions);
  return null;
}
});
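HTable.batch(List&lt;Row&gt;) accepts heterogeneous actions, so the sink downcasts each Row to Put or Increment before toggling the WAL flag; the separate instanceof check exists because, as the comment notes, Increment only implements Row in newer HBase versions.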

Source: apache/flume (also published on Maven as org.apache.flume.flume-ng-sinks/flume-ng-hbase-sink)

@Override
public Void run() throws Exception {
  List<Increment> processedIncrements;
  if (batchIncrements) {
    processedIncrements = coalesceIncrements(incs);
  } else {
    processedIncrements = incs;
  }
  // Only used for unit testing.
  if (debugIncrCallback != null) {
    debugIncrCallback.onAfterCoalesce(processedIncrements);
  }
  for (final Increment i : processedIncrements) {
    i.setWriteToWAL(enableWal);
    table.increment(i);
  }
  return null;
}
});
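coalesceIncrements (not shown here) merges increments that hit the same row and column into a single Increment, so fewer RPCs reach the region server. Below is a minimal sketch of the idea, not Flume's actual implementation, assuming the per-row/per-column amounts have already been accumulated into a map keyed by row and "family:qualifier" strings:

// Sketch only: build one Increment per row from accumulated counts.
// Real row keys are arbitrary bytes; strings are used here for simplicity.
Map<String, Map<String, Long>> counts =
    new HashMap<String, Map<String, Long>>(); // filled while processing events
List<Increment> coalesced = new ArrayList<Increment>();
for (Map.Entry<String, Map<String, Long>> row : counts.entrySet()) {
  Increment inc = new Increment(Bytes.toBytes(row.getKey()));
  inc.setWriteToWAL(enableWal);
  for (Map.Entry<String, Long> col : row.getValue().entrySet()) {
    String[] fq = col.getKey().split(":", 2); // "family:qualifier"
    inc.addColumn(Bytes.toBytes(fq[0]), Bytes.toBytes(fq[1]), col.getValue());
  }
  coalesced.add(inc);
}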

Source: jrkinley/storm-hbase

/**
 * Creates an HBase {@link Increment} from a Storm {@link Tuple}.
 * @param tuple The {@link Tuple}
 * @param increment The amount to increment the counter by
 * @return {@link Increment}
 */
public Increment getIncrementFromTuple(final Tuple tuple, final long increment) {
  byte[] rowKey = Bytes.toBytes(tuple.getStringByField(tupleRowKeyField));
  Increment inc = new Increment(rowKey);
  inc.setWriteToWAL(writeToWAL);
  if (columnFamilies.size() > 0) {
    for (String cf : columnFamilies.keySet()) {
      byte[] cfBytes = Bytes.toBytes(cf);
      for (String cq : columnFamilies.get(cf)) {
        byte[] val;
        try {
          val = Bytes.toBytes(tuple.getStringByField(cq));
        } catch (IllegalArgumentException ex) {
          // If cq isn't a tuple field, use cq itself as the counter
          // qualifier instead of the tuple value.
          val = Bytes.toBytes(cq);
        }
        inc.addColumn(cfBytes, val, increment);
      }
    }
  }
  return inc;
}
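Note the fallback in the catch block: the column qualifier is normally the tuple's value for the field named cq, but when the tuple has no such field, the literal qualifier name is used instead, so one Increment can mix tuple-driven and fixed counters.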

Source: co.cask.hbase/hbase

public static Increment incrementFromThrift(TIncrement in) throws IOException {
  Increment out = new Increment(in.getRow());
  for (TColumnIncrement column : in.getColumns()) {
    out.addColumn(column.getFamily(), column.getQualifier(), column.getAmount());
  }
  out.setWriteToWAL(in.isWriteToWal());
  return out;
}
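A hypothetical caller for the conversion above. The setter names assume the standard thrift2-generated TIncrement and TColumnIncrement classes and may differ between HBase versions:

// Sketch only: build a TIncrement with the WAL disabled and convert it.
TColumnIncrement col = new TColumnIncrement();
col.setFamily(Bytes.toBytes("cf"));
col.setQualifier(Bytes.toBytes("hits"));
col.setAmount(1L);

TIncrement tinc = new TIncrement();
tinc.setRow(Bytes.toBytes("row1"));
tinc.setColumns(Collections.singletonList(col));
tinc.setWriteToWal(false); // carried into Increment.setWriteToWAL above

Increment inc = incrementFromThrift(tinc);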
