Usage and code examples for org.apache.hadoop.hbase.client.Increment.getFamilyMapOfLongs()

x33g5p2x · reposted 2022-01-21 · category: Other

This article collects Java code examples for the org.apache.hadoop.hbase.client.Increment.getFamilyMapOfLongs() method, showing how it is used in practice. The examples are extracted from selected open-source projects on GitHub, Stack Overflow, Maven, and similar platforms, and should serve as useful references. Method details:
Package: org.apache.hadoop.hbase.client
Class: Increment
Method: getFamilyMapOfLongs

About Increment.getFamilyMapOfLongs

Before HBase 0.95, calling Increment#getFamilyMap() returned a map from column families to lists of Longs. Now #getFamilyCellMap() returns families mapped to lists of Cells. getFamilyMapOfLongs() was added so the old behavior remains available.
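Conceptually, getFamilyMapOfLongs() converts the cell-based family map back into the pre-0.95 shape: family -> (qualifier -> Long). The JDK-only sketch below illustrates that data-shape conversion; the Cell record, BYTES comparator, and toFamilyMapOfLongs helper are illustrative stand-ins for HBase's Cell, Bytes.BYTES_COMPARATOR, and the real method, not HBase API.

```java
import java.nio.ByteBuffer;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

public class FamilyMapOfLongsSketch {
    // Stand-in for Bytes.BYTES_COMPARATOR: unsigned lexicographic byte order (assumption).
    static final Comparator<byte[]> BYTES = (a, b) -> {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int c = (a[i] & 0xff) - (b[i] & 0xff);
            if (c != 0) return c;
        }
        return a.length - b.length;
    };

    // Stand-in "cell": a qualifier plus an 8-byte big-endian long value.
    record Cell(byte[] qualifier, byte[] value) {}

    // Sketch of the conversion: family -> cells becomes family -> (qualifier -> Long).
    static Map<byte[], NavigableMap<byte[], Long>> toFamilyMapOfLongs(
            Map<byte[], List<Cell>> cellMap) {
        Map<byte[], NavigableMap<byte[], Long>> out = new TreeMap<>(BYTES);
        for (Map.Entry<byte[], List<Cell>> e : cellMap.entrySet()) {
            NavigableMap<byte[], Long> longs = new TreeMap<>(BYTES);
            for (Cell c : e.getValue()) {
                // Decode the 8-byte value back into a long, as getFamilyMapOfLongs does.
                longs.put(c.qualifier(), ByteBuffer.wrap(c.value()).getLong());
            }
            out.put(e.getKey(), longs);
        }
        return out;
    }

    public static void main(String[] args) {
        Map<byte[], List<Cell>> cells = new TreeMap<>(BYTES);
        cells.put("cf".getBytes(), List.of(
            new Cell("q1".getBytes(), ByteBuffer.allocate(8).putLong(13L).array())));
        Map<byte[], NavigableMap<byte[], Long>> m = toFamilyMapOfLongs(cells);
        System.out.println(m.get("cf".getBytes()).get("q1".getBytes())); // prints 13
    }
}
```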

Code examples

Example source: apache/flume

@SuppressWarnings("unchecked")
private Map<byte[], NavigableMap<byte[], Long>> getFamilyMap(Increment inc) {
 Preconditions.checkNotNull(inc, "Increment required");
 return inc.getFamilyMapOfLongs();
}

Example source: apache/hbase

@Test
public void testIncrementInstance() {
  final long expected = 13;
  Increment inc = new Increment(new byte[] {'r'});
  int total = 0;
  for (int i = 0; i < 2; i++) {
    byte[] bytes = Bytes.toBytes(i);
    inc.addColumn(bytes, bytes, expected);
    total++;
  }
  Map<byte[], NavigableMap<byte[], Long>> familyMapOfLongs = inc.getFamilyMapOfLongs();
  int found = 0;
  for (Map.Entry<byte[], NavigableMap<byte[], Long>> entry : familyMapOfLongs.entrySet()) {
    for (Map.Entry<byte[], Long> e : entry.getValue().entrySet()) {
      assertEquals(expected, e.getValue().longValue());
      found++;
    }
  }
  assertEquals(total, found);
}

Example source: apache/metron

NavigableMap<byte[], Long> set = inc.getFamilyMapOfLongs().get(family);
if (set == null) {
  set = new TreeMap<byte[], Long>(Bytes.BYTES_COMPARATOR);
  inc.getFamilyMapOfLongs().put(family, set);
}
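The snippet above uses a manual get-or-create for the per-family map. The same pattern can be written more compactly with Map.computeIfAbsent. The sketch below uses only JDK maps; famMap is a stand-in for the map returned by getFamilyMapOfLongs(), and BYTES is a stand-in for Bytes.BYTES_COMPARATOR.

```java
import java.util.Comparator;
import java.util.NavigableMap;
import java.util.TreeMap;

public class GetOrCreateSketch {
    // Stand-in for Bytes.BYTES_COMPARATOR: unsigned lexicographic byte order (assumption).
    static final Comparator<byte[]> BYTES = (a, b) -> {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int c = (a[i] & 0xff) - (b[i] & 0xff);
            if (c != 0) return c;
        }
        return a.length - b.length;
    };

    public static void main(String[] args) {
        // famMap plays the role of inc.getFamilyMapOfLongs().
        NavigableMap<byte[], NavigableMap<byte[], Long>> famMap = new TreeMap<>(BYTES);
        byte[] family = "cf".getBytes();

        // Equivalent of the null-check-then-put idiom in the snippet above:
        NavigableMap<byte[], Long> set =
            famMap.computeIfAbsent(family, f -> new TreeMap<>(BYTES));
        set.put("counter".getBytes(), 1L);

        System.out.println(famMap.get(family).size()); // prints 1
    }
}
```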

Example source: larsgeorge/hbase-book

Map<byte[], NavigableMap<byte[], Long>> longs = increment1.getFamilyMapOfLongs();
for (byte[] family : longs.keySet()) {
  System.out.println("Increment #1 - family: " + Bytes.toString(family));
}

Example source: yahoo/simplified-lambda

/**
 * {@inheritDoc}
 */
@Override
public Result increment(Increment increment) throws IOException {
  this.sleeper();
  List<KeyValue> kvs = new ArrayList<KeyValue>();
  Map<byte[], NavigableMap<byte[], Long>> famToVal = increment.getFamilyMapOfLongs();
  for (Map.Entry<byte[], NavigableMap<byte[], Long>> ef : famToVal.entrySet()) {
    byte[] family = ef.getKey();
    NavigableMap<byte[], Long> qToVal = ef.getValue();
    for (Map.Entry<byte[], Long> eq : qToVal.entrySet()) {
      long newValue = incrementColumnValue(increment.getRow(), family, eq.getKey(), eq.getValue());
      kvs.add(new KeyValue(increment.getRow(), family, eq.getKey(), Bytes.toBytes(newValue)));
    }
  }
  return new Result(kvs);
}
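The mock above applies each requested increment via incrementColumnValue and returns the new values as a Result. The read-modify-write core of such an in-memory store can be sketched with a plain map of counters; the class and key format below are illustrative simplifications (String "family:qualifier" keys instead of HBase's byte[] keys), not the project's API.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CounterStoreSketch {
    // In-memory counters keyed by "family:qualifier" strings.
    private final Map<String, Long> counters = new LinkedHashMap<>();

    // Analogue of incrementColumnValue: add the delta and return the new total.
    public long increment(String key, long delta) {
        return counters.merge(key, delta, Long::sum);
    }

    public static void main(String[] args) {
        CounterStoreSketch store = new CounterStoreSketch();
        System.out.println(store.increment("cf:hits", 1)); // prints 1
        System.out.println(store.increment("cf:hits", 5)); // prints 6
    }
}
```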

Example source: rayokota/hgraphdb

/**
 * {@inheritDoc}
 */
@Override
public Result increment(Increment increment) throws IOException {
  List<Cell> kvs = new ArrayList<>();
  Map<byte[], NavigableMap<byte[], Long>> famToVal = increment.getFamilyMapOfLongs();
  for (Map.Entry<byte[], NavigableMap<byte[], Long>> ef : famToVal.entrySet()) {
    byte[] family = ef.getKey();
    NavigableMap<byte[], Long> qToVal = ef.getValue();
    for (Map.Entry<byte[], Long> eq : qToVal.entrySet()) {
      //noinspection UnusedAssignment
      long newValue = incrementColumnValue(increment.getRow(), family, eq.getKey(), eq.getValue());
      Map.Entry<Long, byte[]> timestampAndValue = data.get(increment.getRow()).get(family).get(eq.getKey()).lastEntry();
      kvs.add(new KeyValue(increment.getRow(), family, eq.getKey(), timestampAndValue.getKey(), timestampAndValue.getValue()));
    }
  }
  return Result.create(kvs);
}

Example source: GoogleCloudPlatform/cloud-bigtable-client

/** {@inheritDoc} */
@Override
public void adapt(Increment operation, ReadModifyWriteRow readModifyWriteRow) {
  if (!operation.getTimeRange().isAllTime()) {
    throw new UnsupportedOperationException(
        "Setting the time range in an Increment is not implemented");
  }

  for (Map.Entry<byte[], NavigableMap<byte[], Long>> familyEntry :
      operation.getFamilyMapOfLongs().entrySet()) {
    String familyName = Bytes.toString(familyEntry.getKey());
    // Bigtable applies all increments present in a single RPC. HBase applies only the last
    // mutation present, if any. We remove all but the last mutation for each qualifier here:
    List<Cell> mutationCells =
        CellDeduplicationHelper.deduplicateFamily(operation, familyEntry.getKey());

    for (Cell cell : mutationCells) {
      readModifyWriteRow.increment(
          familyName,
          ByteString.copyFrom(
              cell.getQualifierArray(),
              cell.getQualifierOffset(),
              cell.getQualifierLength()),
          Bytes.toLong(cell.getValueArray(), cell.getValueOffset(), cell.getValueLength()));
    }
  }
}
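The comment in the adapter notes that HBase applies only the last mutation per qualifier, so all but the last one are dropped before building the Bigtable request. That "keep the last value per qualifier" step can be sketched with a LinkedHashMap, where later entries overwrite earlier ones; qualifiers are modeled as Strings here, and CellDeduplicationHelper's real logic may differ in detail.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class DeduplicateSketch {
    // Keep only the last (qualifier, amount) pair per qualifier, preserving
    // first-seen qualifier order.
    static Map<String, Long> deduplicate(List<Map.Entry<String, Long>> mutations) {
        Map<String, Long> last = new LinkedHashMap<>();
        for (Map.Entry<String, Long> m : mutations) {
            last.put(m.getKey(), m.getValue()); // later entries overwrite earlier ones
        }
        return last;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Long>> mutations = List.of(
            Map.entry("q1", 1L),
            Map.entry("q1", 7L),   // duplicate qualifier: only this value survives
            Map.entry("q2", 2L));
        System.out.println(deduplicate(mutations)); // prints {q1=7, q2=2}
    }
}
```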

Example source: cdapio/cdap

Get get = new Get(increment.getRow());
get.setAttribute(TxConstants.TX_OPERATION_ATTRIBUTE_KEY, txBytes);
for (Map.Entry<byte[], NavigableMap<byte[], Long>> entry : increment.getFamilyMapOfLongs().entrySet()) {
  byte[] family = entry.getKey();
  for (byte[] column : entry.getValue().keySet()) {
    // ... (snippet truncated in the source)
  }
}
// ... (snippet truncated in the source)
put.setAttribute(HBaseTable.TX_MAX_LIFETIME_MILLIS_KEY,
                 increment.getAttribute(HBaseTable.TX_MAX_LIFETIME_MILLIS_KEY));
put.setAttribute(HBaseTable.TX_ID, increment.getAttribute(HBaseTable.TX_ID));
for (Map.Entry<byte[], NavigableMap<byte[], Long>> entry : increment.getFamilyMapOfLongs().entrySet()) {
  byte[] family = entry.getKey();
  for (Map.Entry<byte[], Long> colEntry : entry.getValue().entrySet()) {
    // ... (snippet truncated in the source)
  }
}
