Usage of the org.apache.hadoop.hbase.client.Append.getAttribute() method, with code examples

x33g5p2x · reposted 2022-01-16, category: Other

This article collects Java code examples for the org.apache.hadoop.hbase.client.Append.getAttribute() method and shows how it is used in practice. The examples are drawn from selected projects on GitHub, Stack Overflow, Maven, and similar platforms, so they should serve as a useful reference. Details of Append.getAttribute():

Package: org.apache.hadoop.hbase.client
Class: Append
Method: getAttribute

About Append.getAttribute

The original source gives no description. In the HBase client API, Append inherits getAttribute(String name) from OperationWithAttributes: it returns the byte[] value previously stored under that name via setAttribute, or null if no such attribute has been set.
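Conceptually, the attributes carried by an Append are a per-operation name-to-byte[] map that travels with the request. A minimal stand-alone sketch of that lookup pattern, using only plain Java collections (the AttributeHolder class and names here are illustrative stand-ins, not HBase API):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-in for the attribute map that OperationWithAttributes carries.
class AttributeHolder {
    private final Map<String, byte[]> attributes = new HashMap<>();

    // Mirrors Append.setAttribute(String, byte[]): associates a byte[] payload with a name.
    void setAttribute(String name, byte[] value) {
        attributes.put(name, value);
    }

    // Mirrors Append.getAttribute(String): returns the payload, or null if never set.
    byte[] getAttribute(String name) {
        return attributes.get(name);
    }
}

public class AttributeDemo {
    public static void main(String[] args) {
        AttributeHolder op = new AttributeHolder();
        op.setAttribute("CHECK_COVERING_PERM", new byte[] { 1 });

        System.out.println(op.getAttribute("CHECK_COVERING_PERM") != null); // true: set attribute is found
        System.out.println(op.getAttribute("missing") == null);             // true: unset attribute yields null
    }
}
```

The null return for an unset attribute is what the coprocessor examples below test against when they treat an attribute as a presence marker.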

Code examples

Example source: apache/hbase

    @Override
    public Result preAppend(ObserverContext<RegionCoprocessorEnvironment> c, Append append)
        throws IOException {
      User user = getActiveUser(c);
      checkForReservedTagPresence(user, append);
      // Require WRITE permission to the table, CF, and the KV to be appended
      RegionCoprocessorEnvironment env = c.getEnvironment();
      Map<byte[], ? extends Collection<Cell>> families = append.getFamilyCellMap();
      AuthResult authResult = permissionGranted(OpType.APPEND, user, env, families, Action.WRITE);
      AccessChecker.logResult(authResult);
      if (!authResult.isAllowed()) {
        if (cellFeaturesEnabled && !compatibleEarlyTermination) {
          append.setAttribute(CHECK_COVERING_PERM, TRUE);
        } else if (authorizationEnabled) {
          throw new AccessDeniedException("Insufficient permissions " +
              authResult.toContextString());
        }
      }
      byte[] bytes = append.getAttribute(AccessControlConstants.OP_ATTRIBUTE_ACL);
      if (bytes != null) {
        if (cellFeaturesEnabled) {
          addCellPermissions(bytes, append.getFamilyCellMap());
        } else {
          throw new DoNotRetryIOException("Cell ACLs cannot be persisted");
        }
      }
      return null;
    }

Example source: apache/hbase

    @Override
    public Result preAppendAfterRowLock(final ObserverContext<RegionCoprocessorEnvironment> c,
        final Append append) throws IOException {
      if (append.getAttribute(CHECK_COVERING_PERM) != null) {
        // We had failure with table, cf and q perm checks and now giving a chance for cell
        // perm check
        TableName table = c.getEnvironment().getRegion().getRegionInfo().getTable();
        AuthResult authResult = null;
        User user = getActiveUser(c);
        if (checkCoveringPermission(user, OpType.APPEND, c.getEnvironment(), append.getRow(),
            append.getFamilyCellMap(), append.getTimeRange().getMax(), Action.WRITE)) {
          authResult = AuthResult.allow(OpType.APPEND.toString(),
              "Covering cell set", user, Action.WRITE, table, append.getFamilyCellMap());
        } else {
          authResult = AuthResult.deny(OpType.APPEND.toString(),
              "Covering cell set", user, Action.WRITE, table, append.getFamilyCellMap());
        }
        AccessChecker.logResult(authResult);
        if (authorizationEnabled && !authResult.isAllowed()) {
          throw new AccessDeniedException("Insufficient permissions " +
              authResult.toContextString());
        }
      }
      return null;
    }
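The two AccessController hooks above cooperate through a marker attribute: preAppend sets CHECK_COVERING_PERM when the coarse table/CF permission check fails, and preAppendAfterRowLock runs the more expensive cell-level check only when that marker is present. A stand-alone sketch of this hand-off pattern (the class, method names, and marker key are illustrative, not the AccessController API):

```java
import java.util.HashMap;
import java.util.Map;

public class TwoPhaseCheckDemo {
    // Stand-in for the per-operation attribute map carried by an Append.
    static final Map<String, byte[]> attrs = new HashMap<>();
    static final String CHECK_COVERING_PERM = "check.covering.perm";

    // Phase 1 (like preAppend): coarse table/CF check; on failure, defer via a marker attribute.
    static void phaseOne(boolean coarseCheckPassed) {
        if (!coarseCheckPassed) {
            attrs.put(CHECK_COVERING_PERM, new byte[] { 1 });
        }
    }

    // Phase 2 (like preAppendAfterRowLock): runs the expensive cell-level check
    // only when phase 1 left the marker behind.
    static boolean phaseTwo(boolean cellCheckPassed) {
        if (attrs.get(CHECK_COVERING_PERM) != null) {
            return cellCheckPassed; // the cell-level result decides
        }
        return true; // coarse check already succeeded; nothing further to do
    }

    public static void main(String[] args) {
        phaseOne(false);                    // coarse check fails -> marker set
        System.out.println(phaseTwo(true)); // cell-level check rescues the operation: true
    }
}
```

The point of the pattern is that the second hook is a no-op on the common path where the first check already succeeded.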

Example source: forcedotcom/phoenix

This snippet is a truncated excerpt in the source; elided portions are marked with // ... .

    public Result preAppend(final ObserverContext<RegionCoprocessorEnvironment> e,
        final Append append) throws IOException {
      byte[] opBuf = append.getAttribute(OPERATION_ATTRIB);
      if (opBuf == null) {
        return null;
      }
      // ...
      if (/* elided condition */) {
        // ...
        maxGetTimestamp = minGetTimestamp + 1;
      } else {
        clientTimestampBuf = append.getAttribute(MAX_TIMERANGE_ATTRIB);
        if (clientTimestampBuf != null) {
          clientTimestamp = maxGetTimestamp = Bytes.toLong(clientTimestampBuf);
        }
      }
      // ... later, inside a switch over the requested sequence operation:
      case RETURN_SEQUENCE:
        KeyValue currentValueKV = result.raw()[0];
        long expectedValue = PDataType.LONG.getCodec().decodeLong(
            append.getAttribute(CURRENT_VALUE_ATTRIB), 0, null);
        long value = PDataType.LONG.getCodec().decodeLong(
            currentValueKV.getBuffer(), currentValueKV.getValueOffset(), null);
      // ...

Example source: apache/phoenix

This is the same excerpt in a newer Phoenix version (again truncated in the source; elisions marked with // ...):

    public Result preAppend(
        org.apache.hadoop.hbase.coprocessor.ObserverContext<RegionCoprocessorEnvironment> e,
        Append append) throws IOException {
      byte[] opBuf = append.getAttribute(OPERATION_ATTRIB);
      if (opBuf == null) {
        return null;
      }
      // ...
      if (/* elided condition */) {
        // ...
        maxGetTimestamp = minGetTimestamp + 1;
      } else {
        clientTimestampBuf = append.getAttribute(MAX_TIMERANGE_ATTRIB);
        if (clientTimestampBuf != null) {
          clientTimestamp = maxGetTimestamp = Bytes.toLong(clientTimestampBuf);
        }
      }
      // ... later, inside a switch over the requested sequence operation:
      case RETURN_SEQUENCE:
        KeyValue currentValueKV = PhoenixKeyValueUtil.maybeCopyCell(result.rawCells()[0]);
        long expectedValue = PLong.INSTANCE.getCodec().decodeLong(
            append.getAttribute(CURRENT_VALUE_ATTRIB), 0, SortOrder.getDefault());
        long value = PLong.INSTANCE.getCodec().decodeLong(currentValueKV.getValueArray(),
            currentValueKV.getValueOffset(), SortOrder.getDefault());
      // ...
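The Bytes.toLong call in these Phoenix excerpts decodes the first 8 bytes of the attribute value as a big-endian long. The essence of that decode can be sketched with stdlib java.nio (illustrative only; HBase's Bytes utility additionally handles offsets and length checks):

```java
import java.nio.ByteBuffer;

public class LongDecodeDemo {
    // Big-endian long -> byte[], like what Bytes.toBytes(long) produces.
    static byte[] encode(long v) {
        return ByteBuffer.allocate(Long.BYTES).putLong(v).array();
    }

    // byte[] -> long, the essence of Bytes.toLong(byte[]); ByteBuffer is big-endian by default.
    static long decode(byte[] b) {
        return ByteBuffer.wrap(b).getLong();
    }

    public static void main(String[] args) {
        byte[] attr = encode(1640995200000L); // e.g. a client timestamp stored as an attribute
        System.out.println(decode(attr));     // prints 1640995200000
    }
}
```

Because attribute values are raw byte arrays, caller and coprocessor must agree on the encoding; a fixed-width big-endian long is the convention HBase's Bytes class uses.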

Example source: co.cask.hbase/hbase

    /**
     * @return current setting for returnResults
     */
    public boolean isReturnResults() {
      byte[] v = getAttribute(RETURN_RESULTS);
      return v == null ? true : Bytes.toBoolean(v);
    }
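The isReturnResults snippet shows a common "absent means default" idiom: a missing attribute comes back as null and is mapped to the default true; otherwise the stored byte is decoded as a boolean. A stand-alone sketch of the idiom with plain Java (the attribute key and class are illustrative; Bytes.toBoolean treats a single non-zero byte as true):

```java
import java.util.HashMap;
import java.util.Map;

public class DefaultTrueDemo {
    // Stand-in for the operation's attribute map.
    static final Map<String, byte[]> attrs = new HashMap<>();

    // Mirrors: byte[] v = getAttribute(RETURN_RESULTS);
    //          return v == null ? true : Bytes.toBoolean(v);
    static boolean isReturnResults() {
        byte[] v = attrs.get("RETURN_RESULTS");
        return v == null ? true : v[0] != 0; // single-byte boolean, like Bytes.toBoolean
    }

    public static void main(String[] args) {
        System.out.println(isReturnResults()); // attribute unset -> default true
        attrs.put("RETURN_RESULTS", new byte[] { 0 });
        System.out.println(isReturnResults()); // explicitly disabled -> false
    }
}
```

Encoding the default as "attribute absent" keeps the common case off the wire entirely.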

(The same snippet as the apache/phoenix excerpt above appears verbatim under the origins org.apache.phoenix/phoenix-core and com.aliyun.phoenix/ali-phoenix-core; the duplicates are omitted here.)

Example source: harbby/presto-connectors

    @Override
    public Result preAppend(ObserverContext<RegionCoprocessorEnvironment> c, Append append)
        throws IOException {
      User user = getActiveUser();
      checkForReservedTagPresence(user, append);
      // Require WRITE permission to the table, CF, and the KV to be appended
      RegionCoprocessorEnvironment env = c.getEnvironment();
      Map<byte[], ? extends Collection<Cell>> families = append.getFamilyCellMap();
      AuthResult authResult = permissionGranted(OpType.APPEND, user, env, families, Action.WRITE);
      logResult(authResult);
      if (!authResult.isAllowed()) {
        if (cellFeaturesEnabled && !compatibleEarlyTermination) {
          append.setAttribute(CHECK_COVERING_PERM, TRUE);
        } else if (authorizationEnabled) {
          throw new AccessDeniedException("Insufficient permissions " +
              authResult.toContextString());
        }
      }
      byte[] bytes = append.getAttribute(AccessControlConstants.OP_ATTRIBUTE_ACL);
      if (bytes != null) {
        if (cellFeaturesEnabled) {
          addCellPermissions(bytes, append.getFamilyCellMap());
        } else {
          throw new DoNotRetryIOException("Cell ACLs cannot be persisted");
        }
      }
      return null;
    }

Example source: harbby/presto-connectors

    @Override
    public Result preAppendAfterRowLock(final ObserverContext<RegionCoprocessorEnvironment> c,
        final Append append) throws IOException {
      if (append.getAttribute(CHECK_COVERING_PERM) != null) {
        // We had failure with table, cf and q perm checks and now giving a chance for cell
        // perm check
        TableName table = c.getEnvironment().getRegion().getRegionInfo().getTable();
        AuthResult authResult = null;
        if (checkCoveringPermission(OpType.APPEND, c.getEnvironment(), append.getRow(),
            append.getFamilyCellMap(), HConstants.LATEST_TIMESTAMP, Action.WRITE)) {
          authResult = AuthResult.allow(OpType.APPEND.toString(), "Covering cell set",
              getActiveUser(), Action.WRITE, table, append.getFamilyCellMap());
        } else {
          authResult = AuthResult.deny(OpType.APPEND.toString(), "Covering cell set",
              getActiveUser(), Action.WRITE, table, append.getFamilyCellMap());
        }
        logResult(authResult);
        if (authorizationEnabled && !authResult.isAllowed()) {
          throw new AccessDeniedException("Insufficient permissions " +
              authResult.toContextString());
        }
      }
      return null;
    }
