Usage and code examples of the org.apache.hadoop.hbase.client.Table.batch() method

This article collects Java code examples for the org.apache.hadoop.hbase.client.Table.batch() method and shows how Table.batch() is used in practice. The examples are taken from selected projects found on GitHub, Stack Overflow, Maven, and similar sources, so they should serve as useful references. Details of Table.batch() are as follows:

Package: org.apache.hadoop.hbase.client
Class: Table
Method: batch

Table.batch overview

Javadoc summary: Same as #batch(List, Object[]), but returns an array of results instead of using a results parameter reference. Note that the overload used throughout the examples below is void batch(List<? extends Row> actions, Object[] results), which executes a batch of Gets, Puts, Deletes, Increments, Appends, and RowMutations and writes the outcome of each action into the supplied results array (a Result on success, a Throwable on failure).
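To make the two-argument signature concrete before the real-world examples, here is a minimal sketch of a mixed Put/Get batch. It uses only standard HBase client API calls; the table name "demo", column family "cf", and the row keys are hypothetical placeholders chosen for illustration.

    import java.util.ArrayList;
    import java.util.List;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException;
    import org.apache.hadoop.hbase.client.Row;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class TableBatchSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf);
            Table table = connection.getTable(TableName.valueOf("demo"))) { // "demo" is a placeholder
          List<Row> actions = new ArrayList<>();
          // A Put and a Get can go into the same batch; "cf" is a placeholder family.
          actions.add(new Put(Bytes.toBytes("row1"))
              .addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v")));
          actions.add(new Get(Bytes.toBytes("row2")));
          // One slot per action: a Result on success, a Throwable on failure.
          Object[] results = new Object[actions.size()];
          try {
            table.batch(actions, results);
          } catch (RetriesExhaustedWithDetailsException e) {
            // Thrown when some actions ultimately fail; per-action outcomes
            // are still available in the results array.
          }
          for (int i = 0; i < results.length; i++) {
            if (results[i] instanceof Result) {
              System.out.println("action " + i + " ok: " + results[i]);
            } else {
              System.out.println("action " + i + " failed: " + results[i]);
            }
          }
        }
      }
    }

If any action fails after all retries, batch() throws RetriesExhaustedWithDetailsException, but the results array is still filled per action, which is why several of the tests below inspect it inside the catch block.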

Code examples

Example source: thinkaurelius/titan

    @Override
    public void batch(List<Row> writes, Object[] results) throws IOException, InterruptedException
    {
        table.batch(writes, results);
        /* table.flushCommits(); not needed anymore */
    }

Example source: apache/hbase

    /**
     * Execute the passed <code>mutations</code> against <code>hbase:meta</code> table.
     * @param connection connection we're using
     * @param mutations Puts and Deletes to execute on hbase:meta
     * @throws IOException
     */
    public static void mutateMetaTable(final Connection connection,
        final List<Mutation> mutations) throws IOException {
      Table t = getMetaHTable(connection);
      try {
        debugLogMutations(mutations);
        t.batch(mutations, null);
      } catch (InterruptedException e) {
        InterruptedIOException ie = new InterruptedIOException(e.getMessage());
        ie.initCause(e);
        throw ie;
      } finally {
        t.close();
      }
    }

Example source: apache/hbase

    /**
     * Do the changes and handle the pool
     * @param tableName table to insert into
     * @param allRows list of actions
     * @throws IOException
     */
    private void batch(TableName tableName, Collection<List<Row>> allRows) throws IOException {
      if (allRows.isEmpty()) {
        return;
      }
      Connection connection = getConnection();
      try (Table table = connection.getTable(tableName)) {
        for (List<Row> rows : allRows) {
          table.batch(rows, null);
        }
      } catch (RetriesExhaustedWithDetailsException rewde) {
        for (Throwable ex : rewde.getCauses()) {
          if (ex instanceof TableNotFoundException) {
            throw new TableNotFoundException("'" + tableName + "'");
          }
        }
        throw rewde;
      } catch (InterruptedException ix) {
        throw (InterruptedIOException) new InterruptedIOException().initCause(ix);
      }
    }

Example source: apache/hbase

    private void testAppend(Increment inc) throws Exception {
      checkResult(table.increment(inc));
      List<Row> actions = Arrays.asList(inc, inc);
      Object[] results = new Object[actions.size()];
      table.batch(actions, results);
      checkResult(results);
    }

Example source: apache/hbase

    private void testAppend(Append append) throws Exception {
      checkResult(table.append(append));
      List<Row> actions = Arrays.asList(append, append);
      Object[] results = new Object[actions.size()];
      table.batch(actions, results);
      checkResult(results);
    }

Example source: apache/hbase

    @Override
    public void prePut(final ObserverContext<RegionCoprocessorEnvironment> e, final Put put,
        final WALEdit edit, final Durability durability) throws IOException {
      try (Table table = e.getEnvironment().getConnection().getTable(otherTable, getPool())) {
        Put p = new Put(new byte[]{'a'});
        p.addColumn(family, null, new byte[]{'a'});
        try {
          table.batch(Collections.singletonList(put), null);
        } catch (InterruptedException e1) {
          throw new IOException(e1);
        }
        completedWithPool[0] = true;
      }
    }

Example source: apache/hbase

    table.batch(puts, results);
    table.batch(gets, multiRes);

Example source: apache/hbase

    @Test
    public void testHTableBatchWithEmptyPut() throws Exception {
      Table table = TEST_UTIL.createTable(TableName.valueOf(name.getMethodName()),
          new byte[][] { FAMILY });
      try {
        List actions = (List) new ArrayList();
        Object[] results = new Object[2];
        // create an empty Put
        Put put1 = new Put(ROW);
        actions.add(put1);
        Put put2 = new Put(ANOTHERROW);
        put2.addColumn(FAMILY, QUALIFIER, VALUE);
        actions.add(put2);
        table.batch(actions, results);
        fail("Empty Put should have failed the batch call");
      } catch (IllegalArgumentException iae) {
        // expected: an empty Put is rejected
      } finally {
        table.close();
      }
    }

Example source: apache/hbase

    @Test
    public void testHTableWithLargeBatch() throws Exception {
      Table table = TEST_UTIL.createTable(TableName.valueOf(name.getMethodName()),
          new byte[][] { FAMILY });
      int sixtyFourK = 64 * 1024;
      try {
        List actions = new ArrayList();
        Object[] results = new Object[(sixtyFourK + 1) * 2];
        for (int i = 0; i < sixtyFourK + 1; i++) {
          Put put1 = new Put(ROW);
          put1.addColumn(FAMILY, QUALIFIER, VALUE);
          actions.add(put1);
          Put put2 = new Put(ANOTHERROW);
          put2.addColumn(FAMILY, QUALIFIER, VALUE);
          actions.add(put2);
        }
        table.batch(actions, results);
      } finally {
        table.close();
      }
    }

Example source: apache/hbase

    @Test
    public void testBadFam() throws Exception {
      LOG.info("test=testBadFam");
      Table table = UTIL.getConnection().getTable(TEST_TABLE);
      List<Row> actions = new ArrayList<>();
      Put p = new Put(Bytes.toBytes("row1"));
      p.addColumn(Bytes.toBytes("bad_family"), Bytes.toBytes("qual"), Bytes.toBytes("value"));
      actions.add(p);
      p = new Put(Bytes.toBytes("row2"));
      p.addColumn(BYTES_FAMILY, Bytes.toBytes("qual"), Bytes.toBytes("value"));
      actions.add(p);
      // row1 and row2 should be in the same region.
      Object[] r = new Object[actions.size()];
      try {
        table.batch(actions, r);
        fail();
      } catch (RetriesExhaustedWithDetailsException ex) {
        LOG.debug(ex.toString(), ex);
        // good!
        assertFalse(ex.mayHaveClusterIssues());
      }
      assertEquals(2, r.length);
      assertTrue(r[0] instanceof Throwable);
      assertTrue(r[1] instanceof Result);
      table.close();
    }

Example source: apache/hbase

        new Put(ROW).addColumn(FAMILY, QUALIFIERS[0], VALUE)));
    Object[] batchResult = new Object[1];
    t.batch(Arrays.asList(arm), batchResult);
        new Put(ROW).addColumn(FAMILY, QUALIFIERS[1], VALUE),
        new Delete(ROW).addColumns(FAMILY, QUALIFIERS[0])));
    t.batch(Arrays.asList(arm), batchResult);
    r = t.get(g);
    assertEquals(0, Bytes.compareTo(VALUE, r.getValue(FAMILY, QUALIFIERS[1])));
    arm = RowMutations.of(Collections.singletonList(
        new Put(ROW).addColumn(new byte[]{'b', 'o', 'g', 'u', 's'}, QUALIFIERS[0], VALUE)));
    t.batch(Arrays.asList(arm), batchResult);
    fail("Expected RetriesExhaustedWithDetailsException with NoSuchColumnFamilyException");
  } catch (RetriesExhaustedWithDetailsException e) {

Example source: apache/hbase

    @Test
    public void testBatchWithDelete() throws Exception {
      LOG.info("test=testBatchWithDelete");
      Table table = UTIL.getConnection().getTable(TEST_TABLE);
      // Load some data
      List<Put> puts = constructPutRequests();
      Object[] results = new Object[puts.size()];
      table.batch(puts, results);
      validateSizeAndEmpty(results, KEYS.length);
      // Deletes
      List<Row> deletes = new ArrayList<>();
      for (int i = 0; i < KEYS.length; i++) {
        Delete delete = new Delete(KEYS[i]);
        delete.addFamily(BYTES_FAMILY);
        deletes.add(delete);
      }
      results = new Object[deletes.size()];
      table.batch(deletes, results);
      validateSizeAndEmpty(results, KEYS.length);
      // Get to make sure ...
      for (byte[] k : KEYS) {
        Get get = new Get(k);
        get.addColumn(BYTES_FAMILY, QUALIFIER);
        Assert.assertFalse(table.exists(get));
      }
      table.close();
    }

Example source: apache/hbase

    @Test
    public void testBatchAppendWithReturnResultFalse() throws Exception {
      LOG.info("Starting testBatchAppendWithReturnResultFalse");
      final TableName tableName = TableName.valueOf(name.getMethodName());
      Table table = TEST_UTIL.createTable(tableName, FAMILY);
      Append append1 = new Append(Bytes.toBytes("row1"));
      append1.setReturnResults(false);
      append1.addColumn(FAMILY, Bytes.toBytes("f1"), Bytes.toBytes("value1"));
      Append append2 = new Append(Bytes.toBytes("row1"));
      append2.setReturnResults(false);
      append2.addColumn(FAMILY, Bytes.toBytes("f1"), Bytes.toBytes("value2"));
      List<Append> appends = new ArrayList<>();
      appends.add(append1);
      appends.add(append2);
      Object[] results = new Object[2];
      table.batch(appends, results);
      assertTrue(results.length == 2);
      for (Object r : results) {
        Result result = (Result) r;
        assertTrue(result.isEmpty());
      }
      table.close();
    }

Example source: apache/hbase

    table.batch(actions, multiRes);
    validateResult(multiRes[1], QUAL1, Bytes.toBytes("abcdef"));
    validateResult(multiRes[1], QUAL4, Bytes.toBytes("xyz"));

Example source: apache/hbase

    @Test
    public void testBatchIncrementsWithReturnResultFalse() throws Exception {
      LOG.info("Starting testBatchIncrementsWithReturnResultFalse");
      final TableName tableName = TableName.valueOf(name.getMethodName());
      Table table = TEST_UTIL.createTable(tableName, FAMILY);
      Increment inc1 = new Increment(Bytes.toBytes("row2"));
      inc1.setReturnResults(false);
      inc1.addColumn(FAMILY, Bytes.toBytes("f1"), 1);
      Increment inc2 = new Increment(Bytes.toBytes("row2"));
      inc2.setReturnResults(false);
      inc2.addColumn(FAMILY, Bytes.toBytes("f1"), 1);
      List<Increment> incs = new ArrayList<>();
      incs.add(inc1);
      incs.add(inc2);
      Object[] results = new Object[2];
      table.batch(incs, results);
      assertTrue(results.length == 2);
      for (Object r : results) {
        Result result = (Result) r;
        assertTrue(result.isEmpty());
      }
      table.close();
    }

Example source: apache/hbase

    @Test
    public void testHTableDeleteWithList() throws Exception {
      LOG.info("test=testHTableDeleteWithList");
      Table table = UTIL.getConnection().getTable(TEST_TABLE);
      // Load some data
      List<Put> puts = constructPutRequests();
      Object[] results = new Object[puts.size()];
      table.batch(puts, results);
      validateSizeAndEmpty(results, KEYS.length);
      // Deletes
      ArrayList<Delete> deletes = new ArrayList<>();
      for (int i = 0; i < KEYS.length; i++) {
        Delete delete = new Delete(KEYS[i]);
        delete.addFamily(BYTES_FAMILY);
        deletes.add(delete);
      }
      table.delete(deletes);
      Assert.assertTrue(deletes.isEmpty());
      // Get to make sure ...
      for (byte[] k : KEYS) {
        Get get = new Get(k);
        get.addColumn(BYTES_FAMILY, QUALIFIER);
        Assert.assertFalse(table.exists(get));
      }
      table.close();
    }

Example source: apache/hbase

    @Test
    public void testBatchWithPut() throws Exception {
      LOG.info("test=testBatchWithPut");
      Table table = CONNECTION.getTable(TEST_TABLE);
      // put multiple rows using a batch
      List<Put> puts = constructPutRequests();
      Object[] results = new Object[puts.size()];
      table.batch(puts, results);
      validateSizeAndEmpty(results, KEYS.length);
      if (true) {
        int liveRScount = UTIL.getMiniHBaseCluster().getLiveRegionServerThreads().size();
        assert liveRScount > 0;
        JVMClusterUtil.RegionServerThread liveRS =
            UTIL.getMiniHBaseCluster().getLiveRegionServerThreads().get(0);
        liveRS.getRegionServer().abort("Aborting for tests", new Exception("testBatchWithPut"));
        puts = constructPutRequests();
        try {
          results = new Object[puts.size()];
          table.batch(puts, results);
        } catch (RetriesExhaustedWithDetailsException ree) {
          LOG.info(ree.getExhaustiveDescription());
          table.close();
          throw ree;
        }
        validateSizeAndEmpty(results, KEYS.length);
      }
      validateLoadedData(table);
      table.close();
    }

Example source: apache/hbase

    Object[] objs = new Object[batches.size()];
    try (Table table = TEST_UTIL.getConnection().getTable(TABLE_NAME)) {
      table.batch(batches, objs);
      fail("Where is the exception? We put the malformed cells!!!");
    } catch (RetriesExhaustedWithDetailsException e) {

Example source: apache/hbase

    /**
     * This is for testing the active number of threads that were used while
     * doing a batch operation. It inserts one row per region via the batch
     * operation, and then checks the number of active threads.
     * <p/>
     * For HBASE-3553
     */
    @Test
    public void testActiveThreadsCount() throws Exception {
      UTIL.getConfiguration().setLong("hbase.htable.threads.coresize", slaves + 1);
      try (Connection connection = ConnectionFactory.createConnection(UTIL.getConfiguration())) {
        ThreadPoolExecutor executor = HTable.getDefaultExecutor(UTIL.getConfiguration());
        try {
          try (Table t = connection.getTable(TEST_TABLE, executor)) {
            List<Put> puts = constructPutRequests(); // creates a Put for every region
            t.batch(puts, null);
            HashSet<ServerName> regionservers = new HashSet<>();
            try (RegionLocator locator = connection.getRegionLocator(TEST_TABLE)) {
              for (Row r : puts) {
                HRegionLocation location = locator.getRegionLocation(r.getRow());
                regionservers.add(location.getServerName());
              }
            }
            assertEquals(regionservers.size(), executor.getLargestPoolSize());
          }
        } finally {
          executor.shutdownNow();
        }
      }
    }

Example source: apache/hbase

    Object[] results3 = new Object[actions.size()];
    Object[] results1 = results3;
    hTableInterface.batch(actions, results1);
    assertEquals(MyObserver.tr2.getMin(), range2.getMin());
    assertEquals(MyObserver.tr2.getMax(), range2.getMax());
