Usage and code examples of the org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster() method


This article collects a number of Java code examples for org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(), showing how the method is used in practice. The examples are taken from selected open-source projects on platforms such as GitHub, Stack Overflow and Maven, and should serve as useful references. Details of HBaseTestingUtility.startMiniHBaseCluster() are as follows:
Package: org.apache.hadoop.hbase
Class: HBaseTestingUtility
Method: startMiniHBaseCluster

HBaseTestingUtility.startMiniHBaseCluster overview

Starts up a mini HBase cluster using default options. The default options can be found in StartMiniClusterOption.Builder.
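
To make the default startup path concrete, here is a minimal sketch of a JUnit 4 test class that starts the cluster with default options and shuts it down afterwards. It follows the ZK-first pattern used in the testKillMiniHBaseCluster example later in this article; the class and test names are hypothetical, and it assumes the hbase-testing-util and JUnit dependencies are on the classpath.

import static org.junit.Assert.assertNotNull;

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class StartMiniHBaseClusterDefaultsSketch {

 private final HBaseTestingUtility util = new HBaseTestingUtility();

 @Before
 public void setUp() throws Exception {
  // startMiniHBaseCluster() only starts the HBase daemons; a ZooKeeper
  // quorum must already be running, so bring up the mini ZK cluster first.
  util.startMiniZKCluster();
  // All options come from the StartMiniClusterOption.Builder defaults.
  util.startMiniHBaseCluster();
 }

 @After
 public void tearDown() throws Exception {
  util.shutdownMiniHBaseCluster();
  util.shutdownMiniZKCluster();
 }

 @Test
 public void clusterIsRunning() {
  // The utility exposes the running cluster via getHBaseCluster().
  assertNotNull(util.getHBaseCluster());
 }
}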

Code examples

Code example source: origin: apache/hbase

@Before
public void setUp() throws IOException, InterruptedException {
 TEST_UTIL.getConfiguration().set("hbase.wal.provider", walProvider);
 TEST_UTIL.startMiniHBaseCluster();
}

Code example source: origin: apache/hbase

/**
 * Starts up mini hbase cluster using default options.
 * Default options can be found in {@link StartMiniClusterOption.Builder}.
 * @see #startMiniHBaseCluster(StartMiniClusterOption)
 * @see #shutdownMiniHBaseCluster()
 */
public MiniHBaseCluster startMiniHBaseCluster() throws IOException, InterruptedException {
 return startMiniHBaseCluster(StartMiniClusterOption.builder().build());
}

Code example source: origin: apache/hbase

@Before
public void setup() throws Exception {
 testUtil = new HBaseTestingUtility();
 conf = testUtil.getConfiguration();
 conf.set(CoprocessorHost.REGIONSERVER_COPROCESSOR_CONF_KEY,
   StopBlockingRegionObserver.class.getName());
 conf.set(CoprocessorHost.REGION_COPROCESSOR_CONF_KEY,
   StopBlockingRegionObserver.class.getName());
 // make sure we have multiple blocks so that the client does not prefetch all block locations
 conf.set("dfs.blocksize", Long.toString(100 * 1024));
 // prefetch the first block
 conf.set(DFSConfigKeys.DFS_CLIENT_READ_PREFETCH_SIZE_KEY, Long.toString(100 * 1024));
 conf.set(HConstants.REGION_IMPL, ErrorThrowingHRegion.class.getName());
 testUtil.startMiniZKCluster();
 dfsCluster = testUtil.startMiniDFSCluster(2);
 StartMiniClusterOption option = StartMiniClusterOption.builder().numRegionServers(2).build();
 cluster = testUtil.startMiniHBaseCluster(option);
}

Code example source: origin: apache/hbase

/**
 * Starts up mini hbase cluster.
 * Usually you won't want this.  You'll usually want {@link #startMiniCluster()}.
 * All other options will use default values, defined in {@link StartMiniClusterOption.Builder}.
 * @param numMasters Master node number.
 * @param numRegionServers Number of region servers.
 * @return The mini HBase cluster created.
 * @see #shutdownMiniHBaseCluster()
 * @deprecated Use {@link #startMiniHBaseCluster(StartMiniClusterOption)} instead.
 */
@Deprecated
public MiniHBaseCluster startMiniHBaseCluster(int numMasters, int numRegionServers)
  throws IOException, InterruptedException {
 StartMiniClusterOption option = StartMiniClusterOption.builder()
   .numMasters(numMasters).numRegionServers(numRegionServers).build();
 return startMiniHBaseCluster(option);
}
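
Because this overload is deprecated, new code is expected to pass a StartMiniClusterOption instead. The following sketch shows the migration as before/after alternatives (not meant to be executed together); the counts 1 and 3 are arbitrary illustration values, and TEST_UTIL is assumed to be an HBaseTestingUtility field as in the other examples.

// Before (deprecated): 1 master, 3 region servers.
MiniHBaseCluster oldStyle = TEST_UTIL.startMiniHBaseCluster(1, 3);

// After: equivalent StartMiniClusterOption, mirroring what the deprecated
// overload builds internally.
StartMiniClusterOption option = StartMiniClusterOption.builder()
  .numMasters(1)
  .numRegionServers(3)
  .build();
MiniHBaseCluster newStyle = TEST_UTIL.startMiniHBaseCluster(option);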

Code example source: origin: apache/hbase

/**
 * Start up a mini cluster of hbase, optionally dfs and zookeeper if needed.
 * It modifies Configuration.  It homes the cluster data directory under a random
 * subdirectory in a directory under System property test.build.data, to be cleaned up on exit.
 * @see #shutdownMiniDFSCluster()
 */
public MiniHBaseCluster startMiniCluster(StartMiniClusterOption option) throws Exception {
 LOG.info("Starting up minicluster with option: {}", option);
 // If we already put up a cluster, fail.
 if (miniClusterRunning) {
  throw new IllegalStateException("A mini-cluster is already running");
 }
 miniClusterRunning = true;
 setupClusterTestDir();
 System.setProperty(TEST_DIRECTORY_KEY, this.clusterTestDir.getPath());
 // Bring up mini dfs cluster. This spews a bunch of warnings about missing
 // scheme. Complaints are 'Scheme is undefined for build/test/data/dfs/name1'.
 if (dfsCluster == null) {
  LOG.info("STARTING DFS");
  dfsCluster = startMiniDFSCluster(option.getNumDataNodes(), option.getDataNodeHosts());
 } else {
  LOG.info("NOT STARTING DFS");
 }
 // Start up a zk cluster.
 if (getZkCluster() == null) {
  startMiniZKCluster(option.getNumZkServers());
 }
 // Start the MiniHBaseCluster
 return startMiniHBaseCluster(option);
}
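
Unlike startMiniHBaseCluster, startMiniCluster also brings up DFS and ZooKeeper when they are not already running, so the multi-step setup shown in some examples (startMiniZKCluster, startMiniDFSCluster, then startMiniHBaseCluster) can often be condensed into a single call. A minimal sketch follows; the region-server count of 2 is just an example value, and TEST_UTIL is assumed to be an HBaseTestingUtility field.

StartMiniClusterOption option = StartMiniClusterOption.builder()
  .numRegionServers(2)
  .build();
// Starts DFS and ZooKeeper if needed, then the HBase cluster itself.
MiniHBaseCluster cluster = TEST_UTIL.startMiniCluster(option);
try {
 // ... run test logic against the cluster ...
} finally {
 // Tears down HBase along with the DFS/ZK clusters the utility started.
 TEST_UTIL.shutdownMiniCluster();
}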

Code example source: origin: apache/hbase

/**
 * Starts up mini hbase cluster.
 * Usually you won't want this.  You'll usually want {@link #startMiniCluster()}.
 * All other options will use default values, defined in {@link StartMiniClusterOption.Builder}.
 * @param numMasters Master node number.
 * @param numRegionServers Number of region servers.
 * @param rsPorts Ports that RegionServer should use.
 * @return The mini HBase cluster created.
 * @see #shutdownMiniHBaseCluster()
 * @deprecated Use {@link #startMiniHBaseCluster(StartMiniClusterOption)} instead.
 */
@Deprecated
public MiniHBaseCluster startMiniHBaseCluster(int numMasters, int numRegionServers,
  List<Integer> rsPorts) throws IOException, InterruptedException {
 StartMiniClusterOption option = StartMiniClusterOption.builder()
   .numMasters(numMasters).numRegionServers(numRegionServers).rsPorts(rsPorts).build();
 return startMiniHBaseCluster(option);
}

Code example source: origin: apache/hbase

private void startCluster(int numRS) throws Exception {
 SplitLogCounters.resetCounters();
 LOG.info("Starting cluster");
 conf.setLong("hbase.splitlog.max.resubmit", 0);
 // Make the failure test faster
 conf.setInt("zookeeper.recovery.retry", 0);
 conf.setInt(HConstants.REGIONSERVER_INFO_PORT, -1);
 conf.setFloat(HConstants.LOAD_BALANCER_SLOP_KEY, (float) 100.0); // no load balancing
 conf.setInt(HBASE_SPLIT_WAL_MAX_SPLITTER, 3);
 conf.setInt(HConstants.REGION_SERVER_HIGH_PRIORITY_HANDLER_COUNT, 10);
 conf.set("hbase.wal.provider", getWalProvider());
 StartMiniClusterOption option = StartMiniClusterOption.builder()
   .numMasters(NUM_MASTERS).numRegionServers(numRS).build();
 TEST_UTIL.startMiniHBaseCluster(option);
 cluster = TEST_UTIL.getHBaseCluster();
 LOG.info("Waiting for active/ready master");
 cluster.waitForActiveAndReadyMaster();
 master = cluster.getMaster();
 TEST_UTIL.waitFor(120000, 200, new Waiter.Predicate<Exception>() {
  @Override
  public boolean evaluate() throws Exception {
   return cluster.getLiveRegionServerThreads().size() >= numRS;
  }
 });
}

Code example source: origin: apache/drill

String old_home = System.getProperty("user.home");
System.setProperty("user.home", UTIL.getDataTestDir().toString());
UTIL.startMiniHBaseCluster(1, 1);
System.setProperty("user.home", old_home);
hbaseClusterCreated = true;

Code example source: origin: apache/hbase

@Before
public void setUp() throws Exception {
 StartMiniClusterOption option = StartMiniClusterOption.builder()
   .numMasters(2).numRegionServers(2).build();
 TEST_UTIL.startMiniHBaseCluster(option);
}

Code example source: origin: apache/hbase

@Test
public void testKillMiniHBaseCluster() throws Exception {

 HBaseTestingUtility htu = new HBaseTestingUtility();
 htu.startMiniZKCluster();

 try {
  htu.startMiniHBaseCluster();

  TableName tableName;
  byte[] FAM_NAME;

  for (int i = 0; i < NUMTABLES; i++) {
   tableName = TableName.valueOf(name.getMethodName() + i);
   FAM_NAME = Bytes.toBytes("fam" + i);

   try (Table table = htu.createMultiRegionTable(tableName, FAM_NAME, NUMREGIONS)) {
    htu.loadRandomRows(table, FAM_NAME, 100, NUMROWS);
   }
  }
 } finally {
  htu.killMiniHBaseCluster();
  htu.shutdownMiniZKCluster();
 }
}

Code example source: origin: apache/hbase

@Test
public void testRewritingClusterIdToPB() throws Exception {
 TEST_UTIL.startMiniZKCluster();
 TEST_UTIL.startMiniDFSCluster(1);
 TEST_UTIL.createRootDir();
 Path rootDir = FSUtils.getRootDir(TEST_UTIL.getConfiguration());
 FileSystem fs = rootDir.getFileSystem(TEST_UTIL.getConfiguration());
 Path filePath = new Path(rootDir, HConstants.CLUSTER_ID_FILE_NAME);
 FSDataOutputStream s = null;
 try {
  s = fs.create(filePath);
  s.writeUTF(TEST_UTIL.getRandomUUID().toString());
 } finally {
  if (s != null) {
   s.close();
  }
 }
 TEST_UTIL.startMiniHBaseCluster();
 HMaster master = TEST_UTIL.getHBaseCluster().getMaster();
 int expected = LoadBalancer.isTablesOnMaster(TEST_UTIL.getConfiguration())? 2: 1;
 assertEquals(expected, master.getServerManager().getOnlineServersList().size());
}

Code example source: origin: apache/hbase

TEST_UTIL.startMiniHBaseCluster();

Code example source: origin: apache/hbase

utility2.startMiniHBaseCluster(option);
Get get = new Get(rowkey);
for (int i = 0; i < NB_RETRIES; i++) {

Code example source: origin: apache/hbase

TEST_UTIL.startMiniHBaseCluster(StartMiniClusterOption.builder()
  .numRegionServers(0).createRootDir(false).build());

Code example source: origin: apache/hbase

/**
 * Starts up mini hbase cluster.
 * Usually you won't want this.  You'll usually want {@link #startMiniCluster()}.
 * All other options will use default values, defined in {@link StartMiniClusterOption.Builder}.
 * @param numMasters Master node number.
 * @param numRegionServers Number of region servers.
 * @param rsPorts Ports that RegionServer should use.
 * @param masterClass The class to use as HMaster, or null for default.
 * @param rsClass The class to use as HRegionServer, or null for default.
 * @param createRootDir Whether to create a new root or data directory path.
 * @param createWALDir Whether to create a new WAL directory.
 * @return The mini HBase cluster created.
 * @see #shutdownMiniHBaseCluster()
 * @deprecated Use {@link #startMiniHBaseCluster(StartMiniClusterOption)} instead.
 */
@Deprecated
public MiniHBaseCluster startMiniHBaseCluster(int numMasters, int numRegionServers,
  List<Integer> rsPorts, Class<? extends HMaster> masterClass,
  Class<? extends MiniHBaseCluster.MiniHBaseClusterRegionServer> rsClass,
  boolean createRootDir, boolean createWALDir) throws IOException, InterruptedException {
 StartMiniClusterOption option = StartMiniClusterOption.builder()
   .numMasters(numMasters).masterClass(masterClass)
   .numRegionServers(numRegionServers).rsClass(rsClass).rsPorts(rsPorts)
   .createRootDir(createRootDir).createWALDir(createWALDir).build();
 return startMiniHBaseCluster(option);
}

Code example source: origin: apache/hbase

StartMiniClusterOption option = StartMiniClusterOption.builder()
  .numRegionServers(NUM_SLAVES).build();
UTIL.startMiniHBaseCluster(option);

Code example source: origin: apache/hbase

@Test
public void testClusterId() throws Exception  {
 TEST_UTIL.startMiniZKCluster();
 TEST_UTIL.startMiniDFSCluster(1);
 Configuration conf = new Configuration(TEST_UTIL.getConfiguration());
 //start region server, needs to be separate
 //so we get an unset clusterId
 rst = JVMClusterUtil.createRegionServerThread(conf, HRegionServer.class, 0);
 rst.start();
 //Make sure RS is in blocking state
 Thread.sleep(10000);
 StartMiniClusterOption option = StartMiniClusterOption.builder()
   .numMasters(1).numRegionServers(0).build();
 TEST_UTIL.startMiniHBaseCluster(option);
 rst.waitForServerOnline();
 String clusterId = ZKClusterId.readClusterIdZNode(TEST_UTIL.getZooKeeperWatcher());
 assertNotNull(clusterId);
 assertEquals(clusterId, rst.getRegionServer().getClusterId());
}

Code example source: origin: apache/hbase

MiniHBaseCluster hbm = htu.startMiniHBaseCluster();
conf = hbm.getConfiguration();

Code example source: origin: apache/hbase

utility1.startMiniHBaseCluster();

Code example source: origin: apache/hbase

@Test
public void testRpcThrottleWhenStartup() throws IOException, InterruptedException {
 TEST_UTIL.getAdmin().switchRpcThrottle(false);
 assertFalse(TEST_UTIL.getAdmin().isRpcThrottleEnabled());
 TEST_UTIL.killMiniHBaseCluster();
 TEST_UTIL.startMiniHBaseCluster();
 assertFalse(TEST_UTIL.getAdmin().isRpcThrottleEnabled());
 for (JVMClusterUtil.RegionServerThread rs : TEST_UTIL.getHBaseCluster()
   .getRegionServerThreads()) {
  RegionServerRpcQuotaManager quotaManager =
    rs.getRegionServer().getRegionServerRpcQuotaManager();
  assertFalse(quotaManager.isRpcThrottleEnabled());
 }
 // enable rpc throttle
 TEST_UTIL.getAdmin().switchRpcThrottle(true);
 assertTrue(TEST_UTIL.getAdmin().isRpcThrottleEnabled());
}
