This article collects a number of Java code examples for the org.apache.hadoop.hbase.HBaseTestingUtility.<init>() method and shows how HBaseTestingUtility.<init>() is used in practice. The examples are drawn mainly from platforms such as GitHub, Stack Overflow, and Maven, extracted from a selection of curated projects, so they should serve as useful references. Details of the HBaseTestingUtility.<init>() method are as follows:
Package path: org.apache.hadoop.hbase.HBaseTestingUtility
Class name: HBaseTestingUtility
Method name: <init>
Create an HBaseTestingUtility using a default configuration.
Initially, all tmp files are written to a local test data directory. Once #startMiniDFSCluster is called, either directly or via #startMiniCluster(), tmp data will be written to the DFS directory instead.
Previously, there was a distinction between the type of utility returned by #createLocalHTU() and this constructor; this is no longer the case. All HBaseTestingUtility objects will behave as local until a DFS cluster is started, at which point they will switch to using mini DFS for storage.
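Before the project snippets, here is a minimal self-contained sketch of how the default constructor is typically paired with startMiniCluster()/shutdownMiniCluster() in a JUnit 4 test. The class name and test method are hypothetical and not taken from the examples below; it assumes JUnit 4 and the HBase test artifacts are on the classpath.

import static org.junit.Assert.assertEquals;

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.MiniHBaseCluster;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

// Hypothetical class name, not taken from the examples below.
public class HBaseTestingUtilityConstructorExample {

  private static HBaseTestingUtility UTIL;
  private static MiniHBaseCluster CLUSTER;

  @BeforeClass
  public static void setUpBeforeClass() throws Exception {
    // Default constructor: default configuration; tmp files stay in a local
    // test data directory until a mini DFS cluster is started.
    UTIL = new HBaseTestingUtility();
    // Settings can still be tweaked before the cluster starts:
    // UTIL.getConfiguration().set("...", "...");
    // (The overload new HBaseTestingUtility(conf) takes a pre-built Configuration.)
    CLUSTER = UTIL.startMiniCluster(1); // from here on, tmp data goes to the mini DFS
  }

  @AfterClass
  public static void tearDownAfterClass() throws Exception {
    UTIL.shutdownMiniCluster();
  }

  @Test
  public void miniClusterIsRunning() {
    assertEquals(1, CLUSTER.getLiveRegionServerThreads().size());
  }
}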
Code example source: origin: apache/hbase
@BeforeClass
public static void setupCluster() throws Exception {
  util = new HBaseTestingUtility();
  util.getConfiguration().set(CoprocessorHost.REGION_COPROCESSOR_CONF_KEY, "");
  util.startMiniCluster(1);
}
Code example source: origin: apache/hbase
@BeforeClass
public static void setUpBeforeClass() throws Exception {
  // Stack up three coprocessors just so I can check bypass skips subsequent calls.
  Configuration conf = HBaseConfiguration.create();
  conf.setStrings(CoprocessorHost.USER_REGION_COPROCESSOR_CONF_KEY,
      new String [] {TestCoprocessor.class.getName(),
          TestCoprocessor2.class.getName(),
          TestCoprocessor3.class.getName()});
  util = new HBaseTestingUtility(conf);
  util.startMiniCluster();
}
Code example source: origin: apache/hbase
@Before
public void setUp() throws Exception {
  UTIL = new HBaseTestingUtility();
  conf = UTIL.getConfiguration();
}
Code example source: origin: apache/hbase
@BeforeClass
public static void setupBeforeClass() throws Exception {
  TEST_UTIL = new HBaseTestingUtility();
  CONF = TEST_UTIL.getConfiguration();
  CONF.setStrings(CoprocessorHost.MASTER_COPROCESSOR_CONF_KEY,
      DummyCoprocessorService.class.getName());
  CONF.setStrings(CoprocessorHost.REGIONSERVER_COPROCESSOR_CONF_KEY,
      DummyCoprocessorService.class.getName());
  CONF.setStrings(CoprocessorHost.REGION_COPROCESSOR_CONF_KEY,
      DummyCoprocessorService.class.getName());
  TEST_UTIL.startMiniCluster();
}
Code example source: origin: apache/hbase
@Before
public void setUp() {
  hbtu = new HBaseTestingUtility();
  tableName = TableName.valueOf("Table-" + testName.getMethodName());
  hbtu.getConfiguration().set(
      FlushThroughputControllerFactory.HBASE_FLUSH_THROUGHPUT_CONTROLLER_KEY,
      PressureAwareFlushThroughputController.class.getName());
}
Code example source: origin: apache/hbase
@BeforeClass
public static void setUpBeforeClass() throws Exception {
  Configuration conf = HBaseConfiguration.create();
  conf.set(CoprocessorHost.WAL_COPROCESSOR_CONF_KEY, TestWALObserver.class.getName());
  util = new HBaseTestingUtility(conf);
  util.startMiniCluster();
}
Code example source: origin: apache/hbase
@BeforeClass
public static void setupBeforeClass() throws Exception {
  TEST_UTIL = new HBaseTestingUtility();
  CONF = TEST_UTIL.getConfiguration();
  CONF.setStrings(CoprocessorHost.REGIONSERVER_COPROCESSOR_CONF_KEY,
      DummyRegionServerEndpoint.class.getName());
  TEST_UTIL.startMiniCluster();
}
Code example source: origin: apache/hbase
@Before
public void setUp() throws Exception {
  htu = new HBaseTestingUtility();
  htu.getConfiguration().setInt("dfs.blocksize", 1024); // For the test with multiple blocks
  htu.getConfiguration().setInt("dfs.replication", 3);
  htu.startMiniDFSCluster(3,
      new String[]{"/r1", "/r2", "/r3"}, new String[]{host1, host2, host3});
  conf = htu.getConfiguration();
  cluster = htu.getDFSCluster();
  dfs = (DistributedFileSystem) FileSystem.get(conf);
}
Code example source: origin: apache/hbase
@Before
public void setUp() throws Exception {
  testingUtility = new HBaseTestingUtility();
  testingUtility.startMiniCluster();
  LogManager.getRootLogger().addAppender(mockAppender);
}
Code example source: origin: apache/hbase
@Before public void setUp() throws Exception {
  utility = new HBaseTestingUtility();
  utility.getConfiguration().setInt("hbase.hfile.compaction.discharger.interval", 10);
  utility.startMiniCluster();
}
Code example source: origin: apache/hbase
@Test public void testMiniCluster() throws Exception {
  HBaseTestingUtility hbt = new HBaseTestingUtility();
  MiniHBaseCluster cluster = hbt.startMiniCluster();
  try {
    assertEquals(1, cluster.getLiveRegionServerThreads().size());
  } finally {
    hbt.shutdownMiniCluster();
  }
}
Code example source: origin: apache/hbase
@BeforeClass
public static void setUpBeforeClass() throws Exception {
  util = new HBaseTestingUtility();
  util.getConfiguration().setBoolean(CoprocessorHost.ABORT_ON_ERROR_KEY, false);
  util.startMiniCluster();
}
Code example source: origin: apache/hbase
@BeforeClass
public static void setUp() throws Exception {
  TEST_UTIL = new HBaseTestingUtility();
  TEST_UTIL.startMiniCluster(NUM_SLAVES_BASE);
  admin = TEST_UTIL.getAdmin();
  cluster = TEST_UTIL.getHBaseCluster();
  master = ((MiniHBaseCluster)cluster).getMaster();
  LOG.info("Done initializing cluster");
}
Code example source: origin: apache/hbase
@BeforeClass
public static void setupBeforeClass() throws Exception {
  TEST_UTIL = new HBaseTestingUtility();
  TEST_UTIL.getConfiguration().set(
      RpcServerFactory.CUSTOM_RPC_SERVER_IMPL_CONF_KEY,
      NettyRpcServer.class.getName());
  TEST_UTIL.startMiniCluster();
}
Code example source: origin: apache/hbase
@Before
public void setUp() throws Exception {
  TEST_UTIL = new HBaseTestingUtility();
  conf = TEST_UTIL.getConfiguration();
  testDir = TEST_UTIL.getDataTestDir("TestBlocksScanned");
}
Code example source: origin: apache/hbase
@SuppressWarnings("resource")
private void startMiniClusters(int numClusters) throws Exception {
  Random random = new Random();
  utilities = new HBaseTestingUtility[numClusters];
  configurations = new Configuration[numClusters];
  for (int i = 0; i < numClusters; i++) {
    Configuration conf = new Configuration(baseConfiguration);
    conf.set(HConstants.ZOOKEEPER_ZNODE_PARENT, "/" + i + random.nextInt());
    HBaseTestingUtility utility = new HBaseTestingUtility(conf);
    if (i == 0) {
      utility.startMiniZKCluster();
      miniZK = utility.getZkCluster();
    } else {
      utility.setZkCluster(miniZK);
    }
    utility.startMiniCluster();
    utilities[i] = utility;
    configurations[i] = conf;
    new ZKWatcher(conf, "cluster" + i, null, true);
  }
}
Code example source: origin: apache/hbase
@BeforeClass
public static void beforeClass() throws Exception {
  util = new HBaseTestingUtility();
  util.getConfiguration().setInt(CompactionConfiguration.HBASE_HSTORE_COMPACTION_MIN_KEY, 100);
  util.getConfiguration().set("dfs.blocksize", "64000");
  util.getConfiguration().set("dfs.namenode.fs-limits.min-block-size", "1024");
  util.getConfiguration().set(TimeToLiveHFileCleaner.TTL_CONF_KEY, "0");
  util.startMiniCluster(2);
}
Code example source: origin: apache/hbase
@Before
public void setUp() throws IOException {
  htu = new HBaseTestingUtility();
  fs = htu.getTestFileSystem();
  conf = htu.getConfiguration();
}
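Each of the setup snippets above has a matching teardown in its test class that releases the mini cluster resources; those teardowns are not reproduced here, but a typical counterpart looks roughly like the following sketch. shutdownMiniCluster() appears in the testMiniCluster example above, while shutdownMiniDFSCluster() and cleanupTestDir() are assumed method names for the DFS-only and local-only setups; the field names mirror the examples.

@AfterClass
public static void tearDownAfterClass() throws Exception {
  // Counterpart to startMiniCluster(): stops the mini HBase cluster and its backing services.
  TEST_UTIL.shutdownMiniCluster();
}

@After
public void tearDown() throws Exception {
  // Counterpart to startMiniDFSCluster(...) when only a mini DFS was started
  // (method names assumed here, not shown in the snippets above).
  htu.shutdownMiniDFSCluster();
  htu.cleanupTestDir();
}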