Usage of the org.apache.hadoop.hbase.HBaseTestingUtility.getDefaultRootDirPath() method, with code examples

x33g5p2x, reprinted 2022-01-20, category: Other

This article collects Java code examples for the org.apache.hadoop.hbase.HBaseTestingUtility.getDefaultRootDirPath() method and shows how it is used in practice. The examples come from selected open-source projects on platforms such as GitHub, Stack Overflow, and Maven, so they should be useful as references. Details of the method:
Package: org.apache.hadoop.hbase.HBaseTestingUtility
Class: HBaseTestingUtility
Method: getDefaultRootDirPath

About HBaseTestingUtility.getDefaultRootDirPath

Same as {@link HBaseTestingUtility#getDefaultRootDirPath(boolean create)} except that the create flag is false. Note: this does not cause the root dir to be created.

Code examples

Code example source: apache/hbase

/**
 * Same as {@link HBaseTestingUtility#getDefaultRootDirPath(boolean create)}
 * except that <code>create</code> flag is false.
 * Note: this does not cause the root dir to be created.
 * @return Fully qualified path for the default hbase root dir
 * @throws IOException
 */
public Path getDefaultRootDirPath() throws IOException {
 return getDefaultRootDirPath(false);
}

Code example source: apache/hbase

private void clearArchiveDirectory() throws IOException {
 UTIL.getTestFileSystem().delete(
  new Path(UTIL.getDefaultRootDirPath(), HConstants.HFILE_ARCHIVE_DIRECTORY), true);
}
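The helper above recursively deletes the HFile archive directory under the test root dir. The same recursive-delete pattern, equivalent to `FileSystem.delete(path, true)`, can be sketched with only the JDK (the class and method names below are illustrative, not part of HBase):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

public class ClearArchiveSketch {
  /** Recursively deletes dir and its contents, like FileSystem.delete(path, true). */
  static void deleteRecursively(Path dir) throws IOException {
    if (!Files.exists(dir)) {
      return; // nothing to clear
    }
    try (Stream<Path> walk = Files.walk(dir)) {
      // Reverse lexicographic order deletes children before their parents.
      walk.sorted(Comparator.reverseOrder()).forEach(p -> {
        try {
          Files.delete(p);
        } catch (IOException e) {
          throw new UncheckedIOException(e);
        }
      });
    }
  }
}
```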

Code example source: apache/hbase

private static void setupConf(Configuration conf) throws IOException {
  // disable the ui
  conf.setInt("hbase.regionserver.info.port", -1);
  // change the flush size to a small amount, regulating number of store files
  conf.setInt("hbase.hregion.memstore.flush.size", 25000);
  // so make sure we get a compaction when doing a load, but keep around some
  // files in the store
  conf.setInt("hbase.hstore.compaction.min", 10);
  conf.setInt("hbase.hstore.compactionThreshold", 10);
  // block writes if we get to 12 store files
  conf.setInt("hbase.hstore.blockingStoreFiles", 12);
  // Enable snapshot
  conf.setBoolean(SnapshotManager.HBASE_SNAPSHOT_ENABLED, true);
  conf.set(HConstants.HBASE_REGION_SPLIT_POLICY_KEY,
    ConstantSizeRegionSplitPolicy.class.getName());

  conf.set(SnapshotDescriptionUtils.SNAPSHOT_WORKING_DIR, UTIL.getDefaultRootDirPath().toString()
    + Path.SEPARATOR + UUID.randomUUID().toString() + Path.SEPARATOR + ".tmpdir"
    + Path.SEPARATOR);
 }
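The snapshot working directory configured above is simply the default root dir plus a random UUID segment and a `.tmpdir` leaf. A minimal JDK-only sketch of that concatenation (assuming Hadoop's `Path.SEPARATOR`, which is always `/`; the class name is illustrative):

```java
import java.util.UUID;

public class SnapshotWorkingDir {
  /** Builds "<rootDir>/<random-uuid>/.tmpdir/", mirroring the concatenation above. */
  static String workingDir(String rootDir) {
    String sep = "/"; // Hadoop's Path.SEPARATOR
    return rootDir + sep + UUID.randomUUID() + sep + ".tmpdir" + sep;
  }
}
```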

Code example source: apache/hbase

@Test(expected = FileNotFoundException.class)
public void testLinkReadWithMissingFile() throws Exception {
 HBaseTestingUtility testUtil = new HBaseTestingUtility();
 FileSystem fs = new MyDistributedFileSystem();
 Path originalPath = new Path(testUtil.getDefaultRootDirPath(), "test.file");
 Path archivedPath = new Path(testUtil.getDefaultRootDirPath(), "archived.file");
 List<Path> files = new ArrayList<Path>();
 files.add(originalPath);
 files.add(archivedPath);
 FileLink link = new FileLink(files);
 link.open(fs);
}
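A `FileLink` wraps several candidate locations (here the original and the archived path) and resolves to whichever one exists; the test expects `FileNotFoundException` because neither file was ever created. That fallback idea can be sketched with the JDK alone (hypothetical names, not the HBase implementation):

```java
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class FileLinkSketch {
  /** Opens the first candidate path that exists; throws if none do. */
  static InputStream open(List<Path> candidates) throws IOException {
    for (Path p : candidates) {
      if (Files.exists(p)) {
        return Files.newInputStream(p);
      }
    }
    throw new FileNotFoundException("no candidate exists: " + candidates);
  }
}
```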

Code example source: apache/hbase

/**
 * Check that ExportSnapshot will succeed if something fails but the retry succeed.
 */
@Test
public void testExportRetry() throws Exception {
 Path copyDir = getLocalDestinationDir();
 FileSystem fs = FileSystem.get(copyDir.toUri(), new Configuration());
 copyDir = copyDir.makeQualified(fs);
 Configuration conf = new Configuration(TEST_UTIL.getConfiguration());
 conf.setBoolean(ExportSnapshot.Testing.CONF_TEST_FAILURE, true);
 conf.setInt(ExportSnapshot.Testing.CONF_TEST_FAILURE_COUNT, 2);
 conf.setInt("mapreduce.map.maxattempts", 3);
 testExportFileSystemState(conf, tableName, snapshotName, snapshotName, tableNumFiles,
   TEST_UTIL.getDefaultRootDirPath(), copyDir, true, getBypassRegionPredicate(), true);
}

Code example source: apache/hbase

/**
 * Check that ExportSnapshot will fail if we inject failure more times than MR will retry.
 */
@Test
public void testExportFailure() throws Exception {
 Path copyDir = getLocalDestinationDir();
 FileSystem fs = FileSystem.get(copyDir.toUri(), new Configuration());
 copyDir = copyDir.makeQualified(fs);
 Configuration conf = new Configuration(TEST_UTIL.getConfiguration());
 conf.setBoolean(ExportSnapshot.Testing.CONF_TEST_FAILURE, true);
 conf.setInt(ExportSnapshot.Testing.CONF_TEST_FAILURE_COUNT, 4);
 conf.setInt("mapreduce.map.maxattempts", 3);
 testExportFileSystemState(conf, tableName, snapshotName, snapshotName, tableNumFiles,
   TEST_UTIL.getDefaultRootDirPath(), copyDir, true, getBypassRegionPredicate(), false);
}
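The two tests above hinge on the relationship between the injected failure count and `mapreduce.map.maxattempts`: with 2 injected failures and 3 attempts the job eventually succeeds, while 4 injected failures exhaust all 3 attempts. A toy model of that retry arithmetic (illustrative only, not MapReduce code):

```java
public class RetryModel {
  /**
   * Models MR task retry: the first injectedFailures attempts fail,
   * and the task gets at most maxAttempts tries overall.
   * Returns true if some attempt succeeds.
   */
  static boolean succeedsWithRetry(int injectedFailures, int maxAttempts) {
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      if (attempt > injectedFailures) {
        return true; // this attempt is past the injected failures
      }
    }
    return false; // every allowed attempt hit an injected failure
  }
}
```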

Code example source: apache/hbase

/**
 * Creates an hbase rootdir in user home directory.  Also creates hbase
 * version file.  Normally you won't make use of this method.  Root hbasedir
 * is created for you as part of mini cluster startup.  You'd only use this
 * method if you were doing manual operation.
 * @param create This flag decides whether to get a new
 * root or data directory path or not, if it has been fetched already.
 * Note : Directory will be made irrespective of whether path has been fetched or not.
 * If directory already exists, it will be overwritten
 * @return Fully qualified path to hbase root dir
 * @throws IOException
 */
public Path createRootDir(boolean create) throws IOException {
 FileSystem fs = FileSystem.get(this.conf);
 Path hbaseRootdir = getDefaultRootDirPath(create);
 FSUtils.setRootDir(this.conf, hbaseRootdir);
 fs.mkdirs(hbaseRootdir);
 FSUtils.setVersion(fs, hbaseRootdir);
 return hbaseRootdir;
}
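createRootDir() boils down to mkdirs plus writing an `hbase.version` marker file via FSUtils.setVersion. A JDK-only sketch of the same shape (the file name mirrors HBase's version file, but the writing logic here is simplified and illustrative):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class RootDirSketch {
  /** Creates rootDir and writes a version marker file, loosely mirroring
   *  createRootDir(): mkdirs followed by FSUtils.setVersion. */
  static Path createRootDir(Path rootDir, String version) throws IOException {
    Files.createDirectories(rootDir); // fs.mkdirs(hbaseRootdir)
    Files.write(rootDir.resolve("hbase.version"), version.getBytes());
    return rootDir;
  }
}
```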

Code example source: apache/hbase

@Test
public void testNoHFileLinkInRootDir() throws IOException {
 rootDir = TEST_UTIL.getDefaultRootDirPath();
 FSUtils.setRootDir(conf, rootDir);
 fs = rootDir.getFileSystem(conf);
 TableName tableName = TableName.valueOf("testNoHFileLinkInRootDir");
 String snapshotName = tableName.getNameAsString() + "-snapshot";
 createTableAndSnapshot(tableName, snapshotName);
 Path restoreDir = new Path("/hbase/.tmp-restore");
 RestoreSnapshotHelper.copySnapshotForScanner(conf, fs, rootDir, restoreDir, snapshotName);
 checkNoHFileLinkInTableDir(tableName);
}

Code example source: apache/hbase

protected void testExportFileSystemState(final TableName tableName,
  final byte[] snapshotName, final byte[] targetName, int filesExpected,
  Path copyDir, boolean overwrite) throws Exception {
 testExportFileSystemState(TEST_UTIL.getConfiguration(), tableName, snapshotName, targetName,
  filesExpected, TEST_UTIL.getDefaultRootDirPath(), copyDir,
  overwrite, getBypassRegionPredicate(), true);
}

Code example source: apache/hbase

@Before
public void setUp() throws Exception {
 Configuration c = TEST_UTIL.getConfiguration();
 c.setBoolean("dfs.support.append", true);
 TEST_UTIL.startMiniCluster(1);
 table = TEST_UTIL.createMultiRegionTable(TABLE_NAME, FAMILY);
 TEST_UTIL.loadTable(table, FAMILY);
 // setup the hdfssnapshots
 client = new DFSClient(TEST_UTIL.getDFSCluster().getURI(), TEST_UTIL.getConfiguration());
 String fullUrIPath = TEST_UTIL.getDefaultRootDirPath().toString();
 String uriString = TEST_UTIL.getTestFileSystem().getUri().toString();
 baseDir = StringUtils.removeStart(fullUrIPath, uriString);
 client.allowSnapshot(baseDir);
}
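The setup above turns a fully qualified path such as `hdfs://localhost:9000/user/test/hbase` into a filesystem-relative one by stripping the filesystem URI prefix with StringUtils.removeStart. The same operation with only the JDK (an illustrative helper, not the Commons Lang API):

```java
public class SnapshotPathSketch {
  /** Strips a leading prefix if present, like Commons Lang's StringUtils.removeStart. */
  static String removeStart(String s, String prefix) {
    return s.startsWith(prefix) ? s.substring(prefix.length()) : s;
  }
}
```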

Code example source: apache/hbase

@BeforeClass
public static void startCluster() throws Exception {
 UTIL.startMiniDFSCluster(1);
 fs = UTIL.getDFSCluster().getFileSystem();
 rootDir = UTIL.getDefaultRootDirPath();
}

Code example source: apache/hbase

@Test
public void testSnapshottingWithTmpSplitsAndMergeDirectoriesPresent() throws Exception {
 // lets get a region and create those directories and make sure we ignore them
 RegionInfo firstRegion = TEST_UTIL.getConnection().getRegionLocator(
   table.getName()).getAllRegionLocations().stream().findFirst().get().getRegion();
 String encodedName = firstRegion.getEncodedName();
 Path tableDir = FSUtils.getTableDir(TEST_UTIL.getDefaultRootDirPath(), TABLE_NAME);
 Path regionDirectoryPath = new Path(tableDir, encodedName);
 TEST_UTIL.getTestFileSystem().create(
   new Path(regionDirectoryPath, HRegionFileSystem.REGION_TEMP_DIR));
 TEST_UTIL.getTestFileSystem().create(
   new Path(regionDirectoryPath, HRegionFileSystem.REGION_SPLITS_DIR));
 TEST_UTIL.getTestFileSystem().create(
   new Path(regionDirectoryPath, HRegionFileSystem.REGION_MERGES_DIR));
 // now snapshot
 String snapshotDir = client.createSnapshot(baseDir, "foo_snapshot");
 // everything should still open just fine
 HRegion snapshottedRegion = openSnapshotRegion(firstRegion,
   FSUtils.getTableDir(new Path(snapshotDir), TABLE_NAME));
 Assert.assertNotNull(snapshottedRegion); // no errors and the region should open
 snapshottedRegion.close();
}

Code example source: apache/hbase

/**
 * Test, on HDFS, that the FileLink is still readable
 * even when the current file gets renamed.
 */
@Test
public void testHDFSLinkReadDuringRename() throws Exception {
 HBaseTestingUtility testUtil = new HBaseTestingUtility();
 Configuration conf = testUtil.getConfiguration();
 conf.setInt("dfs.blocksize", 1024 * 1024);
 conf.setInt("dfs.client.read.prefetch.size", 2 * 1024 * 1024);
 testUtil.startMiniDFSCluster(1);
 MiniDFSCluster cluster = testUtil.getDFSCluster();
 FileSystem fs = cluster.getFileSystem();
 assertEquals("hdfs", fs.getUri().getScheme());
 try {
  testLinkReadDuringRename(fs, testUtil.getDefaultRootDirPath());
 } finally {
  testUtil.shutdownMiniCluster();
 }
}

Code example source: apache/hbase

private HRegionFileSystem getHRegionFS(HTable table, Configuration conf) throws IOException {
 FileSystem fs = TEST_UTIL.getDFSCluster().getFileSystem();
 Path tableDir = FSUtils.getTableDir(TEST_UTIL.getDefaultRootDirPath(), table.getName());
 List<Path> regionDirs = FSUtils.getRegionDirs(fs, tableDir);
 assertEquals(1, regionDirs.size());
 List<Path> familyDirs = FSUtils.getFamilyDirs(fs, regionDirs.get(0));
 assertEquals(2, familyDirs.size());
 RegionInfo hri = table.getRegionLocator().getAllRegionLocations().get(0).getRegionInfo();
 HRegionFileSystem regionFs = new HRegionFileSystem(conf, new HFileSystem(fs), tableDir, hri);
 return regionFs;
}

Code example source: apache/hbase

Path tableDir = FSUtils.getTableDir(TEST_UTIL.getDefaultRootDirPath(), TABLENAME);
assertTrue(fs.exists(tableDir));

Code example source: apache/hbase

Path oldWALsDir = new Path(TEST_UTIL.getDefaultRootDirPath(),
  HConstants.HREGION_OLDLOGDIR_NAME);
FileSystem fs = TEST_UTIL.getDFSCluster().getFileSystem();

Code example source: apache/hbase

private HRegion initHRegion(TableDescriptor htd, byte[] startKey, byte[] stopKey, int replicaId)
  throws IOException {
 Configuration conf = TEST_UTIL.getConfiguration();
 conf.set("hbase.wal.provider", walProvider);
 conf.setBoolean("hbase.hregion.mvcc.preassign", false);
 Path tableDir = FSUtils.getTableDir(testDir, htd.getTableName());
 RegionInfo info = RegionInfoBuilder.newBuilder(htd.getTableName()).setStartKey(startKey)
   .setEndKey(stopKey).setReplicaId(replicaId).setRegionId(0).build();
 fileSystem = tableDir.getFileSystem(conf);
 final Configuration walConf = new Configuration(conf);
 FSUtils.setRootDir(walConf, tableDir);
 this.walConf = walConf;
 wals = new WALFactory(walConf, "log_" + replicaId);
 ChunkCreator.initialize(MemStoreLABImpl.CHUNK_SIZE_DEFAULT, false, 0, 0, 0, null);
 HRegion region = HRegion.createHRegion(info, TEST_UTIL.getDefaultRootDirPath(), conf, htd,
  wals.getWAL(info));
 return region;
}

Code example source: apache/hbase

@Test
public void testCreateSplitWALProcedures() throws Exception {
 TEST_UTIL.createTable(TABLE_NAME, FAMILY, TEST_UTIL.KEYS_FOR_HBA_CREATE_TABLE);
 // load table
 TEST_UTIL.loadTable(TEST_UTIL.getConnection().getTable(TABLE_NAME), FAMILY);
 ProcedureExecutor<MasterProcedureEnv> masterPE = master.getMasterProcedureExecutor();
 ServerName metaServer = TEST_UTIL.getHBaseCluster().getServerHoldingMeta();
 Path metaWALDir = new Path(TEST_UTIL.getDefaultRootDirPath(),
   AbstractFSWALProvider.getWALDirectoryName(metaServer.toString()));
 // Test splitting meta wal
 FileStatus[] wals =
   TEST_UTIL.getTestFileSystem().listStatus(metaWALDir, MasterWalManager.META_FILTER);
 Assert.assertEquals(1, wals.length);
 List<Procedure> testProcedures =
   splitWALManager.createSplitWALProcedures(Lists.newArrayList(wals[0]), metaServer);
 Assert.assertEquals(1, testProcedures.size());
 ProcedureTestingUtility.submitAndWait(masterPE, testProcedures.get(0));
 Assert.assertFalse(TEST_UTIL.getTestFileSystem().exists(wals[0].getPath()));
 // Test splitting wal
 wals = TEST_UTIL.getTestFileSystem().listStatus(metaWALDir, MasterWalManager.NON_META_FILTER);
 Assert.assertEquals(1, wals.length);
 testProcedures =
   splitWALManager.createSplitWALProcedures(Lists.newArrayList(wals[0]), metaServer);
 Assert.assertEquals(1, testProcedures.size());
 ProcedureTestingUtility.submitAndWait(masterPE, testProcedures.get(0));
 Assert.assertFalse(TEST_UTIL.getTestFileSystem().exists(wals[0].getPath()));
}

Code example source: apache/hbase

when(sink.getSkippedEditsCounter()).thenReturn(skippedEdits);
FSTableDescriptors fstd = new FSTableDescriptors(HTU.getConfiguration(),
  FileSystem.get(HTU.getConfiguration()), HTU.getDefaultRootDirPath());
RegionReplicaReplicationEndpoint.RegionReplicaSinkWriter sinkWriter =
  new RegionReplicaReplicationEndpoint.RegionReplicaSinkWriter(sink,
