Usage and code examples of the org.apache.hadoop.hdfs.server.common.Util.fileAsURI() method


This article collects Java code examples of the org.apache.hadoop.hdfs.server.common.Util.fileAsURI() method and shows how Util.fileAsURI() is used in practice. The examples are taken from selected open-source projects published on platforms such as GitHub, Stack Overflow, and Maven, and should serve as useful references. Details of the Util.fileAsURI() method:
Package path: org.apache.hadoop.hdfs.server.common.Util
Class name: Util
Method name: fileAsURI

Introduction to Util.fileAsURI

Converts the passed File to a URI. This method trims the trailing slash, if one is appended, because the underlying file is in fact a directory that exists.
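
Before the project snippets, here is a minimal, self-contained sketch of how the method is typically called, assuming the hadoop-hdfs artifact is on the classpath; the local path /tmp/hadoop/name and the demo class name are placeholders, not taken from any of the projects below.

    import java.io.File;
    import java.io.IOException;
    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.DFSConfigKeys;
    import org.apache.hadoop.hdfs.server.common.Util;

    public class FileAsURIDemo {
      public static void main(String[] args) throws IOException {
        // Convert a local directory to a file:// URI; if the directory exists,
        // fileAsURI() trims the trailing slash that would otherwise be appended.
        File nameDir = new File("/tmp/hadoop/name");   // placeholder path
        URI nameDirUri = Util.fileAsURI(nameDir);
        System.out.println(nameDirUri);                // e.g. file:/tmp/hadoop/name

        // Typical pattern in the examples below: store the URI string in a
        // NameNode storage-directory configuration key.
        Configuration conf = new Configuration();
        conf.set(DFSConfigKeys.DFS_NAMENODE_NAME_DIR_KEY, nameDirUri.toString());
      }
    }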

Code examples

Code example source: org.apache.hadoop/hadoop-hdfs

    /**
     * Interprets the passed string as a URI. In case of error it
     * assumes the specified string is a file.
     *
     * @param s the string to interpret
     * @return the resulting URI
     */
    static URI stringAsURI(String s) throws IOException {
      URI u = null;
      // try to make a URI
      try {
        u = new URI(s);
      } catch (URISyntaxException e) {
        LOG.error("Syntax error in URI " + s
            + ". Please check hdfs configuration.", e);
      }

      // if URI is null or scheme is undefined, then assume it's file://
      if (u == null || u.getScheme() == null) {
        LOG.info("Assuming 'file' scheme for path " + s + " in configuration.");
        u = fileAsURI(new File(s));
      }
      return u;
    }

Code example source: org.apache.hadoop/hadoop-hdfs

    /**
     * Return the storage directory corresponding to the passed URI.
     * @param uri URI of a storage directory
     * @return The matching storage directory or null if none found
     */
    public StorageDirectory getStorageDirectory(URI uri) {
      try {
        uri = Util.fileAsURI(new File(uri));
        Iterator<StorageDirectory> it = dirIterator();
        while (it.hasNext()) {
          StorageDirectory sd = it.next();
          if (Util.fileAsURI(sd.getRoot()).equals(uri)) {
            return sd;
          }
        }
      } catch (IOException ioe) {
        LOG.warn("Error converting file to URI", ioe);
      }
      return null;
    }

Code example source: org.apache.hadoop/hadoop-hdfs

    /**
     * Return the list of locations being used for a specific purpose.
     * i.e. Image or edit log storage.
     *
     * @param dirType Purpose of locations requested.
     * @throws IOException
     */
    Collection<URI> getDirectories(NameNodeDirType dirType)
        throws IOException {
      ArrayList<URI> list = new ArrayList<>();
      Iterator<StorageDirectory> it = (dirType == null) ? dirIterator() :
          dirIterator(dirType);
      for ( ; it.hasNext(); ) {
        StorageDirectory sd = it.next();
        try {
          list.add(Util.fileAsURI(sd.getRoot()));
        } catch (IOException e) {
          throw new IOException("Exception while processing " +
              "StorageDirectory " + sd.getRoot(), e);
        }
      }
      return list;
    }

Code example source: org.apache.hadoop/hadoop-hdfs-test

    private Configuration getConf() throws IOException {
      String baseDir = MiniDFSCluster.getBaseDirectory();
      String nameDirs = fileAsURI(new File(baseDir, "name1")) + "," +
          fileAsURI(new File(baseDir, "name2"));
      Configuration conf = new HdfsConfiguration();
      FileSystem.setDefaultUri(conf, "hdfs://localhost:0");
      conf.set(DFSConfigKeys.DFS_NAMENODE_HTTP_ADDRESS_KEY, "0.0.0.0:0");
      conf.set(DFSConfigKeys.DFS_NAMENODE_NAME_DIR_KEY, nameDirs);
      conf.set(DFSConfigKeys.DFS_NAMENODE_EDITS_DIR_KEY, nameDirs);
      conf.set(DFSConfigKeys.DFS_NAMENODE_SECONDARY_HTTP_ADDRESS_KEY, "0.0.0.0:0");
      conf.setBoolean(DFSConfigKeys.DFS_PERMISSIONS_ENABLED_KEY, false);
      return conf;
    }

Code example source: org.apache.hadoop/hadoop-hdfs-test

    /**
     * secnn-6
     * checkpoint for edits and image is the same directory
     * @throws IOException
     */
    public void testChkpointStartup2() throws IOException {
      LOG.info("--starting checkpointStartup2 - same directory for checkpoint");
      // different name dirs
      config.set(DFSConfigKeys.DFS_NAMENODE_NAME_DIR_KEY,
          fileAsURI(new File(hdfsDir, "name")).toString());
      config.set(DFSConfigKeys.DFS_NAMENODE_EDITS_DIR_KEY,
          fileAsURI(new File(hdfsDir, "edits")).toString());
      // same checkpoint dirs
      config.set(DFSConfigKeys.DFS_NAMENODE_CHECKPOINT_EDITS_DIR_KEY,
          fileAsURI(new File(hdfsDir, "chkpt")).toString());
      config.set(DFSConfigKeys.DFS_NAMENODE_CHECKPOINT_DIR_KEY,
          fileAsURI(new File(hdfsDir, "chkpt")).toString());
      createCheckPoint();
      corruptNameNodeFiles();
      checkNameNodeFiles();
    }

Code example source: ch.cern.hadoop/hadoop-hdfs

    /**
     * Return the storage directory corresponding to the passed URI
     * @param uri URI of a storage directory
     * @return The matching storage directory or null if none found
     */
    StorageDirectory getStorageDirectory(URI uri) {
      try {
        uri = Util.fileAsURI(new File(uri));
        Iterator<StorageDirectory> it = dirIterator();
        for (; it.hasNext(); ) {
          StorageDirectory sd = it.next();
          if (Util.fileAsURI(sd.getRoot()).equals(uri)) {
            return sd;
          }
        }
      } catch (IOException ioe) {
        LOG.warn("Error converting file to URI", ioe);
      }
      return null;
    }

Code example source: org.apache.hadoop/hadoop-hdfs-test

    /**
     * seccn-8
     * checkpoint for edits and image are different directories
     * @throws IOException
     */
    public void testChkpointStartup1() throws IOException {
      //setUpConfig();
      LOG.info("--starting testStartup Recovery");
      // different name dirs
      config.set(DFSConfigKeys.DFS_NAMENODE_NAME_DIR_KEY,
          fileAsURI(new File(hdfsDir, "name")).toString());
      config.set(DFSConfigKeys.DFS_NAMENODE_EDITS_DIR_KEY,
          fileAsURI(new File(hdfsDir, "edits")).toString());
      // different checkpoint dirs
      config.set(DFSConfigKeys.DFS_NAMENODE_CHECKPOINT_EDITS_DIR_KEY,
          fileAsURI(new File(hdfsDir, "chkpt_edits")).toString());
      config.set(DFSConfigKeys.DFS_NAMENODE_CHECKPOINT_DIR_KEY,
          fileAsURI(new File(hdfsDir, "chkpt")).toString());
      createCheckPoint();
      corruptNameNodeFiles();
      checkNameNodeFiles();
    }

Code example source: ch.cern.hadoop/hadoop-hdfs

    public static URI formatSharedEditsDir(File baseDir, int minNN, int maxNN)
        throws IOException {
      return fileAsURI(new File(baseDir, "shared-edits-" +
          minNN + "-through-" + maxNN));
    }

Code example source: org.apache.hadoop/hadoop-hdfs-test

    protected void setUp() throws Exception {
      config = new HdfsConfiguration();
      hdfsDir = new File(MiniDFSCluster.getBaseDirectory());
      if (hdfsDir.exists() && !FileUtil.fullyDelete(hdfsDir)) {
        throw new IOException("Could not delete hdfs directory '" + hdfsDir + "'");
      }
      LOG.info("--hdfsdir is " + hdfsDir.getAbsolutePath());
      config.set(DFSConfigKeys.DFS_NAMENODE_NAME_DIR_KEY,
          fileAsURI(new File(hdfsDir, "name")).toString());
      config.set(DFSConfigKeys.DFS_DATANODE_DATA_DIR_KEY,
          new File(hdfsDir, "data").getPath());
      config.set(DFSConfigKeys.DFS_NAMENODE_CHECKPOINT_DIR_KEY,
          fileAsURI(new File(hdfsDir, "secondary")).toString());
      config.set(DFSConfigKeys.DFS_NAMENODE_SECONDARY_HTTP_ADDRESS_KEY,
          WILDCARD_HTTP_HOST + "0");
      FileSystem.setDefaultUri(config, "hdfs://" + NAME_NODE_HOST + "0");
    }

Code example source: io.prestosql.hadoop/hadoop-apache

    /**
     * Return the storage directory corresponding to the passed URI
     * @param uri URI of a storage directory
     * @return The matching storage directory or null if none found
     */
    StorageDirectory getStorageDirectory(URI uri) {
      try {
        uri = Util.fileAsURI(new File(uri));
        Iterator<StorageDirectory> it = dirIterator();
        for (; it.hasNext(); ) {
          StorageDirectory sd = it.next();
          if (Util.fileAsURI(sd.getRoot()).equals(uri)) {
            return sd;
          }
        }
      } catch (IOException ioe) {
        LOG.warn("Error converting file to URI", ioe);
      }
      return null;
    }

Code example source: ch.cern.hadoop/hadoop-hdfs

    /**
     * Return the list of locations being used for a specific purpose.
     * i.e. Image or edit log storage.
     *
     * @param dirType Purpose of locations requested.
     * @throws IOException
     */
    Collection<URI> getDirectories(NameNodeDirType dirType)
        throws IOException {
      ArrayList<URI> list = new ArrayList<URI>();
      Iterator<StorageDirectory> it = (dirType == null) ? dirIterator() :
          dirIterator(dirType);
      for ( ; it.hasNext(); ) {
        StorageDirectory sd = it.next();
        try {
          list.add(Util.fileAsURI(sd.getRoot()));
        } catch (IOException e) {
          throw new IOException("Exception while processing " +
              "StorageDirectory " + sd.getRoot(), e);
        }
      }
      return list;
    }

Code example source: org.apache.hadoop/hadoop-hdfs-test

    /**
     * Start the BackupNode
     */
    public BackupNode startBackupNode(Configuration conf) throws IOException {
      String dataDir = getTestingDir();
      // Set up testing environment directories
      hdfsDir = new File(dataDir, "backupNode");
      if (hdfsDir.exists() && !FileUtil.fullyDelete(hdfsDir)) {
        throw new IOException("Could not delete hdfs directory '" + hdfsDir + "'");
      }
      File currDir = new File(hdfsDir, "name2");
      File currDir2 = new File(currDir, "current");
      File currDir3 = new File(currDir, "image");
      assertTrue(currDir.mkdirs());
      assertTrue(currDir2.mkdirs());
      assertTrue(currDir3.mkdirs());
      conf.set(DFSConfigKeys.DFS_NAMENODE_NAME_DIR_KEY,
          fileAsURI(new File(hdfsDir, "name2")).toString());
      conf.set(DFSConfigKeys.DFS_NAMENODE_EDITS_DIR_KEY, "${dfs.name.dir}");
      // Start BackupNode
      String[] args = new String[] { StartupOption.BACKUP.getName() };
      BackupNode bu = (BackupNode) NameNode.createNameNode(args, conf);
      return bu;
    }

Code example source: org.apache.hadoop/hadoop-hdfs-test

    /**
     * Start the namenode.
     */
    public NameNode startNameNode(boolean withService) throws IOException {
      String dataDir = getTestingDir();
      hdfsDir = new File(dataDir, "dfs");
      if (hdfsDir.exists() && !FileUtil.fullyDelete(hdfsDir)) {
        throw new IOException("Could not delete hdfs directory '" + hdfsDir + "'");
      }
      config = new HdfsConfiguration();
      config.set(DFSConfigKeys.DFS_NAMENODE_NAME_DIR_KEY,
          fileAsURI(new File(hdfsDir, "name1")).toString());
      FileSystem.setDefaultUri(config, "hdfs://" + THIS_HOST);
      if (withService) {
        NameNode.setServiceAddress(config, THIS_HOST);
      }
      config.set(DFSConfigKeys.DFS_NAMENODE_HTTP_ADDRESS_KEY, THIS_HOST);
      NameNode.format(config);
      String[] args = new String[] {};
      // NameNode will modify config with the ports it bound to
      return NameNode.createNameNode(args, config);
    }

Code example source: ch.cern.hadoop/hadoop-hdfs

    String makeDataNodeDirs(int dnIndex, StorageType[] storageTypes) throws IOException {
      StringBuilder sb = new StringBuilder();
      assert storageTypes == null || storageTypes.length == storagesPerDatanode;
      for (int j = 0; j < storagesPerDatanode; ++j) {
        File dir = getInstanceStorageDir(dnIndex, j);
        dir.mkdirs();
        if (!dir.isDirectory()) {
          throw new IOException("Mkdirs failed to create directory for DataNode " + dir);
        }
        sb.append((j > 0 ? "," : "") + "[" +
            (storageTypes == null ? StorageType.DEFAULT : storageTypes[j]) +
            "]" + fileAsURI(dir));
      }
      return sb.toString();
    }

Code example source: ch.cern.hadoop/hadoop-hdfs

    private Configuration getConf() throws IOException {
      String baseDir = MiniDFSCluster.getBaseDirectory();
      String nameDirs = fileAsURI(new File(baseDir, "name1")) + "," +
          fileAsURI(new File(baseDir, "name2"));
      Configuration conf = new HdfsConfiguration();
      FileSystem.setDefaultUri(conf, "hdfs://localhost:0");
      conf.set(DFSConfigKeys.DFS_NAMENODE_HTTP_ADDRESS_KEY, "0.0.0.0:0");
      conf.set(DFSConfigKeys.DFS_NAMENODE_NAME_DIR_KEY, nameDirs);
      conf.set(DFSConfigKeys.DFS_NAMENODE_EDITS_DIR_KEY, nameDirs);
      conf.set(DFSConfigKeys.DFS_NAMENODE_SECONDARY_HTTP_ADDRESS_KEY, "0.0.0.0:0");
      conf.setBoolean(DFSConfigKeys.DFS_PERMISSIONS_ENABLED_KEY, false);
      return conf;
    }

Code example source: ch.cern.hadoop/hadoop-hdfs

    /**
     * secnn-6
     * checkpoint for edits and image is the same directory
     * @throws IOException
     */
    @Test
    public void testChkpointStartup2() throws IOException {
      LOG.info("--starting checkpointStartup2 - same directory for checkpoint");
      // different name dirs
      config.set(DFSConfigKeys.DFS_NAMENODE_NAME_DIR_KEY,
          fileAsURI(new File(hdfsDir, "name")).toString());
      config.set(DFSConfigKeys.DFS_NAMENODE_EDITS_DIR_KEY,
          fileAsURI(new File(hdfsDir, "edits")).toString());
      // same checkpoint dirs
      config.set(DFSConfigKeys.DFS_NAMENODE_CHECKPOINT_EDITS_DIR_KEY,
          fileAsURI(new File(hdfsDir, "chkpt")).toString());
      config.set(DFSConfigKeys.DFS_NAMENODE_CHECKPOINT_DIR_KEY,
          fileAsURI(new File(hdfsDir, "chkpt")).toString());
      createCheckPoint(1);
      corruptNameNodeFiles();
      checkNameNodeFiles();
    }

Code example source: ch.cern.hadoop/hadoop-hdfs

    @Before
    public void setUp() throws IOException {
      conf = new Configuration();
      conf.set(DFSConfigKeys.DFS_NAMENODE_NAME_DIR_KEY,
          fileAsURI(new File(MiniDFSCluster.getBaseDirectory(),
              "namenode")).toString());
      NameNode.initMetrics(conf, NamenodeRole.NAMENODE);
      fs = null;
      fsIsReady = true;
    }

Code example source: ch.cern.hadoop/hadoop-hdfs

    @Before
    public void setUp() throws Exception {
      config = new HdfsConfiguration();
      hdfsDir = new File(MiniDFSCluster.getBaseDirectory());
      if (hdfsDir.exists() && !FileUtil.fullyDelete(hdfsDir)) {
        throw new IOException("Could not delete hdfs directory '" + hdfsDir + "'");
      }
      LOG.info("--hdfsdir is " + hdfsDir.getAbsolutePath());
      config.set(DFSConfigKeys.DFS_NAMENODE_NAME_DIR_KEY,
          fileAsURI(new File(hdfsDir, "name")).toString());
      config.set(DFSConfigKeys.DFS_DATANODE_DATA_DIR_KEY,
          new File(hdfsDir, "data").getPath());
      config.set(DFSConfigKeys.DFS_DATANODE_ADDRESS_KEY, "0.0.0.0:0");
      config.set(DFSConfigKeys.DFS_DATANODE_HTTP_ADDRESS_KEY, "0.0.0.0:0");
      config.set(DFSConfigKeys.DFS_DATANODE_IPC_ADDRESS_KEY, "0.0.0.0:0");
      config.set(DFSConfigKeys.DFS_NAMENODE_CHECKPOINT_DIR_KEY,
          fileAsURI(new File(hdfsDir, "secondary")).toString());
      config.set(DFSConfigKeys.DFS_NAMENODE_SECONDARY_HTTP_ADDRESS_KEY,
          WILDCARD_HTTP_HOST + "0");
      FileSystem.setDefaultUri(config, "hdfs://" + NAME_NODE_HOST + "0");
    }

Code example source: ch.cern.hadoop/hadoop-hdfs

    /**
     * Sets up a MiniDFSCluster, configures it to create one edits file,
     * starts DelegationTokenSecretManager (to get security op codes)
     *
     * @param dfsDir DFS directory (where to setup MiniDFS cluster)
     */
    public void startCluster(String dfsDir) throws IOException {
      // same as manageDfsDirs but only one edits file instead of two
      config.set(DFSConfigKeys.DFS_NAMENODE_NAME_DIR_KEY,
          Util.fileAsURI(new File(dfsDir, "name")).toString());
      config.set(DFSConfigKeys.DFS_NAMENODE_CHECKPOINT_DIR_KEY,
          Util.fileAsURI(new File(dfsDir, "namesecondary1")).toString());
      // blocksize for concat (file size must be multiple of blocksize)
      config.setLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, blockSize);
      // for security to work (fake JobTracker user)
      config.set(CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTH_TO_LOCAL,
          "RULE:[2:$1@$0](JobTracker@.*FOO.COM)s/@.*//" + "DEFAULT");
      config.setBoolean(
          DFSConfigKeys.DFS_NAMENODE_DELEGATION_TOKEN_ALWAYS_USE_KEY, true);
      config.setBoolean(DFSConfigKeys.DFS_NAMENODE_ACLS_ENABLED_KEY, true);
      cluster =
          new MiniDFSCluster.Builder(config).manageNameDfsDirs(false).build();
      cluster.waitClusterUp();
    }

Code example source: ch.cern.hadoop/hadoop-hdfs

    /**
     * Start the namenode.
     */
    public NameNode startNameNode(boolean withService) throws IOException {
      hdfsDir = new File(TEST_DATA_DIR, "dfs");
      if (hdfsDir.exists() && !FileUtil.fullyDelete(hdfsDir)) {
        throw new IOException("Could not delete hdfs directory '" + hdfsDir + "'");
      }
      config = new HdfsConfiguration();
      config.set(DFSConfigKeys.DFS_NAMENODE_NAME_DIR_KEY,
          fileAsURI(new File(hdfsDir, "name1")).toString());
      FileSystem.setDefaultUri(config, "hdfs://" + THIS_HOST);
      if (withService) {
        NameNode.setServiceAddress(config, THIS_HOST);
      }
      config.set(DFSConfigKeys.DFS_NAMENODE_HTTP_ADDRESS_KEY, THIS_HOST);
      DFSTestUtil.formatNameNode(config);
      String[] args = new String[] {};
      // NameNode will modify config with the ports it bound to
      return NameNode.createNameNode(args, config);
    }
