Usage of the org.apache.hadoop.hdfs.server.common.Util class, with code examples


This article collects Java code examples for the org.apache.hadoop.hdfs.server.common.Util class, showing how it is used in practice. The examples are drawn from selected open-source projects published on GitHub, Stack Overflow, Maven, and similar platforms, and should serve as a practical reference. Details of the Util class:

Package path: org.apache.hadoop.hdfs.server.common
Class name: Util

About Util

Util is a collection of small static helpers shared across the HDFS server code. As the snippets below show, its most-used members are stringAsURI and fileAsURI, which normalize configured storage paths into URIs; stringCollectionAsURIs, which converts whole collections of path strings; and now(), which supplies wall-clock timestamps for timing measurements.
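
For orientation before the scraped examples, here is a minimal standalone sketch of the conversion Util performs. It deliberately avoids calling HDFS internals (in recent releases stringAsURI is package-private); the class name, the sample paths, and the getAbsoluteFile-based fallback are illustrative, so read it as an approximation of the behavior rather than the implementation:

    import java.io.File;
    import java.net.URI;
    import java.net.URISyntaxException;
    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    /** Standalone sketch of the string-to-URI normalization shown below. */
    public class StringAsUriSketch {

      // mirrors Util.stringAsURI: try URI syntax first, then fall back to a file path
      static URI stringAsURI(String s) {
        URI u = null;
        try {
          u = new URI(s);
        } catch (URISyntaxException e) {
          // malformed as a URI; treated as a plain file path below
        }
        if (u == null || u.getScheme() == null) {
          // no scheme given: assume a local file, as Util.fileAsURI does
          u = new File(s).getAbsoluteFile().toURI();
        }
        return u;
      }

      public static void main(String[] args) {
        List<URI> uris = new ArrayList<>();
        for (String name : Arrays.asList("hdfs://nn1:8020/edits", "/data/dfs/name")) {
          uris.add(stringAsURI(name));
        }
        // prints [hdfs://nn1:8020/edits, file:/data/dfs/name]
        System.out.println(uris);
      }
    }

Strings that already carry a scheme pass through unchanged, while bare paths come back with the file scheme; this is exactly why unqualified directory values in hdfs-site.xml end up as file:// URIs.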

Code examples

Code example source: org.apache.hadoop/hadoop-hdfs

    /**
     * Returns edit directories that are shared between primary and secondary.
     * @param conf configuration
     * @return collection of edit directories from {@code conf}
     */
    public static List<URI> getSharedEditsDirs(Configuration conf) {
      // don't use getStorageDirs here, because we want an empty default
      // rather than the dir in /tmp
      Collection<String> dirNames = conf.getTrimmedStringCollection(
          DFS_NAMENODE_SHARED_EDITS_DIR_KEY);
      return Util.stringCollectionAsURIs(dirNames);
    }
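
A hedged usage sketch for this helper: the key it reads, DFS_NAMENODE_SHARED_EDITS_DIR_KEY, is dfs.namenode.shared.edits.dir, and in Apache Hadoop the method above is the public FSNamesystem.getSharedEditsDirs. The class name and the quorum-journal URI below are illustrative:

    import java.net.URI;
    import java.util.List;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.DFSConfigKeys;
    import org.apache.hadoop.hdfs.server.namenode.FSNamesystem;

    public class SharedEditsDirsSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // illustrative quorum-journal URI; any URI-formed value resolves the same way
        conf.set(DFSConfigKeys.DFS_NAMENODE_SHARED_EDITS_DIR_KEY,
            "qjournal://jn1:8485;jn2:8485;jn3:8485/mycluster");
        List<URI> shared = FSNamesystem.getSharedEditsDirs(conf);
        System.out.println(shared); // [qjournal://jn1:8485;jn2:8485;jn3:8485/mycluster]
      }
    }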

Code example source: org.apache.hadoop/hadoop-hdfs

    /**
     * Interprets the passed string as a URI. In case of error it
     * assumes the specified string is a file.
     *
     * @param s the string to interpret
     * @return the resulting URI
     */
    static URI stringAsURI(String s) throws IOException {
      URI u = null;
      // try to make a URI
      try {
        u = new URI(s);
      } catch (URISyntaxException e) {
        LOG.error("Syntax error in URI " + s
            + ". Please check hdfs configuration.", e);
      }
      // if URI is null or scheme is undefined, then assume it's file://
      if (u == null || u.getScheme() == null) {
        LOG.info("Assuming 'file' scheme for path " + s + " in configuration.");
        u = fileAsURI(new File(s));
      }
      return u;
    }

Code example source: org.apache.hadoop/hadoop-hdfs

    /**
     * Converts a collection of strings into a collection of URIs.
     * @param names collection of strings to convert to URIs
     * @return collection of URIs
     */
    public static List<URI> stringCollectionAsURIs(
        Collection<String> names) {
      List<URI> uris = new ArrayList<>(names.size());
      for (String name : names) {
        try {
          uris.add(stringAsURI(name));
        } catch (IOException e) {
          LOG.error("Error while processing URI: " + name, e);
        }
      }
      return uris;
    }

Code example source: com.facebook.hadoop/hadoop-core

    /**
     * Load an edit log, and apply the changes to the in-memory structure
     * This is where we apply edits that we've been writing to disk all
     * along.
     */
    int loadFSEdits(EditLogInputStream edits, long expectedStartingTxId)
        throws IOException {
      long startTime = now();
      currentTxId = expectedStartingTxId;
      int numEdits = loadFSEdits(edits, true);
      FSImage.LOG.info("Edits file " + edits.getName()
          + " of size " + edits.length() + " edits # " + numEdits
          + " loaded in " + (now() - startTime) / 1000 + " seconds.");
      return numEdits;
    }

Code example source: org.apache.hadoop/hadoop-hdfs

    // Truncated excerpt of download validation: on a length or digest
    // mismatch, the temporary files are deleted before throwing.
    deleteTmpFiles(localPaths);
    throw new IOException("File " + url + " received length " + received +
        " is not of the advertised size " + advertisedSize + ...

    deleteTmpFiles(localPaths);
    throw new IOException("File " + url + " computed digest " +
        computedDigest + " does not match advertised digest " + ...

Code example source: com.facebook.hadoop/hadoop-core

    private long dispatchBlockMoves() throws InterruptedException {
      long bytesLastMoved = bytesMoved.get();
      Future<?>[] futures = new Future<?>[sources.size()];
      int i = 0;
      for (Source source : sources) {
        futures[i++] = dispatcherExecutor.submit(
            source.new BlockMoveDispatcher(Util.now()));
      }
      // wait for all dispatcher threads to finish
      for (Future<?> future : futures) {
        try {
          future.get();
        } catch (ExecutionException e) {
          LOG.warn("Dispatcher thread failed", e.getCause());
        }
      }
      // wait for all block moving to be done
      waitForMoveCompletion();
      return bytesMoved.get() - bytesLastMoved;
    }

Code example source: org.apache.hadoop/hadoop-hdfs

    /**
     * Return the storage directory corresponding to the passed URI.
     * @param uri URI of a storage directory
     * @return The matching storage directory or null if none found
     */
    public StorageDirectory getStorageDirectory(URI uri) {
      try {
        uri = Util.fileAsURI(new File(uri));
        Iterator<StorageDirectory> it = dirIterator();
        while (it.hasNext()) {
          StorageDirectory sd = it.next();
          if (Util.fileAsURI(sd.getRoot()).equals(uri)) {
            return sd;
          }
        }
      } catch (IOException ioe) {
        LOG.warn("Error converting file to URI", ioe);
      }
      return null;
    }

Code example source: org.apache.hadoop/hadoop-hdfs

    static List<URI> getCheckpointEditsDirs(Configuration conf,
        String defaultName) {
      Collection<String> dirNames = conf.getTrimmedStringCollection(
          DFSConfigKeys.DFS_NAMENODE_CHECKPOINT_EDITS_DIR_KEY);
      if (dirNames.size() == 0 && defaultName != null) {
        dirNames.add(defaultName);
      }
      return Util.stringCollectionAsURIs(dirNames);
    }
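
Because getCheckpointEditsDirs is package-private, code outside the namenode package can reproduce the same resolution with Util directly. A sketch under that assumption; the class name and fallback path are illustrative:

    import java.net.URI;
    import java.util.ArrayList;
    import java.util.List;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.DFSConfigKeys;
    import org.apache.hadoop.hdfs.server.common.Util;

    public class CheckpointEditsDirsSketch {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        // copy into a mutable list so the fallback can be appended
        List<String> dirNames = new ArrayList<>(conf.getTrimmedStringCollection(
            DFSConfigKeys.DFS_NAMENODE_CHECKPOINT_EDITS_DIR_KEY));
        if (dirNames.isEmpty()) {
          dirNames.add("/tmp/hadoop/dfs/namesecondary"); // illustrative default
        }
        List<URI> dirs = Util.stringCollectionAsURIs(dirNames);
        System.out.println(dirs); // [file:/tmp/hadoop/dfs/namesecondary]
      }
    }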

Code example source: org.apache.hadoop/hadoop-hdfs-test

    /**
     * Test for a relative path, os independent
     * @throws IOException
     */
    public void testRelativePathAsURI() throws IOException {
      URI u = Util.stringAsURI(RELATIVE_FILE_PATH);
      LOG.info("Uri: " + u);
      assertNotNull(u);
    }

Code example source: com.facebook.hadoop/hadoop-core

    public int run(String[] args) throws Exception {
      final long startTime = Util.now();
      try {
        checkReplicationPolicyCompatibility(conf);
        final List<InetSocketAddress> namenodes =
            DFSUtil.getClientRpcAddresses(conf, null);
        parse(args);
        return Balancer.run(namenodes, conf);
      } catch (IOException e) {
        System.out.println(e + ". Exiting ...");
        return IO_EXCEPTION;
      } catch (InterruptedException e) {
        System.out.println(e + ". Exiting ...");
        return INTERRUPTED;
      } catch (Exception e) {
        e.printStackTrace();
        return ILLEGAL_ARGS;
      } finally {
        System.out.println("Balancing took " + time2Str(Util.now() - startTime));
      }
    }

Code example source: org.apache.hadoop/hadoop-hdfs

    /**
     * Return the list of locations being used for a specific purpose.
     * i.e. Image or edit log storage.
     *
     * @param dirType Purpose of locations requested.
     * @throws IOException
     */
    Collection<URI> getDirectories(NameNodeDirType dirType)
        throws IOException {
      ArrayList<URI> list = new ArrayList<>();
      Iterator<StorageDirectory> it = (dirType == null) ? dirIterator() :
          dirIterator(dirType);
      while (it.hasNext()) {
        StorageDirectory sd = it.next();
        try {
          list.add(Util.fileAsURI(sd.getRoot()));
        } catch (IOException e) {
          throw new IOException("Exception while processing " +
              "StorageDirectory " + sd.getRoot(), e);
        }
      }
      return list;
    }

Code example source: org.apache.hadoop/hadoop-hdfs

    /**
     * Retrieve checkpoint dirs from configuration.
     *
     * @param conf the Configuration
     * @param defaultValue a default value for the attribute, if null
     * @return a Collection of URIs representing the values in
     *         dfs.namenode.checkpoint.dir configuration property
     */
    static Collection<URI> getCheckpointDirs(Configuration conf,
        String defaultValue) {
      Collection<String> dirNames = conf.getTrimmedStringCollection(
          DFSConfigKeys.DFS_NAMENODE_CHECKPOINT_DIR_KEY);
      if (dirNames.size() == 0 && defaultValue != null) {
        dirNames.add(defaultValue);
      }
      return Util.stringCollectionAsURIs(dirNames);
    }

Code example source: ch.cern.hadoop/hadoop-hdfs

    /**
     * Converts a collection of strings into a collection of URIs.
     * @param names collection of strings to convert to URIs
     * @return collection of URIs
     */
    public static List<URI> stringCollectionAsURIs(
        Collection<String> names) {
      List<URI> uris = new ArrayList<URI>(names.size());
      for (String name : names) {
        try {
          uris.add(stringAsURI(name));
        } catch (IOException e) {
          LOG.error("Error while processing URI: " + name, e);
        }
      }
      return uris;
    }

Code example source: org.jvnet.hudson.hadoop/hadoop-core

    // Truncated excerpt of a block-dispatch loop: the loop body between the
    // initialization and the time check is elided in the source snippet.
    private void dispatchBlocks() {
      long startTime = Util.now();
      this.blocksToReceive = 2 * scheduledSize;
      boolean isTimeUp = false;
      // ...
      if (Util.now() - startTime > MAX_ITERATION_TIME) {
        isTimeUp = true;  // inside the dispatch loop: stop after the iteration cap
        continue;
      }
      // ...

Code example source: com.facebook.hadoop/hadoop-core

    /**
     * Interprets the passed string as a URI. In case of error it
     * assumes the specified string is a file.
     *
     * @param s the string to interpret
     * @return the resulting URI
     * @throws IOException
     */
    public static URI stringAsURI(String s) throws IOException {
      URI u = null;
      // try to make a URI
      try {
        u = new URI(s);
      } catch (URISyntaxException e) {
        LOG.error("Syntax error in URI " + s
            + ". Please check hdfs configuration.", e);
      }
      // if URI is null or scheme is undefined, then assume it's file://
      if (u == null || u.getScheme() == null) {
        LOG.warn("Path " + s + " should be specified as a URI "
            + "in configuration files. Please update hdfs configuration.");
        u = fileAsURI(new File(s));
      }
      return u;
    }

Code example source: org.apache.hadoop/hadoop-hdfs

    private static Collection<URI> getStorageDirs(Configuration conf,
        String propertyName) {
      Collection<String> dirNames = conf.getTrimmedStringCollection(propertyName);
      StartupOption startOpt = NameNode.getStartupOption(conf);
      if (startOpt == StartupOption.IMPORT) {
        // In case of IMPORT this will get rid of default directories
        // but will retain directories specified in hdfs-site.xml
        // When importing image from a checkpoint, the name-node can
        // start with empty set of storage directories.
        Configuration cE = new HdfsConfiguration(false);
        cE.addResource("core-default.xml");
        cE.addResource("core-site.xml");
        cE.addResource("hdfs-default.xml");
        Collection<String> dirNames2 = cE.getTrimmedStringCollection(propertyName);
        dirNames.removeAll(dirNames2);
        if (dirNames.isEmpty())
          LOG.warn("!!! WARNING !!!" +
              "\n\tThe NameNode currently runs without persistent storage." +
              "\n\tAny changes to the file system meta-data may be lost." +
              "\n\tRecommended actions:" +
              "\n\t\t- shutdown and restart NameNode with configured \""
              + propertyName + "\" in hdfs-site.xml;" +
              "\n\t\t- use Backup Node as a persistent and up-to-date storage " +
              "of the file system meta-data.");
      } else if (dirNames.isEmpty()) {
        dirNames = Collections.singletonList(
            DFSConfigKeys.DFS_NAMENODE_EDITS_DIR_DEFAULT);
      }
      return Util.stringCollectionAsURIs(dirNames);
    }

Code example source: com.facebook.hadoop/hadoop-core

    /**
     * Converts a collection of strings into a collection of URIs.
     * @param names collection of strings to convert to URIs
     * @return collection of URIs
     */
    public static Collection<URI> stringCollectionAsURIs(
        Collection<String> names) {
      Collection<URI> uris = new ArrayList<URI>(names.size());
      for (String name : names) {
        try {
          uris.add(stringAsURI(name));
        } catch (IOException e) {
          LOG.error("Error while processing URI: " + name, e);
        }
      }
      return uris;
    }

Code example source: org.apache.hadoop/hadoop-hdfs-test

    public void testThrottler() throws IOException {
      Configuration conf = new HdfsConfiguration();
      FileSystem.setDefaultUri(conf, "hdfs://localhost:0");
      long bandwidthPerSec = 1024 * 1024L; // 1 MB/s
      final long TOTAL_BYTES = 6 * bandwidthPerSec;
      long bytesToSend = TOTAL_BYTES;
      long start = Util.now();
      DataTransferThrottler throttler = new DataTransferThrottler(bandwidthPerSec);
      long bytesSent = 1024 * 512L; // 0.5MB
      throttler.throttle(bytesSent);
      bytesToSend -= bytesSent;
      bytesSent = 1024 * 768L; // 0.75MB
      throttler.throttle(bytesSent);
      bytesToSend -= bytesSent;
      try {
        Thread.sleep(1000);
      } catch (InterruptedException ignored) {}
      throttler.throttle(bytesToSend);
      long end = Util.now();
      // the average rate over the whole transfer must not exceed the
      // configured bandwidth (TOTAL_BYTES were sent between start and end)
      assertTrue(TOTAL_BYTES * 1000 / (end - start) <= bandwidthPerSec);
    }

Code example source: ch.cern.hadoop/hadoop-hdfs

    /**
     * Interprets the passed string as a URI. In case of error it
     * assumes the specified string is a file.
     *
     * @param s the string to interpret
     * @return the resulting URI
     * @throws IOException
     */
    public static URI stringAsURI(String s) throws IOException {
      URI u = null;
      // try to make a URI
      try {
        u = new URI(s);
      } catch (URISyntaxException e) {
        LOG.error("Syntax error in URI " + s
            + ". Please check hdfs configuration.", e);
      }
      // if URI is null or scheme is undefined, then assume it's file://
      if (u == null || u.getScheme() == null) {
        LOG.warn("Path " + s + " should be specified as a URI "
            + "in configuration files. Please update hdfs configuration.");
        u = fileAsURI(new File(s));
      }
      return u;
    }

Code example source: org.apache.hadoop/hadoop-hdfs

    // Truncated excerpt: converts the volumes configured under
    // DFSConfigKeys.DFS_NAMENODE_CHECKED_VOLUMES_KEY into URIs; the statement
    // preceding it is cut off in the source snippet.
        ... DFSConfigKeys.DFS_NAMENODE_DU_RESERVED_DEFAULT);
    Collection<URI> extraCheckedVolumes = Util.stringCollectionAsURIs(conf
        .getTrimmedStringCollection(DFSConfigKeys.DFS_NAMENODE_CHECKED_VOLUMES_KEY));
