Usage and code examples of org.apache.hadoop.mapreduce.lib.output.MultipleOutputs.checkBaseOutputPath()

This article collects Java code examples for the org.apache.hadoop.mapreduce.lib.output.MultipleOutputs.checkBaseOutputPath() method and shows how it is used in practice. The examples were extracted from selected open-source projects published on platforms such as GitHub, Stack Overflow, and Maven, and should serve as useful references. Details of MultipleOutputs.checkBaseOutputPath() are as follows:
Package: org.apache.hadoop.mapreduce.lib.output
Class: MultipleOutputs
Method: checkBaseOutputPath

About MultipleOutputs.checkBaseOutputPath

Checks whether an output name is valid. The name cannot be the name used for the default output.
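
The method is private to MultipleOutputs, so in the examples below it only appears as an internal call. For orientation, here is a minimal sketch of what the check does, based on the description above; FileOutputFormat.PART is the constant "part", the base name of the default output, and the exact body should be treated as illustrative rather than a verbatim copy of any one Hadoop release:

  // Sketch: reject the reserved default output name.
  private static void checkBaseOutputPath(String outputPath) {
    if (outputPath.equals(FileOutputFormat.PART)) {  // PART == "part"
      throw new IllegalArgumentException("output name cannot be 'part'");
    }
  }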

Code examples

Example source: org.apache.hadoop/hadoop-mapred (identical in io.prestosql.hadoop/hadoop-apache, io.hops/hadoop-mapreduce-client-core, ch.cern.hadoop/hadoop-mapreduce-client-core, and com.github.jiayuhan-it/hadoop-mapreduce-client-core). The doubled "already alreadyDefined" in the error message is carried over from the upstream source.

  /**
   * Checks if a named output name is valid.
   *
   * @param namedOutput named output Name
   * @throws IllegalArgumentException if the output name is not valid.
   */
  private static void checkNamedOutputName(JobContext job,
      String namedOutput, boolean alreadyDefined) {
    checkTokenName(namedOutput);
    checkBaseOutputPath(namedOutput);
    List<String> definedChannels = getNamedOutputsList(job);
    if (alreadyDefined && definedChannels.contains(namedOutput)) {
      throw new IllegalArgumentException("Named output '" + namedOutput +
        "' already alreadyDefined");
    } else if (!alreadyDefined && !definedChannels.contains(namedOutput)) {
      throw new IllegalArgumentException("Named output '" + namedOutput +
        "' not defined");
    }
  }
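
checkNamedOutputName, and through it checkBaseOutputPath, also runs when a named output is registered on the job. A minimal setup sketch that exercises this validation; the class name, job name, and the output name "errors" are illustrative, not taken from the article:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.io.LongWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Job;
  import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;
  import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

  public class NamedOutputSetup {
    public static void main(String[] args) throws Exception {
      Job job = Job.getInstance(new Configuration(), "multiple-outputs-demo");
      // Valid name: passes checkTokenName and checkBaseOutputPath.
      MultipleOutputs.addNamedOutput(job, "errors",
          TextOutputFormat.class, LongWritable.class, Text.class);
      // "part" is the default output base name, so the following would
      // throw IllegalArgumentException from checkBaseOutputPath:
      // MultipleOutputs.addNamedOutput(job, "part",
      //     TextOutputFormat.class, LongWritable.class, Text.class);
    }
  }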

Example source: com.github.jiayuhan-it/hadoop-mapreduce-client-core (identical in org.apache.hadoop/hadoop-mapred, io.prestosql.hadoop/hadoop-apache, and ch.cern.hadoop/hadoop-mapreduce-client-core)

  /**
   * Write key and value to baseOutputPath using the namedOutput.
   *
   * @param namedOutput the named output name
   * @param key the key
   * @param value the value
   * @param baseOutputPath base-output path to write the record to.
   * Note: Framework will generate unique filename for the baseOutputPath
   */
  @SuppressWarnings("unchecked")
  public <K, V> void write(String namedOutput, K key, V value,
      String baseOutputPath) throws IOException, InterruptedException {
    checkNamedOutputName(context, namedOutput, false);
    checkBaseOutputPath(baseOutputPath);
    if (!namedOutputs.contains(namedOutput)) {
      throw new IllegalArgumentException("Undefined named output '" +
        namedOutput + "'");
    }
    TaskAttemptContext taskContext = getContext(namedOutput);
    getRecordWriter(taskContext, baseOutputPath).write(key, value);
  }
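
In task code, this overload is typically reached through a MultipleOutputs instance created in setup(). A sketch of such a reducer, assuming the named output "errors" was registered with addNamedOutput as in the setup sketch above; the base path "errors/part" is illustrative:

  import java.io.IOException;
  import org.apache.hadoop.io.LongWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Reducer;
  import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;

  public class RoutingReducer
      extends Reducer<Text, LongWritable, Text, LongWritable> {
    private MultipleOutputs<Text, LongWritable> mos;

    @Override
    protected void setup(Context context) {
      mos = new MultipleOutputs<>(context);
    }

    @Override
    protected void reduce(Text key, Iterable<LongWritable> values,
        Context context) throws IOException, InterruptedException {
      for (LongWritable value : values) {
        // checkBaseOutputPath accepts "errors/part" (per the sketch above,
        // only the bare name "part" is rejected); the framework appends a
        // unique suffix such as -r-00000 to the base path.
        mos.write("errors", key, value, "errors/part");
      }
    }

    @Override
    protected void cleanup(Context context)
        throws IOException, InterruptedException {
      mos.close();  // flush and close all underlying record writers
    }
  }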

Example source: io.hops/hadoop-mapreduce-client-core (the same method as above, with an additional javadoc warning about base paths that resolve outside the job output directory)

  /**
   * Write key and value to baseOutputPath using the namedOutput.
   *
   * @param namedOutput the named output name
   * @param key the key
   * @param value the value
   * @param baseOutputPath base-output path to write the record to.
   * Note: Framework will generate unique filename for the baseOutputPath
   * <b>Warning</b>: when the baseOutputPath is a path that resolves
   * outside of the final job output directory, the directory is created
   * immediately and then persists through subsequent task retries, breaking
   * the concept of output committing.
   */
  @SuppressWarnings("unchecked")
  public <K, V> void write(String namedOutput, K key, V value,
      String baseOutputPath) throws IOException, InterruptedException {
    checkNamedOutputName(context, namedOutput, false);
    checkBaseOutputPath(baseOutputPath);
    if (!namedOutputs.contains(namedOutput)) {
      throw new IllegalArgumentException("Undefined named output '" +
        namedOutput + "'");
    }
    TaskAttemptContext taskContext = getContext(namedOutput);
    getRecordWriter(taskContext, baseOutputPath).write(key, value);
  }
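
The warning in this variant matters when choosing base paths. A short illustration of the difference, written against the RoutingReducer sketch above; the absolute path is hypothetical:

  // Relative base path: resolved under the job's output directory and
  // managed by the output committer, so failed task attempts are cleaned up.
  mos.write("errors", key, value, "errors/part");

  // Absolute base path: resolves outside the committer-managed directory;
  // per the javadoc warning above, it is created immediately and survives
  // task retries, so failed attempts can leave partial or duplicate files.
  mos.write("errors", key, value, "/data/raw-errors/part");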

Example source: ch.cern.hadoop/hadoop-mapreduce-client-core (identical in com.github.jiayuhan-it/hadoop-mapreduce-client-core and io.prestosql.hadoop/hadoop-apache; io.hops/hadoop-mapreduce-client-core adds the same javadoc warning as above)

  /**
   * Write key value to an output file name.
   *
   * Gets the record writer from job's output format.
   * Job's output format should be a FileOutputFormat.
   *
   * @param key the key
   * @param value the value
   * @param baseOutputPath base-output path to write the record to.
   * Note: Framework will generate unique filename for the baseOutputPath
   */
  @SuppressWarnings("unchecked")
  public void write(KEYOUT key, VALUEOUT value, String baseOutputPath)
      throws IOException, InterruptedException {
    checkBaseOutputPath(baseOutputPath);
    if (jobOutputFormatContext == null) {
      jobOutputFormatContext =
        new TaskAttemptContextImpl(context.getConfiguration(),
          context.getTaskAttemptID(),
          new WrappedStatusReporter(context));
    }
    getRecordWriter(jobOutputFormatContext, baseOutputPath).write(key, value);
  }

Example source: org.apache.hadoop/hadoop-mapred (a revision that builds a fresh TaskAttemptContext on every call instead of caching it in jobOutputFormatContext; the cached form avoids re-creating the context for each record)

  /**
   * Write key value to an output file name.
   *
   * Gets the record writer from job's output format.
   * Job's output format should be a FileOutputFormat.
   *
   * @param key the key
   * @param value the value
   * @param baseOutputPath base-output path to write the record to.
   * Note: Framework will generate unique filename for the baseOutputPath
   */
  @SuppressWarnings("unchecked")
  public void write(KEYOUT key, VALUEOUT value, String baseOutputPath)
      throws IOException, InterruptedException {
    checkBaseOutputPath(baseOutputPath);
    TaskAttemptContext taskContext =
      new TaskAttemptContextImpl(context.getConfiguration(),
        context.getTaskAttemptID(),
        new WrappedStatusReporter(context));
    getRecordWriter(taskContext, baseOutputPath).write(key, value);
  }
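
Unlike the named-output overload, this variant goes through the job's own output format, so no addNamedOutput registration is needed; per the javadoc, the job's output format must be a FileOutputFormat. A sketch of a reduce method using it, written as a variant of the RoutingReducer above; the base path "by-year/2022/part" is illustrative:

  @Override
  protected void reduce(Text key, Iterable<LongWritable> values,
      Context context) throws IOException, InterruptedException {
    for (LongWritable value : values) {
      // Uses the job's configured FileOutputFormat; the record lands under
      // <job-output>/by-year/2022/part-r-00000 (suffix added by the framework).
      // Still validated by checkBaseOutputPath, so a bare "part" would throw.
      mos.write(key, value, "by-year/2022/part");
    }
  }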
