Usage of the org.apache.hadoop.mapreduce.lib.output.MultipleOutputs.getContext() method, with code examples


This article compiles a number of Java code examples for the org.apache.hadoop.mapreduce.lib.output.MultipleOutputs.getContext() method and shows how MultipleOutputs.getContext() is used in practice. The examples are drawn from selected projects hosted on GitHub, Stack Overflow, Maven, and similar platforms, and should serve as useful references. Details of the MultipleOutputs.getContext() method follow:
Package: org.apache.hadoop.mapreduce.lib.output
Class: MultipleOutputs
Method: getContext

About MultipleOutputs.getContext

No official description is available.
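Judging from the callers shown in the examples below, getContext(String namedOutput) is a private helper that returns a TaskAttemptContext configured for the given named output; the public write(...) methods pass that context to getRecordWriter(...) so the record is written with the named output's OutputFormat and key/value classes. Application code therefore never calls getContext() directly; it is exercised indirectly through MultipleOutputs.write(...). Below is a minimal reducer sketch; the class name, field name, and the "stats" named output are illustrative and not taken from the examples that follow.

    import java.io.IOException;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;

    // Hypothetical reducer: every record is routed to a named output called "stats".
    // MultipleOutputs.write(...) calls getContext("stats") internally to obtain the
    // TaskAttemptContext used to create the matching RecordWriter.
    public class WordCountReducer
        extends Reducer<Text, IntWritable, Text, IntWritable> {

      private MultipleOutputs<Text, IntWritable> mos;

      @Override
      protected void setup(Context context) {
        mos = new MultipleOutputs<>(context);
      }

      @Override
      protected void reduce(Text key, Iterable<IntWritable> values, Context context)
          throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) {
          sum += v.get();
        }
        // Write to the "stats" named output instead of the job's default output.
        mos.write("stats", key, new IntWritable(sum));
      }

      @Override
      protected void cleanup(Context context) throws IOException, InterruptedException {
        // Close all RecordWriters that were opened via getContext()/getRecordWriter().
        mos.close();
      }
    }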

Code Examples

Code example source: io.prestosql.hadoop/hadoop-apache

    /**
     * Write key and value to baseOutputPath using the namedOutput.
     *
     * @param namedOutput the named output name
     * @param key the key
     * @param value the value
     * @param baseOutputPath base-output path to write the record to.
     * Note: Framework will generate unique filename for the baseOutputPath
     */
    @SuppressWarnings("unchecked")
    public <K, V> void write(String namedOutput, K key, V value,
        String baseOutputPath) throws IOException, InterruptedException {
      checkNamedOutputName(context, namedOutput, false);
      checkBaseOutputPath(baseOutputPath);
      if (!namedOutputs.contains(namedOutput)) {
        throw new IllegalArgumentException("Undefined named output '" +
            namedOutput + "'");
      }
      TaskAttemptContext taskContext = getContext(namedOutput);
      getRecordWriter(taskContext, baseOutputPath).write(key, value);
    }
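The IllegalArgumentException branch above fires when the named output was never declared on the job, so getContext(namedOutput) is only reached for registered names. Registration happens in the job driver via MultipleOutputs.addNamedOutput(...). The following is a hypothetical driver sketch; the job name, paths, and the "stats" named output are placeholders.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;
    import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

    public class MultipleOutputsDriver {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "multiple-outputs-demo");
        job.setJarByClass(MultipleOutputsDriver.class);
        // job.setMapperClass(...), job.setReducerClass(...), output key/value
        // classes, etc. are omitted here for brevity.

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // Register the named output "stats". getContext("stats") later builds a
        // TaskAttemptContext configured with exactly this OutputFormat and these
        // key/value classes, so write(...) no longer throws "Undefined named output".
        MultipleOutputs.addNamedOutput(job, "stats",
            TextOutputFormat.class, Text.class, IntWritable.class);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }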

Code example source: com.github.jiayuhan-it/hadoop-mapreduce-client-core

    /**
     * Write key and value to baseOutputPath using the namedOutput.
     *
     * @param namedOutput the named output name
     * @param key the key
     * @param value the value
     * @param baseOutputPath base-output path to write the record to.
     * Note: Framework will generate unique filename for the baseOutputPath
     */
    @SuppressWarnings("unchecked")
    public <K, V> void write(String namedOutput, K key, V value,
        String baseOutputPath) throws IOException, InterruptedException {
      checkNamedOutputName(context, namedOutput, false);
      checkBaseOutputPath(baseOutputPath);
      if (!namedOutputs.contains(namedOutput)) {
        throw new IllegalArgumentException("Undefined named output '" +
            namedOutput + "'");
      }
      TaskAttemptContext taskContext = getContext(namedOutput);
      getRecordWriter(taskContext, baseOutputPath).write(key, value);
    }

Code example source: org.apache.hadoop/hadoop-mapred

    /**
     * Write key and value to baseOutputPath using the namedOutput.
     *
     * @param namedOutput the named output name
     * @param key the key
     * @param value the value
     * @param baseOutputPath base-output path to write the record to.
     * Note: Framework will generate unique filename for the baseOutputPath
     */
    @SuppressWarnings("unchecked")
    public <K, V> void write(String namedOutput, K key, V value,
        String baseOutputPath) throws IOException, InterruptedException {
      checkNamedOutputName(context, namedOutput, false);
      checkBaseOutputPath(baseOutputPath);
      if (!namedOutputs.contains(namedOutput)) {
        throw new IllegalArgumentException("Undefined named output '" +
            namedOutput + "'");
      }
      TaskAttemptContext taskContext = getContext(namedOutput);
      getRecordWriter(taskContext, baseOutputPath).write(key, value);
    }

Code example source: ch.cern.hadoop/hadoop-mapreduce-client-core

    /**
     * Write key and value to baseOutputPath using the namedOutput.
     *
     * @param namedOutput the named output name
     * @param key the key
     * @param value the value
     * @param baseOutputPath base-output path to write the record to.
     * Note: Framework will generate unique filename for the baseOutputPath
     */
    @SuppressWarnings("unchecked")
    public <K, V> void write(String namedOutput, K key, V value,
        String baseOutputPath) throws IOException, InterruptedException {
      checkNamedOutputName(context, namedOutput, false);
      checkBaseOutputPath(baseOutputPath);
      if (!namedOutputs.contains(namedOutput)) {
        throw new IllegalArgumentException("Undefined named output '" +
            namedOutput + "'");
      }
      TaskAttemptContext taskContext = getContext(namedOutput);
      getRecordWriter(taskContext, baseOutputPath).write(key, value);
    }

Code example source: io.hops/hadoop-mapreduce-client-core

    /**
     * Write key and value to baseOutputPath using the namedOutput.
     *
     * @param namedOutput the named output name
     * @param key the key
     * @param value the value
     * @param baseOutputPath base-output path to write the record to.
     * Note: Framework will generate unique filename for the baseOutputPath
     * <b>Warning</b>: when the baseOutputPath is a path that resolves
     * outside of the final job output directory, the directory is created
     * immediately and then persists through subsequent task retries, breaking
     * the concept of output committing.
     */
    @SuppressWarnings("unchecked")
    public <K, V> void write(String namedOutput, K key, V value,
        String baseOutputPath) throws IOException, InterruptedException {
      checkNamedOutputName(context, namedOutput, false);
      checkBaseOutputPath(baseOutputPath);
      if (!namedOutputs.contains(namedOutput)) {
        throw new IllegalArgumentException("Undefined named output '" +
            namedOutput + "'");
      }
      TaskAttemptContext taskContext = getContext(namedOutput);
      getRecordWriter(taskContext, baseOutputPath).write(key, value);
    }
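This io.hops variant documents an extra caveat: if baseOutputPath resolves outside the final job output directory, the directory is created immediately and survives task retries, bypassing the output committer. In typical use the path is kept relative to the job output directory, and the framework appends the usual part-number suffix to it. The fragment below continues the reducer sketch from the beginning of this article (the mos field and the "stats" named output are the same illustrative names as before).

    // Continuation of the earlier reducer sketch (mos is the same
    // MultipleOutputs<Text, IntWritable> instance). Each distinct key gets its
    // own subdirectory under the job output directory, e.g.
    // <output>/<key>/part-r-00000; the framework appends the unique suffix.
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable v : values) {
        sum += v.get();
      }
      // Uses the four-argument write(...) shown above: it calls getContext("stats")
      // and then getRecordWriter(taskContext, baseOutputPath) before writing.
      mos.write("stats", key, new IntWritable(sum), key.toString() + "/part");
    }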
