Usage and code examples of the org.apache.hadoop.mapreduce.lib.output.MultipleOutputs.<init>() method


This article collects Java code examples for the org.apache.hadoop.mapreduce.lib.output.MultipleOutputs.<init>() method and shows how MultipleOutputs.<init>() is used in practice. The examples are drawn from selected projects hosted on GitHub, Stack Overflow, Maven, and similar platforms, and should serve as useful references. Details of the MultipleOutputs.<init>() method:

Package path: org.apache.hadoop.mapreduce.lib.output.MultipleOutputs
Class name: MultipleOutputs
Method name: <init>

About MultipleOutputs.<init>

Creates and initializes multiple-outputs support; it should be instantiated in the Mapper/Reducer setup method.
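
As that description suggests, the usual lifecycle is: create the MultipleOutputs instance in setup(), write to a named output during map()/reduce(), and close it in cleanup(). The following minimal sketch illustrates that pattern; the named output "stats", the Text/IntWritable types, and the configureNamedOutput helper are illustrative assumptions, not taken from any of the projects quoted below.

    import java.io.IOException;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;
    import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

    public class MultiOutputReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

        private MultipleOutputs<Text, IntWritable> mos;

        @Override
        protected void setup(Context context) {
            // Instantiate in setup(), as the Javadoc recommends.
            mos = new MultipleOutputs<Text, IntWritable>(context);
        }

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            // Write to a named output registered on the Job (see configureNamedOutput below).
            mos.write("stats", key, new IntWritable(sum));
        }

        @Override
        protected void cleanup(Context context) throws IOException, InterruptedException {
            // Close the MultipleOutputs instance, otherwise named-output data may not be flushed.
            mos.close();
        }

        // The named output must be registered when the Job is configured
        // (a hypothetical helper for illustration only).
        public static void configureNamedOutput(Job job) {
            MultipleOutputs.addNamedOutput(job, "stats", TextOutputFormat.class,
                    Text.class, IntWritable.class);
        }
    }

Most of the real-world snippets below only show the setup() step, since that is where the constructor this article covers is called; the write and close calls happen elsewhere in those classes.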

Code examples

Code example (origin: apache/kylin)

    @Override
    protected void doSetup(Context context) throws IOException {
        super.bindCurrentConfiguration(context.getConfiguration());
        mos = new MultipleOutputs(context);
        String cubeName = context.getConfiguration().get(BatchConstants.CFG_CUBE_NAME);
        String segmentID = context.getConfiguration().get(BatchConstants.CFG_CUBE_SEGMENT_ID);
        KylinConfig config = AbstractHadoopJob.loadKylinPropsAndMetadata();
        CubeManager cubeManager = CubeManager.getInstance(config);
        CubeInstance cube = cubeManager.getCube(cubeName);
        CubeSegment optSegment = cube.getSegmentById(segmentID);
        CubeSegment originalSegment = cube.getOriginalSegmentToOptimize(optSegment);
        rowKeySplitter = new RowKeySplitter(originalSegment);
        baseCuboid = cube.getCuboidScheduler().getBaseCuboidId();
        recommendCuboids = cube.getCuboidsRecommend();
        Preconditions.checkNotNull(recommendCuboids, "The recommend cuboid map could not be null");
    }

Code example (origin: apache/kylin)

    @Override
    protected void doSetup(Context context) throws IOException {
        super.bindCurrentConfiguration(context.getConfiguration());
        mos = new MultipleOutputs(context);
        String cubeName = context.getConfiguration().get(BatchConstants.CFG_CUBE_NAME);
        String segmentID = context.getConfiguration().get(BatchConstants.CFG_CUBE_SEGMENT_ID);
        KylinConfig config = AbstractHadoopJob.loadKylinPropsAndMetadata();
        CubeInstance cube = CubeManager.getInstance(config).getCube(cubeName);
        CubeSegment cubeSegment = cube.getSegmentById(segmentID);
        CubeSegment oldSegment = cube.getOriginalSegmentToOptimize(cubeSegment);
        cubeDesc = cube.getDescriptor();
        baseCuboid = cube.getCuboidScheduler().getBaseCuboidId();
        rowKeySplitter = new RowKeySplitter(oldSegment);
        rowKeyEncoderProvider = new RowKeyEncoderProvider(cubeSegment);
    }

Code example (origin: apache/kylin)

    super.bindCurrentConfiguration(context.getConfiguration());
    Configuration conf = context.getConfiguration();
    mos = new MultipleOutputs(context);

Code example (origin: apache/kylin)

    @Override
    protected void doSetup(Context context) throws IOException {
        super.bindCurrentConfiguration(context.getConfiguration());
        Configuration conf = context.getConfiguration();
        mos = new MultipleOutputs(context);
        KylinConfig config = AbstractHadoopJob.loadKylinPropsAndMetadata();
        String cubeName = conf.get(BatchConstants.CFG_CUBE_NAME);
        CubeInstance cube = CubeManager.getInstance(config).getCube(cubeName);
        CubeDesc cubeDesc = cube.getDescriptor();
        List<TblColRef> uhcColumns = cubeDesc.getAllUHCColumns();
        int taskId = context.getTaskAttemptID().getTaskID().getId();
        col = uhcColumns.get(taskId);
        logger.info("column name: " + col.getIdentity());
        if (cube.getDescriptor().getShardByColumns().contains(col)) {
            // for ShardByColumns
            builder = DictionaryGenerator.newDictionaryBuilder(col.getType());
            builder.init(null, 0, null);
        } else {
            // for GlobalDictionaryColumns
            String hdfsDir = conf.get(BatchConstants.CFG_GLOBAL_DICT_BASE_DIR);
            DictionaryInfo dictionaryInfo = new DictionaryInfo(col.getColumnDesc(), col.getDatatype());
            String builderClass = cubeDesc.getDictionaryBuilderClass(col);
            builder = (IDictionaryBuilder) ClassUtil.newInstance(builderClass);
            builder.init(dictionaryInfo, 0, hdfsDir);
        }
    }

Code example (origin: pl.edu.icm.coansys/coansys-io-input)

    @SuppressWarnings({ "unchecked", "rawtypes" })
    @Override
    public void setup(Context context) {
        mos = new MultipleOutputs(context);
    }

Code example (origin: openimaj/openimaj)

    @Override
    protected void setup(Reducer<MAP_OUTPUT_KEY, MAP_OUTPUT_VALUE, OUTPUT_KEY, OUTPUT_VALUE>.Context context) throws IOException, InterruptedException {
        this.multiOut = new MultipleOutputs<OUTPUT_KEY, OUTPUT_VALUE>(context);
    };
    }

Code example (origin: org.openimaj/core-hadoop)

    @Override
    protected void setup(Reducer<MAP_OUTPUT_KEY, MAP_OUTPUT_VALUE, OUTPUT_KEY, OUTPUT_VALUE>.Context context) throws IOException, InterruptedException {
        this.multiOut = new MultipleOutputs<OUTPUT_KEY, OUTPUT_VALUE>(context);
    };
    }

Code example (origin: apache/incubator-rya)

    @Override
    public void setup(Context context) {
        mout = new MultipleOutputs<>(context);
    }
    @Override

Code example (origin: hortonworks/hive-testbench)

    protected void setup(Context context) throws IOException {
        mos = new MultipleOutputs(context);
    }
    protected void cleanup(Context context) throws IOException, InterruptedException {

Code example (origin: cartershanklin/hive-testbench)

    protected void setup(Context context) throws IOException {
        mos = new MultipleOutputs(context);
    }
    protected void cleanup(Context context) throws IOException, InterruptedException {

Code example (origin: pl.edu.icm.coansys/coansys-io-input)

    @Override
    public void setup(Context context) {
        mos = new MultipleOutputs<>(context);
    }

Code example (origin: thinkaurelius/faunus)

    public SafeMapperOutputs(final Mapper.Context context) {
        this.context = context;
        this.outputs = new MultipleOutputs(this.context);
        this.testing = this.context.getConfiguration().getBoolean(FaunusCompiler.TESTING, false);
    }

Code example (origin: openimaj/openimaj)

    @Override
    protected void setup(Context context) throws IOException, InterruptedException
    {
        indexer = VLADIndexerData.read(new File("vlad-data.bin"));
        mos = new MultipleOutputs<Text, BytesWritable>(context);
    }

Code example (origin: thinkaurelius/faunus)

    public SafeReducerOutputs(final Reducer.Context context) {
        this.context = context;
        this.outputs = new MultipleOutputs(this.context);
        this.testing = this.context.getConfiguration().getBoolean(FaunusCompiler.TESTING, false);
    }

Code example (origin: apache/incubator-rya)

    @Override
    public void setup(Context context) {
        Configuration conf = context.getConfiguration();
        debug = MRReasoningUtils.debug(conf);
        if (debug) {
            debugOut = new MultipleOutputs<>(context);
        }
    }
    @Override

Code example (origin: apache/incubator-rya)

    @Override
    protected void setup(Context context) {
        debugOut = new MultipleOutputs<>(context);
        Configuration conf = context.getConfiguration();
        if (schema == null) {
            schema = MRReasoningUtils.loadSchema(context.getConfiguration());
        }
        debug = MRReasoningUtils.debug(conf);
    }
    @Override

Code example (origin: pl.edu.icm.coansys/coansys-io-input)

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        super.setup(context);
        mos = new MultipleOutputs(context);
        mainOutputsDir = context.getConfiguration().get("decided.dir");
        undecidedDir = context.getConfiguration().get("undecided.dir");
    }

Code example (origin: apache/incubator-rya)

    @Override
    public void setup(Context context) {
        mout = new MultipleOutputs<>(context);
        Configuration conf = context.getConfiguration();
        if (schema == null) {
            schema = MRReasoningUtils.loadSchema(conf);
        }
        debug = MRReasoningUtils.debug(conf);
    }
    @Override

Code example (origin: apache/incubator-rya)

    @Override
    public void setup(Context context) {
        Configuration conf = context.getConfiguration();
        mout = new MultipleOutputs<>(context);
        current = MRReasoningUtils.getCurrentIteration(conf);
        debug = MRReasoningUtils.debug(conf);
    }
    @Override

Code example (origin: apache/incubator-rya)

    @Override
    protected void setup(Context context) {
        schema = new SchemaWritable();
        debug = MRReasoningUtils.debug(context.getConfiguration());
        debugOut = new MultipleOutputs<>(context);
    }
