Map-only MapReduce job

vom3gejh asked on 2021-06-02 in Hadoop

We created a MapReduce job to ingest data into BigQuery. Our job does very little filtering, so we would like to make it map-only to be faster and more efficient.
However, the Java class com.google.gson.JsonObject that BigQuery accepts does not implement the Writable interface that Hadoop requires of a Mapper's output types. JsonObject is also final, so we cannot extend it...
Any suggestions on how we can work around this?
Thanks,
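
(One generic way around a final, non-Writable value class is a thin Writable wrapper that carries the JSON as a string. A minimal sketch; the JsonWritable name is hypothetical, and as the answers below show, the BigQuery connector makes it unnecessary:)

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import com.google.gson.JsonObject;
import com.google.gson.JsonParser;
import org.apache.hadoop.io.Writable;

/** Wraps a final JsonObject so it can pass through Hadoop's Writable serialization. */
public class JsonWritable implements Writable {
  private JsonObject json = new JsonObject();

  public JsonObject get() { return json; }

  public void set(JsonObject json) { this.json = json; }

  @Override
  public void write(DataOutput out) throws IOException {
    // writeUTF limits a record to ~64KB of encoded JSON, which is fine for small rows.
    out.writeUTF(json.toString());
  }

  @Override
  public void readFields(DataInput in) throws IOException {
    // JsonParser.parseString requires Gson 2.8.6 or newer.
    json = JsonParser.parseString(in.readUTF()).getAsJsonObject();
  }
}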

dl5txlt9 · answer 1

To add to William's answer: I wanted to test this myself, so I created a new cluster with the BigQuery connector installed and ran the following map-only job:

import com.google.cloud.hadoop.io.bigquery.BigQueryConfiguration;
import com.google.cloud.hadoop.io.bigquery.BigQueryOutputFormat;
import com.google.common.base.Splitter;
import com.google.gson.JsonObject;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

import java.io.IOException;
import java.util.regex.Pattern;

/**
 * An example MapOnlyJob with BigQuery output
 */
public class MapOnlyJob {
  public static class MapOnlyMapper extends Mapper<LongWritable, Text, LongWritable, JsonObject> {
    private static final LongWritable KEY_OUT = new LongWritable(0L);
    // This requires a newer version of Guava to be included in a shaded / repackaged
    // libjar (see the shade-plugin sketch after the dependencies below).
    private static final Splitter SPLITTER =
        Splitter.on(Pattern.compile("\\s+"))
            .trimResults()
            .omitEmptyStrings();
    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      String line = value.toString();
      for (String word : SPLITTER.split(line)) {
        JsonObject json = new JsonObject();
        json.addProperty("word", word);
        json.addProperty("mapKey", key.get());
        context.write(KEY_OUT, json);
      }
    }
  }

  /**
   * Configures and runs the main Hadoop job.
   */
  public static void main(String[] args)
      throws IOException, InterruptedException, ClassNotFoundException {

    GenericOptionsParser parser = new GenericOptionsParser(args);
    args = parser.getRemainingArgs();

    if (args.length != 3) {
      System.out.println("Usage: hadoop MapOnlyJob "
          + "[projectId] [input_file] [fullyQualifiedOutputTableId]");
      String indent = "    ";
      System.out.println(indent
          + "projectId - Project under which to issue the BigQuery operations. "
          + "Also serves as the default project for table IDs which don't explicitly specify a "
          + "project for the table.");
      System.out.println(indent
          + "input_file - Input file pattern of the form "
          + "gs://foo/bar*.txt or hdfs:///foo/bar*.txt or foo*.txt");
      System.out.println(indent
          + "fullyQualifiedOutputTableId - Output table ID of the form "
          + "<optional projectId>:<datasetId>.<tableId>");
      System.exit(1);
    }

    // Global parameters from args.
    String projectId = args[0];

    // Set InputFormat parameters from args.
    String inputPattern = args[1];

    // Set OutputFormat parameters from args.
    String fullyQualifiedOutputTableId = args[2];

    // Default OutputFormat parameters for this sample.
    String outputTableSchema =
        "[{'name': 'word','type': 'STRING'},{'name': 'mapKey','type': 'INTEGER'}]";

    Configuration conf = parser.getConfiguration();
    // Set the job-level projectId before Job copies the Configuration.
    conf.set(BigQueryConfiguration.PROJECT_ID_KEY, projectId);
    Job job = Job.getInstance(conf);
    // Set classes and configure them:
    job.setOutputFormatClass(BigQueryOutputFormat.class);
    BigQueryConfiguration.configureBigQueryOutput(
        job.getConfiguration() /* Required as Job made a new Configuration object */,
        fullyQualifiedOutputTableId,
        outputTableSchema);
    // Configure file-based input:
    FileInputFormat.setInputPaths(job, inputPattern);

    job.setJarByClass(MapOnlyMapper.class);
    job.setMapperClass(MapOnlyMapper.class);
    // The key will be discarded by BigQueryOutputFormat.
    job.setOutputKeyClass(LongWritable.class);
    job.setOutputValueClass(JsonObject.class);
    // Make map-only
    job.setNumReduceTasks(0);

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
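
For reference, the job is then invoked with the three arguments described by the usage string above (the jar, bucket, and table names here are hypothetical):

hadoop jar maponlyjob.jar MapOnlyJob my-project gs://my-bucket/input*.txt my-project:my_dataset.words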

I had the following dependencies:

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-core</artifactId>
  <version>1.2.1</version>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>com.google.cloud.bigdataoss</groupId>
  <artifactId>bigquery-connector</artifactId>
  <version>0.7.0-hadoop1</version>
</dependency>
<dependency>
  <groupId>com.google.guava</groupId>
  <artifactId>guava</artifactId>
  <version>18.0</version>
</dependency>
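
As the comment in the mapper notes, this Guava version must be shaded into the job jar so it does not clash with the older Guava that Hadoop ships. A sketch of a maven-shade-plugin relocation (the shaded package name is a placeholder):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.3</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>com.google.common</pattern>
            <shadedPattern>repackaged.com.google.common</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>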
lqfhib0f · answer 2

You should be able to use the BigQuery connector for Hadoop (see https://cloud.google.com/hadoop/bigquery-connector), which provides an implementation of the Hadoop OutputFormat class; the first answer above walks through a complete map-only example built on it.
