mongo-hadoop connector issue

ki0zmccv asked on 2021-05-30 in Hadoop

I'm trying to run a MapReduce job: I pull data from Mongo and then write it to HDFS, but I can't seem to get the job to run. I couldn't find an example of this, and the issue I'm having is that if I set an input path for Mongo, it also seems to be treated as Mongo's output path. On top of that, I'm now getting an authentication error even though my MongoDB instance has no authentication enabled.

final Configuration conf = getConf();
final Job job = new Job(conf, "sort");
MongoConfig config = new MongoConfig(conf);
MongoConfigUtil.setInputFormat(getConf(), MongoInputFormat.class);
FileOutputFormat.setOutputPath(job, new Path("/trythisdir"));
MongoConfigUtil.setInputURI(conf, "mongodb://localhost:27017/fake_data.file");
//conf.set("mongo.output.uri", "mongodb://localhost:27017/fake_data.file");
job.setJarByClass(imageExtractor.class);
job.setMapperClass(imageExtractorMapper.class);

job.setOutputKeyClass(Text.class);
job.setOutputValueClass(Text.class);

job.setInputFormatClass( MongoInputFormat.class );

// Execute job and return status
return job.waitForCompletion(true) ? 0 : 1;

Edit: this is the error I'm currently getting:

Exception in thread "main" java.lang.IllegalArgumentException: Couldn't connect and authenticate to get collection
    at com.mongodb.hadoop.util.MongoConfigUtil.getCollection(MongoConfigUtil.java:353)
    at com.mongodb.hadoop.splitter.MongoSplitterFactory.getSplitterByStats(MongoSplitterFactory.java:71)
    at com.mongodb.hadoop.splitter.MongoSplitterFactory.getSplitter(MongoSplitterFactory.java:107)
    at com.mongodb.hadoop.MongoInputFormat.getSplits(MongoInputFormat.java:56)
    at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1079)
    at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1096)
    at org.apache.hadoop.mapred.JobClient.access$600(JobClient.java:177)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:995)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:948)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:948)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:566)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:596)
    at com.orbis.image.extractor.mongo.imageExtractor.run(imageExtractor.java:103)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at com.orbis.image.extractor.mongo.imageExtractor.main(imageExtractor.java:78)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
Caused by: java.lang.NullPointerException
    at com.mongodb.MongoURI.<init>(MongoURI.java:148)
    at com.mongodb.MongoClient.<init>(MongoClient.java:268)
    at com.mongodb.hadoop.util.MongoConfigUtil.getCollection(MongoConfigUtil.java:351)
    ... 22 more

fcwjkofz1#

Late answer, but it may help others. I ran into the same problem while using Apache Spark.

I think you should set mongo.input.uri and mongo.output.uri correctly, since Hadoop reads them, and also set the input and output formats:

/* Correct input and output URI settings on Spark (Hadoop) */
conf.set("mongo.input.uri", "mongodb://localhost:27017/dbName.inputColName");
conf.set("mongo.output.uri", "mongodb://localhost:27017/dbName.outputColName");

/* Set the input and output formats */
job.setInputFormatClass(MongoInputFormat.class);
job.setOutputFormatClass(MongoOutputFormat.class);

By the way, a typo in the "mongo.input.uri" or "mongo.output.uri" string leads to the same error.
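
For a fuller picture, here is a minimal sketch of a complete map-only job built on this pattern; it copies documents from one collection to another. The class, database, and collection names (MongoCopyJob, dbName.inputColName, dbName.outputColName) are placeholders, not something from the original thread:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.bson.BSONObject;

import com.mongodb.hadoop.MongoInputFormat;
import com.mongodb.hadoop.MongoOutputFormat;
import com.mongodb.hadoop.io.BSONWritable;

public class MongoCopyJob {

    // Pass-through mapper: emits each document unchanged, keyed by its _id.
    public static class CopyMapper
            extends Mapper<Object, BSONObject, Text, BSONWritable> {
        @Override
        protected void map(Object key, BSONObject value, Context context)
                throws IOException, InterruptedException {
            context.write(new Text(key.toString()), new BSONWritable(value));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Set both URIs before constructing the Job, because new Job(conf, ...)
        // takes a copy of the Configuration.
        conf.set("mongo.input.uri", "mongodb://localhost:27017/dbName.inputColName");
        conf.set("mongo.output.uri", "mongodb://localhost:27017/dbName.outputColName");

        Job job = new Job(conf, "mongo collection copy");
        job.setJarByClass(MongoCopyJob.class);
        job.setMapperClass(CopyMapper.class);
        job.setNumReduceTasks(0); // map-only job
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(BSONWritable.class);
        job.setInputFormatClass(MongoInputFormat.class);
        job.setOutputFormatClass(MongoOutputFormat.class);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}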


5f0d552i2#

You haven't shared your complete code, so it's hard to say for sure, but what you have doesn't match the typical usage of the MongoDB Connector for Hadoop.

I suggest you start from the examples in the GitHub repository (https://github.com/mongodb/mongo-hadoop).


kzmpq1sx3#

Replace:

MongoConfigUtil.setInputURI(conf, "mongodb://localhost:27017/fake_data.file");

with:

MongoConfigUtil.setInputURI(job.getConfiguration(), "mongodb://localhost:27017/fake_data.file");

The conf object has already been consumed by the Job when it was constructed, so you need to set the input URI directly on the Job's own configuration.
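
Putting it together, a corrected version of the driver from the question might look like the following minimal sketch (same class and path names as in the question; the only substantive change is where the input URI is set):

final Configuration conf = getConf();
final Job job = new Job(conf, "sort");

// new Job(conf, ...) copies conf, so Mongo settings made on conf afterwards
// never reach the submitted job; set them on the Job's own configuration.
MongoConfigUtil.setInputURI(job.getConfiguration(),
        "mongodb://localhost:27017/fake_data.file");

job.setJarByClass(imageExtractor.class);
job.setMapperClass(imageExtractorMapper.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(Text.class);
job.setInputFormatClass(MongoInputFormat.class);
FileOutputFormat.setOutputPath(job, new Path("/trythisdir"));

return job.waitForCompletion(true) ? 0 : 1;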
