Can't stop the local job runner from running

Asked by nbysray5 on 2021-06-02 in Hadoop

I am trying to populate an HBase table from a Java program using HTable and LoadIncrementalHFiles on Hadoop 1.
I have a fully distributed 3-node cluster with 1 master and 2 slaves.
The NameNode and JobTracker run on the master; 3 DataNodes and 3 TaskTrackers run across all 3 nodes.
There are 3 ZooKeeper peers, one on each of the 3 nodes.
The HMaster runs on the master node, and 3 RegionServers run on all 3 nodes.
My core-site.xml contains:

<property>
  <name>hadoop.tmp.dir</name>
  <value>/usr/local/hadoop/TMPDIR/</value>
  <description>A base for other temporary directories.</description>
</property>

<property>
  <name>fs.default.name</name>
  <value>hdfs://master:54310/</value>
</property>

My mapred-site.xml contains:

<property>
  <name>mapred.job.tracker</name>
  <value>master:54311</value>
</property>
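A job that logs `job_local_0001` and `LocalJobRunner` (as in the output below) is running with `mapred.job.tracker` at its default value of `local`, which usually means the client JVM never loaded this mapred-site.xml from its classpath. As a standalone sanity check (class and file names here are hypothetical, not Hadoop APIs), the sketch below parses a `*-site.xml` the same way Hadoop's Configuration reads properties, and prints what the client would actually see:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class CheckMapredConfig {
    // Read a single <property> value from a Hadoop-style *-site.xml file.
    public static String readProperty(Path xml, String key) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(xml.toFile());
        NodeList props = doc.getElementsByTagName("property");
        for (int i = 0; i < props.getLength(); i++) {
            Element p = (Element) props.item(i);
            String name = p.getElementsByTagName("name").item(0).getTextContent();
            if (name.equals(key)) {
                return p.getElementsByTagName("value").item(0).getTextContent();
            }
        }
        return null; // property absent -> Hadoop falls back to "local"
    }

    public static void main(String[] args) throws Exception {
        // Write a sample mapred-site.xml (contents copied from the question).
        Path conf = Files.createTempDirectory("conf").resolve("mapred-site.xml");
        Files.write(conf, ("<?xml version=\"1.0\"?>\n<configuration>\n"
                + "  <property>\n"
                + "    <name>mapred.job.tracker</name>\n"
                + "    <value>master:54311</value>\n"
                + "  </property>\n"
                + "</configuration>\n").getBytes());
        System.out.println(readProperty(conf, "mapred.job.tracker")); // prints master:54311
    }
}
```

If, pointed at the configuration directory the failing program actually uses, a check like this yields nothing for `mapred.job.tracker`, the conf directory is not on the client's classpath and the job will stay local.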

However, when I run the program, it gives me the following error:

15/08/06 00:11:14 INFO mapred.TaskRunner: Creating symlink: /usr/local/hadoop/TMPDIR/mapred/local/archive/328189779182527451_-1963144838_2133510842/192.168.72.1/user/hduser/partitions_736cc0de-3c15-4a3d-8ae3-e4d239d73f93 <- /usr/local/hadoop/TMPDIR/mapred/local/localRunner/_partition.lst
15/08/06 00:11:14 WARN fs.FileUtil: Command 'ln -s /usr/local/hadoop/TMPDIR/mapred/local/archive/328189779182527451_-1963144838_2133510842/192.168.72.1/user/hduser/partitions_736cc0de-3c15-4a3d-8ae3-e4d239d73f93 /usr/local/hadoop/TMPDIR/mapred/local/localRunner/_partition.lst' failed 1 with: ln: failed to create symbolic link `/usr/local/hadoop/TMPDIR/mapred/local/localRunner/_partition.lst': No such file or directory
15/08/06 00:11:14 WARN mapred.TaskRunner: Failed to create symlink: /usr/local/hadoop/TMPDIR/mapred/local/archive/328189779182527451_-1963144838_2133510842/192.168.72.1/user/hduser/partitions_736cc0de-3c15-4a3d-8ae3-e4d239d73f93 <- /usr/local/hadoop/TMPDIR/mapred/local/localRunner/_partition.lst
15/08/06 00:11:14 INFO mapred.JobClient: Running job: job_local_0001
15/08/06 00:11:15 INFO util.ProcessTree: setsid exited with exit code 0
15/08/06 00:11:15 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@35506f5f
15/08/06 00:11:15 INFO mapred.MapTask: io.sort.mb = 100
15/08/06 00:11:15 INFO mapred.JobClient:  map 0% reduce 0%
15/08/06 00:11:17 INFO mapred.MapTask: data buffer = 79691776/99614720
15/08/06 00:11:17 INFO mapred.MapTask: record buffer = 262144/327680
15/08/06 00:11:17 WARN mapred.LocalJobRunner: job_local_0001
java.lang.IllegalArgumentException: Can't read partitions file
     at org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner.setConf(TotalOrderPartitioner.java:116)
     at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
     at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
     at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:677)
     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:756)
     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
     at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:214)
Caused by: java.io.FileNotFoundException: File _partition.lst does not exist.
     at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:397)
     at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:251)
     at org.apache.hadoop.fs.FileSystem.getLength(FileSystem.java:796)
     at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1479)
     at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1474)
     at org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner.readPartitions(TotalOrderPartitioner.java:301)
     at org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner.setConf(TotalOrderPartitioner.java:88)
     ... 6 more
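The first WARN lines are consistent with the stack trace: `ln -s` fails with "No such file or directory" when the link's parent directory (here `.../mapred/local/localRunner/`) does not exist yet, so `_partition.lst` is never materialized and the local TotalOrderPartitioner later cannot read it. A minimal, Hadoop-free reproduction of that symlink behavior (all paths hypothetical):

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class SymlinkParentDemo {
    // Returns true if the symlink could be created; false when the link's
    // parent directory does not exist -- the "No such file or directory"
    // case from the WARN fs.FileUtil line in the job output.
    public static boolean tryLink(Path link, Path target) {
        try {
            Files.createSymbolicLink(link, target);
            return true;
        } catch (java.io.IOException e) {
            return false;
        }
    }

    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("demo");
        Path target = Files.createFile(dir.resolve("partitions"));
        // Parent "localRunner" was never created -> link creation fails.
        System.out.println(tryLink(dir.resolve("localRunner/_partition.lst"), target)); // false
        Files.createDirectories(dir.resolve("localRunner"));
        System.out.println(tryLink(dir.resolve("localRunner/_partition.lst"), target)); // true
    }
}
```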

A few relevant lines from my code:

Path input = new Path(args[0]);
input = input.makeQualified(input.getFileSystem(conf));
Path partitionFile = new Path(input, "_partitions.lst");
TotalOrderPartitioner.setPartitionFile(conf, partitionFile);
InputSampler.Sampler<IntWritable, Text> sampler =
    new InputSampler.RandomSampler<IntWritable, Text>(0.1, 100);
InputSampler.writePartitionFile(job, sampler);
job.setNumReduceTasks(2);
job.setPartitionerClass(TotalOrderPartitioner.class);

job.setJarByClass(TextToHBaseTransfer.class);
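Two details in this snippet are worth double-checking against the log. First, the partition file is named `_partitions.lst` here, but the failing task looks for `_partition.lst` (the TotalOrderPartitioner default), which suggests the setting never reached the task. Second, `setPartitionFile` is called on `conf` while `writePartitionFile` operates on `job`; if `job` was constructed from `conf` before this point, it holds its own copy of the configuration and later changes to `conf` are invisible to it. A toy, stdlib-only model of that copy semantics (the `Conf`/`Job` classes are stand-ins, not Hadoop APIs):

```java
import java.util.HashMap;

public class CopySemanticsDemo {
    // Stand-ins for Hadoop's Configuration and Job; Hadoop's Job likewise
    // takes a defensive copy of the Configuration at construction time.
    static class Conf extends HashMap<String, String> {}
    static class Job {
        final Conf conf = new Conf();
        Job(Conf c) { conf.putAll(c); } // defensive copy
    }

    public static void main(String[] args) {
        Conf conf = new Conf();
        Job job = new Job(conf);                       // job snapshots conf here
        conf.put("partition.file", "_partitions.lst"); // set too late
        // The job's copy never saw the setting, so it falls back to its default:
        System.out.println(job.conf.getOrDefault("partition.file", "_partition.lst"));
    }
}
```

Setting the partition file on `job.getConfiguration()` instead of `conf`, and using one file name throughout, would remove both mismatches.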

Why is it still running the local job runner, and why does it give me "Can't read partitions file"?
What is missing from my cluster configuration?

No answers yet.
