Hadoop streaming job fails with "Missing required options" error when using the rmr package

xjreopfe · posted 2021-05-30 in Hadoop

I'm trying to write a data frame from R to HDFS using the rmr package in RStudio on Amazon EMR. The tutorial I'm following is http://blogs.aws.amazon.com/bigdata/post/tx37rskrfdqntsl/statistical-analysis-with-open-source-r-and-rstudio-on-amazon-emr
The code I wrote is:

Sys.setenv(HADOOP_CMD="/home/hadoop/bin/hadoop")
Sys.setenv(HADOOP_STREAMING="/home/hadoop/contrib/streaming/hadoop-streaming.jar")
Sys.setenv(JAVA_HOME="/usr/java/latest/jre")

# load libraries
library(rmr2)
library(rhdfs)
library(plyrmr)

# initiate rhdfs package
hdfs.init()

# a very simple plyrmr example to test the package
# running code locally
bind.cols(mtcars, carb.per.cyl = carb/cyl)
# same code on Hadoop cluster
to.dfs(mtcars, output="/tmp/mtcars")

I'm also following this code example: https://github.com/awslabs/emr-bootstrap-actions/blob/master/r/hadoop/examples/biganalyses_example.r
The Hadoop version is Cloudera CDH 5, and I've set the environment variables accordingly.
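(One thing worth double-checking, since the streaming jar's location differs between vanilla Hadoop, EMR, and CDH installs: whether the hard-coded jar path actually exists on the machine. A small R sketch — the candidate directories below are assumptions, not paths confirmed by this setup:)

```r
# Sketch: verify the streaming jar exists before exporting HADOOP_STREAMING.
# The candidate directories are guesses for common Hadoop/EMR/CDH layouts.
candidates <- c("/home/hadoop/contrib/streaming",
                "/usr/lib/hadoop-mapreduce",
                "/usr/local/hadoop/share/hadoop/tools/lib")
jars <- unlist(lapply(candidates, function(d)
  Sys.glob(file.path(d, "hadoop-streaming*.jar"))))
if (length(jars) > 0) {
  Sys.setenv(HADOOP_STREAMING = jars[1])
} else {
  stop("hadoop-streaming jar not found; check your Hadoop installation")
}
```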
When I run the code above, I get the following error:

> to.dfs(data,output="/tmp/cust_seg")
    15/03/09 20:00:21 ERROR streaming.StreamJob: Missing required options: input, output
    Usage: $HADOOP_HOME/bin/hadoop jar \
      $HADOOP_HOME/hadoop-streaming.jar [options]
    Options:
      -input    <path>     DFS input file(s) for the Map step
      -output   <path>     DFS output directory for the Reduce step
      -mapper   <cmd|JavaClassName>      The streaming command to run
      -combiner <JavaClassName> Combiner has to be a Java class
      -reducer  <cmd|JavaClassName>      The streaming command to run
      -file     <file>     File/dir to be shipped in the Job jar file
      -inputformat TextInputFormat(default)|SequenceFileAsTextInputFormat|JavaClassName Optional.
      -outputformat TextOutputFormat(default)|JavaClassName  Optional.
      -partitioner JavaClassName  Optional.
      -numReduceTasks <num>  Optional.
      -inputreader <spec>  Optional.
      -cmdenv   <n>=<v>    Optional. Pass env.var to streaming commands
      -mapdebug <path>  Optional. To run this script when a map task fails 
      -reducedebug <path>  Optional. To run this script when a reduce task fails 
      -verbose

    Generic options supported are
    -conf <configuration file>     specify an application configuration file
    -D <property=value>            use value for given property
    -fs <local|namenode:port>      specify a namenode
    -jt <local|resourcemanager:port>    specify a ResourceManager
    -files <comma separated list of files>    specify comma separated files to be copied to the map reduce cluster
    -libjars <comma separated list of jars>    specify comma separated jar files to include in the classpath.
    -archives <comma separated list of archives>    specify comma separated archives to be unarchived on the compute machines.

    The general command line syntax is
    bin/hadoop command [genericOptions] [commandOptions]

    For more details about these options:
    Use $HADOOP_HOME/bin/hadoop jar build/hadoop-streaming.jar -info

    Streaming Job Failed!

I can't figure out a solution to this problem. Any help would be greatly appreciated.

fquxozlt1#

The error occurs because the HADOOP_STREAMING environment variable is not set correctly in your code: you need to specify the full path including the jar file name. The R code below works fine for me (I'm using Hadoop 2.4.0):

Sys.setenv("HADOOP_CMD"="/usr/local/hadoop/bin/hadoop")
Sys.setenv("HADOOP_STREAMING"="/usr/local/hadoop/share/hadoop/tools/lib/hadoop-streaming-2.4.0.jar")

# load libraries

library(rmr2)
library(rhdfs)

# initiate rhdfs package

hdfs.init()

# a very simple plyrmr example to test the package

library(plyrmr)

# running code locally

bind.cols(mtcars, carb.per.cyl = carb/cyl)

# same code on Hadoop cluster

to.dfs(mtcars, output="/tmp/mtcars")

# list the files of tmp folder

hdfs.ls("/tmp")

  permission   owner      group   size          modtime                  file
1 -rw-r--r-- manohar supergroup   1685 2015-03-22 16:12           /tmp/mtcars
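(As an extra check beyond the original answer, you can also read the object back from HDFS to confirm the round trip worked; this sketch assumes the same cluster and output path as above.)

```r
# Read the data back: from.dfs() returns a key/value structure,
# and values() extracts the stored data frame
out <- from.dfs("/tmp/mtcars")
head(values(out))
```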

Hope this helps.
