Pig throws an error while following the edureka tutorial

t98cgbkg posted on 2021-05-29 in Hadoop

I have been trying to run Hadoop along with other components such as Pig.
I am following this tutorial: https://www.edureka.co/blog/pig-programming-create-your-first-apache-pig-script/
Everything works fine until I run the script in step 2, at which point it throws the following error:

2018-01-09 13:47:20,682 [JobControl] INFO  org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob - PigLatin:output.pig got an error while submitting 
org.apache.pig.backend.executionengine.ExecException: ERROR 2118: Input path does not exist: hdfs://localhost:9000/carlos/information.txt
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:279)
    at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301)
    at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
    at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:335)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.pig.backend.hadoop23.PigJobControl.submit(PigJobControl.java:128)
    at org.apache.pig.backend.hadoop23.PigJobControl.run(PigJobControl.java:194)
    at java.lang.Thread.run(Thread.java:748)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:276)

xytpbqjk1#

Before step 1, you need this command:

hadoop fs -copyFromLocal /home/carlos/information.txt /carlos

However, if that directory does not already exist, this copies your file to a file named /carlos on HDFS.
If you want /carlos to be a directory, you need to delete that file and create the directory:

hadoop fs -rm /carlos
hadoop fs -mkdir /carlos
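
You can then confirm that /carlos really is a directory, for example by listing the root of HDFS (assuming the same hdfs://localhost:9000 filesystem shown in the error message):

hadoop fs -ls /

An entry whose permissions start with d means /carlos is a directory; an entry starting with - means it is still a plain file.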

Also, when copying a file into a directory, you should generally include a trailing slash, like so:

hadoop fs -copyFromLocal /home/carlos/information.txt /carlos/

You could also just have your Pig code load /carlos as the path. Even if it is a directory, Pig will read all the files inside it.
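
A minimal Pig Latin sketch of that approach (the field names and delimiter here are illustrative, not taken from the tutorial):

information = LOAD '/carlos' USING PigStorage('\t') AS (id:int, name:chararray, city:chararray);
DUMP information;

LOAD on a directory reads every file under it, so this works whether /carlos is the file itself or a directory containing information.txt.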
