Hadoop C++ HDFS test fails at runtime

mm5n2pyu asked on 2021-06-03 in Hadoop

I am using Hadoop 2.2.0 and trying to run this hdfs_test.cpp application:


#include "hdfs.h"

#include <fcntl.h>   /* O_WRONLY, O_CREAT */
#include <stdio.h>   /* fprintf */
#include <stdlib.h>  /* exit */
#include <string.h>  /* strlen */

int main(int argc, char **argv) {

    /* Connect to the default HDFS instance from the loaded configuration. */
    hdfsFS fs = hdfsConnect("default", 0);
    const char *writePath = "/tmp/testfile.txt";
    hdfsFile writeFile = hdfsOpenFile(fs, writePath, O_WRONLY | O_CREAT, 0, 0, 0);
    if (!writeFile) {
        fprintf(stderr, "Failed to open %s for writing!\n", writePath);
        exit(-1);
    }
    const char *buffer = "Hello, World!";
    tSize num_written_bytes = hdfsWrite(fs, writeFile, (void *)buffer, strlen(buffer) + 1);
    if (hdfsFlush(fs, writeFile)) {
        fprintf(stderr, "Failed to 'flush' %s\n", writePath);
        exit(-1);
    }
    hdfsCloseFile(fs, writeFile);
    hdfsDisconnect(fs);
    return 0;
}

It compiles, but when I run it with ./hdfs_test I get:

loadFileSystems error:
(unable to get stack trace for java.lang.NoClassDefFoundError exception: ExceptionUtils::getStackTrace error.)
hdfsBuilderConnect(forceNewInstance=0, nn=default, port=0, kerbTicketCachePath=(NULL), userName=(NULL)) error:
(unable to get stack trace for java.lang.NoClassDefFoundError exception: ExceptionUtils::getStackTrace error.)
hdfsOpenFile(/tmp/testfile.txt): constructNewObjectOfPath error:
(unable to get stack trace for java.lang.NoClassDefFoundError exception: ExceptionUtils::getStackTrace error.)
Failed to open /tmp/testfile.txt for writing!

It is probably a classpath problem. My $HADOOP_HOME is /usr/local/hadoop, and here is my CLASSPATH variable:

echo $CLASSPATH
/usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/*:/usr/local/hadoop/share/hadoop/common/*:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/*:/usr/local/hadoop/share/hadoop/hdfs/*:/usr/local/hadoop/share/hadoop/yarn/lib/*:/usr/local/hadoop/share/hadoop/yarn/*:/usr/local/hadoop/share/hadoop/mapreduce/lib/*:/usr/local/hadoop/share/hadoop/mapreduce/*:/contrib/capacity-scheduler/*.jar

Any help would be appreciated. Thanks.

e3bfsja2 (#1)

Try this:

hadoop classpath --glob

Then add the result to the CLASSPATH variable in your ~/.bashrc.
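
For example, a minimal sketch of that ~/.bashrc addition, assuming the hadoop binary is on your PATH:

# in ~/.bashrc: --glob makes "hadoop classpath" print concrete
# jar paths instead of wildcard entries
export CLASSPATH=$(hadoop classpath --glob)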

e5njpo68 (#2)

JNI will not expand wildcard classpath entries, so just adding the output of hadoop classpath --glob does not work. The correct way is:

export CLASSPATH=${HADOOP_HOME}/etc/hadoop:`find ${HADOOP_HOME}/share/hadoop/ | awk '{path=path":"$0}END{print path}'`
export LD_LIBRARY_PATH="${HADOOP_HOME}/lib/native":$LD_LIBRARY_PATH
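
With both variables set, a typical build-and-run sequence for the test program above might look like the sketch below. The include, library, and libjvm paths are assumptions based on a stock Hadoop 2.x install and a JDK 7 Linux/amd64 layout, so adjust them to your installation:

g++ hdfs_test.cpp -o hdfs_test \
    -I${HADOOP_HOME}/include \
    -L${HADOOP_HOME}/lib/native -lhdfs \
    -L${JAVA_HOME}/jre/lib/amd64/server -ljvm

export LD_LIBRARY_PATH=${JAVA_HOME}/jre/lib/amd64/server:$LD_LIBRARY_PATH
./hdfs_test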

gpnt7bae (#3)

I have run into problems with wildcards in the classpath when using JNI-based programs. Try the direct-JAR approach in the classpath instead, like the one I use at https://github.com/qwertymaniac/cdh4-libhdfs-example/blob/master/exec.sh#L3, and I think it should work. The whole repository at https://github.com/qwertymaniac/cdh4-libhdfs-example contains a working example.
See also https://stackoverflow.com/a/9322747/1660002
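
For illustration, here is a sketch of that direct-JAR style; the shell expands the globs at assignment time, so every CLASSPATH entry JNI sees is a concrete jar path (the share/ locations assume a stock Hadoop 2.x layout):

# Build the classpath from explicit jar paths; JNI will not expand '*'
CLASSPATH=${HADOOP_HOME}/etc/hadoop
for jar in ${HADOOP_HOME}/share/hadoop/common/*.jar \
           ${HADOOP_HOME}/share/hadoop/common/lib/*.jar \
           ${HADOOP_HOME}/share/hadoop/hdfs/*.jar \
           ${HADOOP_HOME}/share/hadoop/hdfs/lib/*.jar; do
    CLASSPATH=${CLASSPATH}:${jar}
done
export CLASSPATH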
