I am trying to run a Java program on Hadoop 2.3.0 that calls GPU code through JNI, but I get the following error:
java.lang.Exception: java.lang.UnsatisfiedLinkError: affy.qualityControl.PLM.wlsAcc([D[D[DII)V
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:529)
Caused by: java.lang.UnsatisfiedLinkError: affy.qualityControl.PLM.wlsAcc([D[D[DII)V
at affy.qualityControl.PLM.wlsAcc(Native Method)
at affy.qualityControl.PLM.rlm_fit_anova(PLM.java:141)
at affy.qualityControl.PLM.PLMsummarize(PLM.java:31)
at affy.qualityControl.SummarizePLMReducer.reduce(SummarizePLMReducer.java:59)
at affy.qualityControl.SummarizePLMReducer.reduce(SummarizePLMReducer.java:12)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
I suspect this error is caused by JNI. I wrote a small standalone Java test that calls my GPU code (wlsAcc) through JNI, and it works fine. I also ran ldd on my GPU shared library, and every dependency resolves. In addition, I added the following code to my MapReduce driver (the GPU code is called from the reducer):
setInputParameters(conf, args);
// Ship the shared library via the DistributedCache and symlink it into
// the task's working directory as libjniWrapper.so.
DistributedCache.createSymlink(conf);
DistributedCache.addCacheFile(new URI("/user/sniu/libjniWrapper.so#libjniWrapper.so"), conf);
// Point the reducer JVM's java.library.path at the working directory so
// the symlinked .so can be found at load time.
conf.set("mapred.reduce.child.java.opts", "-Djava.library.path=.");
I also copied libjniWrapper.so to the /user/sniu directory on HDFS. I still don't understand why Hadoop cannot find my native shared library. Does anyone know where my problem is?
1 Answer
The problem is solved now. The issue was in the native C code: originally I had written the exported function like this:
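The original snippet was not included here, so the code below is a hypothetical reconstruction based on the class and method names in the stack trace (parameter names are placeholders). A typical cause of this kind of UnsatisfiedLinkError is an exported JNI function name that omits the Java package, for example:

#include <jni.h>

/* Hypothetical reconstruction: the package path affy.qualityControl is
 * missing from the exported name, so the JVM cannot bind the native
 * method and throws UnsatisfiedLinkError at call time. */
JNIEXPORT void JNICALL Java_PLM_wlsAcc
  (JNIEnv *env, jobject obj, jdoubleArray y, jdoubleArray weights,
   jdoubleArray out, jint rows, jint cols)
{
    /* ... GPU implementation ... */
}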
whereas the correct way is:
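By the JNI naming convention, the exported symbol must encode the fully qualified class name with dots replaced by underscores (running javah, or javac -h on newer JDKs, against the class produces the exact prototype). For affy.qualityControl.PLM.wlsAcc([D[D[DII)V that gives:

#include <jni.h>

/* Fully qualified name: package affy.qualityControl, class PLM, method wlsAcc.
 * The second argument is jobject assuming an instance method; a static
 * native method would receive jclass instead. Parameter names are placeholders. */
JNIEXPORT void JNICALL Java_affy_qualityControl_PLM_wlsAcc
  (JNIEnv *env, jobject obj, jdoubleArray y, jdoubleArray weights,
   jdoubleArray out, jint rows, jint cols)
{
    /* ... GPU implementation ... */
}

With the symbol named this way, the JVM can resolve the native method from the library shipped through the DistributedCache.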