"Connection refused" when running Hive queries via Thrift from a Python script

rkue9o1l · posted 2021-06-03 in Hadoop

All,
I'm trying to run Hive queries from a Python script using the Thrift library for Python. I can run queries that don't launch a MapReduce job, such as create table and select * from table, but when I run a query that does launch a MapReduce job (such as select * from table where ...), I get the following exception:

starting hive server...

Hive history file=/tmp/root/hive_job_log_root_201212171354_275968533.txt
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
java.net.ConnectException: Call to sp-rhel6-01/172.22.193.79:54311 failed on connection exception: java.net.ConnectException: Connection refused

Job Submission failed with exception 'java.net.ConnectException(Call to sp-rhel6-01/172.22.193.79:54311 failed on connection exception: java.net.ConnectException: Connection refused)'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MapRedTask
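
Worth noting: the Connection refused in the trace is for the JobTracker RPC address (sp-rhel6-01/172.22.193.79:54311), not for the HiveServer port (10000) that the script connects to. Below is a minimal sketch for checking whether anything is listening on that port, run from the namenode; the host and port are taken from the log above, so adjust them if mapred.job.tracker points elsewhere.

import socket

# JobTracker RPC endpoint taken from the stack trace above
JT_HOST, JT_PORT = '172.22.193.79', 54311

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(5)
try:
    s.connect((JT_HOST, JT_PORT))
    print("JobTracker port %d is reachable" % JT_PORT)
except socket.error as e:
    # "Connection refused" here means no daemon is listening on the port,
    # matching the error Hive reports when it tries to submit the job
    print("Cannot connect to %s:%d: %s" % (JT_HOST, JT_PORT, e))
finally:
    s.close()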

I have a multi-node Hadoop cluster; Hive is installed on the namenode, and I'm running the Python script on that same namenode.
The Python script is:

from hive_service import ThriftHive
from thrift import Thrift
from thrift.transport import TSocket
from thrift.transport import TTransport
from thrift.protocol import TBinaryProtocol

# Connect to the Hive Thrift server (HiveServer) on port 10000
transport = TSocket.TSocket('172.22.193.79', 10000)
transport = TTransport.TBufferedTransport(transport)
protocol = TBinaryProtocol.TBinaryProtocol(transport)

client = ThriftHive.Client(protocol)
transport.open()

# This query compiles to a MapReduce job and fails with the exception above
client.execute("select count(*) from example")
print client.fetchAll()
transport.close()
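
For reference, the script does reach HiveServer on port 10000 (transport.open() succeeds and non-MapReduce queries return results); the Connection refused above is raised on the server side, when Hive tries to submit the job to the JobTracker on 54311. Here is a sketch of the same script with error handling that separates the two failure modes; it assumes the old pre-HiveServer2 hive_service bindings, which expose HiveServerException in hive_service.ttypes.

from hive_service import ThriftHive
from hive_service.ttypes import HiveServerException  # server-side query failures
from thrift.transport import TSocket, TTransport
from thrift.protocol import TBinaryProtocol

transport = TTransport.TBufferedTransport(TSocket.TSocket('172.22.193.79', 10000))
client = ThriftHive.Client(TBinaryProtocol.TBinaryProtocol(transport))

try:
    # Fails here only if HiveServer itself (port 10000) is unreachable
    transport.open()
    # Fails here if server-side execution fails, e.g. job submission
    # to the JobTracker, as in the log above
    client.execute("select count(*) from example")
    print(client.fetchAll())
except TTransport.TTransportException as e:
    print("Could not reach HiveServer: %s" % e)
except HiveServerException as e:
    print("Query failed on the server: %s" % e)
finally:
    transport.close()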

Can someone help me understand what's going wrong?
- Sushant

xlpyo6sf1#

I had trouble completing SELECT queries, but I could complete SHOW and DESCRIBE queries. The way I resolved it was by restarting the services on the cluster. I use Cloudera to manage the cluster, so the command I ran was $ sudo /etc/init.d/cloudera-scm-agent hard_restart. I didn't spend much time debugging, but my guess is that the NN or JT had crashed. Interestingly, I could still complete queries against the metadata. My best guess is that those queries go straight to the metastore and never have to touch HDFS, but I'd need someone to confirm that.
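
That split (metadata-only queries succeed, MapReduce-backed queries fail) can be checked directly from the same Python client: DESCRIBE is answered from the metastore, while count(*) compiles to a MapReduce job, so running both back to back shows which layer is broken. A minimal sketch reusing the connection details and table name from the question:

from hive_service import ThriftHive
from thrift.transport import TSocket, TTransport
from thrift.protocol import TBinaryProtocol

transport = TTransport.TBufferedTransport(TSocket.TSocket('172.22.193.79', 10000))
client = ThriftHive.Client(TBinaryProtocol.TBinaryProtocol(transport))
transport.open()

# Metadata-only query: served by the metastore, no MapReduce job involved
client.execute("describe example")
print(client.fetchAll())

# Compiles to a MapReduce job: needs live JobTracker/NameNode daemons.
# If only this one fails, the Thrift/HiveServer side is fine and the
# problem is in the Hadoop services, as suggested above.
client.execute("select count(*) from example")
print(client.fetchAll())

transport.close()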
