HiveServer2: Thrift SASL-related exception when using a custom PasswdAuthenticationProvider

e0uiprwp · published 2021-06-26 in Hive

I have created an implementation of the PasswdAuthenticationProvider interface based on OAuth2. I think the code is not relevant to the problem I'm experiencing; anyway, it can be found here.
I have configured hive-site.xml with the following properties:

  <property>
    <name>hive.server2.authentication</name>
    <value>CUSTOM</value>
  </property>
  <property>
    <name>hive.server2.custom.authentication.class</name>
    <value>com.telefonica.iot.cosmos.hive.authprovider.OAuth2AuthenticationProviderImpl</value>
  </property>
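For context, with CUSTOM authentication the JDBC client still presents a username and password, and the password is what the custom provider validates (here, the OAuth2 token). A hypothetical Beeline invocation, where the host name and token are placeholders of my own, not values from the deployment above:

```shell
# Hypothetical example: host name and token are placeholders.
HS2_HOST="hiveserver2.example.com"
TOKEN="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
JDBC_URL="jdbc:hive2://${HS2_HOST}:10000/default"
echo "${JDBC_URL}"
# beeline -u "${JDBC_URL}" -n frb -p "${TOKEN}"  # the token travels as the password
```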

Then I restarted the Hive service and successfully connected a JDBC-based remote client. This is an example of a successful run as found in /var/log/hive/hiveserver2.log:

  2016-02-01 11:52:44,515 INFO [pool-5-thread-5]: authprovider.HttpClientFactory (HttpClientFactory.java:<init>(66)) - Setting max total connections (500)
  2016-02-01 11:52:44,515 INFO [pool-5-thread-5]: authprovider.HttpClientFactory (HttpClientFactory.java:<init>(67)) - Setting default max connections per route (100)
  2016-02-01 11:52:44,799 INFO [pool-5-thread-5]: authprovider.HttpClientFactory (OAuth2AuthenticationProviderImpl.java:Authenticate(65)) - Doing request: GET https://account.lab.fiware.org/user?access_token=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx HTTP/1.1
  2016-02-01 11:52:44,800 INFO [pool-5-thread-5]: authprovider.HttpClientFactory (OAuth2AuthenticationProviderImpl.java:Authenticate(76)) - Response received: {"organizations": [], "displayName": "frb", "roles": [{"name": "provider", "id": "106"}], "app_id": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx", "email": "frb@tid.es", "id": "frb"}
  2016-02-01 11:52:44,801 INFO [pool-5-thread-5]: authprovider.HttpClientFactory (OAuth2AuthenticationProviderImpl.java:Authenticate(104)) - User frb authenticated
  2016-02-01 11:52:44,868 INFO [pool-5-thread-5]: thrift.ThriftCLIService (ThriftCLIService.java:OpenSession(188)) - Client protocol version: HIVE_CLI_SERVICE_PROTOCOL_V6
  2016-02-01 11:52:44,871 INFO [pool-5-thread-5]: session.SessionState (SessionState.java:start(358)) - No Tez session required at this point. hive.execution.engine=mr.
  2016-02-01 11:52:44,873 INFO [pool-5-thread-5]: session.SessionState (SessionState.java:start(358)) - No Tez session required at this point. hive.execution.engine=mr.

The problem arises when, after that, the following error starts appearing repeatedly:

  2016-02-01 11:52:48,227 ERROR [pool-5-thread-4]: server.TThreadPoolServer (TThreadPoolServer.java:run(215)) - Error occurred during processing of message.
  java.lang.RuntimeException: org.apache.thrift.transport.TTransportException
          at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
          at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:189)
          at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
          at java.lang.Thread.run(Thread.java:745)
  Caused by: org.apache.thrift.transport.TTransportException
          at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
          at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
          at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:182)
          at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
          at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
          at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
          at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
          ... 4 more
  2016-02-01 11:53:18,323 ERROR [pool-5-thread-5]: server.TThreadPoolServer (TThreadPoolServer.java:run(215)) - Error occurred during processing of message.
  java.lang.RuntimeException: org.apache.thrift.transport.TTransportException
          at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
          at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:189)
          at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
          at java.lang.Thread.run(Thread.java:745)
  Caused by: org.apache.thrift.transport.TTransportException
          at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
          at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
          at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:182)
          at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
          at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
          at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
          at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
          ... 4 more

Why? I have seen in several other questions that this occurs when hive.server2.authentication is set to SASL and the client does not perform the handshake. But in my case the value of that property is CUSTOM. I cannot understand it, and any help would be highly appreciated.
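Some background on why the handshake matters (this is an assumption based on the Thrift SASL wire format, not something shown in the logs above): even with CUSTOM authentication the transport is still SASL, so every client must open the connection with a START message consisting of a 1-byte status code (0x01 = START), a 4-byte big-endian payload length, and the mechanism name (PLAIN for password-based mechanisms). A sketch of what that well-formed first frame looks like on the wire:

```shell
# Build the 10-byte SASL START frame for mechanism "PLAIN" and dump it as hex:
# status 0x01 (START) + 4-byte big-endian length (5) + the bytes of "PLAIN"
frame_hex=$(printf '\001\000\000\000\005PLAIN' | od -An -tx1)
echo $frame_hex   # unquoted echo collapses od's padding spaces
```

A connection that sends anything else, or closes before a full frame arrives, makes handleSaslStartMessage fail with exactly the TTransportException shown above.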

EDIT 1

I have found that there are periodic requests to HiveServer2... coming from HiveServer2's own machine! These are the requests that lead to the Thrift SASL errors:

  $ sudo tcpdump -i lo port 10000
  tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
  listening on lo, link-type EN10MB (Ethernet), capture size 65535 bytes
  ...
  ...
  10:18:48.183469 IP dev-fiwr-bignode-11.hi.inet.ndmp > dev-fiwr-bignode-11.hi.inet.55758: Flags [.], ack 7, win 512, options [nop,nop,TS val 1034162147 ecr 1034162107], length 0
  ^C
  21 packets captured
  42 packets received by filter
  0 packets dropped by kernel
  [fiware-portal@dev-fiwr-bignode-11 ~]$ sudo netstat -nap | grep 55758
  tcp 0 0 10.95.76.91:10000 10.95.76.91:55758 CLOSE_WAIT 7190/java
  tcp 0 0 10.95.76.91:55758 10.95.76.91:10000 FIN_WAIT2 -
  [fiware-portal@dev-fiwr-bignode-11 ~]$ ps -ef | grep 7190
  hive 7190 1 1 10:10 ? 00:00:10 /usr/java/jdk1.7.0_71//bin/java -Xmx1024m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/hive -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/lib/hadoop -Dhadoop.id.str=hive -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/lib/hadoop/lib/native/Linux-amd64-64:/usr/lib/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx1024m -Xmx4096m -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /usr/lib/hive/lib/hive-service-0.13.0.2.1.7.0-784.jar org.apache.hive.service.server.HiveServer2 -hiveconf hive.metastore.uris=" " -hiveconf hive.log.file=hiveserver2.log -hiveconf hive.log.dir=/var/log/hive
  1011 14158 12305 0 10:19 pts/1 00:00:00 grep 7190

Any ideas?

EDIT 2

Some more research about the connections HiveServer2 sends to itself: the packets always carry the same 5 bytes, the following ones (in hex): 22 41 30 30 31. Any idea about these connections?
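One quick check (my own decoding, not something stated in the question): those five bytes are printable ASCII, and the first byte 0x22 is not one of the status codes a Thrift SASL frame may start with, so the server rejects the connection in handleSaslStartMessage. They look like a plain-text liveness probe rather than a real client:

```shell
# Decode the probe bytes 22 41 30 30 31 (hex) to ASCII
# (octal escapes: \042 = 0x22, \101 = 0x41, \060 = 0x30, \061 = 0x31)
probe=$(printf '\042\101\060\060\061')
echo "$probe"
```

The output is the string "A001 (a double quote followed by A001), which is consistent with a scripted health check writing a fixed token to the port.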


irlmq6kh1#

I finally "fixed" it. Since the messages were being sent by the Ambari agent running on the HiveServer2 machine (a kind of weird ping), I simply added an iptables rule blocking all connections to TCP port 10000 on the loopback interface:

  iptables -A INPUT -i lo -p tcp --dport 10000 -j DROP

Of course, now Ambari warns that HiveServer2 is not alive (its pings are dropped). If I want to restart the server from Ambari, I have to remove the above rule first (there is another liveness check in the start script); once restarted, I can enable the rule again. Well, I can live with that.
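The restart procedure described above can be sketched as a small firewall-config fragment (requires root; the -D rule must match the -A rule exactly for the delete to succeed):

```shell
# Open the loopback port so the start script's liveness check can pass:
iptables -D INPUT -i lo -p tcp --dport 10000 -j DROP
# ... restart HiveServer2 from the Ambari UI ...
# Then put the block back so the agent's probes stop hitting the SASL transport:
iptables -A INPUT -i lo -p tcp --dport 10000 -j DROP
```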
