Uploading a file via the WebHDFS REST API times out

jvidinwx · posted 2021-07-15 in Hadoop

I'm trying out the Hortonworks HDP sandbox 3.0.1 with Docker on Windows/WSL2.
I can upload files using the Ambari Files View in the dashboard and query them without problems.
I can also upload small CSV files (~100 KB) via the WebHDFS REST API, but I run into trouble with larger files. I've tried 25 MB and 60 MB, and both fail.
I ran docker-deploy-hdp30.sh, so I have the sandbox container running together with the sandbox nginx proxy to expose all the ports:

hortonworks/sandbox-hdp:3.0.1
hortonworks/sandbox-proxy:1.0

I'm trying to create and upload a file with curl using the Hadoop WebHDFS REST API, following the instructions here.
I have the following hosts entry:

127.0.0.1       sandbox-hdp.hortonworks.com

Submitting the first request:

curl -i -X PUT "http://sandbox-hdp.hortonworks.com:50070/webhdfs/v1/user/admin/test/test.csv?op=CREATE&user.name=admin"

returns

HTTP/1.1 307 Temporary Redirect
Server: nginx/1.15.0
Date: Wed, 20 Jan 2021 23:28:28 GMT
Content-Type: application/octet-stream
Content-Length: 0
Connection: keep-alive
Cache-Control: no-cache
Expires: Wed, 20 Jan 2021 23:28:28 GMT
Pragma: no-cache
X-FRAME-OPTIONS: SAMEORIGIN
Set-Cookie: hadoop.auth="u=admin&p=admin&t=simple&e=1611221308442&s=rcoCzQsvXmjzp8pqzIHD/kocm+NdZUlBOD4WDCi1m9w="; Path=/; HttpOnly
Location: http://sandbox-hdp.hortonworks.com:50075/webhdfs/v1/user/admin/test/test.csv?op=CREATE&user.name=admin&namenoderpcaddress=sandbox-hdp.hortonworks.com:8020&createflag=&createparent=true&overwrite=false

Then, as described in the documentation, I copy the Location header into a second PUT request that sends the file to upload:

curl -i -X PUT -T test.csv "http://sandbox-hdp.hortonworks.com:50075/webhdfs/v1/user/admin/test/test.csv?op=CREATE&user.name=admin&namenoderpcaddress=sandbox-hdp.hortonworks.com:8020&createflag=&createparent=true&overwrite=false"
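For scripting, the same two-step dance (PUT with no body to the NameNode, then PUT the file body to the DataNode URL from the Location header) can be sketched in plain Python. This is a minimal illustration, not Hortonworks-provided code; the host names and HDFS paths are the ones from my question:

```python
import http.client
import urllib.parse

def create_url_path(hdfs_path, user):
    """Path + query string for the WebHDFS CREATE op (step 1, no body)."""
    return f"/webhdfs/v1{hdfs_path}?op=CREATE&user.name={user}"

def upload(local_file, namenode_host, namenode_port, hdfs_path, user):
    # Step 1: PUT without a body; the NameNode answers 307 Temporary
    # Redirect with a DataNode URL in the Location header. We do NOT
    # follow the redirect automatically, so the body is sent only once,
    # to the second URL -- exactly what the two curl calls do.
    conn = http.client.HTTPConnection(namenode_host, namenode_port)
    conn.request("PUT", create_url_path(hdfs_path, user))
    resp = conn.getresponse()
    location = resp.getheader("Location")
    resp.read()
    conn.close()
    assert location, f"expected a 307 redirect, got HTTP {resp.status}"

    # Step 2: PUT the file body to the DataNode URL from step 1.
    loc = urllib.parse.urlsplit(location)
    conn2 = http.client.HTTPConnection(loc.hostname, loc.port)
    with open(local_file, "rb") as f:
        conn2.request("PUT", f"{loc.path}?{loc.query}", body=f)
    resp2 = conn2.getresponse()
    status = resp2.status  # 201 Created on success
    conn2.close()
    return status

# Usage (against the sandbox):
# upload("test.csv", "sandbox-hdp.hortonworks.com", 50070,
#        "/user/admin/test/test.csv", "admin")
```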

After about a minute, this fails with:

HTTP/1.1 100 Continue

HTTP/1.1 100 Continue
Server: nginx/1.15.0
Date: Wed, 20 Jan 2021 23:29:03 GMT
Connection: keep-alive

curl: (52) Empty reply from server

I don't see any errors in the NameNode or DataNode log files, but the Docker logs of the sandbox-proxy (nginx) container show:

172.21.0.1 - - [20/Jan/2021:23:28:28 +0000] "PUT /webhdfs/v1/user/admin/test/test.csv?op=CREATE&user.name=admin HTTP/1.1" 307 0 "-" "curl/7.55.1" "-"
2021/01/20 23:28:56 [warn] 7#7: *1882 a client request body is buffered to a temporary file /var/cache/nginx/client_temp/0000000010, client: 172.21.0.1, server: sandbox-hdp.hortonworks.com, request: "PUT /webhdfs/v1/user/admin/test/test.csv?op=CREATE&user.name=admin&namenoderpcaddress=sandbox-hdp.hortonworks.com:8020&createflag=&createparent=true&overwrite=false HTTP/1.1", host: "sandbox-hdp.hortonworks.com:50075"
2021/01/20 23:30:03 [error] 7#7: *1882 upstream timed out (110: Connection timed out) while reading upstream, client: 172.21.0.1, server: sandbox-hdp.hortonworks.com, request: "PUT /webhdfs/v1/user/admin/test/test.csv?op=CREATE&user.name=admin&namenoderpcaddress=sandbox-hdp.hortonworks.com:8020&createflag=&createparent=true&overwrite=false HTTP/1.1", upstream: "http://172.21.0.2:50075/webhdfs/v1/user/admin/test/test.csv?op=CREATE&user.name=admin&namenoderpcaddress=sandbox-hdp.hortonworks.com:8020&createflag=&createparent=true&overwrite=false", host: "sandbox-hdp.hortonworks.com:50075"
172.21.0.1 - - [20/Jan/2021:23:30:03 +0000] "PUT /webhdfs/v1/user/admin/test/test.csv?op=CREATE&user.name=admin&namenoderpcaddress=sandbox-hdp.hortonworks.com:8020&createflag=&createparent=true&overwrite=false HTTP/1.1" 100 25 "-" "curl/7.55.1" "-"
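The "upstream timed out (110: Connection timed out) while reading upstream" line, arriving roughly 60 seconds after the body finished buffering, looks like nginx's default 60-second proxy timeout expiring while the large upload is relayed to the DataNode. If that is the cause, raising the timeouts (and the body-size/buffering settings flagged by the "request body is buffered to a temporary file" warning) in the sandbox-proxy's nginx config might help. This is a hypothetical excerpt, not the actual config shipped in hortonworks/sandbox-proxy:1.0; the directive names are standard nginx, but the server/location layout and upstream name below are assumptions:

```nginx
server {
    listen 50075;
    server_name sandbox-hdp.hortonworks.com;

    location / {
        # Upstream target is an assumption; the logs show 172.21.0.2:50075.
        proxy_pass http://sandbox-hdp:50075;
        client_max_body_size 0;        # don't cap the upload size
        proxy_request_buffering off;   # stream the body instead of buffering to a temp file
        proxy_read_timeout 600s;       # default is 60s, matching the ~1 minute failure
        proxy_send_timeout 600s;
    }
}
```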

In the Ambari Files View the file does appear, but it contains only the first 429.8 KB instead of the 60 MB I expected.
Incidentally, if I upload the file under its original name (not test.csv!), the uploaded size differs slightly (497.8 KB) even though the content is identical. When downloaded back, the size on disk is exactly 500 KB.
Any idea why this times out? Could it be writing only the first block?
I can upload this file and other files of similar size just fine through the Ambari web UI.
For small files, the REST API call completes almost immediately:

HTTP/1.1 100 Continue

HTTP/1.1 100 Continue
Server: nginx/1.15.0
Date: Thu, 21 Jan 2021 00:18:21 GMT
Connection: keep-alive

HTTP/1.1 201 Created
Location: hdfs://sandbox-hdp.hortonworks.com:8020/user/admin/test/small.csv
Content-Length: 0
Access-Control-Allow-Origin: *
Connection: close

Thanks.
