Hadoop KMS with HDFS keystore: No FileSystem for scheme "hdfs"

bvk5enib asked on 2021-05-31 in Hadoop

I've been trying to configure Hadoop KMS to use HDFS as its key provider backing store. Following the Hadoop documentation, I added the following property to kms-site.xml:

<property>
  <name>hadoop.kms.key.provider.uri</name>
  <value>jceks://hdfs@nn1.example.com/kms/test.jceks</value>
  <description>
    URI of the backing KeyProvider for the KMS.
  </description>
</property>

That path exists in HDFS, and I expected the KMS to create the test.jceks keystore file there. However, the KMS fails to start with the following error:

ERROR: Hadoop KMS could not be started

REASON: org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme "hdfs"

Stacktrace:
---------------------------------------------------
org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme "hdfs"
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3220)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3240)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3291)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3259)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
    at org.apache.hadoop.crypto.key.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:132)
    at org.apache.hadoop.crypto.key.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:88)
    at org.apache.hadoop.crypto.key.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:660)
    at org.apache.hadoop.crypto.key.KeyProviderFactory.get(KeyProviderFactory.java:96)
    at org.apache.hadoop.crypto.key.kms.server.KMSWebApp.contextInitialized(KMSWebApp.java:187)
    at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4276)
    at org.apache.catalina.core.StandardContext.start(StandardContext.java:4779)
    at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:803)
    at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:780)
    at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:583)
    at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1080)
    at org.apache.catalina.startup.HostConfig.deployDirectories(HostConfig.java:1003)
    at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:507)
    at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1322)
    at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:325)
    at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:142)
    at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1069)
    at org.apache.catalina.core.StandardHost.start(StandardHost.java:822)
    at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1061)
    at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:463)
    at org.apache.catalina.core.StandardService.start(StandardService.java:525)
    at org.apache.catalina.core.StandardServer.start(StandardServer.java:761)
    at org.apache.catalina.startup.Catalina.start(Catalina.java:595)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289)
    at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)

As far as I can tell, the error means no FileSystem implementation is registered for the hdfs scheme. I've searched for this error, but the results always point to the HDFS client missing jars after an upgrade, which doesn't apply here (this is a fresh installation). I'm using Hadoop 2.7.2.
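For context, the missing-jar reports usually boil down to the hdfs scheme not being resolvable on the process classpath; the workaround typically described is to register the implementation explicitly in core-site.xml. A sketch only, and, as the answer below explains, not a fix for this use case:

<!-- Sketch only: explicitly maps the hdfs scheme to its FileSystem class.
     Still requires hadoop-hdfs (DistributedFileSystem) on the KMS classpath. -->
<property>
  <name>fs.hdfs.impl</name>
  <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
</property>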
Thanks for any help!

zed5wv10 · answer 1

I asked the same question on Hadoop's JIRA issue tracker. As user Wei-Chiu Chuang pointed out, keeping the keystore in HDFS is not a valid use case. The KMS cannot use HDFS as its backing store, because every HDFS client file access would then go through a loop of HDFS NameNode --> KMS --> HDFS NameNode --> KMS --> ...
Therefore, a file-based KMS can only use a keystore file on the local file system.
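In other words, the provider URI has to point at the local file system. A minimal sketch of such a kms-site.xml entry, using the default path from the Hadoop KMS documentation (adjust the location for your deployment):

<!-- Local-filesystem keystore for the KMS; ${user.home}/kms.keystore is the
     documented default location and can be replaced with any local path. -->
<property>
  <name>hadoop.kms.key.provider.uri</name>
  <value>jceks://file@/${user.home}/kms.keystore</value>
  <description>
    URI of the backing KeyProvider for the KMS, on the local file system.
  </description>
</property>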
