Standalone Hive metastore with Iceberg and S3

ljo96ir5 asked on 2021-05-27 in Hadoop
1 answer · 445 views

I want to use Presto to query Iceberg tables stored as Parquet files in S3, so I need a Hive metastore. I am running a standalone Hive metastore service backed by MySQL, and I have configured Iceberg to use the Hive catalog:

import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.catalog.Namespace;
import org.apache.iceberg.hive.HiveCatalog;

public class MetastoreTest {

    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Thrift endpoint of the standalone Hive metastore service
        conf.set("hive.metastore.uris", "thrift://x.x.x.x:9083");
        // Warehouse root under which table data and metadata are stored
        conf.set("hive.metastore.warehouse.dir", "s3://bucket/warehouse");
        HiveCatalog catalog = new HiveCatalog(conf);
        catalog.createNamespace(Namespace.of("my_metastore"));
    }

}

I get the following error: Caused by: MetaException(message:Got exception: org.apache.hadoop.fs.UnsupportedFileSystemException No FileSystem for scheme "s3"). I have already added /hadoop-3.3.0/share/hadoop/tools/lib to HADOOP_CLASSPATH, and I also copied the AWS-related jars into apache-hive-metastore-3.0.0-bin/lib. What is still missing?
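For reference, the classpath change described above would look roughly like this before starting the metastore (a sketch; the assumptions that HADOOP_HOME points at the hadoop-3.3.0 install and that METASTORE_HOME points at the standalone metastore distribution are mine, not the poster's):

# Assumption: HADOOP_HOME points at the hadoop-3.3.0 installation
export HADOOP_CLASSPATH="$HADOOP_HOME/share/hadoop/tools/lib/*"
# Restart the standalone metastore so it picks up the extra jars
# (METASTORE_HOME is a hypothetical variable for apache-hive-metastore-3.0.0-bin)
$METASTORE_HOME/bin/start-metastore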


4si2a6ki · answer 1

Finally figured it out. First (as I mentioned above) I had to add hadoop/share/hadoop/tools/lib to HADOOP_CLASSPATH. However, on Hadoop 3.3.0 neither changing HADOOP_CLASSPATH nor copying specific jars from tools to common worked for me. I then switched to hadoop-2.7.7 and it worked. I also had to copy the Jackson jars from tools to common. My hadoop/etc/hadoop/core-site.xml looks like this:

<configuration>

    <property>
        <name>fs.default.name</name>
        <value>s3a://{bucket_name}</value>
    </property>

    <property>
        <name>fs.s3a.impl</name>
        <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
    </property>

    <property>
        <name>fs.s3a.endpoint</name>
        <value>{s3_endpoint}</value>
        <description>AWS S3 endpoint to connect to. An up-to-date list is
            provided in the AWS Documentation: regions and endpoints. Without this
            property, the standard region (s3.amazonaws.com) is assumed.
        </description>
    </property>

    <property>
        <name>fs.s3a.access.key</name>
        <value>{access_key}</value>
    </property>

    <property>
        <name>fs.s3a.secret.key</name>
        <value>{secret_key}</value>
    </property>

</configuration>
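If you would rather not rely on core-site.xml, the same S3A settings can be applied programmatically to the Configuration object from the question. A minimal sketch using the standard fs.s3a.* keys (the placeholder values mirror the XML above):

// Equivalent of the core-site.xml above, set directly on the Configuration
conf.set("fs.default.name", "s3a://{bucket_name}"); // fs.defaultFS is the non-deprecated key
conf.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem");
conf.set("fs.s3a.endpoint", "{s3_endpoint}");
conf.set("fs.s3a.access.key", "{access_key}");
conf.set("fs.s3a.secret.key", "{secret_key}");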

At this point, you should be able to run: hadoop fs -ls s3a://{bucket}/
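From there, querying through Presto (the original goal) only needs a catalog file pointing the Iceberg connector at this metastore. A minimal sketch of etc/catalog/iceberg.properties, reusing the Thrift URI from the question:

connector.name=iceberg
hive.metastore.uri=thrift://x.x.x.x:9083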
