Why does the same operation produce different results from Spark and from the Hive shell?

fdbelqdn · posted 2021-05-27 in Spark

This code inserts the data from Spark:

String warehouseLocation = new File("spark-warehouse").getAbsolutePath();
        SparkSession sparkSession = SparkSession.builder()
                .appName(appName)
                .config("spark.sql.warehouse.dir", warehouseLocation)
                .config("spark.sql.catalogImplementation","hive")
                .enableHiveSupport()
                .config("hive.exec.dynamic.partition", "true")
                .config("hive.exec.dynamic.partition.mode", "nonstrict")
                .getOrCreate();
        JavaStreamingContext jssc = new JavaStreamingContext(new JavaSparkContext(sparkSession.sparkContext()),
                Durations.seconds(duration));

        SQLContext sqlContext = sparkSession.sqlContext();
        sqlContext.sql("CREATE TABLE IF NOT EXISTS " + tableName + " (value1 STRING, value2 STRING, value3 STRING, " +
                "value4 STRING, value5 STRING, value6 STRING, value7 STRING) PARTITIONED BY (year STRING, mounth STRING, day STRING)" +
                " STORED AS ORC");

        sqlContext.sql("SET hive.merge.tezfiles=true");
        sqlContext.sql("SET hive.merge.mapfiles=true");
        sqlContext.sql( "SET hive.merge.size.per.task=256000000");
        sqlContext.sql ( "SET hive.merge.smallfiles.avgsize=16000000");
        sqlContext.sql("SET hive.merge.orcfile.stripe.level=true;");

        Map<String, Object> kafkaParams = new HashMap<>();
        kafkaParams.put("bootstrap.servers", broker);
        kafkaParams.put("key.deserializer", StringDeserializer.class);
        kafkaParams.put("value.deserializer", StringDeserializer.class);
        kafkaParams.put("group.id", "use_a_separate_group_id_for_each_stream");
        kafkaParams.put("auto.offset.reset", "latest");
        kafkaParams.put("enable.auto.commit", false);

        Collection<String> topicsSet = Collections.singletonList(topic);

        // Create direct kafka stream with brokers and topics
        JavaInputDStream<ConsumerRecord<String, String>> messages = KafkaUtils.createDirectStream(
                jssc,
                LocationStrategies.PreferConsistent(),
                ConsumerStrategies.Subscribe(topicsSet, kafkaParams));

        // Map each Kafka record to its value and insert each batch into the Hive table
        JavaDStream<String> lines = messages.map(ConsumerRecord::value);
        lines.foreachRDD(new VoidFunction<JavaRDD<String>>() {
            @Override
            public void call(JavaRDD<String> rdd) {
                if (!rdd.isEmpty()) {
                    JavaRDD<Data> dataRDD = rdd.map(new Function<String, Data>() {
                        @Override
                        public Data call(String msg) {
                            try {
                                return Data.insertDataByString(msg);
                            } catch (ParseException e) {
                                e.printStackTrace();
                            }

                            return null;
                        }
                    });

                    Dataset<Row> dataRow = sqlContext.createDataFrame(dataRDD, Data.class);
                    dataRow.createOrReplaceTempView("temp_table");

                    sqlContext.sql("insert into " + tableName + " partition(year,mounth,day) select value1, value2, " +
                            "value3, value4, value5, value6, value7, year, mounth, day from temp_table");
                    //dataRow.write().format("orc").partitionBy("year", "day").mode(SaveMode.Append).insertInto(tableName);
                    //sqlContext.sql("ALTER TABLE " + tableName + " PARTITION(year='2020', mounth='4', day='26') " +  " CONCATENATE");

                }
            }
        });

When this code is executed, the table is created under hdfs://master.vmware.local:8020/apps/spark/warehouse/tablename/year=2020/mounth=4/day=26, and inside day=26 there are several files ending in c000. If the table is created from the Hive shell instead, it ends up in a different location, hdfs://master.vmware.local:8020/warehouse/tablespace/managed/hive/table_name/year=2020/mounth=4/day=26/, and inside day=26 the files shown are _orc_acid_version and bucket_000000.
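For context, a minimal sketch (the metastore URI and warehouse path below are assumptions, not values taken from the question) of pointing the SparkSession at the Hive metastore's managed warehouse instead of the local spark-warehouse directory:

        // Sketch only: resolve tables through the Hive metastore so Spark and the
        // Hive shell see the same warehouse location (URI and path are assumed).
        SparkSession sparkSession = SparkSession.builder()
                .appName(appName)
                .config("hive.metastore.uris", "thrift://master.vmware.local:9083")
                .config("spark.sql.warehouse.dir", "/warehouse/tablespace/managed/hive")
                .enableHiveSupport()
                .getOrCreate();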
My goal is to create ORC files with Spark, but I think that with Spark the data is being saved in Hive's default format.
How can I save data from Spark with Hive support as ORC files?
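For illustration, a minimal sketch of a path-based variant of the commented-out write in the code above (the output path is only an assumption); it writes plain ORC data files rather than Hive ACID bucket files:

        // Sketch only: write the DataFrame straight to HDFS as ORC, partitioned the
        // same way as the table (the output path is assumed, not from the question).
        dataRow.write()
                .mode(SaveMode.Append)
                .partitionBy("year", "mounth", "day")
                .orc("/apps/spark/warehouse/" + tableName);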

No answers yet.

