How to run multiple Spark Cassandra queries

hjzp0vay  published on 2022-11-05  in Cassandra

I need to run a task like the one below, but somehow I'm missing a point. I know I can't use the JavaSparkContext like this and pass javaFunctions inside the closure, because of serialization problems.
I need to run multiple Cassandra queries, one per pair, cartesian.size() in total. Any suggestions?

JavaSparkContext jsc = new JavaSparkContext(conf);
JavaRDD<DateTime> dateTimeJavaRDD = jsc.parallelize(dateTimes); // List<DateTime>
JavaRDD<Integer> virtualPartitionJavaRDD = jsc.parallelize(virtualPartitions); // List<Integer>
JavaPairRDD<DateTime, Integer> cartesian = dateTimeJavaRDD.cartesian(virtualPartitionJavaRDD);

long c = cartesian.map(new Function<Tuple2<DateTime, Integer>, Long>() {
    @Override
    public Long call(Tuple2<DateTime, Integer> tuple2) throws Exception {
        // This fails: jsc is not serializable and cannot be used inside an executor-side closure
        return javaFunctions(jsc).cassandraTable("keyspace", "table")
                .where("p1 = ? and p2 = ?", tuple2._1(), tuple2._2()).count();
    }
}).reduce((a, b) -> a + b);

System.out.println("TOTAL ROW COUNT IS: " + c);
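For context, the cartesian product itself is cheap to build driver-side. The following plain-Java sketch (no Spark involved; the data values are hypothetical, and java.time is used in place of Joda's DateTime) produces the same (dateTime, partition) pairs that dateTimeJavaRDD.cartesian(virtualPartitionJavaRDD) yields:

```java
import java.time.LocalDateTime;
import java.util.AbstractMap.SimpleEntry;
import java.util.List;
import java.util.stream.Collectors;

public class CartesianSketch {
    public static void main(String[] args) {
        List<LocalDateTime> dateTimes = List.of(
                LocalDateTime.of(2022, 11, 1, 0, 0),
                LocalDateTime.of(2022, 11, 2, 0, 0));
        List<Integer> partitions = List.of(0, 1, 2);

        // Same pairs that the RDD cartesian produces: every date with every partition
        List<SimpleEntry<LocalDateTime, Integer>> pairs = dateTimes.stream()
                .flatMap(dt -> partitions.stream().map(p -> new SimpleEntry<>(dt, p)))
                .collect(Collectors.toList());

        System.out.println(pairs.size()); // 2 dates x 3 partitions = 6
    }
}
```

Running one Cassandra query per such pair is exactly what becomes expensive as the lists grow, which motivates the join-based answer below.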

xnifntxz1#

The correct solution is to perform a join between your data and the Cassandra table. The joinWithCassandraTable function does what you need: you just generate an RDD of Tuple2 containing the values of p1 and p2, and then call joinWithCassandraTable, something like this (not tested, taken from my example here):

// static imports from com.datastax.spark.connector.japi.CassandraJavaUtil:
// javaFunctions, someColumns, mapRowToTuple, mapTupleToRow
JavaRDD<Tuple2<Integer, Integer>> trdd = cartesian.map(new Function<Tuple2<DateTime, Integer>, Tuple2<Integer, Integer>>() {
    @Override
    public Tuple2<Integer, Integer> call(Tuple2<DateTime, Integer> tuple2) throws Exception {
        // The example table uses Integer keys, so the DateTime must be converted;
        // an epoch-seconds conversion is shown here purely for illustration
        return new Tuple2<Integer, Integer>((int) (tuple2._1().getMillis() / 1000), tuple2._2());
    }
});
CassandraJavaPairRDD<Tuple2<Integer, Integer>, Tuple2<Integer, String>> joinedRDD =
    javaFunctions(trdd).joinWithCassandraTable("test", "jtest",
        someColumns("p1", "p2"), someColumns("p1", "p2"),
        mapRowToTuple(Integer.class, String.class), mapTupleToRow(Integer.class));
// perform counting here, e.g. joinedRDD.count()
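To make the join-then-count idea concrete without a Spark cluster, here is a plain-Java sketch (all data and names hypothetical) of the same semantics: each key either matches rows in the table or contributes nothing, and the total is computed in one pass instead of one query per key:

```java
import java.util.List;
import java.util.Map;

public class JoinCountSketch {
    public static void main(String[] args) {
        // Hypothetical "table": row count per (p1, p2) key
        Map<List<Integer>, Integer> rowsByKey = Map.of(
                List.of(1, 10), 3,
                List.of(1, 20), 2,
                List.of(2, 10), 5);

        // The keys we join against (the Tuple2 RDD in the answer)
        List<List<Integer>> keys = List.of(
                List.of(1, 10), List.of(2, 10), List.of(9, 9));

        // Join-then-count: keys with no matching rows contribute 0
        long total = keys.stream()
                .mapToLong(k -> rowsByKey.getOrDefault(k, 0))
                .sum();

        System.out.println(total); // 3 + 5 + 0 = 8
    }
}
```

In the real join, this per-key lookup happens on the executors against Cassandra's partition index, so the work is distributed rather than issued as cartesian.size() separate driver-side queries.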
