Calling a Java `java.util.HashMap`-based function from PySpark

yi0zb3m4 · Published 2021-05-27 in Spark

Here is the Python code:

from pyspark.sql import SparkSession
from pyspark.streaming import StreamingContext

spark = SparkSession.builder.appName('test').master("local[*]").getOrCreate()
ssc = StreamingContext(sparkContext=spark.sparkContext, batchDuration=1)

# Build a java.util.HashMap on the JVM through the py4j gateway
kafka_params = ssc.sparkContext._gateway.jvm.java.util.HashMap()
kafka_params["group.id"] = "zrh-test-stream1"

# Call the Scala object's main method, passing the HashMap
spark._jvm.com.test.kafka.Bye.main(kafka_params)

Scala code:
package com.test.kafka

object Bye {
  def main(hhh: java.util.HashMap[String, Object]): Unit = {
    print("33333333333333333333")
    print(hhh.get(0)) // looks up the key 0, not "group.id"
    print("99999999999999999999999")
  }
}

When I submit this, I get a null value. The output is: `33333333333333333333null99999999999999999999999`
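The null is consistent with how `java.util.HashMap.get` resolves keys: the map was populated under the String key `"group.id"`, but the Scala side calls `hhh.get(0)`, which looks up the Integer key `0`. That key is absent, so the lookup returns null. A minimal plain-Python sketch of the same key mismatch, using a dict as a stand-in for the py4j-backed HashMap (the dict is an assumption for illustration only):

```python
# Stand-in for the java.util.HashMap built through the py4j gateway.
kafka_params = {}
kafka_params["group.id"] = "zrh-test-stream1"

# Mirrors hhh.get(0) on the Scala side: the key 0 is not in the map,
# so the lookup yields None (printed as "null" by the JVM).
print(kafka_params.get(0))           # → None

# The value is only found under the key it was stored with.
print(kafka_params.get("group.id"))  # → zrh-test-stream1
```

If the intent was to read the consumer group id, the Scala code would need to look it up by the same key, e.g. `hhh.get("group.id")`.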

No answers yet.
