We are unable to write data to BigQuery and are facing the following issue. BigQuery connector jar: spark-3.4-bigquery-0.33.0.jar
ERROR Client: Application diagnostics message: User class threw exception: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:49)
    at org.springframework.boot.loader.Launcher.launch(Launcher.java:108)
    at org.springframework.boot.loader.Launcher.launch(Launcher.java:58)
    at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:65)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:760)
Caused by: com.google.cloud.spark.bigquery.repackaged.com.google.inject.ProvisionException: Unable to provision, see the following errors:
1) [Guice/ErrorInCustomProvider]: IllegalArgumentException: BigQueryConnectorException$InvalidSchemaException: Destination table's schema is not compatible with dataframe's schema
  at BigQueryDataSourceWriterModule.provideDirectDataSourceWriterContext(BigQueryDataSourceWriterModule.java:61)
  while locating BigQueryDirectDataSourceWriterContext
Code snippet: we are using the following code.

System.setProperty("GOOGLE_CLOUD_PROJECT", "");
SparkSession spark = SparkSession.builder().appName("BigQuerySparkApp").getOrCreate();
Dataset<Row> csvData = spark.read().format("csv")
    .option("inferSchema", "true")
    .option("header", "true")
    .load("csv");
String projectID = "";
String datasetID = "";
String tableID = "communication_master";
String tableFullName = projectID + ":" + datasetID + "." + tableID;
csvData.write().format("bigquery")
    .option("writeMethod", "direct")
    .option("table", tableFullName)
    .option("schema", "")
    .option("credentials", "")
    .mode(SaveMode.Overwrite)
    .save();
spark.stop();
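Not part of the question, but the InvalidSchemaException above typically means the schema Spark inferred from the CSV does not match the existing BigQuery table. One way to rule that out is to drop inferSchema and declare the schema explicitly so column names and types line up with the destination table. A minimal sketch, assuming hypothetical columns id, message, and sent_at (requires a Spark runtime plus the BigQuery connector on the classpath, so it is not runnable standalone):

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructType;

public class SchemaAlignedWrite {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .appName("BigQuerySparkApp")
            .getOrCreate();

        // Declare the schema explicitly instead of using inferSchema, so the
        // DataFrame's column types match the destination table exactly.
        // Column names and types here are hypothetical.
        StructType schema = new StructType()
            .add("id", DataTypes.LongType)
            .add("message", DataTypes.StringType)
            .add("sent_at", DataTypes.TimestampType);

        Dataset<Row> csvData = spark.read()
            .schema(schema)
            .option("header", "true")
            .format("csv")
            .load("csv");

        // printSchema() lets you compare against the table definition in the
        // BigQuery console before attempting the direct write.
        csvData.printSchema();

        csvData.write().format("bigquery")
            .option("writeMethod", "direct")
            .option("table", "project:dataset.communication_master")
            .mode(SaveMode.Overwrite)
            .save();

        spark.stop();
    }
}
```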
1 Answer
8i9zcol2 1#
We all need to solve problems, but keep in mind that some of them are hidden inside other problems.
This issue is documented on the Spark BigQuery Connector GitHub. It appears to be an internal problem; I am inferring here, since you have not shared the code that triggers it.
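If the mismatch is only added or relaxed columns, the connector's README describes schema-update options that can be tried as a workaround. The fragment below is a hypothetical spark-submit invocation, not a confirmed fix: the `spark.datasource.bigquery.` prefix for passing connector options globally, the `allowFieldAddition`/`allowFieldRelaxation` options (documented for the indirect write method), and all names such as `com.example.BigQuerySparkApp` and the bucket are assumptions to verify against the connector docs for your version:

```shell
# Hypothetical invocation: ship the connector jar explicitly, switch to the
# indirect write method (which needs a temporary GCS bucket), and allow the
# load job to add/relax fields instead of failing on a schema mismatch.
spark-submit \
  --class com.example.BigQuerySparkApp \
  --jars spark-3.4-bigquery-0.33.0.jar \
  --conf spark.datasource.bigquery.writeMethod=indirect \
  --conf spark.datasource.bigquery.temporaryGcsBucket=some-temp-bucket \
  --conf spark.datasource.bigquery.allowFieldAddition=true \
  --conf spark.datasource.bigquery.allowFieldRelaxation=true \
  app.jar
```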