Exporting Datastore to datastore_backup in a GCS bucket using Apache Spark

des4xlb0  posted on 2021-05-18  in Spark
Follow (0) | Answers (1) | Views (449)

I want to export my Datastore to a GCS bucket every day in datastore_backup format. Currently I am exporting through the GCP Datastore export service with a curl command, like this:

```
curl -X POST \
  -H "Authorization: Bearer $access_token" \
  -H "Content-Type: application/json" \
  https://datastore.googleapis.com/v1/projects/viu-data-warehouse-prod:export \
  -d '{
    "labels": {
      "exportVersion": "'"$BUILD_ID"'"
    },
    "outputUrlPrefix": "'"$output_url"'",
    "entityFilter": {
      "namespaceIds": ["customer_one_view"],
      "kinds": ["user_view"]
    }
  }'
```

I want it to be done by Apache Spark to make it faster. My problem is that the export takes 5 to 6 hours to finish, and as the data grows, the runtime keeps increasing. I need suggestions on how to optimize this process through parallel processing. I would like to do it via Apache Spark, as it is very fast. Please suggest how I can do this.

jucafojl1#

If you are not limited to Spark or to that specific export format, you could start from the Datastore to GCS Text Apache Beam (Dataflow) template at https://cloud.google.com/dataflow/docs/guides/templates/provided-batch#datastore-to-cloud-storage-text and fork it to fit your needs.
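Note that this approach writes text (e.g. JSON lines), not the LevelDB-based datastore_backup format, which only the managed export service produces. If a text export is acceptable, a minimal Beam sketch along the lines of that template could look like the one below. The project id, region, bucket paths, and runner settings are all placeholders you would replace with your own; the real template in the DataflowTemplates repo handles many more edge cases:

```python
# Minimal sketch: read Datastore entities in parallel and write them
# to GCS as JSON lines. All project/bucket names below are placeholders.
import json

import apache_beam as beam
from apache_beam.io.gcp.datastore.v1new.datastoreio import ReadFromDatastore
from apache_beam.io.gcp.datastore.v1new.types import Query
from apache_beam.options.pipeline_options import PipelineOptions


def entity_to_json(entity):
    # Serialize the entity's properties as one JSON line; default=str
    # handles non-JSON-native values such as timestamps and keys.
    return json.dumps(dict(entity.properties), default=str)


def run():
    options = PipelineOptions(
        runner="DataflowRunner",          # run on Dataflow for parallelism
        project="my-project",             # placeholder project id
        region="us-central1",             # placeholder region
        temp_location="gs://my-bucket/tmp",
    )
    # Query the same namespace/kind as the curl export above.
    query = Query(
        kind="user_view",
        project="my-project",
        namespace="customer_one_view",
    )
    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadFromDatastore" >> ReadFromDatastore(query)
            | "EntityToJson" >> beam.Map(entity_to_json)
            | "WriteToGCS" >> beam.io.WriteToText(
                "gs://my-bucket/export/user_view",
                file_name_suffix=".json",
            )
        )


if __name__ == "__main__":
    run()
```

The parallelism comes from ReadFromDatastore, which splits the query into chunks that Dataflow workers read concurrently, so the job scales out as the data grows instead of running as one long sequential export call.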
