How do I inject environment variables into the driver pod when using spark-on-k8s?

brtdzjyr asked on 2021-05-27 in Spark

I am writing a Spark application on Kubernetes using the GCP spark-on-k8s operator.
Currently, I cannot get environment variables injected into the container.
I am following the docs, which show:

apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
  name: spark-search-indexer
  namespace: spark-operator
spec:
  type: Scala
  mode: cluster
  image: "gcr.io/spark-operator/spark:v2.4.5"
  imagePullPolicy: Always
  mainClass: com.quid.indexer.news.jobs.ESIndexingJob
  mainApplicationFile: "https://lala.com/baba-0.0.43.jar"
  arguments:
    - "--esSink"
    - "http://something:9200/mo-sn-{yyyy-MM}-v0.0.43/searchable-article"
    - "-streaming"
    - "--kafkaTopics"
    - "annotated_blogs,annotated_ln_news,annotated_news"
    - "--kafkaBrokers"
    - "10.1.1.1:9092"
  sparkVersion: "2.4.5"
  restartPolicy:
    type: Never
  volumes:
    - name: "test-volume"
      hostPath:
        path: "/tmp"
        type: Directory
  driver:
    cores: 1
    coreLimit: "1200m"
    memory: "512m"
    env:
      - name: "DEMOGRAPHICS_ES_URI"
        value: "somevalue"
    labels:
      version: 2.4.5
    volumeMounts:
      - name: "test-volume"
        mountPath: "/tmp"
  executor:
    cores: 1
    instances: 1
    memory: "512m"
    env:
      - name: "DEMOGRAPHICS_ES_URI"
        value: "somevalue"
    labels:
      version: 2.4.5
    volumeMounts:
      - name: "test-volume"
        mountPath: "/tmp"

Environment variables set on the pod:

Environment:
      SPARK_DRIVER_BIND_ADDRESS:   (v1:status.podIP)
      SPARK_LOCAL_DIRS:           /var/data/spark-1ed8539d-b157-4fab-9aa6-daff5789bfb5
      SPARK_CONF_DIR:             /opt/spark/conf

Answer 1, by 6bc51xsx:

It turns out that to use `env` you must enable webhooks (a sketch of enabling them through the Helm chart follows the reference below). Another approach is to use `envVars`, for example:

spec:
  executor:
    envVars:
      DEMOGRAPHICS_ES_URI: "somevalue"
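
For completeness, here is a minimal sketch of the spec from the question rewritten to set the variable on both the driver and the executor with `envVars` (the name and value are taken from the original spec; treat this as illustrative, not authoritative):

apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
  name: spark-search-indexer
  namespace: spark-operator
spec:
  type: Scala
  mode: cluster
  sparkVersion: "2.4.5"
  driver:
    # envVars is a plain key/value map, unlike env,
    # which takes the full Kubernetes EnvVar structure
    envVars:
      DEMOGRAPHICS_ES_URI: "somevalue"
  executor:
    envVars:
      DEMOGRAPHICS_ES_URI: "somevalue"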

Ref: https://github.com/googlecloudplatform/spark-on-k8s-operator/issues/978
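
If you want to keep the `env` style instead, the operator's mutating admission webhook has to be enabled. A sketch of doing this through the Helm chart values (the exact key has varied across chart versions, so `webhook.enable` here is an assumption; check the values.yaml of the chart version you installed):

# values.yaml for the spark-operator Helm chart
# Assumption: recent charts expose this as webhook.enable;
# some older chart versions used an enableWebhook flag instead
webhook:
  enable: true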
