log4j Kafka functionality in Apache Spark / Scala

uqjltbpv · published 2021-06-07 in Kafka

Hi, I'm trying to log to a Kafka topic from a set of executors using Apache Spark, log4j, and the Kafka appender. I can log from the executors with a basic file appender, but not to Kafka.
Here is my log4j.properties, which I customized for this:

log4j.rootLogger=INFO, console, KAFKA, file

log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n

log4j.appender.KAFKA=org.apache.kafka.log4jappender.KafkaLog4jAppender
log4j.appender.KAFKA.topic=test2
log4j.appender.KAFKA.name=localhost
log4j.appender.KAFKA.host=localhost
log4j.appender.KAFKA.port=9092
log4j.appender.KAFKA.brokerList=localhost:9092
log4j.appender.KAFKA.compressionType=none
log4j.appender.KAFKA.requiredNumAcks=0
log4j.appender.KAFKA.syncSend=true
log4j.appender.KAFKA.layout=org.apache.log4j.PatternLayout
log4j.appender.KAFKA.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L %% - %m%n

log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=log4j-application.log
log4j.appender.file.MaxFileSize=5MB
log4j.appender.file.MaxBackupIndex=10
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n
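
Before involving Spark, it can be worth confirming that this appender configuration works in a plain JVM. A minimal smoke-test sketch (my own, not part of the original post; it assumes the kafka-log4j-appender jar is on the classpath and the JVM is started with -Dlog4j.configuration pointing at the file above):

import org.apache.log4j.Logger

// If this message never reaches topic `test2`, the appender config itself
// is at fault, independent of Spark.
object AppenderSmokeTest {
  def main(args: Array[String]): Unit = {
    val log = Logger.getLogger("myLogger")
    log.warn("kafka appender smoke test")
  }
}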

Here is my code so far. I'm trying to pass around a logger definition so that every executor gets a copy, but I can't work out why nothing reaches Kafka:

import org.apache.log4j._
import org.apache.spark._
import org.apache.spark.rdd.RDD
import java.io._
import org.apache.kafka.log4jappender.KafkaLog4jAppender

class Mapper(n: Int) extends Serializable {
  @transient lazy val suplogger: Logger = Logger.getLogger("myLogger")

  def doSomeMappingOnDataSetAndLogIt(rdd: RDD[Int]): RDD[String] =
    rdd.map{ i =>
      val sparkConf: SparkConf = new org.apache.spark.SparkConf()
      suplogger.setLevel(Level.ALL)
      suplogger.warn(sparkConf.toDebugString)
      val pid = Integer.parseInt(new File("/proc/self").getCanonicalFile().getName());
      suplogger.warn("--------------------")
      suplogger.warn("mapping: " + i)
      val supIterator = new scala.collection.JavaConversions.JEnumerationWrapper(suplogger.getAllAppenders())
      suplogger.warn("List is " + supIterator.toList)
      suplogger.warn("Num of list is: " + supIterator.size)

      //(i + n).toString
      "executor pid = "+pid + "debug string: " + sparkConf.toDebugString.size
    }
}

object Mapper {
  def apply(n: Int): Mapper = new Mapper(n)
}

object HelloWorld {
  def main(args: Array[String]): Unit = {
    println("sup")
    println("yo")
    val log = LogManager.getRootLogger
    log.setLevel(Level.WARN)
    val nameIterator = new scala.collection.JavaConversions.JEnumerationWrapper(log.getAllAppenders())
    println(nameIterator.toList)

    val conf = new SparkConf().setAppName("demo-app")
    val sc = new SparkContext(conf)
    log.warn(conf.toDebugString)
    val pid = Integer.parseInt(new File("/proc/self").getCanonicalFile().getName());
    log.warn("--------------------")
    log.warn("IP: "+java.net.InetAddress.getLocalHost() +" PId: "+pid)

    log.warn("Hello demo")

    val data = sc.parallelize(1 to 100, 10)

    val mapper = Mapper(1)

    val other = mapper.doSomeMappingOnDataSetAndLogIt(data)

    other.collect()

    log.warn("I am done")
  }

}

Here is some sample output from the log file:

2017-01-25 06:29:15 WARN  myLogger:19 - spark.driver.port=54335
2017-01-25 06:29:15 WARN  myLogger:21 - --------------------
2017-01-25 06:29:15 WARN  myLogger:23 - mapping: 1
2017-01-25 06:29:15 WARN  myLogger:25 - List is List()
2017-01-25 06:29:15 WARN  myLogger:26 - Num of list is: 0
2017-01-25 06:29:15 WARN  myLogger:19 - spark.driver.port=54335
2017-01-25 06:29:15 WARN  myLogger:21 - --------------------
2017-01-25 06:29:15 WARN  myLogger:23 - mapping: 2
2017-01-25 06:29:15 WARN  myLogger:25 - List is List()
2017-01-25 06:29:15 WARN  myLogger:26 - Num of list is: 0
2017-01-25 06:29:15 WARN  myLogger:19 - spark.driver.port=54335
2017-01-25 06:29:15 WARN  myLogger:21 - --------------------
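
Note what the output shows: "List is List()" and "Num of list is: 0" mean the logger on the executors has no appenders at all, so the custom properties file was evidently never loaded there. As a quick check (a sketch of mine, reusing the sc from the code above), one can verify whether the executor JVMs received the log4j.configuration system property at all:

sc.parallelize(1 to 4, 4).map { _ =>
  // "<not set>" means -Dlog4j.configuration never reached this executor JVM.
  Option(System.getProperty("log4j.configuration")).getOrElse("<not set>")
}.collect().foreach(cfg => println(s"executor sees log4j.configuration = $cfg"))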

Thanks for your help, and let me know if you need anything else!
Here is a copy of the spark-submit command:

spark-submit \
    --deploy-mode client \
    --files spark_test/mylogger.props \
    --packages "com.databricks:spark-csv_2.10:1.4.0,org.apache.kafka:kafka-log4j-appender:0.10.1.1" \
    --num-executors 4 \
    --executor-cores 1 \
    --driver-java-options "-Dlog4j.configuration=file:///home/mapr/spark_test/mylogger.props" \
    --conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=file:///home/mapr/spark_test/mylogger.props" \
    --class "HelloWorld" helloworld.jar

g0czyy6m1#

The problem

The problem is that you are not passing spark_test/mylogger.props to the executors.

Configuration

Deploy mode: client

You still need to upload the file with --files so that the executors receive it.

spark-submit \
    --deploy-mode client \
    --driver-java-options "-Dlog4j.configuration=file:/home/mapr/spark_test/mylogger.props" \
    --conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=file:mylogger.props" \
    --files /home/mapr/spark_test/mylogger.props \
    ...

Deploy mode: cluster

spark-submit \
    --deploy-mode cluster \
    --conf "spark.driver.extraJavaOptions=-Dlog4j.configuration=file:mylogger.props" \
    --conf "spark.executor.extraJavaOptions=-Dlog4j.configuration=file:mylogger.props" \
    --files /home/mapr/spark_test/mylogger.props \
    ...
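
If shipping the properties file around is inconvenient, an alternative sketch (mine, not from the original answer; KafkaLoggedMapper is an illustrative name) is to attach the KafkaLog4jAppender programmatically inside the executor-side lazy logger, so no external config file is needed:

import org.apache.kafka.log4jappender.KafkaLog4jAppender
import org.apache.log4j.{Logger, PatternLayout}

class KafkaLoggedMapper(n: Int) extends Serializable {
  @transient lazy val suplogger: Logger = {
    val log = Logger.getLogger("myLogger")
    // Attach the appender once per executor JVM; the guard avoids duplicates
    // when the lazy val is re-evaluated across tasks.
    if (log.getAppender("KAFKA") == null) {
      val kafka = new KafkaLog4jAppender()
      kafka.setName("KAFKA")
      kafka.setBrokerList("localhost:9092") // same broker as in the question
      kafka.setTopic("test2")
      kafka.setLayout(new PatternLayout("%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1} - %m%n"))
      kafka.activateOptions() // builds the underlying Kafka producer
      log.addAppender(kafka)
    }
    log
  }
}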

Need more options?

See my complete post on configuring Spark logging:
https://stackoverflow.com/a/55596389/1549135
More details on the Spark + Kafka appender:
https://stackoverflow.com/a/58883911/1549135


mpbci0fu2#

I figured out what the problem was. I was not deploying to the cluster; I was only deploying in client mode. To be honest, I don't know why it works when I deploy to the cluster.
I was using the MapR Sandbox VM: https://www.mapr.com/products/mapr-sandbox-hadoop
If anyone can explain why client vs. cluster mode made the difference here, I would really appreciate it!
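
A plausible explanation (my own reading, not confirmed in the thread): in client mode the executor JVM option pointed at file:///home/mapr/spark_test/mylogger.props, an absolute path that is only guaranteed to exist on the driver machine, while in cluster mode --files localizes mylogger.props into every container's working directory, where the relative file:mylogger.props URI from the first answer resolves. A small sketch (assuming the sc from the question) to inspect what actually lands in each executor's working directory:

// List each executor's current working directory, which is where
// --files entries are localized on YARN.
sc.parallelize(1 to 4, 4).map { _ =>
  new java.io.File(".").listFiles().map(_.getName).sorted.mkString(", ")
}.collect().distinct.foreach(println)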
