Exception when calling collect() in Scala

2fjabf4q · asked 2021-05-29 in Hadoop

I am trying to write some custom code to compare the data types of a source schema (SAS) against a target schema (Hive). SAS has its own set of data types; for example, a datetime column is defined with type Num and format DATETIME20. In Hive, the equivalent data type is Timestamp.
So my source schema file looks like the following:

source.csv

S_No,Variable,Type,Len,Format,Informat
6,EMP_HOURS,Num,8,15.2,15.1
4,EMP_NAME,Char,50,,
1,DATETIME,Num,8,DATETIME20.,DATETIME20.
5,HEADER_ROW_COUNT,Num,8,,
2,LOAD_DATETIME,Num,8,DATETIME20.,DATETIME20.
3,SOURCE_BANK,Char,1,,

SASToHiveMappings.csv

Num,Double,Double
Num,DateTime,Timestamp
Num, ,Integer
Char, ,String

I have defined a custom function below:

def _getHiveTypeMapping(dataType: String, dataFormat: String): String = {
  // Read the SAS-to-Hive mapping file
  val sasToHiveMappingLocation = "s3a://abc/SASToHiveMappings.csv"
  val mappings = sc.textFile(sasToHiveMappingLocation)

  // Classify the SAS format: DATETIME formats, purely numeric formats
  // (e.g. "15.2"), or unknown
  var definedType = ""
  try {
    if (dataFormat.toUpperCase.contains("DATETIME")) definedType = "datetime"
    else if (dataFormat.toDouble.getClass.getName == "double") definedType = "Double"
    else definedType = "Unknown"
  } catch {
    case _: Throwable => definedType = "Unknown"
  }

  // Fall back to the raw format string when no known class was detected
  if (definedType == "" || definedType == "Unknown") definedType = dataFormat

  try {
    // Find the first row matching both the SAS type and the format class,
    // and return the Hive type from the third column
    val atype = mappings.map(_.split(","))
      .filter(x => x(0).toUpperCase.contains(dataType.toUpperCase))
      .filter(x => x(1).toUpperCase.contains(definedType.toUpperCase))
      .take(1)
      .map(_(2))
    if (atype.nonEmpty) atype(0) else ""
  } catch {
    case e: Exception => e.getMessage
  }
}

Now, when I run the following, it gives me the correct result:

scala> rows.map(x => x.split(",")).map(y => (y(1),y(2),y(4))).take(6).map { case (a,b,c) => (a,_getHiveTypeMapping(b,c)) }
res196: Array[(String, String)] = Array((EMP_HOURS,Double), (EMP_NAME,String), (DATETIME,Timestamp), (HEADER_ROW_COUNT,Integer), (LOAD_DATETIME,Timestamp), (SOURCE_BANK,String))

But when I leave out the take(6) in between and try to run collect() instead, I get a NullPointerException, and I cannot figure out why. That is:

scala> rows.map(x => x.split(",")).map(y => (y(1),y(2),y(4))).map { case (a,b,c) => (a,_getHiveTypeMapping(b,c)) }.collect()

The exception is:

18/01/04 10:42:13 WARN TaskSetManager: Lost task 1.0 in stage 267.0 (TID 313, localhost, executor driver): TaskKilled (stage cancelled)
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 267.0 failed 1 times, most recent failure: Lost task 0.0 in stage 267.0 (TID 312, localhost, executor driver): java.lang.NullPointerException
        at _getHiveTypeMapping(<console>:33)
        at $anonfun$3.apply(<console>:42)
        at $anonfun$3.apply(<console>:42)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
        at scala.collection.Iterator$class.foreach(Iterator.scala:893)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
        at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
        at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
        at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
        at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
        at scala.collection.AbstractIterator.to(Iterator.scala:1336)
        at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
        at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1336)
        at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
        at scala.collection.AbstractIterator.toArray(Iterator.scala:1336)
        at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:936)
        at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:936)
        at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2062)
        at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2062)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:108)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
  at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1499)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1487)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1486)
  at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
  at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1486)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:814)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:814)
  at scala.Option.foreach(Option.scala:257)
  at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:814)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1714)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1669)
  at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1658)
  at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
  at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:630)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:2022)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:2043)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:2062)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:2087)
  at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:936)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
  at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
  at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
  at org.apache.spark.rdd.RDD.collect(RDD.scala:935)
  ... 48 elided
Caused by: java.lang.NullPointerException
  at _getHiveTypeMapping(<console>:33)
  at $anonfun$3.apply(<console>:42)
  at $anonfun$3.apply(<console>:42)
  at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
  at scala.collection.Iterator$class.foreach(Iterator.scala:893)
  at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
  at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
  at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
  at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
  at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
  at scala.collection.AbstractIterator.to(Iterator.scala:1336)
  at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
  at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1336)
  at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
  at scala.collection.AbstractIterator.toArray(Iterator.scala:1336)
  at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:936)
  at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:936)
  at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2062)
  at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2062)
  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
  at org.apache.spark.scheduler.Task.run(Task.scala:108)
  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at java.lang.Thread.run(Thread.java:748)

Could you please help, as I cannot quite understand why this is happening.

qq24tv8q1#

You are using the SparkContext inside your method _getHiveTypeMapping. In your code you apply _getHiveTypeMapping within a map operation on an RDD. That code is executed on the executors, not in the driver. The SparkContext is part of the driver program and cannot be used in code that runs on the executors. Your take(6) version works because take(6) first brings those six rows back to the driver as a local array, so the map that follows, including the call to _getHiveTypeMapping, runs on the driver where sc is available.
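One possible way around this, as a minimal sketch: read the mapping file once on the driver, collect it into a local array, and wrap it in a broadcast variable so each executor receives a single read-only copy; the lookup function then consults that array instead of calling sc.textFile per record. The names below mirror the question's code, but the broadcast step and the condensed type classification are one reasonable fix among several, not the only option.

// Read the small mapping file ONCE on the driver and materialize it locally.
val mappingRows: Array[Array[String]] =
  sc.textFile("s3a://abc/SASToHiveMappings.csv")
    .map(_.split(","))
    .collect()

// Broadcast so each executor gets one read-only copy of the lookup table.
val mappingsBc = sc.broadcast(mappingRows)

def getHiveTypeMapping(dataType: String, dataFormat: String): String = {
  // Same classification as the original: DATETIME formats, numeric formats,
  // otherwise fall back to the raw format string.
  val definedType =
    if (dataFormat.toUpperCase.contains("DATETIME")) "datetime"
    else if (scala.util.Try(dataFormat.toDouble).isSuccess) "Double"
    else dataFormat

  // Look up the Hive type in the local array -- no SparkContext involved.
  mappingsBc.value
    .find(x => x(0).toUpperCase.contains(dataType.toUpperCase) &&
               x(1).toUpperCase.contains(definedType.toUpperCase))
    .map(_(2))
    .getOrElse("")
}

// collect() now succeeds because the closure no longer touches sc.
rows.map(_.split(","))
    .map(y => (y(1), y(2), y(4)))
    .map { case (a, b, c) => (a, getHiveTypeMapping(b, c)) }
    .collect()

For a four-row mapping file a local lookup table like this is the simplest option; joining the two RDDs would also avoid the problem but is heavier than needed here.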
