Upgrading from Flink 1.3.2 to 1.4.0: Hadoop FileSystem and Path problems

piztneat posted 2021-06-25 in Flink

I recently tried upgrading from Flink 1.3.2 to 1.4.0 and found that I can no longer import org.apache.hadoop.fs.{FileSystem, Path}. The problem occurs in two places:
ParquetWriter:

import org.apache.avro.Schema
import org.apache.avro.generic.GenericRecord
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.flink.streaming.connectors.fs.Writer
import org.apache.parquet.avro.AvroParquetWriter
import org.apache.parquet.hadoop.ParquetWriter
import org.apache.parquet.hadoop.metadata.CompressionCodecName

class AvroWriter[T <: GenericRecord]() extends Writer[T] {

  @transient private var writer: ParquetWriter[T] = _
  @transient private var schema: Schema = _

  override def write(element: T): Unit = {
    schema = element.getSchema
    writer.write(element)
  }

  override def duplicate(): AvroWriter[T] = new AvroWriter[T]()

  override def close(): Unit = writer.close()

  override def getPos: Long = writer.getDataSize

  override def flush(): Long = writer.getDataSize

  override def open(fs: FileSystem, path: Path): Unit = {
    writer = AvroParquetWriter.builder[T](path)
      .withSchema(schema)
      .withCompressionCodec(CompressionCodecName.SNAPPY)
      .build()
  }

}

Custom Bucketer:

import org.apache.flink.streaming.connectors.fs.bucketing.Bucketer
import org.apache.flink.streaming.connectors.fs.Clock
import org.apache.hadoop.fs.{FileSystem, Path}
import java.io.ObjectInputStream
import java.text.SimpleDateFormat
import java.util.Date

import org.apache.avro.generic.GenericRecord

import scala.reflect.ClassTag

class RecordFieldBucketer[T <: GenericRecord: ClassTag](dateField: String = null, dateFieldFormat: String = null, bucketOrder: Seq[String]) extends Bucketer[T] {

  @transient var dateFormatter: SimpleDateFormat = _

  private def readObject(in: ObjectInputStream): Unit = {
    in.defaultReadObject()
    if (dateField != null && dateFieldFormat != null) {
      dateFormatter = new SimpleDateFormat(dateFieldFormat)
    }
  }

  override def getBucketPath(clock: Clock, basePath: Path, element: T): Path = {
    val partitions = bucketOrder.map(field => {
      if (field == dateField) {
        field + "=" + dateFormatter.format(new Date(element.get(field).asInstanceOf[Long]))
      } else {
        field + "=" + element.get(field)
      }
    }).mkString("/")
    new Path(basePath + "/" + partitions)
  }

}

I noticed that Flink now has:

import org.apache.flink.core.fs.{FileSystem, Path}

But the new Path does not seem to work with AvroParquetWriter or the getBucketPath method. I know there were some changes to Flink's filesystem and Hadoop dependencies; I'm just not sure what I need to import to get my code working again.
Do I even need the Hadoop dependencies any more, or is there now a different way to write Parquet files to S3?
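For reference, the two Path types can be bridged through their URI form when both jars are on the classpath. This is only an illustrative sketch (the s3 path is a made-up example); org.apache.flink.core.fs.Path exposes toUri, and org.apache.hadoop.fs.Path has a constructor taking a java.net.URI:

```scala
// Illustrative only: convert a Flink core Path to a Hadoop Path via its URI.
// Requires hadoop-common on the classpath for org.apache.hadoop.fs.Path.
val flinkPath  = new org.apache.flink.core.fs.Path("s3://bucket/data/part-0")
val hadoopPath = new org.apache.hadoop.fs.Path(flinkPath.toUri)
```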
build.sbt:

val flinkVersion = "1.4.0"

libraryDependencies ++= Seq(
  "org.apache.flink" %% "flink-scala" % flinkVersion % Provided,
  "org.apache.flink" %% "flink-streaming-scala" % flinkVersion % Provided,
  "org.apache.flink" %% "flink-connector-kafka-0.10" % flinkVersion,
  "org.apache.flink" %% "flink-connector-filesystem" % flinkVersion,
  "org.apache.flink" % "flink-metrics-core" % flinkVersion,
  "org.apache.flink" % "flink-metrics-graphite" % flinkVersion,
  "org.apache.kafka" %% "kafka" % "0.10.0.1",
  "org.apache.avro" % "avro" % "1.7.7",
  "org.apache.parquet" % "parquet-hadoop" % "1.8.1",
  "org.apache.parquet" % "parquet-avro" % "1.8.1",
  "io.confluent" % "kafka-avro-serializer" % "3.2.2",
  "com.fasterxml.jackson.core" % "jackson-core" % "2.9.2"
)

4dc9hkyq 1#

The necessary org.apache.hadoop.fs.{FileSystem, Path} classes can be found in the hadoop-common project.
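In sbt terms, that means adding hadoop-common to the dependency list. A minimal build.sbt fragment, assuming Hadoop 2.7.x (match the version to your cluster's Hadoop):

```scala
// hadoop-common provides org.apache.hadoop.fs.{FileSystem, Path}.
// The 2.7.3 version here is an assumption; use your cluster's Hadoop version.
libraryDependencies += "org.apache.hadoop" % "hadoop-common" % "2.7.3"
```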


k4emjkb1 2#

Building a "Hadoop-free Flink" was one of the major features of the 1.4 release. All you have to do is include the Hadoop dependencies on your classpath. Quoting the changelog:
... This also means that in cases where you use connectors against HDFS, such as the BucketingSink or RollingSink, you now have to ensure that you either use a Flink distribution with bundled Hadoop dependencies or make sure to include Hadoop dependencies when building a jar file for your application.
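A sketch of the second option in build.sbt, under the assumption that the job runs on a Flink distribution that already bundles Hadoop (hence Provided scope; drop Provided to ship Hadoop inside your fat jar instead). The 2.7.3 version is an assumption to be matched to your cluster:

```scala
// Compile against the Hadoop classes; the bundled-Hadoop Flink distribution
// supplies them at runtime, so they are marked Provided here.
libraryDependencies ++= Seq(
  "org.apache.hadoop" % "hadoop-common" % "2.7.3" % Provided,
  "org.apache.hadoop" % "hadoop-hdfs"   % "2.7.3" % Provided
)
```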
