Need to replace a column value in Scala Spark

nuypyhwy · published 2023-11-18 in Scala
Follow (0) | Answers (3) | Views (157)

I have JSON data stored as a string column in a Spark DataFrame, and it contains some bad data, for example:

  1. {"name":"neo",
  2. "age":"22",
  3. "city""nowhere",
  4. "country":""}

I need to transform the column value so that the missing colon is inserted, i.e. "city":"nowhere". I tried the following, but it does not work as expected. What should I change to make it work?

    val regex = """(".*?")(".*?")"""
    df.withColumn("updated_col", regexp_replace(col("value"), regex, "$1:$2"))
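For reference, a minimal sketch to reproduce this (the single-row DataFrame and the column name "value" are assumptions for illustration):

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.{col, regexp_replace}

    val spark = SparkSession.builder().appName("BadJson").master("local[1]").getOrCreate()
    import spark.implicits._

    // One row whose JSON string is missing the colon after "city"
    val df = Seq((1, """{"name":"neo","age":"22","city""nowhere","country":""}""")).toDF("id", "value")
    df.show(false)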


gcxthw6b1#

Check the following solution (the character before the doubled quotes is captured and put back, so only the missing colon is inserted):

    scala> df
      .withColumn(
        "updated",
        regexp_replace(
          $"data",
          """([^:])("")""",
          "$1\":\""
        )
      )
      .show(false)
    +------------------------------------------------------+-------------------------------------------------------+
    |data                                                  |updated                                                |
    +------------------------------------------------------+-------------------------------------------------------+
    |{"name":"neo","age":"22","city""nowhere","country":""}|{"name":"neo","age":"22","city":"nowhere","country":""}|
    +------------------------------------------------------+-------------------------------------------------------+
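The same fix can also be written with a negative lookbehind, so nothing has to be captured and re-emitted; this is a sketch under the assumption that the only doubled quotes not preceded by a colon are the broken pair:

    // Insert the missing ':' between doubled quotes, skipping legitimate
    // empty values such as "country":"" (the lookbehind rejects a preceding ':')
    df.withColumn("updated", regexp_replace($"data", "(?<!:)\"\"", "\":\""))
      .show(false)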



2lpgd9682#

Because of a limitation of Spark's regexp_replace (or perhaps a limitation in my understanding of it), the only way I managed to do this was a two-step process using a temporary unique delimiter, as shown below:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    val spark = SparkSession.builder().appName("JsonFix").master("local[1]").getOrCreate()
    import spark.implicits._ // must come after the session is created

    val df = Seq(
      (1, """{"name":"neo", "age":"22", "city""nowhere", "country":""}""")
    ).toDF("id", "value")
    df.show(false)

    // Step 1: introduce a temporary unique delimiter to isolate the broken "city" segment
    val tmpDelimiter = "___TMP___"
    val df1 = df.withColumn("value", regexp_replace(col("value"), "\"city\"\"", "\"city\"" + tmpDelimiter + "\""))

    // Step 2: add the missing ':' for "city" by replacing the temporary delimiter
    val df2 = df1.withColumn("value", regexp_replace(col("value"), "\"city\"" + tmpDelimiter, "\"city\":"))
    df2.show(false)

I would also like to know of another way to manage this.
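One simpler route, sketched here on the assumption that the broken pair is always the literal text "city"" with only the colon missing: the fix fits in a single regexp_replace, since the replacement string can carry the colon directly:

    // Single pass: rewrite the literal "city"" as "city":" in one replace
    val dfFixed = df.withColumn("value", regexp_replace(col("value"), "\"city\"\"", "\"city\":\""))
    dfFixed.show(false)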


gcxthw6b3#

Here is code using the JSON and map Spark functions. I have tested it.

    import org.apache.spark.sql.functions.{from_json, col, map_concat, expr, to_json}
    import org.apache.spark.sql.types.{MapType, StringType}
    import spark.implicits._

    val jsonString = """{"name":"neo","age":"22","country":""}"""
    val data = Seq((1, jsonString))
    val df = data.toDF("id", "value")
    df.show(truncate = false)

    // Parse the string into a map<string,string>, merge in the missing
    // "city" entry, and serialize the result back to a JSON string
    val df2 = df.withColumn(
      "value",
      to_json(map_concat(expr("""map("city", "nowhere")"""), from_json(col("value"), MapType(StringType, StringType))))
    )
    df2.show(truncate = false)

Output:

    +---+---------------------------------------+
    |id |value                                  |
    +---+---------------------------------------+
    |1  |{"name":"neo","age":"22","country":""} |
    +---+---------------------------------------+

    +---+-------------------------------------------------------+
    |id |value                                                  |
    +---+-------------------------------------------------------+
    |1  |{"city":"nowhere","name":"neo","age":"22","country":""}|
    +---+-------------------------------------------------------+
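A caveat worth noting: this version starts from a JSON string that no longer contains the malformed "city""nowhere" pair. If the raw column still holds it, from_json with a MapType schema yields null for that row, so the string has to be repaired first, for example with one of the regexp_replace fixes above; note also that the keys can come back in a different order, as the output shows. A sketch under that assumption (dfRaw is a hypothetical DataFrame holding the original malformed strings):

    // Hypothetical dfRaw still contains the broken "city""nowhere" pair:
    // repair the string first, then round-trip it through a map
    val repaired = dfRaw.withColumn("value", regexp_replace(col("value"), "\"city\"\"", "\"city\":\""))
    val parsed = repaired.withColumn("value", to_json(from_json(col("value"), MapType(StringType, StringType))))
    parsed.show(truncate = false)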

