Filtering text-file data into columns with a PySpark RDD and DataFrame

nom7f22z · asked 2021-05-27 · in Hadoop
Follow (0) | Answers (3) | Views (450)

I have data like the following:

It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. 
It was popularised in the 1960s with the release of Image sheets containing Buddy passages, and more recently with desktop publishing software like

1   long title 1
2 long title 2
3 long title 3
4 long title 4
5 long title 5
6 long title 6
7 long title 7
8 long title 8
9 long title 9
10 long title 10
11 long title 11
12 long title 12
13 long title 13
14 long title 14
15 long title 15
16 long title 16
17 long title 17
18 long title 18
19 long title 19
20 long title 20

Now, while loading this text file, I have to exclude the junk data, i.e. the paragraph, and include only the column data starting from `long title 1`. I am using an RDD but cannot load it correctly. Once the data is properly populated in the RDD, I can convert it to a DataFrame. Below is my code:

from pyspark.context import SparkContext
from pyspark.sql import SparkSession
from pyspark import SparkConf

sc = SparkContext.getOrCreate(SparkConf().setMaster("local[*]"))
load_data=sc.textFile("E://long_sample.txt").filter(lambda x : "title")
load_data.foreach(print())

Even though I try to filter on "title", I still get the entire data set unfiltered. No error is shown. Please help me sort this out.
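Two things go wrong in the snippet: `filter(lambda x: "title")` returns the constant string `"title"`, which is truthy for every line, so nothing is filtered out; and `foreach(print())` calls `print` immediately and hands its return value (`None`) to `foreach`, instead of passing the function itself (`foreach(print)`). A minimal plain-Python illustration of the filter issue:

```python
# `lambda x: "title"` returns a constant truthy string, so filter() keeps
# every element; `"title" in x` is the membership test actually intended.
lines = [
    "It has survived not only five centuries",
    "1 long title 1",
    "2 long title 2",
]

broken = list(filter(lambda x: "title", lines))      # keeps all 3 lines
fixed = list(filter(lambda x: "title" in x, lines))  # keeps the 2 title lines

print(len(broken), len(fixed))  # 3 2
```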

yv5phkfx · #1

Please also try `regexp_extract`. The blank rows can be dropped afterwards.

from pyspark.sql import functions as F

df=spark.read.text('test.txt')

df.select(F.regexp_extract('value', r'(^\d+)\s+(.*)\s+(\d+$)', 1).alias('id')
    ,F.regexp_extract('value',r'(^\d+)\s+(.*)\s+(\d+$)',2).alias('name')
    ,F.regexp_extract('value', r'(^\d+)\s+(.*)\s+(\d+$)', 3).alias('number')
    ).show()
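The blank rows this answer mentions come from lines where the pattern does not match, since `regexp_extract` returns an empty string in that case; after the select they can be dropped with, for example, `df.filter(F.col('id') != '')`. The regex itself can be sanity-checked locally with Python's `re` module (a sketch, not part of the answer's code):

```python
import re

# Same pattern as in the regexp_extract call above, checked locally.
pattern = re.compile(r'(^\d+)\s+(.*)\s+(\d+$)')

lines = ["software like", "1   long title 1", "10 long title 10"]

results = []
for line in lines:
    m = pattern.search(line)
    # Non-matching lines yield empty groups, mirroring regexp_extract,
    # which returns '' when the pattern does not match.
    results.append(m.groups() if m else ('', '', ''))

print(results)
# [('', '', ''), ('1', 'long title', '1'), ('10', 'long title', '10')]
```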

2ic8powd · #2

Try the following in PySpark:

>>> load_data=sc.textFile("file:///home/mahesh/Downloads/line_text.txt")

Filter the data with an `in` check, then create a DataFrame from the resulting RDD:

>>> df = load_data.filter(lambda x: "title" in x) \
...     .map(lambda x: (x.split()[0], x.split()[1] + " " + x.split()[2], x.split()[3])) \
...     .toDF(["Id", "Name", "Number"])

>>> df.show()
+---+----------+------+
| Id|      Name|Number|
+---+----------+------+
|  1|long title|     1|
|  2|long title|     2|
|  3|long title|     3|
|  4|long title|     4|
|  5|long title|     5|
|  6|long title|     6|
|  7|long title|     7|
|  8|long title|     8|
|  9|long title|     9|
| 10|long title|    10|
| 11|long title|    11|
| 12|long title|    12|
| 13|long title|    13|
| 14|long title|    14|
| 15|long title|    15|
| 16|long title|    16|
| 17|long title|    17|
| 18|long title|    18|
| 19|long title|    19|
| 20|long title|    20|
+---+----------+------+
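A note on the map step: splitting with an explicit single-space argument, `x.split(" ")`, produces empty tokens on runs of spaces (the first sample line is `1   long title 1`), so a bare `x.split()`, which collapses whitespace runs, is the safer choice. The per-line transform sketched in plain Python (hypothetical helper name):

```python
def to_row(line):
    # split() with no argument collapses runs of whitespace,
    # unlike split(" ") which yields empty tokens for "1   long title 1"
    parts = line.split()
    return (parts[0], parts[1] + " " + parts[2], parts[3])

print(to_row("1   long title 1"))  # ('1', 'long title', '1')
print(to_row("10 long title 10"))  # ('10', 'long title', '10')
```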

Let me know if you need any more help.

nbewdwxp · #3

Here is another approach, using `rlike` with a regular expression:

import pyspark.sql.functions as f
from pyspark.sql.types import StringType

df = spark.createDataFrame([
  ('It was popularised in the 1960s with the release of Image sheets containing Buddy passages, and more recently with desktop publishing'),
  ('software like'),
  ('1   long title 1'),
  ('2 long title 2'),
  ('3 long title 3'),
  ('4 long title 4'),
  ('5 long title 5'),
  ('6 long title 6'),
  ('7 long title 7'),
  ('8 long title 8'),
  ('9 long title 9'),
  ('10 long title 10'),
  ('11 long title 11'),
  ('12 long title 12')
], StringType())

df.where(f.col("value").rlike(r"\d+\s+\w+\s+\w+\s+\d+")).show(100, False)

# +----------------+
# |           value|
# +----------------+
# |1   long title 1|
# |  2 long title 2|
# |  3 long title 3|
# |  4 long title 4|
# |  5 long title 5|
# |  6 long title 6|
# |  7 long title 7|
# |  8 long title 8|
# |  9 long title 9|
# |10 long title 10|
# |11 long title 11|
# |12 long title 12|
# +----------------+

`rlike` keeps the rows that match the regex `\d+\s+\w+\s+\w+\s+\d+`. Reading the pattern piece by piece:
\d+ : one or more digits
\s+ : followed by one or more whitespace characters
\w+ : followed by one or more word characters
\s+ : followed by one or more whitespace characters
.....
If you are sure the words long and title are always present, you can tighten the regex to `\d+\s+long\s+title\s+\d+`.
Update:
To split your dataset into a new one with the columns id, name and number, use select together with split as a next step:

df.where(df["value"].rlike(r"\d+\s+long\s+title\s+\d+")) \
  .select(
          f.split(df["value"], r"\s+").getItem(0).alias("id"),
          f.concat(f.split(df["value"], r"\s+").getItem(1), f.split(df["value"], r"\s+").getItem(2)).alias("name"),
          f.split(df["value"], r"\s+").getItem(3).alias("number")
  ).show()

# +---+---------+------+
# | id|     name|number|
# +---+---------+------+
# |  1|longtitle|     1|
# |  2|longtitle|     2|
# |  3|longtitle|     3|
# |  4|longtitle|     4|
# |  5|longtitle|     5|
# |  6|longtitle|     6|
# |  7|longtitle|     7|
# |  8|longtitle|     8|
# |  9|longtitle|     9|
# | 10|longtitle|    10|
# | 11|longtitle|    11|
# | 12|longtitle|    12|
# +---+---------+------+
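One caveat: `f.concat` joins its inputs with no separator, which is why `name` comes out as `longtitle` in the output above. If the original spacing matters, Spark's `f.concat_ws(" ", ...)` takes an explicit separator as its first argument. The difference, illustrated in plain Python:

```python
tokens = "1   long title 1".split()

# concat-style: tokens glued together with no separator
no_sep = tokens[1] + tokens[2]

# concat_ws-style: tokens joined with an explicit separator
with_sep = " ".join(tokens[1:3])

print(no_sep)    # longtitle
print(with_sep)  # long title
```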
