I am working with a PySpark DataFrame. I need to compute TF-IDF, and for the preceding steps (tokenization, normalization, etc.) I am using Spark NLP.
I have a df that looks like this after applying the tokenizer:
df.select('tokenized').show(5, truncate=130)
+----------------------------------------------------------------------------------------------------------------------------------+
| tokenized |
+----------------------------------------------------------------------------------------------------------------------------------+
|[content, type, multipart, alternative, boundary, nextpart, da, df, nextpart, da, df, content, type, text, plain, charset, asci...|
|[receive, ameurht, eop, eur, prod, protection, outlook, com, cyprmb, namprd, prod, outlook, com, https, via, cyprca, namprd, pr...|
|[plus, every, photographer, need, mm, lens, digital, photography, school, email, newsletter, http, click, aweber, com, ct, l, m...|
|[content, type, multipart, alternative, boundary, nextpart, da, beb, nextpart, da, beb, content, type, text, plain, charset, as...|
|[original, message, customer, service, mailto, ilpjmwofnst, qssadxnvrvc, narrig, stepmotherr, eviews, com, send, thursday, dece...|
+----------------------------------------------------------------------------------------------------------------------------------+
only showing top 5 rows
The next step is to apply the Normalizer.
I want to set several cleanup patterns (a plain-Python sketch of the combined effect follows the list):
1) remove all purely numeric tokens and strip digits from words
-> example: [jhghgb56, 5897t95, fhgbg4, 7474, hfgbgb]
-> expected output: [jhghgb, fhgbg, hfgbgb]
2) remove all words shorter than 4 characters
-> example: [gfh, ehfufibf, hi, df, jdfh]
-> expected output: [ehfufibf, jdfh]
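To make the combined effect of both rules concrete, here is the plain-Python equivalent of what I want the annotators to do (not Spark code, just the target behaviour on the example tokens above):

import re

tokens = ["jhghgb56", "5897t95", "fhgbg4", "7474", "hfgbgb", "gfh", "ehfufibf", "hi", "jdfh"]

# rule 1: keep only letters inside each token (pure numbers collapse to "")
cleaned = [re.sub(r"[^A-Za-z]", "", t) for t in tokens]
# rule 2: drop whatever ends up shorter than 4 characters
cleaned = [t for t in cleaned if len(t) >= 4]
# cleaned == ['jhghgb', 'fhgbg', 'hfgbgb', 'ehfufibf', 'jdfh']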
I tried this:
from sparknlp.annotator import Tokenizer, Normalizer

# tokenize the document column; minLength drops tokens shorter than 3 characters
tokenizer = Tokenizer() \
    .setInputCols(['document']) \
    .setOutputCol('tokenized') \
    .setMinLength(3)

# keep only letters inside each token (digits and punctuation are stripped)
cleanup = ["[^A-Za-z]"]

normalizer = Normalizer() \
    .setInputCols(['tokenized']) \
    .setOutputCol('normalized') \
    .setLowercase(True) \
    .setCleanupPatterns(cleanup)
So far cleanup = ["[^A-Za-z]"] satisfies the first condition. But after normalization I am now left with cleaned words that are shorter than 4 characters, and I don't know how to remove them. Any help would be appreciated!
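If this cannot be done inside the Normalizer itself, a fallback I am considering is filtering after the annotations are flattened back to a plain array<string> column (e.g. with Spark NLP's Finisher). A minimal sketch, assuming a hypothetical flattened column named normalized_finished:

from pyspark.sql import functions as F

# sketch only: normalized_finished is assumed to be an array<string> column
# produced by a Finisher; keep tokens with at least 4 characters using
# Spark SQL's higher-order filter function (available since Spark 2.4)
df_filtered = df.withColumn(
    'normalized_filtered',
    F.expr("filter(normalized_finished, t -> length(t) >= 4)")
)

I have also read that newer Spark NLP releases may expose setMinLength/setMaxLength on the Normalizer itself, but I am not sure whether my version supports it.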