PySpark word count: NoneType error

oxosxuxt asked on 2021-05-29 in Hadoop

I am trying to do some text analysis:

import re
import string

from pyspark.sql.functions import udf, explode, split
from pyspark.sql.types import StringType

def cleaning_text(sentence):
    sentence = sentence.lower()
    sentence = re.sub(r"'", '', sentence.strip())
    # dates removed
    sentence = re.sub(r'^\d+/\d+|\s\d+/\d+|\d+-\d+-\d+|\d+-\w+-\d+\s\d+:\d+|\d+-\w+-\d+|\d+/\d+/\d+\s\d+:\d+', ' ', sentence.strip())
    sentence = re.sub(r'(.)(/)(.)', r'\1\3', sentence.strip())
    sentence = re.sub(r'(.*?\//)|(.*?\\\\)|(.*?\\)|(.*?\/)', ' ', sentence.strip())
    sentence = re.sub(r'^\d+', '', sentence.strip())
    sentence = re.sub('[%s]' % re.escape(string.punctuation), '', sentence.strip())
    cleaned = ' '.join([w for w in sentence.split() if not len(w) < 2 and w not in ('no', 'sc', 'ln')])
    cleaned = cleaned.strip()
    if len(cleaned) <= 1:
        return "NA"
    else:
        return cleaned

org_val = udf(cleaning_text, StringType())
df_new = df.withColumn("cleaned_short_desc", org_val(df["symptom_short_description_"]))
df_new = df_new.withColumn("cleaned_long_desc", org_val(df_new["long_description"]))
longWordsDF = df_new.select(explode(split('cleaned_long_desc', ' ')).alias('word'))
longWordsDF.count()
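For context, the last two lines of the snippet split each cleaned description into words, explode one word per row, and count the rows. The same computation can be sketched in plain Python (the sample strings below are hypothetical, just to illustrate what the Spark pipeline computes):

```python
from collections import Counter

# Hypothetical sample of already-cleaned long descriptions
cleaned_long_desc = ["disk failure on node", "node unreachable", "disk full"]

# Equivalent of explode(split(col, ' ')): one word per row
words = [w for row in cleaned_long_desc for w in row.split(' ')]

# Equivalent of longWordsDF.count(): total number of exploded rows
total = len(words)

# A groupBy('word').count() would give per-word frequencies
counts = Counter(words)
print(total)           # 8
print(counts["node"])  # 2
```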

I get the following error:

File "<stdin>", line 2, in cleaning_text
AttributeError: 'NoneType' object has no attribute 'lower'

I want to run a word count, but any kind of aggregation function gives me this error.
I have tried the following:

sentence = sentence.encode("ascii", "ignore")

added inside the cleaning function, and

df.dropna()

It still raises the same error, and I don't know how to fix it.

1sbrub3j #1

It looks like some of those columns contain null values. Add a check at the start of the cleaning_text function and the error will go away:

if sentence is None:
    return "NA"

(Note that df.dropna() returns a new DataFrame rather than modifying df in place, which is why calling it on its own did not help.)
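As a quick sanity check outside Spark, here is a trimmed-down version of the function with the guard in place (the regex date/path steps are omitted for brevity; only the guard, lowercasing, and punctuation stripping are kept):

```python
import re
import string

def cleaning_text(sentence):
    # Guard against null column values, which arrive in the UDF as None
    if sentence is None:
        return "NA"
    sentence = sentence.lower()
    sentence = re.sub('[%s]' % re.escape(string.punctuation), '', sentence.strip())
    cleaned = ' '.join(w for w in sentence.split() if len(w) >= 2)
    return cleaned if len(cleaned) > 1 else "NA"

print(cleaning_text(None))          # "NA" instead of AttributeError
print(cleaning_text("Hi, there!"))  # "hi there"
```

With the guard, null rows map to the same "NA" sentinel the function already uses for near-empty strings, so downstream aggregations no longer fail.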
