This question already has an answer here:
Spark NLP "JavaPackage" object is not callable (1 answer)
Closed last month.
I am learning Apache Spark, and I ran the code below on Google Colab.
# installed based upon https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/jupyter/quick_start_google_colab.ipynb#scrollTo=lNu3meQKEXdu
import os
# Install java
!apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
!wget -q "https://downloads.apache.org/spark/spark-3.1.1/spark-3.1.1-bin-hadoop2.7.tgz" > /dev/null
!tar -xvf spark-3.1.1-bin-hadoop2.7.tgz > /dev/null
!pip install -q findspark
os.environ["SPARK_HOME"] = "/content/spark-3.1.1-bin-hadoop2.7"
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["PATH"] = os.environ["JAVA_HOME"] + "/bin:" + os.environ["PATH"]
! java -version
# Install spark-nlp and pyspark
! pip install spark-nlp==3.0.0 pyspark==3.1.1
import sparknlp
spark = sparknlp.start()
from sparknlp.base import DocumentAssembler
documentAssembler = DocumentAssembler().setInputCol(text_col).setOutputCol('document')
I get the error below. How can I fix it?
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-48-535b177b526b> in <module>()
4
5 from sparknlp.base import DocumentAssembler
----> 6 documentAssembler = DocumentAssembler().setInputCol(text_col).setOutputCol('document')
4 frames
/usr/local/lib/python3.7/dist-packages/pyspark/ml/wrapper.py in _new_java_obj(java_class, *args)
64 java_obj = getattr(java_obj, name)
65 java_args = [_py2java(sc, arg) for arg in args]
---> 66 return java_obj(*java_args)
67
68 @staticmethod
TypeError: 'JavaPackage' object is not callable
1 Answer
As I mentioned in my last comment:
Replace `text_col` with the name of the column in your Spark DataFrame from which the document should be created, e.g. `.setInputCol("text")`. You can additionally call `.setCleanupMode("clean_mode")`. For details, see the following link: https://spark.apache.org/docs/latest/ml-features
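For background on the TypeError itself: when a JVM class is missing from the classpath, py4j hands back a plain package placeholder instead of a callable class, and calling it raises exactly this error. A minimal pure-Python stand-in (this toy class only mimics the behaviour; it is not the real py4j implementation) reproduces the message:

```python
class JavaPackage:
    """Toy stand-in for the placeholder py4j returns when a
    JVM class cannot be resolved on the classpath."""
    def __init__(self, name):
        self.name = name

# py4j would return such an object for an unresolved class path
pkg = JavaPackage("com.johnsnowlabs.nlp.DocumentAssembler")

try:
    pkg()  # calling a non-callable object raises TypeError
except TypeError as e:
    print(e)  # 'JavaPackage' object is not callable
```

So if the error persists after fixing the column name, double-check that the spark-nlp jar actually loaded (e.g. that `sparknlp.start()` ran without errors and the pyspark/spark-nlp versions match).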