TypeError: an integer is required (got type bytes) - spark-2.4.5-bin-hadoop2.7, Hadoop 2.7.1, Python 3.8.2

ztigrdn8 · posted 2021-05-31 in Hadoop

I am trying to install Spark on my 64-bit Windows machine. I have Python 3.8.2 installed and pip version 20.0.2. I downloaded spark-2.4.5-bin-hadoop2.7, set the HADOOP_HOME and SPARK_HOME environment variables, and added pyspark to the PATH variable. When I run pyspark from cmd, I see the error given below:

C:\Users\aa>pyspark
Python 3.8.2 (tags/v3.8.2:7b3ab59, Feb 25 2020, 23:03:10) [MSC v.1916 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
Traceback (most recent call last):
  File "C:\Users\aa\Downloads\spark-2.4.5-bin-hadoop2.7\spark-2.4.5-bin-hadoop2.7\python\pyspark\shell.py", line 31, in <module>
    from pyspark import SparkConf
  File "C:\Users\aa\Downloads\spark-2.4.5-bin-hadoop2.7\spark-2.4.5-bin-hadoop2.7\python\pyspark\__init__.py", line 51, in <module>
    from pyspark.context import SparkContext
  File "C:\Users\aa\Downloads\spark-2.4.5-bin-hadoop2.7\spark-2.4.5-bin-hadoop2.7\python\pyspark\context.py", line 31, in <module>
    from pyspark import accumulators
  File "C:\Users\aa\Downloads\spark-2.4.5-bin-hadoop2.7\spark-2.4.5-bin-hadoop2.7\python\pyspark\accumulators.py", line 97, in <module>
    from pyspark.serializers import read_int, PickleSerializer
  File "C:\Users\aa\Downloads\spark-2.4.5-bin-hadoop2.7\spark-2.4.5-bin-hadoop2.7\python\pyspark\serializers.py", line 72, in <module>
    from pyspark import cloudpickle
  File "C:\Users\aa\Downloads\spark-2.4.5-bin-hadoop2.7\spark-2.4.5-bin-hadoop2.7\python\pyspark\cloudpickle.py", line 145, in <module>
    _cell_set_template_code = _make_cell_set_template_code()
  File "C:\Users\aa\Downloads\spark-2.4.5-bin-hadoop2.7\spark-2.4.5-bin-hadoop2.7\python\pyspark\cloudpickle.py", line 126, in _make_cell_set_template_code
    return types.CodeType(
TypeError: an integer is required (got type bytes)

I also want to import pyspark into my Python code in PyCharm, but after running the code file I get the same TypeError: an integer is required (got type bytes). I uninstalled Python 3.8.2 and tried Python 2.7, but in that case I got a deprecation error, plus the pip error given below, so I updated the pip installer:

Could not find a version that satisfies the requirement pyspark (from versions: )
No matching distribution found for pyspark

Then I ran python -m pip install --upgrade pip to update pip, but the TypeError: an integer is required (got type bytes) problem came back.
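For reference, this is roughly how I wire the unpacked distribution into the PyCharm interpreter (a minimal sketch; the paths are my local ones, and the py4j zip file name varies by Spark version):

import glob
import os
import sys

# Local install locations - adjust to your own paths.
spark_home = r"C:\Users\aa\Downloads\spark-2.4.5-bin-hadoop2.7\spark-2.4.5-bin-hadoop2.7"
os.environ["SPARK_HOME"] = spark_home
os.environ["HADOOP_HOME"] = r"C:\hadoop-2.7.1"

# Make the bundled pyspark and py4j importable.
sys.path.insert(0, os.path.join(spark_home, "python"))
sys.path.extend(glob.glob(os.path.join(spark_home, "python", "lib", "py4j-*.zip")))

import pyspark  # fails on Python 3.8 with the TypeError above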

C:\Users\aa>python --version
Python 3.8.2

C:\Users\aa>pip --version
pip 20.0.2 from c:\users\aa\appdata\local\programs\python\python38\lib\site-packages\pip (python 3.8)

C:\Users\aa>java --version
java 14 2020-03-17
Java(TM) SE Runtime Environment (build 14+36-1461)
Java HotSpot(TM) 64-Bit Server VM (build 14+36-1461, mixed mode, sharing)

How can I solve and overcome this problem? I currently have spark-2.4.5-bin-hadoop2.7 and Python 3.8.2. Thanks in advance!


zbwhf8kr1#

This is a compatibility problem between Python 3.8 and your Spark version, see: https://github.com/apache/spark/pull/26194.
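The last frame of the traceback shows the cause: Python 3.8 added a new posonlyargcount parameter as the second argument of types.CodeType, so the cloudpickle bundled with Spark 2.4.5 passes every later argument one slot to the left and its bytecode (a bytes object) lands where an integer is expected. A minimal sketch that reproduces the message on Python 3.8:

import types

def f():
    pass

co = f.__code__

# Pre-3.8 argument order, with no co_posonlyargcount: on Python 3.8
# the bytes object co.co_code lands in the integer co_flags slot.
try:
    types.CodeType(
        co.co_argcount, co.co_kwonlyargcount, co.co_nlocals,
        co.co_stacksize, co.co_flags, co.co_code, co.co_consts,
        co.co_names, co.co_varnames, co.co_filename, co.co_name,
        co.co_firstlineno, co.co_lnotab, co.co_freevars, co.co_cellvars)
except TypeError as exc:
    print(exc)  # an integer is required (got type bytes)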
To make it work (to some extent), you need to:
1. Replace the cloudpickle.py file in the pyspark directory with its 1.1.1 version, which you can find here: https://github.com/cloudpipe/cloudpickle/blob/v1.1.1/cloudpickle/cloudpickle.py (a small download sketch follows the snippet below).
2. Edit that cloudpickle.py file to add:

# Other pyspark modules import print_exec from this file, and
# cloudpickle 1.1.1 no longer defines it; sys and traceback must be
# imported at the top of the file if they are not already there.
def print_exec(stream):
    ei = sys.exc_info()
    traceback.print_exception(ei[0], ei[1], ei[2], None, stream)
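If it helps, the replacement in step 1 can be scripted; a minimal sketch, assuming SPARK_HOME points at the unpacked spark-2.4.5-bin-hadoop2.7 directory (back up the original file first):

import os
import urllib.request

# Raw view of the v1.1.1 file linked in step 1.
url = ("https://raw.githubusercontent.com/cloudpipe/cloudpickle/"
       "v1.1.1/cloudpickle/cloudpickle.py")
dest = os.path.join(os.environ["SPARK_HOME"],
                    "python", "pyspark", "cloudpickle.py")
urllib.request.urlretrieve(url, dest)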

Then you will be able to import pyspark.
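As a quick sanity check that the patch took, a minimal local job (assuming the environment variables from the question are in place):

from pyspark import SparkConf, SparkContext

conf = SparkConf().setMaster("local[*]").setAppName("smoke-test")
sc = SparkContext(conf=conf)
print(sc.parallelize(range(10)).sum())  # expect 45
sc.stop()

Note that Spark 2.4.x documents Java 8 as its supported runtime, so the Java 14 shown in the question may cause separate problems.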
