python – PySpark error: AttributeError: 'SparkSession' object has no attribute 'parallelize'

I am using pyspark in a Jupyter notebook. Here is how Spark is set up:

    import findspark
    findspark.init(spark_home='/home/edamame/spark/spark-2.0.0-bin-spark-2.0.0-bin-hadoop2.6-hive', python_path='python2.7')

    import pyspark
    from pyspark.sql import *

    sc = pyspark.sql.SparkSession.builder.master("yarn-client").config("spark.executor.memory", "2g").config('spark.driver.memory', '1g').config('spark.driver.cores', '4').enableHiveSupport().getOrCreate()

    sqlContext = SQLContext(sc)

Then, when I run:

    spark_df = sqlContext.createDataFrame(df_in)

where df_in is a pandas DataFrame, I get the following error:

    ---------------------------------------------------------------------------
    AttributeError                            Traceback (most recent call last)
    <ipython-input-9-1db231ce21c9> in <module>()
    ----> 1 spark_df = sqlContext.createDataFrame(df_in)

    /home/edamame/spark/spark-2.0.0-bin-spark-2.0.0-bin-hadoop2.6-hive/python/pyspark/sql/context.pyc in createDataFrame(self, data, schema, samplingRatio)
        297         Py4JJavaError: ...
        298         """
    --> 299         return self.sparkSession.createDataFrame(data, schema, samplingRatio)
        300 
        301     @since(1.3)

    /home/edamame/spark/spark-2.0.0-bin-spark-2.0.0-bin-hadoop2.6-hive/python/pyspark/sql/session.pyc in createDataFrame(self, data, schema, samplingRatio)
        520             rdd, schema = self._createFromRDD(data.map(prepare), schema, samplingRatio)
        521         else:
    --> 522             rdd, schema = self._createFromLocal(map(prepare, data), schema)
        523         jrdd = self._jvm.SerDeUtil.toJavaArray(rdd._to_java_object_rdd())
        524         jdf = self._jsparkSession.applySchemaToPythonRDD(jrdd.rdd(), schema.json())

    /home/edamame/spark/spark-2.0.0-bin-spark-2.0.0-bin-hadoop2.6-hive/python/pyspark/sql/session.pyc in _createFromLocal(self, data, schema)
        400         # convert python objects to sql data
        401         data = [schema.toInternal(row) for row in data]
    --> 402         return self._sc.parallelize(data), schema
        403 
        404     @since(2.0)

    AttributeError: 'SparkSession' object has no attribute 'parallelize'

Does anyone know what I did wrong? Thanks!

Solution:

SparkSession is not a replacement for SparkContext but an equivalent of SQLContext. Just use it the same way you used to use SQLContext:

    spark.createDataFrame(...)
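
For example, a minimal sketch of the same conversion, assuming a local SparkSession rather than the yarn-client setup above, with a toy pandas DataFrame standing in for df_in:

    import pandas as pd
    from pyspark.sql import SparkSession

    # Build (or reuse) a SparkSession; local[*] is used here just for illustration.
    spark = SparkSession.builder.master("local[*]").appName("demo").getOrCreate()

    # Toy stand-in for the df_in pandas DataFrame from the question.
    df_in = pd.DataFrame({"id": [1, 2], "name": ["a", "b"]})

    # createDataFrame lives directly on the SparkSession; no SQLContext needed.
    spark_df = spark.createDataFrame(df_in)
    spark_df.show()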

If you have to access the SparkContext, use the sparkContext attribute:

    spark.sparkContext
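
RDD methods such as parallelize live on that SparkContext, which is exactly why the original code failed when it called them on the session object itself. A small sketch, reusing the spark session from above:

    # parallelize is a SparkContext method, not a SparkSession method.
    rdd = spark.sparkContext.parallelize([1, 2, 3])
    print(rdd.collect())  # [1, 2, 3]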

So if you need an SQLContext for backwards compatibility, you can create one:

    SQLContext(sparkContext=spark.sparkContext, sparkSession=spark)
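
Putting it together, a corrected version of the original notebook setup might look like the sketch below. It keeps the spark_home path and yarn-client configuration exactly as given in the question; the key change is that the builder's result is named spark and treated as a SparkSession, rather than passed to SQLContext as if it were a SparkContext:

    import findspark
    findspark.init(spark_home='/home/edamame/spark/spark-2.0.0-bin-spark-2.0.0-bin-hadoop2.6-hive',
                   python_path='python2.7')

    from pyspark.sql import SparkSession, SQLContext

    # The builder returns a SparkSession; naming it spark (not sc) avoids
    # mistaking it for a SparkContext.
    spark = (SparkSession.builder
             .master("yarn-client")
             .config("spark.executor.memory", "2g")
             .config("spark.driver.memory", "1g")
             .config("spark.driver.cores", "4")
             .enableHiveSupport()
             .getOrCreate())

    # Only needed if legacy code insists on an SQLContext.
    sqlContext = SQLContext(sparkContext=spark.sparkContext, sparkSession=spark)

    # New code can convert the pandas DataFrame directly
    # (df_in is the pandas DataFrame from the question).
    spark_df = spark.createDataFrame(df_in)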