python – Spark DataFrame: distinguishing columns with duplicated names

As far as I know, in a Spark DataFrame multiple columns can share the same name, as shown in the DataFrame snapshot below:

[
Row(a=107831, f=SparseVector(5, {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0}), a=107831, f=SparseVector(5, {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0})),
Row(a=107831, f=SparseVector(5, {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0}), a=125231, f=SparseVector(5, {0: 0.0, 1: 0.0, 2: 0.0047, 3: 0.0, 4: 0.0043})),
Row(a=107831, f=SparseVector(5, {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0}), a=145831, f=SparseVector(5, {0: 0.0, 1: 0.2356, 2: 0.0036, 3: 0.0, 4: 0.4132})),
Row(a=107831, f=SparseVector(5, {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0}), a=147031, f=SparseVector(5, {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0})),
Row(a=107831, f=SparseVector(5, {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0}), a=149231, f=SparseVector(5, {0: 0.0, 1: 0.0032, 2: 0.2451, 3: 0.0, 4: 0.0042}))
]

The result above was created by joining a DataFrame with itself; as you can see, there are 4 columns, with both a and f appearing twice.

The problem is that when I try to do further computation with the a column, I cannot find a way to select it. I tried df[0] and df.select('a'), and both returned the error message below:

AnalysisException: Reference 'a' is ambiguous, could be: a#1333L, a#1335L.

Is there any way in the Spark API to distinguish the duplicated column names again? Or is there some way to rename the columns?

Solution:

I would recommend renaming the columns before the join:

from pyspark.sql.functions import col

df1.select(col('a').alias('df1_a'), col('f').alias('df1_f')) \
   .join(df2.select(col('a').alias('df2_a'), col('f').alias('df2_f')),
         col('df1_a') == col('df2_a'))

The resulting DataFrame will have the schema:

(df1_a, df1_f, df2_a, df2_f)