Spark SQL deduplication: distinct and dropDuplicates

1. distinct deduplicates at the row level: a row is removed only when it is identical to another row in every column.

df.distinct()
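
For reference, a minimal runnable sketch (assuming Spark 2.0+ with a SparkSession named spark; row order in the show() output may vary):

>>> from pyspark.sql import SparkSession
>>> spark = SparkSession.builder.getOrCreate()
>>> df = spark.createDataFrame(
...     [('Alice', 5, 80), ('Alice', 5, 80), ('Alice', 10, 80)],
...     ['name', 'age', 'height'])
>>> df.distinct().show()
+-----+---+------+
| name|age|height|
+-----+---+------+
|Alice|  5|    80|
|Alice| 10|    80|
+-----+---+------+

The two fully identical rows collapse into one; the age=10 row differs in a column, so it survives.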

2. dropDuplicates deduplicates on a chosen subset of columns; called with no arguments it behaves the same as distinct(). The REPL example below assumes a pyspark shell, where the SparkContext sc is already available:

>>> from pyspark.sql import Row
>>> df = sc.parallelize([ \
...     Row(name='Alice', age=5, height=80), \
...     Row(name='Alice', age=5, height=80), \
...     Row(name='Alice', age=10, height=80)]).toDF()
>>> df.dropDuplicates().show()
+---+------+-----+
|age|height| name|
+---+------+-----+
|  5|    80|Alice|
| 10|    80|Alice|
+---+------+-----+
>>> df.dropDuplicates(['name', 'height']).show()
+---+------+-----+
|age|height| name|
+---+------+-----+
|  5|    80|Alice|
+---+------+-----+
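Note that when rows share the subset columns but differ elsewhere (here, age 5 vs. 10), which survivor dropDuplicates keeps is not guaranteed, since row order in a distributed DataFrame is non-deterministic. If a deterministic survivor is needed (say, the largest age per (name, height)), one common pattern is row_number over a window, sketched here against the same df:

>>> from pyspark.sql import Window, functions as F
>>> w = Window.partitionBy('name', 'height').orderBy(F.col('age').desc())
>>> df.withColumn('rn', F.row_number().over(w)) \
...     .filter(F.col('rn') == 1).drop('rn').show()
+---+------+-----+
|age|height| name|
+---+------+-----+
| 10|    80|Alice|
+---+------+-----+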