python - merge per group to fill in a time series

I am trying to merge two dataframes per group in order to fill in the time series for each user. Consider the following PySpark dataframes,

df = sqlContext.createDataFrame(
    [
        ('2018-03-01 00:00:00', 'A', 5),
        ('2018-03-01 03:00:00', 'A', 7),
        ('2018-03-01 02:00:00', 'B', 3),
        ('2018-03-01 04:00:00', 'B', 2)
     ],
     ('datetime', 'username', 'count')
)

and

df1 = sqlContext.createDataFrame(
    [
        ('2018-03-01 00:00:00',1),
        ('2018-03-01 01:00:00', 2),
        ('2018-03-01 02:00:00', 2),
        ('2018-03-01 03:00:00', 3),
        ('2018-03-01 04:00:00', 1),
        ('2018-03-01 05:00:00', 5)
    ],
    ('datetime', 'val')
)

which yield,

+-------------------+--------+-----+
|           datetime|username|count|
+-------------------+--------+-----+
|2018-03-01 00:00:00|       A|    5|
|2018-03-01 03:00:00|       A|    7|
|2018-03-01 02:00:00|       B|    3|
|2018-03-01 04:00:00|       B|    2|
+-------------------+--------+-----+

and

+-------------------+---+
|           datetime|val|
+-------------------+---+
|2018-03-01 00:00:00|  1|
|2018-03-01 01:00:00|  2|
|2018-03-01 02:00:00|  2|
|2018-03-01 03:00:00|  3|
|2018-03-01 04:00:00|  1|
|2018-03-01 05:00:00|  5|
+-------------------+---+

The val column in df1 is irrelevant and not needed in the final result, so we can drop it. Ultimately, the expected result would be

+-------------------+--------+-----+
|           datetime|username|count|
+-------------------+--------+-----+
|2018-03-01 00:00:00|       A|    5|
|2018-03-01 01:00:00|       A|    0|
|2018-03-01 02:00:00|       A|    0|
|2018-03-01 03:00:00|       A|    7|
|2018-03-01 04:00:00|       A|    0|
|2018-03-01 05:00:00|       A|    0|
|2018-03-01 00:00:00|       B|    0|
|2018-03-01 01:00:00|       B|    0|
|2018-03-01 02:00:00|       B|    3|
|2018-03-01 03:00:00|       B|    0|
|2018-03-01 04:00:00|       B|    2|
|2018-03-01 05:00:00|       B|    0|
+-------------------+--------+-----+

I tried groupBy() together with a join, but it didn't work. I also tried creating a function and registering it as a pandas_udf(), but it still didn't work, i.e.

df.groupBy('usernames').join(df1, 'datetime', 'right')

@pandas_udf('datetime string, username string, count double', F.PandasUDFType.GROUPED_MAP)
def fill_time(df):
    return df.merge(df1, on = 'cdatetime', how = 'right')

Any suggestions?

Solution:

Just take the cross product of the distinct timestamps and usernames, then outer join it with the data:

from pyspark.sql.functions import broadcast

# Cross join all distinct timestamps with all distinct usernames,
# left join the original counts back in, and replace the resulting
# nulls with 0. broadcast() hints that the small timestamp table
# should be shipped to every executor.
(broadcast(df1.select("datetime").distinct())
    .crossJoin(df.select("username").distinct())
    .join(df, ["datetime", "username"], "leftouter")
    .na.fill(0))
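
To check this against the expected table, bind the result to a name and sort by user and time before showing it (a small usage sketch; the name result is purely illustrative):

result = (broadcast(df1.select("datetime").distinct())
    .crossJoin(df.select("username").distinct())
    .join(df, ["datetime", "username"], "leftouter")
    .na.fill(0))

result.orderBy("username", "datetime").show()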

To use a pandas_udf, you need a local object as the reference:

from pyspark.sql.functions import PandasUDFType, pandas_udf

def fill_time(df1):
    # df1 is a local pandas DataFrame holding the distinct datetimes.
    @pandas_udf('datetime string, username string, count double', PandasUDFType.GROUPED_MAP)
    def _(df):
        # Right join each group against the full set of timestamps, then
        # propagate the group's username into the rows created by the join.
        df_ = df.merge(df1, on='datetime', how='right')
        df_["username"] = df_["username"].ffill().bfill()
        return df_
    return _

(df.groupBy("username")
    .apply(fill_time(
        df1.select("datetime").distinct().toPandas()
    ))
    .na.fill(0))

But it will be slower than the SQL-only solution.
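
As a side note, the GROUPED_MAP pandas_udf style used above is deprecated since Spark 3.0 in favor of GroupedData.applyInPandas. A sketch of the same logic under that API (assuming Spark >= 3.0; fill_time_pd is a hypothetical helper name, not part of the original answer):

def fill_time_pd(times_pd):
    # times_pd is a local pandas DataFrame with the distinct datetimes.
    def _(pdf):
        out = pdf.merge(times_pd, on="datetime", how="right")
        out["username"] = out["username"].ffill().bfill()
        return out
    return _

(df.groupBy("username")
    .applyInPandas(
        fill_time_pd(df1.select("datetime").distinct().toPandas()),
        schema="datetime string, username string, count double")
    .na.fill(0))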
