A Debiasing Recommendation Journey for KDD Cup 2020: Collaborative Filtering

In 2019, global retail e-commerce sales reached 3.53 trillion USD, and e-retail revenue is projected to grow to 6.54 trillion USD by 2022. This enormous growth in traffic brings fundamental challenges. Mitigating the Matthew effect, so that promising items from small and medium merchants can gain traffic quickly, and making effective cross-category recommendations both have a profound impact on the Taobao app's ecosystem.

Traditional recommendation models are the foundation of deep-learning recommendation models, so we start with the ItemCF algorithm for item recall.

The ItemCF Algorithm

The core idea of ItemCF is: recommend to the user items similar to the items they liked before.

  • Item-based collaborative filtering has three main steps:
    1. Compute the similarity between items
    2. Compute the recommendation scores
    3. Penalize popular items

The item similarity is computed as follows:
w_{ij} = \frac{|N(i) \cap N(j)|}{|N(i)|}
where the numerator is the number of users who interacted with both item i and item j, and the denominator is the number of users who interacted with item i.

w_{ij} = \frac{|N(i) \cap N(j)|}{\sqrt{|N(i)|\,|N(j)|}}
Here the denominator replaces |N(i)| with \sqrt{|N(i)|\,|N(j)|}, the geometric mean of the two items' popularity counts, which adds a penalty on popular items.
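To see the effect of the popularity penalty, here is a small self-contained sketch on made-up toy data (the user and item names are purely illustrative, not from the competition dataset):

```python
import math

# Hypothetical toy data: each user's set of clicked items; "hot" is clicked
# by every user, while "a" and "b" are niche items.
clicks = {
    "u1": {"hot", "a"},
    "u2": {"hot", "a"},
    "u3": {"hot", "b"},
    "u4": {"hot", "b"},
}

def users_of(item):
    """N(item): the set of users who clicked the item."""
    return {u for u, items in clicks.items() if item in items}

def sim_plain(i, j):
    """w_ij = |N(i) ∩ N(j)| / |N(i)|"""
    return len(users_of(i) & users_of(j)) / len(users_of(i))

def sim_penalized(i, j):
    """w_ij = |N(i) ∩ N(j)| / sqrt(|N(i)| * |N(j)|)"""
    return len(users_of(i) & users_of(j)) / math.sqrt(len(users_of(i)) * len(users_of(j)))

print(sim_plain("a", "hot"))      # 1.0: every "a" clicker also clicked "hot"
print(sim_penalized("a", "hot"))  # ~0.707: the popularity of "hot" is penalized
```

Under the plain formula every item looks maximally similar to the hot item; the geometric-mean denominator dampens exactly this effect.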

Because an inactive user's behavior should contribute more to item similarity than an active user's, the weight of active users in the similarity computation is reduced:
w_{ij} = \frac{\sum_{u \in N(i) \cap N(j)} \frac{1}{\log(1 + |N(u)|)}}{\sqrt{|N(i)|\,|N(j)|}}
where N(u) is the set of items user u has interacted with.
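A quick numeric check of the activity weight 1/log(1 + |N(u)|) shows how strongly active users are down-weighted (the item counts below are illustrative, not from the dataset):

```python
import math

# Illustrative |N(u)| values for an inactive, a moderate, and a very active
# user; the IUF weight 1 / log(1 + |N(u)|) shrinks as activity grows.
for n_items in (2, 10, 100):
    weight = 1 / math.log(1 + n_items)
    print(f"|N(u)| = {n_items:3d}  ->  weight = {weight:.3f}")
```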
Behavior closer to the present better reflects a user's current interests, so interactions close together in time say more about item similarity than interactions far apart; a time-decay factor is therefore added:
w_{ij} = \frac{\sum_{u \in N(i) \cap N(j)} \frac{1}{\log(1 + |N(u)|)} f(|t_{ui} - t_{uj}|)}{|N(i)|^{\alpha} \, |N(j)|^{1-\alpha}}
where f(|t_{ui} - t_{uj}|) is the time-decay function:
f(|t_{ui} - t_{uj}|) = \frac{1}{1 + \alpha_{time} |t_{ui} - t_{uj}|}
and t_{ui}, t_{uj} are the times at which user u interacted with item i and item j. Note that the denominator also generalizes the symmetric penalty \sqrt{|N(i)|\,|N(j)|} into the asymmetric form |N(i)|^{\alpha} |N(j)|^{1-\alpha}, which is what the code below implements.

import math
from collections import defaultdict

from tqdm import tqdm


def ItemSimilarity(df, user_col, item_col, user_time_dict):
    alpha = 0.9  # larger alpha: ndcg_50_full up, ndcg_50_half down; range (0, 1)
    alpha_time_decay = 0.9
    # Aggregate as a list (not a set) so item positions stay aligned with the
    # per-user time lists in user_time_dict.
    user_item_ = df.groupby(user_col)[item_col].agg(list).reset_index()
    user_item_dict = dict(zip(user_item_[user_col], user_item_[item_col]))

    sim_item = dict()            # co-occurrence matrix
    item_cnt = defaultdict(int)  # per-item click count
    for user, items in tqdm(user_item_dict.items()):
        for loc1, item in enumerate(items):
            item_cnt[item] += 1  # number of clicks on item i
            sim_item.setdefault(item, {})
            for loc2, relate_item in enumerate(items):
                if item == relate_item:
                    continue
                sim_item[item].setdefault(relate_item, 0)
                t1 = user_time_dict[user][loc1]  # time of the user's interaction with item
                t2 = user_time_dict[user][loc2]  # time of the user's interaction with relate_item
                # Inactive users should contribute more to item similarity than
                # active users, so down-weight active users:
                # sim_item[item][relate_item] += 1 / math.log(1 + len(items))
                # Behavior close in time reflects the user's current interests
                # better, so apply a time-decay factor as well:
                f_t = 1 / (1 + alpha_time_decay * abs(t1 - t2))  # time-decay function
                sim_item[item][relate_item] += f_t / math.log(1 + len(items))
    # co-occurrence matrix -> similarity matrix (deep-copy the inner dicts so
    # the raw co-occurrence counts are not overwritten)
    sim_item_corr = {i: dict(related) for i, related in sim_item.items()}
    for i, related_items in tqdm(sim_item.items()):
        for j, cij in related_items.items():
            # asymmetric popularity penalty |N(i)|^alpha * |N(j)|^(1 - alpha)
            sim_item_corr[i][j] = cij / (math.pow(item_cnt[i], alpha) * math.pow(item_cnt[j], 1 - alpha))
    assert len(sim_item_corr.keys()) == df[item_col].nunique()
    return sim_item_corr, user_item_dict

Ranking Items by User Preference

A user's current behavior should relate most strongly to their most recent behavior, so when scoring an item for a user we also apply a time-decay function f(|t_0 - t_{ui}|). The contribution of a clicked item i to the preference of user u for candidate item j is:
r_{uj} = w_{ij} \times r_{ui} \times f(|t_0 - t_{ui}|) = \frac{\sum_{u \in N(i) \cap N(j)} \frac{1}{\log(1 + |N(u)|)} f(|t_{ui} - t_{uj}|)}{|N(i)|^{\alpha} \, |N(j)|^{1-\alpha}} \times r_{ui} \times f(|t_0 - t_{ui}|)
(the code accumulates these contributions over every item i the user has clicked, with an implicit click rating r_{ui} = 1), where f(|t_0 - t_{ui}|) is:
f(|t_0 - t_{ui}|) = \frac{1}{1 + \beta |t_0 - t_{ui}|}
and t_0 is the current (query) time.

"""
    为用户进行推荐
        user_id:用户
        top_k:k个临近物品
        item_num:总共返回n个物品
"""
def Recommend(user_time_dict, user_qtime_dict, sim_item_corr, user_item_dict, user_id, top_k, item_num, hot_list): 
    rank = dict()
    beta = 100  # beta=1000下降 beta=10 下降
    t_0 = user_qtime_dict[user_id][0]
    interacted_items = user_item_dict[user_id]
    for loc,i in enumerate(interacted_items): 
        for j, wij in sorted(sim_item_corr[i].items(), key=lambda d: d[1], reverse=True)[0:top_k]: # 推荐top_k个临近物品
            # 如果用户已经点击过,则不再推荐
            if j not in interacted_items:
                rank.setdefault(j, 0)
                # 待推荐的物品j与用户已点击过的物品i相似,则累加上相似分数
                # 用户当前的行为应该和用户最近的行为关系最大,所以在计算用户对物品的评分时还要加上时间衰减函数
                t_ui = user_time_dict[user_id][loc] # 用户对物品i产生行为的时间
                f_t = 1 / (1 + beta * abs(t_0 - t_ui))
                rank[j] += wij * f_t
    rank_item = sorted(rank.items(), key=lambda d: d[1], reverse=True)[:item_num] 
    # 不足的热度补全
    if len(rank_item) < item_num:
        for index, item in enumerate(hot_list):
            if (item not in interacted_items) and (item not in rank_item):
                item_similar = item
                score_similar = - index - 100 
                rank_item.append( (item_similar, score_similar) )
            if len(rank_item) == item_num:
                break
    assert len(rank_item) == item_num   
    return rank_item

Loading the Data

import numpy as np
import pandas as pd

now_phase = 6
all_click_test = pd.DataFrame()
all_qtime = pd.DataFrame()
whole_click = pd.DataFrame()
for phase in range(now_phase + 1):
    print('phase:', phase)
    click_train = pd.read_csv('./data/underexpose_train/underexpose_train_click-{}.csv'.format(phase), header=None,
                              names=['user_id', 'item_id', 'time'], sep=',', dtype={'user_id': str, 'item_id': str})
    click_test = pd.read_csv('./data/underexpose_test/underexpose_test_click-{0}/underexpose_test_click-{0}.csv'.format(phase), header=None,
                             names=['user_id', 'item_id', 'time'], sep=',', dtype={'user_id': str, 'item_id': str})
    answer_click = pd.read_csv('./data/underexpose_test/underexpose_test_click-{0}/underexpose_test_qtime_with_answer-{0}.csv'.format(phase), header=None,
                               names=['user_id', 'item_id', 'item_deg', 'time', 'phase', 'test'], sep=',', dtype={'user_id': str, 'item_id': str})
    qtime = pd.read_csv('./data/underexpose_test/underexpose_test_click-{0}/underexpose_test_qtime-{0}.csv'.format(phase), header=None,
                        names=['user_id', 'qtime'], sep=',', dtype={'user_id': str})
    # DataFrame.append was removed in pandas 2.0; use pd.concat instead
    all_click_test = pd.concat([all_click_test, answer_click])
    all_qtime = pd.concat([all_qtime, qtime])
    all_click = pd.concat([click_train, click_test])
    whole_click = pd.concat([whole_click, all_click])
    whole_click = whole_click.drop_duplicates(subset=['user_id', 'item_id', 'time'], keep='last')
    whole_click = whole_click.sort_values(by=['time'])

Building the Training Set

# mark each user's last click
temp_ = whole_click.drop_duplicates(['user_id'], keep='last').copy()
temp_['remove'] = 'remove'
# keep everything except those last clicks as the training set
whole_click = whole_click.merge(temp_, on=['user_id', 'item_id', 'time'], how='left')
whole_click = whole_click[whole_click['remove'] != 'remove']
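The leave-last-out filtering above can be sketched on a toy DataFrame (the user, item, and time values below are made up for illustration):

```python
import pandas as pd

# Hypothetical click log: u1 clicks a, b, c; u2 clicks a, d.
clicks = pd.DataFrame({
    "user_id": ["u1", "u1", "u1", "u2", "u2"],
    "item_id": ["a", "b", "c", "a", "d"],
    "time":    [0.1, 0.2, 0.3, 0.1, 0.4],
}).sort_values("time")

# Mark each user's chronologically last click...
last = clicks.drop_duplicates(["user_id"], keep="last").copy()
last["remove"] = "remove"
# ...and keep everything else.
kept = clicks.merge(last, on=["user_id", "item_id", "time"], how="left")
kept = kept[kept["remove"] != "remove"]
print(kept[["user_id", "item_id"]])
# u1 keeps a and b (c was last); u2 keeps a (d was last)
```

Each user's final click is held out, mimicking the qtime prediction task.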

Item Recall

# rank items by popularity
temp_ = whole_click.groupby(['item_id'])['user_id'].count().reset_index()
temp_ = temp_.sort_values(by=['user_id'], ascending=False)
hot_list = list(temp_['item_id'])
# query times (the "current time" t_0 for each test user)
user_qtime_ = all_qtime.groupby('user_id')['qtime'].agg(list).reset_index()
user_qtime_dict = dict(zip(user_qtime_['user_id'], user_qtime_['qtime']))

user_time_ = whole_click.groupby('user_id')['time'].agg(list).reset_index()  # per-user click times, for the time factor
user_time_dict = dict(zip(user_time_['user_id'], user_time_['time']))

# similarity computation
item_sim_list, user_item = ItemSimilarity(whole_click, 'user_id', 'item_id', user_time_dict)

# item recall
recom_item = []
for i in tqdm(all_click_test['user_id'].unique()):
    rank_item = Recommend(user_time_dict, user_qtime_dict, item_sim_list, user_item, i, 500, 100, hot_list)
    for j in rank_item:
        recom_item.append([int(i), j[0], j[1]])  # user, item, score

def get_predict(df, pred_col):
    df = df.sort_values(pred_col, ascending=False)
    df = df.drop_duplicates(subset=['user_id', 'item_id'], keep='first')
    df['rank'] = df.groupby('user_id')[pred_col].rank(method='first', ascending=False)
    df = df[df['rank'] <= 50]  # keep the top 50 items per user
    df = df.groupby('user_id')['item_id'].apply(lambda x: ','.join([str(i) for i in x])).str.split(',', expand=True).reset_index()
    return df
recom_df = pd.DataFrame(recom_item, columns=['user_id', 'item_id', 'sim']) 
result = get_predict(recom_df, 'sim') 
result = result.sort_values('user_id')

Evaluation Metric

Score = \sum_{i=0}^{n} \frac{1}{\log_2(rank_i + 2)}
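The evaluation code below uses a base-2 log: a hit at 0-based rank r contributes 1/log2(r + 2), a miss (rank >= 50) contributes 0, so earlier hits are worth more. A quick worked check of a few contributions:

```python
import numpy as np

# Contribution of a hit at 0-based rank r: 1 / log2(r + 2).
for rank in (0, 1, 9, 49):
    print(f"rank {rank:2d} -> contribution {1.0 / np.log2(rank + 2.0):.3f}")
# a hit at rank 0 contributes exactly 1.0; deeper hits contribute less
```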

prediction_dict = {}
for i, row in result.iterrows():
    user_id = row['user_id']
    item_list = [ row[num] for num in range(50)]
    prediction_dict[user_id] = item_list
# higher scores mean better performance
def evaluate_each_phase(predictions, answer):
    list_item_degrees = []
    for i, row in answer.iterrows():
        item_degree = row['item_deg']
        list_item_degrees.append(item_degree)
    list_item_degrees.sort()
    median_item_degree = list_item_degrees[len(list_item_degrees) // 2]

    num_cases_full = 0.0
    ndcg_50_full = 0.0
    ndcg_50_half = 0.0
    num_cases_half = 0.0
    hitrate_50_full = 0.0
    hitrate_50_half = 0.0
    for i, row in tqdm(answer.iterrows()):
        user_id = row['user_id']
        item_id = row['item_id']
        item_degree = row['item_deg']
        rank = 0
        
        while (rank < 50) and (predictions[int(user_id)][rank] != item_id):
            rank += 1
        num_cases_full += 1.0
        if rank < 50:
            ndcg_50_full += 1.0 / np.log2(rank + 2.0)
            hitrate_50_full += 1.0
        if item_degree <= median_item_degree:
            num_cases_half += 1.0
            if rank < 50:
                ndcg_50_half += 1.0 / np.log2(rank + 2.0)
                hitrate_50_half += 1.0
    ndcg_50_full /= num_cases_full
    hitrate_50_full /= num_cases_full
    ndcg_50_half /= num_cases_half
    hitrate_50_half /= num_cases_half
    return np.array([ndcg_50_full, ndcg_50_half,
                     hitrate_50_full, hitrate_50_half], dtype=np.float32)

evaluate_score = np.zeros(4, dtype=np.float32)
for phase in range(now_phase + 1):
    answer = all_click_test[all_click_test['phase'] == phase]
    evaluate_score += evaluate_each_phase(prediction_dict, answer)
print("------------- eval result -------------")
print("hitrate_50_full : ", evaluate_score[2],'\n','  ndcg_50_full : ', evaluate_score[0], '\n')
print("hitrate_50_half : ", evaluate_score[3],'\n','  ndcg_50_half : ', evaluate_score[1], '\n')
print("score:",evaluate_score[0],'\n')

Reference: 《推荐算法:3种协同过滤的原理及实现》 (Recommendation Algorithms: Principles and Implementation of Three Kinds of Collaborative Filtering).
