
LLM-Enhanced Reranking for Complementary Product Recommendation

2507.16237v1


Abstract#

Complementary product recommendation, which aims to suggest items that are used together to enhance customer value, is a crucial yet challenging task in e-commerce. While existing graph neural network (GNN) approaches have made significant progress in capturing complex product relationships, they often struggle with the accuracy-diversity tradeoff, particularly for long-tail items. This paper introduces a model-agnostic approach that leverages Large Language Models (LLMs) to enhance the reranking of complementary product recommendations. Unlike previous works that use LLMs primarily for data preprocessing and graph augmentation, our method applies LLM-based prompting strategies directly to rerank candidate items retrieved from existing recommendation models, eliminating the need for model retraining. Through extensive experiments on public datasets, we demonstrate that our approach effectively balances accuracy and diversity in complementary product recommendations, with at least 50% lift in accuracy metrics and 2% lift in diversity metrics on average for the top recommended items across datasets.
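The abstract describes applying LLM prompting directly to rerank candidates retrieved by an existing recommender, with no retraining. A minimal sketch of that flow is below; the prompt wording, the generic `call_llm(prompt) -> str` client, and the fallback-to-retrieval-order behavior are illustrative assumptions, not details from the paper.

```python
def build_rerank_prompt(query_item, candidates):
    """Format the viewed product and retrieved candidates into a prompt
    asking the LLM to order candidates by complementarity."""
    lines = [
        f"Customer is viewing: {query_item}",
        "Rank the following candidate products from most to least",
        "complementary (i.e., used together with the viewed product).",
        "Answer with the candidate numbers only, comma-separated.",
    ]
    for i, cand in enumerate(candidates, 1):
        lines.append(f"{i}. {cand}")
    return "\n".join(lines)

def rerank(query_item, candidates, call_llm):
    """Model-agnostic reranking: parse the LLM's ranking and fall back
    to the original retrieval order for omitted or garbled indices."""
    reply = call_llm(build_rerank_prompt(query_item, candidates))
    order = []
    for tok in reply.replace(",", " ").split():
        if tok.isdigit() and 1 <= int(tok) <= len(candidates):
            idx = int(tok) - 1
            if idx not in order:
                order.append(idx)
    # Any candidates the LLM did not mention keep their retrieval order.
    order += [i for i in range(len(candidates)) if i not in order]
    return [candidates[i] for i in order]

# Stub LLM for demonstration: pretends the model answered "2, 1, 3".
demo_llm = lambda prompt: "2, 1, 3"
print(rerank("laptop", ["mouse pad", "laptop sleeve", "USB hub"], demo_llm))
```

Because the reranker only consumes a candidate list and returns a permutation of it, it can sit on top of any retrieval model (GNN or otherwise), which is what makes the approach model-agnostic.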

PDF#

View the Chinese PDF - 2507.16237v1
