Title#
LLM-Enhanced Reranking for Complementary Product Recommendation
Abstract#
Complementary product recommendation, which aims to suggest items that are used together to enhance customer value, is a crucial yet challenging task in e-commerce. While existing graph neural network (GNN) approaches have made significant progress in capturing complex product relationships, they often struggle with the accuracy-diversity tradeoff, particularly for long-tail items. This paper introduces a model-agnostic approach that leverages Large Language Models (LLMs) to enhance the reranking of complementary product recommendations. Unlike previous works that use LLMs primarily for data preprocessing and graph augmentation, our method applies LLM-based prompting strategies directly to rerank candidate items retrieved from existing recommendation models, eliminating the need for model retraining. Through extensive experiments on public datasets, we demonstrate that our approach effectively balances accuracy and diversity in complementary product recommendations, with at least 50% lift in accuracy metrics and 2% lift in diversity metrics on average for the top recommended items across datasets.
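The core idea in the abstract — prompting an LLM to rerank candidates retrieved by an existing recommender, with no retraining — can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `call_llm` is a hypothetical stand-in for any chat-completion API (stubbed here so the example is self-contained), and the prompt wording and number-list answer format are assumptions.

```python
def build_rerank_prompt(query_item, candidates):
    """Ask the LLM to reorder retrieved candidates by how well they
    complement (are used together with) the query item."""
    lines = [
        f"Query product: {query_item}",
        "Rank the candidates below from most to least complementary",
        "(items used together with the query product, not substitutes).",
        "Answer with the candidate numbers only, comma-separated.",
    ]
    lines += [f"{i}. {c}" for i, c in enumerate(candidates, 1)]
    return "\n".join(lines)

def parse_ranking(response, candidates):
    """Map a '2, 3, 1'-style answer back onto candidate items; anything
    the LLM omitted or garbled falls back to the retrieval order."""
    order = []
    for tok in response.replace(",", " ").split():
        tok = tok.rstrip(".")
        if tok.isdigit():
            idx = int(tok) - 1
            if 0 <= idx < len(candidates) and idx not in order:
                order.append(idx)
    order += [i for i in range(len(candidates)) if i not in order]
    return [candidates[i] for i in order]

def rerank(query_item, candidates, call_llm):
    """Model-agnostic reranking: the base recommender supplies
    `candidates`; only the final ordering is delegated to the LLM."""
    return parse_ranking(call_llm(build_rerank_prompt(query_item, candidates)),
                         candidates)

# Stub LLM response: pretends the model judged candidate 2 most complementary.
stub_llm = lambda prompt: "2, 3, 1"
print(rerank("tennis racket",
             ["running shoes", "tennis balls", "grip tape"],
             stub_llm))
# → ['tennis balls', 'grip tape', 'running shoes']
```

Because the LLM only permutes an existing candidate list, the approach plugs in after any retrieval model (e.g. a GNN), which is what makes it model-agnostic; the fallback in `parse_ranking` keeps the output well-formed even when the LLM answer is unusable.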