Understanding Language Modeling Paradigm Adaptations in Recommender Systems: Lessons Learned and Open Challenges

Tutorial at the 27th European Conference on Artificial Intelligence (ECAI 2024), Santiago de Compostela, Spain
Schedule: October 19, 2024

Abstract

Large Language Models (LLMs) have achieved tremendous success in the field of Natural Language Processing, owing to diverse training paradigms that enable them to capture intricate linguistic patterns and semantic representations. In particular, the recent "pre-train, prompt, and predict" paradigm has attracted significant attention as an approach to learning generalizable models from limited labeled data. In line with these advances, such training paradigms have recently been adapted to the recommendation domain and are regarded as a promising direction in both academia and industry.

This half-day tutorial aims to provide a thorough understanding of how knowledge can be extracted and transferred from models pre-trained under different training paradigms to improve recommender systems along various dimensions, such as generality, data sparsity, effectiveness, and trustworthiness. We first introduce the basic concepts and a generic architecture of the language modeling paradigm for recommendation. We then focus on recent advances in adapting LLM training strategies and optimization objectives to different recommendation tasks. After that, we systematically introduce ethical issues in LLM-based recommender systems and discuss possible approaches to assessing and mitigating them. We also summarize the relevant datasets and evaluation metrics, and present an empirical study comparing the recommendation performance of the training paradigms. Finally, we conclude with a discussion of open challenges and future directions.

Program


Part 1: Introduction by Dr. Yong Zheng (20 mins)

  • Overview of Language Models and RSs
  • Overview of the Language Modeling Paradigm in RSs

Part 2: Training Strategies of LLM-based RSs by Dr. Lemei Zhang (45 mins)

  • Pre-train, fine-tune paradigm for RSs
  • Prompting paradigm for RSs (see the code sketch after this list)
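
To make the prompting paradigm concrete, here is a minimal, self-contained Python sketch of LLM-based re-ranking: a user's interaction history and a candidate list are serialized into a natural-language prompt, and the LLM is asked to return the candidates in preference order. The call_llm stub, the prompt template, and the movie titles are illustrative assumptions, not the method of any specific system covered in the tutorial.

    # Sketch of the prompting paradigm for recommendation.
    # call_llm is a hypothetical stand-in for any chat-completion client.

    def call_llm(prompt: str) -> str:
        """Placeholder for an LLM call (local model or hosted API)."""
        raise NotImplementedError("plug in your LLM client here")

    def build_rerank_prompt(history: list[str], candidates: list[str]) -> str:
        """Cast top-k re-ranking as text completion: the LLM sees the
        user's watched items and ranks the candidate items."""
        return (
            "A user recently watched: " + "; ".join(history) + ".\n"
            "Rank the following candidate movies for this user, most "
            "relevant first, one title per line:\n"
            + "\n".join(f"- {c}" for c in candidates)
        )

    prompt = build_rerank_prompt(
        history=["Inception", "Interstellar", "The Matrix"],
        candidates=["Tenet", "Notting Hill", "Blade Runner 2049"],
    )
    # ranking = call_llm(prompt).splitlines()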

Part 3: Optimization Objectives of LLM-based RSs by Dr. Peng Liu (20 mins)

  • Language modeling objectives for recommendation (see the sketch after this list)
  • Adaptive objectives for recommendation
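
As a concrete illustration of how the causal language-modeling objective carries over to recommendation, the sketch below trains a sequence encoder with the standard next-token cross-entropy loss, with item IDs taking the place of word tokens (next-item prediction). The GRU encoder, the model sizes, and the random toy batch are illustrative assumptions; the point is only that the shifted-target loss is identical to LM pre-training.

    # Sketch: sequential recommendation trained with the same next-token
    # cross-entropy objective as causal language modeling, except the
    # "vocabulary" is the item catalogue. All sizes are toy values.
    import torch
    import torch.nn as nn

    n_items, dim = 1000, 64
    item_emb = nn.Embedding(n_items, dim)         # item-ID "token" embeddings
    encoder = nn.GRU(dim, dim, batch_first=True)  # any sequence encoder works here
    head = nn.Linear(dim, n_items)                # scores over the whole catalogue
    loss_fn = nn.CrossEntropyLoss()               # identical loss to LM pre-training

    seq = torch.randint(0, n_items, (8, 20))      # toy batch of interaction histories
    inputs, targets = seq[:, :-1], seq[:, 1:]     # shift by one: predict the next item
    hidden, _ = encoder(item_emb(inputs))
    logits = head(hidden)                         # (batch, time, n_items)
    loss = loss_fn(logits.reshape(-1, n_items), targets.reshape(-1))
    loss.backward()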

Part 4: Ethical Issues and Trustworthiness of LLM-based RSs by Dr. Yashar Deldjoo (50 mins)

  • Different harm types, stakeholders involved, and harm severity in LLM-based RSs
  • Possible approaches to assessing and mitigating ethical issues and harms

Part 5: Evaluation and Available Resources by Dr. Peng Liu (20 mins)

  • Evaluation of recommendation accuracy and language quality (see the metric sketch after this list)
  • Open-source datasets and training platforms
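
For reference, the following sketch implements two top-K accuracy metrics commonly reported in such empirical studies, Recall@K and NDCG@K, for a single ranked list; the toy inputs and binary relevance are illustrative assumptions.

    # Sketch of two standard top-K accuracy metrics for one ranked list.
    import math

    def recall_at_k(ranked: list[str], relevant: set[str], k: int) -> float:
        """Fraction of the relevant items recovered in the top k."""
        hits = sum(1 for item in ranked[:k] if item in relevant)
        return hits / len(relevant) if relevant else 0.0

    def ndcg_at_k(ranked: list[str], relevant: set[str], k: int) -> float:
        """DCG of the top k with binary gains, normalized by the ideal DCG."""
        dcg = sum(1.0 / math.log2(i + 2)
                  for i, item in enumerate(ranked[:k]) if item in relevant)
        ideal = sum(1.0 / math.log2(i + 2) for i in range(min(len(relevant), k)))
        return dcg / ideal if ideal > 0 else 0.0

    print(recall_at_k(["a", "b", "c"], {"a", "c", "d"}, k=3))  # ~0.667
    print(ndcg_at_k(["a", "b", "c"], {"a", "c", "d"}, k=3))    # ~0.704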

Part 6: Summary and Future Directions by Dr. Jon Atle Gulla (20 mins)

Presenters

Lemei Zhang

Postdoctoral Researcher

Norwegian Research Center for AI Innovation, NTNU, Norway

Peng Liu

Researcher

Norwegian Research Center for AI Innovation, NTNU, Norway

Yashar Deldjoo

Assistant Professor

Polytechnic University of Bari, Italy

Yong Zheng

Associate Professor

Illinois Institute of Technology, USA

Jon Atle Gulla

Professor

Norwegian Research Center for AI Innovation, NTNU, Norway

Materials