Generative models are changing the way we seek information online. Large language models (LLMs) such as ChatGPT represent one successful application of generative models, leveraging the vast amounts of text encoded in their billion-scale parameters. Recommender systems employing generative models go beyond LLMs, encompassing a broader range of models such as deep generative models (DGMs) trained directly on user-item interactions, multi-modal foundation models, and other non-LLM generative models. These models offer new opportunities for recommender systems by enhancing how user preferences are learned, connecting us with the vast amounts of information available on the Internet. They can deliver more personalized and contextually relevant content, generate recommendations without relying on narrowly defined datasets, and address the cold-start problem. Furthermore, these models significantly enhance the interactivity of recommender systems, boosting their conversational capabilities.
However, there is no “free lunch”: new advantages bring new challenges and risks that must be addressed when using LLMs and other categories of DGMs. Some of these challenges are new (e.g., hallucination, out-of-inventory recommendations), while others are intensified by the expanded capabilities of these systems (privacy, fairness and bias, security and robustness, manipulation, opacity, accountability, over-reliance on automation). A critical aspect of utilizing these technologies is developing robust evaluation frameworks that can effectively assess the performance, fairness, and security of these Gen-RecSys. Proper evaluation is essential to ensure these systems are reliable and trustworthy, especially when they handle sensitive user data and make impactful recommendations.
This workshop will specifically focus on the Risks, Opportunities, and Evaluation of real-world Recommender System applications, aiming to cover the full spectrum of current challenges and advances. Additionally, the workshop invites discussions on the application of LLMs and generative models to specific tasks and areas such as Conversation, Explanation, and Bundle Recommendation, among others. Such discussions are welcome as long as the goal pertains to some form of information seeking.
Topics
This workshop will focus on the opportunities and challenges of Recommender Systems in real-world applications, structured around three key pillars: opportunities, risks and challenges, and evaluation and mitigation strategies. Discussions will cover topics such as data preprocessing, model evaluation, fairness, debiasing, cold-start problems, model distillation, and user interaction design. We also welcome accounts of Recommender System usage in specific applications, such as E-commerce, Streaming Services, News, Social Media, and Personalized Marketing. The goal is to bring together researchers from industry and academia to foster knowledge sharing and discussions that pave the way toward societally beneficial use of this technology.
Opportunities
- Integrating large language models and other (multimodal) pre-trained generative models to enhance recommender algorithms for user modeling.
- Generative recommendation using AI to help create personalized item content, such as advertisements, images, and micro-videos.
- Combining content generation and retrieval for personalized information seeking and presentation.
- Leveraging the advantages of generative models to improve traditional recommender tasks, including collaborative, sequential, cold-start, social, conversational, multimodal, and causal recommendation.
- Applications of generative model-enhanced recommender systems in various sectors, such as finance, streaming platforms, social networks, entertainment, music, e-commerce, education, fashion, and healthcare.
Risks and Challenges
- Identifying and addressing the potential risks and challenges associated with deploying generative models in recommender systems.
- Challenges of hallucination and misinformation risks, particularly with the advent of sophisticated image generation models.
- Examining bias and fairness in models with a special emphasis on mitigating biases related to race, gender, or brands.
- Privacy implications of utilizing extensive data for model training.
- Technical challenges in achieving transparency, explainability, security challenges, accountability and user control in recommendations.
- Ensuring compliance with emerging ethical and legal standards.
Evaluation and Mitigation
- Developing new benchmarks, evaluation metrics, and protocols to assess the efficacy, fairness, and security of generative models in recommender systems.
- Novel strategies for bias detection, measurement, and understanding.
- Red-teaming and ensuring recommendations are transparent, explainable, and safeguard user privacy.
- Evaluating the robustness, trustworthiness, and real-time performance of generative models across different domains and modalities.
- Designing evaluation methodologies to examine the usage of generative models in recommender systems, including human evaluation paradigms and interfaces.
- Novel approaches for (generative model) Alignment and RLHF for recommendation tasks.
Submission Guidelines for the Workshop
We welcome submissions in the form of full papers, short papers, and extended abstracts that address any of the listed topics and related areas. Submissions should clearly articulate the contribution to the field, methodology, results, and implications for the design, implementation, or understanding of LLMs and generative models in recommender systems. Submissions should follow the CEUR-WS two-column template (https://ceur-ws.org/Vol-XXX/CEURART.zip).
- Full Papers (up to 8 pages, excluding references): Detailed studies, theoretical analyses, or extensive reviews of specific aspects of LLMs and generative models in recommender systems.
- Short Papers (up to 4 pages, excluding references): Preliminary findings, innovative concepts, or case studies on the application of LLMs and generative models in recommender systems.
- Extended Abstracts (2-3 pages, excluding references): Proposals for discussions, work-in-progress, or initial insights into the application of LLMs and generative models in recommender systems.
Submissions must be anonymized and adhere to the specified formatting guidelines, which will be provided on the workshop website.
The submission site is EasyChair RecSys 2024 Workshops.
Make sure to select the “The 1st Workshop on Risks, Opportunities, and Evaluation of Generative Models in Recommender Systems (ROEGEN@RECSYS'24)” track when creating a submission.
All papers will undergo a rigorous double-blind peer review process, with the same review criteria and review period for every submission: relevance, originality, technical quality, relation to the workshop scope, and overall contribution to the field. Accepted papers will be presented at the workshop and published in the CEUR Workshop Proceedings. High-quality submissions will be recommended for a special issue of ACM TORS on using generative models for recommendation.
Important Dates
- Submission Deadline: September 7, 2024 (AoE; extended from September 5, 2024)
- Notification of Acceptance: September 13, 2024
- Camera-Ready Submission: September 20, 2024
- Workshop Date: October 18, 2024
Featured Speakers
Minmin Chen
Principal Research Scientist at Google DeepMind, USA: TBC
Jiaqi Zhai
Senior Research Scientist at Meta, USA: "Actions Speak Louder than Words"
Craig Boutilier
Senior Research Scientist at Google: "Alignment in Recommendation Systems"
Michael Ekstrand
Associate Professor at Drexel University, USA: "Responsible Recommendation in the Age of Generative AI"
Workshop Organizers
- Yashar Deldjoo, Tenure-Track Assistant Professor, Polytechnic University of Bari, Italy
- Julian McAuley, Professor, UC San Diego, USA
- Scott Sanner, Associate Professor, University of Toronto, Canada
- Pablo Castells, Professor, Autonomous University of Madrid, Spain
- Shuai Zhang, Applied Scientist, Amazon Web Services AI, USA
- Enrico Palumbo, Senior Research Scientist, Spotify
Program Committee
- Aleksandr Petrov, a.petrov.1@research.gla.ac.uk, University of Glasgow
- Branislav Kveton, bkveton@amazon.com, Amazon
- Chao Zhang, zclfe00@gmail.com, University of Science and Technology of China
- Chengkai Huang, chengkai.huang1@unsw.edu.au, The University of New South Wales
- Chen Ma, chenma@cityu.edu.hk, City University of Hong Kong
- Claudia Hauff, claudia.hauff@gmail.com, Spotify
- Dietmar Jannach, dietmar.jannach@aau.at, University of Klagenfurt
- Gustavo Penha, gustavop@spotify.com, Spotify
- Hugues Bouchard, hb@spotify.com, Spotify
- Martin Mladenov, mmladenov@google.com, Google
- Michael Ekstrand, mde48@drexel.edu, Drexel University
- Mohammad Aliannejadi, m.aliannejadi@uva.nl, University of Amsterdam
- Narges Tabari, nargesam@amazon.com, Amazon
- Paolo Garza, paolo.garza@polito.it, Politecnico di Torino
- Reza Shirvany, reza.shirvany@zalando.de, Zalando
- Thong Nguyen, thongnguyen@microsoft.com, Microsoft
- Zhankui He, zhh004@ucsd.edu, University of California San Diego