Overview
Recent advancements in large language models (LLMs), such as GPT, PaLM, and Llama, along with the generative AI capabilities they possess, have garnered significant attention from both the research community and the public. Although these models are easily accessible to users and researchers through conventional prompting interfaces, API calls, or static snapshots, there is an increasing demand for them to provide personalized and context-aware responses. This demand arises from diverse application scenarios where assistive creation and tailored generation are essential for individuals and for groups/sub-populations of users with even more diverse backgrounds and preferences. Merely relying on generic responses is insufficient to address the specific needs and constraints of users in personal, group, or even societal contexts. Instead, such scenarios demand that models consider and align their responses with the preferences and objectives of the users in these contexts.
This workshop aims to create a collaborative and interdisciplinary platform that brings together creators, researchers, and practitioners of large language models. By fostering an open and forward-looking environment, the workshop seeks to facilitate discussions on the current landscape of personalizing LLMs, adapting LLMs to individual and group contexts, and aligning LLMs with the values and objectives of society at large. It provides an opportunity for participants to share insights, exchange ideas, and explore innovative approaches in the field. The ultimate goal is to drive progress and shape the future of large language models for individuals, groups, and society through collective expertise and collaboration.
Topics of the workshop include, but are not limited to:
- Novel models and algorithms for adapting large language models to personal contexts.
- New developments in aligning large language models with the preferences and objectives of individuals, sub-populations, or society at large.
- Theoretical and empirical results of applying reinforcement learning from the feedback of individuals and groups of human users to LLMs.
- Evaluation of personalization and societal alignment of LLMs, including datasets, metrics, and benchmarks.
- Personalizing and aligning LLMs under resource constraints. For example, deploying personalized LLMs on mobile devices or aligning the output of frozen LLMs through APIs.
- Applications of personalization and societal alignment of LLMs, including but not limited to search engines, recommender systems, email/writing assistants, social networking, entertainment, education, healthcare, scientific discovery, and the future of work.
- Ethics of personalizing LLMs, including but not limited to privacy, fairness, bias, transparency, diversity, and other potential impacts of LLMs on individuals, groups, and society.
- Equitable applications of LLMs to diverse user groups.
Schedule
Time | Agenda |
---|---|
9:00-9:10 AM | Opening remarks |
9:10-10:00 AM | Keynote by Zhiyong Lu: Transforming Medicine with AI: from PubMed Search to TrialGPT |
10:00-10:30 AM | Contributing Talk 1: Unlocking the ‘Why’ of Buying: Introducing a New Dataset and Benchmark for Purchase Reason and Post-Purchase Experience |
10:30-11:00 AM | Break |
11:00-11:30 AM | Contributing Talk 2: Session Context Embedding for Intent Understanding in Product Search |
1:30-2:00 PM | Invited talk by Hongning Wang: Human vs. Generative AI in Content Creation Competition: Symbiosis or Conflict |
2:00-3:00 PM | Panel Discussion: Opportunities and Challenges for Personalizing LLMs (Panelists: Michael Bendersky, Vanessa Murdock, Hongning Wang, Wei Ai; Moderator: Mingyang Zhang) |
3:00-3:30 PM | Break |
3:30-4:20 PM | Talk by Hamed Zamani: Personalizing Large Language Models |
Keynote Speaker
Zhiyong Lu
Senior Investigator, NIH/NLM
Deputy Director for Literature Search, NCBI
Professor of Computer Science (Adjunct), UIUC
The explosion of biomedical big data and information over the past decade has created new opportunities for discoveries that improve the treatment and prevention of human diseases. As such, the field of medicine is undergoing a paradigm shift driven by AI-powered analytical solutions. This talk explores the benefits (and risks) of AI and ChatGPT, highlighting their pivotal roles in revolutionizing biomedical discovery, patient care, diagnosis, treatment, and medical research. By demonstrating their uses in real-world applications such as improving PubMed searches (Fiorini et al., Nature Biotechnology 2018), supporting precision medicine (LitVar, Allot et al., Nature Genetics 2023), and accelerating patient trial matching (TrialGPT), we underscore the potential of AI and ChatGPT in enhancing clinical decision-making, personalizing patient experiences, and accelerating knowledge discovery.
Invited Speaker
Hongning Wang
Copenhaver Associate Professor of Computer Science, University of Virginia
The advent of generative AI technology has had a transformative impact on the content creation landscape, offering alternative ways to produce diverse, high-quality content across media and thereby reshaping the ecosystems of online content creation and publishing. It has also raised concerns about market over-saturation and the potential marginalization of human creativity. Our recent work introduces a competition model, generalized from the Tullock contest, to analyze the tension between human creators and generative AI. Our theory and simulations suggest that, despite these challenges, a stable equilibrium between human- and AI-generated content is possible. This work contributes to understanding the competitive dynamics in the content creation industry, offering insights into the future interplay between human creativity and technological advancements in generative AI.
Speaker
Hamed Zamani
Associate Professor, UMass
Many users today rely on Large Language Models (LLMs) to learn about topics and find answers to their questions. In this talk, I will discuss models and evaluation methodologies for generating personalized outputs conditioned on the user’s preferences, history, or background knowledge. In more detail, I will first introduce the Language Model Personalization (LaMP) benchmark (https://lamp-benchmark.github.io/) – a large-scale benchmark for studying personalization in text classification and generation using LLMs. I will then draw connections between LLM personalization and retrieval-enhanced machine learning (REML) and introduce retrieval-augmented approaches for personalizing large language models.
Accepted Papers
Paper ID | Title | Link |
---|---|---|
2 | Session Context Embedding for Intent Understanding in Product Search | arXiv |
3 | Unlocking the ‘Why’ of Buying: Introducing a New Dataset and Benchmark for Purchase Reason and Post-Purchase Experience | arXiv |
Organizers
Please contact us through this email address if you have any questions.