Overview

Recent advances in large language models (LLMs), such as GPT, PaLM, and Llama, and the generative AI capabilities they enable have garnered significant attention from both the research community and the public. Although these models are readily accessible to users and researchers through conventional prompting interfaces, API calls, or static snapshots, there is growing demand for them to provide personalized and context-aware responses. This demand arises from diverse application scenarios in which assistive creation and tailored generation are essential for individual users and for groups or sub-populations with even more diverse backgrounds and preferences. Generic responses alone cannot address the specific needs and constraints of users in personal, group, or societal contexts; instead, such scenarios require models that can consider and align their responses with the preferences and objectives of users in these contexts.

This workshop aims to create a collaborative and interdisciplinary platform that brings together creators, researchers, and practitioners of large language models. By fostering an open and forward-looking environment, the workshop seeks to facilitate discussions on the current landscape of personalizing LLMs, adapting LLMs to individual and group contexts, and aligning LLMs with the values and objectives of society at large. It offers participants an opportunity to share insights, exchange ideas, and explore innovative approaches in the field. The ultimate goal is to drive progress and shape the future of large language models for individuals, groups, and society through collective expertise and collaboration.

Topics of the workshop will include, but are not limited to:

  • Novel models and algorithms for adapting large language models to personal contexts.
  • New developments in aligning large language models with the preferences and objectives of individuals, sub-populations, or society at large.
  • Theoretical and empirical results on applying reinforcement learning from the feedback of individual users and groups of users to LLMs.
  • Evaluation of personalization and societal alignment of LLMs, including datasets, metrics, and benchmarks.
  • Personalizing and aligning LLMs under resource constraints, for example, deploying personalized LLMs on mobile devices or aligning the outputs of frozen LLMs through APIs.
  • Applications of personalization and societal alignment of LLMs, including but not limited to search engines, recommender systems, email/writing assistants, social networking, entertainment, education, healthcare, scientific discovery, and the future of work.
  • Ethics of personalizing LLMs, including but not limited to privacy, fairness, bias, transparency, diversity, and other potential impacts of LLMs on individuals, groups, and society.
  • Equitable applications of LLMs across diverse user groups.

Schedule

Keynote Speakers

Invited Speakers

Organizers

Please contact us through this email address if you have any questions.

Previous Editions