
ACML 2024 Tutorial on 
Efficient Pre-trained Model Tuning via Model Reprogramming

This tutorial will focus on model reprogramming (MR), an increasingly important topic centering on machine learning efficiency, democratization, and safety. As pre-trained models grow in scale, the need for efficient adaptation methods is more critical than ever. MR offers a resource-efficient alternative to conventional fine-tuning by enabling model reuse for new tasks without modifying model parameters or accessing pre-training data.


BACKGROUND

“Knowledge should not be accessible only to those who can pay,” said Robert C. May, chair of the UC faculty Academic Senate. In machine learning (ML), this sentiment is particularly relevant: recent foundation models, pre-trained on vast datasets with substantial resources, have widened the gap between those who can afford to access and even govern large-scale pre-trained models and those who cannot. These models, though promising for addressing global challenges across healthcare, education, and infrastructure, are prohibitively expensive to fine-tune for domain-specific tasks due to their size, sometimes reaching billions of parameters. This cost is insurmountable for most institutions and companies in developing regions (such as parts of Asia). Model reprogramming (MR) addresses this issue by providing a cost-effective alternative: rather than resource-heavy retraining or parameter adjustment, MR repurposes pre-trained models by reprogramming their input space and transforming their output space to fit new data and tasks.
MR not only mitigates the inefficiencies of fine-tuning ever-larger models but also enables more institutions and companies to enjoy the benefits brought by foundation models.

DETAILED DESCRIPTIONS

This tutorial will elucidate MR from two core perspectives, input reprogramming and output transformation, in the context of visual reprogramming, while showcasing a range of real-world applications beyond vision tasks where MR has proven valuable, especially in resource-constrained settings.

Input Reprogramming for MR. Input reprogramming (IR), with visual prompting as a special case, injects trainable noise patterns into the target data to achieve task-specific optimization. The noise acts as a form of input manipulation that lets the model handle new data while its architecture and parameters remain untouched. IR relies on strategies that place trainable noise patterns alongside reshaping operations (e.g., interpolation or cropping), aligning the perturbed target data with the pre-trained model's input domain. This tutorial will cover the taxonomy of existing IR strategies and discuss key IR methods in detail. Participants will also gain insights into how different noise-injection methods optimize alignment with the pre-trained model's inherent structure, maximizing performance with minimal effort spent training the noise patterns.
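To make the idea concrete, here is a minimal PyTorch sketch of IR (illustrative only; the choice of a frozen ImageNet-pretrained ResNet-18, the target resolution, and the variable names are our assumptions rather than a specific published method): target images are resized to the source input resolution and a trainable additive noise pattern is injected before the forward pass.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models


class InputReprogramming(nn.Module):
    """Illustrative IR wrapper: only the noise pattern is trainable."""

    def __init__(self, source_size=224):
        super().__init__()
        # Frozen pre-trained source model (assumption: ResNet-18 on ImageNet).
        self.backbone = models.resnet18(weights="IMAGENET1K_V1")
        for p in self.backbone.parameters():
            p.requires_grad = False
        self.source_size = source_size
        # Trainable noise pattern covering the source input space.
        self.delta = nn.Parameter(torch.zeros(1, 3, source_size, source_size))

    def forward(self, x_target):
        # Reshaping operation: interpolate target images (e.g., 32x32)
        # up to the source model's expected input resolution.
        x = F.interpolate(x_target, size=self.source_size,
                          mode="bilinear", align_corners=False)
        # Input manipulation: inject the trainable noise pattern.
        return self.backbone(x + self.delta)  # logits over source labels
```

During training, only `delta` (plus the output mapping discussed next) receives gradients, so the memory and compute footprint stays far below that of full fine-tuning.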

 

Output Transformation for MR. Output transformation (OT) complements IR by employing parameter-free functions that map a pre-trained model's predictions to the output space of the target task, allowing adaptation across different tasks. Because the source and target output spaces differ in both semantics and dimensionality, OT strategies refine mappings that automatically discover correlations between the model's existing output labels and the target task labels. This tutorial will explore the latest OT techniques and discuss how to harness the knowledge encoded in model predictions to achieve semantic matching. Attendees will leave with a deeper understanding of how specific OT techniques can unlock new possibilities to enhance MR and even benefit other model-adaptation paradigms, in terms of both the accessibility and interpretability of large-scale pre-trained models.
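As a concrete example, one common parameter-free OT strategy is frequency-based label mapping: each target class is assigned the source label that the frozen model predicts most often on that class's samples. The sketch below is illustrative (the function names and the greedy many-to-one assignment are our simplifications; refined OT methods enforce one-to-one mappings or update the mapping during training):

```python
import torch


def frequency_label_mapping(source_logits, target_labels, num_target_classes):
    """Assign each target class the most frequently predicted source label."""
    source_preds = source_logits.argmax(dim=1)           # source-label predictions
    num_source_classes = source_logits.shape[1]
    mapping = torch.zeros(num_target_classes, dtype=torch.long)
    for t in range(num_target_classes):
        mask = target_labels == t
        if mask.any():
            counts = torch.bincount(source_preds[mask],
                                    minlength=num_source_classes)
            mapping[t] = counts.argmax()                  # greedy, many-to-one
    return mapping


def map_outputs(source_logits, mapping):
    """Read off target-task logits by selecting the mapped source logits."""
    return source_logits[:, mapping]
```

Because the mapping is computed from model predictions alone, it adds no trainable parameters and can be recomputed cheaply as the input reprogramming pattern evolves.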

 

Real-world Applications of MR. Going beyond visual reprogramming, MR has been applied in many real-world settings where data and computational capacity are inherently limited. Notably, MR has enabled efficient out-of-distribution and cross-modality adaptation of pre-trained models, for example repurposing language models for protein sequence prediction. This tutorial will summarize the practical applications of MR across diverse domains, providing the audience with actionable insights and techniques for deploying MR in practice.
By the end of the session, participants will be able to leverage the power of MR for efficient model tuning, even in resource-constrained environments, broadening the benefits of pre-trained models to more AI practitioners.


SCHEDULE

13:30 - 16:00, 8 Dec, 2024 (Hanoi time, GMT +7)


13:30 - 14:20

SESSION ONE

Lecturer: Dr Feng Liu

In this session, Dr Liu will introduce IR.

14:20 - 14:30

BREAK

Having a break outside of the room.

14:30 - 15:20

SESSION TWO

Lecturer: Dr Feng Liu

In this session, Dr Liu will introduce OT.

15:20 - 15:30

BREAK

Having a break outside of the room.

15:30 - 16:00

SESSION THREE

Lecturer: Dr Feng Liu

In this session, Dr Liu will introduce real-world applications of MR.


ORGANISER


FENG LIU

Assistant Professor of Machine Learning, 
The University of Melbourne

Dr Feng Liu is a machine learning researcher with research interests in hypothesis testing and trustworthy machine learning. Currently, he is the recipient of an ARC DECRA Fellowship (Discovery Early Career Researcher Award), a Lecturer at The University of Melbourne, Australia, and a Visiting Scientist at RIKEN-AIP, Japan. He has served as an Area Chair for ICML, NeurIPS, and ICLR, and has received the Outstanding Paper Award of NeurIPS (2022), the Outstanding Reviewer Award of NeurIPS (2021), and the Outstanding Reviewer Award of ICLR (2021).

GET IN TOUCH

If you have questions about the submission/registration process, don’t hesitate to reach out.


