Paper Accepted to ICLR 2026!
🎉 Excited to announce that our paper, “DUET: Distilled LLM Unlearning from an Efficiently Contextualized Teacher,” has been accepted to ICLR 2026!
DUET addresses the critical trade-off in machine unlearning between computational cost and model robustness. Using a novel Top-K Logit Alignment strategy, we distill the behavior of a prompt-steered teacher into a student model, achieving effective forgetting with extreme data efficiency: as few as 100 samples, compared to the millions of tokens required by traditional retraining.
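For the curious, here is a minimal sketch of what top-K logit alignment can look like in PyTorch. The function name, tensor shapes, and hyperparameters (`k`, `temperature`) are illustrative assumptions, not the paper's actual implementation; the idea is simply to align the student's output distribution with the prompt-steered teacher's over the teacher's top-K vocabulary entries.

```python
# Hypothetical sketch of a top-K logit-alignment distillation loss.
# Not the paper's code: shapes, names, and defaults are assumptions.
import torch
import torch.nn.functional as F

def topk_alignment_loss(student_logits: torch.Tensor,
                        teacher_logits: torch.Tensor,
                        k: int = 20,
                        temperature: float = 1.0) -> torch.Tensor:
    """KL divergence restricted to the teacher's top-K vocabulary entries.

    student_logits, teacher_logits: (batch, seq_len, vocab_size)
    """
    # Values and indices of the K most probable tokens under the teacher.
    topk_vals, topk_idx = teacher_logits.topk(k, dim=-1)
    # Gather the student's logits at those same vocabulary positions.
    student_topk = student_logits.gather(-1, topk_idx)
    # Match the two truncated distributions with a temperature-softened KL term.
    t_probs = F.softmax(topk_vals / temperature, dim=-1)
    s_logprobs = F.log_softmax(student_topk / temperature, dim=-1)
    return F.kl_div(s_logprobs, t_probs, reduction="batchmean") * temperature**2
```

In this framing, the teacher is the same base model conditioned on an unlearning prompt, so the student absorbs the steered behavior into its weights without ever needing the prompt at inference time.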
Unlike lightweight in-context methods, DUET embeds unlearning directly into the model's parameters, making it highly resilient to reverse-engineering attacks.
Huge thanks to my advisor @Zhuangdi Zhu and my collaborators @Zhengbang Yang and @Yumeng Yang; see you all in Rio de Janeiro! 🇧🇷
📄 PDF