Paper Accepted to ICLR 2026!

Jan 2026 ·
钟逸晟
· 1 min read

🚀 Excited to announce that our paper, “DUET: Distilled LLM Unlearning from an Efficiently Contextualized Teacher,” has been accepted to ICLR 2026!

DUET addresses the critical trade-off in machine unlearning between computational cost and model robustness. Using a novel Top-K Logit Alignment strategy, we distill the behavior of a prompt-steered teacher into a student model, achieving effective forgetting with extreme data efficiency: roughly 100 samples, versus the millions of tokens consumed by traditional retraining.
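The core idea of top-K logit alignment can be sketched as a distillation loss that matches the student to the teacher only on the teacher's K most likely tokens. A minimal illustrative version is below; the KL-over-top-K form, the `k` and `temperature` values, and the function name are assumptions for exposition, not the paper's exact formulation:

```python
import numpy as np

def topk_logit_alignment_loss(student_logits, teacher_logits, k=5, temperature=2.0):
    """Illustrative sketch: KL divergence between teacher and student
    distributions restricted to the teacher's top-k vocabulary slots."""
    # Indices of the teacher's k largest logits per position (assumption: per-token top-k).
    idx = np.argsort(teacher_logits, axis=-1)[..., -k:]
    t_top = np.take_along_axis(teacher_logits, idx, axis=-1)
    s_top = np.take_along_axis(student_logits, idx, axis=-1)

    def softmax(x):
        # Temperature-softened, numerically stable softmax over the k slots.
        x = x / temperature
        x = x - x.max(axis=-1, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=-1, keepdims=True)

    p = softmax(t_top)   # teacher distribution over its own top-k
    q = softmax(s_top)   # student distribution over the same slots
    kl = np.sum(p * (np.log(p) - np.log(q)), axis=-1)
    # temperature**2 scaling is the standard distillation convention.
    return float(kl.mean() * temperature**2)
```

Restricting the match to the top-K slots is what makes the distillation cheap: the student only needs to reproduce the prompt-steered teacher's behavior where its probability mass actually sits.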

Unlike lightweight in-context methods, DUET embeds unlearning directly into the model's parameters, making it highly resilient to reverse-engineering attacks.

Huge thanks to my advisor @Zhuangdi Zhu and collaborators @Zhengbang Yang and @Yumeng Yang; see you all in Rio de Janeiro! 🇧🇷

🔗 PDF