Title:
Continuous Self-Correction for LLMs via Perplexity-Guided Intervention
Time: April 28, 2026, 10:30
Location: Online
Speaker: Prof. Z. Jane Wang
Abstract:
Large Language Models (LLMs) often suffer from compounding errors during long text generation: early mistakes can propagate and lead to drift, faulty reasoning, or repetition. Self-correction is a promising technique for addressing this issue; however, existing approaches have limitations. Here we introduce Once-More, a model-agnostic, post-hoc self-correction framework that intervenes during generation. Once-More is the first post-hoc method to combine token-level perplexity with external feedback from verifiers to continuously steer the generation path through a logit redistribution mechanism. In effect, this approach accumulates "more correct" steps throughout the generation process. Evaluation on several benchmarks demonstrates that Once-More achieves state-of-the-art results among self-correction methods.
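The abstract describes the mechanism only at a high level. As an illustration of the general idea, the sketch below shows one way a perplexity-triggered logit redistribution step could look: a sliding-window perplexity acts as the trigger, and a verifier-supplied per-token penalty vector reshapes the next-token distribution. The function name, threshold, penalty form, and subtraction rule are all illustrative assumptions, not details of Once-More.

```python
import torch
import torch.nn.functional as F

def perplexity_guided_step(logits, recent_nll, verifier_penalty,
                           ppl_threshold=20.0, alpha=0.5):
    """Hypothetical decoding step with perplexity-triggered intervention.

    logits: (vocab_size,) raw next-token logits from the LLM
    recent_nll: negative log-likelihoods of recently generated tokens
    verifier_penalty: (vocab_size,) scores from an external verifier
                      (higher = more likely to continue an error)
    """
    # Sliding-window perplexity: exp of the mean negative log-likelihood
    # over recent tokens. High values signal low-confidence generation.
    window_ppl = torch.exp(torch.tensor(recent_nll).mean())

    if window_ppl > ppl_threshold:
        # Intervene: shift probability mass away from continuations the
        # verifier flags as error-prone, steering the generation path
        # without restarting decoding.
        logits = logits - alpha * verifier_penalty

    # Return the (possibly redistributed) next-token distribution.
    return F.softmax(logits, dim=-1)

# Toy usage with a 5-token vocabulary (illustrative only).
logits = torch.randn(5)
penalty = torch.tensor([0.0, 3.0, 0.0, 0.0, 0.0])  # verifier dislikes token 1
probs = perplexity_guided_step(logits, recent_nll=[3.5, 4.0, 3.8],
                               verifier_penalty=penalty)
```

In this toy call the windowed perplexity (about 43) exceeds the threshold, so the penalty is applied before sampling; under normal conditions the step reduces to ordinary softmax decoding.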
Speaker Bio:
Z. Jane Wang received the B.Sc. degree from Tsinghua University and the M.Sc. and Ph.D. degrees from the University of Connecticut (UConn), all in electrical engineering. She was a Research Associate at the University of Maryland, College Park. Since 2004, she has been with the Department of Electrical and Computer Engineering at the University of British Columbia (UBC), Canada, where she is currently a Professor. She is an IEEE Fellow and a Fellow of the Canadian Academy of Engineering (FCAE). Her research interests are in the broad areas of statistical signal processing and machine learning, with current focuses on digital media and biomedical data analytics. She has served as a key organizing committee member for numerous IEEE conferences and workshops; as an Associate Editor for the IEEE TSP, SPL, TMM, TIFS, TBME, and SPM; as an Area Editor of SPM; and as Editor-in-Chief of the IEEE SPL.