Events

7 May 2024, 10:00 to 11:00
Zeroth-Order Optimization Meets Human Feedback: Provable Learning via Ranking Oracles
Speaker: Tsung-Hui Chang (The Chinese University of Hong Kong)

Prof. Tsung-Hui Chang from The Chinese University of Hong Kong is attending the International Conference on Learning Representations (ICLR) in Vienna this week and kindly offered, on short notice, to visit us and give a talk.

Biography:

Tsung-Hui Chang is an Associate Professor and Assistant Dean (Education) at the School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, China, and at the Shenzhen Research Institute of Big Data. His research interests lie in optimization problems in data communications and machine learning. He is an Elected Member of the IEEE SPS SPCOM TC and the Founding Chair of the IEEE SPS ISAC TWG. He received the IEEE ComSoc Asia-Pacific Outstanding Young Researcher Award in 2015 and the IEEE SPS Best Paper Award twice, in 2018 and 2021. He is currently a Senior Area Editor of IEEE TSP and an Associate Editor of IEEE OJSP. He is a Fellow of the IEEE.


Abstract:

In this study, we delve into an emerging optimization challenge involving a black-box objective function that can only be gauged via a ranking oracle, a situation frequently encountered in real-world scenarios, especially when the function is evaluated by human judges. A prominent instance of such a situation is Reinforcement Learning with Human Feedback (RLHF), an approach recently employed to enhance the performance of Large Language Models (LLMs) using human feedback. We introduce ZO-RankSGD, an innovative zeroth-order optimization algorithm designed to tackle this optimization problem, accompanied by theoretical assurances. Our algorithm utilizes a novel rank-based random estimator to determine the descent direction and guarantees convergence to a stationary point. Last but not least, we demonstrate the effectiveness of ZO-RankSGD in a novel application: improving the quality of images generated by a diffusion generative model with human ranking feedback. In our experiments, we found that ZO-RankSGD can significantly enhance the detail of generated images with only a few rounds of human feedback. Overall, our work advances the field of zeroth-order optimization by addressing the problem of optimizing functions with only ranking feedback, and it offers a new and effective approach for aligning Artificial Intelligence (AI) with human intentions.
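
For readers unfamiliar with the setting, the following minimal Python sketch illustrates the general idea of optimizing with only a ranking oracle: perturb the current point in several random directions, have the oracle rank the perturbed candidates, and combine the directions according to that ranking to obtain a descent step. It is an assumption-laden illustration, not the ZO-RankSGD estimator presented in the talk; the weighting scheme, parameter values, and function names are invented for demonstration, and a toy quadratic objective stands in for a human judge.

# Illustrative sketch: zeroth-order descent from ranking feedback only.
# NOT the exact ZO-RankSGD estimator; the weighting scheme, step sizes,
# and names below are assumptions chosen to show the general idea.

import numpy as np


def ranking_oracle(points, f):
    """Return indices of `points` ordered from best (lowest f) to worst.

    In an RLHF-like setting this ordering would come from a human judge;
    here it is simulated by evaluating a known objective f.
    """
    return np.argsort([f(p) for p in points])


def rank_based_step(x, f, m=8, mu=0.1, lr=0.05, rng=None):
    """One update: perturb x along m random directions, ask the oracle to
    rank the perturbed candidates, and move toward the better-ranked ones."""
    rng = np.random.default_rng() if rng is None else rng
    dirs = rng.standard_normal((m, x.size))
    candidates = [x + mu * d for d in dirs]
    order = ranking_oracle(candidates, f)        # best ... worst
    # Assumed weighting: best-ranked direction gets +1, worst-ranked gets -1.
    weights = np.linspace(1.0, -1.0, m)
    step = np.zeros_like(x)
    for w, idx in zip(weights, order):
        step += w * dirs[idx]                    # pull toward good directions
    return x + lr * step / m


if __name__ == "__main__":
    f = lambda z: float(np.sum(z ** 2))          # toy smooth objective
    x = np.ones(5)
    for _ in range(300):
        x = rank_based_step(x, f)
    print("final objective value:", f(x))        # far below the initial value of 5.0

In the RLHF-style application described in the abstract, the simulated ranking_oracle would be replaced by human comparisons of, for example, generated images, while the update step itself stays unchanged.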