Title: A duality framework for analyzing random feature and two-layer neural networks

Speaker: 吴磊

Date: June 26, 2025

Time: 14:00–15:00

Venue: 光华楼东主楼2001 (Room 2001, East Main Tower, Guanghua Building)

Abstract: In this talk, we consider the problem of learning with random feature models (RFMs), two-layer neural networks, and kernel methods. Leveraging tools from information-based complexity (IBC), we establish a dual equivalence between approximation and estimation, and then apply it to study the learning of functions in these function spaces. To showcase the efficacy of our duality framework, we delve into two important but under-explored problems: random feature learning beyond the kernel regime and the $L^\infty$ learning of reproducing kernel Hilbert spaces. To establish the aforementioned duality, we introduce a type of IBC, termed I-complexity, to measure the complexity of a function class. Notably, I-complexity offers a tight characterization of learning in noiseless settings, yields lower bounds comparable to Le Cam's in noisy settings, and is versatile in deriving upper bounds. Our duality framework holds potential for broad applications in learning analysis across more scenarios.

Biography: 吴磊 is an assistant professor at the School of Mathematical Sciences and the Center for Machine Learning Research, Peking University. His research focuses on the mathematical foundations of deep learning. He received his B.S. in Mathematics and Applied Mathematics from Nankai University in 2012 and his Ph.D. in Computational Mathematics from Peking University in 2018. From November 2018 to October 2021, he held postdoctoral positions at Princeton University and then at the University of Pennsylvania.

Poster: 吴磊 学术报告.jpg