Speaker: Zhiping Mao (Xiamen University)
Time: June 29, 2023, 10:00--
Venue: Room LA106, Science Building
Abstract: The optimization algorithm plays an important role in deep learning: it significantly affects the stability and efficiency of the training process and, consequently, the accuracy of the neural network approximation. A suitable (initial) learning rate is crucial for the optimization algorithm in deep learning. However, a small learning rate is usually needed to guarantee convergence, which results in a slow training process. In this work, we develop efficient and energy-stable optimization methods for function approximation problems in deep learning. In particular, we consider the gradient flows arising from deep learning from a continuous point of view and employ the particle method (PM) and the smoothed particle method (SPM) for the spatial discretization, while for the time discretization we adopt SAV-based schemes combined with the adaptive strategy used in the Adam algorithm. To illustrate the effectiveness of the proposed methods, we present a number of numerical tests demonstrating that the SAV-based schemes significantly improve the efficiency and stability of training as well as the accuracy of the neural network approximation. We also show that the SPM approach gives slightly better accuracy than the PM approach. We further demonstrate the advantage of using an adaptive learning rate when dealing with more complex problems.
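To give a flavor of the SAV (scalar auxiliary variable) idea mentioned in the abstract, the sketch below applies a standard first-order SAV scheme to the gradient flow of a toy quadratic loss. This is a generic illustration, not the speaker's exact method: the function names (`sav_descent`), the constant `C`, and the toy loss are all assumptions. The auxiliary variable `r` approximates `sqrt(E + C)`; because its implicit update is linear, it can be solved in closed form, and `r` decreases monotonically, which is the sense in which the scheme is energy stable regardless of the step size `dt`.

```python
import numpy as np

def loss(theta):
    # Toy quadratic loss E(theta) = 0.5 * |theta|^2 (illustrative only).
    return 0.5 * np.sum(theta ** 2)

def grad(theta):
    # Gradient of the toy loss.
    return theta

def sav_descent(theta0, dt=0.5, C=1.0, steps=100):
    """First-order SAV scheme for the gradient flow theta' = -grad E(theta).

    C > -min E keeps the square root well defined (here E >= 0, so C = 1 works).
    """
    theta = theta0.astype(float)
    r = np.sqrt(loss(theta) + C)          # auxiliary variable r ~ sqrt(E + C)
    for _ in range(steps):
        b = grad(theta) / np.sqrt(loss(theta) + C)
        # Implicit update r^{n+1} = r^n - (dt/2) * r^{n+1} * |b|^2 is linear,
        # so it has the closed-form solution below; note r can only decrease.
        r = r / (1.0 + 0.5 * dt * np.dot(b, b))
        # Parameter update scaled by the auxiliary variable.
        theta = theta - dt * r * b
    return theta

theta = sav_descent(np.ones(3))
```

On this toy problem the iteration contracts toward the minimizer at the origin while the auxiliary variable decays toward `sqrt(C)`, mirroring the continuous relation `r = sqrt(E + C)`.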
Bio: Zhiping Mao is a professor at the School of Mathematical Sciences, Xiamen University. He received his bachelor's degree from 太阳成集团 in 2009 and his Ph.D. in computational mathematics from Xiamen University in 2015, and is a recipient of the National High-Level Young Talents program. From October 2015 to September 2020, he was a postdoctoral researcher in the Division of Applied Mathematics at Brown University. His research focuses on spectral methods and machine learning, and he has published more than 30 papers in leading international journals such as SIREV, JCP, SISC, SINUM, and CMAME.
Host: Kun Wang
All faculty and students are welcome to attend!