APS March Meeting 2022
Volume 67, Number 3
Monday–Friday, March 14–18, 2022; Chicago

Session D32: Data Science, Artificial Intelligence and Machine Learning I
3:00 PM–4:48 PM, Monday, March 14, 2022
Room: McCormick Place W-192B
Sponsoring Unit: GDS
Chair: Pavel Lukashev, University of Northern Iowa
Abstract: D32.00007 : Local Extreme Learning Machines: A Neural Network-Based Spectral Element-Like Method for Computational PDEs*
4:36 PM–4:48 PM
Presenter: Suchuan Dong (Purdue University)
Author: Suchuan Dong (Purdue University)
Existing deep neural network-based methods for solving boundary/initial-value problems suffer from several drawbacks (e.g. lack of convergence at a guaranteed rate, limited accuracy, extremely high computational cost) that make them numerically unattractive and computationally uncompetitive. In this talk we present a neural network-based method that largely overcomes these drawbacks. This method, termed local extreme learning machines (locELM), combines three ideas: extreme learning machines, domain decomposition, and local neural networks. The field solution on each sub-domain is represented by a local feed-forward neural network, and $C^k$ continuity conditions are imposed on the sub-domain boundaries. The hidden-layer coefficients of the local neural networks are pre-set to random values and fixed, and only the weight coefficients in the output layers are trainable. The overall neural network is trained by a linear or nonlinear least-squares computation, not by back-propagation (gradient-descent) type algorithms. The current method exhibits a clear sense of convergence with respect to the degrees of freedom in the neural network. Its numerical errors decrease exponentially or nearly exponentially as the number of degrees of freedom (number of training parameters, number of training data points) increases, which is reminiscent of the spectral convergence of traditional spectral or spectral element-type methods. LocELM far outperforms the physics-informed neural network (PINN) and the deep Galerkin method (DGM) in terms of accuracy and computational cost (network training time). Its computational performance (accuracy/cost) is on par with the classical finite element method (FEM), and it outperforms FEM as the problem size grows. These points will be demonstrated for a number of problems.
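The core mechanism the abstract describes — random, fixed hidden-layer weights with only the output-layer coefficients solved by linear least squares — can be illustrated on a single domain (no domain decomposition) with a simple 1D boundary-value problem. The sketch below is an illustrative ELM-style collocation solver, not the authors' locELM implementation; the neuron count, collocation-point count, and weight-initialization range are arbitrary choices for this toy example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: u''(x) = f(x) on [0, 1], u(0) = u(1) = 0.
# Manufactured exact solution u(x) = sin(pi x), so f(x) = -pi^2 sin(pi x).
f = lambda x: -np.pi**2 * np.sin(np.pi * x)
u_exact = lambda x: np.sin(np.pi * x)

M = 100   # hidden neurons (assumed count for this sketch)
N = 300   # collocation points

# ELM idea: hidden-layer weights/biases are drawn once and never trained.
w = rng.uniform(-5.0, 5.0, M)
b = rng.uniform(-5.0, 5.0, M)

x = np.linspace(0.0, 1.0, N)[:, None]   # collocation points, shape (N, 1)
t = np.tanh(x * w + b)                  # hidden-layer outputs, shape (N, M)
phi = t                                 # basis functions sigma(w x + b)
phi_xx = (-2.0 * t * (1.0 - t**2)) * w**2   # d^2/dx^2 tanh(w x + b)

# Linear least-squares system for the output weights beta:
# interior rows enforce the ODE, the last two rows enforce the BCs.
A = np.vstack([phi_xx, phi[[0]], phi[[-1]]])
rhs = np.concatenate([f(x).ravel(), [0.0, 0.0]])
beta, *_ = np.linalg.lstsq(A, rhs, rcond=None)

u_num = phi @ beta
err = np.max(np.abs(u_num - u_exact(x).ravel()))
print(f"max pointwise error: {err:.2e}")
```

Because the hidden layer is frozen, the training step is a single `lstsq` call rather than an iterative gradient-descent loop, which is the source of the large speedup over PINN/DGM-style training that the abstract reports; increasing `M` and `N` together is what drives the (near-)exponential error decay claimed for smooth solutions.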
*This work was partially supported by NSF (DMS-2012415)