Aengus graduated from the University of Bath in 2024 with an MSci in Mathematics and Physics. His MSci project, titled “A Lagrangian mechanics oriented approach to neural networks”, focused on creating loss functions to train neural networks to identify physical equations of motion from observational data. His main research interests lie in PDEs and Theoretical Physics, with a keen focus on General Relativity.
Outside of mathematics and physics, Aengus enjoys baking, reading, DnD, board games and spending time outdoors.
Research project title: Bounding optimisation error in scientific machine learning methods for solving differential equations
Supervisor(s): Chris Budd and Michael Murray
Project description: In recent years, scientific machine learning (SciML) methods such as Physics-Informed Neural Networks (PINNs) and the Deep Ritz method (DRM) have emerged as a promising alternative to traditional methods like the finite element method (FEM), the finite difference method (FDM) and collocation methods (CM). Unlike classical approaches, these SciML methods are presented in the literature as mesh-free, meaning they do not require domain-specific meshes or element decompositions to represent solutions. This flexibility is particularly advantageous for high-dimensional problems or irregular domains, where methods like FEM can become computationally expensive or break down entirely. It has been shown that, for higher-dimensional problems, PINNs often outperform FEM in both computational efficiency and accuracy.
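To make the mesh-free point concrete, the sketch below trains a tiny PINN on a one-dimensional toy problem by minimising the squared residual of the equation at randomly resampled collocation points, with no mesh or element decomposition involved. It is a minimal illustration assuming PyTorch; the equation, architecture and hyperparameters are arbitrary demonstration choices, not taken from the project.

    import torch

    # Toy PINN for the ODE u'(x) = cos(x), u(0) = 0 on [0, 2*pi]
    # (exact solution u = sin(x)). Purely illustrative.
    torch.manual_seed(0)
    net = torch.nn.Sequential(
        torch.nn.Linear(1, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 1),
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    for step in range(5000):
        # Collocation points resampled every step: no mesh, no elements.
        x = (2 * torch.pi * torch.rand(128, 1)).requires_grad_(True)
        u = net(x)
        # du/dx via automatic differentiation.
        du = torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u),
                                 create_graph=True)[0]
        residual = du - torch.cos(x)      # equation residual at the samples
        bc = net(torch.zeros(1, 1))       # boundary condition u(0) = 0
        loss = residual.pow(2).mean() + bc.pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    print(f"final training loss: {loss.item():.2e}")

Note that the quantity being minimised is the residual loss, not the distance to the true solution; the gap between a small training loss and an accurate solution is exactly where the optimisation issues discussed next enter.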
However, despite their promise, these methods still lack a robust theoretical foundation for understanding when and why they succeed. Existing convergence results typically assume that the neural networks are perfectly trained; that is, that the optimisation process has found a global minimiser of the loss. Classical methods come with theorems guaranteeing convergence to the true solution, but for SciML-based approaches the optimisation error, the gap between the network the training procedure actually finds and the best network the architecture could represent, remains a major barrier to establishing similar guarantees except in the simplest settings. In practice, training these models is difficult: the loss landscapes are highly non-convex, so the optimisation error can dominate the overall error. Currently, there are no general results that account for these real-world limitations, especially for ill-conditioned or nonlinear PDEs. My research aims to understand the causes of this optimisation error theoretically, and to develop convergence theorems that explicitly incorporate it.
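Schematically, analyses of these methods split the total error of a trained network into three parts; the notation below is illustrative rather than taken from any specific result:

    || u_theta - u* ||  <=  approximation error + generalisation error + optimisation error,

where u* is the true solution of the differential equation and u_theta is the trained network. The first term measures how well the architecture can represent u* at all, the second the effect of sampling only finitely many collocation points, and the third the gap left by imperfect training. It is this last term that the project aims to bound.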