Langevin field-theoretic simulation (L-FTS) is a promising tool in polymer field theory that can account for the compositional fluctuation effect, which is neglected in self-consistent field theory (SCFT). However, L-FTS is computationally expensive, and it may take more than a week to accurately calculate ensemble averages of thermodynamic quantities. In our previous study, we introduced a deep neural network (DNN) that estimates the saddle point of the pressure field to reduce the number of subsequent Anderson mixing (AM) iterations. Herein, we propose a novel DNN that can be applied successively to determine the saddle point without using conventional field-update algorithms. Major deep learning (DL) models for semantic segmentation in computer vision are adopted to construct the optimal DNN architecture. Our model, which utilizes atrous convolutions in parallel, is accurate and computationally efficient; it is robust to changes in simulation parameters and can consequently be reused after a single training. We demonstrate that our DNN achieves a speedup of a factor of 6 or more compared with the AM method without affecting accuracy. Open-source code for our deep Langevin FTS (DL-FTS) enables easy and rapid Python scripting of SCFT and L-FTS combined with CPU or GPU parallelization and DL.
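The abstract mentions an architecture built from atrous (dilated) convolutions applied in parallel, in the spirit of semantic-segmentation models. As a minimal illustration of the idea, and not the authors' actual network, the sketch below implements a 1-D dilated convolution in NumPy and combines several branches with different dilation rates, so that each branch covers a different receptive field over the same input; the kernel and combination rule are illustrative choices.

```python
import numpy as np

def atrous_conv1d(x, kernel, dilation):
    """1-D dilated (atrous) convolution with zero padding and 'same' output size."""
    k = len(kernel)
    pad = dilation * (k // 2)          # pad so the output aligns with the input
    xp = np.pad(x, pad)
    out = np.zeros(len(x), dtype=float)
    for i in range(len(x)):
        for j in range(k):
            # samples are taken `dilation` points apart, enlarging the receptive field
            out[i] += kernel[j] * xp[i + j * dilation]
    return out

# Parallel atrous branches: same small kernel, different dilation rates.
x = np.sin(np.linspace(0.0, 2.0 * np.pi, 32))
kernel = np.array([0.25, 0.5, 0.25])   # illustrative smoothing kernel (sums to 1)
branches = [atrous_conv1d(x, kernel, d) for d in (1, 2, 4)]
combined = np.mean(branches, axis=0)   # simple fusion of the parallel branches
```

Because each branch reuses the same kernel size, the multi-scale context comes almost for free, which is the usual motivation for parallel atrous convolutions in segmentation networks.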
A research team affiliated with UNIST has proposed a novel deep neural network (DNN) that can be applied successively to determine the saddle point without using conventional field-update algorithms. Their findings have been made freely available to the public and are expected to contribute to developments in polymer field-based simulations, according to the research team. This breakthrough has been led by Professor Jaeup U. Kim and his research team in the Department of Physics at UNIST.
In this work, the research team proposed a new deep learning (DL) approach that predicts the difference between the input pressure field and the saddle-point pressure field. According to the research team, their neural network (NN) is designed to retain its predictive power at various incompressibility error levels, making it less sensitive to the Langevin step interval. “[The new] approach completely replaces the conventional field-update algorithms with a deep NN, so that the saddle point search can be completed without conventional relaxation methods, such as AM,” said the research team. “Consequently, the performance is greatly enhanced because our new approach utilizes the predictive power of the NN several times during each Langevin step.”
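The passage above describes applying the network successively: at each pass the NN predicts the remaining difference ΔW+ between the current field and the saddle point, the field is updated, and the process repeats until the incompressibility error is small. The toy loop below sketches that iteration scheme only; the `dnn_predict` function is a hypothetical stand-in (a damped step toward a known target) for the trained network, and the error measure is a simple proxy, not the paper's actual diagnostics.

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.normal(size=64)   # stands in for the saddle-point field W+*(r)
w = np.zeros(64)               # initial guess for the pressure field

def dnn_predict(w):
    # Placeholder for the trained DNN: assumed to map the current field to an
    # estimate of dW+ = W+* - W+.  A damped exact step mimics an imperfect but
    # contractive prediction (each pass removes ~80% of the remaining error).
    return 0.8 * (target - w)

tol = 1e-6
err = np.inf
for step in range(100):
    w = w + dnn_predict(w)              # successive application of the network
    err = np.max(np.abs(target - w))    # proxy for the incompressibility error
    if err < tol:
        break
```

Since each pass is a single forward evaluation rather than a history-based mixing step, convergence in a handful of passes is what makes replacing AM attractive.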
Figure 1. Comparison between Anderson mixing (AM) and deep learning (DL). Shown above are two-dimensional slices of the ground truth ΔW+(r) and the predictions of DL and AM, respectively, where the ground truth denotes the ideal output for the ML model.
The findings of this study have been published in the August 2022 issue of Macromolecules. This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Science and ICT (MSIT), and by the Sejong Science Fellowship.
Daeseong Yong and Jaeup U. Kim, “Accelerating Langevin Field-Theoretic Simulation of Polymers with Deep Learning,” Macromolecules, (2022).