Reducing Autocorrelation Times in Lattice Simulations with Generative Adversarial Networks
Nov 8, 2018 · 9 pages
Published in:
- Mach.Learn.Sci.Tech. 1 (2020) 045011
e-Print:
- 1811.03533 [hep-lat]
DOI:
- 10.1088/2632-2153/abae73 (publication)
Abstract (arXiv):
Short autocorrelation times are essential for a reliable error assessment in Monte Carlo simulations of lattice systems. In many interesting scenarios, the decay of autocorrelations in the Markov chain is prohibitively slow. Generative samplers can provide statistically independent field configurations, thereby potentially ameliorating these issues. In this work, the applicability of neural samplers to this problem is investigated. Specifically, we work with a generative adversarial network (GAN). We propose to address difficulties regarding its statistical exactness through the implementation of an overrelaxation step, by searching the latent space of the trained generator network. This procedure can be incorporated into a standard Monte Carlo algorithm, which then permits a sensible assessment of ergodicity and balance based on consistency checks. Numerical results for real, scalar φ⁴-theory in two dimensions are presented. We achieve a significant reduction of autocorrelations while accurately reproducing the correct statistics. We discuss possible improvements to the approach as well as potential solutions to persisting issues.
Note:
- 9 pages, 9 figures
Keywords:
- autocorrelation
- generative adversarial networks
- hybrid Monte Carlo
- overrelaxation
- numerical calculations: Monte Carlo
- Monte Carlo: hybrid
- phi**n model: 4
- dimension: 2
- network
- lattice field theory
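The abstract outlines the core idea: a trained GAN generator proposes field configurations, and an overrelaxation-style update is obtained by searching the generator's latent space for a configuration of (nearly) equal action, which is then folded into a standard Monte Carlo accept/reject step. The following is only a minimal illustrative sketch of that idea, not the authors' implementation: the "generator" is a stand-in linear map rather than a trained network, and all names and parameter values (L, LATENT_DIM, KAPPA, LAM, latent_overrelaxation, ...) are hypothetical.

```python
# Illustrative sketch of a latent-space "overrelaxation" proposal with a
# generative sampler. The generator below is a fixed random linear map,
# NOT a trained GAN; all parameters are invented for demonstration.
import numpy as np

rng = np.random.default_rng(0)

L = 8                      # lattice extent (L x L), example value
LATENT_DIM = 16            # latent dimension of the stand-in generator
KAPPA, LAM = 0.21, 0.022   # hopping parameter and quartic coupling, example values

# Stand-in "generator": a fixed linear map from latent space to field space.
W = rng.normal(scale=1.0 / np.sqrt(LATENT_DIM), size=(L * L, LATENT_DIM))

def generate(z):
    """Map a latent vector to a 2D field configuration."""
    return (W @ z).reshape(L, L)

def action(phi):
    """2D scalar phi^4 lattice action in the hopping-parameter form."""
    kinetic = -2.0 * KAPPA * sum(
        np.sum(phi * np.roll(phi, -1, axis=mu)) for mu in (0, 1)
    )
    potential = np.sum(phi**2 + LAM * (phi**2 - 1.0) ** 2)
    return kinetic + potential

def latent_overrelaxation(z, n_trials=200, scale=1.0):
    """Search latent space for a distant point whose action matches the current one.

    This mimics an overrelaxation step: the proposal changes the field
    substantially while keeping the action approximately unchanged.
    """
    s0 = action(generate(z))
    candidates = z + scale * rng.normal(size=(n_trials, LATENT_DIM))
    gaps = np.array([abs(action(generate(c)) - s0) for c in candidates])
    return candidates[np.argmin(gaps)]

# Toy chain: alternate latent-space overrelaxation proposals with a
# schematic Metropolis accept/reject on the induced field configurations.
z = rng.normal(size=LATENT_DIM)
phi = generate(z)
s = action(phi)
accepted = 0
for step in range(100):
    z_new = latent_overrelaxation(z)
    phi_new = generate(z_new)
    s_new = action(phi_new)
    # Schematic accept/reject on the action difference only; it neglects the
    # proposal density of the generator and is therefore not exact by itself.
    if rng.random() < np.exp(min(0.0, s - s_new)):
        z, phi, s = z_new, phi_new, s_new
        accepted += 1
print(f"acceptance rate: {accepted / 100:.2f}")
```

The accept/reject factor above is only a placeholder: in the paper, statistical exactness is addressed through the overrelaxation construction itself together with consistency checks on ergodicity and balance, rather than by this bare Metropolis step.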