INT Program INT-18-2b
Advances in Monte Carlo Techniques for Many-Body Quantum Systems
July 30 - September 7, 2018
MOTIVATION and CONTEXT
Quantum Monte Carlo (QMC) techniques have become essential tools in a myriad of fields
including computational condensed matter physics, high-energy physics, lattice QCD, quantum chemistry, and nuclear theory.
Although the physics that is being extracted is diverse in nature, the methodologies, algorithms and
fundamental obstacles are common across these fields. In each case, there exists an effective
Hamiltonian (Lagrangian) whose ground state (partition function) properties are being
computed.
Unfortunately, the computational complexity of current exact algorithmic approaches scales
exponentially with system size. In the context of stochastic algorithms (i.e., projector quantum
Monte Carlo), this problem arises from the decay of the signal-to-noise ratio in the
computation of high-dimensional integrals whose integrand alternates in sign. This is
typically called the fermion sign problem. This exponential scaling, though, arises in various
ways in different algorithms. In exact diagonalization, the wave function requires exponential
memory to store; in DMRG, the bond dimension grows with the entanglement (exponentially with
the width of the system); and in diagrammatic approaches, the system size is effectively infinite,
but the cost scales exponentially with diagram order.
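To make the signal-to-noise collapse concrete, the following toy estimate (not drawn from any of the production codes discussed here; the oscillation strength k is only a stand-in for system size or projection time) evaluates a Gaussian average of an alternating-sign integrand and shows how the relative statistical error explodes as the exact answer becomes exponentially small:

# Toy illustration of how an alternating-sign integrand degrades the
# signal-to-noise ratio of a Monte Carlo estimate. We sample x ~ N(0, 1)
# and estimate I = E[cos(k x)] = exp(-k^2 / 2). As k grows the integrand
# oscillates faster, cancellations become severe, and the relative error
# blows up for a fixed number of samples.

import numpy as np

rng = np.random.default_rng(0)
n_samples = 100_000

for k in (1.0, 3.0, 5.0):
    x = rng.standard_normal(n_samples)
    samples = np.cos(k * x)                 # alternating-sign "weights"
    estimate = samples.mean()
    exact = np.exp(-k**2 / 2)
    stderr = samples.std(ddof=1) / np.sqrt(n_samples)
    print(f"k={k}: estimate={estimate:+.4f}  exact={exact:.4e}  "
          f"relative error ~ {stderr / exact:.1e}")

The noise stays of order one per sample while the signal shrinks exponentially, which is the essence of the sign problem sketched above.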
Historically, this problem has been dealt with by limiting calculations to small system sizes (or
zero baryon density), or by resorting to uncontrolled approximations such as using the variational
principle to 'guess' a good ansatz for the solution or applying an artificial constraint on the
space that a random walk can explore.
Within the last five years, there has been an algorithmic paradigm shift in approaching strongly
correlated systems. Various new systematically exact algorithms are being developed which
still scale exponentially but do so with reduced computational complexity (i.e., a smaller exponent
or an exponential cost in a different parameter). Examples include full configuration interaction
quantum Monte Carlo (FCIQMC), which is exponential in system size but not in temperature;
diagrammatic Monte Carlo, which is exponential in diagram order but not in system size; and
two-dimensional tensor networks such as the projected entangled pair state (PEPS),
a variational wave function that compactly represents many ground states but is
exponentially costly to contract exactly. In addition, various hybrid algorithmic approaches have
been developed to take simultaneous advantage of the features of multiple algorithms, such as
semi-stochastic QMC, partial-node FCIQMC, and QMC+MPS approaches.
It should be noted that this change in perspective has been driven by various forces, including
significantly improved computational resources, which make low-scaling exponential algorithms
potentially useful.
This workshop thus exists within this context of significant new algorithmic developments and possibilities in the area of quantum Monte Carlo techniques, broadly
defined. In particular, we believe that there is significant room for new algorithms, unique hybrid
approaches, and order-of-magnitude improvements in newly developed techniques.
As an example of the confluence of these ideas, we consider the specific case of nuclear
physics. Recent progress in the field reflects a better understanding of this general
formulation of Monte Carlo methods for quantum systems. For example, in the field of nuclear structure
calculations, it has become possible in the last few years to study properties of nuclei and nuclear
matter using nuclear interactions derived within chiral effective field theory. This was made possible
both by adapting EFT Hamiltonians for use with continuum QMC methods and by developing
novel algorithms that work in momentum and/or Fock space, such as configuration
interaction Monte Carlo (CIMC), which permit solutions of the nuclear many-body problem with
non-local Hamiltonians.
These advances are extremely important, as it is now possible to study
and quote theoretical uncertainties obtained by using different models. In addition, recent
calculations have shown that QMC methods can also be used to compute response functions and
matrix elements relevant, for example, to electron and neutrino scattering and to electroweak
transitions; they will also be used to calculate, ab initio, important quantities such as the matrix elements
governing double-beta decay and neutrino rates in neutron stars and supernovae.
Historically, cross-fertilization of ideas among communities dealing with different physical
problems has proven particularly fruitful. Consider as examples the transition of the CORE
approach from its origin in lattice QCD to the condensed matter community; the utilization of
coupled cluster theory in nuclear theory from its origin in quantum chemistry and, more recently,
its variational application to condensed matter systems; and the application in quantum
chemistry, as an important component of FCIQMC, of annihilation techniques originally devised
in the condensed matter community.
We note five key themes in the purview of this workshop:
A deeper understanding of the fermion sign problem and of the techniques for dealing with its exponential computational complexity.
Standard lore holds that the computational complexity of working with fermions is exponential in system size and in beta (imaginary-time projection or inverse temperature). Although it seems that some "cost" must be exponential, there are already many examples where this exponential cost is moved around or removed from some aspect of the calculation: in FCIQMC there is no cost in beta (imaginary time), and in diagrammatic Monte Carlo there is no cost in system size but instead in diagram order. Despite these examples, little understanding has yet been developed of the full phase space of such trade-offs. Succinctly put, we can ask: what can one afford to be exponential in? Bringing together practitioners of the various methodologies will be an important first step in developing this understanding. Secondly, there is still much to understand about the difficulty of solving fermionic problems. The perspective suggested by the well-known paper of Troyer and Wiese (PRL 94, 170201 (2005)), showing that the sign problem is generically NP-hard, has given way to a more nuanced point of view driven by the field of quantum information. We know that the ground states we are interested in (those accessible to the universe) are in bounded-error quantum polynomial time (BQP) and almost certainly not NP-hard. Moreover, we now know that typical ground states are very special, generically having atypically low entanglement, which suggests they should be describable by a variational ansatz. This new understanding hints that the standard and general "no-go" theorems may not preclude the development of algorithms for large classes of problems we care about. This workshop will explore these questions.
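A textbook estimate (not specific to any of the methods above, and stated here only as an orienting formula) makes the trade-off explicit: the relative error of a sign-weighted estimator is controlled by the average sign, which generically decays exponentially in both particle number N and inverse temperature (or projection time) beta,

\[
\langle s \rangle \;=\; \frac{Z}{Z_{|\cdot|}} \;\sim\; e^{-\beta N \Delta f},
\qquad
\frac{\delta O}{O} \;\sim\; \frac{1}{\langle s \rangle \sqrt{M}} \;\sim\; \frac{e^{\beta N \Delta f}}{\sqrt{M}},
\]

where M is the number of Monte Carlo samples and \(\Delta f\) is the free-energy density difference between the original and the sign-free reference system. Algorithms such as FCIQMC or diagrammatic Monte Carlo can be read as changing which parameter (N, beta, or diagram order) appears in the exponent.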
Code bases for new algorithms; improving new algorithms
As described earlier, quantum Monte Carlo algorithms have evolved quickly in recent years. One step toward improving these algorithms is to better disseminate both the understanding of these techniques and prototype codes implementing them. Unfortunately, the fact that novel algorithms do not propagate quickly throughout the research community limits in part the potential for further improvements. In the workshop we will both contribute to a deeper understanding and improvement of new algorithms and discuss how to ensure that a wider set of researchers is able to experiment with the newest QMC ideas.
Improving systematic comparisons between new approaches and more standard ones.
The proliferation of new algorithmic approaches has been very healthy for the field, leading to new, potentially exact approaches. Nonetheless, these new techniques still generically scale exponentially and in many cases may have trouble reaching, for a given system size, the accuracy already achieved by more standard methods. Unfortunately, there has been little systematic study of this trade-off, and occasional hyperbolic claims about the efficacy of new approaches make it difficult to arbitrate these issues. Bringing together various practitioners in one place provides a venue for settling these questions as well as for exploring hybrid approaches that may combine new ideas with older methods.
Dynamics and other observables
While historically the ground state energy has been the key quantity reported from quantum Monte Carlo, there has recently been a significant push to compute quantities that align more closely with experiments. While for static quantities this is straightforward, for dynamical quantities such as the density of states and for gaps things are more subtle: in many cases some form of analytic continuation or Bayesian reconstruction is required. These problems exist throughout the quantum Monte Carlo community (LQCD extracting gaps, DMFT reporting densities of states, PQMC analytically continuing to obtain various dynamic quantities, etc.), and different partial solutions have been found. As an example, the problem of estimating the ground state energy from the signal-decay transient in an unconstrained calculation is quite similar to the problem of determining masses in an unquenched LQCD calculation. All the issues in the statistical analysis of these transients, in particular how one extrapolates to the asymptotic value in the presence of strong statistical noise, are certainly of common interest. Similarly, the evaluation of response functions in particle systems is strongly related to the computation of correlators in lattice problems. Moreover, additional attention within the QMC community needs to be devoted to better exploiting the features of projection algorithms that allow the computation of integral transforms of response functions, in particular by devising better kernels in coordinate space and in general Fock space, and by devising proper and efficient inversion techniques. One goal of the workshop will be the development of best practices for analytic continuation as well as a better understanding of how to access dynamics by other means.
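As a minimal illustration of the transient-analysis problem described above (the two-exponential toy model, noise level, and fit window below are assumptions made for illustration, not a production analysis), one can generate a noisy imaginary-time decay and extract the lowest "energy" by fitting its late-time tail:

# Minimal sketch: extract a ground-state energy from a noisy imaginary-time
# transient C(tau) ~ A0*exp(-E0*tau) + A1*exp(-E1*tau), as one would for a
# projector-QMC energy decay or an LQCD two-point correlator.
# The model, parameters, and fit window are illustrative assumptions.

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

E0, E1, A0, A1 = 1.0, 2.5, 1.0, 0.6       # "true" toy parameters
tau = np.linspace(0.0, 6.0, 61)
clean = A0 * np.exp(-E0 * tau) + A1 * np.exp(-E1 * tau)
noisy = clean * (1.0 + 0.02 * rng.standard_normal(tau.size))  # 2% noise

# Effective energy E_eff(tau) = -d log C / d tau, via finite differences.
dtau = tau[1] - tau[0]
e_eff = -np.diff(np.log(noisy)) / dtau

# Fit a single exponential only in the late-time window where the excited
# state has decayed; choosing this window amid noise is the delicate step.
window = tau >= 3.0
popt, pcov = curve_fit(lambda t, a, e: a * np.exp(-e * t),
                       tau[window], noisy[window], p0=(1.0, 1.0))
print(f"late-time effective energy ~ {e_eff[-10:].mean():.3f}")
print(f"fitted E0 = {popt[1]:.3f} +/- {np.sqrt(pcov[1, 1]):.3f} (true {E0})")

In real projector-QMC or LQCD data, correlated noise and the choice of fit window dominate the error budget, which is precisely where shared best practices across communities would help.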
Bridging the gap between new ideas and the associated test codes and full-production workflows.
In particular, there are important questions about restructuring algorithms and codes to be well suited for parallelization at the exascale, developing common libraries and best practices, and better understanding the systematic and statistical errors.
Compute cores have stopped getting faster while the number of cores has exploded
Part of the success of QMC stems from the increased availability of high-performance parallel computing resources. Until recently one could obtain a significant computational speedup simply by waiting for the clock speed of each individual node to increase. Clock speeds are no longer increasing, challenging researchers to rethink traditional algorithmic schemes, to gain efficiency through novel parallelization schemes, and to make full use of modern supercomputers. For example, almost no codes are prepared for the widespread use of machines full of GPUs, nor have they embraced the degree of asynchrony necessary to reach the exascale. It may make sense to leverage expertise from many sub-fields to accomplish this. In particular, we would ask participants to share methods, new libraries, and ideas for exploiting these resources in order to improve scalability and be ready for the coming generation of exascale facilities.
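As a deliberately simple sketch of the lowest rung of this ladder (the function and numbers below are illustrative; production codes rely on MPI, GPU kernels, and increasingly asynchronous scheduling), independent walkers or random-number streams can be farmed out across cores and their averages combined:

# Minimal sketch of the simplest way to use many cores for QMC-style work:
# run independent walkers/streams in parallel and combine their averages.
# Illustrative only; real codes need far more sophisticated load balancing.

import numpy as np
from concurrent.futures import ProcessPoolExecutor

def run_walker(seed, n_steps=100_000):
    """One independent pseudo-walker returning its running average."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_steps)
    return np.mean(np.cos(x))                # stand-in for a local-energy average

if __name__ == "__main__":
    seeds = range(8)                         # one independent stream per core
    with ProcessPoolExecutor() as pool:
        results = np.fromiter(pool.map(run_walker, seeds), dtype=float)
    mean = results.mean()
    err = results.std(ddof=1) / np.sqrt(results.size)
    print(f"combined estimate: {mean:.5f} +/- {err:.5f}")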
Understanding the errors
Also driven by experiment is the push to go from finite-size calculations with unknown systematic bias to the thermodynamic limit with a quantified statistical bias. Another important issue, related to the widespread use of effective interactions, is the assessment of systematic versus statistical uncertainties. Whenever the calculation depends on some "external" parameters, one has to face systematic uncertainties that depend in turn on the uncertainties in those parameters. This might arise, for instance, from the need to fit the parameters of an interaction to experimental data; or, when computing forces in a solid from QMC calculations, from the intrinsic uncertainty in the force itself and in the evolution of the nuclear positions, which are in turn parameters in the many-electron Hamiltonian. Sometimes the interplay between statistical and systematic uncertainties generates confusion inside and outside the MC community.
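As a minimal sketch of how the two kinds of uncertainty can be kept separate (the observable, the parameter, and all numbers below are illustrative assumptions, not a prescription), one can quote the statistical error at the central parameter value and estimate the systematic contribution by resampling the parameter within its own uncertainty:

# Minimal sketch of separating statistical and parameter-induced (systematic)
# uncertainties for an observable O(theta) estimated by Monte Carlo.
# All functions, parameter values, and error magnitudes are illustrative.

import numpy as np

rng = np.random.default_rng(2)

def mc_estimate(theta, n_samples=20_000):
    """Toy MC estimate of an observable depending on an external parameter
    theta (e.g., an interaction coupling fitted to data)."""
    x = rng.standard_normal(n_samples)
    samples = np.exp(-theta * x**2)          # stand-in for a QMC estimator
    return samples.mean(), samples.std(ddof=1) / np.sqrt(n_samples)

theta0, dtheta = 0.50, 0.05                  # fitted parameter and its uncertainty

# Statistical error at the central parameter value.
central, stat_err = mc_estimate(theta0)

# Systematic error from the parameter: resample theta within its uncertainty
# and take the spread of the central values (this spread also carries a small
# residual statistical component).
thetas = rng.normal(theta0, dtheta, size=50)
spread = np.array([mc_estimate(t)[0] for t in thetas])
sys_err = spread.std(ddof=1)

total = np.hypot(stat_err, sys_err)          # combine in quadrature
print(f"O = {central:.4f} +/- {stat_err:.4f} (stat) +/- {sys_err:.4f} (sys)")
print(f"total uncertainty ~ {total:.4f}")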
The proposed program bears a strong connection to several recent INT programs, in particular the previous program on Advances in Quantum Monte Carlo Techniques for Non-Relativistic Many-Body Systems (INT-13-2a). In this sense, we expect this program to play an important role in sustaining the communication of methods across fields, extending it more explicitly to the LQCD community, as well as maintaining the continued discussion and interest in the development of stochastic methods for the many-body problem within nuclear physics and beyond.