An underlying drive for all research in the lab is to leverage large-scale supercomputers to enable studies at unprecedented scale. We are developing HARVEY, a massively parallel computational fluid dynamics code, to study the mechanisms driving disease development, inform treatment planning, and improve clinical care. The potential impact of blood flow simulations on diagnosing and treating patients suffering from vascular disease is tremendous. Models of the full arterial tree can provide insight into diseases such as arterial hypertension and enable study of how local factors influence global hemodynamics. We are developing a new, highly scalable implementation of the lattice Boltzmann method to address key challenges such as multiscale coupling, limited memory capacity and bandwidth, and robust load balancing in complex geometries.
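At its core, the lattice Boltzmann method evolves particle distribution functions on a regular lattice through alternating collision and streaming steps. The sketch below shows a single BGK collide-and-stream step on a D3Q19 lattice; the grid size, relaxation time, and periodic boundaries are illustrative assumptions, and the code is a minimal stand-in rather than HARVEY itself.

```cpp
// Minimal sketch of one BGK lattice Boltzmann collide-and-stream step on a
// D3Q19 lattice. Grid size, relaxation time, and boundary handling are
// arbitrary assumptions for illustration; this is not HARVEY code.
#include <vector>
#include <cstdio>

constexpr int Q = 19;                 // discrete velocities in D3Q19
constexpr int NX = 16, NY = 16, NZ = 16;

// D3Q19 velocity set and lattice weights
constexpr int c[Q][3] = {
    { 0, 0, 0},
    { 1, 0, 0},{-1, 0, 0},{ 0, 1, 0},{ 0,-1, 0},{ 0, 0, 1},{ 0, 0,-1},
    { 1, 1, 0},{-1,-1, 0},{ 1,-1, 0},{-1, 1, 0},
    { 1, 0, 1},{-1, 0,-1},{ 1, 0,-1},{-1, 0, 1},
    { 0, 1, 1},{ 0,-1,-1},{ 0, 1,-1},{ 0,-1, 1}};
constexpr double w[Q] = {
    1.0/3,
    1.0/18,1.0/18,1.0/18,1.0/18,1.0/18,1.0/18,
    1.0/36,1.0/36,1.0/36,1.0/36,
    1.0/36,1.0/36,1.0/36,1.0/36,
    1.0/36,1.0/36,1.0/36,1.0/36};

inline int idx(int x, int y, int z, int q) {
    return ((z * NY + y) * NX + x) * Q + q;
}

int main() {
    const double tau = 0.9;                       // assumed relaxation time
    std::vector<double> f(NX * NY * NZ * Q), ftmp(f.size());

    // Initialize to rest-state equilibrium (rho = 1, u = 0).
    for (int z = 0; z < NZ; ++z)
      for (int y = 0; y < NY; ++y)
        for (int x = 0; x < NX; ++x)
          for (int q = 0; q < Q; ++q)
            f[idx(x, y, z, q)] = w[q];

    // One collide-and-stream step with periodic boundaries.
    for (int z = 0; z < NZ; ++z)
      for (int y = 0; y < NY; ++y)
        for (int x = 0; x < NX; ++x) {
          // Macroscopic density and velocity at this node.
          double rho = 0, ux = 0, uy = 0, uz = 0;
          for (int q = 0; q < Q; ++q) {
            const double fq = f[idx(x, y, z, q)];
            rho += fq;
            ux += fq * c[q][0]; uy += fq * c[q][1]; uz += fq * c[q][2];
          }
          ux /= rho; uy /= rho; uz /= rho;
          const double usq = ux*ux + uy*uy + uz*uz;

          for (int q = 0; q < Q; ++q) {
            // BGK collision: relax toward the local equilibrium distribution.
            const double cu  = c[q][0]*ux + c[q][1]*uy + c[q][2]*uz;
            const double feq = w[q] * rho * (1 + 3*cu + 4.5*cu*cu - 1.5*usq);
            const double fq  = f[idx(x, y, z, q)];
            const double fpost = fq - (fq - feq) / tau;

            // Streaming: push the post-collision value to the neighbor node.
            const int xn = (x + c[q][0] + NX) % NX;
            const int yn = (y + c[q][1] + NY) % NY;
            const int zn = (z + c[q][2] + NZ) % NZ;
            ftmp[idx(xn, yn, zn, q)] = fpost;
          }
        }
    f.swap(ftmp);
    std::printf("completed one LBM step on a %dx%dx%d periodic box\n", NX, NY, NZ);
    return 0;
}
```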
In pursuit of this goal, we initially worked to scale HARVEY efficiently to 1.6 million cores of the IBM Blue Gene/Q supercomputer at Lawrence Livermore National Laboratory (LLNL). In collaboration with Erik Draeger, Liam Krauss, and Tomas Oppelstrup at LLNL and John Gunnels at IBM Watson, we completed the first 3D simulation of flow in the arterial network of all vessels greater than 1 mm in diameter. This work was selected as a finalist for the 2015 ACM Gordon Bell Prize, which recognizes outstanding achievement in high performance computing. The results are shown below.
To enable simulations at this scale, we developed techniques to improve load balancing, enable distributed pre-processing, and optimize memory access. We have demonstrated near-optimal scaling on the full Sequoia supercomputer at LLNL, as shown below.
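One memory optimization commonly used in lattice Boltzmann codes for sparse vascular geometries is indirect addressing: distributions are stored only for fluid sites, and streaming neighbors are resolved through a precomputed adjacency table. The sketch below illustrates that general idea with an assumed grid size and a stand-in cylindrical vessel mask; it is not HARVEY's actual data structure.

```cpp
// Minimal sketch of indirect addressing for a sparse vascular geometry:
// only fluid lattice sites are stored, and streaming neighbors are resolved
// through a precomputed adjacency table. Illustrative only, not HARVEY code.
#include <vector>
#include <cstdint>
#include <cstdio>

constexpr int Q = 19;
constexpr int NX = 64, NY = 64, NZ = 64;

// D3Q19 velocity set (same ordering as the collide-and-stream sketch above).
constexpr int c[Q][3] = {
    { 0, 0, 0},
    { 1, 0, 0},{-1, 0, 0},{ 0, 1, 0},{ 0,-1, 0},{ 0, 0, 1},{ 0, 0,-1},
    { 1, 1, 0},{-1,-1, 0},{ 1,-1, 0},{-1, 1, 0},
    { 1, 0, 1},{-1, 0,-1},{ 1, 0,-1},{-1, 0, 1},
    { 0, 1, 1},{ 0,-1,-1},{ 0, 1,-1},{ 0,-1, 1}};

inline int dense(int x, int y, int z) { return (z * NY + y) * NX + x; }

int main() {
    // Assumed geometry mask: true inside the vessel lumen. A straight
    // cylindrical channel along x stands in for a real segmented vessel.
    std::vector<bool> is_fluid(NX * NY * NZ, false);
    for (int z = 0; z < NZ; ++z)
      for (int y = 0; y < NY; ++y)
        for (int x = 0; x < NX; ++x) {
          const int dy = y - NY / 2, dz = z - NZ / 2;
          if (dy * dy + dz * dz < 20 * 20) is_fluid[dense(x, y, z)] = true;
        }

    // Compact each fluid site to a contiguous index; solid sites are skipped,
    // so storage scales with the fluid volume, not the bounding box.
    std::vector<int32_t> sparse_id(NX * NY * NZ, -1);
    int32_t n_fluid = 0;
    for (int i = 0; i < NX * NY * NZ; ++i)
        if (is_fluid[i]) sparse_id[i] = n_fluid++;

    // Precomputed adjacency: neighbor[site*Q + q] holds the compact index of
    // the streaming target in direction q, or -1 when that neighbor is a wall.
    std::vector<int32_t> neighbor(static_cast<size_t>(n_fluid) * Q, -1);
    for (int z = 0; z < NZ; ++z)
      for (int y = 0; y < NY; ++y)
        for (int x = 0; x < NX; ++x) {
          const int32_t s = sparse_id[dense(x, y, z)];
          if (s < 0) continue;
          for (int q = 0; q < Q; ++q) {
            const int xn = (x + c[q][0] + NX) % NX;
            const int yn = (y + c[q][1] + NY) % NY;
            const int zn = (z + c[q][2] + NZ) % NZ;
            neighbor[static_cast<size_t>(s) * Q + q] = sparse_id[dense(xn, yn, zn)];
          }
        }

    // Distributions are allocated for fluid sites only.
    std::vector<double> f(static_cast<size_t>(n_fluid) * Q, 0.0);

    const double dense_mb  = NX * NY * NZ * Q * sizeof(double) / 1.0e6;
    const double sparse_mb = f.size() * sizeof(double) / 1.0e6;
    std::printf("fluid sites: %d of %d (%.1f MB sparse vs %.1f MB dense)\n",
                n_fluid, NX * NY * NZ, sparse_mb, dense_mb);
    return 0;
}
```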
We are continuing to focus on the parallel computing aspects of this research. Two main areas of emphasis are scaling the immersed boundary model, which adds deformable cells and vessel walls to HARVEY's capabilities, and scaling the code on heterogeneous architectures such as Oak Ridge National Laboratory's Frontier supercomputer and Argonne National Laboratory's Aurora supercomputer.
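In the immersed boundary method, a deformable membrane is represented by Lagrangian marker points that exchange velocity and force with the surrounding Eulerian fluid grid through a smoothed discrete delta function. The sketch below shows only the interpolation step (grid velocity to markers), with an assumed shear flow, a spherical stand-in membrane, and a simple two-point delta kernel; it illustrates the general technique rather than HARVEY's discretization.

```cpp
// Minimal sketch of the interpolation step of the immersed boundary method:
// fluid velocity on a regular Eulerian grid is interpolated to Lagrangian
// membrane markers via a discrete delta function. Grid size, marker layout,
// and the two-point kernel are illustrative assumptions, not HARVEY code.
#include <cmath>
#include <vector>
#include <cstdio>

constexpr int N = 32;                 // assumed cubic fluid grid, unit spacing

// Two-point "hat" discrete delta function (one of several common choices).
inline double delta1d(double r) {
    const double a = std::abs(r);
    return a < 1.0 ? 1.0 - a : 0.0;
}

struct Vec3 { double x, y, z; };

inline int idx(int x, int y, int z) { return (z * N + y) * N + x; }

int main() {
    const double pi = std::acos(-1.0);

    // Eulerian velocity field; a simple shear flow u = (y, 0, 0) so the
    // interpolated result is easy to check by eye.
    std::vector<Vec3> u(N * N * N);
    for (int z = 0; z < N; ++z)
      for (int y = 0; y < N; ++y)
        for (int x = 0; x < N; ++x)
          u[idx(x, y, z)] = {static_cast<double>(y), 0.0, 0.0};

    // Lagrangian markers on a small spherical membrane (stand-in for a cell).
    std::vector<Vec3> markers;
    for (int i = 0; i < 64; ++i) {
        const double th = pi * (i % 8 + 0.5) / 8.0;
        const double ph = 2.0 * pi * (i / 8) / 8.0;
        markers.push_back({16.0 + 4.0 * std::sin(th) * std::cos(ph),
                           16.0 + 4.0 * std::sin(th) * std::sin(ph),
                           16.0 + 4.0 * std::cos(th)});
    }

    // Interpolate fluid velocity to each marker: sum over the small support
    // of the delta kernel around the marker position.
    double mean_ux = 0.0;
    for (const Vec3& m : markers) {
        Vec3 um{0.0, 0.0, 0.0};
        const int x0 = static_cast<int>(std::floor(m.x));
        const int y0 = static_cast<int>(std::floor(m.y));
        const int z0 = static_cast<int>(std::floor(m.z));
        for (int dz = 0; dz <= 1; ++dz)
          for (int dy = 0; dy <= 1; ++dy)
            for (int dx = 0; dx <= 1; ++dx) {
              const int x = x0 + dx, y = y0 + dy, z = z0 + dz;
              const double w = delta1d(m.x - x) * delta1d(m.y - y) * delta1d(m.z - z);
              um.x += w * u[idx(x, y, z)].x;
              um.y += w * u[idx(x, y, z)].y;
              um.z += w * u[idx(x, y, z)].z;
            }
        // In a full IB scheme the marker would now move with velocity um, and
        // membrane forces would be spread back to the grid with the same kernel.
        mean_ux += um.x / markers.size();
    }
    std::printf("mean interpolated x-velocity over %zu markers: %.3f\n",
                markers.size(), mean_ux);
    return 0;
}
```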