An underlying drive for all research in the lab is to leverage large-scale supercomputers to enable studies of unprecedented scale. We are developing HARVEY, a massively parallel computational fluid dynamics code, to study the mechanisms driving disease development, inform treatment planning, and improve clinical care. The potential impact of blood flow simulations on diagnosing and treating patients suffering from vascular disease is tremendous. Models of the full arterial tree can provide insight into diseases such as arterial hypertension and enable the study of how local factors influence global hemodynamics. We are developing a new, highly scalable implementation of the lattice Boltzmann method to address key challenges such as multiscale coupling, limited memory capacity and bandwidth, heterogeneous computing architectures, and robust load balancing in complex geometries.
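As a point of reference, a single lattice Boltzmann time step consists of a purely local collision that relaxes particle distribution functions toward equilibrium, followed by streaming of the post-collision values to neighboring lattice sites; this locality is what makes the method attractive for massive parallelism. The sketch below illustrates that structure on a D2Q9 lattice with the single-relaxation-time (BGK) operator. It is a minimal illustration only: the data layout, function names, and periodic boundaries are assumptions and do not reflect HARVEY's internals.

```cpp
#include <array>
#include <vector>

// Illustrative D2Q9 BGK lattice Boltzmann step: collide locally, then stream
// to neighbors. Names and data layout are hypothetical, not HARVEY's own.
constexpr int Q = 9;
constexpr int cx[Q] = { 0, 1, 0,-1, 0, 1,-1,-1, 1 };
constexpr int cy[Q] = { 0, 0, 1, 0,-1, 1, 1,-1,-1 };
constexpr double w[Q] = { 4.0/9, 1.0/9, 1.0/9, 1.0/9, 1.0/9,
                          1.0/36, 1.0/36, 1.0/36, 1.0/36 };

void collide_and_stream(const std::vector<std::array<double, Q>>& f,
                        std::vector<std::array<double, Q>>& f_new,
                        int nx, int ny, double tau) {
  for (int y = 0; y < ny; ++y) {
    for (int x = 0; x < nx; ++x) {
      const auto& fi = f[y * nx + x];
      // Macroscopic density and velocity recovered from the distributions.
      double rho = 0.0, ux = 0.0, uy = 0.0;
      for (int i = 0; i < Q; ++i) { rho += fi[i]; ux += cx[i] * fi[i]; uy += cy[i] * fi[i]; }
      ux /= rho; uy /= rho;
      const double usq = ux * ux + uy * uy;
      for (int i = 0; i < Q; ++i) {
        // BGK collision: relax toward the discrete Maxwellian equilibrium.
        const double cu  = cx[i] * ux + cy[i] * uy;
        const double feq = w[i] * rho * (1.0 + 3.0 * cu + 4.5 * cu * cu - 1.5 * usq);
        const double post = fi[i] - (fi[i] - feq) / tau;
        // Streaming: push the post-collision value to the downstream neighbor
        // (periodic wrap here; real geometries apply boundary conditions).
        const int xn = (x + cx[i] + nx) % nx;
        const int yn = (y + cy[i] + ny) % ny;
        f_new[yn * nx + xn][i] = post;
      }
    }
  }
}
```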
In pursuit of this goal, we have scaled HARVEY efficiently to leadership-class supercomputers across multiple architectures, from IBM Blue Gene/Q to GPU-accelerated exascale systems such as Oak Ridge National Laboratory’s Frontier and Argonne National Laboratory’s Aurora. Early work included the first 3D simulation of blood flow in the full arterial network of vessels greater than 1 mm in diameter, completed in collaboration with Lawrence Livermore National Laboratory and IBM and recognized as an ACM Gordon Bell Prize finalist in 2015. Since then, we have undertaken a systematic effort to re-architect HARVEY for sustained performance on emerging heterogeneous platforms, including portability layers that support multiple GPU vendors and enable us to target both current and future systems without rewriting the solver core.
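One common way to keep a solver core vendor-agnostic is to write kernels against a thin parallel-for abstraction and select the backend at build time. The sketch below shows that pattern only; the names used here (parallel_for, PORTABLE_LAMBDA, the USE_CUDA guard) are assumptions and stand in for whatever portability layer HARVEY actually uses.

```cpp
// Hypothetical portability shim: solver kernels are written once against a
// generic parallel_for, and the backend is chosen at build time. With nvcc
// this requires --extended-lambda; analogous branches could target HIP or SYCL.
#include <cstddef>

#if defined(USE_CUDA)
  #include <cuda_runtime.h>
  #define PORTABLE_LAMBDA [=] __device__

  template <typename F>
  __global__ void kernel_driver(std::size_t n, F body) {
    std::size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) body(i);
  }

  template <typename F>
  void parallel_for(std::size_t n, F body) {
    kernel_driver<<<(n + 255) / 256, 256>>>(n, body);
    cudaDeviceSynchronize();  // sketch only; real code would overlap work
  }
#else
  #define PORTABLE_LAMBDA [=]

  // Host fallback: the same kernel body runs in an ordinary loop.
  template <typename F>
  void parallel_for(std::size_t n, F body) {
    for (std::size_t i = 0; i < n; ++i) body(i);
  }
#endif

// A kernel written once against the shim (device memory management omitted):
// y <- a*x + y over n lattice sites.
void axpy(std::size_t n, double a, const double* x, double* y) {
  parallel_for(n, PORTABLE_LAMBDA (std::size_t i) { y[i] = a * x[i] + y[i]; });
}
```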
To address the increasing complexity and memory constraints of modern architectures, we have introduced moment-space formulations of the lattice Boltzmann method that improve numerical stability, reduce communication overhead, and better utilize high-bandwidth memory. We have also developed optimized load-balancing strategies for irregular vascular geometries, along with distributed pre-processing and I/O pipelines capable of handling petascale datasets.
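To give a concrete sense of the load-balancing problem: a vascular geometry occupies only a small fraction of its bounding box, so partitioning by volume leaves many ranks with little fluid to compute. A simple remedy, sketched below with hypothetical names, is to weight the decomposition by the number of fluid lattice sites each block contains so that every rank receives a comparable amount of work; HARVEY's production strategies are more elaborate than this illustration.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative fluid-site-weighted partitioning for a sparse vascular domain.
// The domain is pre-cut into fixed-size blocks; each block carries a count of
// fluid lattice sites, and consecutive blocks are handed out so every rank
// receives roughly the same amount of fluid work rather than the same volume
// of (mostly empty) bounding box. Names and structure are assumptions.
std::vector<int> assign_blocks(const std::vector<std::uint64_t>& fluid_sites_per_block,
                               int num_ranks) {
  std::uint64_t total = 0;
  for (auto n : fluid_sites_per_block) total += n;
  const double target = static_cast<double>(total) / num_ranks;  // work per rank

  std::vector<int> owner(fluid_sites_per_block.size());
  std::uint64_t accumulated = 0;
  int rank = 0;
  for (std::size_t b = 0; b < fluid_sites_per_block.size(); ++b) {
    owner[b] = rank;
    accumulated += fluid_sites_per_block[b];
    // Advance to the next rank once its share of fluid sites is filled.
    if (accumulated >= (rank + 1) * target && rank < num_ranks - 1) ++rank;
  }
  return owner;
}
```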
Recognizing that traditional post-processing is a bottleneck at this scale, we have integrated scalable in situ visualization and analysis tools directly into HARVEY, allowing feature extraction, metric computation, and interactive exploration while simulations are still running. This capability is essential for steering large-scale experiments, reducing storage costs, and accelerating time-to-insight.
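The basic shape of such an in situ pipeline is a set of analysis callbacks registered against the main time-stepping loop, so derived quantities are computed while the field data is still resident in memory and only compact results ever reach the filesystem. The sketch below shows that registration pattern with hypothetical names; it is not HARVEY's actual API.

```cpp
#include <functional>
#include <utility>
#include <vector>

// Hypothetical in situ hook: analysis callbacks run inside the time loop on
// in-memory data, so only compact derived quantities (not full flow fields)
// are written out. All names here are illustrative.
struct SimulationState {
  long step = 0;
  std::vector<double> velocity;  // flattened velocity field on this rank
};

class InSituPipeline {
 public:
  using Analysis = std::function<void(const SimulationState&)>;

  // Register an analysis to run every `stride` time steps.
  void add(Analysis fn, long stride) { stages_.push_back({std::move(fn), stride}); }

  // Invoked once per time step from the solver loop.
  void run(const SimulationState& state) {
    for (const auto& s : stages_)
      if (state.step % s.stride == 0) s.fn(state);
  }

 private:
  struct Stage { Analysis fn; long stride; };
  std::vector<Stage> stages_;
};

// Usage inside the time loop (sketch): register a reduction that logs or
// ships a metric to a visualization service instead of dumping the field.
//
//   pipeline.add([](const SimulationState& s) { /* reduce and report */ },
//                /*stride=*/100);
//   for (state.step = 0; state.step < n_steps; ++state.step) {
//     advance_lbm(state);   // collision + streaming (not shown)
//     pipeline.run(state);  // in situ analysis on in-memory data
//   }
```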
We continue to focus on advancing HARVEY’s parallel computing capabilities, with current priorities including scaling deformable cell and wall models, further optimizing GPU portability, and expanding in situ workflows to couple directly with machine learning–driven analysis. These HPC innovations are critical to achieving our broader goal: delivering patient-specific vascular simulations at unprecedented speed and fidelity, supporting applications from device testing to large-scale virtual clinical trials.