## Scalable Space-Time Adaptivity for Simulations of Binary Black Hole Intermediate-Mass-Ratio Inspirals
We present a highly scalable framework that targets problems of interest to the numerical relativity and broader astrophysics communities. The framework combines a parallel, octree-refined adaptive mesh with wavelet-based adaptive multiresolution and a physics module that solves the Einstein equations of general relativity. The goal of this work is to perform advanced, massively parallel numerical simulations of intermediate-mass-ratio inspirals of binary black holes with mass ratios on the order of 100:1. These studies will be used to generate waveforms for the data analysis of the Laser Interferometer Gravitational-Wave Observatory (LIGO) and to calibrate semi-analytical approximate methods. Our framework consists of a distributed-memory, octree-based adaptive meshing framework coupled with a code generator that translates symbolic expressions into efficient compute kernels. High levels of adaptivity are required to maintain scalability as the mass ratio grows. The code generator makes our code portable across architectures, emitting SIMD-vectorized, OpenMP, and CUDA variants that operate on our distributed-memory adaptive data structures. The equations for the target application are written once in symbolic notation, and generators for new architectures can be added independently of the application. This symbolic interface also makes our code extensible: it has been designed to accommodate many existing astrophysics algorithms for plasma dynamics and radiation hydrodynamics. Our adaptive meshing algorithms and data structures have been optimized for modern architectures with deep memory hierarchies, enabling excellent performance and scalability on leadership-class systems. We demonstrate excellent weak scalability up to 131K cores on Oak Ridge National Laboratory's Titan for binary mergers with mass ratios up to 100.
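To make the wavelet-based adaptivity concrete, here is a minimal sketch of an interpolating-wavelet refinement test in 1D: a block is flagged for refinement where odd-indexed samples cannot be predicted by linear interpolation from the coarser, even-indexed samples within a tolerance. The function name and tolerance are illustrative assumptions, not the framework's actual interface.

```python
import numpy as np

def needs_refinement(u, tol):
    """Flag a 1D block for refinement when its wavelet (detail) coefficients
    exceed tol. Details measure the mismatch between odd samples and their
    linear-interpolation prediction from even samples. (Illustrative only;
    the framework's criterion and interface may differ.)"""
    coarse = u[::2]                               # even samples: coarse level
    predicted = 0.5 * (coarse[:-1] + coarse[1:])  # predict odd samples
    details = np.abs(u[1:-1:2] - predicted)       # interpolating-wavelet coefficients
    return details.max() > tol

x = np.linspace(0.0, 1.0, 33)
print(needs_refinement(np.sin(2 * np.pi * x), tol=0.05))    # False: smooth field
print(needs_refinement(np.tanh(50 * (x - 0.5)), tol=0.05))  # True: sharp feature
```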
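The symbolic interface can be sketched in the same spirit. Below is a minimal example of symbolic-to-code generation with SymPy; the toy scalar wave system and the plain-C backend are illustrative assumptions (the framework evolves the full Einstein equations and targets SIMD, OpenMP, and CUDA backends), not its actual API.

```python
# Minimal sketch of symbolic-to-code generation with SymPy.
# The fields and the toy wave equation below are illustrative assumptions;
# the actual framework evolves the full Einstein equations.
import sympy as sp
from sympy.utilities.codegen import codegen

# Symbolic fields and (pre-computed) spatial derivatives for a scalar wave.
u, chi = sp.symbols('u chi')
d2u_dx2, d2u_dy2, d2u_dz2 = sp.symbols('d2u_dx2 d2u_dy2 d2u_dz2')

# Right-hand sides written once in symbolic notation:
#   du/dt = chi,   dchi/dt = laplacian(u)
rhs_u = chi
rhs_chi = d2u_dx2 + d2u_dy2 + d2u_dz2

# A backend (here: plain C99) emits kernels from the symbolic expressions;
# generators for other architectures can be added independently.
[(c_name, c_code), (h_name, h_code)] = codegen(
    [('rhs_u', rhs_u), ('rhs_chi', rhs_chi)],
    language='C99', project='toy_wave')
print(c_code)
```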
Bio: Hari Sundar is an Assistant Professor in the School of Computing at the University of Utah. His research focuses on the development of computationally optimal, parallel, high-performance algorithms that are efficient and scalable on state-of-the-art architectures, driven by applications in biosciences, geophysics, and computational relativity. His research has resulted in state-of-the-art distributed algorithms for adaptive mesh refinement, geometric multigrid, the fast Gauss transform, and sorting. He received his Ph.D. from the University of Pennsylvania and was a postdoctoral researcher at the Institute for Computational Engineering and Sciences at the University of Texas at Austin.