Author: Lena Bühler (High Performance Computing Center Stuttgart, Germany)

After three years of work on key algorithmic challenges in CFD (Computational Fluid Dynamics), the ExaFLOW project looks back on what has been achieved during the project runtime. With three flagship runs, ExaFLOW has addressed specific CFD use cases that highlight the importance of its outcomes for both industry and academia.

CFD is a prime contender for reaching exascale performance: In fluid dynamics and in particular turbulence there is virtually no limit to the size of a relevant system to be studied via numerical simulations, which can be exploited for extreme parallel scaling. Moreover, turbulent flows are an essential element of many industrial and academic problems: A crude estimate shows that 10% of the energy use in the world is spent overcoming turbulent friction.

In order to enable the use of accurate simulation models in exascale environments, ExaFLOW conducted three flagship runs of high industrial relevance, addressing key innovation areas such as mesh adaptivity, resilience, and strong scaling at exascale. Each flagship run addresses a specific CFD use case, demonstrating the improvements made to the open-source community codes used in the project: Nektar++, Nek5000, and OpenSBLI.

Author: Joaquim Peiró (Imperial College London, UK), David Moxey (University of Exeter, UK), Michael Turner (Imperial College London, UK), Gianmarco Mengaldo (California Institute of Technology, USA), Rodrigo C. Moura (Instituto Tecnológico de Aeronáutica, Brazil), Ayad Jassim (Hewlett Packard Enterprise, UK), Mark Taylor (London Computational Solutions, UK), Spencer J. Sherwin (Imperial College London, UK)

Introduction

We review the technical innovations incorporated in the high-order incompressible Navier-Stokes solver that is part of the spectral/hp element platform Nektar++. We discuss the construction of a robust under-resolved DNS (often called implicit LES) framework for complex geometries at the high Reynolds numbers that typically arise in industrial applications. These problems require the use of unstructured high-order curvilinear (and possibly hybrid) meshes, as well as the application of regularisation techniques that help to stabilise the underlying numerics. We focus on (i) the complex high-order mesh generation step, (ii) the regularisation techniques required for numerical stabilisation, specifically a novel spectral vanishing viscosity approach, (iii) the physical accuracy of the results, and (iv) the computational performance of the overall simulation workflow. As a practical example of this framework, we then present the high-fidelity simulation of a race car at a Reynolds number of 1,000,000 (based on the car length). The example shows that high-fidelity computations can help construct better aerodynamic solutions for industrial problems, provided that the (complex) geometry is represented correctly, accurate and stable numerical discretisation schemes with low dispersion/diffusion errors are adopted, and numerics, physics and computer science are integrated efficiently under a single developmental umbrella.
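
For readers less familiar with the regularisation mentioned above, the following is a minimal sketch of spectral vanishing viscosity in its classical one-dimensional form, as commonly cited in the spectral/hp literature; the cutoff mode M, the highest mode N and the coefficient ε are method parameters here, and the kernel actually implemented in Nektar++ may differ in detail.

```latex
% Classical 1D SVV-regularised equation: the usual viscous term is
% augmented by an artificial dissipation acting only on the highest
% expansion modes (|k| > M), leaving the resolved scales untouched.
\frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x}
  = \nu\,\frac{\partial^{2} u}{\partial x^{2}}
  + \varepsilon\,\frac{\partial}{\partial x}
    \left( Q_M * \frac{\partial u}{\partial x} \right),
\qquad
\hat{Q}_M(k) =
\begin{cases}
  0, & |k| \le M, \\[2pt]
  \exp\!\left[-\left(\dfrac{|k|-N}{|k|-M}\right)^{2}\right], & |k| > M .
\end{cases}
```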

Author: Allan S. Nielsen, Ecole polytechnique fédérale de Lausanne (EPFL)

HPC resilience is expected to be a major challenge for future exascale systems. On today's petascale systems, hardware failures that disturb the cluster workflow are already a daily occurrence. The most common approach currently used to safeguard applications against the impact of hardware failures is to write all relevant data to the parallel file system at regular intervals. In the event of a failure, one may simply restart the application from the most recent checkpoint rather than recomputing everything from scratch. This approach works fairly well for smaller applications using 1-100 nodes, but becomes very inefficient for large-scale computations for various reasons. A major challenge is that the parallel file system is unable to write checkpoints fast enough. It has been suggested that if the current approach of checkpointing to and recovering from the parallel file system were used on future exascale systems, applications would be unable to make progress, being in a near-constant state of checkpointing to, or restarting from, the parallel file system.
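
To make the scaling argument concrete, here is a small back-of-the-envelope sketch (not part of the original post) based on the well-known Young/Daly approximation for the optimal checkpoint interval; the node counts, per-node MTBF and checkpoint write time below are purely illustrative assumptions.

```python
# Back-of-the-envelope estimate of checkpoint/restart overhead, using the
# Young/Daly approximation for the optimal checkpoint interval:
#   tau_opt ~ sqrt(2 * C * MTBF_system).
# All numbers below are illustrative assumptions, not measurements.
import math

def overhead_fraction(nodes, mtbf_node_hours, checkpoint_minutes):
    """Rough fraction of wall time lost to checkpointing and rework."""
    mtbf_system = mtbf_node_hours * 3600.0 / nodes   # system MTBF [s]
    c = checkpoint_minutes * 60.0                    # checkpoint write cost [s]
    tau = math.sqrt(2.0 * c * mtbf_system)           # Young/Daly optimal interval [s]
    # First-order waste model: time spent writing checkpoints, plus, when a
    # failure strikes, half an interval of lost work and a restart (assumed
    # to cost about as much as a checkpoint write).
    waste = c / tau + (tau / 2.0 + c) / mtbf_system
    return min(waste, 1.0)                           # 1.0 means no progress at all

# Illustrative assumptions: 5-year per-node MTBF, 30-minute checkpoint write.
for nodes in (100, 10_000, 100_000):
    frac = overhead_fraction(nodes, mtbf_node_hours=5 * 365 * 24, checkpoint_minutes=30)
    print(f"{nodes:>7} nodes: ~{frac:.0%} of wall time lost")
```

With these assumed numbers the overhead grows from a few percent on a 100-node system to essentially all of the wall time at 100,000 nodes, which is exactly the effect described above.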

Turbulent (incompressible) flow around a NACA-4412 profile - adaptive mesh refinement 

Author: Adam Peplinski, Nicolas Offermans and Philipp Schlatter (KTH Mechanics)

All of these cases were designed to expose some of the difficulties encountered in CFD, for instance complex geometry and intricate physical interactions, but they share a common denominator: they represent relevant cases that can be scaled to large sizes (i.e. in the number of grid points and in running time) so as to be of industrial relevance.

In the present blog, we describe the progress that we are making on one of those flagship runs, namely the incompressible flow around an asymmetric wing profile, the NACA 4412 airfoil. Whereas we have performed similar simulations in our group before, the major innovation from ExaFLOW comes with a novel treatment of the discretisation inside the computational domain: For the first time, we allow the mesh to evolve dynamically depending on the estimated computational error at any given point in space and time. During ExaFLOW, we coupled this so-called adaptive mesh refinement to the highly accurate spectral-element code Nek5000. Special focus has been on the design of the preconditioners necessary to efficiently solve the arising linear systems, the definition of the error indicators (in this case the so-called spectral error indicators), and the overall scalability of our implementation.
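
To illustrate the idea behind the spectral error indicators mentioned above (the implementation coupled to Nek5000 is more elaborate), the sketch below fits an exponential decay to the tail of an element's Legendre coefficient spectrum and extrapolates it to estimate the local truncation error; all function and variable names are ours, not Nek5000's.

```python
# Simplified sketch of a Mavriplis-type spectral error indicator: fit an
# exponential decay to the tail of the coefficient spectrum of one element
# and extrapolate it to estimate the truncation error. Elements with a
# large indicator would be marked for refinement.
import numpy as np

def spectral_error_indicator(coeffs, n_fit=4):
    """coeffs: numpy array of 1D Legendre coefficients a_0..a_N of one element."""
    N = len(coeffs) - 1
    k = np.arange(N - n_fit + 1, N + 1)           # mode numbers of the last n_fit modes
    a = np.abs(coeffs[k]) + 1e-30                 # avoid log(0)
    slope, logC = np.polyfit(k, np.log(a), 1)     # fit log|a_k| ~ logC + slope * k
    sigma = -slope                                # decay rate (positive if resolved)
    if sigma <= 0.0:                              # non-decaying spectrum: flag for refinement
        return np.inf
    # Energy of the discarded tail, sum_{k>N} (C e^{-sigma k})^2, approximated
    # by an integral, plus the contribution of the highest resolved mode.
    C = np.exp(logC)
    tail = C**2 * np.exp(-2.0 * sigma * N) / (2.0 * sigma)
    return float(np.sqrt(coeffs[N]**2 + tail))

# Example: a rapidly decaying (well-resolved) spectrum gives a small indicator.
print(spectral_error_indicator(np.exp(-1.5 * np.arange(8.0))))
```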

Author: Sebastian Wagner, Automotive Simulation Center Stuttgart e.V.

In our previous blog post we presented results from strong scalability tests performed with Nektar++ only. As a next step, the performance of Nektar++ was compared with that of the commercial simulation software Fluent. The big difference lies in the numerical methods: Nektar++ uses a spectral/hp element method, whereas Fluent (run in DES mode) implements the finite volume method. In Nektar++, the polynomial expansion orders were 3 for velocity and 2 for pressure.

Table 1: Test conditions for the strong scalability test

Parameter              Nektar++   Fluent
Reynolds number        100        6.3×10⁶
Time step Δt [s]       10⁻⁵       10⁻⁴
Number of time steps   5000       500
Physical time [s]      0.05       0.05