Authors: Dr. Christian Jacobs, University of Southampton; Niclas Jansson, KTH Royal Institute of Technology

ParCFD mini-symposium: Towards Exascale in High-Order Computational Fluid Dynamics

  • "High-Fidelity Road Car & Full-Aircraft Simulations using OpenFOAM on ARCHER - Perspectives On The Need For Exa-Scale" - N. Ashton
  • "Incorporating complex physics in the Nek5000 code: reactive and multiphase flows"A. Tomboulides
  • "Towards Resilience at Exascale: Memoryconservative fault tolerance in Nektar++"C. Cantwell
  • "Future-proofing CFD codes against uncertain HPC architectures: experiences with OpenSBLI"N.D. Sandham
  • "Towards adaptive mesh refinement for the spectral element solver Nek5000" - A. Peplinksi

Author: Dr. Christian Jacobs, University of Southampton

In simulations of fluid turbulence, small-scale structures must be sufficiently well resolved. A characteristic feature of under-resolved regions of the flow is the appearance of grid-to-grid point oscillations, and such oscillations are often used to decide when and where grid refinement is required. Two new error indicators, which permit the quantification of these features of under-resolution, have recently been developed by SOTON as part of the ExaFLOW project. Both are based on spectral techniques using small-scale Fourier transforms.
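As a concrete illustration of the general idea, the sketch below (Python with NumPy) applies a small-scale Fourier transform to a local window of the solution and reports the fraction of spectral energy held by the highest resolved wavenumbers; a value near one flags grid-to-grid point oscillations. This is a minimal sketch of a spectral oscillation indicator in general, not the SOTON indicators themselves, and the function name and window size are illustrative assumptions.

    # Minimal spectral oscillation indicator (illustrative, not the SOTON code):
    # the fraction of spectral energy in the highest resolved wavenumbers of a
    # small 1D solution window. Values near 1 indicate grid-to-grid oscillations.
    import numpy as np

    def oscillation_indicator(u_window, n_high=2):
        coeffs = np.fft.rfft(u_window - np.mean(u_window))  # drop the mean (DC mode)
        energy = np.abs(coeffs) ** 2
        total = energy.sum()
        if total == 0.0:
            return 0.0  # constant field: nothing to flag
        return energy[-n_high:].sum() / total

    x = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
    print(oscillation_indicator(np.sin(x)))                # smooth field: ~0
    print(oscillation_indicator((-1.0) ** np.arange(16)))  # sawtooth mode: ~1

In practice an indicator like this would be evaluated over many small windows of the computational grid, with refinement triggered wherever the high-wavenumber energy fraction exceeds a chosen threshold.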

The High-Performance Computing Center Stuttgart (HLRS) will host one of four residencies in conjunction with the ExaFLOW project. We invite applications from artists and designers interested in computer science and technology to join us as part of the European Commission's VERTIGO Project. The deadline for applications is May 29, 2017 at 10:00 CET.

Author: Dr. Chris Cantwell, Imperial College London

Fluid dynamics is one of the application areas driving advances in supercomputing towards the next great milestone of exascale: achieving 10^18 floating-point operations per second (flops). One of the major challenges of running such simulations is the reliability of the computing hardware on which they execute. With clock speeds saturating, growth in available flops is now achieved primarily through increased parallelism, so an exascale machine is likely to consist of far more components than previous systems. Since the reliability of individual components is not improving appreciably, the time for which an exascale system can run before a failure occurs (the mean time to interrupt, MTTI) will be on the order of a few minutes. Indeed, the latest petascale (10^15 flops) supercomputers already have an MTTI of around 8 hours.
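The scaling behind this estimate is simple: if component reliability stays fixed, the system-level MTTI falls in inverse proportion to the component count. The Python sketch below makes the arithmetic explicit; the 8-hour petascale figure comes from the text above, while the hundred-fold growth in component count is an illustrative assumption, not a figure from the article.

    # Back-of-the-envelope MTTI scaling: system MTTI ~ component MTBF / N,
    # so multiplying the component count divides the system MTTI.
    petascale_mtti_hours = 8.0  # petascale MTTI quoted in the text
    component_growth = 100.0    # ASSUMED growth in component count at exascale

    exascale_mtti_minutes = petascale_mtti_hours * 60.0 / component_growth
    print(f"Estimated exascale MTTI: {exascale_mtti_minutes:.1f} minutes")
    # -> Estimated exascale MTTI: 4.8 minutes, i.e. "a few minutes"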

Author: Dr. Nick Johnson, EPCC

Having just returned from Lausanne, where we had our most recent all-hands meeting, it was time to write our periodic report. These are good opportunities to step back and see what we've covered as a work-package and partner since our first meeting in Stockholm in October 2015.

I resurrected a set of slides to do a comparison, and I can see that we've covered a fair amount of work in the past 18 months; I even now understand some of the maths! We've worked heavily on energy efficiency, benchmarking codes in depth on a number of systems. We are lucky to have three similar (but not identical) systems from the same vendor, so we can easily exchange measurement tips and libraries. It is also apparent that, despite using well-tuned systems, we see variance between runs of a simulation and have to design our experiments carefully to account for it.
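As a small illustration of the point about run-to-run variance, the sketch below (Python, with hypothetical timing values rather than actual project measurements) computes the coefficient of variation over repeated runs of the same simulation, one simple way to judge whether single-run benchmark comparisons can be trusted.

    # Quantifying run-to-run variance in benchmark timings (values are
    # hypothetical, not ExaFLOW measurements).
    import statistics

    runtimes = [412.3, 405.9, 431.7, 409.2, 418.8]  # wall-clock seconds per run

    mean = statistics.mean(runtimes)
    stdev = statistics.stdev(runtimes)
    print(f"mean = {mean:.1f} s, stdev = {stdev:.1f} s, CoV = {stdev / mean:.1%}")
    # A CoV of a few percent means single-run comparisons can mislead;
    # report statistics over repeated runs instead.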