
HPC optimisation techniques in the service of efficient CFD simulations



Computational Fluid Dynamics (CFD) simulations play a pivotal role today, enabling engineers and scientists to analyse fluid flows and offering crucial insights for design optimisation and problem-solving across industries as diverse as aerospace, automotive, energy, and biomedical engineering. The inherent computational intensity of CFD solvers, however, forms a severe bottleneck and motivates the need for HPC optimisation techniques. The shift towards more sophisticated high-fidelity simulations and the rising adoption of optimisation strategies and machine learning-driven workflows have further intensified the demand for HPC resources. This blog article provides an overview of HPC optimisation approaches and trends for efficient CFD.


The most common optimisation approach is the deployment of CFD solvers on a variety of supercomputers, starting historically from single powerful cores and moving nowadays to massively parallel clusters. Efficient deployment on these platforms relies on parallel computing frameworks that split the mesh and workload into independent tasks and distribute them across multiple cores. This strategy is typically implemented using MPI, OpenMP, or hybrid MPI+OpenMP solutions. A typical example is ANSYS FLUENT[1], where the mesh is divided into as many partitions as there are cores and each core solves one partition. Nek5000[2], one of the most widely used solvers in academia, leverages MPI to exploit both inter- and intra-node parallelism and has demonstrated considerable scalability across several thousand nodes on petascale systems.



Figure 1. Memory and speed requirements of CFD solvers have dictated the shift towards supercomputers and exascale computing throughout the years.[3]


Most solvers, however, are not optimised to exploit the capabilities of modern High-Performance Computing (HPC) architectures, which are shifting from traditional homogeneous systems towards heterogeneous exascale computing platforms. An active field of research therefore focuses on transforming solvers so that they can benefit from accelerator technologies such as GPUs, FPGAs, and CGRAs. Initial efforts leveraged OpenACC[4] and OCCA[5]. OpenACC is an open standard designed to simplify parallel programming for heterogeneous computing systems: it adds directives to existing programs and yields portable code across various hardware platforms. OCCA is a software library that allows developers to write high-performance applications for various architectures without rewriting code for each specific hardware platform. An alternative is the CUDA programming interface for NVIDIA GPUs and OpenCL for FPGA-based acceleration. AWS[6] is contributing to this shift of CFD to heterogeneous exascale clusters by equipping EC2 instances with GPUs and with popular tools and codes, enabling faster and highly parallel deployment of simulations.



Figure 2. The Summit (OLCF-4) IBM supercomputer, equipped with 27,648 NVIDIA Tesla V100 GPUs and capable of 200 petaFLOPS.


CFD simulations in the RefMap project

In this direction, the RefMap project plans to optimise CFD simulations by adopting one of the most recent GPU-enabled, highly scalable solvers: SOD2D, a CFD solver for scalable simulation of turbulent compressible flows that leverages hardware acceleration. SOD2D is a Fortran-based solver annotated with GPU-oriented OpenACC directives that offload the most demanding functions of the time-integration module to the GPU. SOD2D also supports MPI execution, allowing it to scale across multiple GPU-equipped nodes. To further enhance the performance and portability of SOD2D on various supercomputing clusters, RefMap aims to couple the existing HPC-optimised solver with sophisticated HPC autotuning techniques for performance tuning on any GPU architecture and parallel system.


Sources:

[1] https://www.ansys.com/products/fluids/ansys-fluent

[2] https://nek5000.mcs.anl.gov/

[3] "The Opportunities and Challenges of Exascale Computing", DOE Office of Science (2010), in reference to aerospace applications

[4] https://www.openacc.org/

[5] https://libocca.org/#/

[6] https://aws.amazon.com/


 

Stay connected with RefMap and never miss an update! Subscribe to our newsletter for the latest news, insights, and advancements in sustainable aviation. For real-time updates and engaging discussions, follow us on LinkedIn and Twitter. Join our growing community and be part of the conversation shaping the future of aviation.
