Working with Fortran in 2020: Parallelisation

Fortran – Making optimal use of hardware

by DI Dr. Christoph Hofer

The name Fortran is derived from “FORmula TRANslation”, and the language is one of the oldest programming languages. For many software developers, it is the archetype of an old, ponderous, limited and difficult-to-understand programming language that is best avoided. For the old versions of Fortran, this prejudice may indeed be justified. However, Fortran has changed a lot over its long history, so that in its “modern” variants (such as Fortran 2003) the language has a much worse reputation than it deserves. The typical use case for Fortran is computationally intensive numerical simulation, such as weather forecasting, flow simulation, stability calculations and many more.

Table of contents

  • From old to new
  • Parallelisation: how to
  • OpenMP
  • MPI
  • Coarray
  • Author

From old to new

Fortran is considered the first high-level programming language ever implemented and was developed by IBM in the years 1954 – 1957 (FORTRAN I). The scope of the language was still very limited: for example, there were only integers and reals (floating-point numbers) as data types, and no functions yet. In the following years, new, improved and more extensive Fortran versions were developed (FORTRAN II, FORTRAN III, FORTRAN 66). Fortran received its next major update in 1977 (FORTRAN 77). Thanks to its new language features, this version became very popular and quickly became “the” Fortran. Even today, when people talk about Fortran code, they usually mean FORTRAN 77 code, which also explains many of the prejudices against the language. Since then, there have been several more updates, bringing the language closer to modern programming concepts and standards.

Major milestones in this development were the updates to Fortran 90 and Fortran 2003, which, in addition to the change in spelling (FORTRAN → Fortran), added now-common concepts such as free source form, modules, operator overloading, derived data types, pointers and object-oriented programming to the language. In between, Fortran 95 and Fortran 2008 were each minor updates. The latest version of the Fortran standard is Fortran 2018, although no compiler vendor supports all of its features yet.

Info

High-Level Programming Language

Microprocessors are programmed in so-called machine language, i.e. a binary code that the microprocessor interprets as a sequence of instructions. Since these languages depend heavily on the hardware used, and programming directly in machine language is very time-consuming, the development of high-level programming languages and compilers was a great step forward. High-level programming languages use mathematical expressions or expressions based on a natural language (usually English) that are translated into machine language by a compiler (and linker). High-level programming languages are hardware-independent; the adaptation to the concrete hardware is done by the compiler.

Operator Overloading

is a programming technique with which the meaning of operators (such as +, -, *, …) depends on the types of the operands. For example, 1 + 2 returns the number 3, but “Hello ” + “World” returns the string “Hello World”.
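A minimal Fortran sketch of this (module and procedure names chosen for illustration): the “+” operator is extended to character strings, for which the intrinsic “+” is not defined.

    ! Overload "+" for character strings (illustrative example)
    module string_ops
      implicit none
      interface operator(+)
        module procedure concat
      end interface
    contains
      function concat(a, b) result(c)
        character(len=*), intent(in) :: a, b
        character(len=len(a)+len(b)) :: c
        c = a // b   ! "//" is the intrinsic concatenation operator
      end function concat
    end module string_ops

    program demo
      use string_ops
      implicit none
      print *, "Hello " + "World"   ! prints: Hello World
    end program demo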

Derived Data Type

allows the user to define new data types from existing ones. This makes it possible to collect logically related data in one type and reuse it in different parts of the programme.
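A minimal sketch of a derived data type (type and component names are illustrative): a labelled point in the plane is collected in one type and reused.

    ! A derived data type grouping logically related data
    module point_mod
      implicit none
      type :: point
        real :: x, y                   ! coordinates
        character(len=16) :: label     ! descriptive name
      end type point
    end module point_mod

    program demo
      use point_mod
      implicit none
      type(point) :: p
      p = point(1.0, 2.0, "corner")    ! structure constructor
      print *, p%label, p%x, p%y
    end program demo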

Pointer

is a data type that stores the memory address of a variable instead of the variable’s value itself. A pointer thus refers to a memory location; retrieving the value behind it is called dereferencing. In contrast to pointers in C/C++, Fortran pointers carry additional information and, in the case of arrays, can also refer to non-contiguous memory areas such as array slices.
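A minimal sketch (variable names are illustrative): a Fortran pointer is associated with a strided array section, i.e. a non-contiguous memory area.

    program pointer_demo
      implicit none
      real, target  :: a(10)        ! "target" allows pointers to refer to a
      real, pointer :: p(:)
      integer :: i
      a = [(real(i), i = 1, 10)]
      p => a(1:10:2)                ! p refers to every second element of a
      print *, p                    ! prints 1. 3. 5. 7. 9.
      p = 0.0                       ! writing through p modifies a
      print *, a
    end program pointer_demo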

Object-oriented Programming

is a programming style in which data is not only collected in derived data types, but encapsulated together with logic and functionality in so-called objects. Each object has defined interfaces through which it interacts with other objects; often, for example, not all data and procedures of an object are visible to other objects. The aim is to avoid code duplication in order to reduce the potential for errors and the maintenance effort. A short sketch follows below.
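A minimal object-oriented sketch in Fortran (type and procedure names are illustrative): the data is private, so other programme units can interact with it only through the type-bound procedures.

    module circle_mod
      implicit none
      type :: circle
        private                       ! hide the data from other modules
        real :: radius = 1.0
      contains
        procedure :: area             ! public interface of the object
        procedure :: set_radius
      end type circle
    contains
      real function area(self)
        class(circle), intent(in) :: self
        area = 3.14159 * self%radius**2
      end function area
      subroutine set_radius(self, r)
        class(circle), intent(inout) :: self
        real, intent(in) :: r
        self%radius = r
      end subroutine set_radius
    end module circle_mod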

Parallelisation: how to

Since the trend in the development of new CPUs is clearly moving towards ever higher numbers of processor cores, efficient parallelisation is an integral part of high-performance code. The goal is to give all cores approximately the same computing load and, through intelligent design of the programme code, to keep the communication and data exchange between the cores as low as possible. Common options for parallelising programmes on a CPU are OpenMP and MPI, as well as coarrays, which were newly added to the Fortran standard.

OpenMP

OpenMP is designed for shared memory systems such as desktop PCs or shared memory mainframes. Technically, threads are created (fork) at the beginning of an OpenMP parallel region and joined again at its end, following the so-called “fork-join” model. OpenMP offers a simple way to run certain parts of a programme, such as loops, in parallel without major structural changes to the programme. In C/C++ this is realised via “#pragma” directives, in Fortran with the special sentinel “!$”. Because the implementation is so simple, the code remains easy to understand and the danger of introducing bugs during development is low. However, parallel scalability is often limited by the frequent creation of threads and the synchronisation of shared variables in the CPU caches.

Figure: the “fork-join” model of OpenMP
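A minimal sketch of an OpenMP-parallel loop (a compiler flag such as gfortran’s -fopenmp is assumed): the “!$omp” sentinels are simply ignored by a compiler without OpenMP support, so the serial programme remains valid.

    program omp_sum
      implicit none
      integer, parameter :: n = 1000000
      real(8) :: x(n), total
      integer :: i
      x = 1.0d0
      total = 0.0d0
      ! fork: iterations are distributed over the threads;
      ! the partial sums are combined when the threads join
      !$omp parallel do reduction(+:total)
      do i = 1, n
        total = total + x(i)
      end do
      !$omp end parallel do
      print *, "sum =", total      ! expected: 1000000.0
    end program omp_sum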

MPI

MPI (Message Passing Interface) is based on a distributed memory model, such as computing clusters consisting of several computers or CPUs. In contrast to OpenMP, processes are created instead of threads; they operate independently of each other and do not share any variables. Processes do not have to run on the same CPU, which makes MPI suitable for distributed memory systems, although it can also be used on shared memory systems. To exchange data and information, messages are sent between the processes. The processes are created at the start of the programme and remain in existence for the entire runtime. Furthermore, there is no need to synchronise variables held in the CPU caches, which usually makes MPI much more scalable than OpenMP. The disadvantage, however, is that large structural changes to the code are usually required, which greatly increases the risk of introducing new bugs. MPI also allows communication between different programmes, so it follows the so-called multiple program multiple data (MPMD) model.
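A minimal MPI sketch in Fortran using the mpi_f08 module (build and launch commands such as mpifort and mpirun vary by MPI implementation): each process contributes a partial value, and MPI_Reduce combines them on process 0.

    program mpi_sum
      use mpi_f08
      implicit none
      integer :: rank, nprocs
      real(8) :: partial, total
      call MPI_Init()
      call MPI_Comm_rank(MPI_COMM_WORLD, rank)
      call MPI_Comm_size(MPI_COMM_WORLD, nprocs)
      partial = real(rank + 1, 8)              ! each process owns one value
      ! combine the partial values by sending messages to process 0
      call MPI_Reduce(partial, total, 1, MPI_DOUBLE_PRECISION, &
                      MPI_SUM, 0, MPI_COMM_WORLD)
      if (rank == 0) print *, "sum over", nprocs, "processes:", total
      call MPI_Finalize()
    end program mpi_sum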

Coarray

Coarray Fortran was added to the language in the Fortran 2008 standard. Like MPI, coarrays support both distributed and shared memory systems. Coarrays follow the single program multiple data (SPMD) model: a programme creates multiple copies of itself (images), each running in parallel and independently, and data is again exchanged between these images. In contrast to MPI, programming with coarrays is more intuitive, as the interface is more strongly abstracted for the user and less “low-level” knowledge is required. Since MPI is implemented in C, data such as array slices often has to be copied across the Fortran/C interface. Coarrays, on the other hand, are implemented natively in Fortran and can therefore handle Fortran-specific data types more efficiently. Nevertheless, coarrays are still relatively new and not as mature as MPI.
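A minimal coarray sketch (compilation requires a coarray option, e.g. gfortran’s -fcoarray, whose exact form depends on the compiler): each image stores one value in a coarray, and image 1 reads the values of all images directly via the square-bracket syntax.

    program coarray_sum
      implicit none
      real(8) :: partial[*]            ! one copy of "partial" per image
      real(8) :: total
      integer :: img
      partial = real(this_image(), 8)
      sync all                         ! make all images' values visible
      if (this_image() == 1) then
        total = 0.0d0
        do img = 1, num_images()
          total = total + partial[img] ! remote read from image "img"
        end do
        print *, "sum over", num_images(), "images:", total
      end if
    end program coarray_sum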

Unfortunately, Fortran has no “threading” module comparable to the C++ standard thread library, which was added to C++ with the C++11 standard.

    Author

    DI Dr. Christoph Hofer

    Professional Software Engineer, Unit Industrial Software Applications