
Technical news, white papers, tips & hints and other news from NAG

NAGnews 151

In this issue:

  • Coarray Fortran Correctness Checking added to the latest NAG Fortran Compiler
  • NAG provides improvements to a convolution gridding algorithm for the Square Kilometre Array Telescope
  • New 'Optimization Corner' technical blog series kicks off
  • Algorithm Spotlight Mark 26.1: Interior Point Method for Large Scale Linear Programming Problems
  • Blog Bites
  • Out & About with NAG


Coarray Fortran Correctness Checking added to the latest NAG Fortran Compiler


The latest release of the NAG Fortran Compiler was announced at SC17, Denver, last week. Release 6.2 includes more Fortran 2008 features (partial coverage of the standard) together with support for, and correctness checking of, coarray syntax and semantics. Coarray Fortran is a parallel processing feature added to the Fortran language to aid efficient parallel programming; it scales from a single core, through multiple CPUs, to clusters. The latest NAG Fortran Compiler release also provides Fortran 2003 support (the complete standard, including user-defined derived-type I/O), Fortran 95 and OpenMP 3.1.
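For readers new to coarrays, here is a minimal sketch of ours (not taken from the compiler documentation): each image - a parallel copy of the program - holds its own copy of a scalar coarray, and image 1 reads the other images' values after a synchronization.

    program coarray_sum
        implicit none
        integer :: x[*]                  ! scalar coarray: one copy per image
        integer :: i, total
        x = this_image()                 ! each image stores its own index
        sync all                         ! make every image's value visible
        if (this_image() == 1) then
            total = 0
            do i = 1, num_images()
                total = total + x[i]     ! remote read from image i
            end do
            print *, 'Sum of image indices:', total
        end if
    end program coarray_sum

The same source runs whether it is launched on one image or many, and cross-image accesses and synchronizations like these are exactly the kind of construct the new correctness checking is aimed at.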

All platforms include further tools for software development:

  • Fortran pretty printer
  • Dependency generator for module and include files
  • Call-graph generator
  • Interface builder
  • Precision unifier

The NAG Fortran Compiler is available on Linux, Microsoft Windows and Mac OS X platforms. For users preferring an Integrated Development Environment (IDE) on Microsoft Windows or Apple Mac, we include the NAG Fortran Builder.

If you are an existing supported user of the NAG Fortran Compiler you can access the latest features by upgrading your software. If you have any questions about this, please don’t hesitate to contact us. Alternatively, you can try the Compiler for 30 days with a full product trial - apply here.

 


NAG provides improvements to a convolution gridding algorithm for the Square Kilometre Array Telescope


NAG was recently asked by the Scientific Computing Group at the University of Oxford's prestigious e-Research Centre to investigate methods for improving the performance of a convolution gridding algorithm used in radio astronomy to process fringe visibilities, targeting the Intel Knights Landing (Xeon Phi) processor and the NVIDIA P100 GPU. During the investigation, NAG experts used simulated Square Kilometre Array (SKA) data to observe how the most effective algorithm enhancements differed with the choice of hardware.

Although the SKA radio telescope is not due to begin collecting data until 2020, work is already underway to design and implement the software needed to process the vast amounts of data that the project will produce, hence NAG being asked to investigate the algorithm.
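For readers unfamiliar with the algorithm, the heart of convolution gridding looks roughly like the sketch below (our illustration, with bounds checks, kernel oversampling and normalization omitted): each visibility sample is weighted by a small convolution kernel and scattered onto nearby cells of a regular grid. These overlapping scattered updates are what make the loop hard to parallelize efficiently on hardware such as KNL and GPUs.

    subroutine grid_visibilities(grid, ngrid, u, v, vis, kernel, ksize)
        ! Illustrative sketch only: accumulate each visibility onto the
        ! grid through a ksize x ksize convolution kernel.
        integer, intent(in)    :: ngrid, ksize
        complex, intent(inout) :: grid(ngrid, ngrid)
        real, intent(in)       :: u(:), v(:)           ! coordinates in grid units
        complex, intent(in)    :: vis(:)               ! visibility samples
        real, intent(in)       :: kernel(ksize, ksize)
        integer :: s, i, j, iu, iv
        do s = 1, size(vis)
            iu = nint(u(s))
            iv = nint(v(s))
            do j = 1, ksize
                do i = 1, ksize
                    ! Overlapping updates: different visibilities may hit the
                    ! same grid cell, which complicates parallelization.
                    grid(iu+i-ksize/2, iv+j-ksize/2) = &
                        grid(iu+i-ksize/2, iv+j-ksize/2) + kernel(i,j)*vis(s)
                end do
            end do
        end do
    end subroutine grid_visibilities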

Click here to see some of the study findings.


[Image: technical poster 'Parallel Convolution Gridding for Radio Astronomy Applications Running on KNL and GPU']

 


New 'Optimization Corner' technical blog series kicks off with 'The Price of Derivatives - Using Finite Differences'


Derivatives play an important role across the whole field of nonlinear optimization, as the majority of algorithms require derivative information in one form or another. This post describes several ways to compute derivatives and focuses on the well-known finite difference approximation in detail.

Throughout the text we assume that the objective function f(x) is sufficiently smooth and is minimized without constraints.

Why are derivatives so important?

Recall how a typical nonlinear optimization solver works: it improves its estimate of the solution gradually, step by step. At each iteration the solver has to decide where to place the new estimate, yet it has only very local information about the landscape of the function - the current function value and the derivatives. The derivatives express the slope of the function at a point, so it is natural to use them, for example, to define a descent search direction along which the solver looks for a new (better) point. In addition, if the derivatives are close to zero the function is locally flat, and they signal (together with other pieces of information) that a local solution has been reached. It is easy to imagine that mistakes in the derivatives might mislead the solver, which then fails. It is therefore crucial to have a reliable way to provide derivatives whenever possible.
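As a concrete illustration (a minimal sketch of ours, not code from the blog post), the forward difference f'(x) ≈ (f(x+h) - f(x))/h approximates one component of the gradient at the cost of a single extra function evaluation. The step h must balance truncation error against round-off; the square root of the machine precision, scaled by |x_i|, is a common heuristic.

    module fd_mod
        implicit none
        integer, parameter :: wp = kind(1.0d0)
    contains
        subroutine fd_gradient(f, x, g)
            ! Forward-difference approximation of the gradient of f at x.
            interface
                function f(x) result(fx)
                    import :: wp
                    real(wp), intent(in) :: x(:)
                    real(wp) :: fx
                end function f
            end interface
            real(wp), intent(in)  :: x(:)
            real(wp), intent(out) :: g(size(x))
            real(wp) :: xh(size(x)), h, fx
            integer :: i
            fx = f(x)                    ! base value, reused for every component
            do i = 1, size(x)
                h = sqrt(epsilon(1.0_wp)) * max(abs(x(i)), 1.0_wp)
                xh = x
                xh(i) = xh(i) + h
                g(i) = (f(xh) - fx) / h  ! forward difference, O(h) accurate
            end do
        end subroutine fd_gradient
    end module fd_mod

Note that each gradient costs n extra function evaluations and offers only O(h) accuracy, which is one reason exact derivatives are preferred whenever they are available.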

Read the full blog post here.

Learn more about 'The Optimization Corner' series.

 


Algorithm Spotlight Mark 26.1: Interior Point Method for Large Scale Linear Programming Problems


In the last issue of NAGnews we announced the latest NAG Library additions at Mark 26.1. Today we are focussing on the new Interior Point Method for Large Scale Linear Programming Problems.

The new solver is based on an Interior Point Method (IPM), a viable alternative to simplex/active-set methods. Active research over the last 20 years has led to the development of extremely efficient and reliable IPMs. The main characteristic of this type of solver, compared to active-set methods, is fast convergence in a small number of iterations, each of which carries a high computational cost. In practice, one interior point iteration consists mainly of factorizing one big sparse linear system.
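To make this concrete, in the textbook primal-dual setting (standard IPM background, not a description of NAG's internal implementation) the LP

    \min_x \; c^{\mathsf T} x \quad \text{subject to} \quad Ax = b, \; x \ge 0

is solved by applying Newton's method to the perturbed optimality conditions

    Ax = b, \qquad A^{\mathsf T} y + s = c, \qquad x_i s_i = \mu, \quad x, s \ge 0,

while driving \mu \to 0. Eliminating variables reduces each Newton step to the normal equations

    (A \Theta A^{\mathsf T})\, \Delta y = r, \qquad \Theta = \operatorname{diag}(x_i / s_i),

and factorizing the big sparse matrix A \Theta A^{\mathsf T} is the dominant cost per iteration mentioned above.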

The new solver is built upon a very efficient sparse linear algebra package and implements two variants of the interior point method: Primal-Dual and Self-Dual. The Primal-Dual method usually offers the fastest convergence and is the default choice for the solver. However, the Self-Dual method presents several attractive features:

  • all convergence measures decrease at the same rate;
  • very reliable infeasibility detection;
  • generally better behaviour on pathological problems.

Both methods should deliver significant improvements on large scale problems over the current LP/QP solvers in the Library.

Learn more about the new functionality here.


Blog Bites


Modernising an old Fortran program

One of the delights of turning 70 is that skills learned many years ago are back in demand. Not too many people these days learn about 'ALTERNATE RETURNS' and 'COMMON BLOCKS', even if they are taught Fortran. However, in order to 'modernise' an old program one has to know exactly what these constructs did.

Recently I agreed to help a friend with a program he would like to improve. Read more
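As a flavour of what such modernisation involves (an illustrative sketch of ours, not code from the blog post), an alternate return lets a FORTRAN 77 subroutine jump straight to a labelled statement in its caller; the modern replacement returns a status code that the caller tests explicitly.

    C     Old fixed-form style: the '*' dummy argument is an alternate return,
    C     and RETURN 1 jumps to the caller's first '*' label.
          SUBROUTINE CHECK(N, *)
          INTEGER N
          IF (N .LT. 0) RETURN 1
          END

          PROGRAM OLD
          INTEGER N
          N = -1
          CALL CHECK(N, *100)
          PRINT *, 'N IS OK'
          STOP
      100 PRINT *, 'N IS NEGATIVE'
          END

    ! Modern replacement: an explicit status argument and ordinary control flow.
    program new
        implicit none
        integer :: n, status
        n = -1
        call check(n, status)
        if (status /= 0) then
            print *, 'n is negative'
        else
            print *, 'n is ok'
        end if
    contains
        subroutine check(n, status)
            integer, intent(in)  :: n
            integer, intent(out) :: status   ! 0 = ok, 1 = negative
            status = merge(1, 0, n < 0)
        end subroutine check
    end program new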

Finding a Competitive Edge with High Performance Computing

High Performance Computing (HPC), or supercomputing, is a critical enabling capability for many industries, including energy, aerospace, automotive, manufacturing, and more. However, one of the most important aspects of HPC is that it is not only an enabler, it is often also a differentiator - a fundamental means of gaining a competitive advantage. Read more


Out & About with NAG


Exhibitions, Conferences, Trade Shows & Webinars

Webinar: Leverage multi-core performance using Intel® Threading Building Blocks (Intel® TBB)

16, 17, 18 January 2018

This series of 2-hour theory and practical webinars, delivered over 3 days, will introduce Intel's Threading Building Blocks (TBB). Attendees need no prior knowledge of TBB and only a rudimentary understanding of parallel programming. On completing the series, participants will know what TBB is, how it enables parallel programming, what differentiates it from other parallel programming models, and how to use common parallel programming patterns to parallelize their own code. Register here.