
Technical news, white papers, tips & hints and other news from NAG

Quant Finance News - December 2019

In this issue:

- Super-charged Fixed Point Iterations using Anderson Acceleration and the NAG Library
- Better (and faster) Portfolio Optimization - how? New NAG SOCP Solver
- How to use dco/c++ and the NAG AD Library to compute adjoints of a non-trivial PDE solver
- Introducing the NAG Library for C++
- Latest Nearest Correlation Matrix Routines in the NAG Library
- CVA in the Cloud - Solve Large Scale CVA Computations
- Impressive results for Scotiabank using NAG Library, AD Tools, Origami and Azure for XVA
- Extending Stan's Automatic Differentiation (AD) capabilities using dco/c++


Super-charged Fixed Point Iterations using Anderson Acceleration and the NAG Library


Fixed Point Iterations appear in many areas of science, finance, engineering and applied mathematics. Their general form is

$$x_{n+1} = f(x_n)$$

which is repeated until a convergence criterion is satisfied. One problem with such techniques is that convergence can be slow, which limits their usefulness.

In a 2015 paper, NAG collaborator Nick Higham and Nataša Strabić applied a technique called Anderson Acceleration to Higham's alternating-projections algorithm for computing Nearest Correlation Matrices (NCMs), resulting in much faster convergence.

The Anderson Accelerated NCM routine was included in Mark 27 of the NAG Library with the function name nag_correg_corrmat_fixed. While implementing it, the NAG team decided to make general Anderson Acceleration methods available to users of the NAG Library: sys_func_aa and sys_func_aa_rcomm, which can be applied to any fixed point problem.

In a recently published Jupyter Notebook, we demonstrate the use of these routines from Python. We first show how the convergence of the simple fixed point iteration $x_{n+1} = \cos(x_n)$ can be improved from requiring 88 iterations to only 10 using Anderson Acceleration, a speed-up of almost a factor of 9.
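
If you would like a feel for how the technique works before opening the notebook, here is a minimal NumPy sketch of Anderson Acceleration, following Walker and Ni (2011). This is an illustration only, not the NAG implementation: the function name and defaults below are our own, and the library routines should be preferred in practice.

```python
import numpy as np

def fixed_point_anderson(f, x0, m=2, tol=1e-10, maxit=100):
    """Anderson-accelerated fixed point iteration for x = f(x).

    Minimal textbook sketch after Walker & Ni (2011); the NAG routines
    sys_func_aa / sys_func_aa_rcomm are the robust, supported versions.
    """
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    fx = np.atleast_1d(f(x))
    hist_f, hist_g = [fx], [fx - x]        # f-values and residuals g = f(x) - x
    for k in range(maxit):
        g = hist_g[-1]
        if np.linalg.norm(g) < tol:
            return x, k
        mk = len(hist_g) - 1               # number of usable differences
        if mk == 0:
            x = fx                         # plain fixed point step to start
        else:
            # Least-squares mixing over the last mk residual differences
            dG = np.column_stack([hist_g[i + 1] - hist_g[i] for i in range(mk)])
            dF = np.column_stack([hist_f[i + 1] - hist_f[i] for i in range(mk)])
            gamma, *_ = np.linalg.lstsq(dG, g, rcond=None)
            x = fx - dF @ gamma
        fx = np.atleast_1d(f(x))
        hist_f.append(fx)
        hist_g.append(fx - x)
        hist_f, hist_g = hist_f[-(m + 1):], hist_g[-(m + 1):]
    return x, maxit

# x = cos(x): the notebook's example; plain iteration needs ~88 steps
root, its = fixed_point_anderson(np.cos, 1.0)
print(root, its)   # converges to 0.739085... in a handful of iterations
```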

We go on to show how Anderson Acceleration can be applied to three well-known methods for solving Poisson's equation in two dimensions. For example, in the Jacobi Method, a fixed point iteration of the form

$$u^{n+1}_{j,i} = \frac{1}{4} \left(u^{n}_{j+1,i} +u^{n}_{j-1,i} +u^{n}_{j,i+1} +u^{n}_{j,i-1} \right)+\frac{h^2}{4}f_{j,i}$$

is applied to every element of a $100 \times 100$ solution grid per iteration. Without Anderson Acceleration, the Jacobi method took 28734 iterations to converge on an example problem. With Anderson Acceleration, it converged in only 313 iterations -- a speed-up of over 90x.
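
For reference, a single Jacobi sweep for the discretization above is a few lines of vectorized NumPy (our own sketch; the grid spacing h and source term f below are hypothetical). Flattening the grid turns the sweep into exactly the kind of fixed point map that sys_func_aa accepts.

```python
import numpy as np

def jacobi_sweep(u, f, h):
    """One Jacobi update for the 2-D Poisson equation -lap(u) = f,
    with the Dirichlet boundary values held fixed (illustrative sketch)."""
    v = u.copy()
    v[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1]
                            + u[1:-1, 2:] + u[1:-1, :-2]
                            + h**2 * f[1:-1, 1:-1])
    return v

h = 1.0 / 99
u = np.zeros((100, 100))    # initial guess with zero boundary values
f = np.ones((100, 100))     # hypothetical source term
u = jacobi_sweep(u, f, h)   # repeat until converged
```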

[Figure: jacobi.png -- convergence of the Jacobi method with and without Anderson Acceleration]

Head over to GitHub to access the notebook and let us know if you apply Anderson Acceleration to one of your own problems.

 


Better (and faster) Portfolio Optimization - how? New NAG SOCP Solver


A host of new mathematical optimization algorithms have been introduced into the NAG Library, providing quants with the means for better and faster portfolio optimization.

How?

The new Second-order Cone Programming (SOCP) solver has been fully tested on multiple benchmark datasets and offers excellent performance, stability, efficiency and accuracy.

WEBINAR: "Modern modelling techniques in Convex Optimization and its applicability to finance and beyond" - REGISTER

Classical portfolio optimization models for active portfolio management can be enhanced by including a tracking-error volatility (TEV) constraint. However, this is a quadratic constraint, which traditional quadratic programming solvers (they accept only linear constraints) cannot handle. The SOCP solver is preferred because any convex quadratically constrained quadratic program (QCQP) can be reformulated as an SOCP and solved efficiently.

[Figure: Efficient Frontier with Tracking-Error Volatility (TEV) constraint]

Note that without the absolute risk in the objective, the problem reduces to excess-return optimization. However, Roll (1992) noted that this classical model leads to the unpalatable result that the active portfolio has systematically higher risk than the benchmark and is not optimal. By taking the absolute risk into account, the QCQP model (solved as an SOCP) improves the performance of the active portfolio.
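
For concreteness, the TEV-constrained model takes roughly the following form (the notation here is ours; see Roll (1992) for the original development):

$$\begin{aligned} \max_{x} \quad & \mu^T x - \lambda\, x^T \Sigma x \\ \text{s.t.} \quad & (x - b)^T \Sigma (x - b) \le t_{\mathrm{TEV}}^2, \\ & e^T x = 1, \end{aligned}$$

where $x$ are the portfolio weights, $b$ is the benchmark portfolio, $\Sigma$ the covariance matrix, $\mu$ the expected returns, $\lambda \ge 0$ the aversion to absolute risk, and $t_{\mathrm{TEV}}$ the tracking-error budget. Setting $\lambda = 0$ recovers pure excess-return optimization; both quadratic terms are convex, so the problem is a convex QCQP.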

The NAG SOCP solver was used to solve the above model. Each efficient frontier in the figure was generated by solving 2000 SOCPs. The whole process took around 4 minutes on an ordinary laptop (less than 0.02 s per solve).
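
To show how compact the model is to express, here is a sketch using the open-source CVXPY modelling package with synthetic data (purely illustrative; the NAG SOCP solver addresses the same class of problems):

```python
import cvxpy as cp
import numpy as np

# Synthetic data purely for illustration: 5 assets
rng = np.random.default_rng(0)
n = 5
a = rng.standard_normal((n, n))
sigma = a @ a.T / n + 1e-6 * np.eye(n)      # positive definite covariance
mu = 0.05 + 0.02 * rng.standard_normal(n)   # expected returns
b = np.full(n, 1.0 / n)                     # equally weighted benchmark
lam, t_ev = 1.0, 0.05                       # risk aversion, TEV budget

x = cp.Variable(n)
objective = cp.Maximize(mu @ x - lam * cp.quad_form(x, sigma))
constraints = [cp.quad_form(x - b, sigma) <= t_ev**2,   # TEV constraint
               cp.sum(x) == 1]
cp.Problem(objective, constraints).solve()
print(x.value)
```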

Webinar: 6 February 2020 - "Modern modelling techniques in Convex Optimization and its applicability to finance and beyond". The webinar will introduce the background of SOCP and QCQP, and review basic and more advanced modelling techniques. These techniques will be demonstrated with real-world examples in Portfolio Optimization. Register here: https://register.gotowebinar.com/register/5588930156736514306

There's also a stack of resources on the SOCP solver (and the other new solvers) on our website:

- Second Order Cone Programming (SOCP): What is it? | Tech Poster | GitHub Examples
- Derivative-free Optimization Solver: What is it? | Tech Poster
- Nearest Correlation Matrix: What is it? | Tech Poster | GitHub Examples
- Numerical Linear Algebra: Tech Poster

Free trials of the new NAG Library are available.

 


How to use dco/c++ and the NAG AD Library to compute adjoints of a non-trivial PDE solver


A new technical report demonstrates how to use dco/c++ and the NAG AD Library to compute adjoints of a non-trivial PDE (partial differential equation) solver. The report shows how dco/c++ can couple hand-written symbolic adjoint code with an overall algorithmic solution, and demonstrates the easy-to-use interface when dco/c++ is coupled with the NAG AD Library: the sparse linear solver (f11jc) can be switched from algorithmic to symbolic mode in a single line of code. The report presents the primal solver, the adjoint solver, and the corresponding run time and memory results. An optimization algorithm using steepest descent is also run on a test case to show a potential use of the computed adjoints.

Read the report.

 


Introducing the NAG Library for C++


Earlier this year we began developing a set of C++11 interfaces for the NAG Library (read the full story in the blogs: Part 1 and Part 2), and development work has since moved on apace. We are now releasing interfaces for a small subset of NAG Library routines in C++11 form. These interfaces are automatically generated, and we're releasing them so that we can gather feedback on their design before rolling the process out across the whole library.

Please note: to make use of this software you will need a copy of the NAG Library, Mark 27, installed on your system. For installation or user help contact Technical Support.

 


Latest Nearest Correlation Matrix Routines in the NAG Library


The NAG Library has recently undergone a major update with many new routines introduced. Added to the Correlation and Regression Analysis Chapter are two new Nearest Correlation Matrix (NCM) routines, among them the Anderson Accelerated routine described above.
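
As background, the alternating-projections idea that underlies this area can be sketched in a few lines of NumPy, following Higham (2002). This is an illustration only; the NAG routines build on it with Newton methods, weighting, fixed elements and Anderson Acceleration:

```python
import numpy as np

def nearest_corr(g, tol=1e-8, maxit=500):
    """Nearest correlation matrix via Higham's alternating projections (2002).

    Plain NumPy illustration with Dykstra's correction; the NAG NCM
    routines are the robust, supported implementations.
    """
    y = np.asarray(g, dtype=float).copy()
    ds = np.zeros_like(y)                  # Dykstra's correction term
    for _ in range(maxit):
        r = y - ds
        w, v = np.linalg.eigh((r + r.T) / 2)
        x = (v * np.maximum(w, 0)) @ v.T   # project onto the PSD cone
        ds = x - r
        y_prev = y
        y = x.copy()
        np.fill_diagonal(y, 1.0)           # project onto unit-diagonal matrices
        if np.linalg.norm(y - y_prev) < tol:
            return y
    return y

# Example: a symmetric matrix that is not a valid correlation matrix
g = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])
print(nearest_corr(g))
```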

We have lots of material on the NCM. If you would like to learn more, check out the NCM resources (What is it?, Tech Poster and GitHub Examples) listed above.

We recommend that you upgrade to the latest NAG Library to take advantage of the additional functionality and guaranteed support levels (all supported clients are guaranteed technical assistance on the current release and one previous). If you have any questions about this release do contact your Account Manager or the NAG Technical Support Service.

Free trials of the new NAG Library are available.

 


CVA in the Cloud - Solve Large Scale CVA Computations


Computing CVA and sensitivities using Origami and AD offers significant performance benefits compared with using finite differences and legacy grid execution software, which might take many hours.

NAG has developed, in collaboration with Xi-FINTIQ, a CVA demonstration code to show how the NAG Library and NAG Algorithmic Differentiation (AD) tool dco/c++ combined with Origami - a Grid/Cloud Task Execution Framework available through NAG - can work together to solve large scale CVA computations.

CVA Demonstrator

Origami is a lightweight task execution framework. Users combine tasks into a task graph that Origami can execute on an ad-hoc cluster of workstations, on a dedicated in-house grid, on a production cloud, or on a hybrid of all three. Origami handles all data transfers.


In our CVA demonstrator, the trades in netting sets are valued in batches. CVA is calculated per netting set by running the code forward as normal. The graph is then reversed and the dco/c++ adjoint version of each task is run to calculate sensitivities with respect to market instruments. The resulting graph has a large number of tasks with non-trivial dependencies, which Origami automatically processes and executes.
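
Conceptually, the forward and adjoint passes amount to walking the task graph in topological order and then in reverse. Here is a toy Python sketch with hypothetical task names; Origami's actual API is not shown:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical dependencies: valuation batches feed a per-netting-set CVA task
deps = {"cva_ns1": {"batch_a", "batch_b"}, "batch_a": set(), "batch_b": set()}

order = list(TopologicalSorter(deps).static_order())
for task in order:                 # forward (primal) pass: value trades, then CVA
    print("primal:", task)
for task in reversed(order):       # reverse pass: run the dco/c++ adjoint of each task
    print("adjoint:", task)
```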

Read the full article here.

 


Impressive results for Scotiabank using NAG Library, AD Tools, Origami and Azure for XVA


In a recent article, Anthony Malakian, Editor at Large at WatersTechnology, explores how a new technology stack has enabled significant speed-ups for Scotiabank in calculating its valuation adjustments.

Using a technology stack of cloud GPUs provided by Microsoft Azure; the NAG Algorithmic Differentiation software tools dco/c++ and dco/map and the NAG AD Library; and Origami, a Grid/Cloud Task Execution Framework provided by Xi-FINTIQ and NAG in partnership, Scotiabank achieved an impressive 30x runtime speed-up for its risk calculations and derivatives pricing. In the words of Anthony Malakian, this "allows brokers to deliver more accurate derivatives pricing in 20 seconds, which would previously have taken 10 minutes".

To read the article in full click here. For more information on our solutions for XVA contact us at info@nag.com.

 


Extending Stan's Automatic Differentiation (AD) capabilities using dco/c++


Tape-based AD libraries, such as NAG's dco/c++ tool, keep a record (a "tape") of the calculations executed by a program, which is then interpreted in reverse to evaluate derivatives. They apply to a wider range of numerical codes than tape-free AD libraries, which are typically written to compute derivatives for a specific library of functions. The Stan Math Library is a tape-free AD library.
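
To make the distinction concrete, here is a toy tape-based reverse-mode AD in Python (illustration only; the class below is our own invention, and dco/c++ records and interprets its tape far more efficiently):

```python
import math

TAPE = []

class Var:
    """A toy AD variable: every operation appends a node to the global tape."""
    def __init__(self, value, parents=()):
        self.value, self.parents, self.adj = value, parents, 0.0
        TAPE.append(self)

    def __mul__(self, other):
        # d(uv)/du = v, d(uv)/dv = u
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

def sin(x):
    return Var(math.sin(x.value), [(x, math.cos(x.value))])

x = Var(2.0)
y = sin(x) * x                    # recorded as three tape nodes
y.adj = 1.0                       # seed the output adjoint
for node in reversed(TAPE):       # interpret the tape backwards
    for parent, local_grad in node.parents:
        parent.adj += node.adj * local_grad
print(x.adj)                      # sin(2) + 2*cos(2) = d/dx [x*sin(x)]
```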

Philip Maybank, NAG Numerical Software Developer, recently presented at StanCon 2019. His presentation slides 'Extending Stan's Automatic Differentiation (AD) capabilities using dco/c++' can be viewed here.

The basic idea of the work in the presentation is that dco/c++ can be used to supply derivatives to Stan, extending the range of functions that can be used by Stan's MCMC samplers. Philip illustrated the idea on a toy problem: inferring the parameters of a damped harmonic oscillator driven by white noise, using Stan's NUTS sampler.