Adjoint Algorithmic Differentiation: What is it and why use it?
Suppose that running your C++ pricing routine on the available computer(s) takes 1 minute. You might be interested in sensitivities of the computed price with respect to a (potentially large, e.g., due to term structure) number n of uncertain parameters, e.g., market volatilities. Let n=180. Sequential finite difference approximation requires at least n+1 pricing calculations, yielding a total run time of at least 3 hours. The quality of this approximation may turn out to be unsatisfactory due to truncation and catastrophic cancellation in finite-precision floating-point arithmetic. Careful application of NAG's AAD software dco/c++ (derivative code by overloading in C++; see (3)) can be expected to reduce the overall run time to less than 20 (often less than 10) minutes, while producing truncation-free gradient information with machine accuracy.
The following example shows that even better run time behaviour can be observed in certain practically relevant applications. In (2) we applied dco/c++ to an in-house implementation of the Longstaff-Schwartz algorithm for American option pricing. The gradient of the option price with respect to five parameters (stock price, strike, interest rate, time to maturity, volatility) was computed with machine accuracy at the expense of roughly one evaluation of the pricer. Central finite difference approximation took more than six times as long, as illustrated by the following run time measurements (in seconds, for sequential evaluation on a standard PC):
| Number of Monte Carlo Paths | Pricer (365 time steps) | Central Finite Differences | dco/c++ Adjoint |
|---|---|---|---|
Why Software Tool Support?
AAD can be implemented manually; that is, given an implementation of an arbitrary pricing routine, AAD experts may be able to write a corresponding adjoint version whose run time is likely to undercut that of a tool-based AAD solution. This process can be tedious, error-prone, and extremely hard to debug. More importantly, it does not meet basic requirements of modern software engineering such as sustainability and maintainability. Each modification of the original pricer implies the need for corresponding modifications to the adjoint. Keeping both codes consistent over time becomes challenging. NAG's dco/c++ was designed for use with real-world large-scale C++ codes (including financial ones), as demonstrated by numerous successful applications.
The above statements gain further relevance in the context of second-order AAD. dco/c++ supports adjoints of arbitrary order through recursive template instantiation combined with C++ overloading and metaprogramming techniques. It provides a powerful and flexible user interface to its cache-optimized internal representation. Supported features include checkpointing methods for evolutions (e.g., finite difference methods for Partial Differential Equations) and ensembles (e.g., Monte Carlo methods for Stochastic Differential Equations), user-defined adjoints of implicit functions (e.g., solvers for systems of linear and nonlinear equations, quadrature, and calibration routines), and hybrid AAD combining manually derived adjoints of parts of the computation with an overall tool-based adjoint (e.g., integration of GPUs, see (5), or smoothing techniques).
Moreover, a growing set of NAG Library routines is accepted by dco/c++ as intrinsics. Linking the NAG AD Library provides access to the corresponding adjoint versions.
Integration of AAD into a large real-world software stack is not trivial. A good understanding of both the theoretical background and of typical use cases of dco/c++ is essential. NAG runs both public and dedicated in-house courses, including a limited-time trial licence for the software. In-house courses allow prospective users to see dco/c++ applied to their own sample code. Two days of intense training will give you a thorough insight into the theory of Adjoint Algorithmic Differentiation and into its implementation with dco/c++, as well as an understanding of how this technology is applicable to your code base.
1. NAG & Algorithmic Differentiation
2. J. Deussen: Adjoint Methods for American Option Pricing. MSc Thesis, STCE, Dept. of Computer Science, RWTH Aachen, 2015.
3. U. Naumann and J. du Toit: Adjoint Algorithmic Differentiation Tool Support for Typical Numerical Patterns in Computational Finance. NAG Technical Report TR3/14, 2014.
4. U. Naumann: The Art of Differentiating Computer Programs. An Introduction to Algorithmic Differentiation. Number 24 in Software, Environments, and Tools, SIAM, 2012.
5. J. du Toit, J. Lotz, and U. Naumann: Adjoint Algorithmic Differentiation of a GPU Accelerated Application. NAG Technical Report TR2/14, 2014.