
The NAG Fortran Library receives a major update including new optimizers

Press release   •   May 01, 2013 10:19 BST

1 May 2013 – The Numerical Algorithms Group (NAG) announces new numerical functionality added to its numerical library for Fortran. The new functionality at Mark 24 of the NAG Fortran Library brings the number of available routines to over 1,700, all of them expertly documented. Mark 24 includes extensions in the areas of multivariate methods, optimization, wavelet transforms, time series analysis, random number generators, special functions, correlation and regression analysis, eigenvalues and eigenvectors, and operational research.

The new NAG Library contains routines added in response to customer requests, together with further enhancements contributed by NAG’s expert developers and collaborators.

The inherent flexibility of the mathematical and statistical routines in the NAG Fortran Library enables it to be used across multiple programming languages, environments and operating systems, including Excel, Java, Microsoft .NET, Visual Basic and many more.

New NAG Fortran Library mathematical and statistical functionality:

Multi-start (global) optimization – The first of two entirely new routines added to the Library is for nonlinear programming. It employs a sequential QP algorithm to find, from a number of different starting points, the minimum of a general nonlinear function subject to linear, nonlinear and simple bound constraints. The more starting points the user specifies, the greater the confidence that the best of the local solutions found is a global solution. Optionally, the user can ask for the best few solutions, giving a ‘bigger picture’ and allowing the choice of a solution which, despite not being the very best, satisfies other desirable qualities not captured in the mathematical model.
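The multi-start idea can be sketched in a few lines. This is an illustration of the technique only, using SciPy’s SLSQP (a sequential QP method) rather than the NAG routine; the objective function is an invented example, not from the release.

```python
import numpy as np
from scipy.optimize import minimize

# A one-dimensional objective with several local minima
# (hypothetical example, not taken from the release)
def f(x):
    return np.sin(3.0 * x[0]) + (x[0] - 0.5) ** 2

# The more starting points, the greater the confidence that the
# best local solution found is in fact the global one
starts = np.linspace(-2.0, 2.0, 8)
results = [minimize(f, [s], method="SLSQP", bounds=[(-2.0, 2.0)])
           for s in starts]
results.sort(key=lambda r: r.fun)   # best local solutions first

best = results[0]         # candidate global minimum
top_three = results[:3]   # the 'best few' solutions, for a bigger picture
```

Sorting the local solutions and keeping the best few mirrors the routine’s option of returning several solutions rather than only the winner.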

The second multi-start routine is built on a nonlinearly constrained, nonlinear sum-of-squares local optimizer. This global optimizer can also return the best few solutions found, mirroring the advantages of the other, more general routine.

Non-negative least squares (local optimization) – In response to user demand, NAG has added a bounded-variable linear least squares solver to the local optimization chapter. Often the requirement is for the less general non-negative least squares problem, which this routine also addresses. It is designed for dense problems of moderate size, though no practical size limit is enforced by the routine itself. Optionally, the user may ask for a solution of minimal length, enforcing uniqueness whenever the matrix is not of full rank.
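The two problem classes mentioned here can be illustrated with SciPy’s open-source solvers (not the NAG routine); the small system below is invented for illustration.

```python
import numpy as np
from scipy.optimize import nnls, lsq_linear

# Small dense system (illustrative data, not from the release)
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])

# Non-negative least squares: minimize ||Ax - b|| subject to x >= 0
x_nn, resid = nnls(A, b)

# The more general bounded-variable form, here with 0 <= x <= 1.2
res = lsq_linear(A, b, bounds=(0.0, 1.2))
```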

Nearest Correlation Matrix – Adding to the existing nearest correlation matrix functionality in the NAG Library is the cutting-edge ‘individually weighted elements’ nearest correlation matrix routine. This routine allows the user to weight individual elements of their approximate correlation matrix. It can also force the computed correlation matrix to be strictly positive definite, which some applications require and which improves the conditioning of the matrix.
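To show the underlying idea, here is a simplified alternating-projections sketch in the spirit of Higham’s nearest-correlation-matrix method (the full algorithm, and NAG’s weighted variant, add refinements such as a Dykstra correction that are omitted here). The input matrix is an invented example.

```python
import numpy as np

def nearest_correlation(G, n_iter=200):
    """Alternately project onto the positive semidefinite cone and
    onto the set of matrices with unit diagonal (simplified sketch)."""
    Y = (G + G.T) / 2.0
    for _ in range(n_iter):
        w, V = np.linalg.eigh(Y)
        Y = (V * np.clip(w, 0.0, None)) @ V.T   # PSD projection
        np.fill_diagonal(Y, 1.0)                # unit-diagonal projection
    return (Y + Y.T) / 2.0

# An approximate correlation matrix that is not positive semidefinite
G = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0]])
C = nearest_correlation(G)
```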

Inhomogeneous Time Series – A suite of three new routines for processing inhomogeneous time series has been added to the time series analysis chapter. An inhomogeneous time series is one that has been sampled at irregular time intervals.
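A typical operation on such a series is an exponential moving average whose decay depends on the actual elapsed time between observations. The sketch below illustrates the idea with NumPy; it is not the NAG interface, and the sample data is invented.

```python
import numpy as np

def irregular_ema(t, x, tau):
    """Exponential moving average for an irregularly sampled series:
    the decay between observations depends on the actual time gap,
    not on a fixed sample count."""
    ema = np.empty(len(x))
    ema[0] = x[0]
    for i in range(1, len(x)):
        mu = np.exp(-(t[i] - t[i - 1]) / tau)
        ema[i] = mu * ema[i - 1] + (1.0 - mu) * x[i]
    return ema

t = np.array([0.0, 0.4, 2.1, 2.2, 5.0])   # irregular sample times
x = np.array([1.0, 3.0, 2.0, 5.0, 4.0])
smoothed = irregular_ema(t, x, tau=1.0)
```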

Gaussian Mixture Model – A new statistical clustering routine, requested by our market research customers, has been added. The Gaussian Mixture Model is a useful tool for summarizing groups in multivariate data.
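Gaussian mixtures are typically fitted by the EM algorithm. The following minimal one-dimensional, two-component EM loop in NumPy illustrates the technique on synthetic data; it is not the NAG routine, which handles the general multivariate case.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated 1-D clusters (synthetic data, not from the release)
data = np.concatenate([rng.normal(-3.0, 1.0, 200),
                       rng.normal(3.0, 1.0, 200)])

# Minimal EM fit of a two-component 1-D Gaussian mixture
mu = np.array([-1.0, 1.0])
sigma = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])
for _ in range(50):
    # E-step: responsibility of each component for each point
    dens = (pi * np.exp(-0.5 * ((data[:, None] - mu) / sigma) ** 2)
            / (sigma * np.sqrt(2.0 * np.pi)))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means and standard deviations
    nk = r.sum(axis=0)
    pi = nk / len(data)
    mu = (r * data[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((r * (data[:, None] - mu) ** 2).sum(axis=0) / nk)
```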

Confluent Hypergeometric Function (1F1) – Routines to evaluate the confluent hypergeometric function, which arises in many applications including option pricing, have been included. These routines are designed to provide high-accuracy solutions over a large range of input parameters. Furthermore, they may be used to determine scaled solutions when the value of the function is not explicitly representable.
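The same special function is available in SciPy, which is convenient for a quick check (the parameter values below are illustrative only, not from the release). A handy identity is that 1F1(a; a; x) collapses to e^x.

```python
import numpy as np
from scipy.special import hyp1f1

# Sanity check: 1F1(a; a; x) = e^x
val = hyp1f1(1.0, 1.0, 2.5)

# A more general evaluation with illustrative parameters
general = hyp1f1(0.5, 1.5, -2.0)
```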

Brownian Bridge & random fields – Routines for simulating a Brownian bridge and for generating realizations from a family of random fields have been added to the random number generators chapter.
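A Brownian bridge is Brownian motion conditioned to hit a fixed value at the end of the interval. A minimal NumPy simulation (not the NAG interface) pins a standard Brownian path to zero at both ends:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000
t = np.linspace(0.0, 1.0, n + 1)

# Standard Brownian motion on [0, 1]
dW = rng.normal(0.0, np.sqrt(1.0 / n), size=n)
W = np.concatenate([[0.0], np.cumsum(dW)])

# Pin both ends to zero: B(t) = W(t) - t * W(1) is a Brownian bridge
B = W - t * W[-1]
```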

Best subsets – A general-purpose branch and bound algorithm for selecting the best subset from a larger population of features has been added to the operations research chapter.
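The problem being solved can be illustrated with a brute-force scan over all subsets of a fixed size; a branch-and-bound method reaches the same answer while pruning most of the search tree. The data below is synthetic and the scoring function (least-squares residual) is one common choice, not necessarily NAG’s.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 6))                               # 6 candidate features
y = X[:, 1] + 2.0 * X[:, 4] + 0.01 * rng.normal(size=50)   # only 1 and 4 matter

def rss(cols):
    """Residual sum of squares of a least-squares fit on a feature subset."""
    _, res, *_ = np.linalg.lstsq(X[:, list(cols)], y, rcond=None)
    return res[0]

# Exhaustive scan over all 2-feature subsets
best = min(itertools.combinations(range(6), 2), key=rss)
```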

Real sparse eigenproblems – A new routine that computes selected eigenvalues and eigenvectors of a real sparse general matrix has been included. It combines flexible algorithms from the sparse linear algebra chapters under a simple interface. The routine has proven to be extremely efficient at solving user-supplied large sparse eigenproblems.
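The same class of problem can be demonstrated with SciPy’s ARPACK-based sparse eigensolver (an open-source analogue, not the NAG routine); the matrix below is an invented convection-diffusion-style stencil.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigs

# A real nonsymmetric sparse matrix (1-D convection-diffusion stencil,
# illustrative only)
n = 100
A = sp.diags([-1.2, 2.0, -0.8], offsets=[-1, 0, 1],
             shape=(n, n), format="csr")

# Four eigenvalues of largest magnitude and their eigenvectors
vals, vecs = eigs(A, k=4, which="LM")
```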

Matrix Functions – Further additions have been made in the area of matrix functions, resulting from NAG’s Knowledge Transfer Partnership with the University of Manchester. New routines compute the matrix logarithm, exponential, sine, cosine, sinh or cosh of real and complex matrices (Schur–Parlett algorithm), and general functions of real and complex matrices using either numerical differentiation or user-supplied derivatives.
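SciPy exposes comparable matrix functions, which makes the definitions easy to verify: the exponential undoes the logarithm, and scalar identities such as sin² + cos² = 1 carry over to matrix arguments. The matrix below is an invented general (nonsymmetric) example.

```python
import numpy as np
from scipy.linalg import expm, logm, sinm, cosm

# A general (nonsymmetric) real matrix with positive eigenvalues
A = np.array([[1.0, 0.5],
              [0.2, 1.5]])

L = logm(A)       # matrix logarithm
back = expm(L)    # the exponential undoes the logarithm

# sin^2 + cos^2 = I holds for matrix arguments too
I_check = sinm(A) @ sinm(A) + cosm(A) @ cosm(A)
```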

Two-stage spline approximation – Mark 24 features the first part of collaborative work with the University of Strathclyde. The new functionality sits in the curve and surface fitting chapter; it may be used to compute a spline approximation to a set of scattered data using a two-stage approximation method (shown in the image above).
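The task itself, fitting a smooth spline surface to scattered (x, y, z) data, can be sketched with SciPy’s smoothing bivariate spline; this is a different (one-stage) algorithm from NAG’s two-stage method and uses an invented test surface.

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

rng = np.random.default_rng(7)
# 200 scattered sample points of a smooth surface z = x^2 + y^2
x = rng.uniform(0.0, 1.0, 200)
y = rng.uniform(0.0, 1.0, 200)
z = x ** 2 + y ** 2

# Bicubic smoothing spline fitted to the scattered data
spline = SmoothBivariateSpline(x, y, z)

# Evaluate at (0.3, 0.7); the true surface value there is 0.58
approx = spline.ev(0.3, 0.7)
```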

Speaking of Mark 24, a Senior Developer at one of NAG’s partners said: “I was particularly pleased to see the addition of the Log Matrix and Exponential Matrix functions, and the fact that these solvers work with general matrices as well as symmetric ones is particularly useful for me. The further additions to the suite of Nearest Correlation Matrix functions, as well as new additions to the Eigenvalues Chapter, are of course very valuable too. I am also interested to experiment with the Real Confluent Hypergeometric function as this may have several uses in future projects.”

More benefits of the NAG Fortran Library:

·  Highly detailed documentation giving background information and function specifications. In addition, it guides users, via decision trees, to the right routine to solve their problem.

·  Expert Support Service direct from NAG’s algorithm development team – if users need help, NAG’s development team are on hand to offer assistance.

·  Hands-on Product Training – NAG offers a wide range of tailored training courses, including ‘hands-on’ practical sessions, either at our offices or in-house, helping users get the most out of their software.

For more information visit the website or contact us.

Image shows "Calculation and Evaluation of Bivariate Spline Fit from Scattered Data using Two-Stage Approximation" - NAG's Curve and Surface Fitting Chapter now includes a routine for computing a spline approximation to a set of scattered data.


The Numerical Algorithms Group (NAG) is dedicated to applying its unique expertise in numerical engineering to delivering high-quality computational software and high performance computing services. For over 40 years NAG experts have worked closely with world-leading researchers in academia and industry to create powerful, reliable and flexible software which today is relied on by tens of thousands of individual users, as well as numerous independent software vendors. NAG serves its customers from offices in Oxford, Manchester, Chicago, Tokyo and Taipei, through staff in France and Germany, as well as via a global network of distributors.
