JSoC'19: TraceEstimation.jl
Introduction
Hey everyone! It has been an amazing experience working on the TraceEstimation.jl package during my summer internship with Julia. This is my final article with the JSoC'19 tag. It aims to be a quick-start guide, plus an introductory analysis, for using the TraceEstimation.jl package.
Getting Started
To get started, we will first import the TraceEstimation package.
# We are gonna bring in a specific branch from the TraceEstimation github repo
using Pkg
Pkg.add(PackageSpec(url = "https://github.com/luca-aki/traceEstimation.jl", rev = "dapp"))
using TraceEstimation
Now, import other required packages.
# Ingredients
# Matrix Depot for some sample matrices
Pkg.add("MatrixDepot")
using LinearAlgebra, SparseArrays, MatrixDepot, TraceEstimation
Dense Matrix
Let us begin by generating two symmetric positive definite matrices for our use. Following that, we will do some test runs.
# A random matrix generated inside Julia
A = rand(4000, 4000); A = A + A' + 400I

# A Hilbert matrix generated using MatrixDepot
# "The Hilbert matrix has (i,j) element 1/(i+j-1). It is notorious for being
# ill conditioned. It is symmetric positive definite and totally positive."
B = matrixdepot("hilb", 4000) + 0.01I
println() # extra print statement to remove clutter in the notebook
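As a quick sanity check (an addition for illustration, not part of the original notebook), we can confirm that both constructions really are symmetric positive definite, since the diagonal shifts 400I and 0.01I push all eigenvalues above zero:

# Verify that A and B are symmetric positive definite
issymmetric(A) && isposdef(A)  # true
issymmetric(B) && isposdef(B)  # true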
Square Root, Cube Root, ..., N-th Root of a Matrix
We can use the Stochastic Lanczos Quadrature (SLQ) method as well as the Chebyshev interpolation (chebyhutch) method to calculate the n-th root of a matrix. This is used to calculate Schatten p-norms.
Both SLQ and chebyhutch perform really well while calculating these norms. However, SLQ gives much more accurate results without making any changes to the default parameters.
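As a side note, here is a minimal sketch of how a Schatten p-norm could be computed on top of slq. For a symmetric positive definite matrix the singular values equal the eigenvalues, so the norm reduces to (tr(A^p))^(1/p); the helper name schatten_p is my own and not part of the package:

# Hypothetical helper: Schatten p-norm of an SPD matrix via SLQ,
# using ||A||_p = (tr(A^p))^(1/p), since σᵢ(A) = λᵢ(A) for SPD A
schatten_p(A, p) = slq(A, fn = x -> x^p)^(1/p)

schatten_p(A, 1.5)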
Following is an example where we estimate tr(sqrt(A)) and tr(cbrt(B)) using SLQ.
# SQRT of Matrix A
# We simply pass the matrix and the function as parameters to slq
slq(A, fn = sqrt)
# CBRT of Matrix B
# Custom function
cuberoot(x) = x^(1/3)
# We can change the eps value for higher accuracy.
# Lower the eps value, higher the accuracy.
slq(B, skipverify = true, eps = 0.01, fn = cuberoot)
For reference, we can calculate the actual values.
tr(sqrt(A))
tr(B^(1/3))
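To quantify how close the estimates are (an extra check, not in the original notebook), we can compute the relative error of an SLQ estimate against the exact trace:

# Relative error of the SLQ estimate of tr(sqrt(A))
exact = tr(sqrt(A))
est = slq(A, fn = sqrt)
abs(est - exact) / abs(exact)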
For more information on how SLQ works, visit here.
For more information on how to use SLQ, simply type ?slq in the Julia REPL.
Log Determinant
The log-determinant is the trace of the logarithm of the given matrix: for a symmetric positive definite matrix A, logdet(A) = tr(log(A)).
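A quick sanity check of this identity on a small SPD matrix (illustrative only, using nothing beyond LinearAlgebra):

# logdet and the trace of the matrix logarithm agree
C = [2.0 1.0; 1.0 2.0]
logdet(C) ≈ tr(log(C))  # true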
Either SLQ or chebyhutch can be used to calculate the log-determinant. In this case, both of them produce sufficiently accurate results, so chebyhutch is preferred: it is a parallel algorithm and performs better.
# Calculate logdet of A
# chebyhutch takes two additional parameters:
# iteration number (m) = 4
# polynomial degree (n) = 6
chebyhutch(A, 4, 6, fn = log)
In his 2015 paper, I. Han showed that the polynomial degree required for the Chebyshev interpolation to converge grows with the condition number κ of the matrix, roughly on the order of O(√κ · log(1/ε)) for a target accuracy ε. Ill-conditioned matrices such as B therefore need a higher degree.
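A hypothetical back-of-the-envelope helper illustrating that scaling (constants are dropped, so treat the output as an order of magnitude, not as the package's rule for choosing n):

# Rough Chebyshev degree suggested by the O(√κ · log(1/ε)) scaling
suggest_degree(κ, ε = 0.01) = ceil(Int, sqrt(κ) * log(1 / ε))

# cond() is expensive for large dense matrices; shown for illustration only
suggest_degree(cond(A))  # well conditioned: a small degree suffices
suggest_degree(cond(B))  # ill conditioned: a much higher degree is needed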
# Calculate logdet of B
# We can increase the polynomial degree for better accuracy (n = 24)
# We can also increase the iteration number for better precision (m = 6)
chebyhutch(B, 6, 24, fn = log)
For reference, here are the original values.
logdet(A)
logdet(B)
For more information on how chebyhutch works, and for speed comparisons, visit here.
For more information on how to use chebyhutch, type ?chebyhutch in the Julia REPL.
Matrix Inverse
Since SLQ essentially works by estimating the Ritz values of the matrix, it is stable, and the Ritz values can be used to approximate the inverses of the eigenvalues of the given matrix with sufficient accuracy.
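To make the Ritz-value connection concrete, here is a minimal, self-contained sketch of a single Lanczos quadrature sample. It is an illustration of the idea only (no reorthogonalization, fixed number of steps), not the package's implementation:

# One SLQ sample: m Lanczos steps yield a tridiagonal T whose eigenvalues
# (the Ritz values θ) and the squared first components τ² of its eigenvectors
# form a quadrature rule, giving tr(f(A)) ≈ n * Σ τₖ² f(θₖ) on average.
using LinearAlgebra

function slq_sample(A, f, m)
    n = size(A, 1)
    v = normalize(randn(n))     # random starting vector
    vprev = zeros(n)
    α = zeros(m)
    β = zeros(m - 1)
    for j in 1:m
        w = A * v
        α[j] = dot(w, v)
        w .-= α[j] .* v
        j > 1 && (w .-= β[j - 1] .* vprev)
        if j < m
            β[j] = norm(w)
            vprev = v
            v = w ./ β[j]
        end
    end
    θ, Y = eigen(SymTridiagonal(α, β))
    return n * sum(Y[1, k]^2 * f(θ[k]) for k in eachindex(θ))
end

# Average a few samples, e.g. for the trace of the inverse of A
sum(slq_sample(A, inv, 50) for _ in 1:30) / 30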
# Inverse of Matrix A
slq(A, fn = inv)
It does suffer the same fate as the Chebyshev interpolation method, though: as the condition number increases, the number of Lanczos steps must also increase. This is done by decreasing the m tolerance value (mtol).
# Inverse of Matrix B
# Decreasing the mtol value (results in an increase in the number of Lanczos steps m)
slq(B, mtol = 0.001, fn = inv)
Following are the actual values for reference.
tr(inv(A))
tr(inv(B))
Sparse Matrix
Both of the above methods work identically for sparse matrices. SLQ benefits from a low number of non-zero entries: as shown in the 2016 paper by S. Ubaru, its running time is dominated by the matrix-vector products, scaling as O(nnz(A) · m · nᵥ), where nnz(A) is the number of non-zeros, m the number of Lanczos steps, and nᵥ the number of sample vectors.
# Generate two sparse matrices for our example
N = sprand(8000, 8000, 0.0006); N = N + N' + 80I
M = matrixdepot("poisson", 90) # 8100x8100
println()
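To see where the sparse speed-up comes from (a quick illustration, not part of the original notebook), we can check the number of stored non-zeros, which bounds the cost of each Lanczos matrix-vector product:

# Each Lanczos step costs one sparse matvec, i.e. O(nnz) operations
nnz(N)                  # far fewer than the 64 million entries of a dense 8000×8000 matrix
@time N * randn(8000);  # fast compared with a dense multiply of the same size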
Similar Calculations
# SLQ to find logdet
slq(N, fn = log)
# SLQ to find the inverse
# Increasing the m value for a large condition number matrix
slq(M, mtol = 0.005, eps = 0.005, fn = inv)
Reference values
Nm = Matrix(N)
logdet(Nm)
It is not possible to run tr(inv(M)) in this notebook, as it will run out of memory: the inverse of the sparse M is dense, and an 8100×8100 dense matrix of Float64 already takes around 525 MB.
tr(inv(M))
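If a reference value is still wanted, a memory-friendlier spot check (my own sketch, not a package function) is Hutchinson's estimator combined with sparse direct solves, which never forms the dense inverse:

# tr(inv(M)) ≈ mean of z' * (M \ z) over random Rademacher probe vectors z
function hutchinson_inv_trace(M, samples = 10)
    F = cholesky(M)               # sparse Cholesky factorization of the SPD matrix
    n = size(M, 1)
    return sum(begin
        z = rand([-1.0, 1.0], n)  # Rademacher probe vector
        dot(z, F \ z)             # one sparse solve per sample
    end for _ in 1:samples) / samples
end

hutchinson_inv_trace(M)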
Sparse Matrix Inverse
SLQ and chebyhutch both struggle with matrices that have very large condition numbers. For such matrices we can use the diagonal approximation method by calling tr_inv.
# Trace of inverse
# tr_inv(Matrix, Probing-Vectors-Count, Interpolation-Points)
tr_inv(M, 4, 20)
This method can achieve a really low relative error for ill-conditioned matrices. It depends on a few parameters, the most prominent ones being the interpolation points and the preconditioner ("pc").
# Increasing the interpolation points
# Changing the preconditioner to Incomplete LU
tr_inv(M, 4, 30, pc = "ilu")
There are four preconditioners available: Incomplete Cholesky, Incomplete LU, AMG, and Chebyshev Approximation. There are also two fitting models available: Linear Regression and PCHIP.
# Changing the fitting model to PCHIP
tr_inv(M, 8, 60, pc = "ilu", model = "pchip")
The selection of preconditioner and fitting model depends upon the type of matrix.
Ending Note
I would like to thank my mentor, Mr. Mohamed Tarek, for his help throughout the course of my summer internship. Without his help and his devotion to teaching, this project and package could not have been accomplished.
Finally, I would say that there is still scope for improvement; some possible directions are mentioned in the previous articles as well.