Jerry Chen / Jun 14 2019
Remix of Julia by Nextjournal

MAGMA binding: intro and gesvd

First Blog Post

This is my first blog post for the JSoC project: MAGMA binding.

In this post, I will introduce the background of our project and share some recent progress.

For those who are not so familiar with Julia, you may want to read the official Julia documentation.


"$VERSION"
"1.1.0"

MAGMA (Matrix Algebra on GPU and Multicore Architectures) is a collection of next-generation linear algebra libraries for heterogeneous architectures. By completing the Julia interface of MAGMA, we can provide powerful heterogeneous computing toolkits to users who require a large amount of linear algebra computation and face great computational complexity along with huge memory costs, such as condensed matter physicists who focus on tensor networks and machine learning.

Background

Julia is becoming increasingly popular for big data analytics as well as high-performance computing. It is also well suited to CPU and GPU computing, especially with the Julia packages for CUDA (e.g., the JuliaGPU project).

Meanwhile, recent activities of major chip manufacturers, such as Intel, AMD, IBM, and NVIDIA, make it more evident than ever that future designs of microprocessors and large HPC systems will be hybrid/heterogeneous in nature, relying on the integration (in varying proportions) of two major types of components:

  1. many-core CPU technology, where the number of cores will continue to escalate because of the desire to pack more and more components on a chip while avoiding the power wall, the instruction-level parallelism wall, and the memory wall; and
  2. special-purpose hardware and accelerators, especially Graphics Processing Units (GPUs), which are in commodity production, have outpaced standard CPUs in floating-point performance in recent years, and have become as easy to program as, if not easier than, multicore CPUs.

While the relative balance between these component types in future designs is not clear, and will likely vary over time, there seems to be no doubt that future generations of computer systems, ranging from laptops to supercomputers, will be composed of heterogeneous components.

Among the various libraries used for CPU/GPU computing, MAGMA (Matrix Algebra on GPU and Multicore Architectures) is a collection of next-generation linear algebra libraries for heterogeneous architectures. MAGMA is designed and implemented by the team that developed LAPACK and ScaLAPACK, incorporating the latest developments in hybrid synchronization- and communication-avoiding algorithms, as well as dynamic runtime systems. Interfaces for the current LAPACK and BLAS standards are supported, allowing computational scientists to seamlessly port any linear-algebra-reliant software components to heterogeneous architectures.

MAGMA allows applications to fully exploit the power of current heterogeneous systems of multi/many-core CPUs and multi-GPUs to deliver the fastest possible time to an accurate solution within given energy constraints. MAGMA can thus help scientists and engineers with workloads in machine learning and other computation-heavy fields.

To ease Julia users' CPU/GPU linear algebra computing, one natural thought is to implement a Julia binding for MAGMA. By binding MAGMA to Julia and providing a corresponding BinaryProvider.jl recipe for installation, both of which should fit in with the existing Julia GPU projects, we can benefit many MAGMA and Julia users.

Progress

Recently, we have developed a simple but working wrapper for the gesvd routines. All our work is open-sourced on GitHub.

As in all the other implementations in LAPACK, ScaLAPACK, or CUBLAS, the gesvd routines use QR iteration to carry out the singular value decomposition.

In MAGMA, the gesvd routines are divided into four type-specific subroutines: Sgesvd (for matrices with Float32 elements), Dgesvd (Float64), Cgesvd (ComplexF32), and Zgesvd (ComplexF64).

For Julia users, there is no need for four distinct method names for one task, so we instead used the macro @eval to generate the wrappers and let multiple dispatch pick the right one. The final method is named gesvd! and receives three arguments: jobu and jobvt, which should be AbstractChars indicating how U and VT should be returned (see the MAGMA documentation for details), and A, which should be an AbstractMatrix.
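
To give a flavor of this pattern, here is a minimal sketch of how @eval can generate one method per element type. This is not the repository's exact code, and the MAGMA call itself is abbreviated to a comment because the real routines take many more arguments:

for (fname, elty) in ((:magma_sgesvd, :Float32),
                      (:magma_dgesvd, :Float64),
                      (:magma_cgesvd, :ComplexF32),
                      (:magma_zgesvd, :ComplexF64))
    @eval begin
        function gesvd!(jobu::AbstractChar, jobvt::AbstractChar,
                        A::AbstractMatrix{$elty})
            m, n = size(A)
            # allocate S, U, VT plus workspace here, then invoke the
            # matching MAGMA routine (fname) through ccall into
            # libmagma, and finally return U, S, VT
        end
    end
end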

After one passes these three arguments to gesvd!, the method itself deals with everything between Julia and MAGMA. The only thing you need to do is wait for the returned U, S, and VT such that

$U S V^T = A$

or, for the complex case,

$U S V^H = A$
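
As a usage sketch, a call could look like the following, assuming the package exports gesvd! and that 'A' requests all of U and VT as in LAPACK:

using LinearAlgebra, MAGMA

A = rand(Float32, 4, 4)
U, S, VT = gesvd!('A', 'A', copy(A))  # copy, because gesvd! overwrites its input
U * Diagonal(S) * VT ≈ A              # should hold up to floating-point error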

Tests

Currently we use the gesvd provided by the Julia standard library (LinearAlgebra.LAPACK.gesvd!) to test whether the MAGMA binding works correctly.

The idea is that we first randomly generate a matrix of a given size, such as 2 × 2, then obtain the reference answer S from the stdlib and the answer s from our wrappers. The criterion for a correct calculation is norm(S .- s) < 1e-7; if it fails, plenty of diagnostic information will tell you how it went wrong.
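
In code, the test reduces to something like this sketch (gesvd! here is the MAGMA wrapper; LAPACK.gesvd! is the stdlib reference):

using Test
using LinearAlgebra: LAPACK, norm

A = rand(Float32, 2, 2)
_, S, _ = LAPACK.gesvd!('A', 'A', copy(A))  # reference singular values from the stdlib
_, s, _ = gesvd!('A', 'A', copy(A))         # singular values from the MAGMA binding
@test norm(S .- s) < 1e-7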

Obstacles & Troubleshooting

I believe the most common troubles lie in the environment.

MAGMA's dynamic library

To use the MAGMA library in Julia, one has to obtain the MAGMA dynamic library first, e.g. libmagma.so on Linux systems. It will be helpful to check the official manual on how to install MAGMA if you are confronted with any problems along the way.

Usually, we should carefully check the installation and paths of prerequisite libraries such as CUDA, MKL, or OpenBLAS, and especially make the necessary edits to the file make.inc before compiling.

Julia's environment

To develop the binding project in the Julia REPL, one should type ] to enter Pkg's own REPL mode.

Then, run dev https://github.com/Roger-luo/MAGMA.jl.git to fetch the package for development.

An important constant is

libmagma = "/usr/local/magma/lib/libmagma.so"

which indicates the location of MAGMA's shared library; adjust it to match your own installation path.
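
If loading fails, a quick sanity check, independent of the package, is to try opening the library by hand with the Libdl standard library:

using Libdl

libmagma = "/usr/local/magma/lib/libmagma.so"
Libdl.dlopen(libmagma)  # throws an error if the shared library cannot be found or loaded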

In Pkg mode, one can run test MAGMA to execute the current test sets for the MAGMA binding, as in the session below.
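
Putting the steps together, a typical session looks like this (prompts shown as in Julia 1.1):

julia> ]
(v1.1) pkg> dev https://github.com/Roger-luo/MAGMA.jl.git
(v1.1) pkg> test MAGMA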