My Julia Notes: Applications for Finite Element Analysis - Basis Functions

Author's note: This article is primarily intended for me - for me to log my thoughts to refresh my memory, to try out NextJournal functionality, and to refer my coworkers and colleagues to as I introduce them to Julia. I am by no means an expert in computer science nor Julia and so make no guarantees that the approach I've taken is the one you'd find in "The Book". I welcome any constructive suggestions or comments!

Mathematics

Lagrange Basis Functions

The Lagrange basis polynomials are defined as:

$$\ell_j(\xi) = \prod_{\substack{m=0 \\ m \neq j}}^{k} \frac{\xi - \xi_m}{\xi_j - \xi_m}, \qquad j = 0, 1, \ldots, k$$

The degree-1 (i.e. linear) Lagrange basis polynomials in one dimension (i.e. univariate) are simple enough to write down:

$$\ell_0(\xi) = \frac{\xi - \xi_1}{\xi_0 - \xi_1}, \qquad \ell_1(\xi) = \frac{\xi - \xi_0}{\xi_1 - \xi_0}$$

Syntax note: When referring to Lagrange basis polynomials I will use the syntax above (i.e. $\ell_j$), while when referring to Lagrange basis functions on a domain I will use the syntax $N_j(\xi)$. In the finite element method the Lagrange basis functions are restricted to the domain $\xi \in [-1, 1]$.
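As a quick numerical illustration (the helper names N1 and N2 below are mine, written for the bi-unit domain; they are not part of the code listing later in this article), the linear basis functions are interpolatory at the nodes and sum to one everywhere:

```julia
# Degree-1 Lagrange basis functions on the bi-unit domain [-1, 1],
# i.e. nodes at ξ = -1 and ξ = +1 (hypothetical helpers, for illustration only)
N1(ξ) = (1 - ξ) / 2
N2(ξ) = (1 + ξ) / 2

# Each function is 1 at its own node and 0 at the other...
@assert N1(-1.0) == 1.0 && N1(1.0) == 0.0
@assert N2(-1.0) == 0.0 && N2(1.0) == 1.0
# ...and together they form a partition of unity on the whole domain
@assert all(N1(ξ) + N2(ξ) ≈ 1.0 for ξ in range(-1, 1, length=11))
```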

The Hexahedral Finite Element

To construct the basis functions for a three-dimensional hexahedral finite element, we can simply compute the tensor product of the three univariate basis functions corresponding to each dimension:

$$\mathbf{N}(\xi, \eta, \zeta) = \mathbf{u}(\xi) \otimes \mathbf{v}(\eta) \otimes \mathbf{w}(\zeta)$$

From which we can explicitly compute the basis functions as:

$$\mathbf{N}(\xi, \eta, \zeta) = \frac{1}{8}\begin{bmatrix} (1-\xi)(1-\eta)(1-\zeta) \\ (1-\xi)(1-\eta)(1+\zeta) \\ (1-\xi)(1+\eta)(1-\zeta) \\ (1-\xi)(1+\eta)(1+\zeta) \\ (1+\xi)(1-\eta)(1-\zeta) \\ (1+\xi)(1-\eta)(1+\zeta) \\ (1+\xi)(1+\eta)(1-\zeta) \\ (1+\xi)(1+\eta)(1+\zeta) \end{bmatrix}$$
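A quick sanity check of the tensor-product construction (basis1d is a hypothetical helper standing in for the univariate bases, not code from the listing below): building the eight trilinear functions as a triple Kronecker product, they should sum to one at any point in the reference cube:

```julia
using LinearAlgebra  # provides kron

# Univariate linear Lagrange basis on [-1, 1], returned as a 2-vector
basis1d(ξ) = [(1 - ξ) / 2, (1 + ξ) / 2]

# Trivariate basis: triple Kronecker product → 8 trilinear shape functions
N(ξ, η, ζ) = kron(kron(basis1d(ξ), basis1d(η)), basis1d(ζ))

shp = N(0.25, -0.4, 0.9)
@assert length(shp) == 8
@assert sum(shp) ≈ 1.0   # partition of unity holds at any (ξ, η, ζ)
```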

For finite element analysis we often need to use the Jacobian of these basis functions. Below we'll compute the full spatial Jacobian matrix, although finite element codes often compute various Jacobian-like matrices (i.e. matrices involving derivative evaluations) such as the strain-displacement matrix.

$$\mathbf{J_N} = \frac{\partial \mathbf{N}}{\partial (\xi, \eta, \zeta)} = \begin{bmatrix} \dfrac{\partial N_1}{\partial \xi} & \dfrac{\partial N_1}{\partial \eta} & \dfrac{\partial N_1}{\partial \zeta} \\ \vdots & \vdots & \vdots \\ \dfrac{\partial N_8}{\partial \xi} & \dfrac{\partial N_8}{\partial \eta} & \dfrac{\partial N_8}{\partial \zeta} \end{bmatrix}$$

The vector of degree-1 Lagrange basis functions and its Jacobian matrix are relatively straightforward to hard-code into finite element software. However, as we increase the polynomial degree of the basis (e.g. to degree 2), the size and complexity of these arrays rapidly increase:

For example, the degree-2 (quadratic) univariate basis is

$$\mathbf{u}(\xi) = \begin{bmatrix} \tfrac{1}{2}\xi(\xi - 1) \\ (1 - \xi)(1 + \xi) \\ \tfrac{1}{2}\xi(\xi + 1) \end{bmatrix}$$

and the corresponding trivariate basis vector $\mathbf{u}(\xi) \otimes \mathbf{v}(\eta) \otimes \mathbf{w}(\zeta)$ has 27 entries, with a $27 \times 3$ Jacobian matrix [the full arrays are too large to reproduce here].

The number of entries in the trivariate shapefunction array is $(k+1)^3$, while the Jacobian matrix has $3(k+1)^3$ entries. Imagine the absolute monstrosities that would come from evaluating a degree-8 Lagrange basis and Jacobian (729 and 2187 entries, respectively) -- several commercial, so-called p-element finite element codes employ degree-8 elements. Given the inordinate effort it would take to code higher-order elements by hand, I strongly suspect these are not hand-coded. And I would strongly suggest that if your project depends on hard-coded capabilities, you limit yourself to second-order elements. However, as we'll show below, there is a nifty technology that more or less removes this limitation and allows for rapid development and deployment of general high-order finite element routines.
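The entry counts quoted above follow directly from the tensor-product structure; a tiny script (mine, for illustration) reproduces them:

```julia
# Entry counts for a degree-k hexahedral element:
# the basis vector has (k+1)^3 entries, its Jacobian 3(k+1)^3
nbasis(k)    = (k + 1)^3
njacobian(k) = 3 * (k + 1)^3

@assert nbasis(1) == 8   && njacobian(1) == 24     # trilinear hex
@assert nbasis(2) == 27  && njacobian(2) == 81     # quadratic hex
@assert nbasis(8) == 729 && njacobian(8) == 2187   # the degree-8 case above
```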

Code Listing

## SECTION 0
import Pkg
requiredPackages = ["LinearAlgebra", "ForwardDiff", "BenchmarkTools"]
for package in requiredPackages
    if !haskey(Pkg.installed(), package)
        Pkg.add(package)
    end
end
## SECTION 1
import LinearAlgebra
import ForwardDiff
import BenchmarkTools
## SECTION 2
function lagrangeBasis(k, variate)
    node = range(-1, 1, length=k+1)
    L = Array{Any,1}(undef, k+1)
    for j = 1:k+1
        L[j] = 1.0
        for m = 1:k+1
            if j != m
                L[j] *= (variate - node[m]) / (node[j] - node[m])
            end
        end
    end
    return L
end
## SECTION 3
k = 2
u(ξ) = lagrangeBasis(k,ξ)
v(η) = lagrangeBasis(k,η)
w(ζ) = lagrangeBasis(k,ζ)
## SECTION 4
⊗(f::Array{Any}, g::Array{Any}) = LinearAlgebra.kron(f, g)
## SECTION 5
N(ξ) = u(ξ)
N(ξ,η) = u(ξ) ⊗ v(η)
N(ξ,η,ζ) = N(ξ,η) ⊗ w(ζ)
## SECTION 6
JN(ξηζ) = ForwardDiff.jacobian(ξηζ->N(ξηζ[1],ξηζ[2],ξηζ[3]),ξηζ)
## SECTION 7
println(repeat("#",20))
println("Benchmarking the Evaluation of the ",(k+1)^3)
println("Degree ", k, " Lagrange Basis Functions...")
BenchmarkTools.@btime N($(rand()), $(rand()), $(rand()));
println(repeat("#",20))
println("Benchmarking the Evaluation of the ",3(k+1)^3)
println("Entries of the Jacobian Matrix of the Degree ", k, " Lagrange Basis Functions...")
BenchmarkTools.@btime JN([$(rand()); $(rand()); $(rand())]);

Code Explanation

Section 0: Install required packages

import Pkg
requiredPackages = ["LinearAlgebra", "ForwardDiff", "BenchmarkTools"]
for package in requiredPackages
    if !haskey(Pkg.installed(), package)
        Pkg.add(package)
    end
end

I believe that code-shares should be self-contained examples; don't assume that the reader will have optional packages or modules, nor know how to install them manually. So here is a short section that imports Julia's built-in package manager, Pkg, and then checks whether the user has the packages required for this code listing. If any are missing, we install them. Note that this requires an internet connection.
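One caveat: Pkg.installed() works on the Julia v1.1 kernel this notebook targets, but it has since been deprecated (around Julia 1.4). On newer versions, one possible alternative (a sketch, not this article's code) is Base.find_package, which returns nothing when a package cannot be located:

```julia
import Pkg

requiredPackages = ["LinearAlgebra", "ForwardDiff", "BenchmarkTools"]
for package in requiredPackages
    # Base.find_package returns the package's source path, or nothing if absent
    if Base.find_package(package) === nothing
        Pkg.add(package)
    end
end
```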

Section 1: Import required packages

import LinearAlgebra
import ForwardDiff
import BenchmarkTools

The three packages we'll be using are LinearAlgebra, ForwardDiff, and BenchmarkTools. The only function we'll use from LinearAlgebra is kron -- and note that kron lives in the LinearAlgebra standard library, not in Base, so the import is required -- but in most computational routines we'll probably be using more functions from LinearAlgebra anyway ¯\_(ツ)_/¯
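A quick standalone check of where kron comes from and what it returns (the example vectors are mine):

```julia
using LinearAlgebra  # kron is exported by LinearAlgebra, not by Base

a = [1, 2]
b = [10, 20, 30]
# The Kronecker product of an m-vector and an n-vector is an (m*n)-vector
@assert kron(a, b) == [10, 20, 30, 20, 40, 60]
```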

Section 2: Define a function for the Lagrange basis

function lagrangeBasis(k, variate)
    node = range(-1, 1, length=k+1)
    L = Array{Any,1}(undef, k+1)
    for j = 1:k+1
        L[j] = 1.0
        for m = 1:k+1
            if j != m
                L[j] *= (variate - node[m]) / (node[j] - node[m])
            end
        end
    end
    return L
end

Here we define a method that evaluates the k+1 Lagrange basis functions, for a degree k Lagrange element, on a bi-unit domain (i.e. [-1, 1]), at a specified location (variate).
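To verify the routine behaves as a Lagrange basis should, we can check the interpolatory (Kronecker-delta) property: the j-th function is 1 at node j and 0 at all other nodes. The block below restates the article's function so it runs standalone:

```julia
function lagrangeBasis(k, variate)
    node = range(-1, 1, length=k+1)      # k+1 equispaced nodes on [-1, 1]
    L = Array{Any,1}(undef, k+1)
    for j = 1:k+1
        L[j] = 1.0
        for m = 1:k+1
            if j != m
                L[j] *= (variate - node[m]) / (node[j] - node[m])
            end
        end
    end
    return L
end

# Kronecker-delta property at the nodes, checked for a cubic (k = 3) basis
k = 3
nodes = range(-1, 1, length=k+1)
for (j, ξ) in enumerate(nodes)
    L = lagrangeBasis(k, ξ)
    @assert all(isapprox(L[m], j == m ? 1.0 : 0.0; atol=1e-12) for m = 1:k+1)
end
```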

Section 3: Construct univariate Lagrange basis functions

k = 2
u(ξ) = lagrangeBasis(k,ξ)
v(η) = lagrangeBasis(k,η)
w(ζ) = lagrangeBasis(k,ζ)

In this section we define three methods u(ξ), v(η), and w(ζ) that essentially provide a clear shorthand to the previously defined Lagrange basis function. Perhaps my favorite feature in Julia is the ability to utilize Unicode characters! The language of science is rife with letters and symbols not within the Latin script, for instance the Greek letter: π. It is quite nice to be able to shorten lines of code by substituting variable names with their symbolic shorthand:

# Linear Elasticity
stress = elasticModulus * strain    # Spelled-out variable names
σ = E * ϵ                           # Symbolic shorthand
# Transonic Flow
(1 + (specificHeatRatio + 1) * (perturbationPotential / velocityFreestream)) * machFreestream^2 < 1    # Spelled-out variable names
(1 + (γ+1)*(ϕ/V∞)) * M∞^2 < 1       # Symbolic shorthand

Section 4: Define the tensor product function

⊗(f::Array{Any}, g::Array{Any}) = LinearAlgebra.kron(f, g)

Another cool feature of using Unicode characters is that you can use them in function names, which is especially useful for defining shorthand functions for some more advanced mathematical operations. In this section I alias the tensor product function, specifically the Kronecker product, as the ⊗ Unicode character which is its standard mathematical symbol. We can see in the next section how this allows us to drastically simplify our code.
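A minimal standalone sketch of the aliasing idea (the ⊗ character is typed as \otimes followed by TAB in the Julia REPL; the signature here uses AbstractVector rather than the Array{Any} of the listing, purely for the example):

```julia
using LinearAlgebra

# Alias the Kronecker product with its mathematical symbol; Julia parses
# ⊗ as an infix operator, so a ⊗ b works like any binary operation
⊗(f::AbstractVector, g::AbstractVector) = kron(f, g)

@assert [1, 2] ⊗ [3, 4] == [3, 4, 6, 8]
# The grouping used in the shapefunction construction is associative:
@assert ([1, 0] ⊗ [0, 1]) ⊗ [1, 1] == [1, 0] ⊗ ([0, 1] ⊗ [1, 1])
```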

Section 5: Specify the n-variate shapefunction routines

N(ξ) = u(ξ)
N(ξ,η) = u(ξ) ⊗ v(η)
N(ξ,η,ζ) = N(ξ,η) ⊗ w(ζ)

Here we demonstrate part of Julia's multiple dispatch functionality, as we define three methods of a single function N for computing basis functions. The number of arguments (arity) is part of a method's signature, so this is genuine method dispatch rather than variable-argument handling. The first method N(ξ) will be used if a single variate value is passed - as in the univariate case - and the second method N(ξ,η) will be used if two variate values are passed, the bivariate case. The method of most interest to us is the third in this section, N(ξ,η,ζ), the trivariate case. Note that the bivariate and trivariate cases construct their shapefunction arrays via the Kronecker product (a special case of the tensor product) function from the LinearAlgebra package.
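A tiny standalone experiment (with placeholder string returns, not the article's basis functions) shows that the argument count is part of the method signature, so Julia's normal method dispatch, not varargs handling, selects among the three definitions:

```julia
# Three methods of one generic function, distinguished only by arity
N(ξ) = "univariate"
N(ξ, η) = "bivariate"
N(ξ, η, ζ) = "trivariate"

@assert length(methods(N)) == 3        # one generic function, three methods
@assert N(0.0) == "univariate"
@assert N(0.0, 0.0) == "bivariate"
@assert N(0.0, 0.0, 0.0) == "trivariate"
```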

Note the usage of the ⊗ character as described in section 4. Compare the following methods of writing the third line in the above code listing:

## Using Symbolic Shorthand
N(ξ,η,ζ) = N(ξ,η) ⊗ w(ζ)
N(ξ,η,ζ) = (u(ξ) ⊗ v(η)) ⊗ w(ζ)
## Conventional / Verbose
N(ξ,η,ζ) = LinearAlgebra.kron(N(ξ,η), w(ζ))
N(ξ,η,ζ) = LinearAlgebra.kron(LinearAlgebra.kron(u(ξ), v(η)), w(ζ))

Section 6: Automatic Differentiation

JN(ξηζ) = ForwardDiff.jacobian(ξηζ->N(ξηζ[1],ξηζ[2],ξηζ[3]),ξηζ)

This is where things get really slick. We can use the ForwardDiff package to apply forward-mode automatic differentiation, an approach that essentially applies the chain rule to all the elementary operations that occur when your computer evaluates commands. Instead of precalculating the large array of derivatives, either by hand or with computer assistance, we accomplish this in a single line of code -- and the result is accurate to machine precision.
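To build confidence in ForwardDiff before trusting it with the shapefunction Jacobian, here is a small check against a hand-derived Jacobian (the function f is mine, chosen for easy verification):

```julia
import ForwardDiff

# f(x) = [x₁², x₁x₂] has analytic Jacobian [2x₁ 0; x₂ x₁]
f(x) = [x[1]^2, x[1] * x[2]]

x = [3.0, 5.0]
J = ForwardDiff.jacobian(f, x)

@assert J ≈ [6.0 0.0; 5.0 3.0]   # matches the hand-derived Jacobian
```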

As I'll talk more about later the automatic differentiation approach I've taken is not as performant as a hand-coded (or code-generated) Jacobian array, but this approach is very flexible, easy to mistake-proof & code-review, and scales well to higher-degree polynomials.

Section 7: Benchmarking

BenchmarkTools.@btime N($(rand()), $(rand()), $(rand()));
BenchmarkTools.@btime JN([$(rand()); $(rand()); $(rand())]);

Lastly, here we use BenchmarkTools to benchmark our critical functions. We use $ to interpolate the values into the benchmark expression, to avoid the problems of benchmarking with globals. Essentially, any interpolated variable $ or expression $(...) is "precomputed" before benchmarking begins.
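A small standalone illustration of the interpolation idea (@btime returns the value of the timed expression, which lets us check the result; the vector x is mine):

```julia
import BenchmarkTools

x = rand(100)

# $x interpolates the vector into the benchmark expression, so it is
# treated as a local variable instead of an untyped global
result = BenchmarkTools.@btime sum($x)

@assert result == sum(x)
```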

Citation: https://github.com/JuliaCI/BenchmarkTools.jl

Conclusion

On NextJournal's built-in Julia v1.1 REPL, a benchmark run on July 26th, 2019 evaluated the degree-2 basis functions in 2.278 μs and the corresponding Jacobian matrix in 8.393 μs. This means that, per second on a single CPU thread/process, we could evaluate the shapefunctions of roughly 440k quadratic elements, or the full Jacobian matrices of roughly 120k. That's not too bad!
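The throughput figures follow from simple arithmetic on the measured timings:

```julia
# Measured timings from the benchmark above (seconds per evaluation)
t_basis = 2.278e-6   # degree-2 basis functions
t_jac   = 8.393e-6   # corresponding Jacobian matrix

# Evaluations per second ≈ 1 / time-per-evaluation
@assert isapprox(1 / t_basis, 440_000; rtol=0.01)   # ≈ 440k basis evals/s
@assert isapprox(1 / t_jac,   120_000; rtol=0.01)   # ≈ 120k Jacobian evals/s
```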

In a forthcoming NextJournal, I will run a parameter sweep and compare automatic differentiation with hard-coded routines. But I'd like to end this blog-post with a few key takeaways. Note: I will make the "NextJournal" text in this paragraph hyperlink to the article once it's written.

Firstly, this code is very flexible: we've only had to code our base Lagrange function once and, by changing the inputs to the function, we can trivially evaluate shapefunctions of arbitrary degree and dimension.

Secondly, this code is (in my opinion) fairly easy to read, comprehend, verify, and maintain. I believe this is a testament to the motivation that's driven Julia's development.

Lastly, while I'll show in my next post that this approach is not as fast as hand-coded functions, when truly evaluating "optimized" code you should include the time spent in development, maintenance, and bug-fixing.
