Bayesian Sequential Learning

This entry was originally published in Estadistika, and is replicated here for reproducibility.

In my previous article, I discussed how we can use the Bayesian approach in estimating the parameters of a model. The process revolves around solving the following conditional probability, popularly known as Bayes' Theorem:

$$\mathbb{P}(\mathbf{w}|\mathbf{y})=\frac{\mathbb{P}(\mathbf{y}|\mathbf{w})\mathbb{P}(\mathbf{w})}{\mathbb{P}(\mathbf{y})},$$

where $\mathbb{P}(\mathbf{w})$ is the a priori (prior distribution) for the objective parameters, $\mathbb{P}(\mathbf{y}|\mathbf{w})$ is the likelihood, and $\mathbb{P}(\mathbf{y})$ is the normalizing constant (also called the model evidence) with the following form:

$$\mathbb{P}(\mathbf{y})=\int_{\mathscr{P}}\mathbb{P}(\mathbf{y}|\mathbf{w})\mathbb{P}(\mathbf{w})\,\operatorname{d}\mathbf{w},$$

where $\mathscr{P}$ is the parameter space.
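Before moving on, it may help to see Bayes' Theorem in action on a deliberately tiny problem. The following snippet is a minimal sketch I added (not part of the original article): it approximates the integral above on a grid for a single parameter, the mean of a Gaussian with known unit variance; all names and values are illustrative.

using Distributions

# toy data: 10 draws from a Gaussian with unknown mean (true mean 1.5) and known std 1
y_toy = rand(Normal(1.5, 1.0), 10)

# grid over the parameter space 𝒫 (here just an interval of candidate means)
grid = -3:0.01:3
prior_toy = pdf.(Normal(0, 1), grid)                          # ℙ(w)
lik_toy = [prod(pdf.(Normal(w, 1.0), y_toy)) for w in grid]   # ℙ(y|w)
evidence = sum(prior_toy .* lik_toy) * step(grid)             # ℙ(y) ≈ ∫ ℙ(y|w)ℙ(w) dw
posterior_toy = prior_toy .* lik_toy ./ evidence              # Bayes' Theorem
sum(posterior_toy) * step(grid)                               # ≈ 1, a proper density on the grid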

Posterior Distribution

The details on the derivation of the a posteriori were also provided in the said article, but there were missing pieces which I think are necessary to support our proposition, and thus we have the following result:

Proposition. Let $\mathscr{D}\triangleq\{(\mathbf{x}_1,y_1),\cdots,(\mathbf{x}_n,y_n)\}$ be the set of data points s.t. $\mathbf{x}\in\mathbb{R}^{p}$. If $y_i\overset{\text{iid}}{\sim}\mathcal{N}(w_0+w_1x_i,1/\alpha)$ and $\mathbf{w}\triangleq[w_0,w_1]^{\text{T}}$ s.t. $\mathbf{w}\sim\mathcal{N}_2(\mathbf{0},\beta^{-1}\mathbf{I})$ for some prior precision $\beta>0$, then $\mathbf{w}|\mathbf{y}\sim\mathcal{N}_2(\boldsymbol{\mu},\boldsymbol{\Sigma})$ where

$$\boldsymbol{\mu}=\alpha\boldsymbol{\Sigma}\boldsymbol{\mathfrak{A}}^{\text{T}}\mathbf{y},\qquad\boldsymbol{\Sigma}=\left(\alpha\boldsymbol{\mathfrak{A}}^{\text{T}}\boldsymbol{\mathfrak{A}}+\beta\mathbf{I}\right)^{-1},$$

and $\boldsymbol{\mathfrak{A}}\triangleq[(\mathbf{x}_i^{\text{T}})]_{\forall i}$.

Proof: Let $\hat{y}_i\triangleq w_0+w_1x_i$ be the model, then the data can be described as follows:

$$y_i=\hat{y}_i+\varepsilon_i=w_0+w_1x_i+\varepsilon_i,$$

where $\varepsilon_i$ is the innovation that the model can't explain, and $\mathbb{Var}(\varepsilon_i)=\alpha^{-1}$ since $\mathbb{Var}(y_i)=\alpha^{-1}$ as given above. Then the likelihood of the model is given by:

$$\mathbb{P}(\mathbf{y}|\mathbf{w})=\prod_{i=1}^{n}\frac{1}{\sqrt{2\pi\alpha^{-1}}}\exp\left[-\frac{\alpha}{2}\left(y_i-w_0-w_1x_i\right)^2\right],$$

or in vector form:

$$\mathbb{P}(\mathbf{y}|\mathbf{w})=\left(\frac{\alpha}{2\pi}\right)^{n/2}\exp\left[-\frac{\alpha}{2}(\mathbf{y}-\boldsymbol{\mathfrak{A}}\mathbf{w})^{\text{T}}(\mathbf{y}-\boldsymbol{\mathfrak{A}}\mathbf{w})\right],$$

where $\boldsymbol{\mathfrak{A}}\triangleq[(\mathbf{x}_i^{\text{T}})]_{\forall i}$ is the design matrix given above. If the a priori of the parameter is assumed to be a zero-mean bivariate Gaussian distribution with precision $\beta$, i.e. $\mathbf{w}\sim\mathcal{N}_2(\mathbf{0},\beta^{-1}\mathbf{I})$, then by Bayes' Theorem the a posteriori satisfies

$$\mathbb{P}(\mathbf{w}|\mathbf{y})\propto\exp\left[-\frac{\alpha}{2}(\mathbf{y}-\boldsymbol{\mathfrak{A}}\mathbf{w})^{\text{T}}(\mathbf{y}-\boldsymbol{\mathfrak{A}}\mathbf{w})\right]\exp\left[-\frac{\beta}{2}\mathbf{w}^{\text{T}}\mathbf{w}\right].$$

Expanding the terms in the exponential function returns the following:

$$-\frac{\alpha}{2}\left(\mathbf{y}^{\text{T}}\mathbf{y}-2\mathbf{w}^{\text{T}}\boldsymbol{\mathfrak{A}}^{\text{T}}\mathbf{y}+\mathbf{w}^{\text{T}}\boldsymbol{\mathfrak{A}}^{\text{T}}\boldsymbol{\mathfrak{A}}\mathbf{w}\right)-\frac{\beta}{2}\mathbf{w}^{\text{T}}\mathbf{w},$$

thus

$$\mathbb{P}(\mathbf{w}|\mathbf{y})\propto\exp\left\{-\frac{1}{2}\left[\mathbf{w}^{\text{T}}\left(\alpha\boldsymbol{\mathfrak{A}}^{\text{T}}\boldsymbol{\mathfrak{A}}+\beta\mathbf{I}\right)\mathbf{w}-2\alpha\mathbf{w}^{\text{T}}\boldsymbol{\mathfrak{A}}^{\text{T}}\mathbf{y}\right]+\mathcal{C}\right\},$$

where $\mathcal{C}$ collects the terms free of $\mathbf{w}$. The inner terms of the exponential function are of the form $ax^2-2bx$. This is a quadratic form and can therefore be factored by completing the square. To do so, let $\mathbf{D}\triangleq\alpha\boldsymbol{\mathfrak{A}}^{\text{T}}\boldsymbol{\mathfrak{A}}+\beta\mathbf{I}$ and $\mathbf{b}\triangleq\alpha\boldsymbol{\mathfrak{A}}^{\text{T}}\mathbf{y}$, then

$$\mathbb{P}(\mathbf{w}|\mathbf{y})\propto\exp\left[-\frac{1}{2}\left(\mathbf{w}^{\text{T}}\mathbf{D}\mathbf{w}-2\mathbf{w}^{\text{T}}\mathbf{b}\right)+\mathcal{C}\right].$$

In order to proceed, the matrix $\mathbf{D}$ must be symmetric and invertible (this can be proven separately). If satisfied, then $\mathbf{I}=\mathbf{D}\mathbf{D}^{-1}=\mathbf{D}^{-1}\mathbf{D}$, so that the terms inside the exponential function above become:

$$\mathbf{w}^{\text{T}}\mathbf{D}\mathbf{w}-2\mathbf{w}^{\text{T}}\mathbf{D}\mathbf{D}^{-1}\mathbf{b}=\left(\mathbf{w}-\mathbf{D}^{-1}\mathbf{b}\right)^{\text{T}}\mathbf{D}\left(\mathbf{w}-\mathbf{D}^{-1}\mathbf{b}\right)-\mathbf{b}^{\text{T}}\mathbf{D}^{-1}\mathbf{D}\mathbf{D}^{-1}\mathbf{b}.$$

Finally, let $\boldsymbol{\Sigma}\triangleq\mathbf{D}^{-1}$ and $\boldsymbol{\mu}\triangleq\mathbf{D}^{-1}\mathbf{b}$, then

$$\mathbb{P}(\mathbf{w}|\mathbf{y})\propto\exp\left[-\frac{1}{2}\left(\mathbf{w}-\boldsymbol{\mu}\right)^{\text{T}}\boldsymbol{\Sigma}^{-1}\left(\mathbf{w}-\boldsymbol{\mu}\right)\right],$$

where $-\mathbf{b}^{\text{T}}\mathbf{D}^{-1}\mathbf{D}\mathbf{D}^{-1}\mathbf{b}$ becomes part of $\mathcal{C}$. This is the kernel of a bivariate Gaussian distribution with mean $\boldsymbol{\mu}$ and covariance $\boldsymbol{\Sigma}$, and that proves the proposition. $\blacksquare$
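To make the proposition concrete, here is a small numerical sanity check I added for this replication (not part of the original derivation): it verifies on simulated data that the batch posterior above coincides with updating one observation at a time, using the running posterior as the prior for the next point. The values of α and β below are arbitrary illustrative choices.

using LinearAlgebra
using Random

Random.seed!(1)

# illustrative data: n points from the linear model with noise precision α
n, α, β = 20, 25.0, 1.0
x = rand(n) .* 2 .- 1
A = [ones(n) x]
y = A * [-0.3, -0.5] .+ randn(n) ./ sqrt(α)

# batch posterior, as in the proposition: Σ = (α𝔄ᵀ𝔄 + βI)⁻¹, μ = αΣ𝔄ᵀy
Σ = inv(α * A'A + β * I)
μ = α * Σ * A'y

# sequential update: each observation adds α·aᵢaᵢᵀ to the precision
# and α·aᵢyᵢ to the precision-weighted mean
function sequential_posterior(A, y, α, β)
  P = β * Matrix(1.0I, size(A, 2), size(A, 2))
  h = zeros(size(A, 2))
  for i in axes(A, 1)
    a = A[i, :]
    P += α * a * a'
    h += α * a * y[i]
  end
  return P \ h, inv(P)
end

μseq, Σseq = sequential_posterior(A, y, α, β)
μseq ≈ μ && Σseq ≈ Σ   # should evaluate to true (up to floating-point error)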

Simulation Experiment

The above result can be applied to any linear model (cross-sectional or time series), and I'm going to demonstrate how we can use it to model the following simulated data. I will be using Julia, which already has an image available here in Nextjournal. Having said that, some of the libraries are already pre-installed in the said image, for example Plots.jl. Thus, we only have to install the remaining libraries that we will need for this experiment.

]add Distributions Measures PlotThemes; precompile
228.0s

The installation will take about 3.5 minutes. The first time you run the succeeding code it will also take a few seconds, because Julia needs to precompile some of the functions invoked; after that, it will be fast.

Load the libraries as follows:

using Distributions
using LinearAlgebra
using Measures
using Plots
using PlotThemes
using Random
Random.seed!(1)
theme(:sand)
0.2s

The theme above simply sets the theme of the plots below. Further, for reproducibility purposes, I provided a seed as well. The following function will be used to simulate cross-sectional data with population parameters set to $w_0=-0.3$ and $w_1=-0.5$, for a sample size of 20.

function simulate(
    n::Int64 = 20; 
    w0::Float64 = -.3, 
    w1::Float64 = -.5, 
    α::Float64 = 1 / 5)
  x = rand(Uniform(-1, 1), n)          # predictor sampled uniformly on (-1, 1)
  w = [w0; w1]                         # true weight vector
  A = [ones(size(x)) x]                # design matrix with an intercept column
  y = A * w + rand(Normal(0, α), n)    # responses with Gaussian noise (std dev α)
  return (x, y, A, α)
end
0.1s
simulate (generic function with 2 methods)

From the above results, the parameters of the a posteriori can be implemented as follows:

function posterior(
    y::Array{Float64, 1}, 
    A::Array{Float64, 2}, 
    α::Float64,
    β::Float64 = (1/2)^2)
  
  Σinv = α * A'A + (1/sqrt(β)) * I;  # posterior precision: α·AᵀA plus a prior-precision term
  Σ = Σinv^(-1);                     # posterior covariance Σ
  μ = (α * Σ) * A'y;                 # posterior mean μ = αΣAᵀy
  return (μ, Σ)
end
0.2s
posterior (generic function with 3 methods)

One feature of Julia is its support for Unicode, making it easy to relate the code to the math above, i.e. Σ and μ in the code are obviously $\boldsymbol{\Sigma}$ and $\boldsymbol{\mu}$ above, respectively. The vector operations in Julia are also cleaner compared to those in R and in Python; for example, Julia's A'A above would be written as t(A) %*% A in R, and as A.T.dot(A) in Python.
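Just to make the correspondence explicit (a small aside of my own, with A2 being a throwaway matrix), the postfix ' in Julia is the adjoint, so for a real-valued matrix the following expressions agree:

A2 = [1.0 2.0; 3.0 4.0; 5.0 6.0]                  # throwaway example matrix
A2'A2 == transpose(A2) * A2 == adjoint(A2) * A2   # true: for real matrices ' is just the transpose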

Finally, we simulate the data as follows:

wtrue = [-.3, -.5];
x, y, A, a = simulate(w0 = wtrue[1], w1 = wtrue[2]);
0.2s

We don't really have to write down the wtrue variable above, since those are the default values of the w0 and w1 arguments, but we do so just for emphasis.
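As an added aside, it is worth peeking at what simulate handed back: A is the design matrix with an intercept column, and a is the noise scale used in the simulation.

size(A), length(y), a   # ((20, 2), 20, 0.2)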

While the main subject here is Bayesian Statistics, it would be better if we have an idea as to what the Frequentist solution would be. As most of you are aware, the solution for the weights above is given by the following normal equation:

$$\hat{\mathbf{w}}=\left(\boldsymbol{\mathfrak{A}}^{\text{T}}\boldsymbol{\mathfrak{A}}\right)^{-1}\boldsymbol{\mathfrak{A}}^{\text{T}}\mathbf{y};$$

this is a known result, and we can prove it in a separate article. The above equation is implemented as follows:

west = (A'A)^(-1)*A'y
0.2s
2-element Array{Float64,1}:
 -0.322642
 -0.59357

Therefore, for the current sample data, the estimate we get when assuming the weights to be fixed and unknown is $\hat{\mathbf{w}}=[-0.32264, -0.59357]^{\text{T}}$. The figure below depicts the corresponding fitted line.
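As a side note of mine (not from the original article), explicitly inverting A'A is fine for a two-parameter toy problem, but the more numerically stable Julia idiom for least squares is the backslash operator, which solves the problem via a QR factorization and gives essentially the same estimate:

west_qr = A \ y              # least-squares solution without forming (A'A)⁻¹
west_qr ≈ (A'A)^(-1)*A'y     # true, up to floating-point error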

yhat = west[1] .+ west[2].*collect(-1:0.1:1)
l = @layout[p p];
p1 = plot(
  x, y, 
  xlab = "x", 
  ylab = "y", 
  title = " \nSampled Data with n=20",
  title_location = :left,
  markershape = :circle,
  markersize = 5,
  markeralpha = 0.5,
  markerstrokewidth = 0,
  linealpha = 0, 
  legend = false
)
p2 = deepcopy(p1)
plot!(p2, -1:0.1:1, yhat, 
  title = " \nFitted Line (Frequentist)", 
  linewidth = 2)
plot(p1, p2, layout = l, 
  size = (700, 400), dpi = 300, 
  margin = 5mm, link = :y)
0.6s

Not bad given that we have a very small dataset. Now let's proceed and see how we can infer the parameters in the Bayesian framework. The prior distribution can be implemented as follows:

function prior(
    w0::Float64, 
    w1::Float64)::Float64
  
  μ = [0., 0.]
  return pdf(MultivariateNormal(μ, I(length(μ))), [w0, w1])
end
0.1s
prior (generic function with 1 method)

As indicated in the above proposition, the parameters are jointly modelled by a bivariate Normal distribution, as reflected by the dimension of the hyperparameter μ above. Indeed, the true parameter we are interested in is the weight vector, $\mathbf{w}$, but because we consider it to be random, the parameters of the distribution we assign to it are called hyperparameters, in this case the mean vector $\boldsymbol{\mu}$ and the identity covariance matrix $\mathbf{I}$ of the prior distribution.
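To emphasize the role of the hyperparameters, here is a hypothetical variant of the prior (my own sketch, not part of the original code) where the precision is exposed as a keyword, so the prior knowledge can be made broader or more restrictive; the name prior_with_precision and the default value are mine:

function prior_with_precision(
    w0::Float64, 
    w1::Float64; 
    β::Float64 = 1.0)::Float64
  
  # N₂(0, β⁻¹I): larger β means stronger prior knowledge (tighter coverage)
  μ = [0., 0.]
  return pdf(MultivariateNormal(μ, (1 / β) * Matrix(1.0I, 2, 2)), [w0, w1])
end

prior_with_precision(-.3, -.5)           # same as prior(-.3, -.5) when β = 1
prior_with_precision(-.3, -.5, β = 4.0)  # a much more concentrated prior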

Moreover, the likelihood of the data can be implemented as follows:

function likelihood(
    w0::T, 
    w1::T, 
    i::Int64, y::Array{T,1}, 
    A::Array{T,2}, 
    α::T)::T where T<:Float64
  
  w = [w0, w1]
  μ = A[i, :]'w                             # mean of the ith response given the weights
  return pdf(Normal(μ, 1 / sqrt(α)), y[i])  # α is the precision, so the std dev is 1/√α
end
function likelihood(
    w0::T, 
    w1::T, 
    n::Int64)::T where T<:Float64
  
  return likelihood(w0, w1, n, y, A, (1/a)^2)  # a is the noise std dev, hence precision (1/a)²
end
0.1s
likelihood (generic function with 2 methods)

The mean of the likelihood is the specified linear model itself, which in vector form is the inner product of the transformed design matrix, $\boldsymbol{\mathfrak{A}}$, and the weight vector, $\mathbf{w}$, i.e. A[i, :]'w. This assumption is valid since the fitted line must be at the center of the data points, and the errors should be random. One of my favorite features of the Julia language is multiple dispatch. For example, the two likelihood functions defined above are not in conflict, since Julia dispatches on the types of the function arguments. The same is true for the posterior distribution implemented below. In R and in Python, by contrast, I usually have to write this as a helper function, e.g. likelihood_helper.
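If you want to see exactly which methods Julia will dispatch between (a small aside), the built-in methods function lists them, including the file and line where each was defined:

methods(likelihood)   # lists the two method signatures defined above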

function posterior(
    w0::T, 
    w1::T, 
    n::Int64) where T<:Float64
  
  w = [w0, w1]
  μ, Σ = posterior(y[1:n], A[1:n, :], (1/a)^2)  # posterior parameters from the first n observations
  return pdf(MultivariateNormal(μ, Σ), w)       # posterior density evaluated at [w0, w1]
end
0.1s
posterior (generic function with 3 methods)

Finally, the prediction is done by sampling the weights from the posterior distribution. The center of these weights is of course the mean of the a posteriori.

function predicted(
    n::T, 
    sample_size::T; 
    model::Symbol = :posterior) where T<:Int64
  if model === :posterior
    μ, Σ = posterior(y[1:n], A[1:n, :], (1/a)^2)
    w = rand(MultivariateNormal(μ, Σ), sample_size)  # sample_size weight vectors from the posterior
    X = [ones(length(-1:.1:1)) -1:.1:1]              # prediction grid with an intercept column
    return X[:, 2], X*w
  elseif model === :prior
    μ = [0., 0.]
    Σ = I(length(μ))
    w = rand(MultivariateNormal(μ, Σ), sample_size)  # sample_size weight vectors from the prior
    X = [ones(length(-1:.1:1)) -1:.1:1]
    return X[:, 2], X*w
  else
    throw(ArgumentError("model can only take :prior or :posterior."))
  end
end
0.2s
predicted (generic function with 1 method)

For example, sampling 30 weights from the posterior distribution using all of the sampled data, and returning the corresponding predictions, is done as follows:

predicted(length(y), 30)[2]
0.3s

The predicted function returns both the x and y values, which is why we index the above result with [2], to show the predicted y values only. Further, using only the first 10 observations of the data to calculate $\hat{y}$ is done as follows:

predicted(10, 30)[2]
0.2s
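To be explicit about the shape of these results (an added note of mine), predicted evaluates each sampled weight vector on the grid -1:0.1:1 with an intercept column, so predicted(10, 30)[2] above is a 21×30 matrix, one column per sampled line; Xgrid below is just a name I use to illustrate that internal grid:

Xgrid = [ones(length(-1:.1:1)) -1:.1:1]   # the grid predicted() builds internally
size(Xgrid)                               # (21, 2): 21 grid points, intercept plus slope column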

So you might be wondering, what's the motivation for using only the first 10 and not all observations? Well, we want to demonstrate how Bayesian inference learns the weights, or the parameters of the model, sequentially.
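Before jumping to the figure, here is a quick added peek at the idea in numbers: the posterior mean moves toward the Frequentist estimate and the marginal posterior variances shrink as more observations are folded in (the names μ10, Σ10, etc. are mine):

μ10, Σ10 = posterior(y[1:10], A[1:10, :], (1/a)^2)   # posterior using the first 10 observations
μ20, Σ20 = posterior(y, A, (1/a)^2)                  # posterior using all 20 observations
μ10, μ20                                             # means drift toward the estimate above
diag(Σ10), diag(Σ20)                                 # marginal variances shrink with more data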

Visualization

At this point, we are ready to generate our main visualization. The first function (dataplot) plots the data together with the fitted lines generated from weights sampled from the a priori or the a posteriori; the densities themselves are plotted using the extended method of contour we define below.

function dataplot(
    n::T, 
    sample_size::T; 
    model::Symbol = :posterior, 
    reverse::Bool = false) where T<:Int64
	
  pred = predicted(n, sample_size, model = model)  # sampled fitted lines over the grid
  
  if model === :posterior
    markeralpha = 0.9   # show the observed data points alongside posterior draws
  else
    markeralpha = 0.    # hide the data points when drawing lines from the prior
  end  
  p = plot(pred[1], pred[2][:, 1], linealpha = 0.3)
  for i in 2:size(pred[2], 2)
    plot!(p, pred[1], pred[2][:, i], linealpha = 0.3)
  end
  plot!(
    p,
    A[1:n, 2], 
    y[1:n],   
    xlim = (-1, 1),
    ylim = (-1, 1),
    color = colorant"#595753",
    markershape = :circle,
    markersize = 5,
    markeralpha = markeralpha,
    markerstrokewidth = 0,
    linealpha = 0,
    legend = false, 
    framestyle = :none,
    right_margin = 7mm
  )
end
0.2s
dataplot (generic function with 1 method)
import Plots.contour
function contour(prob_fn)
  
  w0_range = -1:0.01:1
  w1_range = -1:0.01:1
  X = repeat(
    reshape(w0_range, 1, :), 
    length(w1_range), 1)                      # w0 values along the columns, repeated for every row
  Y = repeat(w1_range, 1, length(w0_range))   # w1 values along the rows, repeated for every column
  Z = map(prob_fn, X, Y)                      # density evaluated over the (w0, w1) grid
  
  p = contour(
    w0_range, 
    w1_range, 
    Z, 
    fill = true,
    colorbar = false,
    legend = false, 
    xlim = (-1, 1),
    ylim = (-1, 1),
    framestyle = :none
	)
  
  plot!(p, 
    [wtrue[1]], 
    [wtrue[2]], 
    markershape = :star5, 
    markersize = 8, 
    markerstrokewidth = 0, 
    color = colorant"#fff", 
    ylim = (-1, 1), 
    framestyle = :none)
end
0.1s
contour (generic function with 2 methods)

Tying all the code together gives us this beautiful grid plot.

l = @layout [q q q; q q q; q q q; q q q; q q q; q q q];
p01 = plot(xlim = (-1, 1), ylim = (-1, 1), 
  framestyle = :none, top_margin = 7mm, left_margin = 2mm)
annotate!(p01, [-0.9], [0.9], 
  text("B A Y E S I A N
    S E Q U E N T I A L
    L E A R N I N G", 7, :left, :top)
)
annotate!(p01, [0], [0], 
  text("Likelihood (L)
    Prior/Posterior (C)
    Data Space (R)", 7, :center, :center
  )
)
annotate!(p01, [0.9], [-0.9], 
  text("AL-AHMADGAID B. ASAAD
    estadistika.github.io", 7, :right, :bottom)
)
p02 = contour(prior)
p03 = dataplot(1, 100, model = :prior)
p04 = contour((x, y) -> likelihood(x, y, 1))
p05 = contour((x, y) -> posterior(x, y, 1))
p06 = dataplot(1, 100)
p07 = contour((x, y) -> likelihood(x, y, 3))
p08 = contour((x, y) -> posterior(x, y, 3))
p09 = dataplot(3, 100)
p10 = contour((x, y) -> likelihood(x, y, 5))
p11 = contour((x, y) -> posterior(x, y, 5))
p12 = dataplot(5, 100)
p13 = contour((x, y) -> likelihood(x, y, 10))
p14 = contour((x, y) -> posterior(x, y, 10))
p15 = dataplot(10, 100)
p16 = contour((x, y) -> likelihood(x, y, 20))
p17 = contour((x, y) -> posterior(x, y, 20))
p18 = dataplot(20, 100)
plot(
  p01, p02, p03, 
  p04, p05, p06, 
  p07, p08, p09,
  p10, p11, p12,
  p13, p14, p15,
  p16, p17, p18,
  layout = l, 
  size = (700, 1500),
  dpi = 300,
  link = :both,
  margin = 0mm
)
3.8s

Since we went with style before comprehension, let me guide you through the axes. All panels occupy a unit square, with the contour plots having $w_0$ on the x-axis and $w_1$ on the y-axis. The data-space panels, on the other hand, have the predictor on the x-axis and the response on the y-axis.

Discussions

We commenced the article with emphasis on the approach of Bayesian Statistics to modeling, whereby the estimation of the parameters, as mentioned, is based on Bayes' Theorem, which is a conditional probability with the following form:

$$\mathbb{P}(\mathbf{w}|\mathbf{y})=\frac{\mathbb{P}(\mathbf{y}|\mathbf{w})\mathbb{P}(\mathbf{w})}{\mathbb{P}(\mathbf{y})}.$$
Now we will relate this to the above figure using some analogy, as to how the model sequentially learns the optimal estimate for the parameter of the linear model.

Consider the following: say you forgot where you left your phone, and for some reason you can't ring it up, because it could be dead or can't pick up a signal. Further, suppose you don't want to look for it yourself; rather, you let Mr. Bayes, your staff, do the task. How would he then proceed? Well, let us consider the weight vector, $\mathbf{w}\triangleq[w_0,w_1]^{\text{T}}$, to be the location of your phone. In order to find, or at least best approximate, the exact location, we need to first consider some prior knowledge of the event. In this case, we need to ask the following questions: where were you the last time you had the phone? Were you in the living room? Or in the kitchen? Or in the garage? And so on. In the context of modeling, this prior knowledge about the true location can be described by a probability distribution, and we refer to this as the a priori (or the prior distribution). These candidate distributions are themselves models with parameters, as mentioned above, referred to as the hyperparameters, which we can tweak to describe our prior knowledge of the event. For example, you might consider the kitchen as the most probable place where you left your phone, so we adjust the location parameter of our a priori model to where the kitchen is. Hence Mr. Bayes should be in the kitchen already, assessing the coverage of his search area. Of course, you need to help Mr. Bayes with the extent of the coverage. This coverage or domain can be described by the scale parameter of your a priori. If we relate this to the main plot, we assumed the prior distribution over the weight vector to be a standard bivariate Gaussian distribution, centered at the zero vector with identity variance-covariance matrix. Since this prior knowledge has a broad domain or coverage over the possible values of the weight vector, the samples we get generate random fitted lines, as we see in the right-most plot of the first row of the figure above.

Once we have the prior knowledge in place, that is, we are already in the kitchen and we know how wide the search area is likely to be, we can start looking for evidence. The evidence is the realization of your true model; relating to the math above, these realizations are the $y_i$s, coming from $y_i=f(x|\mathbf{w})$, where $f(x|\mathbf{w})$ is the link function of the true model, which we attempt to approximate with our hypothesized link function, $h(x|\hat{\mathbf{w}})$, that generates the predicted $\hat{y}_i$s. For example, you may not know where exactly your phone is, but you are sure about your habits. So you inform Mr. Bayes that the last time you were with your phone in the kitchen, you were drinking some coffee. Mr. Bayes will then use this as his first evidence, and assess the likelihood of each suspected location in the kitchen. That is, what is the likelihood that a particular location formed (or realized, generated, or is connected to) the first evidence (taking some coffee)? For example, is it even comfortable to drink coffee at the sink? Obviously not, so that location gets very low likelihood; but the dining table, or somewhere close to the coffee maker, is likely. If we assess all possible locations within our coverage using the first evidence, we get the profile likelihood, which is what we have in the first column of the grid plot above, the profile likelihood for the $i$th evidence. Further, with the first evidence observed, the prior knowledge of Mr. Bayes needs to be updated to obtain the posterior distribution. The new distribution will have updated location and scale parameters. If we relate this to the above figure, we can see the samples of fitted lines in the data-space plot (third column, second row) starting to make guesses of possible lines given the first evidence observed. Moving on, you inform Mr. Bayes of the second evidence, that you were reading a newspaper while having your coffee. At this point, the prior assumption of Mr. Bayes for the next posterior will be the posterior from the first evidence, and so the coverage becomes more restrictive and has a new location, which further helps Mr. Bayes in managing the search area. The second evidence, as mentioned, will then return a new posterior. You do this again and again, informing Mr. Bayes of the remaining evidence sequentially, until the last one. The final evidence will end up with the final posterior distribution, which we expect to have a new location parameter, closer to the exact location, and a small scale parameter, covering a small neighborhood around the exact solution. The final posterior will then be your best guess of the exact location of your phone.

This may not be the best analogy, but that is how the above figure sequentially learns the optimal estimate for the weight vector in the Bayesian framework.

Bayesian in Deep Learning

This section deserves a separate article, but I will briefly give some motivation on how we can generalize the above discussion into complex modeling.

The intention of the article is to give the reader a low-level understanding of how Bayes' theorem works, and without loss of generality, I decided to go with simple linear regression to demonstrate the above subject. However, this can be applied to any model indexed by, or a function of, some parameters or weights $\mathbf{w}$, with the assumption that the solution is random but governed by some probability distribution.

Complex models such as those in Deep Learning are usually based on the assumption that the weights are fixed and unknown, which in Statistics is the Frequentist approach to inference, but without assuming a probability distribution on the error of the model. Therefore, if we are to assume some randomness on the weights, we can then use Bayesian inference to derive, or at least approximate (for models with no closed-form solution), the posterior distribution. Approximate Bayesian inference is done via Markov Chain Monte Carlo (MCMC) or Variational Inference, which we can tackle in a separate post.

Libraries

There are several libraries for doing Bayesian inference; the classic, and still one of the most powerful, is Stan. For Python, we have PyMC3, Pyro (based on PyTorch), and TensorFlow Probability. For Julia, we have Turing.jl, Mamba.jl, Gen.jl, and Stan.jl. I will have a separate article on these libraries.
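To give a flavor of what these libraries look like, here is a minimal hedged sketch of my own (not from the original article) of the same simple linear model in Turing.jl, assuming a recent version of the package has been added to the environment; unlike the closed-form treatment above, the noise scale σ also gets a prior here, and the posterior is approximated by MCMC:

using Turing

@model function bayeslm(x, y)
  # priors on the weights and the noise scale (illustrative choices)
  w0 ~ Normal(0, 1)
  w1 ~ Normal(0, 1)
  σ ~ Exponential(1)
  # likelihood: the same simple linear model as above
  for i in eachindex(y)
    y[i] ~ Normal(w0 + w1 * x[i], σ)
  end
end

chain = sample(bayeslm(x, y), NUTS(), 1_000)   # approximate posterior samples via MCMC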

Next Steps

The obvious next step for readers to try out is to model the variance as well, since in the above result the variance of the innovation, or the error, is assumed known ($\alpha^{-1}$ in the proposition above). Further, one might consider Frequentist sequential learning as well, or proceed with other nonlinear and complex models, such as Neural Networks. We can have these in a separate article.

versioninfo()
2.3s