4.1 Linear Algebra in Julia

First we need to learn how to do linear algebra in Julia. In the first exercise we learned how to generate and manipulate vectors and matrices. We will now learn how to do linear algebra with these objects.

Transposition

As in other languages, A' is the conjugate transpose, or adjoint, of A:

A = [1 2 3; 4 5 6; 7 8 10]
3×3 Matrix{Int64}:
 1  2   3
 4  5   6
 7  8  10
A'
3×3 adjoint(::Matrix{Int64}) with eltype Int64:
 1  4   7
 2  5   8
 3  6  10
transpose(A)
3×3 transpose(::Matrix{Int64}) with eltype Int64:
 1  4   7
 2  5   8
 3  6  10
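For a real matrix such as A the two agree, but for complex matrices they differ, since ' also conjugates the entries. A small illustrative example:

```julia
# For complex matrices, ' conjugates entries while transpose does not
C = [1+2im 3; 4 5-1im]
C'            # adjoint: entries are conjugated, e.g. 1+2im becomes 1-2im
transpose(C)  # transpose only: entries are unchanged
```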

Transposed multiplication

Julia allows us to write a product like A'*A without the *:

A'A
3×3 Matrix{Int64}:
 66   78   97
 78   93  116
 97  116  145

Note that you can also use A\b to solve a linear system even if A is not square. For an overdetermined system (a “tall” matrix), A\b gives the least-squares solution; for an underdetermined system (a “short” matrix), it gives the minimum norm solution.
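A quick sketch of both cases; the matrices here are illustrative examples:

```julia
using LinearAlgebra

# Overdetermined ("tall") system: A \ b returns the least-squares solution
A_tall = [1.0 1; 1 2; 1 3]
b_tall = [1.0, 2, 2]
x_ls = A_tall \ b_tall        # ≈ [2/3, 1/2]

# Underdetermined ("short") system: A \ b returns the minimum-norm solution
A_short = [1.0 1 1]
b_short = [3.0]
x_mn = A_short \ b_short      # the smallest-norm x with A_short * x == b_short
```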

Linear algebra package

Julia has a built-in package for linear algebra called LinearAlgebra. This package contains many useful functions for linear algebra. Let’s look at some of them.

using LinearAlgebra

Computing the dot product of two vectors is done using the dot function:

v = [1, 2, 3]
dot(v, v)
14
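There are several equivalent ways to write the dot product; w here is just a second example vector:

```julia
using LinearAlgebra

v = [1, 2, 3]
w = [4, 5, 6]
dot(v, w)   # 32
v ⋅ w       # 32, the ⋅ operator (typed \cdot<tab>) is an alias for dot
v' * w      # 32, row vector times column vector
```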

The outer product of two vectors is computed using the transpose of the second vector:

v*v'
3×3 Matrix{Int64}:
 1  2  3
 2  4  6
 3  6  9
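The two vectors need not have the same length; the outer product of an m-vector and an n-vector is an m×n matrix. A small illustrative example:

```julia
u = [1, 2]
w = [10, 20, 30]
M = u * w'    # 2×3 matrix with entries M[i, j] == u[i] * w[j]
```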

We can compute the inverse of a matrix using the inv function:

inv(A)
3×3 Matrix{Float64}:
 -0.666667  -1.33333   1.0
 -0.666667   3.66667  -2.0
  1.0       -2.0       1.0
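We can sanity-check the result: multiplying A by its inverse should reproduce the identity matrix up to floating-point roundoff. A quick check:

```julia
using LinearAlgebra

A = [1 2 3; 4 5 6; 7 8 10]
# I is the identity from LinearAlgebra; we materialize it as a 3×3 matrix
A * inv(A) ≈ Matrix{Float64}(I, 3, 3)   # true up to roundoff
```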

Linear Equation solver in Julia

In the next Notebook we will implement our own solvers. However, Julia already has a built-in solver for linear equations. We can use the backslash operator \ to solve a linear equation. For example, we can solve the linear equation \(Ax = b\) by writing x = A \ b. Note, however, that while this solver handles a wide range of linear equations quite well, more sophisticated solvers adapted to specific problem classes are often more efficient and accurate. It is also important to keep in mind that a generic solver like the backslash operator can fail silently or give you a very inaccurate solution without warning.

A = [2 0 4 3 ; -2 0 2 -13; 1 15 2 -4.5 ; -4 5 -7 -10]
x = [4,9,9,4]
b = A*x
x_hat = A\b
4-element Vector{Float64}:
 3.999999999999949
 9.000000000000005
 9.000000000000018
 4.000000000000011
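Since we constructed b from a known x, we can quantify the quality of the computed solution via the residual and the error against the true x. A quick check, reusing the same matrix:

```julia
using LinearAlgebra

A = [2 0 4 3; -2 0 2 -13; 1 15 2 -4.5; -4 5 -7 -10]
x = [4, 9, 9, 4]
b = A * x
x_hat = A \ b
norm(b - A * x_hat)   # residual: how well x_hat satisfies the system
norm(x - x_hat)       # forward error: distance to the true solution
```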

We can also compute the LU factorization of \(A\) using the lu function (from the LinearAlgebra package). It returns a factorization object containing the lower triangular matrix \(L\), the upper triangular matrix \(U\), and the row permutation used for pivoting.

lu(A)
LU{Float64, Matrix{Float64}, Vector{Int64}}
L factor:
4×4 Matrix{Float64}:
  1.0    0.0       0.0        0.0
 -0.25   1.0       0.0        0.0
  0.5   -0.153846  1.0        0.0
 -0.5    0.153846  0.0833333  1.0
U factor:
4×4 Matrix{Float64}:
 -4.0   5.0   -7.0      -10.0
  0.0  16.25   0.25      -7.0
  0.0   0.0    5.53846   -9.07692
  0.0   0.0    0.0       -0.166667
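Because of the row pivoting, L*U reproduces A with its rows permuted; the factorization object stores the permutation in the field p. A quick check:

```julia
using LinearAlgebra

A = [2 0 4 3; -2 0 2 -13; 1 15 2 -4.5; -4 5 -7 -10]
F = lu(A)
F.L * F.U ≈ A[F.p, :]   # true: L*U equals the row-permuted A
```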

For more advanced linear equation solving we can use IterativeSolvers.jl, which also contains the Jacobi and Gauss-Seidel solvers from the lecture. For a list of available solvers, see the documentation: https://iterativesolvers.julialinearalgebra.org/dev/
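As a small preview of what the next Notebook covers, a bare-bones Jacobi iteration can be sketched in a few lines. The function name and the fixed iteration count are illustrative choices, not part of IterativeSolvers.jl:

```julia
using LinearAlgebra

# Jacobi iteration sketch: x ← D⁻¹ (b − (A − D) x), with D = diag(A)
function jacobi_sketch(A, b; iters = 100)
    d = diag(A)                  # diagonal of A
    x = zeros(length(b))         # start from the zero vector
    for _ in 1:iters
        x = (b .- (A - Diagonal(d)) * x) ./ d
    end
    return x
end

A = [4.0 1; 1 3]                 # diagonally dominant, so the iteration converges
b = [1.0, 2.0]
jacobi_sketch(A, b)              # approaches A \ b
```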

We can also use the inverse to solve a linear equation. However, this is generally not recommended, since it is usually slower and less accurate than using \.

x_hat_2 = inv(A)*b
4-element Vector{Float64}:
 4.000000000000025
 9.000000000000025
 9.000000000000036
 4.000000000000051

Norm of a vector

There are several commonly used norms of a vector.

  • The 2-norm is the Euclidean norm \(\|x\|_2 = \sqrt{\sum_{i=1}^n x_i^2}\)

  • The 1-norm is the sum of the absolute values of the elements \(\|x\|_1 = \sum_{i=1}^n |x_i|\)

  • The \(\infty\)-norm is the maximum absolute value of the elements \(\|x\|_\infty = \max_i |x_i|\)

The norm of a vector is computed using the norm function:

x = [2,-3,1,-1]
twonorm = norm(x) # 2-norm
3.872983346207417
infnorm = norm(x,Inf)
3.0
onenorm = norm(x,1)
7.0
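These values match the definitions above, which we can verify directly:

```julia
using LinearAlgebra

x = [2, -3, 1, -1]
sqrt(sum(abs2, x)) ≈ norm(x)       # 2-norm: sqrt of the sum of squares
sum(abs, x) == norm(x, 1)          # 1-norm: sum of absolute values
maximum(abs, x) == norm(x, Inf)    # ∞-norm: maximum absolute value
```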

We can also normalize a vector using the normalize function:

normalize(x,2)
4-element Vector{Float64}:
  0.5163977794943222
 -0.7745966692414833
  0.2581988897471611
 -0.2581988897471611
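The result has unit 2-norm and points in the same direction as x, which we can check:

```julia
using LinearAlgebra

x = [2, -3, 1, -1]
u = normalize(x, 2)
norm(u) ≈ 1.0      # unit length
u ≈ x / norm(x)    # same direction: x scaled by 1/‖x‖₂
```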

Matrix norms

There are also different matrix norms. If we view the matrix similar to a vector and use the 2-norm we get the Frobenius norm \(\|A\|_F = \sqrt{\sum_{i=1}^m \sum_{j=1}^n |a_{ij}|^2}\). However, often we want to use the induced matrix norm which is defined as \(\|A\| = \max_{x \neq 0} \frac{\|Ax\|}{\|x\|}\).

A = [ 2 0; 1 -1 ]
2×2 Matrix{Int64}:
 2   0
 1  -1
Fronorm = norm(A)
2.449489742783178
twonorm = opnorm(A)
2.2882456112707374
onenorm = opnorm(A,1) # 1-norm: maximum absolute column sum
3.0
infnorm = opnorm(A,Inf) # infinity-norm: maximum absolute row sum
2.0
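These induced norms can be checked against their closed forms: the 1-norm is the maximum absolute column sum, the ∞-norm is the maximum absolute row sum, and the induced 2-norm equals the largest singular value of the matrix:

```julia
using LinearAlgebra

A = [2 0; 1 -1]
opnorm(A, 1) == maximum(sum(abs, A; dims = 1))    # max column sum → 3.0
opnorm(A, Inf) == maximum(sum(abs, A; dims = 2))  # max row sum → 2.0
opnorm(A) ≈ maximum(svdvals(A))                   # largest singular value
norm(A) ≈ sqrt(sum(abs2, A))                      # Frobenius norm
```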