Code should execute sequentially if run in a Jupyter notebook

- See the setup page to install Jupyter, Julia (0.6+) and all necessary libraries
- Please direct feedback to contact@quantecon.org or the discourse forum

# Vectors, Arrays and Matrices

“Let’s be clear: the work of science has nothing whatever to do with consensus. Consensus is the business of politics. Science, on the contrary, requires only one investigator who happens to be right, which means that he or she has results that are verifiable by reference to the real world. In science consensus is irrelevant. What is relevant is reproducible results.” – Michael Crichton

## Overview

In Julia, arrays are the most important data type for working with collections of numerical data

In this lecture we give more details on

- creating and manipulating Julia arrays
- fundamental array processing operations
- basic matrix algebra

## Array Basics

### Shape and Dimension

We’ve already seen some Julia arrays in action

```
a = [10, 20, 30]
```

```
3-element Array{Int64,1}:
10
20
30
```

```
a = ["foo", "bar", 10]
```

```
3-element Array{Any,1}:
"foo"
"bar"
10
```

The REPL tells us that the arrays are of types `Array{Int64,1}` and `Array{Any,1}` respectively

Here `Int64` and `Any` are the element types inferred by the compiler

We’ll talk more about types later on

The `1` in `Array{Int64,1}` and `Array{Any,1}` indicates that the array is one dimensional

This is the default for many Julia functions that create arrays

```
typeof(randn(100))
```

```
Array{Float64,1}
```

To say that an array is one dimensional is to say that it is flat — neither a row nor a column vector

We can also confirm that `a` is flat using the `size()` or `ndims()` functions

```
size(a)
```

```
(3,)
```

```
ndims(a)
```

```
1
```

The syntax `(3,)` displays a tuple containing one element, the size along the one dimension that exists

Here are some functions that create two-dimensional arrays

```
eye(3)
```

```
3×3 Array{Float64,2}:
1.0 0.0 0.0
0.0 1.0 0.0
0.0 0.0 1.0
```

```
diagm([2, 4])
```

```
2×2 Array{Int64,2}:
2 0
0 4
```

```
size(eye(3))
```

```
(3,3)
```

#### Array vs Vector vs Matrix

In Julia, in addition to arrays you will see the types `Vector` and `Matrix`

However, these are just aliases for one- and two-dimensional arrays respectively

```
Array{Int64, 1} == Vector{Int64}
```

```
true
```

```
Array{Int64, 2} == Matrix{Int64}
```

```
true
```

```
Array{Int64, 1} == Matrix{Int64}
```

```
false
```

```
Array{Int64, 3} == Matrix{Int64}
```

```
false
```

In particular, a `Vector` in Julia is a flat array

#### Changing Dimensions

The primary function for changing the dimension of an array is `reshape()`

```
a = [10, 20, 30, 40]
```

```
4-element Array{Int64,1}:
10
20
30
40
```

```
b = reshape(a, 2, 2)
```

```
2×2 Array{Int64,2}:
10 30
20 40
```

```
b
```

```
2×2 Array{Int64,2}:
10 30
20 40
```

Notice that this function returns a “view” on the existing array

This means that changing the data in the new array will modify the data in the old one:

```
b[1, 1] = 100 # Continuing the previous example
```

```
100
```

```
b
```

```
2×2 Array{Int64,2}:
100 30
20 40
```

```
a
```

```
4-element Array{Int64,1}:
100
20
30
40
```

To collapse an array along one dimension you can use `squeeze()`

```
a = [1 2 3 4] # Two dimensional
```

```
1×4 Array{Int64,2}:
1 2 3 4
```

```
squeeze(a, 1)
```

```
4-element Array{Int64,1}:
1
2
3
4
```

The return value is an array with the specified dimension “flattened”

#### Why Flat Arrays?

As we’ve seen, in Julia we have both

- one-dimensional arrays (i.e., flat arrays)
- arrays of size `(1, n)` or `(n, 1)` that represent row and column vectors respectively

Why do we need both?

On one hand, dimension matters when we come to matrix algebra

- Multiplying by a row vector is different from multiplying by a column vector

On the other hand, we use arrays in many settings that don’t involve matrix algebra

In such cases, we don’t care about the distinction between row and column vectors

This is why many Julia functions return flat arrays by default
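
A small sketch of the distinction (the variable names here are illustrative):

```julia
v = [1.0, 2.0]          # flat, one dimensional
row = [1.0 2.0]         # 1×2, two dimensional
col = reshape(v, 2, 1)  # 2×1, two dimensional

A = [1.0 2.0; 3.0 4.0]

size(row * A)   # (1, 2) -- row times matrix gives a row
size(A * col)   # (2, 1) -- matrix times column gives a column

sum(v)          # 3.0 -- for non-algebraic work the flat vector suffices
```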

### Creating Arrays

#### Functions that Return Arrays

We’ve already seen some functions for creating arrays

```
eye(2)
```

```
2×2 Array{Float64,2}:
1.0 0.0
0.0 1.0
```

```
zeros(3)
```

```
3-element Array{Float64,1}:
0.0
0.0
0.0
```

You can create an uninitialized array using the `Array()` constructor

```
x = Array{Float64}(2, 2)
```

```
2×2 Array{Float64,2}:
6.92478e-310 6.92477e-310
6.92477e-310 0.0
```

The printed values you see here are just garbage values

(the existing contents of the allocated memory slots being interpreted as 64 bit floats)

Other important functions that return arrays are

```
ones(2, 2)
```

```
2×2 Array{Float64,2}:
1.0 1.0
1.0 1.0
```

```
fill("foo", 2, 2)
```

```
2×2 Array{String,2}:
"foo" "foo"
"foo" "foo"
```

#### Manual Array Definitions

As we’ve seen, you can create one dimensional arrays from manually specified data like so

```
a = [10, 20, 30, 40]
```

```
4-element Array{Int64,1}:
10
20
30
40
```

In two dimensions we can proceed as follows

```
a = [10 20 30 40] # Two dimensional, shape is 1 x n
```

```
1×4 Array{Int64,2}:
10 20 30 40
```

```
ndims(a)
```

```
2
```

```
a = [10 20; 30 40] # 2 x 2
```

```
2×2 Array{Int64,2}:
10 20
30 40
```

You might then assume that `a = [10; 20; 30; 40]` creates a two dimensional column vector, but unfortunately this isn’t the case

```
a = [10; 20; 30; 40]
```

```
4-element Array{Int64,1}:
10
20
30
40
```

```
ndims(a)
```

```
1
```

Instead, transpose the row vector

```
a = [10 20 30 40]'
```

```
4×1 Array{Int64,2}:
10
20
30
40
```

```
ndims(a)
```

```
2
```

### Array Indexing

We’ve already seen the basics of array indexing

```
a = collect(10:10:40)
```

```
4-element Array{Int64,1}:
10
20
30
40
```

```
a[end-1]
```

```
30
```

```
a[1:3]
```

```
3-element Array{Int64,1}:
10
20
30
```

For 2D arrays the index syntax is straightforward

```
a = randn(2, 2)
```

```
2×2 Array{Float64,2}:
0.665276 -3.12382
1.49257 -0.756258
```

```
a[1, 1]
```

```
0.6652764178803451
```

```
a[1, :] # First row
```

```
2-element Array{Float64,1}:
0.665276
-3.12382
```

```
a[:, 1] # First column
```

```
2-element Array{Float64,1}:
0.665276
1.49257
```

Booleans can be used to extract elements

```
a = randn(2, 2)
```

```
2×2 Array{Float64,2}:
-0.614422 0.0864195
0.865028 -0.76479
```

```
b = [true false; false true]
```

```
2×2 Array{Bool,2}:
true false
false true
```

```
a[b]
```

```
2-element Array{Float64,1}:
-0.614422
-0.76479
```

This is useful for conditional extraction, as we’ll see below

An aside: some or all elements of an array can be set equal to one number using slice notation

```
a = Array{Float64}(4)
```

```
4-element Array{Float64,1}:
6.92478e-310
6.92477e-310
6.92477e-310
6.92477e-310
```

```
a[2:end] = 42
```

```
42
```

```
a
```

```
4-element Array{Float64,1}:
6.92478e-310
42.0
42.0
42.0
```

### Passing Arrays

As in Python, all arrays are passed by reference

What this means is that if `a` is an array and we set `b = a` then `a` and `b` point to exactly the same data

Hence any change in `b` is reflected in `a`

```
a = ones(3)
```

```
3-element Array{Float64,1}:
1.0
1.0
1.0
```

```
b = a
```

```
3-element Array{Float64,1}:
1.0
1.0
1.0
```

```
b[3] = 44
```

```
44
```

```
a
```

```
3-element Array{Float64,1}:
1.0
1.0
44.0
```

If you are a MATLAB programmer perhaps you are recoiling in horror at this idea

But this is actually the more sensible default – after all, it’s very inefficient to copy arrays unnecessarily

If you do need an actual copy in Julia, just use `copy()`

```
a = ones(3)
```

```
3-element Array{Float64,1}:
1.0
1.0
1.0
```

```
b = copy(a)
```

```
3-element Array{Float64,1}:
1.0
1.0
1.0
```

```
b[3] = 44
```

```
44
```

```
a
```

```
3-element Array{Float64,1}:
1.0
1.0
1.0
```

## Operations on Arrays

### Array Methods

Julia provides standard functions for acting on arrays, some of which we’ve already seen

```
a = [-1, 0, 1]
```

```
3-element Array{Int64,1}:
-1
0
1
```

```
length(a)
```

```
3
```

```
sum(a)
```

```
0
```

```
mean(a)
```

```
0.0
```

```
std(a)
```

```
1.0
```

```
var(a)
```

```
1.0
```

```
maximum(a)
```

```
1
```

```
minimum(a)
```

```
-1
```

```
b = sort(a, rev=true) # Returns new array, original not modified
```

```
3-element Array{Int64,1}:
1
0
-1
```

```
b === a # === tests if arrays are identical (i.e. share the same memory)
```

```
false
```

```
b = sort!(a, rev=true) # Returns *modified original* array
```

```
3-element Array{Int64,1}:
1
0
-1
```

```
b === a
```

```
true
```

### Matrix Algebra

For two dimensional arrays, `*` means matrix multiplication

```
a = ones(1, 2)
```

```
1×2 Array{Float64,2}:
1.0 1.0
```

```
b = ones(2, 2)
```

```
2×2 Array{Float64,2}:
1.0 1.0
1.0 1.0
```

```
a * b
```

```
1×2 Array{Float64,2}:
2.0 2.0
```

```
b * a'
```

```
2×1 Array{Float64,2}:
2.0
2.0
```

To solve the linear system `A X = B` for `X`, use `A \ B`

```
A = [1 2; 2 3]
```

```
2×2 Array{Int64,2}:
1 2
2 3
```

```
B = ones(2, 2)
```

```
2×2 Array{Float64,2}:
1.0 1.0
1.0 1.0
```

```
A \ B
```

```
2×2 Array{Float64,2}:
-1.0 -1.0
1.0 1.0
```

```
inv(A) * B
```

```
2×2 Array{Float64,2}:
-1.0 -1.0
1.0 1.0
```

Although the last two operations give the same result, the first one is numerically more stable and should be preferred in most cases

Multiplying two **one** dimensional vectors gives an error — which is reasonable since the meaning is ambiguous

```
ones(2) * ones(2)
```

```
MethodError: no method matching *(::Array{Float64,1}, ::Array{Float64,1})
Closest candidates are:
*(::Any, ::Any, ::Any, ::Any...) at operators.jl:138
*{T<:Union{Complex{Float32},Complex{Float64},Float32,Float64},S}(::Union{Base.ReshapedArray{T<:Union{Complex{Float32},Complex{Float64},Float32,Float64},2,A<:DenseArray,MI<:Tuple{Vararg{Base.MultiplicativeInverses.SignedMultiplicativeInverse{Int64},N}}},DenseArray{T<:Union{Complex{Float32},Complex{Float64},Float32,Float64},2},SubArray{T<:Union{Complex{Float32},Complex{Float64},Float32,Float64},2,A<:Union{Base.ReshapedArray{T,N,A<:DenseArray,MI<:Tuple{Vararg{Base.MultiplicativeInverses.SignedMultiplicativeInverse{Int64},N}}},DenseArray},I<:Tuple{Vararg{Union{Base.AbstractCartesianIndex,Colon,Int64,Range{Int64}},N}},L}}, ::Union{Base.ReshapedArray{S,1,A<:DenseArray,MI<:Tuple{Vararg{Base.MultiplicativeInverses.SignedMultiplicativeInverse{Int64},N}}},DenseArray{S,1},SubArray{S,1,A<:Union{Base.ReshapedArray{T,N,A<:DenseArray,MI<:Tuple{Vararg{Base.MultiplicativeInverses.SignedMultiplicativeInverse{Int64},N}}},DenseArray},I<:Tuple{Vararg{Union{Base.AbstractCartesianIndex,Colon,Int64,Range{Int64}},N}},L}}) at linalg/matmul.jl:79
*(::Base.LinAlg.AbstractTriangular{T,S<:AbstractArray{T,2}}, ::AbstractArray{T,1}) at linalg/triangular.jl:1496
...
```

If you want an inner product in this setting use `dot()`

```
dot(ones(2), ones(2))
```

```
2.0
```

Matrix multiplication using one dimensional vectors is a bit inconsistent — pre-multiplication by the matrix is OK, but post-multiplication gives an error

```
b = ones(2, 2)
```

```
2×2 Array{Float64,2}:
1.0 1.0
1.0 1.0
```

```
b * ones(2)
```

```
2-element Array{Float64,1}:
2.0
2.0
```

```
ones(2) * b
```

```
DimensionMismatch("A has dimensions (2,1) but B has dimensions (2,2)")
in gemm_wrapper!(::Array{Float64,2}, ::Char, ::Char, ::Array{Float64,2}, ::Array{Float64,2}) at ./linalg/matmul.jl:309
in *(::Array{Float64,1}, ::Array{Float64,2}) at ./linalg/matmul.jl:86
```

It’s probably best to give your vectors an explicit dimension before multiplying them against matrices
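
One simple way to do this (a sketch, not the only option) is to `reshape` the flat vector into an explicit row first:

```julia
b = ones(2, 2)
v = ones(2)              # flat vector -- v * b would error

vrow = reshape(v, 1, 2)  # explicit 1×2 row
vrow * b                 # now well defined: [2.0 2.0]
```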

### Elementwise Operations

#### Algebraic Operations

Suppose that we wish to multiply every element of matrix `A` with the corresponding element of matrix `B`

In that case we need to replace `*` (matrix multiplication) with `.*` (elementwise multiplication)

For example, compare

```
ones(2, 2) * ones(2, 2) # Matrix multiplication
```

```
2×2 Array{Float64,2}:
2.0 2.0
2.0 2.0
```

```
ones(2, 2) .* ones(2, 2) # Element by element multiplication
```

```
2×2 Array{Float64,2}:
1.0 1.0
1.0 1.0
```

This is a general principle: `.x` means apply operator `x` elementwise

```
A = -ones(2, 2)
```

```
2×2 Array{Float64,2}:
-1.0 -1.0
-1.0 -1.0
```

```
A.^2 # Square every element
```

```
2×2 Array{Float64,2}:
1.0 1.0
1.0 1.0
```

However, in practice some operations are unambiguous and hence the `.` can be omitted

```
ones(2, 2) + ones(2, 2) # Same as ones(2, 2) .+ ones(2, 2)
```

```
2×2 Array{Float64,2}:
2.0 2.0
2.0 2.0
```

Scalar multiplication is similar

```
A = ones(2, 2)
```

```
2×2 Array{Float64,2}:
1.0 1.0
1.0 1.0
```

```
2 * A # Same as 2 .* A
```

```
2×2 Array{Float64,2}:
2.0 2.0
2.0 2.0
```

In fact you can omit the `*` altogether and just write `2A`
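
A quick check that numeric literal juxtaposition behaves as multiplication:

```julia
A = ones(2, 2)
2A == 2 * A   # true -- `2A` is parsed as `2 * A`
```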

#### Elementwise Comparisons

Elementwise comparisons also use the `.x` style notation

```
a = [10, 20, 30]
```

```
3-element Array{Int64,1}:
10
20
30
```

```
b = [-100, 0, 100]
```

```
3-element Array{Int64,1}:
-100
0
100
```

```
b .> a
```

```
3-element BitArray{1}:
false
false
true
```

```
a .== b
```

```
3-element BitArray{1}:
false
false
false
```

We can also do comparisons against scalars with parallel syntax

```
b
```

```
3-element Array{Int64,1}:
-100
0
100
```

```
b .> 1
```

```
3-element BitArray{1}:
false
false
true
```

This is particularly useful for *conditional extraction* — extracting the elements of an array that satisfy a condition

```
a = randn(4)
```

```
4-element Array{Float64,1}:
-1.55242
-1.0166
-1.27692
1.38091
```

```
a .< 0
```

```
4-element BitArray{1}:
true
true
true
false
```

```
a[a .< 0]
```

```
3-element Array{Float64,1}:
-1.55242
-1.0166
-1.27692
```

### Vectorized Functions

Julia provides standard mathematical functions such as `log`, `exp`, `sin`, etc.

```
log(1.0)
```

```
0.0
```

By default, these functions act *elementwise* on arrays

```
log.(ones(4))
```

```
4-element Array{Float64,1}:
0.0
0.0
0.0
0.0
```

Functions that act elementwise on arrays in this manner are called **vectorized functions**

Note that we can get the same result as with a comprehension or more explicit loop

```
[log(x) for x in ones(4)]
```

```
4-element Array{Float64,1}:
0.0
0.0
0.0
0.0
```

In Julia loops are typically fast, so the need for vectorized functions is less pressing than in some other high-level languages

Nonetheless the syntax is convenient
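
For comparison, the explicit loop version of the computation above can be sketched as:

```julia
x = ones(4)
y = similar(x)          # allocate an array of the same shape and element type
for i in eachindex(x)
    y[i] = log(x[i])    # fill in element by element
end
y                       # [0.0, 0.0, 0.0, 0.0]
```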

## Linear Algebra

Julia provides a great deal of additional functionality related to linear operations

```
A = [1 2; 3 4]
```

```
2×2 Array{Int64,2}:
1 2
3 4
```

```
det(A)
```

```
-2.0
```

```
trace(A)
```

```
5
```

```
eigvals(A)
```

```
2-element Array{Float64,1}:
-0.372281
5.37228
```

```
rank(A)
```

```
2
```

For more details see the linear algebra section of the standard library

## Exercises

### Exercise 1

This exercise is on some matrix operations that arise in certain problems, including when dealing with linear stochastic difference equations

If you aren’t familiar with all the terminology don’t be concerned — you can skim read the background discussion and focus purely on the matrix exercise

With that said, consider the stochastic difference equation

\[X_{t+1} = A X_t + b + \Sigma W_{t+1} \tag{1}\]

Here

- \(X_t, b\) and \(X_{t+1}\) are \(n \times 1\)
- \(A\) is \(n \times n\)
- \(\Sigma\) is \(n \times k\)
- \(W_t\) is \(k \times 1\) and \(\{W_t\}\) is iid with zero mean and variance-covariance matrix equal to the identity matrix

Let \(S_t\) denote the \(n \times n\) variance-covariance matrix of \(X_t\)

Using the rules for computing variances in matrix expressions, it can be shown from (1) that \(\{S_t\}\) obeys

\[S_{t+1} = A S_t A' + \Sigma \Sigma' \tag{2}\]

It can be shown that, provided all eigenvalues of \(A\) lie within the unit circle, the sequence \(\{S_t\}\) converges to a unique limit \(S\)

This is the **unconditional variance** or **asymptotic variance** of the stochastic difference equation

As an exercise, try writing a simple function that solves for the limit \(S\) by iterating on (2) given \(A\) and \(\Sigma\)

To test your solution, observe that the limit \(S\) is a solution to the matrix equation

\[S = A S A' + \Sigma \Sigma'\]

This kind of equation is known as a **discrete time Lyapunov equation**

The QuantEcon package provides a function called `solve_discrete_lyapunov` that implements a fast “doubling” algorithm to solve this equation

Test your iterative method against `solve_discrete_lyapunov` using the matrices \(A\) and \(\Sigma\)

## Solutions

### Exercise 1

Here’s the iterative approach

```
function compute_asymptotic_var(A,
                                Sigma,
                                S0=Sigma * Sigma',
                                tolerance=1e-6,
                                maxiter=500)
    V = Sigma * Sigma'
    S = S0
    err = tolerance + 1
    i = 1
    while err > tolerance && i <= maxiter
        next_S = A * S * A' + V
        err = norm(S - next_S)
        S = next_S
        i = i + 1
    end
    return S
end
```

```
A = [0.8 -0.2;
     -0.1 0.7]
Sigma = [0.5 0.4;
         0.4 0.6]
```

Note that all eigenvalues of \(A\) lie inside the unit disc:

```
maximum(abs, eigvals(A))
```

```
0.9
```

Let’s compute the asymptotic variance:

```
compute_asymptotic_var(A, Sigma)
```

```
2×2 Array{Float64,2}:
0.671228 0.633476
0.633476 0.858874
```

Now let’s do the same thing using QuantEcon’s `solve_discrete_lyapunov()` function and check that we get the same result

```
using QuantEcon
```

```
solve_discrete_lyapunov(A, Sigma * Sigma')
```

```
2×2 Array{Float64,2}:
0.671231 0.633474
0.633474 0.858874
```