Code should execute sequentially if run in a Jupyter notebook

- See the setup page to install Jupyter, Python and all necessary libraries
- Please direct feedback to contact@quantecon.org or the discourse forum

# NumPy

“Let’s be clear: the work of science has nothing whatever to do with consensus. Consensus is the business of politics. Science, on the contrary, requires only one investigator who happens to be right, which means that he or she has results that are verifiable by reference to the real world. In science consensus is irrelevant. What is relevant is reproducible results.” – Michael Crichton


## Introduction to NumPy

The essential problem that NumPy solves is fast array processing

For example, suppose we want to create an array of 1 million random draws from a uniform distribution and compute the mean

If we did this in pure Python it would be orders of magnitude slower than C or Fortran

This is because

- Loops in Python over Python data types like lists carry significant overhead
- C and Fortran code contains a lot of type information that can be used for optimization
- Various optimizations can be carried out during compilation, when the compiler sees the instructions as a whole

However, for a task like the one described above there’s no need to switch back to C or Fortran

Instead we can use NumPy, where the instructions look like this:

```
import numpy as np
x = np.random.uniform(0, 1, size=1000000)
x.mean()
```

The operations of creating the array and computing its mean are both passed out to carefully optimized machine code compiled from C

More generally, NumPy sends operations *in batches* to optimized C and Fortran code

This is similar in spirit to Matlab, which provides an interface to fast Fortran routines
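To get a concrete sense of the gap, here is a rough timing sketch of the task above (absolute numbers vary by machine, but the vectorized version is usually orders of magnitude faster):

```python
import time
import numpy as np

n = 1_000_000
x = np.random.uniform(0, 1, size=n)

# Pure Python accumulation, looping over the array one element at a time
start = time.perf_counter()
total = 0.0
for value in x:
    total += value
loop_time = time.perf_counter() - start
loop_mean = total / n

# The same computation handled in one batch by compiled code
start = time.perf_counter()
numpy_mean = x.mean()
numpy_time = time.perf_counter() - start

print(f"loop: {loop_time:.4f}s  numpy: {numpy_time:.6f}s")
```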

### A Comment on Vectorization

NumPy is great for operations that are naturally *vectorized*

Vectorized operations are precompiled routines that can be sent in batches, like

- matrix multiplication and other linear algebra routines
- generating a vector of random numbers
- applying a fixed transformation (e.g., sine or cosine) to an entire array

In a later lecture we’ll discuss code that isn’t easy to vectorize and how such routines can also be optimized

## NumPy Arrays

The most important thing that NumPy defines is an array data type formally called a `numpy.ndarray`

NumPy arrays power a large proportion of the scientific Python ecosystem

To create a NumPy array containing only zeros we use `np.zeros`

```
a = np.zeros(3)
a
```

```
type(a)
```

NumPy arrays are somewhat like native Python lists, except that

- Data *must be homogeneous* (all elements of the same type)
- These types must be one of the data types (`dtypes`) provided by NumPy

The most important of these dtypes are:

- `float64`: 64 bit floating point number
- `int64`: 64 bit integer
- `bool`: 8 bit True or False

There are also dtypes to represent complex numbers, unsigned integers, etc

On modern machines, the default dtype for arrays is `float64`

```
a = np.zeros(3)
type(a[0])
```

If we want to use integers we can specify as follows:

```
a = np.zeros(3, dtype=int)
type(a[0])
```

### Shape and Dimension

```
z = np.zeros(10)
```

Here `z` is a *flat* array with no dimension — neither row nor column vector

The dimension is recorded in the `shape` attribute, which is a tuple

```
z.shape
```

Here the shape tuple has only one element, which is the length of the array (tuples with one element end with a comma)

To give it dimension, we can change the `shape` attribute

```
z.shape = (10, 1)
z
```

```
z = np.zeros(4)
z.shape = (2, 2)
z
```
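Rather than assigning to the `shape` attribute, the same result can be obtained with the `reshape` method, which returns an array with the new shape (a view onto the same data where possible):

```python
import numpy as np

z = np.zeros(4)
y = z.reshape(2, 2)   # new shape, same underlying data
y[0, 0] = 1.0
print(z)              # the change is visible through z as well
```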

### Creating Arrays

As we’ve seen, the `np.zeros` function creates an array of zeros

You can probably guess what `np.ones` creates

Related is `np.empty`, which creates arrays in memory that can later be populated with data

```
z = np.empty(3)
z
```

The numbers you see here are garbage values

(Python allocates 3 contiguous 64 bit pieces of memory, and the existing contents of those memory slots are interpreted as `float64` values)

To set up a grid of evenly spaced numbers use `np.linspace`

```
z = np.linspace(2, 4, 5) # From 2 to 4, with 5 elements
```

To create an identity matrix use either `np.identity` or `np.eye`

```
z = np.identity(2)
z
```

In addition, NumPy arrays can be created from Python lists, tuples, etc. using `np.array`

```
z = np.array([10, 20]) # ndarray from Python list
z
```

```
type(z)
```

```
z = np.array((10, 20), dtype=float) # Here 'float' is equivalent to 'np.float64'
z
```

```
z = np.array([[1, 2], [3, 4]]) # 2D array from a list of lists
z
```

See also `np.asarray`, which performs a similar function, but does not make a distinct copy of data already in a NumPy array

```
na = np.linspace(10, 20, 2)
na is np.asarray(na) # Does not copy NumPy arrays
```

```
na is np.array(na) # Does make a new copy --- perhaps unnecessarily
```

To read in array data from a text file containing numeric data use `np.loadtxt` or `np.genfromtxt` — see the documentation for details
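As a minimal sketch (using a temporary file created on the fly), `np.savetxt` writes an array as text and `np.loadtxt` reads it back:

```python
import os
import tempfile
import numpy as np

a = np.array([[1.0, 2.0], [3.0, 4.0]])
path = os.path.join(tempfile.mkdtemp(), "data.txt")

np.savetxt(path, a)   # write the array as plain text
b = np.loadtxt(path)  # read it back into a new array

print(np.array_equal(a, b))   # True
```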

### Array Indexing

```
z = np.linspace(1, 2, 5)
z
```

```
z[0]
```

```
z[0:2] # Two elements, starting at element 0
```

```
z[-1]
```

For 2D arrays the index syntax is as follows:

```
z = np.array([[1, 2], [3, 4]])
z
```

```
z[0, 0]
```

```
z[0, 1]
```

And so on

Note that indices are still zero-based, to maintain compatibility with Python sequences

Columns and rows can be extracted as follows

```
z[0, :]
```

```
z[:, 1]
```

NumPy arrays of integers can also be used to extract elements

```
z = np.linspace(2, 4, 5)
z
```

```
indices = np.array((0, 2, 3))
z[indices]
```

Finally, an array of dtype `bool` can be used to extract elements

```
z
```

```
d = np.array([0, 1, 1, 0, 0], dtype=bool)
d
```

```
z[d]
```

We’ll see why this is useful below

An aside: all elements of an array can be set equal to one number using slice notation

```
z = np.empty(3)
z
```

```
z[:] = 42
z
```

### Array Methods

Arrays have useful methods, all of which are carefully optimized

```
a = np.array((4, 3, 2, 1))
a
```

```
a.sort() # Sorts a in place
a
```

```
a.sum() # Sum
```

```
a.mean() # Mean
```

```
a.max() # Max
```

```
a.argmax() # Returns the index of the maximal element
```

```
a.cumsum() # Cumulative sum of the elements of a
```

```
a.cumprod() # Cumulative product of the elements of a
```

```
a.var() # Variance
```

```
a.std() # Standard deviation
```

```
a.shape = (2, 2)
a.T # Equivalent to a.transpose()
```

Another method worth knowing is `searchsorted()`

If `z` is a nondecreasing array, then `z.searchsorted(a)` returns the index of the first element of `z` that is `>= a`

```
z = np.linspace(2, 4, 5)
z
```

```
z.searchsorted(2.2)
```

Many of the methods discussed above have equivalent functions in the NumPy namespace

```
a = np.array((4, 3, 2, 1))
```

```
np.sum(a)
```

```
np.mean(a)
```

## Operations on Arrays

### Arithmetic Operations

The operators `+`, `-`, `*`, `/` and `**` all act *elementwise* on arrays

```
a = np.array([1, 2, 3, 4])
b = np.array([5, 6, 7, 8])
a + b
```

```
a * b
```

We can add a scalar to each element as follows

```
a + 10
```

Scalar multiplication is similar

```
a * 10
```

Two-dimensional arrays follow the same general rules

```
A = np.ones((2, 2))
B = np.ones((2, 2))
A + B
```

```
A + 10
```

```
A * B
```

### Matrix Multiplication

With Anaconda’s scientific Python package based around Python 3.5 and above, one can use the `@` symbol for matrix multiplication, as follows:

```
A = np.ones((2, 2))
B = np.ones((2, 2))
A @ B
```

(For older versions of Python and NumPy you need to use the `np.dot` function)

We can also use `@` to take the inner product of two flat arrays

```
A = np.array((1, 2))
B = np.array((10, 20))
A @ B
```

In fact, we can use `@` when one element is a Python list or tuple

```
A = np.array(((1, 2), (3, 4)))
A
```

```
A @ (0, 1)
```

Since we are postmultiplying, the tuple is treated as a column vector
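Conversely, premultiplying treats the tuple as a row vector; a quick check:

```python
import numpy as np

A = np.array(((1, 2), (3, 4)))

print((0, 1) @ A)   # row vector on the left: picks out the second row
print(A @ (0, 1))   # column vector on the right: picks out the second column
```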

### Mutability and Copying Arrays

NumPy arrays are mutable data types, like Python lists

In other words, their contents can be altered (mutated) in memory after initialization

We already saw examples above

Here’s another example:

```
a = np.array([42, 44])
a
```

```
a[-1] = 0 # Change last element to 0
a
```

Mutability leads to the following behavior (which can be shocking to MATLAB programmers…)

```
a = np.random.randn(3)
a
```

```
b = a
b[0] = 0.0
a
```

What’s happened is that we have changed `a` by changing `b`

The name `b` is bound to `a` and becomes just another reference to the array (the Python assignment model is described in more detail later in the course)

Hence, it has equal rights to make changes to that array

This is in fact the most sensible default behavior!

It means that we pass around only pointers to data, rather than making copies

Making copies is expensive in terms of both speed and memory
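One consequence worth knowing: basic slices are also views onto the original data, not copies, so the same sharing behavior appears without any explicit rebinding of names (a small sketch):

```python
import numpy as np

a = np.zeros(5)
b = a[1:3]    # a basic slice is a view, not a copy
b[:] = 99
print(a)      # the 99s show up in a as well
```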

#### Making Copies

It is of course possible to make `b` an independent copy of `a` when required

This can be done using `np.copy`

```
a = np.random.randn(3)
a
```

```
b = np.copy(a)
b
```

Now `b` is an independent copy (called a *deep copy*)

```
b[:] = 1
b
```

```
a
```

Note that the change to `b` has not affected `a`

## Additional Functionality

Let’s look at some other useful things we can do with NumPy

### Vectorized Functions

NumPy provides versions of the standard functions `log`, `exp`, `sin`, etc. that act *element-wise* on arrays

```
z = np.array([1, 2, 3])
np.sin(z)
```

This eliminates the need for explicit element-by-element loops such as

```
n = len(z)
y = np.empty(n)
for i in range(n):
    y[i] = np.sin(z[i])
```

Because they act element-wise on arrays, these functions are called *vectorized functions*

In NumPy-speak, they are also called *ufuncs*, which stands for “universal functions”

As we saw above, the usual arithmetic operations (`+`, `*`, etc.) also work element-wise, and combining these with the ufuncs gives a very large set of fast element-wise functions

```
z
```

```
(1 / np.sqrt(2 * np.pi)) * np.exp(- 0.5 * z**2)
```

Not all user-defined functions will act element-wise

For example, passing the function `f` defined below a NumPy array causes a `ValueError`

```
def f(x):
    return 1 if x > 0 else 0
```

The NumPy function `np.where` provides a vectorized alternative:

```
x = np.random.randn(4)
x
```

```
np.where(x > 0, 1, 0) # 1 where x > 0, 0 otherwise
```

You can also use `np.vectorize` to vectorize a given function

```
def f(x): return 1 if x > 0 else 0
f = np.vectorize(f)
f(x) # Passing the same vector x as in the previous example
```

However, this approach doesn’t always match the speed of a more carefully crafted vectorized function
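For instance (a rough, machine-dependent timing sketch), the `np.vectorize` version of `f` is typically far slower than the `np.where` expression, since it still calls the Python function once per element:

```python
import time
import numpy as np

def f(x):
    return 1 if x > 0 else 0

f_vec = np.vectorize(f)
x = np.random.randn(1_000_000)

start = time.perf_counter()
y_where = np.where(x > 0, 1, 0)
where_time = time.perf_counter() - start

start = time.perf_counter()
y_vec = f_vec(x)
vectorize_time = time.perf_counter() - start

print(f"np.where: {where_time:.6f}s  np.vectorize: {vectorize_time:.4f}s")
```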

### Comparisons

```
z = np.array([2, 3])
y = np.array([2, 3])
z == y
```

```
y[0] = 5
z == y
```

```
z != y
```

The situation is similar for `>`, `<`, `>=` and `<=`
We can also do comparisons against scalars

```
z = np.linspace(0, 10, 5)
z
```

```
z > 3
```

This is particularly useful for *conditional extraction*

```
b = z > 3
b
```

```
z[b]
```

Of course we can—and frequently do—perform this in one step

```
z[z > 3]
```

### Subpackages

NumPy provides some additional functionality related to scientific programming through its subpackages

We’ve already seen how we can generate random variables using `np.random`

```
z = np.random.randn(10000) # Generate standard normals
y = np.random.binomial(10, 0.5, size=1000) # 1,000 draws from Bin(10, 0.5)
y.mean()
```

Another commonly used subpackage is `np.linalg`

```
A = np.array([[1, 2], [3, 4]])
np.linalg.det(A) # Compute the determinant
```

```
np.linalg.inv(A) # Compute the inverse
```

Much of this functionality is also available in SciPy, a collection of modules that are built on top of NumPy

We’ll cover the SciPy versions in more detail soon

For a comprehensive list of what’s available in NumPy see this documentation

## Exercises

### Exercise 1

Consider the polynomial expression

$$ p(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_N x^N = \sum_{n=0}^N a_n x^n \tag{1} $$

Earlier, you wrote a simple function `p(x, coeff)` to evaluate (1) without considering efficiency

Now write a new function that does the same job, but uses NumPy arrays and array operations for its computations, rather than any form of Python loop

(Such functionality is already implemented as `np.poly1d`, but for the sake of the exercise don’t use this class)

- Hint: Use `np.cumprod()`

### Exercise 2

Let `q` be a NumPy array of length `n` with `q.sum() == 1`

Suppose that `q` represents a probability mass function

We wish to generate a discrete random variable $ x $ such that $ \mathbb P\{x = i\} = q_i $

In other words, `x` takes values in `range(len(q))` and `x = i` with probability `q[i]`

The standard (inverse transform) algorithm is as follows:

- Divide the unit interval $ [0, 1] $ into $ n $ subintervals $ I_0, I_1, \ldots, I_{n-1} $ such that the length of $ I_i $ is $ q_i $
- Draw a uniform random variable $ U $ on $ [0, 1] $ and return the $ i $ such that $ U \in I_i $

The probability of drawing $ i $ is the length of $ I_i $, which is equal to $ q_i $

We can implement the algorithm as follows

```
from random import uniform

def sample(q):
    a = 0.0
    U = uniform(0, 1)
    for i in range(len(q)):
        if a < U <= a + q[i]:
            return i
        a = a + q[i]
```

If you can’t see how this works, try thinking through the flow for a simple example, such as `q = [0.25, 0.75]`

It helps to sketch the intervals on paper
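For that example, the right endpoints of the intervals are just the cumulative sums of `q`:

```python
import numpy as np

q = [0.25, 0.75]
print(np.cumsum(q))   # right endpoints 0.25 and 1.0: I_0 = (0, 0.25], I_1 = (0.25, 1]
```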

Your exercise is to speed it up using NumPy, avoiding explicit loops

- Hint: Use `np.searchsorted` and `np.cumsum`

If you can, implement the functionality as a class called `DiscreteRV`, where

- the data for an instance of the class is the vector of probabilities `q`
- the class has a `draw()` method, which returns one draw according to the algorithm described above

If you can, write the method so that `draw(k)` returns `k` draws from `q`

### Exercise 3

Recall our earlier discussion of the empirical cumulative distribution function

Your task is to

- Make the `__call__` method more efficient using NumPy
- Add a method that plots the ECDF over $ [a, b] $, where $ a $ and $ b $ are method parameters

## Solutions

```
import matplotlib.pyplot as plt
%matplotlib inline
```

### Exercise 1

This code does the job

```
def p(x, coef):
    X = np.empty(len(coef))
    X[0] = 1
    X[1:] = x
    y = np.cumprod(X)   # y = [1, x, x**2,...]
    return coef @ y
```

Let’s test it

```
coef = np.ones(3)
print(coef)
print(p(1, coef))
# For comparison
q = np.poly1d(coef)
print(q(1))
```

### Exercise 2

Here’s our first pass at a solution:

```
from numpy import cumsum
from numpy.random import uniform

class DiscreteRV:
    """
    Generates an array of draws from a discrete random variable with vector of
    probabilities given by q.
    """

    def __init__(self, q):
        """
        The argument q is a NumPy array, or array like, nonnegative and sums
        to 1
        """
        self.q = q
        self.Q = cumsum(q)

    def draw(self, k=1):
        """
        Returns k draws from q. For each such draw, the value i is returned
        with probability q[i].
        """
        return self.Q.searchsorted(uniform(0, 1, size=k))
```

The logic is not obvious, but if you take your time and read it slowly, you will understand

There is a problem here, however

Suppose that `q` is altered after an instance of `DiscreteRV` is created, for example by

```
q = (0.1, 0.9)
d = DiscreteRV(q)
d.q = (0.5, 0.5)
```

The problem is that `Q` does not change accordingly, and `Q` is the data used in the `draw` method

To deal with this, one option is to compute `Q` every time the `draw` method is called

But this is inefficient relative to computing `Q` once off

A better option is to use descriptors
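As a sketch of the idea, Python's built-in `property` (which is itself implemented via the descriptor protocol) can recompute `Q` whenever `q` is reassigned. The names below mirror the exercise, but this is one possible design, not the quantecon implementation:

```python
import numpy as np

class DiscreteRV:
    """Draws from a discrete distribution; Q stays in sync with q."""

    def __init__(self, q):
        self.q = q   # routed through the property setter below

    @property
    def q(self):
        return self._q

    @q.setter
    def q(self, values):
        self._q = np.asarray(values)
        self.Q = np.cumsum(self._q)   # recomputed on every assignment to q

    def draw(self, k=1):
        return self.Q.searchsorted(np.random.uniform(0, 1, size=k))

d = DiscreteRV((0.1, 0.9))
d.q = (0.5, 0.5)
print(d.Q)   # Q has been recomputed from the new q
```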

A solution from the quantecon library using descriptors that behaves as we desire can be found here

### Exercise 3

```
"""
Modifies ecdf.py from QuantEcon to add in a plot method
"""
class ECDF:
"""
One-dimensional empirical distribution function given a vector of
observations.
Parameters
----------
observations : array_like
An array of observations
Attributes
----------
observations : array_like
An array of observations
"""
def __init__(self, observations):
self.observations = np.asarray(observations)
def __call__(self, x):
"""
Evaluates the ecdf at x
Parameters
----------
x : scalar(float)
The x at which the ecdf is evaluated
Returns
-------
scalar(float)
Fraction of the sample less than x
"""
return np.mean(self.observations <= x)
def plot(self, a=None, b=None):
"""
Plot the ecdf on the interval [a, b].
Parameters
----------
a : scalar(float), optional(default=None)
Lower end point of the plot interval
b : scalar(float), optional(default=None)
Upper end point of the plot interval
"""
# === choose reasonable interval if [a, b] not specified === #
if a is None:
a = self.observations.min() - self.observations.std()
if b is None:
b = self.observations.max() + self.observations.std()
# === generate plot === #
x_vals = np.linspace(a, b, num=100)
f = np.vectorize(self.__call__)
plt.plot(x_vals, f(x_vals))
plt.show()
```

Here’s an example of usage

```
X = np.random.randn(1000)
F = ECDF(X)
F.plot()
```