Step 6: Array Operations with NumPy#
For more computationally intensive programs, the use of built-in NumPy functions can provide an increase in execution speed many times over. Keep in mind that, for optimal performance in Python, it is essential to replace for loops with array operations wherever possible. As a simple example, consider the following equation:

\[u^{n+1}_i = u^n_i - u^n_{i-1}\]

Now, given a vector \(u^n = [0, 1, 2, 3, 4, 5]\), we can calculate the values of \(u^{n+1}\) by iterating over the values of \(u^n\) with a for loop.
import numpy as np

u = np.array((0, 1, 2, 3, 4, 5))
u1 = np.zeros(len(u) - 1)   # u1 will hold the five differences u[i] - u[i-1]

# range(1, len(u)) yields the indices 1, 2, 3, 4, 5
for i in range(1, len(u)):
    u1[i - 1] = u[i] - u[i - 1]
u1
array([1., 1., 1., 1., 1.])
This is the expected result, and the execution time was nearly instantaneous. If we perform the same operation as an array operation, then rather than calculating \(u^n_i - u^n_{i-1}\) five separate times, we can slice the \(u\) array and calculate every difference with one command:
u[1:] - u[0:-1]
array([1, 1, 1, 1, 1])
print(u[1:])
print(u[0:-1])
[1 2 3 4 5]
[0 1 2 3 4]
What this command says is: subtract the 0th, 1st, 2nd, 3rd and 4th elements of \(u\) from the 1st, 2nd, 3rd, 4th and 5th elements of \(u\).
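If you prefer a named function to explicit slicing, NumPy also provides a built-in diff function that computes exactly these first differences. This is just an equivalent alternative, shown as a short aside:

np.diff(u)   # returns array([1, 1, 1, 1, 1]), the same result as u[1:] - u[0:-1]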
Speed Increases#
For a 6 element array, the benefits of array operations are pretty slim. There will be no appreciable difference in execution time because there are so few operations taking place. But if we revisit 2D linear convection, we can see some substantial speed increases.
nx = 81
ny = 81
nt = 100
c = 1
dx = 2 / (nx - 1)
dy = 2 / (ny - 1)
sigma = .2
dt = sigma * dx
x = np.linspace(0, 2, nx)
y = np.linspace(0, 2, ny)
u = np.ones((nx, ny))  ## create an nx-by-ny array of 1's
un = np.ones((nx, ny))
### Assign initial conditions: u = 2 inside the square 0.5 <= x <= 1, 0.5 <= y <= 1
u[int(.5 / dx): int(1 / dx + 1), int(.5 / dy):int(1 / dy + 1)] = 2
With our initial conditions all set up, let's first try running our original nested loop code, making use of the IPython "magic" function %%timeit, which will help us evaluate the performance of our code.
Note: The %%timeit magic function will run the code several times and then report the average execution time. If any figures are plotted within a cell where you run %%timeit, they will be plotted repeatedly, which can be a bit messy.
The execution times below will vary from machine to machine. Don’t expect your times to match these times, but you should expect to see the same general trend in decreasing execution time as we switch to array operations.
%%timeit
u = np.ones((nx, ny))
u[int(.5 / dx): int(1 / dx + 1), int(.5 / dy):int(1 / dy + 1)] = 2

for n in range(nt):  ## loop across number of time steps
    un = u.copy()
    row, col = u.shape
    for i in range(1, row):
        for j in range(1, col):
            u[i, j] = (un[i, j] - (c * dt / dx * (un[i, j] - un[i-1, j])) -
                                  (c * dt / dy * (un[i, j] - un[i, j-1])))
    u[0, :] = 1
    u[:, 0] = 1
765 ms ± 2.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
With the "raw" Python code above, the mean execution time achieved was 765 milliseconds (on an Intel Core i7 CPU). Keep in mind that with these three nested loops, the statements inside the j loop are being evaluated more than 650,000 times (\(81 \times 81 \times 100\)). Let's compare that with the performance of the same code implemented with array operations:
%%timeit
u = np.ones((nx, ny))
u[int(.5 / dx): int(1 / dx + 1), int(.5 / dy):int(1 / dy + 1)] = 2

for n in range(nt + 1):  ## loop across number of time steps
    un = u.copy()
    u[1:, 1:] = (un[1:, 1:] - (c * dt / dx * (un[1:, 1:] - un[0:-1, 1:])) -
                              (c * dt / dy * (un[1:, 1:] - un[1:, 0:-1])))
    u[0, :] = 1
    u[:, 0] = 1
2.62 ms ± 25.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
As you can see, the speed increase (nearly 300x) is substantial. A few hundred milliseconds isn't a long time to wait, but these speed gains grow rapidly with the size and complexity of the problem being evaluated.
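As a sanity check, it is also worth confirming that the two approaches produce the same solution, not just different execution times. The following cell is a minimal sketch of such a check; both versions are run here for the same number of time steps so the comparison is like-for-like:

u_loops = np.ones((nx, ny))
u_loops[int(.5 / dx): int(1 / dx + 1), int(.5 / dy):int(1 / dy + 1)] = 2
for n in range(nt):
    un = u_loops.copy()
    for i in range(1, nx):
        for j in range(1, ny):
            u_loops[i, j] = (un[i, j] - (c * dt / dx * (un[i, j] - un[i-1, j])) -
                                        (c * dt / dy * (un[i, j] - un[i, j-1])))
    u_loops[0, :] = 1
    u_loops[:, 0] = 1

u_array = np.ones((nx, ny))
u_array[int(.5 / dx): int(1 / dx + 1), int(.5 / dy):int(1 / dy + 1)] = 2
for n in range(nt):
    un = u_array.copy()
    u_array[1:, 1:] = (un[1:, 1:] - (c * dt / dx * (un[1:, 1:] - un[0:-1, 1:])) -
                                    (c * dt / dy * (un[1:, 1:] - un[1:, 0:-1])))
    u_array[0, :] = 1
    u_array[:, 0] = 1

np.allclose(u_loops, u_array)   # should evaluate to True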
Defining Functions in Python#
In the previous steps, we wrote Python code that is meant to run from top to bottom. We were able to reuse code (to great effect!) by copying and pasting, to incrementally build a solver for the Burgers' equation. But moving forward there are more efficient ways to write our Python code. In this lesson, we are going to introduce function definitions, which will give us more flexibility in reusing and organizing our code.
We’ll begin with a trivial example: a function which adds two numbers.
To create a function in Python, we start with the following:
def simpleadd(a, b):

This statement creates a function called simpleadd, which takes two inputs, a and b. Let's execute this definition code.
def simpleadd(a, b):
    return a + b
The return statement tells Python what data to return in response to being called. Now we can try calling our simpleadd function:
simpleadd(3, 4)
7
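Putting the two halves of this lesson together, the array-operation convection update from above can itself be wrapped in a function, so that we no longer need to copy and paste the time loop. The function name linearconv_2d and its argument list below are chosen here for illustration; this is a sketch, not part of the original code:

def linearconv_2d(u, nt, c, dt, dx, dy):
    """Advance the 2D linear convection solution nt time steps using array operations."""
    for n in range(nt):
        un = u.copy()
        u[1:, 1:] = (un[1:, 1:] - (c * dt / dx * (un[1:, 1:] - un[0:-1, 1:])) -
                                  (c * dt / dy * (un[1:, 1:] - un[1:, 0:-1])))
        u[0, :] = 1
        u[:, 0] = 1
    return u

A call such as u = linearconv_2d(u, nt, c, dt, dx, dy) then reuses the same solver for any grid size or number of time steps, which is exactly the kind of flexibility that function definitions give us.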