MCS 275 Spring 2021
Emily Dumas
Let's implement the paper folding sequence recursively.
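One natural recursion (a sketch; the version developed in lecture may differ): the crease sequence after `n` folds is the previous sequence, then a new middle crease, then the previous sequence reversed and complemented.

```python
def pfs(n):
    """Return the paper folding sequence after n folds, as a list of 0s and 1s."""
    # One fold produces a single (valley) crease.
    if n == 1:
        return [1]
    prev = pfs(n - 1)
    # Folding once more: old creases, a new middle crease,
    # then the old creases reversed and complemented.
    return prev + [1] + [1 - x for x in reversed(prev)]
```

For example, `pfs(3)` gives `[1, 1, 0, 1, 1, 0, 0]`, matching the start of the binary expansion below.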
If you use the infinite paper folding sequence as the binary digits of a real number (starting at the $\frac{1}{2}$s place and moving right), you get the paper folding constant.
$$ \begin{split} PFC &= (0.11011001110010011101100\ldots)_2\\ &= 0.85073618820186\ldots \end{split} $$

It is irrational. In 2007 it was shown that this constant is furthermore transcendental, i.e. it is not a root of any polynomial with rational coefficients, so in particular it cannot be expressed in terms of square roots, cube roots, or other radicals.
Recursive functions are limited by a maximum call stack size.
Python imposes a limit to prevent the memory area used to store the call stack from running out (a stack overflow), which would abruptly stop the interpreter.
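The limit can be inspected with `sys.getrecursionlimit()`, and exceeding it raises a `RecursionError` rather than crashing the interpreter. A quick demonstration:

```python
import sys

def depth(n):
    """Recurse n times, then return the depth reached."""
    if n == 0:
        return 0
    return 1 + depth(n - 1)

print(sys.getrecursionlimit())  # typically 1000 in CPython

try:
    depth(100_000)  # far deeper than the default limit allows
except RecursionError as e:
    print("Recursion limit hit:", e)
```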
Let's write iterative versions of factorial, Fibonacci, and paper folding. (Or as many as time allows.)
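Sketches of all three (assuming the conventions `fib(0)=0`, `fib(1)=1` and the fold-based paper folding recursion; the lecture's versions may differ):

```python
def fact(n):
    """Iterative factorial."""
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

def fib(n):
    """Iterative Fibonacci, with fib(0)=0 and fib(1)=1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def pfs(n):
    """Iterative paper folding sequence after n folds."""
    seq = [1]
    for _ in range(n - 1):
        # Each pass simulates one additional fold.
        seq = seq + [1] + [1 - x for x in reversed(seq)]
    return seq
```

None of these use the call stack, so none are subject to the recursion depth limit.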
How do iterative and recursive versions compare on speed?
I made a module `decs` that contains a decorator called `timed`, which prints the time a function takes to return.
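The contents of `decs` aren't shown here, but a `timed` decorator along these lines would behave as described (the exact names and output format are assumptions):

```python
import time
from functools import wraps

def timed(f):
    """Decorator that prints how long each call to f takes to return."""
    @wraps(f)
    def wrapper(*args, **kwargs):
        start = time.time()
        result = f(*args, **kwargs)
        elapsed = time.time() - start
        print("{} returned after {:.6f} seconds".format(f.__name__, elapsed))
        return result
    return wrapper
```

Applying `@timed` to a function leaves its return value unchanged and prints the elapsed time as a side effect.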
Why is recursive `fact()` somewhat competitive, but `fib()` dreadfully slow?
The decorator `decs.count_calls` will keep track of the number of function calls.
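A sketch of how such a decorator could work (the attribute name `num_calls` is an assumption, not necessarily what `decs` uses):

```python
from functools import wraps

def count_calls(f):
    """Decorator that counts calls to f in the attribute num_calls."""
    @wraps(f)
    def wrapper(*args, **kwargs):
        wrapper.num_calls += 1
        return f(*args, **kwargs)
    wrapper.num_calls = 0
    return wrapper

@count_calls
def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

fib(10)
print(fib.num_calls)  # 177 calls just to compute fib(10)
```

The recursive calls go through the module-level name `fib`, which is the decorated wrapper, so inner calls are counted too.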
`fib` computes the same terms over and over again.
Instead, let's store all previously computed results, and use the stored ones whenever possible.
This is called memoization. It only works for pure functions, i.e. those which always produce the same return value for any given argument values.
`math.sin(...)` is pure; `time.time()` is not.
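A memoized recursive `fib` might keep a dictionary of previously computed values (a sketch; the version written in lecture may differ):

```python
fib_cache = {}  # maps n -> fib(n) for every value computed so far

def fib(n):
    """Memoized recursive Fibonacci, with fib(0)=0 and fib(1)=1."""
    if n in fib_cache:
        return fib_cache[n]
    if n <= 1:
        result = n
    else:
        result = fib(n - 1) + fib(n - 2)
    fib_cache[n] = result
    return result
```

Each value of `fib(n)` is now computed at most once, so the exponential blowup of repeated subcalls disappears.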
| `fib` version | n=35 | n=450 |
|---|---|---|
| recursive | 1.9s | > age of universe |
| memoized recursive | <0.001s | 0.003s |
| iterative | <0.001s | 0.001s |
Measured on a 4.00GHz Intel i7-6700K CPU (released 2015) with Python 3.8.5.
Recursive functions with multiple self-calls often benefit from memoization.
The memoized version is conceptually similar to an iterative solution.
Memoization does not alleviate recursion depth limits.
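For pure functions, the standard library provides memoization out of the box via `functools.lru_cache`; note the decorated function is still recursive, so the depth limit still applies.

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # cache every result, with no eviction
def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)
```

Calling `fib(35)` is now fast, but `fib(100_000)` would still hit the recursion depth limit.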