# Optimization exercise with Rust

## Introduction
Optimizing code is something we all need to do every day as developers.
Trying to find the best solution immediately is slow; starting from a naive solution and improving it incrementally is more efficient.
Let's see this through a classic dynamic programming problem, the Fibonacci sequence, using Rust.
We will solve this problem with four different approaches:
- naive solution
- top-down approach (memoization)
- bottom-up approach
- optimized bottom-up approach
## Naive solution
The Fibonacci sequence is defined as follows:

$$
F_n = \begin{cases}
0 & \text{for } n = 0\\
1 & \text{for } n = 1\\
F_{n-1} + F_{n-2} & \text{for } n > 1
\end{cases}
$$
Let's start with a naive yet simple implementation:

```rust
fn fibonacci(n: usize) -> u128 {
    match n {
        0 => 0,
        1 => 1,
        _ => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

fn main() {
    println!("{}", fibonacci(35));
}
```
The corresponding time complexity is $O(2^n)$: every `fibonacci` call triggers two more recursive calls, and the recursion goes $n$ levels deep.
Exponential runtime... we must avoid that.
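To see the blowup concretely, here is a sketch that instruments the naive version with a call counter (the `calls` parameter and the `fibonacci_counted` name are added for illustration, they are not part of the solution):

```rust
// Naive Fibonacci, instrumented with a counter to show the exponential
// number of recursive calls.
fn fibonacci_counted(n: usize, calls: &mut u64) -> u128 {
    *calls += 1; // count every invocation, including the base cases
    match n {
        0 => 0,
        1 => 1,
        _ => fibonacci_counted(n - 1, calls) + fibonacci_counted(n - 2, calls),
    }
}

fn main() {
    let mut calls = 0;
    let result = fibonacci_counted(30, &mut calls);
    // Computing a single 6-digit Fibonacci number takes millions of calls.
    println!("fibonacci(30) = {result}, calls = {calls}");
}
```

Already for n = 30 the function is invoked well over two million times, which is why `fibonacci(35)` in the example above is noticeably slow.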
## Top-down approach (memoization)
A lot of intermediate results are identical: fibonacci(5) computes fibonacci(2) three times, for example.
Caching the results would avoid computing them again and again, so let's do that.
```rust
fn fibonacci(n: usize, memo: &mut [u128]) -> u128 {
    match n {
        0 => 0,
        1 => 1,
        _ => {
            if memo[n] == 0 {
                memo[n] = fibonacci(n - 1, memo) + fibonacci(n - 2, memo);
            }
            memo[n]
        }
    }
}

fn main() {
    let n = 35;
    println!("{}", fibonacci(n, &mut vec![0u128; n + 1]));
}
```
Now the time complexity is $O(n)$, since cached results return immediately.
But we can do better: this array grows linearly with n, which becomes wasteful if we try to compute big Fibonacci numbers.
Before removing this array, we need to take another approach.
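As a side note, the version above uses 0 as a "not yet computed" sentinel, which only works because every Fibonacci number from F_1 onward is nonzero. A sketch with `Option<u128>` makes that intent explicit (this is a variant for illustration, not the article's solution):

```rust
// Memoized Fibonacci with an explicit "not yet computed" state,
// instead of relying on 0 as a sentinel value.
fn fibonacci(n: usize, memo: &mut [Option<u128>]) -> u128 {
    match n {
        0 => 0,
        1 => 1,
        _ => {
            if memo[n].is_none() {
                let value = fibonacci(n - 1, memo) + fibonacci(n - 2, memo);
                memo[n] = Some(value);
            }
            memo[n].unwrap() // safe: filled just above if it was None
        }
    }
}

fn main() {
    let n = 35;
    println!("{}", fibonacci(n, &mut vec![None; n + 1]));
}
```

The behavior is identical; the type system simply documents which entries have been filled in.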
## Bottom-up approach
Let's consider the first naive solution again. It is slow because of the sheer number of recursive calls.
Let's unroll them into a loop and store the intermediate results:
```rust
fn fibonacci(n: usize) -> u128 {
    match n {
        0 => 0,
        1 => 1,
        _ => {
            let mut memo = vec![0; n + 1];
            memo[0] = 0;
            memo[1] = 1;
            for i in 2..=n {
                memo[i] = memo[i - 1] + memo[i - 2];
            }
            memo[n]
        }
    }
}

fn main() {
    println!("{}", fibonacci(35));
}
```
But wait, we only ever read `memo[i - 1]` and `memo[i - 2]`, so why store all the other results?
## Last optimization
We only need the last two results, so let's use two simple variables instead:
```rust
fn fibonacci(n: usize) -> u128 {
    if n == 0 {
        return 0;
    }
    let (mut a, mut b) = (0, 1);
    for _ in 2..n {
        let c = a + b;
        a = b;
        b = c;
    }
    a + b
}

fn main() {
    println!("{}", fibonacci(35));
}
```
Finally, the time complexity is still $O(n)$, but the space drops to $O(1)$: no array as in the top-down solution.
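One caveat worth keeping in mind: `u128` can only hold Fibonacci numbers up to roughly F_186, so for larger n the addition overflows (a panic in debug builds, silent wrapping in release builds). A defensive sketch using the standard `checked_add` returns `None` instead of overflowing (the `fibonacci_checked` name is illustrative):

```rust
// Iterative Fibonacci that detects u128 overflow instead of
// panicking or silently wrapping.
fn fibonacci_checked(n: usize) -> Option<u128> {
    if n == 0 {
        return Some(0);
    }
    let (mut a, mut b) = (0u128, 1u128);
    for _ in 2..n {
        let c = a.checked_add(b)?; // bail out with None on overflow
        a = b;
        b = c;
    }
    a.checked_add(b)
}

fn main() {
    println!("{:?}", fibonacci_checked(35)); // Some(9227465)
    println!("{:?}", fibonacci_checked(1000)); // None: overflows u128
}
```

Same loop, same $O(n)$ time and $O(1)$ space, but the caller can now distinguish "too big for u128" from a valid result.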
## Conclusion
Starting from a naive solution lets you make sure you really understand the problem.
Then you can iterate until your code is optimized enough.