<p>
To increase the speed of some methods, you can use decorators like <code>@jit</code> or <code>@lru_cache</code>.
The <code>@jit</code> decorator from Numba is for Just-In-Time (JIT) compilation.
It compiles Python code to machine code at runtime to increase the speed of loops and mathematical operations.
However, Numba can't compile all Python code.
It performs best with NumPy arrays.
If you add the <code>@jit</code> decorator to your methods, debugging can become more challenging because the code is compiled to machine code.
The following code snippet shows an example of using the <code>@jit</code> decorator:
</p>

<div class="section-example-container">
<pre class="python">import numpy as np
from numba import jit
import time

# Without JIT
def slow_function(arr):
    result = 0
    for i in range(len(arr)):
        result += np.sin(arr[i]) * np.cos(arr[i])
    return result

# With JIT
@jit(nopython=True)
def fast_function(arr):
    result = 0
    for i in range(len(arr)):
        result += np.sin(arr[i]) * np.cos(arr[i])
    return result

# Example usage
arr = np.random.rand(1000000)

# Test without JIT.
start = time.time()
slow_function(arr)
print(f"Slow function took: {time.time() - start} seconds")

# Test with JIT.
start = time.time()
fast_function(arr) # This will compile first, then run.
print(f"Fast function took: {time.time() - start} seconds")</pre>
</div>

<blockquote>
<p>Slow function took: 1.1557221412658691 seconds</p>
<p>Fast function took: 0.32973551750183105 seconds</p>
</blockquote>
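<p>
Numba compiles a method the first time you call it, so the timing above for <code>fast_function</code> includes the one-time compilation overhead.
The following minimal sketch reuses the <code>fast_function</code> from the preceding example and calls it twice to separate the compilation cost from the steady-state execution time:
</p>

<div class="section-example-container">
<pre class="python">import numpy as np
from numba import jit
import time

@jit(nopython=True)
def fast_function(arr):
    result = 0
    for i in range(len(arr)):
        result += np.sin(arr[i]) * np.cos(arr[i])
    return result

arr = np.random.rand(1000000)

# First call: includes the one-time JIT compilation.
start = time.time()
fast_function(arr)
print(f"First call (with compilation) took: {time.time() - start} seconds")

# Second call: reuses the compiled machine code, so it measures execution only.
start = time.time()
fast_function(arr)
print(f"Second call (already compiled) took: {time.time() - start} seconds")</pre>
</div>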
<br>

<p>
The <code>@lru_cache</code> decorator from the <code>functools</code> module is for Least Recently Used (LRU) caching.
It caches the result of a function based on its arguments, so the function body doesn't have to re-execute when you call the function again with the same inputs.
The <code>maxsize</code> argument of the decorator defines the maximum number of results to cache.
For example, <code>@lru_cache(maxsize=512)</code> means that it caches at most 512 results, discarding the least recently used entries once the limit is reached.
If you add the <code>@lru_cache</code> decorator to your methods, make sure your function depends only on its arguments (not on any global state variables), or the caching process may return stale, incorrect results.
The following code snippet shows an example of using the <code>@lru_cache</code> decorator:
</p>

<div class="section-example-container">
<pre class="python">from functools import lru_cache
import time

# Without caching
def fibonacci(n):
    if n < 2:
        return n
    return fibonacci(n-1) + fibonacci(n-2)

# With caching
@lru_cache(maxsize=512)
def fibonacci_cached(n):
    if n < 2:
        return n
    return fibonacci_cached(n-1) + fibonacci_cached(n-2)

n = 30

# Test without caching.
start = time.time()
for _ in range(n):
fibonacci(n)
print(f"Fibonacci without cache took: {time.time() - start} seconds")

# Test with caching.
start = time.time()
for _ in range(n):
fibonacci_cached(n)
print(f"Fibonacci with cache took: {time.time() - start} seconds")</pre>
</div>

<blockquote>
<p>Fibonacci without cache took: 4.387492418289185 seconds</p>
<p>Fibonacci with cache took: 9.322166442871094e-05 seconds</p>
</blockquote>
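<p>
To check how the cache is being used, functions decorated with <code>@lru_cache</code> expose <code>cache_info</code> and <code>cache_clear</code> methods.
The following minimal sketch prints the hit and miss counts for the <code>fibonacci_cached</code> function from the preceding example and then empties the cache:
</p>

<div class="section-example-container">
<pre class="python">from functools import lru_cache

@lru_cache(maxsize=512)
def fibonacci_cached(n):
    if n < 2:
        return n
    return fibonacci_cached(n-1) + fibonacci_cached(n-2)

fibonacci_cached(30)

# cache_info reports the number of hits and misses, the maxsize, and the current cache size.
print(fibonacci_cached.cache_info())

# cache_clear discards all cached results, for example when the data the function relies on changes.
fibonacci_cached.cache_clear()</pre>
</div>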
