Lazy vs Eager

Understand the difference between eager and lazy compilation and transpilation.

⚠️ If you are running this notebook in Colab, you will have to install Ivy and some dependencies manually. You can do so by running the cell below ⬇️

If you want to run the notebook locally but don’t have Ivy installed just yet, you can check out the Setting Up section of the docs.

!git clone https://github.com/unifyai/ivy.git
!cd ivy && git checkout d6bc18c64a47a135fe18404d9f83f98d9f3b63cf && python3 -m pip install --user -e .

For the installed packages to be available, you will have to restart your kernel. In Colab, you can do this by clicking on “Runtime > Restart Runtime”. Once the runtime has been restarted, you can skip the previous cell 😄

To use the compiler and the transpiler, you will need an API key. If you already have one, replace the placeholder string in the next cell.

API_KEY = "PASTE_YOUR_KEY_HERE"
!mkdir -p .ivy
!echo -n $API_KEY > .ivy/key.pem

ivy.unify, ivy.compile and ivy.transpile can all be performed either eagerly or lazily. All of the previous examples were performed lazily, which means that the actual unification, compilation, or transpilation occurs during the first call of the returned function.

This is because all three processes depend on function tracing, which needs example arguments to trace with. Alternatively, the arguments can be provided in the ivy.unify, ivy.compile or ivy.transpile call itself, in which case the process is performed eagerly. We show some simple examples of each case below.
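
To make the distinction concrete, here is a minimal, framework-free sketch of the deferred-tracing pattern. This is only an illustration of the general idea, not Ivy's actual implementation; lazy_trace, eager_trace and expensive_trace are hypothetical names invented for this example.

import functools

def expensive_trace(fn, args):
    # stand-in for the costly tracing step; a real tracer would
    # return an optimized version of fn rather than fn itself
    print(f"tracing {fn.__name__} with {args}")
    return fn

def lazy_trace(fn):
    # defer tracing until the wrapped function is first called
    traced = None
    @functools.wraps(fn)
    def wrapper(*args):
        nonlocal traced
        if traced is None:        # first call: trace now (slow)
            traced = expensive_trace(fn, args)
        return traced(*args)      # later calls: reuse the trace (fast)
    return wrapper

def eager_trace(fn, args):
    # trace immediately, using the example args provided up front
    return expensive_trace(fn, args)

def double(x):
    return 2 * x

double = lazy_trace(double)
double(3)  # prints the tracing message, then returns 6
double(4)  # no tracing message this time, returns 8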

Unify

Consider again this simple torch function:

import ivy
import torch

def normalize(x):
    mean = torch.mean(x)
    std = torch.std(x)
    return torch.div(torch.sub(x, mean), std)

Let’s also create a dummy NumPy array, as before:

# import NumPy
import numpy as np

# create random numpy array for testing
x = np.random.uniform(size=10)

Let’s assume that our target framework is TensorFlow:

import tensorflow as tf
ivy.set_backend("tensorflow")

x = tf.constant(x)

In the example below, the function is unified lazily, which means the first function call will execute slowly, as this is when the unification process actually occurs.

norm = ivy.unify(normalize, source="torch")
norm(x) # slow, lazy unification
norm(x) # fast, unified on previous call
ivy.array([-0.54320029,  1.30825614,  1.17176882,  1.14351968, -0.98934778,
        0.82910388, -0.89044143, -0.71881472, -0.1666683 , -1.14417601])

However, in the following example the unification occurs eagerly, and both function calls will be fast:

ivy.set_backend("tensorflow")
norm = ivy.unify(normalize, source="torch", args=(x,))
norm(x) # fast, unified at ivy.unify
norm(x) # fast, unified at ivy.unify
ivy.array([-0.54320029,  1.30825614,  1.17176882,  1.14351968, -0.98934778,
        0.82910388, -0.89044143, -0.71881472, -0.1666683 , -1.14417601])
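
If you want to check the lazy behaviour for yourself, one simple (if rough) approach is to time consecutive calls, reusing the normalize function and the tensor x defined above. The exact numbers will vary from machine to machine, but the first lazy call should be noticeably slower than the second:

import time

norm_lazy = ivy.unify(normalize, source="torch")

start = time.perf_counter()
norm_lazy(x)  # unification happens during this call
middle = time.perf_counter()
norm_lazy(x)  # already unified
end = time.perf_counter()

print(f"first call:  {middle - start:.4f}s")  # slow
print(f"second call: {end - middle:.4f}s")    # fast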

Compile

The same is true for compiling. In the example below, the function is compiled lazily, which means the first function call will execute slowly, as this is when the compilation process actually occurs.

norm_comp = ivy.compile(norm)
norm_comp(x) # slow, lazy compilation
norm_comp(x) # fast, compiled on previous call
<tf.Tensor: shape=(10,), dtype=float64, numpy=
array([-0.54320029,  1.30825614,  1.17176882,  1.14351968, -0.98934778,
        0.82910388, -0.89044143, -0.71881472, -0.1666683 , -1.14417601])>
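
Note from the outputs above that the unified function returns an ivy.array, whereas the compiled function returns a native tf.Tensor: compilation produces a graph composed of functions from the selected backend, with the Ivy wrapping removed.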

However, in the following example the compilation occurs eagerly, and both function calls will be fast:

norm_comp = ivy.compile(norm, args=(x,))
norm_comp(x) # fast, compiled at ivy.compile
norm_comp(x) # fast, compiled at ivy.compile
<tf.Tensor: shape=(10,), dtype=float64, numpy=
array([-0.54320029,  1.30825614,  1.17176882,  1.14351968, -0.98934778,
        0.82910388, -0.89044143, -0.71881472, -0.1666683 , -1.14417601])>

Transpile

The same is true for transpiling. In the example below, the function is transpiled lazily, which means the first function call will execute slowly, as this is when the transpilation process actually occurs.

norm_trans = ivy.transpile(normalize, source="torch", to="tensorflow")
norm_trans(x) # slow, lazy transpilation
norm_trans(x) # fast, transpiled on previous call
<tf.Tensor: shape=(10,), dtype=float64, numpy=
array([-0.54320029,  1.30825614,  1.17176882,  1.14351968, -0.98934778,
        0.82910388, -0.89044143, -0.71881472, -0.1666683 , -1.14417601])>

However, in the following example the transpilation occurs eagerly, and both function calls will be fast:

norm_trans = ivy.transpile(normalize, source="torch", to="tensorflow", args=(x,))
norm_trans(x) # fast, transpiled at ivy.transpile
norm_trans(x) # fast, transpiled at ivy.transpile
<tf.Tensor: shape=(10,), dtype=float64, numpy=
array([-0.54320029,  1.30825614,  1.17176882,  1.14351968, -0.98934778,
        0.82910388, -0.89044143, -0.71881472, -0.1666683 , -1.14417601])>
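
Once transpiled (whether lazily on the first call or eagerly via args), norm_trans behaves like an ordinary TensorFlow-backed callable and can be reused on fresh inputs. As a rough sketch, and assuming the new input keeps the same shape and dtype as the one used for tracing (a different shape may trigger re-tracing):

# create a fresh TensorFlow tensor with the same shape and dtype as x
y = tf.constant(np.random.uniform(size=10))
norm_trans(y)  # fast, reuses the existing transpilation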

Round Up

That’s it, you now know the difference between lazy and eager execution for ivy.unify, ivy.compile and ivy.transpile! Next, we’ll explore how these three functions can all be used as function decorators!