# Lazy vs Eager

Understand the difference between eager and lazy compilation and transpilation.

⚠️ If you are running this notebook in Colab, you will have to install `Ivy` and some dependencies manually. You can do so by running the cell below ⬇️

If you want to run the notebook locally but don’t have Ivy installed just yet, you can check out the Setting Up section of the docs.

```
!git clone https://github.com/unifyai/ivy.git
!cd ivy && git checkout d6bc18c64a47a135fe18404d9f83f98d9f3b63cf && python3 -m pip install --user -e .
```

For the installed packages to be available you will have to restart your kernel. In Colab, you can do this by clicking on **“Runtime > Restart Runtime”**. Once the runtime has been restarted you should skip the previous cell 😄

To use the compiler and the transpiler now you will need an API Key. If you already have one, you should replace the string in the next cell.

```
API_KEY = "PASTE_YOUR_KEY_HERE"
```

```
!mkdir -p .ivy
!echo -n $API_KEY > .ivy/key.pem
```

`ivy.unify`, `ivy.compile` and `ivy.transpile` can all be performed either eagerly or lazily. All previous examples have been performed **lazily**, which means that the unification, compilation, or transpilation process actually occurs during the first call of the **returned** function.

This is because all three of these processes depend on function tracing, which requires function arguments to trace with. Alternatively, the arguments can be provided during the `ivy.unify`, `ivy.compile` or `ivy.transpile` call itself, in which case the process is performed **eagerly**. We show some simple examples for each case below.
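The dispatch between the two modes can be sketched in plain Python. This is an illustrative stand-in only: `trace` and `lazy_or_eager` are made-up names, not Ivy APIs, and `trace` merely fakes the expensive tracing step that `ivy.unify`, `ivy.compile` and `ivy.transpile` all rely on.

```python
def trace(fn, args):
    """Stand-in for the tracing step: run the function once with
    concrete arguments. A real tracer would record the ops executed
    and return a transformed function."""
    fn(*args)
    return fn

def lazy_or_eager(fn, args=None):
    """Trace immediately if args are supplied (eager); otherwise defer
    tracing until the wrapped function's first call (lazy)."""
    if args is not None:
        return trace(fn, args)  # eager: the slow step happens here

    state = {"traced": None}

    def wrapper(*call_args):
        if state["traced"] is None:  # lazy: slow on the first call only
            state["traced"] = trace(fn, call_args)
        return state["traced"](*call_args)

    return wrapper
```

The key design point is that the lazy branch cannot trace at wrap time, because it has no arguments yet; it must wait for the first call to supply them.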

## Unify

Consider again this simple `torch` function:

```
import ivy
import torch

def normalize(x):
    mean = torch.mean(x)
    std = torch.std(x)
    return torch.div(torch.sub(x, mean), std)
```

And let’s also create the dummy `numpy` arrays as before:

```
# import NumPy
import numpy as np

# create a random numpy array for testing
x = np.random.uniform(size=10)
```

Let’s assume that our target framework is `tensorflow`:

```
import tensorflow as tf

ivy.set_backend("tensorflow")
x = tf.constant(x)
```

In the example below, the function is unified **lazily**, which means the first function call will execute slowly, as this is when the unification process actually occurs.

```
norm = ivy.unify(normalize, source="torch")
norm(x)  # slow, lazy unification
norm(x)  # fast, unified on previous call
```

```
ivy.array([-0.54320029, 1.30825614, 1.17176882, 1.14351968, -0.98934778,
0.82910388, -0.89044143, -0.71881472, -0.1666683 , -1.14417601])
```

However, in the following example the unification occurs **eagerly**, and both function calls will be fast:

```
ivy.set_backend("tensorflow")
norm = ivy.unify(normalize, source="torch", args=(x,))
norm(x)  # fast, unified at ivy.unify
norm(x)  # fast, unified at ivy.unify
```

```
ivy.array([-0.54320029, 1.30825614, 1.17176882, 1.14351968, -0.98934778,
0.82910388, -0.89044143, -0.71881472, -0.1666683 , -1.14417601])
```

## Compile

The same is true for compiling. In the example below, the function is compiled **lazily**, which means the first function call will execute slowly, as this is when the compilation process actually occurs.

```
norm_comp = ivy.compile(norm)
norm_comp(x)  # slow, lazy compilation
norm_comp(x)  # fast, compiled on previous call
```

```
<tf.Tensor: shape=(10,), dtype=float64, numpy=
array([-0.54320029, 1.30825614, 1.17176882, 1.14351968, -0.98934778,
0.82910388, -0.89044143, -0.71881472, -0.1666683 , -1.14417601])>
```

However, in the following example the compilation occurs **eagerly**, and both function calls will be fast:

```
norm_comp = ivy.compile(norm, args=(x,))
norm_comp(x)  # fast, compiled at ivy.compile
norm_comp(x)  # fast, compiled at ivy.compile
```

```
<tf.Tensor: shape=(10,), dtype=float64, numpy=
array([-0.54320029, 1.30825614, 1.17176882, 1.14351968, -0.98934778,
0.82910388, -0.89044143, -0.71881472, -0.1666683 , -1.14417601])>
```

## Transpile

The same is true for transpiling. In the example below, the function is transpiled **lazily**, which means the first function call will execute slowly, as this is when the transpilation process actually occurs.

```
norm_trans = ivy.transpile(normalize, source="torch", to="tensorflow")
norm_trans(x)  # slow, lazy transpilation
norm_trans(x)  # fast, transpiled on previous call
```

```
<tf.Tensor: shape=(10,), dtype=float64, numpy=
array([-0.54320029, 1.30825614, 1.17176882, 1.14351968, -0.98934778,
0.82910388, -0.89044143, -0.71881472, -0.1666683 , -1.14417601])>
```

However, in the following example the transpilation occurs **eagerly**, and both function calls will be fast:

```
norm_trans = ivy.transpile(normalize, source="torch", to="tensorflow", args=(x,))
norm_trans(x)  # fast, transpiled at ivy.transpile
norm_trans(x)  # fast, transpiled at ivy.transpile
```

```
<tf.Tensor: shape=(10,), dtype=float64, numpy=
array([-0.54320029, 1.30825614, 1.17176882, 1.14351968, -0.98934778,
0.82910388, -0.89044143, -0.71881472, -0.1666683 , -1.14417601])>
```

## Round Up

That’s it, you now know the difference between lazy and eager execution for `ivy.unify`, `ivy.compile` and `ivy.transpile`! Next, we’ll be exploring how these three functions can all be called as function decorators!
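As a taste of why decorator syntax is a natural fit: a lazy transformation does nothing expensive at wrap time, so decoration stays cheap. A minimal plain-Python sketch (the `lazy_transform` name is illustrative, not Ivy’s API):

```python
def lazy_transform(fn):
    """Decorator form of a lazy transformation: nothing expensive
    happens at decoration time; the real tracing/compilation work
    would run on the first call."""
    state = {"traced": None}

    def wrapper(*args):
        if state["traced"] is None:
            state["traced"] = fn  # a real tracer would transform fn here
        return state["traced"](*args)

    return wrapper

@lazy_transform
def double(x):
    return x * 2
```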