Trace code#

Turn your Ivy code into an efficient, fully functional graph, removing wrappers and unused parts of the code.

⚠️ If you are running this notebook in Colab, you will have to install Ivy and some dependencies manually. You can do so by running the cell below ⬇️

If you want to run the notebook locally but don’t have Ivy installed just yet, you can check out the Get Started section of the docs.

!pip install ivy

Firstly, let’s pick up where we left off in the last notebook, with our unified normalize function:

import ivy
import torch

def normalize(x):
    mean = torch.mean(x)
    std = torch.std(x)
    return torch.div(torch.sub(x, mean), std)

normalize = ivy.unify(normalize, source="torch")

For the purpose of illustration, we will use jax as our backend framework:

# set ivy's backend to jax
ivy.set_backend("jax")

# import jax
import jax

# create a random jax array for testing
key = jax.random.PRNGKey(42)
x = jax.random.uniform(key, shape=(10,))
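As a quick sanity check on what normalize actually computes, the same normalization can be expressed with only the Python standard library (a stand-in for illustration, not Ivy itself — note that statistics.stdev, like torch.std by default, applies Bessel's correction):

```python
import statistics

def normalize_list(xs):
    # subtract the mean, divide by the sample standard deviation
    # (Bessel-corrected, matching torch.std's default behaviour)
    mean = statistics.fmean(xs)
    std = statistics.stdev(xs)
    return [(v - mean) / std for v in xs]

out = normalize_list([3.0, 1.0, 4.0, 1.0, 5.0])
print(abs(statistics.fmean(out)) < 1e-9)        # True: the result is zero-centred
print(abs(statistics.stdev(out) - 1.0) < 1e-9)  # True: unit sample std
```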

As in the previous example, the Ivy function can be executed like so (in this case it will trigger lazy unification, see the Lazy vs Eager section for more details):

normalize(x)

ivy.array([ 0.55563945, -0.65538704, -1.14150524,  1.46951997,  1.30220294,
       -1.14739668, -0.57017946, -0.91962677,  0.51029003,  0.59644395])
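The lazy behaviour mentioned above can be sketched in plain Python: the expensive one-off step is deferred until the function receives its first real input. This is a toy model of the idea under that assumption, not Ivy's actual implementation:

```python
def lazily(transform, fn):
    """Defer an expensive one-off transform until the first call."""
    cache = {}

    def call(*args):
        if "fn" not in cache:
            # the expensive step runs here, on the very first call only
            cache["fn"] = transform(fn)
        return cache["fn"](*args)

    return call

# a stand-in "transform" that just counts how often it runs
runs = []
def fake_transform(fn):
    runs.append(1)
    return fn

double = lazily(fake_transform, lambda x: x * 2)
print(len(runs))             # 0 — nothing has happened yet
print(double(3), double(4))  # 6 8
print(len(runs))             # 1 — the transform ran exactly once
```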

When calling this function, all of Ivy's function wrapping is included in the call stack of normalize, which adds runtime overhead. In general, ivy.trace_graph strips an arbitrary function down to its constituent functions in the functional API of the target framework. The code can be traced like so:

traced = ivy.trace_graph(normalize)  # traces to jax, due to ivy.set_backend
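Conceptually, tracing records the primitive operations executed during one call and returns a function that replays only those, with no wrapping logic left in the call path. A toy sketch of the idea (using hypothetical helper names, and nothing like Ivy's real implementation):

```python
import operator

def toy_trace(fn, sample):
    """Run fn once on a boxed sample, recording each primitive op,
    then return a bare replay function for later calls."""
    tape = []  # recorded (op, input_a, input_b, output) tuples

    class Box:
        def __init__(self, val):
            self.val = val
        def __sub__(self, other):
            return self._record(operator.sub, other)
        def __truediv__(self, other):
            return self._record(operator.truediv, other)
        def _record(self, op, other):
            rhs = other.val if isinstance(other, Box) else other
            out = Box(op(self.val, rhs))
            tape.append((op, self, other, out))
            return out

    source = Box(sample)
    result = fn(source)  # one recorded call builds the graph

    def replay(x):
        env = {id(source): x}
        for op, a, b, out in tape:
            lhs = env[id(a)]
            rhs = env[id(b)] if isinstance(b, Box) else b
            env[id(out)] = op(lhs, rhs)
        return env[id(result)]

    return replay

f = toy_trace(lambda x: (x - 2) / 4, sample=10.0)
print(f(10.0), f(6.0))  # 2.0 1.0
```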

The traced function can be executed in exactly the same manner as the non-traced function (in this case it will also trigger lazy graph tracing, see the Lazy vs Eager section for more details):

traced(x)

Array([ 0.5556394 , -0.655387  , -1.1415051 ,  1.4695197 ,  1.3022028 ,
       -1.1473966 , -0.5701794 , -0.91962665,  0.51028997,  0.5964439 ],      dtype=float32)

With all lazy graph tracing calls now performed (which all increase runtime during the very first call of the function), we can now assess the runtime efficiencies of each function:

%timeit normalize(x)
985 µs ± 6.76 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)

%timeit traced(x)
69.5 µs ± 1.24 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)

As expected, normalize is slower, since it still carries all of Ivy's wrapping overhead, whereas traced has no wrapping overhead and is markedly faster!
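The gap between the two timings is largely per-call wrapper overhead, which can be reproduced in miniature with the standard library's timeit (a simulated wrapper for illustration, not Ivy's):

```python
import timeit

def add(a, b):
    return a + b

def wrapped_add(a, b):
    # stand-in for framework wrapping: extra bookkeeping layers
    # executed around the real operation on every single call
    for _ in range(100):
        pass
    return add(a, b)

t_plain = timeit.timeit(lambda: add(1, 2), number=50_000)
t_wrapped = timeit.timeit(lambda: wrapped_add(1, 2), number=50_000)
print(t_wrapped > t_plain)  # True: the wrapped version pays overhead on every call
```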

Round Up#

That’s it, you can now trace ivy code for more efficient inference! However, there are several other important topics to master before you’re ready to unify ML code like a pro 🥷. Next, we’ll be learning how to transpile code from one framework to another in a single line of code 🔄