Applied Libraries
In other parts of the overview, we have focused on the Ivy framework itself. Here, we explore how Ivy has been used to create a suite of libraries in various fields related to ML. Aside from being useful tools for ML developers in any framework, these libraries are a perfect showcase of what is possible using Ivy!
Currently there are Ivy libraries for: Mechanics, 3D Vision, Robotics, Gym Environments, and Differentiable Memory. We run through some demos from these libraries now, and encourage you to pip install the libraries and run the demos yourself if you like what you see!
Ivy Mechanics
Ivy Mechanics provides functions for conversions of orientation, pose, and positional representations, as well as transformations and some other more applied functions. The orientation module is the largest, with conversions to and from all Euler conventions, quaternions, rotation matrices, rotation vectors, and axis-angle representations.
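To give a flavor of what such conversions involve, here is a minimal NumPy sketch of a rotation-vector (axis-angle) to rotation-matrix conversion via Rodrigues' formula. This is a hand-rolled illustration, not the ivy_mech implementation, which works across all supported frameworks:

```python
import numpy as np

def rot_vec_to_rot_mat(rot_vec):
    """Convert an axis-angle rotation vector to a 3x3 rotation matrix
    via Rodrigues' formula. Illustrative sketch only, not the ivy_mech API."""
    angle = np.linalg.norm(rot_vec)
    if angle < 1e-12:
        return np.eye(3)
    axis = rot_vec / angle
    # skew-symmetric cross-product matrix of the rotation axis
    K = np.array([[0., -axis[2], axis[1]],
                  [axis[2], 0., -axis[0]],
                  [-axis[1], axis[0], 0.]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

# a quarter turn about the z-axis maps the x-axis to the y-axis
R = rot_vec_to_rot_mat(np.array([0., 0., np.pi / 2]))
```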
For example, this demo shows the use of ivy_mech.target_facing_rotation_matrix:

This demo shows the use of ivy_mech.polar_to_cartesian_coords:
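The underlying math is straightforward. Below is a minimal NumPy sketch of a spherical-polar to Cartesian conversion, using one common convention (azimuth phi, inclination theta, radius r); the convention and signature used by ivy_mech may differ:

```python
import numpy as np

def polar_to_cartesian(phi, theta, r):
    """Spherical-polar to Cartesian conversion (azimuth phi, inclination
    theta, radius r). Illustrative only, not the ivy_mech API."""
    x = r * np.sin(theta) * np.cos(phi)
    y = r * np.sin(theta) * np.sin(phi)
    z = r * np.cos(theta)
    return np.array([x, y, z])

# a point at radius 2 in the x-y plane (inclination pi/2), along the x-axis
point = polar_to_cartesian(0.0, np.pi / 2, 2.0)
```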

Ivy Vision
Ivy Vision focuses predominantly on 3D vision, with functions for image projections, coordinate frame transformations, forward warping, inverse warping, optical flow, depth generation, voxel grids, point clouds, and others.
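As a taste of the geometry these functions build on, here is a minimal pinhole-camera projection sketch in NumPy, mapping 3D camera-frame points to homogeneous pixel coordinates with an intrinsics matrix. The names and layout here are our own assumptions for illustration, not the ivy_vision API:

```python
import numpy as np

def project_to_pixels(points_cam, intrinsics):
    """Project 3D points in the camera frame to homogeneous pixel
    coordinates (u, v, 1) with a pinhole model. Illustrative sketch only."""
    proj = (intrinsics @ points_cam.T).T   # apply intrinsics: (N, 3)
    return proj / proj[:, 2:3]             # divide by depth -> (u, v, 1)

# example intrinsics: focal length 500, principal point (320, 240)
K = np.array([[500., 0., 320.],
              [0., 500., 240.],
              [0., 0., 1.]])
pix = project_to_pixels(np.array([[0.2, -0.1, 2.0]]), K)
```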
For example, this demo shows the use of ivy_vision.coords_to_voxel_grid:
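The core idea of voxelization is to quantize continuous 3D coordinates into discrete grid cells. A minimal sketch of that idea, assuming a simple occupancy-count grid; ivy_vision's function also handles features and bounds, with a different signature:

```python
import numpy as np

def coords_to_voxel_grid(points, grid_shape, voxel_size, origin):
    """Scatter 3D points into an occupancy voxel grid by quantizing
    their coordinates. Illustrative only, not the ivy_vision API."""
    grid = np.zeros(grid_shape, dtype=np.int64)
    idx = np.floor((points - origin) / voxel_size).astype(int)
    # keep only the points that fall inside the grid bounds
    valid = np.all((idx >= 0) & (idx < np.array(grid_shape)), axis=1)
    np.add.at(grid, tuple(idx[valid].T), 1)  # count points per voxel
    return grid

pts = np.array([[0.1, 0.1, 0.1], [0.15, 0.1, 0.1], [0.9, 0.9, 0.9]])
grid = coords_to_voxel_grid(pts, (4, 4, 4), 0.25, np.zeros(3))
```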

This demo shows the use of ivy_vision.render_pixel_coords:

This demo shows Neural Radiance Fields (NeRF):

Ivy Robot
Ivy Robot provides functions and classes for gradient-based trajectory optimization and motion planning. Classes are provided both for mobile robots and robot manipulators.
For example, this demo shows the use of ivy_robot.sample_spline_path and ivy_robot.RigidMobile.sample_body for gradient-based motion planning of a drone:

This demo shows the use of ivy_robot.sample_spline_path and ivy_robot.Manipulator.sample_links for gradient-based motion planning of a robot manipulator:
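What makes spline sampling useful for gradient-based planning is that a dense trajectory is a differentiable function of a few anchor waypoints, so gradients on the sampled path flow back to the anchors. A toy stand-in using per-dimension polynomial fitting; ivy_robot's actual method and signature differ:

```python
import numpy as np

def sample_spline_path(anchor_times, anchor_points, sample_times):
    """Fit a cubic polynomial path through anchor waypoints (one
    polynomial per spatial dimension) and sample it densely.
    Illustrative stand-in only, not the ivy_robot API."""
    coeffs = [np.polyfit(anchor_times, anchor_points[:, d], deg=3)
              for d in range(anchor_points.shape[1])]
    return np.stack([np.polyval(c, sample_times) for c in coeffs], axis=-1)

# four (x, y) waypoints, sampled into a dense 100-point path
anchors = np.array([[0., 0.], [1., 2.], [2., 0.], [3., -1.]])
t_anchor = np.linspace(0., 1., 4)
path = sample_spline_path(t_anchor, anchors, np.linspace(0., 1., 100))
```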

Ivy Gym
Ivy Gym provides differentiable implementations of the control environments provided by OpenAI Gym, as well as a new “Swimmer” task which illustrates the simplicity of creating new tasks. The differentiable nature of the environments means that the cumulative reward can be directly optimized in a supervised manner, without the need for reinforcement learning. Ivy Gym opens the door for intersectional research between supervised learning, trajectory optimization, and reinforcement learning.
For example, below we show demos of each of the environments cartpole, mountain_car, pendulum, reacher, and swimmer solved using direct trajectory optimization. We optimize for a specific starting state of the environment:
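To make the principle concrete, here is a hand-rolled toy illustration (not Ivy Gym code): a 1D point mass with differentiable dynamics, whose action sequence is optimized by plain gradient descent on the cost, with no reinforcement learning involved:

```python
import numpy as np

# Toy differentiable "environment": a 1D point mass with dynamics
# s_{t+1} = s_t + a_t. Because the rollout is differentiable, the
# action sequence can be optimized directly from a fixed start state.
s0, goal, horizon, ctrl_weight, lr = 0.0, 3.0, 10, 0.1, 0.05
actions = np.zeros(horizon)

for _ in range(500):
    final_state = s0 + actions.sum()
    # cost = (final_state - goal)^2 + ctrl_weight * sum(actions^2)
    grad = 2.0 * (final_state - goal) + 2.0 * ctrl_weight * actions
    actions -= lr * grad  # gradient descent on the action sequence

final_state = s0 + actions.sum()
```

The control penalty keeps the actions small, so the optimized trajectory stops slightly short of the goal; removing it would drive the final state onto the goal exactly.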

We show demos of each of the environments cartpole, mountain_car, pendulum, reacher, and swimmer solved using supervised learning via a policy network. We train a policy which is conditioned on the environment state, and the starting state is then randomized between training steps:
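The same toy setting illustrates the policy-learning variant: instead of optimizing one action sequence, we train a (here, single-gain) policy conditioned on the state error, over randomized start states. Illustration only, not Ivy Gym code:

```python
import numpy as np

# Tiny "policy": a single learnable gain theta mapping the state error
# (goal - s) to an action. With a differentiable one-step rollout,
# supervised gradient descent drives theta toward the optimal gain of 1.
rng = np.random.default_rng(0)
goal, theta = 2.0, 0.0

for _ in range(200):
    s0 = rng.uniform(-5.0, 5.0)  # randomize the starting state
    err = goal - s0
    # rollout: s1 = s0 + theta * err, cost = (s1 - goal)^2 = err^2 (1 - theta)^2
    grad = -2.0 * err ** 2 * (1.0 - theta)  # analytic d(cost)/d(theta)
    theta -= 0.01 * grad
```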

Ivy Memory
Ivy Memory provides differentiable memory modules, including learnt modules such as Neural Turing Machines (NTM), but also parameter-free modules such as End-to-End Egospheric Spatial Memory (ESM).
For example, in this demo we learn to copy a sequence using ivy_memory.NTM:
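The property that makes such memory modules learnable end-to-end is that reads and writes are soft: every slot is addressed with an attention weight rather than a hard index, so gradients flow through the memory. A minimal NumPy sketch of the mechanism; the names and structure here are our own, not the ivy_memory API:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def soft_write(memory, address_logits, value):
    """Write a value with a soft (attention-weighted) address,
    keeping the operation differentiable."""
    w = softmax(address_logits)           # soft distribution over slots
    return memory + np.outer(w, value)    # additive write to every slot

def soft_read(memory, address_logits):
    """Read an attention-weighted mixture of all memory slots."""
    w = softmax(address_logits)
    return w @ memory

memory = np.zeros((4, 3))  # 4 slots, 3-dim contents
memory = soft_write(memory, np.array([10., 0., 0., 0.]), np.array([1., 2., 3.]))
recalled = soft_read(memory, np.array([10., 0., 0., 0.]))
```

With sharp address logits the soft write approaches a hard write to slot 0, so the subsequent read recovers the stored value almost exactly.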

In this demo we create an egocentric 3D map of a room using ivy_memory.ESM:

Round Up
Hopefully this has given you an idea of what’s possible using Ivy’s collection of applied libraries, and more importantly, given you inspiration for what’s possible using Ivy 🙂
Please reach out on Discord if you have any questions!