layer_norm#
- ivy.layer_norm(x, normalized_idxs, /, *, scale=None, offset=None, eps=1e-05, new_std=1.0, out=None)[source]#
Apply Layer Normalization over a mini-batch of inputs.
- Parameters:
  - x (Union[Array, NativeArray]) – Input array.
  - normalized_idxs (List[int]) – Indices to apply the normalization to.
  - scale (Optional[Union[Array, NativeArray]]) – Learnable gamma variables for elementwise post-multiplication. Default is None.
  - offset (Optional[Union[Array, NativeArray]]) – Learnable beta variables for elementwise post-addition. Default is None.
  - eps (float) – Small constant to add to the denominator. Default is 1e-05.
  - new_std (float) – The standard deviation of the new normalized values. Default is 1.0.
  - out (Optional[Array]) – Optional output array, for writing the result to. It must have a shape that the inputs broadcast to. Default is None.
- Return type:
- Returns:
ret – The layer after applying layer normalization.
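For intuition, the computation described above can be sketched with public Ivy ops. This is a minimal sketch, not Ivy's implementation; the helper name layer_norm_sketch is hypothetical, and the ordering of new_std, scale, and offset is inferred from the examples below.

    import ivy

    ivy.set_backend("numpy")  # any installed backend works here

    def layer_norm_sketch(x, normalized_idxs, *, scale=None, offset=None,
                          eps=1e-05, new_std=1.0):
        # Standardize over the given axes, then rescale to the requested std.
        axes = tuple(normalized_idxs)
        mean = ivy.mean(x, axis=axes, keepdims=True)
        var = ivy.var(x, axis=axes, keepdims=True)
        y = (x - mean) / ivy.sqrt(var + eps) * new_std
        if scale is not None:
            y = y * scale    # elementwise post-multiplication (gamma)
        if offset is not None:
            y = y + offset   # elementwise post-addition (beta)
        return y

    x = ivy.array([[1.0, 2.0], [3.0, 4.0]])
    print(layer_norm_sketch(x, [0, 1], new_std=2.0))
    # approximately [[-2.68, -0.894], [0.894, 2.68]],
    # matching the first ivy.layer_norm example below up to printed rounding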
Examples
With ivy.Array input:

>>> x = ivy.array([[1.0, 2.0], [3.0, 4.0]])
>>> y = ivy.layer_norm(x, [0, 1], new_std=2.0)
>>> print(y)
ivy.array([[-2.68 , -0.894],
           [ 0.894,  2.68 ]])

>>> x = ivy.array([[1., 2., 3.], [4., 5., 6.]])
>>> y = ivy.zeros((2, 3))
>>> ivy.layer_norm(x, [0], out=y)
>>> print(y)
ivy.array([[-1., -1., -1.],
           [ 1.,  1.,  1.]])

>>> x = ivy.array([[0.0976, -0.3452, 1.2740],
...                [0.1047, 0.5886, 1.2732],
...                [0.7696, -1.7024, -2.2518]])
>>> y = ivy.layer_norm(x, [0, 1], eps=0.001,
...                    new_std=1.5, scale=0.5, offset=[0.5, 0.02, 0.1])
>>> print(y)
ivy.array([[ 0.826, -0.178,  0.981],
           [ 0.831,  0.421,  0.981],
           [ 1.26 , -1.05 , -1.28 ]])

With a mix of ivy.Array and ivy.Container inputs:

>>> x = ivy.array([[1., 2., 3.], [4., 5., 6.]])
>>> normalized_idxs = ivy.Container({'a': [0], 'b': [1]})
>>> y = ivy.layer_norm(x, normalized_idxs, new_std=1.25, offset=0.2)
>>> print(y)
{
    a: ivy.array([[-1.25, -1.25, -1.25],
                  [1.25, 1.25, 1.25]]),
    b: ivy.array([[-1.53, 0., 1.53],
                  [-1.53, 0., 1.53]])
}

With one ivy.Container input:

>>> x = ivy.Container({'a': ivy.array([7., 10., 12.]),
...                    'b': ivy.array([[1., 2., 3.], [4., 5., 6.]])})
>>> normalized_idxs = [0]
>>> y = ivy.layer_norm(x, normalized_idxs, eps=1.25, scale=0.3)
>>> print(y)
{
    a: ivy.array([-0.342, 0.0427, 0.299]),
    b: ivy.array([[-0.217, 0., 0.217],
                  [-0.217, 0., 0.217]])
}

With multiple ivy.Container inputs:

>>> x = ivy.Container({'a': ivy.array([7., 10., 12.]),
...                    'b': ivy.array([[1., 2., 3.], [4., 5., 6.]])})
>>> normalized_idxs = ivy.Container({'a': [0], 'b': [1]})
>>> new_std = ivy.Container({'a': 1.25, 'b': 1.5})
>>> offset = ivy.Container({'a': [0.2, 0.5, 0.7], 'b': 0.3})
>>> y = ivy.layer_norm(x, normalized_idxs, new_std=new_std, offset=offset)
>>> print(y)
{
    a: ivy.array([-1.62, 0.203, 1.42]),
    b: ivy.array([[-1.84, 0., 1.84],
                  [-1.84, 0., 1.84]])
}

Both the description and the type hints above assume an array input for simplicity, but this function is nestable, and therefore also accepts ivy.Container instances in place of any of the arguments.
- Array.layer_norm(self, normalized_idxs, /, *, scale=None, offset=None, eps=1e-05, new_std=1.0, out=None)#
ivy.Array instance method variant of ivy.layer_norm. This method simply wraps the function, and so the docstring for ivy.layer_norm also applies to this method with minimal changes.
- Parameters:
  - self (Array) – Input array.
  - normalized_idxs (List[int]) – Indices to apply the normalization to.
  - scale (Optional[Union[Array, NativeArray]]) – Learnable gamma variables for elementwise post-multiplication. Default is None.
  - offset (Optional[Union[Array, NativeArray]]) – Learnable beta variables for elementwise post-addition. Default is None.
  - eps (float) – Small constant to add to the denominator. Default is 1e-05.
  - new_std (float) – The standard deviation of the new normalized values. Default is 1.0.
  - out (Optional[Array]) – Optional output array, for writing the result to. It must have a shape that the inputs broadcast to. Default is None.
- Return type:
Array
- Returns:
ret – The layer after applying layer normalization.
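Since the method only forwards to the functional API, the two call styles are interchangeable. A minimal check of that equivalence (assuming a backend is set, e.g. numpy):

    import ivy

    x = ivy.array([[1., 2., 3.], [4., 5., 6.]])
    # the instance method is assumed to forward to ivy.layer_norm
    assert ivy.array_equal(x.layer_norm([1]), ivy.layer_norm(x, [1]))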
Examples
>>> x = ivy.array([[0.0976, -0.3452, 1.2740],
...                [0.1047, 0.5886, 1.2732],
...                [0.7696, -1.7024, -2.2518]])
>>> norm = x.layer_norm([0, 1], eps=0.001,
...                     new_std=1.5, scale=0.5, offset=[0.5, 0.02, 0.1])
>>> print(norm)
ivy.array([[ 0.826, -0.178,  0.981],
           [ 0.831,  0.421,  0.981],
           [ 1.26 , -1.05 , -1.28 ]])
- Container.layer_norm(self, normalized_idxs, /, *, scale=None, offset=None, eps=1e-05, new_std=1.0, out=None)#
ivy.Container instance method variant of ivy.layer_norm. This method simply wraps the function, and so the docstring for ivy.layer_norm also applies to this method with minimal changes.
- Parameters:
  - self (Union[Array, NativeArray, Container]) – Input container.
  - normalized_idxs (List[int]) – Indices to apply the normalization to.
  - scale (Optional[Union[Array, NativeArray, Container]]) – Learnable gamma variables for elementwise post-multiplication. Default is None.
  - offset (Optional[Union[Array, NativeArray, Container]]) – Learnable beta variables for elementwise post-addition. Default is None.
  - eps (float) – Small constant to add to the denominator. Default is 1e-05.
  - new_std (float) – The standard deviation of the new normalized values. Default is 1.0.
  - out (Optional[Union[Array, Container]]) – Optional output container, for writing the result to. It must have a shape that the inputs broadcast to. Default is None.
- Return type:
Container
- Returns:
ret – The layer after applying layer normalization.
Examples
With one ivy.Container input:

>>> x = ivy.Container({'a': ivy.array([7., 10., 12.]),
...                    'b': ivy.array([[1., 2., 3.], [4., 5., 6.]])})
>>> normalized_idxs = [0]
>>> norm = x.layer_norm(normalized_idxs, eps=1.25, scale=0.3)
>>> print(norm)
{
    a: ivy.array([-0.342, 0.0427, 0.299]),
    b: ivy.array([[-0.241, -0.241, -0.241],
                  [0.241, 0.241, 0.241]])
}

With multiple ivy.Container inputs:

>>> x = ivy.Container({'a': ivy.array([7., 10., 12.]),
...                    'b': ivy.array([[1., 2., 3.], [4., 5., 6.]])})
>>> normalized_idxs = ivy.Container({'a': [0], 'b': [1]})
>>> new_std = ivy.Container({'a': 1.25, 'b': 1.5})
>>> offset = ivy.Container({'a': [0.2, 0.5, 0.7], 'b': 0.3})
>>> norm = x.layer_norm(normalized_idxs, new_std=new_std, offset=offset)
>>> print(norm)
{
    a: ivy.array([-1.62, 0.203, 1.42]),
    b: ivy.array([[-1.84, 0., 1.84],
                  [-1.84, 0., 1.84]])
}