smooth_l1_loss#

ivy.smooth_l1_loss(input, target, /, *, beta=1.0, reduction='mean', out=None)[source]#

Compute the smooth L1 loss between two input tensors.

Parameters:
  • input (array_like) – Input array containing the predicted values.

  • target (array_like) – Input array containing the target values.

  • beta (float, optional) – Threshold at which the loss changes from the quadratic to the linear regime; must be positive. Default is 1.0.

  • reduction (str, optional) – Specifies the type of reduction to apply to the output. Should be one of ‘none’, ‘sum’, or ‘mean’. Default is ‘mean’.

  • out (array, optional) – Optional output array, for writing the result to. It must have a shape that the inputs broadcast to.

Return type:

Array

Returns:

ret (array) – The smooth L1 loss between the input and target arrays.
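
Conceptually, the loss is computed elementwise and then reduced according to reduction. The sketch below is a rough, hedged reference of the commonly used piecewise definition (as in torch.nn.functional.smooth_l1_loss, where the quadratic branch is scaled by 1/beta); it is not the exact backend implementation and assumes beta > 0.

>>> def smooth_l1_sketch(input, target, beta=1.0, reduction='mean'):
...     # elementwise absolute difference between prediction and target
...     diff = ivy.abs(input - target)
...     # quadratic branch below the beta threshold, linear branch above it
...     loss = ivy.where(diff < beta, 0.5 * diff ** 2 / beta, diff - 0.5 * beta)
...     if reduction == 'mean':
...         return ivy.mean(loss)
...     if reduction == 'sum':
...         return ivy.sum(loss)
...     return loss  # reduction == 'none'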

Examples

>>> x = ivy.array([1.0, 2.0, 3.0])
>>> y = ivy.array([2.5, 1.8, 3.2])
>>> ivy.smooth_l1_loss(x, y, beta=1.0)
ivy.array(0.3467)
>>> x = ivy.array([1.0, 2.0, 3.0])
>>> y = ivy.array([6.0, 2.0, 3.0])
>>> ivy.smooth_l1_loss(x, y, beta=1.0)
ivy.array(1.5)
>>> input = ivy.array([2.0, 3.0, 5.0, 7.0])
>>> target = ivy.array([2.5, 3.5, 5.5, 6.5])
>>> loss = ivy.smooth_l1_loss(input, target, beta=1.5, reduction='sum')
>>> print(loss)
ivy.array(0.5)
>>> input = ivy.array([0.8, 1.2, 2.5, 3.7])
>>> target = ivy.array([0.9, 1.0, 2.3, 3.6])
>>> loss = ivy.smooth_l1_loss(input, target, beta=0.5, reduction='none')
>>> print(loss)
ivy.array([0.0133, 0.0250, 0.0056, 0.0025])
>>> input = ivy.array([2.0, 3.0, 5.0, 7.0])
>>> target = ivy.array([2.5, 3.5, 5.5, 6.5])
>>> loss = ivy.smooth_l1_loss(input, target, beta=0.2, reduction='mean')
>>> print(loss)
ivy.array(0.025)

With ivy.NativeArray input:

>>> x = ivy.native_array([1.5, 2.2, 3.7])
>>> y = ivy.native_array([2.1, 1.9, 3.5])
>>> print(ivy.smooth_l1_loss(x, y, beta=0.5))
ivy.array(0.0675)

With ivy.Container input:

>>> x = ivy.Container(a=ivy.array([1.0, 2.0, 3.0]))
>>> y = ivy.Container(a=ivy.array([2.5, 1.8, 3.2]))
>>> print(ivy.smooth_l1_loss(x, y, beta=1.0))
{
    a: ivy.array(0.3467)
}

With a mix of ivy.Array and ivy.NativeArray inputs:

>>> x = ivy.array([1.0, 2.0, 3.0])
>>> y = ivy.native_array([6.0, 2.0, 3.0])
>>> print(ivy.smooth_l1_loss(x, y, beta=0.5))
ivy.array(1.5)

With a mix of ivy.Array and ivy.Container inputs:

>>> x = ivy.array([1.0, 2.0, 3.0])
>>> y = ivy.Container(a=ivy.array([6.0, 2.0, 3.0]))
>>> print(ivy.smooth_l1_loss(x, y, beta=1.0))
{
    a: ivy.array(1.5)
}
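
The out argument can be used to write the result into a pre-allocated array. The snippet below is an illustrative sketch rather than a verified doctest; with reduction='none' the result has the broadcast shape of the inputs, and the values shown follow the piecewise definition sketched above.

>>> x = ivy.array([1.0, 2.0])
>>> y = ivy.array([4.0, 7.0])
>>> out = ivy.zeros(2)
>>> loss = ivy.smooth_l1_loss(x, y, reduction='none', out=out)
>>> print(out)
ivy.array([2.5, 4.5])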

Instance Method Examples

With ivy.Array input:

>>> x = ivy.array([1.0, 2.0, 3.0])
>>> y = ivy.array([2.5, 1.8, 3.2])
>>> print(x.smooth_l1_loss(y, beta=1.0))
ivy.array(0.3467)

With ivy.Container input:

>>> x = ivy.Container(a=ivy.array([1.0, 2.0, 3.0]))
>>> y = ivy.Container(a=ivy.array([2.5, 1.8, 3.2]))
>>> print(x.smooth_l1_loss(y, beta=1.0))
{
    a: ivy.array(0.3467)
}
Array.smooth_l1_loss(self, target, /, *, beta=1.0, reduction='mean', out=None)[source]#

ivy.Array instance method variant of ivy.smooth_l1_loss. This method simply wraps the function, and so the docstring for ivy.smooth_l1_loss also applies to this method with minimal changes.

Parameters:
  • self (Array) – input array containing the predicted values.

  • target (Union[Array, NativeArray]) – input array containing the target values.

  • beta (Optional[float], default: 1.0) – Threshold at which the loss changes from the quadratic to the linear regime. Default: 1.0.

  • reduction (Optional[str], default: 'mean') – Reduction method for the loss. Options are ‘none’, ‘mean’, or ‘sum’. Default: ‘mean’.

  • out (Optional[Array], default: None) – Optional output array, for writing the result to. It must have a shape that the inputs broadcast to.

Return type:

Array

Returns:

ret – The smooth L1 loss between self and target.

Examples

>>> x = ivy.array([1, 2, 3, 4])
>>> y = ivy.array([2, 2, 2, 2])
>>> z = x.smooth_l1_loss(y, beta=0.5)
>>> print(z)
ivy.array(0.8125)
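
A further illustrative example using the reduction argument; the value shown follows the standard piecewise definition sketched above rather than a verified doctest:

>>> x = ivy.array([1.0, 2.0, 3.0, 4.0])
>>> y = ivy.array([2.0, 2.0, 2.0, 2.0])
>>> z = x.smooth_l1_loss(y, beta=0.5, reduction='sum')
>>> print(z)
ivy.array(3.25)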
Container.smooth_l1_loss(self, target, /, *, beta=1.0, reduction='mean', key_chains=None, to_apply=True, prune_unapplied=False, map_sequences=False, out=None)[source]#

ivy.Container instance method variant of ivy.smooth_l1_loss. This method simply wraps the function, and so the docstring for ivy.smooth_l1_loss also applies to this method with minimal changes.

Parameters:
  • self (Container) – input container containing the predicted values.

  • target (Union[Container, Array, NativeArray]) – input array or container containing the target values.

  • beta (Optional[Union[float, Container]], default: 1.0) – a positive float value that sets the smoothness threshold. Default: 1.0.

  • reduction (Optional[Union[str, Container]], default: 'mean') – 'none': No reduction will be applied to the output. 'mean': The output will be averaged. 'sum': The output will be summed. Default: 'mean'.

  • key_chains (Optional[Union[List[str], Dict[str, str], Container]], default: None) – The key-chains to apply or not apply the method to. Default is None.

  • to_apply (Union[bool, Container], default: True) – If True, the method will be applied to key_chains, otherwise key_chains will be skipped. Default is True.

  • prune_unapplied (Union[bool, Container], default: False) – Whether to prune key_chains for which the function was not applied. Default is False.

  • map_sequences (Union[bool, Container], default: False) – Whether to also map method to sequences (lists, tuples). Default is False.

  • out (Optional[Container], default: None) – optional output container, for writing the result to. It must have a shape that the inputs broadcast to.

Return type:

Container

Returns:

ret – The smooth L1 loss between the input container and the target values.

Examples

>>> x = ivy.Container(a=ivy.array([1, 0, 2]), b=ivy.array([3, 2, 1]))
>>> y = ivy.Container(a=ivy.array([0.6, 0.2, 0.3]),
...                   b=ivy.array([0.8, 0.2, 0.2]))
>>> z = x.smooth_l1_loss(y)
>>> print(z)
{
    a: ivy.array(0.43333333),
    b: ivy.array(1.10666666)
}
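
The key-chain arguments restrict which leaves the method is applied to. The example below is an illustrative sketch, assuming the standard Container mapping behaviour where prune_unapplied=True drops the skipped keys; the value shown follows the piecewise definition above.

>>> x = ivy.Container(a=ivy.array([0.0, 0.0]), b=ivy.array([1.0, 1.0]))
>>> y = ivy.Container(a=ivy.array([3.0, 5.0]), b=ivy.array([1.0, 1.0]))
>>> z = x.smooth_l1_loss(y, key_chains=['a'], prune_unapplied=True)
>>> print(z)
{
    a: ivy.array(3.5)
}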