soft_margin_loss#

ivy.soft_margin_loss(input, target, /, *, reduction='mean', out=None)[source]#

Compute the soft-margin loss, a smooth hinge-style loss of the form log(1 + exp(-label * score)), between predicted scores and true binary labels.

Parameters:
  • input (array_like) – True binary labels, of shape (batch_size,).

  • target (array_like) – Predicted scores, of shape (batch_size,).

  • reduction ({'mean', 'sum', 'none'}, optional) – Type of reduction to apply to the output. Default is ‘mean’.

  • out (array_like, optional) – Optional output array, for writing the result to. It must have a shape that the inputs broadcast to.

Return type:

Array

Returns:

ret (array) – The soft-margin hinge loss between the predicted scores and true binary labels.

Examples

>>> input = ivy.array([1, 0, 1, 0])
>>> target = ivy.array([0.8, 0.2, -0.6, 1.5])
>>> ivy.soft_margin_loss(input, target)
ivy.array(0.6987)
>>> input = ivy.array([1, 1, 0, 0])
>>> target = ivy.array([0.8, 0.7, 0.2, 0.1])
>>> ivy.soft_margin_loss(input, target, reduction='sum')
ivy.array(2.1606)
>>> input = ivy.array([1, 1, 0, 0])
>>> target = ivy.array([0.8, 0.7, 0.2, 0.1])
>>> ivy.soft_margin_loss(input, target, reduction='none')
ivy.array([0.3711, 0.4032, 0.6931, 0.6931])
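To make the reductions concrete, here is a minimal NumPy sketch of the formula the examples above imply, log(1 + exp(-label * score)) applied elementwise and then reduced; this is an illustrative reference implementation, not Ivy's actual backend code:

```python
import numpy as np

def soft_margin_loss(labels, scores, reduction="mean"):
    # Elementwise soft-margin loss: log(1 + exp(-label * score)).
    # log1p is used for numerical stability near zero.
    loss = np.log1p(np.exp(-labels * scores))
    if reduction == "mean":
        return loss.mean()
    if reduction == "sum":
        return loss.sum()
    return loss  # reduction == "none"

# Reproduces the first documented example.
labels = np.array([1.0, 0.0, 1.0, 0.0])
scores = np.array([0.8, 0.2, -0.6, 1.5])
print(round(float(soft_margin_loss(labels, scores)), 4))  # 0.6987
```

The same sketch reproduces the 'sum' and 'none' examples above when the corresponding `reduction` value is passed.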
Array.soft_margin_loss(self, target, /, *, reduction='mean', out=None)[source]#

ivy.Array instance method variant of ivy.soft_margin_loss. This method simply wraps the function, and so the docstring for ivy.soft_margin_loss also applies to this method with minimal changes.

Parameters:
  • self (Array) – input array containing true labels.

  • target (Union[Array, NativeArray]) – input array containing the target labels (predicted scores).

  • reduction (Optional[str], default: 'mean') – 'none': no reduction will be applied to the output. 'mean': the output will be averaged. 'sum': the output will be summed. Default: 'mean'.

  • out (Optional[Array], default: None) – optional output array, for writing the result to. It must have a shape that the inputs broadcast to.

Return type:

Array

Returns:

ret – The soft margin loss between the true and target labels.

Examples

>>> x = ivy.array([1, 1, 0])
>>> y = ivy.array([0.7, 0.8, 0.2])
>>> z = x.soft_margin_loss(y)
>>> print(z)
ivy.array(0.4891)
Container.soft_margin_loss(self, target, /, *, reduction='mean', key_chains=None, to_apply=True, prune_unapplied=False, map_sequences=False, out=None)[source]#

ivy.Container instance method variant of ivy.soft_margin_loss. This method simply wraps the function, and so the docstring for ivy.soft_margin_loss also applies to this method with minimal changes.


Parameters:
  • self (Container) – input container containing the true labels.

  • target (Union[Container, Array, NativeArray]) – input array or container containing the target labels.

  • reduction (Optional[Union[str, Container]], default: 'mean') – the reduction method to apply to the output: 'none', 'mean', or 'sum'. Default: 'mean'.

  • key_chains (Optional[Union[List[str], Dict[str, str], Container]], default: None) – The key-chains to apply or not apply the method to. Default is None.

  • to_apply (Union[bool, Container], default: True) – If True, the method will be applied to key_chains, otherwise key_chains will be skipped. Default is True.

  • prune_unapplied (Union[bool, Container], default: False) – Whether to prune key_chains for which the function was not applied. Default is False.

  • map_sequences (Union[bool, Container], default: False) – Whether to also map method to sequences (lists, tuples). Default is False.

  • out (Optional[Container], default: None) – optional output container, for writing the result to. It must have a shape that the inputs broadcast to.

Return type:

Container

Returns:

ret – The soft margin loss between the true and target labels.
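For orientation, a plain-Python sketch of the leaf-wise mapping the Container method performs; plain dicts of NumPy arrays stand in for real ivy.Container objects here, which is an assumption made purely for illustration:

```python
import numpy as np

def soft_margin_loss(labels, scores, reduction="mean"):
    # Same elementwise formula as the function above: log(1 + exp(-label * score)).
    loss = np.log1p(np.exp(-labels * scores))
    return {"mean": loss.mean, "sum": loss.sum, "none": lambda: loss}[reduction]()

# A container is a nested mapping of arrays; the instance method applies
# the loss independently at every leaf (key chain).
container = {
    "a": (np.array([1.0, 0.0]), np.array([0.6, 0.3])),
    "b": (np.array([1.0, 1.0]), np.array([0.4, 0.2])),
}
result = {key: soft_margin_loss(labels, scores)
          for key, (labels, scores) in container.items()}
```

`result` is again a mapping with keys "a" and "b", each holding the reduced loss for that leaf, which mirrors how the Container variant returns a Container of the same structure.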