hinge_embedding_loss#

ivy.hinge_embedding_loss(input, target, *, margin=1.0, reduction='mean')[source]#

Measures the hinge embedding loss between an input tensor x and a labels tensor y containing values 1 or -1. It is typically used to judge whether two inputs are similar or dissimilar, for example when learning embeddings or for semi-supervised learning.

Loss for the n-th sample:
\[\begin{split}l_n = \begin{cases} x_n, & \text{if}\; y_n = 1,\\ \max \{0, margin - x_n\}, & \text{if}\; y_n = -1, \end{cases}\end{split}\]
Total loss:
\[\begin{split}\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{`mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{`sum'.} \end{cases}\end{split}\]

where \(L = \{l_1,\dots,l_N\}^\top\).
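To make the per-sample rule concrete, here is a minimal sketch (illustrative only, not the library implementation); it assumes the ivy.where, ivy.relu, ivy.mean and ivy.sum functions and element-wise broadcasting:

import ivy

def hinge_embedding_loss_sketch(input, target, margin=1.0, reduction="mean"):
    # l_n = x_n where y_n == 1, and max(0, margin - x_n) where y_n == -1
    per_sample = ivy.where(target == 1, input, ivy.relu(margin - input))
    if reduction == "mean":
        return ivy.mean(per_sample)  # average over all elements
    if reduction == "sum":
        return ivy.sum(per_sample)   # sum over all elements
    return per_sample                # reduction == 'none': unreduced loss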

Parameters:
  • input (Union[Array, NativeArray]) – Input tensor with dtype float. The shape is [N, *], where N is batch size and * represents any number of additional dimensions.

  • target (Union[Array, NativeArray]) – Label tensor containing 1 or -1, with dtype float32 or float64. Its shape must match that of the input.

  • margin (float, default: 1.0) – The margin hyperparameter used when the target is -1. For such samples the loss is \(\max\{0, margin - x_n\}\), so inputs greater than or equal to the margin contribute zero loss. Default is 1.0.

  • reduction (str, default: 'mean') – Specifies how to aggregate the loss across the batch:

    • 'none': Returns the unreduced, per-sample loss.

    • 'mean': Returns the mean of the loss.

    • 'sum': Returns the sum of the loss.

    Default is 'mean'.

  • Shape:

    • Input: \((*)\), where \(*\) means any number of dimensions. The sum operation operates over all the elements.

    • Target: \((*)\), same shape as the input.

    • Output: scalar. If reduction is 'none', then same shape as the input.

Return type:

Array

Returns:

ret – Hinge embedding loss calculated from the input and label, shaped based on the reduction method.

Examples

>>> input_tensor = ivy.array([1, 2, 3, 4], dtype=ivy.float64)
>>> target_tensor = ivy.array([1, 1, 1, 1], dtype=ivy.float64)
>>> loss = ivy.hinge_embedding_loss(input_tensor, target_tensor, reduction="none")
>>> loss
ivy.array([1., 2., 3., 4.])
>>> input_tensor = ivy.array([21, 22], dtype=ivy.float32)
>>> target_tensor = ivy.array([-1, 1], dtype=ivy.float32)
>>> loss = ivy.hinge_embedding_loss(input_tensor, target_tensor,
...                                 margin=2.0, reduction="sum")
>>> loss
ivy.array(22.)
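
As a further illustration, applying the 'mean' reduction to the first example above averages the per-sample losses, i.e. 10 / 4 (the printed output below is a sketch; its exact formatting may vary by backend):

>>> input_tensor = ivy.array([1, 2, 3, 4], dtype=ivy.float64)
>>> target_tensor = ivy.array([1, 1, 1, 1], dtype=ivy.float64)
>>> loss = ivy.hinge_embedding_loss(input_tensor, target_tensor, reduction="mean")
>>> loss
ivy.array(2.5)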
Array.hinge_embedding_loss(self, target, *, margin=1.0, reduction='mean')[source]#

Measures the hinge embedding loss between an input tensor x and a labels tensor y containing values 1 or -1. It is typically used to judge whether two inputs are similar or dissimilar, for example when learning embeddings or for semi-supervised learning.

Loss for the n-th sample:
\[\begin{split}l_n = \begin{cases} x_n, & \text{if}\; y_n = 1,\\ \max \{0, margin - x_n\}, & \text{if}\; y_n = -1, \end{cases}\end{split}\]
Total loss:
\[\begin{split}\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{`mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{`sum'.} \end{cases}\end{split}\]

where \(L = \{l_1,\dots,l_N\}^\top\).

Parameters:
  • input – Input tensor with dtype float. The shape is [N, *], where N is batch size and * represents any number of additional dimensions.

  • target – Label tensor containing 1 or -1, with dtype float32 or float64. Its shape must match that of the input.

  • margin (float, default: 1.0) – The margin hyperparameter used when the target is -1. For such samples the loss is \(\max\{0, margin - x_n\}\), so inputs greater than or equal to the margin contribute zero loss. Default is 1.0.

  • reduction (str, default: 'mean') – Specifies how to aggregate the loss across the batch:

    • 'none': Returns the unreduced, per-sample loss.

    • 'mean': Returns the mean of the loss.

    • 'sum': Returns the sum of the loss.

    Default is 'mean'.

  • Shape:

    • Input: \((*)\), where \(*\) means any number of dimensions. The sum operation operates over all the elements.

    • Target: \((*)\), same shape as the input.

    • Output: scalar. If reduction is 'none', then same shape as the input.

Return type:

Array

Returns:

ret – Hinge embedding loss calculated from the input and label, shaped based on the reduction method.

Examples

>>> input_tensor = ivy.array([1, 2, 3, 4], dtype=ivy.float64)
>>> target_tensor = ivy.array([1, 1, 1, 1], dtype=ivy.float64)
>>> input_tensor.hinge_embedding_loss(target_tensor, reduction="sum")
ivy.array(10.)
>>> input_tensor = ivy.array([1, 2, 3], dtype=ivy.float64)
>>> target_tensor = ivy.array([1, -1, -1], dtype=ivy.float64)
>>> input_tensor.hinge_embedding_loss(target_tensor, margin=2.0)
ivy.array(0.33333333)
Container.hinge_embedding_loss(self, target, *, margin=1.0, reduction='mean', key_chains=None, to_apply=True, prune_unapplied=False, map_sequences=False)[source]#

ivy.Container instance method variant of ivy.hinge_embedding_loss. This method simply wraps the function, and so the docstring for ivy.hinge_embedding_loss also applies to this method with minimal changes.

Parameters:
  • input – input array or container of input values.

  • target (Union[Container, Array, NativeArray]) – input array or container containing the target labels.

  • margin (Union[float, Container], default: 1.0) – The margin hyperparameter used when the target is -1. For such samples the loss is \(\max\{0, margin - x_n\}\), so inputs greater than or equal to the margin contribute zero loss. Default is 1.0.

  • reduction (Union[str, Container], default: 'mean') – Specifies how to aggregate the loss across the batch:

    • 'none': Returns the unreduced, per-sample loss.

    • 'mean': Returns the mean of the loss.

    • 'sum': Returns the sum of the loss.

    Default is 'mean'.

  • key_chains (Optional[Union[List[str], Dict[str, str], Container]], default: None) – The key-chains to apply or not apply the method to. Default is None.

  • to_apply (Union[bool, Container], default: True) – If True, the method will be applied to key_chains, otherwise key_chains will be skipped. Default is True.

  • prune_unapplied (Union[bool, Container], default: False) – Whether to prune key_chains for which the function was not applied. Default is False.

  • map_sequences (Union[bool, Container], default: False) – Whether to also map method to sequences (lists, tuples). Default is False.

  • Shape:

    • Input: \((*)\), where \(*\) means any number of dimensions. The sum operation operates over all the elements.

    • Target: \((*)\), same shape as the input.

    • Output: scalar. If reduction is 'none', then same shape as the input.

Return type:

Container

Returns:

ret – Hinge embedding loss calculated from the input and label, shaped based on the reduction method.

Examples

>>> x = ivy.Container(a=ivy.array([[1, 0, 2]], dtype=ivy.float32),
...              b=ivy.array([[3, 2, 1]], dtype=ivy.float32))
>>> y = ivy.Container(a=ivy.array([[-1, -1, -1]], dtype=ivy.float32),
...              b=ivy.array([[1, 1, 1]], dtype=ivy.float32))
>>> x.hinge_embedding_loss(y, reduction="none", margin=0.5)
{
    a: ivy.array([[0., 0.5, 0.]]),
    b: ivy.array([[3., 2., 1.]])
}
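
The Container-specific keyword arguments behave as for other Container methods. As an illustrative sketch (output not verified), key_chains together with prune_unapplied can restrict the computation to a subset of leaves, reusing x and y from the example above:

>>> x.hinge_embedding_loss(y, reduction="none", margin=0.5,
...                        key_chains=["a"], prune_unapplied=True)
{
    a: ivy.array([[0., 0.5, 0.]])
}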