Losses#

class ivy.data_classes.array.losses._ArrayWithLosses[source]#

Bases: ABC

binary_cross_entropy(pred, /, *, from_logits=False, epsilon=0.0, reduction='none', pos_weight=None, axis=None, out=None)[source]#

ivy.Array instance method variant of ivy.binary_cross_entropy. This method simply wraps the function, and so the docstring for ivy.binary_cross_entropy also applies to this method with minimal changes.

Parameters:
  • self (Array) – input array containing true labels.

  • pred (Union[Array, NativeArray]) – input array containing predicted labels.

  • from_logits (bool) – whether pred is expected to be a logits tensor. By default, we assume that pred encodes a probability distribution. Default: False.

  • epsilon (float) – a float in [0.0, 1.0] specifying the amount of smoothing when calculating the loss. If epsilon is 0, no smoothing will be applied. Default: 0.0.

  • reduction (str) – 'none': no reduction will be applied to the output. 'mean': the output will be averaged. 'sum': the output will be summed. Default: 'none'.

  • pos_weight (Optional[Union[Array, NativeArray]]) – a weight for positive examples. Must be an array with length equal to the number of classes. Default: None.

  • axis (Optional[int]) – the axis along which to compute the cross-entropy. Default: None.

  • out (Optional[Array]) – optional output array, for writing the result to. It must have a shape that the inputs broadcast to. Default: None.

Return type:

Array

Returns:

ret – The binary cross entropy between the given distributions.

Examples

>>> x = ivy.array([1, 1, 0])
>>> y = ivy.array([0.7, 0.8, 0.2])
>>> z = x.binary_cross_entropy(y)
>>> print(z)
ivy.array([0.357, 0.223, 0.223])
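
The arithmetic behind this example can be sketched in plain NumPy, as a minimal illustration of the standard elementwise binary cross-entropy formula under the documented defaults (epsilon=0, reduction='none'); the helper name `bce` is hypothetical and this is not the backend implementation:

```python
import numpy as np

def bce(true, pred):
    # Hypothetical helper: standard elementwise binary cross-entropy,
    # matching the documented defaults (epsilon=0, reduction='none').
    return -(true * np.log(pred) + (1.0 - true) * np.log(1.0 - pred))

x = np.array([1.0, 1.0, 0.0])   # true labels
y = np.array([0.7, 0.8, 0.2])   # predicted probabilities
print(np.round(bce(x, y), 3))   # [0.357 0.223 0.223]
```

With reduction='none' each element is scored independently, which is why the result has the same shape as the inputs.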
cross_entropy(pred, /, *, axis=-1, epsilon=1e-07, reduction='sum', out=None)[source]#

ivy.Array instance method variant of ivy.cross_entropy. This method simply wraps the function, and so the docstring for ivy.cross_entropy also applies to this method with minimal changes.

Parameters:
  • self (Array) – input array containing true labels.

  • pred (Union[Array, NativeArray]) – input array containing the predicted labels.

  • axis (int) – the axis along which to compute the cross-entropy. If axis is -1, the cross-entropy will be computed along the last dimension. Default: -1.

  • epsilon (float) – a float in [0.0, 1.0] specifying the amount of smoothing when calculating the loss. If epsilon is 0, no smoothing will be applied. Default: 1e-7.

  • reduction (str) – 'none': no reduction will be applied to the output. 'mean': the output will be averaged. 'sum': the output will be summed. Default: 'sum'.

  • out (Optional[Array]) – optional output array, for writing the result to. It must have a shape that the inputs broadcast to. Default: None.

Return type:

Array

Returns:

ret – The cross-entropy loss between the given distributions.

Examples

>>> x = ivy.array([0, 0, 1, 0])
>>> y = ivy.array([0.25, 0.25, 0.25, 0.25])
>>> z = x.cross_entropy(y)
>>> print(z)
ivy.array(1.3862944)
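
As a sanity check, the documented result can be reproduced with a short NumPy sketch of categorical cross-entropy under the default reduction='sum'; the helper name `ce` is hypothetical, and treating epsilon as a clamp on the predictions is an assumption, not a statement of how the backend applies it:

```python
import numpy as np

def ce(true, pred, axis=-1, epsilon=1e-7):
    # Hypothetical helper: clamp predictions away from 0 and 1 so log() is
    # finite (an assumed interpretation of epsilon), then sum
    # -true * log(pred) over the class axis (reduction='sum').
    pred = np.clip(pred, epsilon, 1.0 - epsilon)
    return -np.sum(true * np.log(pred), axis=axis)

x = np.array([0.0, 0.0, 1.0, 0.0])      # one-hot true labels
y = np.array([0.25, 0.25, 0.25, 0.25])  # uniform predicted distribution
print(ce(x, y))                         # approximately 1.3862944
```

Only the true class contributes, so the loss reduces to -log(0.25) ≈ 1.386.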
sparse_cross_entropy(pred, /, *, axis=-1, epsilon=1e-07, reduction='sum', out=None)[source]#

ivy.Array instance method variant of ivy.sparse_cross_entropy. This method simply wraps the function, and so the docstring for ivy.sparse_cross_entropy also applies to this method with minimal changes.

Parameters:
  • self (Array) – input array containing the true labels as integer class indices.

  • pred (Union[Array, NativeArray]) – input array containing the predicted probabilities for each class.

  • axis (int) – the axis along which to compute the cross-entropy. If axis is -1, the cross-entropy will be computed along the last dimension. Default: -1.

  • epsilon (float) – a float in [0.0, 1.0] specifying the amount of smoothing when calculating the loss. If epsilon is 0, no smoothing will be applied. Default: 1e-7.

  • reduction (str) – 'none': no reduction will be applied to the output. 'mean': the output will be averaged. 'sum': the output will be summed. Default: 'sum'.

  • out (Optional[Array]) – optional output array, for writing the result to. It must have a shape that the inputs broadcast to. Default: None.

Return type:

Array

Returns:

ret – The sparse cross-entropy loss between the given distributions.

Examples

>>> x = ivy.array([1, 1, 0])
>>> y = ivy.array([0.7, 0.8, 0.2])
>>> z = x.sparse_cross_entropy(y)
>>> print(z)
ivy.array([0.223, 0.223, 0.357])
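
The example can be reproduced by indexing the predicted probabilities with the integer labels, a NumPy sketch of the sparse formulation; the helper name `sce` is hypothetical, and the epsilon clamp is an assumed interpretation of the smoothing parameter:

```python
import numpy as np

def sce(labels, pred, epsilon=1e-7):
    # Hypothetical helper: each integer label selects the probability the
    # model assigned to the true class, so no one-hot encoding is needed;
    # epsilon clamps predictions away from log(0) (assumed interpretation).
    pred = np.clip(pred, epsilon, 1.0 - epsilon)
    return -np.log(pred[labels])

x = np.array([1, 1, 0])        # true class indices
y = np.array([0.7, 0.8, 0.2])  # predicted probabilities per class
print(np.round(sce(x, y), 3))  # [0.223 0.223 0.357]
```

This is why the output here is a permutation of the binary_cross_entropy example above: the labels [1, 1, 0] select -log(0.8), -log(0.8), and -log(0.7).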