Losses#
- class ivy.data_classes.container.losses._ContainerWithLosses(dict_in=None, queues=None, queue_load_sizes=None, container_combine_method='list_join', queue_timeout=None, print_limit=10, key_length_limit=None, print_indent=4, print_line_spacing=0, ivyh=None, default_key_color='green', keyword_color_dict=None, rebuild_child_containers=False, types_to_iteratively_nest=None, alphabetical_keys=True, dynamic_backend=None, **kwargs)[source]#
Bases: ContainerBase
- static _static_binary_cross_entropy(true, pred, /, *, from_logits=False, epsilon=0.0, reduction='none', pos_weight=None, axis=None, key_chains=None, to_apply=True, prune_unapplied=False, map_sequences=False, out=None)[source]#
ivy.Container static method variant of ivy.binary_cross_entropy. This method simply wraps the function, and so the docstring for ivy.binary_cross_entropy also applies to this method with minimal changes.
- Parameters:
  - true (Union[Container, Array, NativeArray]) – input array or container containing true labels.
  - pred (Union[Container, Array, NativeArray]) – input array or container containing predicted labels.
  - from_logits (bool) – whether pred is expected to be a logits tensor. By default, we assume that pred encodes a probability distribution. Default: False.
  - epsilon (Union[float, Container]) – a float in [0.0, 1.0] specifying the amount of smoothing when calculating the loss. If epsilon is 0, no smoothing will be applied. Default: 0.0.
  - reduction (str) – 'none': no reduction will be applied to the output. 'mean': the output will be averaged. 'sum': the output will be summed. Default: 'none'.
  - pos_weight (Optional[Union[Array, NativeArray, Container]]) – a weight for positive examples. Must be an array with length equal to the number of classes. Default: None.
  - axis (Optional[Union[int, Container]]) – axis along which to compute cross-entropy. Default: None.
  - key_chains (Optional[Union[List[str], Dict[str, str]]]) – the key-chains to apply or not apply the method to. Default: None.
  - to_apply (bool) – if True, the method will be applied to key_chains, otherwise key_chains will be skipped. Default: True.
  - prune_unapplied (bool) – whether to prune key_chains for which the function was not applied. Default: False.
  - map_sequences (bool) – whether to also map the method to sequences (lists, tuples). Default: False.
  - out (Optional[Container]) – optional output container, for writing the result to. It must have a shape that the inputs broadcast to. Default: None.
- Return type: Container
- Returns: ret – The binary cross entropy between the given distributions.
Examples

With ivy.Container inputs:

>>> x = ivy.Container(a=ivy.array([1, 0, 0]), b=ivy.array([0, 0, 1]))
>>> y = ivy.Container(a=ivy.array([0.6, 0.2, 0.3]), b=ivy.array([0.8, 0.2, 0.2]))
>>> z = ivy.Container.static_binary_cross_entropy(x, y)
>>> print(z)
{
    a: ivy.array([0.511, 0.223, 0.357]),
    b: ivy.array([1.61, 0.223, 1.61])
}
With a mix of ivy.Array and ivy.Container inputs:

>>> x = ivy.array([1, 1, 0])
>>> y = ivy.Container(a=ivy.array([0.7, 0.8, 0.2]), b=ivy.array([0.2, 0.6, 0.7]))
>>> z = ivy.Container.static_binary_cross_entropy(x, y)
>>> print(z)
{
    a: ivy.array([0.357, 0.223, 0.223]),
    b: ivy.array([1.61, 0.511, 1.2])
}
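The per-element values in the examples above follow the standard binary cross entropy formula, -(t·log(p) + (1 - t)·log(1 - p)). As a sanity check, a minimal plain-Python sketch (not the Ivy implementation) reproduces the `a` leaf of the first example:

```python
import math

def binary_cross_entropy(true, pred):
    """Element-wise binary cross entropy: -(t*log(p) + (1-t)*log(1-p))."""
    return [-(t * math.log(p) + (1 - t) * math.log(1 - p))
            for t, p in zip(true, pred)]

# Leaf `a` from the first example: true labels [1, 0, 0], predictions [0.6, 0.2, 0.3]
loss = binary_cross_entropy([1, 0, 0], [0.6, 0.2, 0.3])
print([round(v, 3) for v in loss])  # [0.511, 0.223, 0.357]
```

This matches ivy.array([0.511, 0.223, 0.357]) printed above; smoothing (epsilon), pos_weight, and reductions are layered on top of this per-element quantity.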
- static _static_cross_entropy(true, pred, /, *, axis=-1, epsilon=1e-07, reduction='sum', key_chains=None, to_apply=True, prune_unapplied=False, map_sequences=False, out=None)[source]#
ivy.Container static method variant of ivy.cross_entropy. This method simply wraps the function, and so the docstring for ivy.cross_entropy also applies to this method with minimal changes.
- Parameters:
  - true (Union[Container, Array, NativeArray]) – input array or container containing true labels.
  - pred (Union[Container, Array, NativeArray]) – input array or container containing the predicted labels.
  - axis (Union[int, Container]) – the axis along which to compute the cross-entropy. If axis is -1, the cross-entropy will be computed along the last dimension. Default: -1.
  - epsilon (Union[float, Container]) – a float in [0.0, 1.0] specifying the amount of smoothing when calculating the loss. If epsilon is 0, no smoothing will be applied. Default: 1e-7.
  - key_chains (Optional[Union[List[str], Dict[str, str]]]) – the key-chains to apply or not apply the method to. Default: None.
  - to_apply (bool) – if True, the method will be applied to key_chains, otherwise key_chains will be skipped. Default: True.
  - prune_unapplied (bool) – whether to prune key_chains for which the function was not applied. Default: False.
  - map_sequences (bool) – whether to also map the method to sequences (lists, tuples). Default: False.
  - out (Optional[Container]) – optional output container, for writing the result to. It must have a shape that the inputs broadcast to. Default: None.
- Return type: Container
- Returns: ret – The cross-entropy loss between the given distributions.
Examples

With ivy.Container inputs:

>>> x = ivy.Container(a=ivy.array([0, 0, 1]), b=ivy.array([1, 1, 0]))
>>> y = ivy.Container(a=ivy.array([0.6, 0.2, 0.3]), b=ivy.array([0.8, 0.2, 0.2]))
>>> z = ivy.Container.static_cross_entropy(x, y)
>>> print(z)
{
    a: ivy.array(1.20397282),
    b: ivy.array(1.83258148)
}
With a mix of ivy.Array and ivy.Container inputs:

>>> x = ivy.array([0, 0, 1])
>>> y = ivy.Container(a=ivy.array([0.6, 0.2, 0.3]), b=ivy.array([0.8, 0.2, 0.2]))
>>> z = ivy.Container.static_cross_entropy(x, y)
>>> print(z)
{
    a: ivy.array(1.20397282),
    b: ivy.array(1.60943794)
}
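With the default reduction='sum', the scalar results above reduce, per leaf, to -Σ t·log(p). A minimal plain-Python sketch (not the Ivy implementation, and omitting the small epsilon smoothing) that reproduces the `a` leaf of the first example:

```python
import math

def cross_entropy(true, pred):
    """Cross-entropy with 'sum' reduction: -sum(t * log(p))."""
    return -sum(t * math.log(p) for t, p in zip(true, pred))

# Leaf `a` from the first example: true [0, 0, 1], predicted [0.6, 0.2, 0.3]
loss = cross_entropy([0, 0, 1], [0.6, 0.2, 0.3])
# loss ≈ 1.20397282, matching ivy.array(1.20397282) above
```

Since only one entry of `true` is nonzero here, the sum collapses to -log(0.3).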
- static _static_sparse_cross_entropy(true, pred, /, *, axis=-1, epsilon=1e-07, reduction='sum', key_chains=None, to_apply=True, prune_unapplied=False, map_sequences=False, out=None)[source]#
ivy.Container static method variant of ivy.sparse_cross_entropy. This method simply wraps the function, and so the docstring for ivy.sparse_cross_entropy also applies to this method with minimal changes.
- Parameters:
  - true (Union[Container, Array, NativeArray]) – input array or container containing the true labels as logits.
  - pred (Union[Container, Array, NativeArray]) – input array or container containing the predicted labels as logits.
  - axis (Union[int, Container]) – the axis along which to compute the cross-entropy. If axis is -1, the cross-entropy will be computed along the last dimension. Default: -1.
  - epsilon – a float in [0.0, 1.0] specifying the amount of smoothing when calculating the loss. If epsilon is 0, no smoothing will be applied. Default: 1e-7.
  - key_chains (Optional[Union[List[str], Dict[str, str]]]) – the key-chains to apply or not apply the method to. Default: None.
  - to_apply (bool) – if True, the method will be applied to key_chains, otherwise key_chains will be skipped. Default: True.
  - prune_unapplied (bool) – whether to prune key_chains for which the function was not applied. Default: False.
  - map_sequences (bool) – whether to also map the method to sequences (lists, tuples). Default: False.
  - out (Optional[Container]) – optional output container, for writing the result to. It must have a shape that the inputs broadcast to. Default: None.
- Return type: Container
- Returns: ret – The sparse cross-entropy loss between the given distributions.
Examples

With ivy.Container inputs:

>>> x = ivy.Container(a=ivy.array([1, 0, 0]), b=ivy.array([0, 0, 1]))
>>> y = ivy.Container(a=ivy.array([0.6, 0.2, 0.3]), b=ivy.array([0.8, 0.2, 0.2]))
>>> z = ivy.Container.static_sparse_cross_entropy(x, y)
>>> print(z)
{
    a: ivy.array([1.61, 0.511, 0.511]),
    b: ivy.array([0.223, 0.223, 1.61])
}
With a mix of ivy.Array and ivy.Container inputs:

>>> x = ivy.array([1, 1, 0])
>>> y = ivy.Container(a=ivy.array([0.7, 0.8, 0.2]), b=ivy.array([0.2, 0.6, 0.7]))
>>> z = ivy.Container.static_sparse_cross_entropy(x, y)
>>> print(z)
{
    a: ivy.array([0.223, 0.223, 0.357]),
    b: ivy.array([0.511, 0.511, 1.61])
}
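In the outputs above, each entry of `true` acts as a class index into `pred`, so element i of the result is -log(pred[true[i]]). A plain-Python sketch under that reading (not the Ivy implementation, epsilon smoothing omitted):

```python
import math

def sparse_cross_entropy(true, pred):
    """Per-element -log(pred[class_index]), with `true` holding class indices."""
    return [-math.log(pred[t]) for t in true]

# Leaf `a` from the first example: indices [1, 0, 0], predictions [0.6, 0.2, 0.3]
loss = sparse_cross_entropy([1, 0, 0], [0.6, 0.2, 0.3])
print([round(v, 3) for v in loss])  # [1.609, 0.511, 0.511]
```

This matches ivy.array([1.61, 0.511, 0.511]) above up to the printed precision.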
- binary_cross_entropy(pred, /, *, from_logits=False, epsilon=0.0, reduction='none', pos_weight=None, axis=None, key_chains=None, to_apply=True, prune_unapplied=False, map_sequences=False, out=None)[source]#
ivy.Container instance method variant of ivy.binary_cross_entropy. This method simply wraps the function, and so the docstring for ivy.binary_cross_entropy also applies to this method with minimal changes.
- Parameters:
  - self (Container) – input container containing true labels.
  - pred (Union[Container, Array, NativeArray]) – input array or container containing predicted labels.
  - from_logits (bool) – whether pred is expected to be a logits tensor. By default, we assume that pred encodes a probability distribution. Default: False.
  - epsilon (Union[float, Container]) – a float in [0.0, 1.0] specifying the amount of smoothing when calculating the loss. If epsilon is 0, no smoothing will be applied. Default: 0.0.
  - reduction (str) – 'none': no reduction will be applied to the output. 'mean': the output will be averaged. 'sum': the output will be summed. Default: 'none'.
  - pos_weight (Optional[Union[Array, NativeArray, Container]]) – a weight for positive examples. Must be an array with length equal to the number of classes. Default: None.
  - axis (Optional[int]) – axis along which to compute cross-entropy. Default: None.
  - key_chains (Optional[Union[List[str], Dict[str, str]]]) – the key-chains to apply or not apply the method to. Default: None.
  - to_apply (bool) – if True, the method will be applied to key_chains, otherwise key_chains will be skipped. Default: True.
  - prune_unapplied (bool) – whether to prune key_chains for which the function was not applied. Default: False.
  - map_sequences (bool) – whether to also map the method to sequences (lists, tuples). Default: False.
  - out (Optional[Container]) – optional output container, for writing the result to. It must have a shape that the inputs broadcast to. Default: None.
- Return type: Container
- Returns: ret – The binary cross entropy between the given distributions.
Examples

>>> x = ivy.Container(a=ivy.array([1, 0, 0]), b=ivy.array([0, 0, 1]))
>>> y = ivy.Container(a=ivy.array([0.6, 0.2, 0.3]), b=ivy.array([0.8, 0.2, 0.2]))
>>> z = x.binary_cross_entropy(y)
>>> print(z)
{
    a: ivy.array([0.511, 0.223, 0.357]),
    b: ivy.array([1.61, 0.223, 1.61])
}
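The reduction argument controls how the per-element losses shown above are aggregated within each leaf. A minimal sketch of the three documented modes (plain Python, not the Ivy implementation):

```python
def reduce_loss(values, reduction="none"):
    """Apply 'none', 'mean', or 'sum' reduction to a list of per-element losses."""
    if reduction == "none":
        return values
    if reduction == "mean":
        return sum(values) / len(values)
    if reduction == "sum":
        return sum(values)
    raise ValueError(f"unknown reduction: {reduction!r}")

per_element = [0.511, 0.223, 0.357]  # leaf `a` from the example above
print(reduce_loss(per_element, "none"))           # [0.511, 0.223, 0.357]
print(round(reduce_loss(per_element, "sum"), 3))  # 1.091
print(round(reduce_loss(per_element, "mean"), 4)) # 0.3637
```

With the default 'none', the output keeps the shape of the inputs; 'mean' and 'sum' collapse each leaf to a scalar.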
- cross_entropy(pred, /, *, axis=-1, epsilon=1e-07, reduction='sum', key_chains=None, to_apply=True, prune_unapplied=False, map_sequences=False, out=None)[source]#
ivy.Container instance method variant of ivy.cross_entropy. This method simply wraps the function, and so the docstring for ivy.cross_entropy also applies to this method with minimal changes.
- Parameters:
  - self (Container) – input container containing true labels.
  - pred (Union[Container, Array, NativeArray]) – input array or container containing the predicted labels.
  - axis (Union[int, Container]) – the axis along which to compute the cross-entropy. If axis is -1, the cross-entropy will be computed along the last dimension. Default: -1.
  - epsilon (Union[float, Container]) – a float in [0.0, 1.0] specifying the amount of smoothing when calculating the loss. If epsilon is 0, no smoothing will be applied. Default: 1e-7.
  - key_chains (Optional[Union[List[str], Dict[str, str]]]) – the key-chains to apply or not apply the method to. Default: None.
  - to_apply (bool) – if True, the method will be applied to key_chains, otherwise key_chains will be skipped. Default: True.
  - prune_unapplied (bool) – whether to prune key_chains for which the function was not applied. Default: False.
  - map_sequences (bool) – whether to also map the method to sequences (lists, tuples). Default: False.
  - out (Optional[Container]) – optional output container, for writing the result to. It must have a shape that the inputs broadcast to. Default: None.
- Return type: Container
- Returns: ret – The cross-entropy loss between the given distributions.
Examples

>>> x = ivy.Container(a=ivy.array([1, 0, 0]), b=ivy.array([0, 0, 1]))
>>> y = ivy.Container(a=ivy.array([0.6, 0.2, 0.3]), b=ivy.array([0.8, 0.2, 0.2]))
>>> z = x.cross_entropy(y)
>>> print(z)
{
    a: ivy.array(0.5108256),
    b: ivy.array(1.609438)
}
- sparse_cross_entropy(pred, /, *, axis=-1, epsilon=1e-07, reduction='sum', key_chains=None, to_apply=True, prune_unapplied=False, map_sequences=False, out=None)[source]#
ivy.Container instance method variant of ivy.sparse_cross_entropy. This method simply wraps the function, and so the docstring for ivy.sparse_cross_entropy also applies to this method with minimal changes.
- Parameters:
  - self (Container) – input container containing the true labels as logits.
  - pred (Union[Container, Array, NativeArray]) – input array or container containing the predicted labels as logits.
  - axis (Union[int, Container]) – the axis along which to compute the cross-entropy. If axis is -1, the cross-entropy will be computed along the last dimension. Default: -1.
  - epsilon – a float in [0.0, 1.0] specifying the amount of smoothing when calculating the loss. If epsilon is 0, no smoothing will be applied. Default: 1e-7.
  - key_chains (Optional[Union[List[str], Dict[str, str]]]) – the key-chains to apply or not apply the method to. Default: None.
  - to_apply (bool) – if True, the method will be applied to key_chains, otherwise key_chains will be skipped. Default: True.
  - prune_unapplied (bool) – whether to prune key_chains for which the function was not applied. Default: False.
  - map_sequences (bool) – whether to also map the method to sequences (lists, tuples). Default: False.
  - out (Optional[Container]) – optional output container, for writing the result to. It must have a shape that the inputs broadcast to. Default: None.
- Return type: Container
- Returns: ret – The sparse cross-entropy loss between the given distributions.
Examples

>>> x = ivy.Container(a=ivy.array([1, 0, 0]), b=ivy.array([0, 0, 1]))
>>> y = ivy.Container(a=ivy.array([0.6, 0.2, 0.3]), b=ivy.array([0.8, 0.2, 0.2]))
>>> z = x.sparse_cross_entropy(y)
>>> print(z)
{
    a: ivy.array([1.61, 0.511, 0.511]),
    b: ivy.array([0.223, 0.223, 1.61])
}
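The key_chains, to_apply, and prune_unapplied arguments shared by all of these methods select which leaves of the container the loss is applied to. A simplified flat-dict analogue of the documented selection semantics (a hypothetical helper, not the Ivy implementation, which also handles nested key-chains):

```python
def map_selected(data, fn, key_chains=None, to_apply=True, prune_unapplied=False):
    """Apply `fn` to selected leaves of a flat dict.

    A leaf is selected when its key is in `key_chains` (all leaves if None).
    With to_apply=True, selected leaves are transformed; with to_apply=False,
    selected leaves are skipped instead. Leaves the function was not applied
    to are kept unchanged, or dropped when prune_unapplied=True.
    """
    out = {}
    for key, leaf in data.items():
        selected = key_chains is None or key in key_chains
        should_apply = selected if to_apply else not selected
        if should_apply:
            out[key] = fn(leaf)
        elif not prune_unapplied:
            out[key] = leaf
    return out

data = {"a": [1.0, 2.0], "b": [3.0]}
double = lambda xs: [2 * x for x in xs]
print(map_selected(data, double, key_chains=["a"]))
# {'a': [2.0, 4.0], 'b': [3.0]}
print(map_selected(data, double, key_chains=["a"], prune_unapplied=True))
# {'a': [2.0, 4.0]}
```

In the first call the unselected leaf `b` passes through untouched; in the second, prune_unapplied drops it from the result.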