PyTorch metrics#

class MaskedMetric(metric_fn, mask_nans=False, mask_inf=False, metric_fn_kwargs=None, at=None, full_state_update: Optional[bool] = None, **kwargs: Any)[source]#

Base class to implement the metrics used in tsl.

In particular, a MaskedMetric accounts for missing values in the input sequences by accepting a boolean mask as additional input.

Parameters:
  • metric_fn – Base function to compute the metric point-wise.

  • mask_nans (bool, optional) – Whether to automatically mask nan values.

  • mask_inf (bool, optional) – Whether to automatically mask infinite values.

  • at (int, optional) – If provided, compute the metric only at this time step. (default: None)

is_differentiable: bool = None#
higher_is_better: bool = None#
full_state_update: bool = None#
is_masked(mask)[source]#
update(y_hat, y, mask=None)[source]#

Override this method to update the state variables of your metric class.

compute()[source]#

Override this method to compute the final metric value from state variables synchronized across the distributed backend.
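The masked-accumulation pattern described above can be sketched in plain PyTorch. This is an illustrative sketch of the semantics, not tsl's actual implementation: point-wise metric values are kept only where the boolean mask is True, and a running sum and count yield the final mean.

```python
import torch

# Sketch of MaskedMetric's update semantics (not the tsl source): apply the
# point-wise metric, zero out invalid positions, and track the valid count.
def masked_update(metric_fn, y_hat, y, mask=None, mask_nans=False):
    val = metric_fn(y_hat, y)                # point-wise metric values
    if mask is None:
        mask = torch.ones_like(val, dtype=torch.bool)
    if mask_nans:                            # also drop NaN entries, as mask_nans would
        mask = mask & ~torch.isnan(val)
    val = torch.where(mask, val, torch.zeros_like(val))
    return val.sum(), mask.sum()             # (accumulated value, number of valid entries)

y_hat = torch.tensor([1.0, 2.0, 3.0])
y = torch.tensor([1.5, 2.0, 100.0])
mask = torch.tensor([True, True, False])     # last target is missing
num, den = masked_update(lambda a, b: torch.abs(a - b), y_hat, y, mask)
print((num / den).item())  # masked MAE over the two valid entries: 0.25
```

Calling compute then amounts to dividing the accumulated value by the accumulated count.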

class MaskedMAE(mask_nans=False, mask_inf=False, at=None, **kwargs: Any)[source]#

Mean Absolute Error Metric.

Parameters:
  • mask_nans (bool, optional) – Whether to automatically mask nan values.

  • mask_inf (bool, optional) – Whether to automatically mask infinite values.

  • at (int, optional) – If provided, compute the metric only at this time step. (default: None)

is_differentiable: bool = True#
higher_is_better: bool = False#
full_state_update: bool = False#
class MaskedMSE(mask_nans=False, mask_inf=False, at=None, **kwargs: Any)[source]#

Mean Squared Error Metric.

Parameters:
  • mask_nans (bool, optional) – Whether to automatically mask nan values.

  • mask_inf (bool, optional) – Whether to automatically mask infinite values.

  • at (int, optional) – If provided, compute the metric only at this time step. (default: None)

is_differentiable: bool = True#
higher_is_better: bool = False#
full_state_update: bool = False#
class MaskedMRE(mask_nans=False, mask_inf=False, at=None, **kwargs: Any)[source]#

Mean Relative Error Metric.

Parameters:
  • mask_nans (bool, optional) – Whether to automatically mask nan values.

  • mask_inf (bool, optional) – Whether to automatically mask infinite values.

  • at (int, optional) – If provided, compute the metric only at this time step. (default: None)

is_differentiable: bool = True#
higher_is_better: bool = False#
full_state_update: bool = False#
compute()[source]#

Override this method to compute the final metric value from state variables synchronized across the distributed backend.

update(y_hat, y, mask=None)[source]#

Override this method to update the state variables of your metric class.

class MaskedMAPE(mask_nans=False, at=None, **kwargs: Any)[source]#

Mean Absolute Percentage Error Metric.

Parameters:
  • mask_nans (bool, optional) – Whether to automatically mask nan values.

  • at (int, optional) – If provided, compute the metric only at this time step. (default: None)

is_differentiable: bool = True#
higher_is_better: bool = False#
full_state_update: bool = False#
class MaskedPinballLoss(q, mask_nans=False, mask_inf=False, compute_on_step=True, dist_sync_on_step=False, process_group=None, dist_sync_fn=None, at=None)[source]#

Quantile loss.

Parameters:
  • q (float) – Target quantile.

  • mask_nans (bool, optional) – Whether to automatically mask nan values.

  • mask_inf (bool, optional) – Whether to automatically mask infinite values.

  • compute_on_step (bool, optional) – Whether to compute the metric right away or to accumulate the results. Set this to True when using the metric as a loss function, and to False when logging the aggregate error across different minibatches.

  • at (int, optional) – If provided, compute the metric only at this time step. (default: None)

is_differentiable: bool = True#
higher_is_better: bool = False#
full_state_update: bool = False#
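The pinball (quantile) loss penalizes under- and over-prediction asymmetrically according to the target quantile q. The following is a hedged sketch of that loss; the exact tsl implementation may differ in reduction and masking details.

```python
import torch

# Sketch of the pinball (quantile) loss: under-prediction (err > 0) is weighted
# by q, over-prediction by (1 - q). Not the tsl source.
def pinball_loss(y_hat, y, q):
    err = y - y_hat
    return torch.maximum(q * err, (q - 1) * err).mean()

y = torch.tensor([10.0, 10.0])
# For q = 0.9, under-predicting is penalized 9x more than over-predicting.
low = pinball_loss(torch.tensor([8.0, 8.0]), y, 0.9)    # under-prediction
high = pinball_loss(torch.tensor([12.0, 12.0]), y, 0.9)  # over-prediction
print(low.item(), high.item())  # 1.8 vs 0.2
```

Minimizing this loss drives the prediction toward the q-th quantile of the target distribution.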
mae(y_hat: Tensor, y: Tensor, mask: Optional[Tensor] = None, reduction: Literal['mean', 'sum', 'none'] = 'mean', nan_to_zero: bool = False) Union[float, Tensor][source]#

Compute the Mean Absolute Error (MAE) between the estimate \(\hat{y}\) and the true value \(y\), i.e.

\[\text{MAE} = \frac{\sum_{i=1}^n |\hat{y}_i - y_i|}{n}\]
Parameters:
  • y_hat (torch.Tensor) – The estimated variable.

  • y (torch.Tensor) – The ground-truth variable.

  • mask (torch.Tensor, optional) – If provided, compute the metric using only the values at valid indices (with mask set to True). If mask is not None and reduction is 'none', masked indices are set to nan (see nan_to_zero). (default: None)

  • reduction (str) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. (default: 'mean')

  • nan_to_zero (bool) – If True, then masked values in output are converted to 0. This has an effect only when mask is not None and reduction is 'none'. (default: False)

Returns:

The Mean Absolute Error.

Return type:

float | torch.Tensor
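The mask, reduction, and nan_to_zero semantics documented above can be reproduced in a few lines. This is a minimal re-implementation of the documented behavior (a sketch, not the tsl source): with a mask and reduction='none', invalid entries become NaN, or 0 when nan_to_zero=True.

```python
import torch

# Sketch mirroring the documented mae semantics (not the tsl source).
def mae_sketch(y_hat, y, mask=None, reduction='mean', nan_to_zero=False):
    err = torch.abs(y_hat - y)
    if mask is not None:
        if reduction == 'none':
            fill = 0.0 if nan_to_zero else float('nan')
            return torch.where(mask, err, torch.full_like(err, fill))
        err = err[mask]                      # keep only valid entries
    if reduction == 'mean':
        return err.mean()
    if reduction == 'sum':
        return err.sum()
    return err

y_hat = torch.tensor([1.0, 4.0, 2.0])
y = torch.tensor([2.0, 4.0, 0.0])
mask = torch.tensor([True, True, False])
print(mae_sketch(y_hat, y, mask).item())  # mean over the two valid entries: 0.5
print(mae_sketch(y_hat, y, mask, reduction='none', nan_to_zero=True))
```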

nmae(y_hat: Tensor, y: Tensor, mask: Optional[Tensor] = None, reduction: Literal['mean', 'sum', 'none'] = 'mean', nan_to_zero: bool = False) Union[float, Tensor][source]#

Compute the Normalized Mean Absolute Error (NMAE) between the estimate \(\hat{y}\) and the true value \(y\). The NMAE is the Mean Absolute Error (MAE) scaled by the max-min range of the target data, i.e.

\[\text{NMAE} = \frac{\frac{1}{n} \sum_{i=1}^n |\hat{y}_i - y_i|}{\max(y) - \min(y)}\]
Parameters:
  • y_hat (torch.Tensor) – The estimated variable.

  • y (torch.Tensor) – The ground-truth variable.

  • mask (torch.Tensor, optional) – If provided, compute the metric using only the values at valid indices (with mask set to True). If mask is not None and reduction is 'none', masked indices are set to nan (see nan_to_zero). (default: None)

  • reduction (str) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. (default: 'mean')

  • nan_to_zero (bool) – If True, then masked values in output are converted to 0. This has an effect only when mask is not None and reduction is 'none'. (default: False)

Returns:

The Normalized Mean Absolute Error

Return type:

float | torch.Tensor

mape(y_hat: Tensor, y: Tensor, mask: Optional[Tensor] = None, reduction: Literal['mean', 'sum', 'none'] = 'mean', nan_to_zero: bool = False) Union[float, Tensor][source]#

Compute the Mean Absolute Percentage Error (MAPE) between the estimate \(\hat{y}\) and the true value \(y\), i.e.

\[\text{MAPE} = \frac{1}{n} \sum_{i=1}^n \frac{|\hat{y}_i - y_i|} {y_i}\]
Parameters:
  • y_hat (torch.Tensor) – The estimated variable.

  • y (torch.Tensor) – The ground-truth variable.

  • mask (torch.Tensor, optional) – If provided, compute the metric using only the values at valid indices (with mask set to True). If mask is not None and reduction is 'none', masked indices are set to nan (see nan_to_zero). (default: None)

  • reduction (str) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. (default: 'mean')

  • nan_to_zero (bool) – If True, then masked values in output are converted to 0. This has an effect only when mask is not None and reduction is 'none'. (default: False)

Returns:

The Mean Absolute Percentage Error.

Return type:

float | torch.Tensor
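Because the MAPE formula divides by the target, zero targets produce infinite values; this is why the masked metric classes expose mask_inf and mask_nans. A small sketch (not the tsl source) of the problem and the fix:

```python
import torch

# MAPE divides by the target, so a zero target yields inf.
y_hat = torch.tensor([9.0, 11.0, 3.0])
y = torch.tensor([10.0, 10.0, 0.0])
ape = torch.abs(y_hat - y) / y
print(ape)  # the last entry is inf

# Dropping non-finite entries, as mask_inf would, recovers a usable mean.
valid = torch.isfinite(ape)
print(ape[valid].mean().item())  # 0.1
```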

mse(y_hat: Tensor, y: Tensor, mask: Optional[Tensor] = None, reduction: Literal['mean', 'sum', 'none'] = 'mean', nan_to_zero: bool = False) Union[float, Tensor][source]#

Compute the Mean Squared Error (MSE) between the estimate \(\hat{y}\) and the true value \(y\), i.e.

\[\text{MSE} = \frac{\sum_{i=1}^n (\hat{y}_i - y_i)^2}{n}\]
Parameters:
  • y_hat (torch.Tensor) – The estimated variable.

  • y (torch.Tensor) – The ground-truth variable.

  • mask (torch.Tensor, optional) – If provided, compute the metric using only the values at valid indices (with mask set to True). If mask is not None and reduction is 'none', masked indices are set to nan (see nan_to_zero). (default: None)

  • reduction (str) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. (default: 'mean')

  • nan_to_zero (bool) – If True, then masked values in output are converted to 0. This has an effect only when mask is not None and reduction is 'none'. (default: False)

Returns:

The Mean Squared Error.

Return type:

float | torch.Tensor

rmse(y_hat: Tensor, y: Tensor, mask: Optional[Tensor] = None, reduction: Literal['mean', 'sum', 'none'] = 'mean') Union[float, Tensor][source]#

Compute the Root Mean Squared Error (RMSE) between the estimate \(\hat{y}\) and the true value \(y\), i.e.

\[\text{RMSE} = \sqrt{\frac{\sum_{i=1}^n (\hat{y}_i - y_i)^2}{n}}\]
Parameters:
  • y_hat (torch.Tensor) – The estimated variable.

  • y (torch.Tensor) – The ground-truth variable.

  • mask (torch.Tensor, optional) – If provided, compute the metric using only the values at valid indices (with mask set to True). (default: None)

  • reduction (str) – Specifies the reduction to apply to the output: 'mean' | 'sum'. 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. (default: 'mean')

Returns:

The Root Mean Squared Error.

Return type:

float

nrmse(y_hat: Tensor, y: Tensor, mask: Optional[Tensor] = None, reduction: Literal['mean', 'sum', 'none'] = 'mean') Union[float, Tensor][source]#

Compute the Normalized Root Mean Squared Error (NRMSE) between the estimate \(\hat{y}\) and the true value \(y\), where normalization is by the max-min range of the data, i.e.

\[\text{NRMSE} = \frac{\sqrt{\frac{\sum_{i=1}^n (\hat{y}_i - y_i)^2}{n}}}{\max(y) - \min(y)}\]
Parameters:
  • y_hat (torch.Tensor) – The estimated variable.

  • y (torch.Tensor) – The ground-truth variable.

  • mask (torch.Tensor, optional) – If provided, compute the metric using only the values at valid indices (with mask set to True).

  • reduction (str) – Specifies the reduction to apply to the output: 'mean' | 'sum'. 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. (default: 'mean')

Returns:

The range-normalized NRMSE.

Return type:

float
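Range normalization, as in the NRMSE formula above, simply divides the RMSE by \(\max(y) - \min(y)\). A sketch under that definition (not the tsl source):

```python
import torch

# RMSE, then divide by the max-min range of the targets.
y_hat = torch.tensor([1.0, 3.0, 5.0])
y = torch.tensor([0.0, 4.0, 6.0])
rmse = torch.sqrt(torch.mean((y_hat - y) ** 2))
nrmse = rmse / (y.max() - y.min())
print(rmse.item(), nrmse.item())  # 1.0 and 1/6
```

Range normalization makes the error comparable across targets with different scales.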

nrmse_2(y_hat: Tensor, y: Tensor, mask: Optional[Tensor] = None, reduction: Literal['mean', 'sum', 'none'] = 'mean') Union[float, Tensor][source]#

Compute the Normalized Root Mean Squared Error (NRMSE) between the estimate \(\hat{y}\) and the true value \(y\), where normalization is by the power of the true signal \(y\), i.e.

\[\text{NRMSE}_2 = \frac{\sqrt{\frac{\sum_{i=1}^n (\hat{y}_i - y_i)^2}{n}} }{\sum_{i=1}^n y_i^2}\]
Parameters:
  • y_hat (torch.Tensor) – The estimated variable.

  • y (torch.Tensor) – The ground-truth variable.

  • mask (torch.Tensor, optional) – If provided, compute the metric using only the values at valid indices (with mask set to True).

  • reduction (str) – Specifies the reduction to apply to the output: 'mean' | 'sum'. 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. (default: 'mean')

Returns:

The power-normalized NRMSE.

Return type:

float
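Following the \(\text{NRMSE}_2\) formula above verbatim, the RMSE is divided by the sum of squared targets. A sketch under that displayed definition; whether the implementation applies a mean or square root to the denominator should be checked against the source.

```python
import torch

# RMSE divided by the sum of squared targets, per the displayed formula.
y_hat = torch.tensor([1.0, 1.0])
y = torch.tensor([2.0, 2.0])
rmse = torch.sqrt(torch.mean((y_hat - y) ** 2))
nrmse_2 = rmse / (y ** 2).sum()
print(nrmse_2.item())  # 1.0 / 8.0 = 0.125
```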

r2(y_hat: Tensor, y: Tensor, mask: Optional[Tensor] = None, reduction: Literal['mean', 'sum', 'none'] = 'mean', nan_to_zero: bool = False, mean_axis: Optional[Union[int, Tuple]] = None) Union[float, Tensor][source]#

Compute the coefficient of determination \(R^2\) between the estimate \(\hat{y}\) and the true value \(y\), i.e.

\[R^{2} = 1 - \frac{\sum_{i} (\hat{y}_i - y_i)^2} {\sum_{i} (\bar{y} - y_i)^2}\]

where \(\bar{y}=\frac{1}{n}\sum_{i=1}^n y_i\) is the mean of \(y\).

Parameters:
  • y_hat (torch.Tensor) – The estimated variable.

  • y (torch.Tensor) – The ground-truth variable.

  • mask (torch.Tensor, optional) – If provided, compute the metric using only the values at valid indices (with mask set to True).

  • reduction (str) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. (default: 'mean')

  • nan_to_zero (bool) – If True, then masked values in output are converted to 0. This has an effect only when mask is not None and reduction is 'none'. (default: False)

  • mean_axis (int, Tuple, optional) – The axis (or axes) along which the mean of \(y\) is computed, needed for the variance of \(y\) in the denominator of the \(R^2\) formula. (default: None)

Returns:

The \(R^2\).

Return type:

float | torch.Tensor
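The \(R^2\) formula above is one minus the ratio of the residual sum of squares to the total sum of squares around the target mean. A sketch of the computation (not the tsl source), with the mean taken over the whole tensor:

```python
import torch

# R^2 = 1 - SSE / SST, with SST computed around the target mean.
y_hat = torch.tensor([2.0, 3.0, 4.0])
y = torch.tensor([1.0, 3.0, 5.0])
sse = ((y_hat - y) ** 2).sum()       # residual sum of squares
sst = ((y - y.mean()) ** 2).sum()    # total sum of squares
r2 = 1 - sse / sst
print(r2.item())  # 1 - 2/8 = 0.75
```

With mean_axis, the mean (and hence SST) would instead be computed per-axis, e.g. per node or per channel.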

mre(y_hat: Tensor, y: Tensor, mask: Optional[Tensor] = None) float[source]#
multi_quantile_pinball_loss(y_hat, y, q)[source]#
class MaskedMetricWrapper(metric: MaskedMetric, input_preprocessing=None, target_preprocessing=None, mask_preprocessing=None)[source]#
forward(*args: Any, **kwargs: Any) Any[source]#

forward serves the dual purpose of computing the metric on the current batch of inputs and adding the batch statistics to the overall accumulating metric state.

Input arguments are the same as those of the corresponding update method. The returned output is the same as the output of compute.

update(y_hat, y, mask=None)[source]#

Override this method to update the state variables of your metric class.

compute()[source]#

Override this method to compute the final metric value from state variables synchronized across the distributed backend.

reset()[source]#

This method automatically resets the metric state variables to their default value.

training: bool#
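The preprocessing hooks of MaskedMetricWrapper apply callables to the inputs before they reach the wrapped metric. The following is a hedged sketch of that idea using a hypothetical stand-in function, not the tsl class itself:

```python
import torch

# Hypothetical stand-in illustrating input/target preprocessing hooks:
# each callable transforms its argument before the metric is computed.
def wrapped_mae(y_hat, y, mask=None,
                input_preprocessing=None, target_preprocessing=None):
    if input_preprocessing is not None:
        y_hat = input_preprocessing(y_hat)
    if target_preprocessing is not None:
        y = target_preprocessing(y)
    err = torch.abs(y_hat - y)
    return err[mask].mean() if mask is not None else err.mean()

# e.g. evaluate in the original scale by inverting a log1p transform
y_hat_log = torch.log1p(torch.tensor([1.0, 3.0]))
y = torch.tensor([1.0, 4.0])
out = wrapped_mae(y_hat_log, y, input_preprocessing=torch.expm1)
print(out.item())  # MAE in the original scale: 0.5
```

This pattern is useful when the model predicts in a transformed space but the error should be reported in the original units.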
class SelectMetricWrapper(metric, dim, input_idx=None, target_idx=None, mask_idx=None)[source]#
training: bool#
convert_to_masked_metric(metric_fn, **kwargs)[source]#

Simple utility function to transform a callable into a MaskedMetric.

Parameters:
  • metric_fn – Callable to be wrapped.

  • **kwargs – Keyword arguments that will be passed to the callable.

Returns:

The MaskedMetric wrapping metric_fn.