# PyTorch metrics#

Base class to implement the metrics used in tsl.

In particular, a MaskedMetric accounts for missing values in the input sequences by accepting a boolean mask as an additional input.

Parameters:
• metric_fn – Base function to compute the metric point-wise.

• at (int, optional) – If provided, compute the metric only at the given time step.

is_differentiable: bool = None#
higher_is_better: bool = None#
full_state_update: bool = None#

Override this method to update the state variables of your metric class.

compute()[source]#

Override this method to compute the final metric value from state variables synchronized across the distributed backend.
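The masking behaviour described above can be sketched in plain torch: the point-wise metric is computed everywhere, then reduced only over the entries the boolean mask marks as valid. This is a minimal hypothetical helper for illustration, not the tsl MaskedMetric class.

```python
import torch

def masked_metric(metric_fn, y_hat, y, mask=None):
    # Apply the point-wise metric, then average only over the
    # entries marked valid by the boolean mask.
    err = metric_fn(y_hat, y)
    if mask is not None:
        err = err[mask]
    return err.mean()

y_hat = torch.tensor([1.0, 2.0, 4.0])
y     = torch.tensor([1.5, 0.0, 3.0])
mask  = torch.tensor([True, False, True])  # second entry is missing

# Valid absolute errors are 0.5 and 1.0, so the masked mean is 0.75.
masked_metric(lambda a, b: torch.abs(a - b), y_hat, y, mask)
```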

Mean Absolute Error Metric.

Parameters:

• at (int, optional) – If provided, compute the metric only at the given time step.

is_differentiable: bool = True#
higher_is_better: bool = False#
full_state_update: bool = False#

Mean Squared Error Metric.

Parameters:

• at (int, optional) – If provided, compute the metric only at the given time step.

is_differentiable: bool = True#
higher_is_better: bool = False#
full_state_update: bool = False#

Mean Relative Error Metric.

Parameters:

• at (int, optional) – If provided, compute the metric only at the given time step.

is_differentiable: bool = True#
higher_is_better: bool = False#
full_state_update: bool = False#
compute()[source]#

Override this method to compute the final metric value from state variables synchronized across the distributed backend.

Override this method to update the state variables of your metric class.

Mean Absolute Percentage Error Metric.

Parameters:

• at (int, optional) – If provided, compute the metric only at the given time step.

is_differentiable: bool = True#
higher_is_better: bool = False#
full_state_update: bool = False#

Quantile loss.

Parameters:
• q (float) – Target quantile.

• compute_on_step (bool, optional) – Whether to compute the metric right away or to accumulate the results. This should be True when using the metric to compute a loss function, and False when the metric is used to log the aggregate error across different mini-batches.

• at (int, optional) – If provided, compute the metric only at the given time step.

is_differentiable: bool = True#
higher_is_better: bool = False#
full_state_update: bool = False#
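A minimal plain-torch sketch of the pinball (quantile) loss for a single target quantile q; masking and the at argument of the class above are omitted, and this is not the tsl implementation.

```python
import torch

def pinball_loss(y_hat, y, q):
    # Asymmetric quantile loss: under-predictions are weighted by q,
    # over-predictions by (1 - q).
    diff = y - y_hat
    return torch.maximum(q * diff, (q - 1) * diff).mean()

y     = torch.tensor([10.0, 10.0])
y_hat = torch.tensor([8.0, 12.0])  # one under-, one over-prediction

# q = 0.9 penalizes the under-prediction more heavily:
# mean(max(0.9*2, -0.1*2), max(-0.9*2, 0.1*2)) = mean(1.8, 0.2) = 1.0
pinball_loss(y_hat, y, 0.9)
```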
mae(y_hat: Tensor, y: Tensor, mask: Optional[Tensor] = None, reduction: Literal['mean', 'sum', 'none'] = 'mean', nan_to_zero: bool = False)[source]#

Compute the Mean Absolute Error (MAE) between the estimate $$\hat{y}$$ and the true value $$y$$, i.e.

$\text{MAE} = \frac{\sum_{i=1}^n |\hat{y}_i - y_i|}{n}$
Parameters:
• y_hat (torch.Tensor) – The estimated variable.

• y (torch.Tensor) – The ground-truth variable.

• mask (torch.Tensor, optional) – If provided, compute the metric using only the values at valid indices (with mask set to True). If mask is not None and reduction is 'none', masked indices are set to nan (see nan_to_zero). (default: None)

• reduction (str) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. (default: 'mean')

• nan_to_zero (bool) – If True, then masked values in output are converted to 0. This has an effect only when mask is not None and reduction is 'none'. (default: False)

Returns:

The Mean Absolute Error.

Return type:
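The reduction and nan_to_zero semantics above can be sketched in plain torch (an illustration of the documented behaviour, not the tsl implementation):

```python
import torch

def mae(y_hat, y, mask=None, reduction='mean', nan_to_zero=False):
    err = torch.abs(y_hat - y)
    if mask is not None and reduction == 'none':
        # Masked-out entries become nan, or 0 when nan_to_zero=True.
        fill = 0.0 if nan_to_zero else float('nan')
        err = torch.where(mask, err, torch.full_like(err, fill))
    elif mask is not None:
        err = err[mask]
    if reduction == 'mean':
        return err.mean()
    if reduction == 'sum':
        return err.sum()
    return err

y_hat = torch.tensor([1.0, 2.0, 3.0])
y     = torch.tensor([0.0, 2.0, 5.0])
mask  = torch.tensor([True, False, True])

mae(y_hat, y, mask)                                      # mean over valid: 1.5
mae(y_hat, y, mask, reduction='none', nan_to_zero=True)  # [1., 0., 2.]
```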
nmae(y_hat: Tensor, y: Tensor, mask: Optional[Tensor] = None, reduction: Literal['mean', 'sum', 'none'] = 'mean', nan_to_zero: bool = False)[source]#

Compute the Normalized Mean Absolute Error (NMAE) between the estimate $$\hat{y}$$ and the true value $$y$$. The NMAE is the Mean Absolute Error (MAE) scaled by the max-min range of the target data, i.e.

$\text{NMAE} = \frac{\frac{1}{n} \sum_{i=1}^n |\hat{y}_i - y_i|} {\max(y) - \min(y)}$
Parameters:
• y_hat (torch.Tensor) – The estimated variable.

• y (torch.Tensor) – The ground-truth variable.

• mask (torch.Tensor, optional) – If provided, compute the metric using only the values at valid indices (with mask set to True). If mask is not None and reduction is 'none', masked indices are set to nan (see nan_to_zero). (default: None)

• reduction (str) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. (default: 'mean')

• nan_to_zero (bool) – If True, then masked values in output are converted to 0. This has an effect only when mask is not None and reduction is 'none'. (default: False)

Returns:

The Normalized Mean Absolute Error.

Return type:
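As a plain-torch sketch of the formula above (not the tsl implementation; in particular, whether the max-min range is taken over the masked or the full target is an assumption here):

```python
import torch

def nmae(y_hat, y, mask=None):
    # MAE scaled by the max-min range of the target.
    err = torch.abs(y_hat - y)
    if mask is not None:
        err, y = err[mask], y[mask]
    return err.mean() / (y.max() - y.min())

y_hat = torch.tensor([1.0, 2.0, 3.0])
y     = torch.tensor([0.0, 2.0, 4.0])

nmae(y_hat, y)  # MAE = 2/3, range = 4, so NMAE = 1/6
```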
mape(y_hat: Tensor, y: Tensor, mask: Optional[Tensor] = None, reduction: Literal['mean', 'sum', 'none'] = 'mean', nan_to_zero: bool = False)[source]#

Compute the Mean Absolute Percentage Error (MAPE) between the estimate $$\hat{y}$$ and the true value $$y$$, i.e.

$\text{MAPE} = \frac{1}{n} \sum_{i=1}^n \frac{|\hat{y}_i - y_i|} {y_i}$
Parameters:
• y_hat (torch.Tensor) – The estimated variable.

• y (torch.Tensor) – The ground-truth variable.

• mask (torch.Tensor, optional) – If provided, compute the metric using only the values at valid indices (with mask set to True). If mask is not None and reduction is 'none', masked indices are set to nan (see nan_to_zero). (default: None)

• reduction (str) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. (default: 'mean')

• nan_to_zero (bool) – If True, then masked values in output are converted to 0. This has an effect only when mask is not None and reduction is 'none'. (default: False)

Returns:

The Mean Absolute Percentage Error.

Return type:
mse(y_hat: Tensor, y: Tensor, mask: Optional[Tensor] = None, reduction: Literal['mean', 'sum', 'none'] = 'mean', nan_to_zero: bool = False)[source]#

Compute the Mean Squared Error (MSE) between the estimate $$\hat{y}$$ and the true value $$y$$, i.e.

$\text{MSE} = \frac{\sum_{i=1}^n (\hat{y}_i - y_i)^2}{n}$
Parameters:
• y_hat (torch.Tensor) – The estimated variable.

• y (torch.Tensor) – The ground-truth variable.

• mask (torch.Tensor, optional) – If provided, compute the metric using only the values at valid indices (with mask set to True). If mask is not None and reduction is 'none', masked indices are set to nan (see nan_to_zero). (default: None)

• reduction (str) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. (default: 'mean')

• nan_to_zero (bool) – If True, then masked values in output are converted to 0. This has an effect only when mask is not None and reduction is 'none'. (default: False)

Returns:

The Mean Squared Error.

Return type:
rmse(y_hat: Tensor, y: Tensor, mask: Optional[Tensor] = None, reduction: Literal['mean', 'sum', 'none'] = 'mean')[source]#

Compute the Root Mean Squared Error (RMSE) between the estimate $$\hat{y}$$ and the true value $$y$$, i.e.

$\text{RMSE} = \sqrt{\frac{\sum_{i=1}^n (\hat{y}_i - y_i)^2}{n}}$
Parameters:
• y_hat (torch.Tensor) – The estimated variable.

• y (torch.Tensor) – The ground-truth variable.

• mask (torch.Tensor, optional) – If provided, compute the metric using only the values at valid indices (with mask set to True). (default: None)

• reduction (str) – Specifies the reduction to apply to the output: 'mean' | 'sum'. 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. (default: 'mean')

Returns:

The Root Mean Squared Error.

Return type:

float
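The relation between the two formulas above can be sketched in plain torch: RMSE is simply the square root of the (masked) MSE. An illustration, not the tsl implementation.

```python
import torch

def mse(y_hat, y, mask=None):
    # Mean of squared errors over the valid entries.
    sq = (y_hat - y) ** 2
    if mask is not None:
        sq = sq[mask]
    return sq.mean()

def rmse(y_hat, y, mask=None):
    # Square root of the masked MSE.
    return torch.sqrt(mse(y_hat, y, mask))

y_hat = torch.tensor([3.0, 1.0])
y     = torch.tensor([0.0, 1.0])

mse(y_hat, y)   # (9 + 0) / 2 = 4.5
rmse(y_hat, y)  # sqrt(4.5)
```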

nrmse(y_hat: Tensor, y: Tensor, mask: Optional[Tensor] = None, reduction: Literal['mean', 'sum', 'none'] = 'mean')[source]#

Compute the Normalized Root Mean Squared Error (NRMSE) between the estimate $$\hat{y}$$ and the true value $$y$$, where normalization is by the max-min range of the data, i.e.

$\text{NRMSE} = \frac{\sqrt{\frac{\sum_{i=1}^n (\hat{y}_i - y_i)^2}{n}}} {\max(y) - \min(y)}$
Parameters:
• y_hat (torch.Tensor) – The estimated variable.

• y (torch.Tensor) – The ground-truth variable.

• mask (torch.Tensor, optional) – If provided, compute the metric using only the values at valid indices (with mask set to True). (default: None)

• reduction (str) – Specifies the reduction to apply to the output: 'mean' | 'sum'. 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. (default: 'mean')

Returns:

The range-normalized NRMSE.

Return type:

float

nrmse_2(y_hat: Tensor, y: Tensor, mask: Optional[Tensor] = None, reduction: Literal['mean', 'sum', 'none'] = 'mean')[source]#

Compute the Normalized Root Mean Squared Error (NRMSE) between the estimate $$\hat{y}$$ and the true value $$y$$, where normalization is by the power of the true signal $$y$$, i.e.

$\text{NRMSE}_2 = \frac{\sqrt{\frac{\sum_{i=1}^n (\hat{y}_i - y_i)^2} {n}}}{\sum_{i=1}^n y_i^2}$
Parameters:
• y_hat (torch.Tensor) – The estimated variable.

• y (torch.Tensor) – The ground-truth variable.

• mask (torch.Tensor, optional) – If provided, compute the metric using only the values at valid indices (with mask set to True).

• reduction (str) – Specifies the reduction to apply to the output: 'mean' | 'sum'. 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. (default: 'mean')

Returns:

The power-normalized NRMSE.

Return type:

float

r2(y_hat: Tensor, y: Tensor, mask: Optional[Tensor] = None, reduction: Literal['mean', 'sum', 'none'] = 'mean', nan_to_zero: bool = False, mean_axis: Optional[Union[int, Tuple]] = None)[source]#

Compute the coefficient of determination $$R^2$$ between the estimate $$\hat{y}$$ and the true value $$y$$, i.e.

$R^{2} = 1 - \frac{\sum_{i} (\hat{y}_i - y_i)^2} {\sum_{i} (\bar{y} - y_i)^2}$

where $$\bar{y}=\frac{1}{n}\sum_{i=1}^n y_i$$ is the mean of $$y$$.

Parameters:
• y_hat (torch.Tensor) – The estimated variable.

• y (torch.Tensor) – The ground-truth variable.

• mask (torch.Tensor, optional) – If provided, compute the metric using only the values at valid indices (with mask set to True).

• reduction (str) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. (default: 'mean')

• nan_to_zero (bool) – If True, then masked values in output are converted to 0. This has an effect only when mask is not None and reduction is 'none'. (default: False)

• mean_axis (int, Tuple, optional) – The axis along which the mean of y is computed, used to obtain the variance of y needed in the denominator of the $$R^2$$ formula.

Returns:

The $$R^2$$.

Return type:
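The formula above can be sketched in plain torch; masking is omitted for brevity, and mean_axis only controls the axis along which the mean of y is taken (None reduces over all elements). An illustration, not the tsl implementation.

```python
import torch

def r2(y_hat, y, mean_axis=None):
    # 1 minus the residual sum of squares over the total sum of
    # squares around the mean of y.
    if mean_axis is None:
        y_bar = y.mean()
    else:
        y_bar = y.mean(dim=mean_axis, keepdim=True)
    ss_res = ((y_hat - y) ** 2).sum()
    ss_tot = ((y_bar - y) ** 2).sum()
    return 1 - ss_res / ss_tot

y = torch.tensor([1.0, 2.0, 3.0])

r2(y, y)                              # perfect fit: 1.0
r2(torch.tensor([2.0, 2.0, 2.0]), y)  # constant (mean) predictor: 0.0
```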
mre(y_hat: Tensor, y: Tensor, mask: Optional[Tensor] = None)[source]#

Compute the MAE normalized by the L1-norm of the true signal $$y$$, i.e.

$\text{MRE} = \frac{\sum_{i=1}^n |\hat{y}_i - y_i|}{\sum_{i=1}^n |y_i|}$
Parameters:
• y_hat (torch.Tensor) – The estimated variable.

• y (torch.Tensor) – The ground-truth variable.

• mask (torch.Tensor, optional) – If provided, compute the metric using only the values at valid indices (with mask set to True). (default: None)

Returns:

The computed MRE value.

Return type:

float
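A plain-torch sketch of the MRE formula above (an illustration, not the tsl implementation; in particular, applying the mask to y before taking its L1-norm is an assumption here):

```python
import torch

def mre(y_hat, y, mask=None):
    # Sum of absolute errors normalized by the L1-norm of the target.
    err = torch.abs(y_hat - y)
    if mask is not None:
        err, y = err[mask], y[mask]
    return err.sum() / torch.abs(y).sum()

y_hat = torch.tensor([1.0, 3.0])
y     = torch.tensor([2.0, 2.0])

mre(y_hat, y)  # (1 + 1) / (2 + 2) = 0.5
```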

multi_quantile_pinball_loss(y_hat, y, q)[source]#
forward(*args: Any, **kwargs: Any) Any[source]#

forward serves the dual purpose of computing the metric on the current batch of inputs and adding the batch statistics to the overall accumulating metric state.

Input arguments are the same as those of the corresponding update method. The returned output is the same as the output of compute.

Override this method to update the state variables of your metric class.

compute()[source]#

Override this method to compute the final metric value from state variables synchronized across the distributed backend.

reset()[source]#

This method automatically resets the metric state variables to their default value.
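The update/compute/reset lifecycle described above can be sketched with a plain-torch running MAE (a minimal illustration of the pattern, not the tsl or torchmetrics implementation):

```python
import torch

class RunningMAE:
    """Accumulate masked absolute errors across batches."""

    def __init__(self):
        self.reset()

    def update(self, y_hat, y, mask=None):
        # Add this batch's statistics to the accumulating state.
        err = torch.abs(y_hat - y)
        if mask is not None:
            err = err[mask]
        self.total += err.sum()
        self.count += err.numel()

    def compute(self):
        # Reduce the accumulated state to the final metric value.
        return self.total / self.count

    def reset(self):
        # Restore the state variables to their default value.
        self.total = torch.tensor(0.0)
        self.count = 0

m = RunningMAE()
m.update(torch.tensor([1.0, 2.0]), torch.tensor([0.0, 2.0]))  # errors 1, 0
m.update(torch.tensor([5.0]), torch.tensor([2.0]))            # error 3
m.compute()  # (1 + 0 + 3) / 3
```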

training: bool#
class SelectMetricWrapper(metric, dim, input_idx=None, target_idx=None, mask_idx=None)[source]#
training: bool#