Loss Functions API
Torchium provides 70+ specialized loss functions for various machine learning tasks, organized by domain and use case. This comprehensive collection extends PyTorch’s native loss functions with state-of-the-art implementations from recent research.
Classification Losses
Cross-Entropy Variants
- class torchium.losses.CrossEntropyLoss(weight: Tensor | None = None, size_average: bool | None = None, ignore_index: int = -100, reduce: bool | None = None, reduction: str = 'mean', label_smoothing: float = 0.0, **kwargs)[source]
Bases: CrossEntropyLoss
Enhanced CrossEntropyLoss with additional features.
- class torchium.losses.FocalLoss(alpha: float | Tensor = 1.0, gamma: float = 2.0, reduction: str = 'mean', ignore_index: int = -100, **kwargs)[source]
Bases: Module
Focal Loss for addressing class imbalance.
Reference: https://arxiv.org/abs/1708.02002
- class torchium.losses.LabelSmoothingLoss(smoothing: float = 0.1, num_classes: int | None = None, reduction: str = 'mean', ignore_index: int = -100, **kwargs)[source]
Bases: Module
Label Smoothing Loss for regularization.
Reference: https://arxiv.org/abs/1512.00567
- class torchium.losses.ClassBalancedLoss(samples_per_class: Tensor, beta: float = 0.9999, gamma: float = 2.0, loss_type: str = 'focal', reduction: str = 'mean', **kwargs)[source]
Bases: Module
Class-Balanced Loss for long-tailed recognition.
Reference: https://arxiv.org/abs/1901.05555
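Example (a usage sketch, not taken from the Torchium source; it assumes FocalLoss follows the standard (logits, class-index targets) call convention of cross-entropy-style losses):
>>> import torch
>>> import torchium
>>> criterion = torchium.losses.FocalLoss(alpha=0.25, gamma=2.0)
>>> logits = torch.randn(16, 10, requires_grad=True)  # (batch, num_classes)
>>> targets = torch.randint(0, 10, (16,))  # integer class indices
>>> loss = criterion(logits, targets)  # call convention assumed, not documented above
>>> loss.backward()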
Margin-Based Losses
- class torchium.losses.TripletLoss(margin: float = 1.0, p: float = 2.0, eps: float = 1e-06, swap: bool = False, size_average=None, reduce=None, reduction: str = 'mean')[source]
Bases: TripletMarginLoss
Enhanced TripletLoss.
- class torchium.losses.ContrastiveLoss(margin=1.0, reduction='mean')[source]
Bases: Module
Contrastive Loss for metric learning.
- __init__(margin=1.0, reduction='mean')[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(input1, input2, target)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
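Example (a usage sketch based on the forward(input1, input2, target) signature above; the target labeling convention for similar vs. dissimilar pairs is an assumption and should be checked against the implementation):
>>> import torch
>>> import torchium
>>> criterion = torchium.losses.ContrastiveLoss(margin=1.0)
>>> emb1 = torch.randn(32, 128, requires_grad=True)
>>> emb2 = torch.randn(32, 128, requires_grad=True)
>>> pair_labels = torch.randint(0, 2, (32,)).float()  # 1 = similar, 0 = dissimilar (assumed convention)
>>> loss = criterion(emb1, emb2, pair_labels)
>>> loss.backward()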
Ranking Losses
Regression Losses
MSE Variants
- class torchium.losses.MSELoss(size_average=None, reduce=None, reduction: str = 'mean')[source]
Bases: MSELoss
Enhanced MSELoss with additional features.
- class torchium.losses.MAELoss(size_average=None, reduce=None, reduction: str = 'mean')[source]
Bases: L1Loss
Mean Absolute Error Loss.
- class torchium.losses.HuberLoss(reduction: str = 'mean', delta: float = 1.0)[source]
Bases: HuberLoss
Enhanced HuberLoss with additional features.
- class torchium.losses.SmoothL1Loss(size_average=None, reduce=None, reduction: str = 'mean', beta: float = 1.0)[source]
Bases: SmoothL1Loss
Enhanced SmoothL1Loss with additional features.
- class torchium.losses.QuantileLoss(quantile=0.5, reduction='mean')[source]
Bases: Module
Quantile Loss for quantile regression.
- __init__(quantile=0.5, reduction='mean')[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(input, target)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.LogCoshLoss(reduction='mean')[source]
Bases: Module
Log-Cosh Loss for robust regression.
- __init__(reduction='mean')[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(input, target)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
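Example (a quantile-regression sketch using the forward(input, target) signature above; quantile=0.9 penalizes under-prediction more heavily than over-prediction):
>>> import torch
>>> import torchium
>>> criterion = torchium.losses.QuantileLoss(quantile=0.9)
>>> preds = torch.randn(64, 1, requires_grad=True)
>>> targets = torch.randn(64, 1)
>>> loss = criterion(preds, targets)
>>> loss.backward()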
Robust Regression
Computer Vision Losses
Object Detection
- class torchium.losses.FocalDetectionLoss(alpha: float = 0.25, gamma: float = 2.0, reduction: str = 'mean')[source]
Bases: Module
Focal Loss for object detection to address class imbalance.
- __init__(alpha: float = 0.25, gamma: float = 2.0, reduction: str = 'mean')[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(predictions: Tensor, targets: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.GIoULoss(reduction: str = 'mean')[source]
Bases: Module
Generalized IoU Loss for bounding box regression.
- __init__(reduction: str = 'mean')[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(pred_boxes: Tensor, target_boxes: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.DIoULoss(reduction: str = 'mean')[source]
Bases: Module
Distance IoU Loss.
- __init__(reduction: str = 'mean')[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(pred_boxes: Tensor, target_boxes: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.CIoULoss(reduction: str = 'mean')[source]
Bases: Module
Complete IoU Loss.
- __init__(reduction: str = 'mean')[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(pred_boxes: Tensor, target_boxes: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.EIoULoss(reduction: str = 'mean')[source]
Bases: Module
Efficient IoU Loss.
- __init__(reduction: str = 'mean')[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(pred_boxes: Tensor, target_boxes: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.AlphaIoULoss(alpha: float = 2.0, reduction: str = 'mean')[source]
Bases: Module
Alpha IoU Loss with adaptive weighting.
- __init__(alpha: float = 2.0, reduction: str = 'mean')[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(pred_boxes: Tensor, target_boxes: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
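Example (a sketch for the IoU-family losses using the forward(pred_boxes, target_boxes) signature above; the (x1, y1, x2, y2) corner layout is an assumption and should be verified against the implementation):
>>> import torch
>>> import torchium
>>> xy = torch.rand(8, 2) * 100
>>> wh = torch.rand(8, 2) * 50 + 5
>>> target_boxes = torch.cat([xy, xy + wh], dim=1)  # well-formed boxes, assumed (x1, y1, x2, y2)
>>> pred_boxes = (target_boxes + torch.randn(8, 4)).requires_grad_()
>>> criterion = torchium.losses.GIoULoss()
>>> loss = criterion(pred_boxes, target_boxes)
>>> loss.backward()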
Segmentation Losses
- class torchium.losses.DiceLoss(smooth: float = 1e-05, reduction: str = 'mean')[source]
Bases: Module
Dice loss for medical image segmentation.
- __init__(smooth: float = 1e-05, reduction: str = 'mean')[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(pred: Tensor, target: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.IoULoss(smooth: float = 1e-05, reduction: str = 'mean', ignore_index: int = -100, **kwargs)[source]
Bases: Module
Intersection over Union (IoU) Loss for semantic segmentation.
- class torchium.losses.TverskyLoss(alpha: float = 0.5, beta: float = 0.5, smooth: float = 1e-05, reduction: str = 'mean', **kwargs)[source]
Bases: Module
Tversky Loss for semantic segmentation.
Reference: https://arxiv.org/abs/1706.05721
- class torchium.losses.FocalTverskyLoss(alpha: float = 0.5, beta: float = 0.5, gamma: float = 1.33, smooth: float = 1e-05, reduction: str = 'mean', **kwargs)[source]
Bases: Module
Focal Tversky Loss for semantic segmentation.
Reference: https://arxiv.org/abs/1810.07842
- class torchium.losses.LovaszLoss(per_image: bool = False, ignore_index: int = -100, reduction: str = 'mean', **kwargs)[source]
Bases: Module
Lovász-Softmax Loss for semantic segmentation.
Reference: https://arxiv.org/abs/1705.08790
- __init__(per_image: bool = False, ignore_index: int = -100, reduction: str = 'mean', **kwargs)[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- lovasz_grad(gt_sorted: Tensor) Tensor[source]
Compute gradient of the Lovász extension w.r.t sorted errors.
- class torchium.losses.BoundaryLoss(reduction: str = 'mean', **kwargs)[source]
Bases: Module
Boundary Loss for semantic segmentation.
Reference: https://arxiv.org/abs/1812.07032
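Example (a binary-segmentation sketch using the forward(pred, target) signature above; passing probabilities rather than raw logits is an assumption, and multi-class usage may differ):
>>> import torch
>>> import torchium
>>> criterion = torchium.losses.DiceLoss(smooth=1e-5)
>>> logits = torch.randn(4, 1, 64, 64, requires_grad=True)
>>> probs = torch.sigmoid(logits)  # Dice is typically computed on probabilities (assumed here)
>>> mask = (torch.rand(4, 1, 64, 64) > 0.5).float()
>>> loss = criterion(probs, mask)
>>> loss.backward()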
Super Resolution
- class torchium.losses.PerceptualLoss(feature_layers: List[int] | None = None, use_gpu: bool = True)[source]
Bases: Module
Perceptual loss using VGG features for image quality.
- __init__(feature_layers: List[int] | None = None, use_gpu: bool = True)[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(pred: Tensor, target: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.SSIMLoss(window_size: int = 11, sigma: float = 1.5, k1: float = 0.01, k2: float = 0.03, L: float = 1.0)[source]
Bases: Module
Structural Similarity Index loss for image quality.
- __init__(window_size: int = 11, sigma: float = 1.5, k1: float = 0.01, k2: float = 0.03, L: float = 1.0)[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(pred: Tensor, target: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.MSSSIMLoss(window_size: int = 11, sigma: float = 1.5, weights: List[float] | None = None)[source]
Bases: Module
Multi-Scale Structural Similarity Index loss.
- __init__(window_size: int = 11, sigma: float = 1.5, weights: List[float] | None = None)[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(pred: Tensor, target: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.LPIPSLoss(net_type: str = 'vgg', use_dropout: bool = True)[source]
Bases: Module
Learned Perceptual Image Patch Similarity (simplified version).
- __init__(net_type: str = 'vgg', use_dropout: bool = True)[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(pred: Tensor, target: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.VGGLoss(layers: List[str] | None = None, weights: List[float] | None = None)[source]
Bases: Module
VGG-based perceptual loss for style transfer and super resolution.
- __init__(layers: List[str] | None = None, weights: List[float] | None = None)[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(pred: Tensor, target: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
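Example (a super-resolution sketch combining a pixel loss with SSIMLoss; images in [0, 1] are assumed so the default dynamic range L=1.0 applies, and the 0.1 weighting is illustrative):
>>> import torch
>>> import torchium
>>> ssim = torchium.losses.SSIMLoss(window_size=11)
>>> sr = torch.rand(2, 3, 128, 128, requires_grad=True)  # network output, assumed in [0, 1]
>>> hr = torch.rand(2, 3, 128, 128)  # ground-truth high-resolution image
>>> loss = torch.nn.functional.l1_loss(sr, hr) + 0.1 * ssim(sr, hr)
>>> loss.backward()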
Style Transfer
- class torchium.losses.StyleLoss(style_layers: List[str] | None = None, style_weights: List[float] | None = None)[source]
Bases: Module
Style loss using Gram matrices for neural style transfer.
- __init__(style_layers: List[str] | None = None, style_weights: List[float] | None = None)[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(generated: Tensor, style_target: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.ContentLoss(content_layers: List[str] | None = None, content_weights: List[float] | None = None)[source]
Bases: Module
Content loss for neural style transfer.
- __init__(content_layers: List[str] | None = None, content_weights: List[float] | None = None)[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(generated: Tensor, content_target: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.TotalVariationLoss(weight: float = 1.0)[source]
Bases: Module
Total Variation loss for image smoothing and noise reduction.
- __init__(weight: float = 1.0)[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(x: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
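Example (a style-transfer objective sketch combining ContentLoss, StyleLoss, and TotalVariationLoss with illustrative weights; the default constructors are assumed to handle feature extraction internally):
>>> import torch
>>> import torchium
>>> content_loss = torchium.losses.ContentLoss()
>>> style_loss = torchium.losses.StyleLoss()
>>> tv_loss = torchium.losses.TotalVariationLoss(weight=1.0)
>>> generated = torch.rand(1, 3, 256, 256, requires_grad=True)
>>> content_img = torch.rand(1, 3, 256, 256)
>>> style_img = torch.rand(1, 3, 256, 256)
>>> loss = (content_loss(generated, content_img)
...         + 1e3 * style_loss(generated, style_img)
...         + 1e-4 * tv_loss(generated))
>>> loss.backward()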
Natural Language Processing
Text Generation
- class torchium.losses.PerplexityLoss(ignore_index: int = -100)[source]
Bases: Module
Perplexity-based loss for language modeling.
- __init__(ignore_index: int = -100)[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(logits: Tensor, targets: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.CRFLoss(num_tags: int, batch_first: bool = True)[source]
Bases: Module
Conditional Random Field loss for sequence labeling.
- class torchium.losses.StructuredPredictionLoss(margin: float = 1.0)[source]
Bases: Module
Structured prediction loss using max-margin.
- __init__(margin: float = 1.0)[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(scores: Tensor, targets: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
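Example (a language-modeling sketch for PerplexityLoss using the forward(logits, targets) signature above; the (batch, seq_len, vocab) logits layout is an assumption and should be checked against the implementation):
>>> import torch
>>> import torchium
>>> vocab_size = 1000
>>> criterion = torchium.losses.PerplexityLoss(ignore_index=-100)
>>> logits = torch.randn(8, 32, vocab_size, requires_grad=True)  # (batch, seq_len, vocab) assumed
>>> targets = torch.randint(0, vocab_size, (8, 32))
>>> loss = criterion(logits, targets)
>>> loss.backward()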
Evaluation Metrics
- class torchium.losses.BLEULoss(n_gram: int = 4, smooth: bool = True)[source]
Bases: Module
BLEU score based loss (1 - BLEU).
- __init__(n_gram: int = 4, smooth: bool = True)[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(pred_tokens: Tensor, target_tokens: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- torchium.losses.ROUGELoss: alias of PerplexityLoss
- torchium.losses.METEORLoss: alias of PerplexityLoss
- torchium.losses.BERTScoreLoss: alias of PerplexityLoss
Word Embeddings
- class torchium.losses.Word2VecLoss(vocab_size: int, embed_dim: int, num_negative: int = 5)[source]
Bases: Module
Skip-gram with negative sampling loss.
- __init__(vocab_size: int, embed_dim: int, num_negative: int = 5)[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(center_words: Tensor, context_words: Tensor, negative_words: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- torchium.losses.GloVeLoss: alias of Word2VecLoss
- torchium.losses.FastTextLoss: alias of Word2VecLoss
Generative Models
GAN Losses
- class torchium.losses.GANLoss(use_lsgan: bool = False, target_real_label: float = 1.0, target_fake_label: float = 0.0)[source]
Bases: Module
Standard GAN loss (BCE).
- __init__(use_lsgan: bool = False, target_real_label: float = 1.0, target_fake_label: float = 0.0)[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(prediction: Tensor, target_is_real: bool) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.WassersteinLoss(*args, **kwargs)[source]
Bases: Module
Wasserstein GAN loss.
- forward(real_pred: Tensor, fake_pred: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.HingeGANLoss(*args, **kwargs)[source]
Bases: Module
Hinge loss for GANs.
- forward(prediction: Tensor, target_is_real: bool) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.LeastSquaresGANLoss(*args, **kwargs)[source]
Bases: Module
Least squares GAN loss.
- forward(prediction: Tensor, target_is_real: bool) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.RelativistGANLoss(*args, **kwargs)[source]
Bases: Module
Relativistic GAN loss.
- forward(real_pred: Tensor, fake_pred: Tensor, for_discriminator: bool = True) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
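Example (a sketch of the discriminator and generator objectives using the forward(prediction, target_is_real) signature above; d_real and d_fake stand in for discriminator outputs on real and generated batches):
>>> import torch
>>> import torchium
>>> gan_loss = torchium.losses.GANLoss(use_lsgan=False)
>>> d_real = torch.randn(16, 1, requires_grad=True)  # discriminator logits on real images
>>> d_fake = torch.randn(16, 1, requires_grad=True)  # discriminator logits on generated images
>>> d_loss = gan_loss(d_real, True) + gan_loss(d_fake, False)  # discriminator objective
>>> g_loss = gan_loss(d_fake, True)  # non-saturating generator objective
>>> d_loss.backward()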
VAE Losses
- class torchium.losses.ELBOLoss(beta: float = 1.0)[source]
Bases: Module
Evidence Lower Bound loss for VAE.
- __init__(beta: float = 1.0)[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(recon_x: Tensor, x: Tensor, mu: Tensor, logvar: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.BetaVAELoss(beta: float = 1.0)[source]
Bases: ELBOLoss
Beta-VAE loss with adjustable beta parameter.
- class torchium.losses.BetaTCVAELoss(alpha: float = 1.0, beta: float = 1.0, gamma: float = 1.0)[source]
Bases: Module
Beta-TC-VAE loss for disentanglement.
- __init__(alpha: float = 1.0, beta: float = 1.0, gamma: float = 1.0)[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(recon_x: Tensor, x: Tensor, mu: Tensor, logvar: Tensor, z: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- torchium.losses.FactorVAELoss: alias of BetaTCVAELoss
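Example (a VAE training sketch for ELBOLoss using the forward(recon_x, x, mu, logvar) signature above; a BCE-style reconstruction term is assumed, so inputs and reconstructions are kept in [0, 1]):
>>> import torch
>>> import torchium
>>> criterion = torchium.losses.ELBOLoss(beta=1.0)
>>> x = torch.rand(32, 784)  # input batch (e.g. flattened images in [0, 1])
>>> recon_x = torch.rand(32, 784, requires_grad=True)  # decoder output
>>> mu = torch.randn(32, 20, requires_grad=True)  # encoder mean
>>> logvar = torch.randn(32, 20, requires_grad=True)  # encoder log-variance
>>> loss = criterion(recon_x, x, mu, logvar)
>>> loss.backward()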
Diffusion Models
- class torchium.losses.DDPMLoss(*args, **kwargs)[source]
Bases: Module
DDPM (Denoising Diffusion Probabilistic Models) loss.
- forward(noise_pred: Tensor, noise_true: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.DDIMLoss(*args, **kwargs)[source]
Bases: DDPMLoss
DDIM loss (similar to DDPM).
- class torchium.losses.ScoreMatchingLoss(*args, **kwargs)[source]
Bases: Module
Score matching loss for diffusion models.
- forward(score_pred: Tensor, score_true: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
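Example (a denoising-diffusion training sketch for DDPMLoss using the forward(noise_pred, noise_true) signature above; noise_pred stands in for the output of a noise-prediction network):
>>> import torch
>>> import torchium
>>> criterion = torchium.losses.DDPMLoss()
>>> noise_true = torch.randn(8, 3, 32, 32)  # noise added in the forward diffusion process
>>> noise_pred = torch.randn(8, 3, 32, 32, requires_grad=True)  # model's noise estimate
>>> loss = criterion(noise_pred, noise_true)
>>> loss.backward()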
Metric Learning
Contrastive Learning
- class torchium.losses.ContrastiveMetricLoss(margin: float = 1.0)[source]
Bases: Module
Contrastive loss for metric learning.
- __init__(margin: float = 1.0)[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(output1: Tensor, output2: Tensor, label: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.TripletMetricLoss(margin: float = 1.0)[source]
Bases: Module
Triplet loss for metric learning.
- __init__(margin: float = 1.0)[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(anchor: Tensor, positive: Tensor, negative: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.QuadrupletLoss(margin1: float = 1.0, margin2: float = 0.5)[source]
Bases: Module
Quadruplet loss extending triplet loss.
- __init__(margin1: float = 1.0, margin2: float = 0.5)[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(anchor: Tensor, positive: Tensor, negative: Tensor, negative2: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.NPairLoss(l2_reg: float = 0.02)[source]
Bases: Module
N-pair loss for metric learning.
- __init__(l2_reg: float = 0.02)[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(anchors: Tensor, positives: Tensor, negatives: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
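Example (a metric-learning sketch for TripletMetricLoss using the forward(anchor, positive, negative) signature above):
>>> import torch
>>> import torchium
>>> criterion = torchium.losses.TripletMetricLoss(margin=1.0)
>>> anchor = torch.randn(32, 128, requires_grad=True)
>>> positive = torch.randn(32, 128, requires_grad=True)
>>> negative = torch.randn(32, 128, requires_grad=True)
>>> loss = criterion(anchor, positive, negative)
>>> loss.backward()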
Angular Losses
- class torchium.losses.AngularMetricLoss(margin: float = 0.5, scale: float = 64)[source]
Bases: Module
Angular loss for face recognition.
- __init__(margin: float = 0.5, scale: float = 64)[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(features: Tensor, labels: Tensor, weight: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.ArcFaceMetricLoss(margin: float = 0.5, scale: float = 64)[source]
Bases: Module
ArcFace loss for face recognition.
- __init__(margin: float = 0.5, scale: float = 64)[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(features: Tensor, labels: Tensor, weight: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.CosFaceMetricLoss(margin: float = 0.35, scale: float = 64)[source]
Bases: Module
CosFace loss for face recognition.
- __init__(margin: float = 0.35, scale: float = 64)[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(features: Tensor, labels: Tensor, weight: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.SphereFaceLoss(margin: int = 4, scale: float = 64)[source]
Bases: Module
SphereFace loss (A-Softmax).
- __init__(margin: int = 4, scale: float = 64)[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(features: Tensor, labels: Tensor, weight: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
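Example (a face-recognition sketch for ArcFaceMetricLoss using the forward(features, labels, weight) signature above; the (num_classes, embed_dim) layout of the class-center weight matrix is an assumption):
>>> import torch
>>> import torchium
>>> criterion = torchium.losses.ArcFaceMetricLoss(margin=0.5, scale=64)
>>> features = torch.randn(32, 512, requires_grad=True)  # embedding vectors
>>> labels = torch.randint(0, 1000, (32,))  # identity labels
>>> weight = torch.randn(1000, 512, requires_grad=True)  # class centers, layout assumed
>>> loss = criterion(features, labels, weight)
>>> loss.backward()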
Proxy-Based Losses
- class torchium.losses.ProxyNCALoss(num_classes: int, embed_dim: int, scale: float = 32)[source]
Bases: Module
Proxy-NCA loss for metric learning.
- __init__(num_classes: int, embed_dim: int, scale: float = 32)[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(embeddings: Tensor, labels: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.ProxyAnchorLoss(num_classes: int, embed_dim: int, margin: float = 0.1, alpha: float = 32)[source]
Bases: Module
Proxy-Anchor loss for metric learning.
- __init__(num_classes: int, embed_dim: int, margin: float = 0.1, alpha: float = 32)[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(embeddings: Tensor, labels: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
Multi-Task Learning
Uncertainty Weighting
- class torchium.losses.UncertaintyWeightingLoss(num_tasks: int)[source]
Bases: Module
Uncertainty-based weighting for multi-task learning.
- __init__(num_tasks: int)[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(losses: list) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.MultiTaskLoss(weights: list | None = None)[source]
Bases: Module
Simple multi-task loss with fixed weights.
- __init__(weights: list | None = None)[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(losses: list) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
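Example (a two-task sketch for UncertaintyWeightingLoss using the forward(losses) signature above, which takes a Python list of scalar task losses):
>>> import torch
>>> import torchium
>>> criterion = torchium.losses.UncertaintyWeightingLoss(num_tasks=2)
>>> cls_pred = torch.randn(16, 10, requires_grad=True)
>>> reg_pred = torch.randn(16, 1, requires_grad=True)
>>> cls_loss = torch.nn.functional.cross_entropy(cls_pred, torch.randint(0, 10, (16,)))
>>> reg_loss = torch.nn.functional.mse_loss(reg_pred, torch.randn(16, 1))
>>> total = criterion([cls_loss, reg_loss])  # learned uncertainty terms weight each task
>>> total.backward()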
Gradient Surgery
- class torchium.losses.PCGradLoss[source]
Bases: Module
PCGrad-style gradient surgery for multi-task learning.
- forward(losses: list) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.GradNormLoss(num_tasks: int, alpha: float = 1.5)[source]
Bases: Module
GradNorm for balancing gradients in multi-task learning.
- __init__(num_tasks: int, alpha: float = 1.5)[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(losses: list) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.CAGradLoss(num_tasks: int, c: float = 0.5)[source]
Bases: Module
Conflict-Averse Gradient descent for multi-task learning.
- __init__(num_tasks: int, c: float = 0.5)[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(losses: list) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
Dynamic Balancing
- class torchium.losses.DynamicLossBalancing(num_tasks: int, temp: float = 2.0)[source]
Bases: Module
Dynamic loss balancing based on task difficulty.
- __init__(num_tasks: int, temp: float = 2.0)[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(losses: list) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
Domain-Specific Losses
Medical Imaging
- class torchium.losses.DiceLoss(smooth: float = 1e-05, reduction: str = 'mean')[source]
Bases: Module
Dice loss for medical image segmentation.
- __init__(smooth: float = 1e-05, reduction: str = 'mean')[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(pred: Tensor, target: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.TverskyLoss(alpha: float = 0.5, beta: float = 0.5, smooth: float = 1e-05, reduction: str = 'mean', **kwargs)[source]
Bases: Module
Tversky Loss for semantic segmentation.
Reference: https://arxiv.org/abs/1706.05721
Audio Processing
- class torchium.losses.SpectralLoss(n_fft: int = 2048, alpha: float = 1.0)[source]
Bases: Module
Spectral loss for audio processing.
- __init__(n_fft: int = 2048, alpha: float = 1.0)[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(pred: Tensor, target: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
Time Series
- class torchium.losses.DTWLoss(use_cuda: bool = False)[source]
Bases: Module
Dynamic Time Warping loss for time series.
- __init__(use_cuda: bool = False)[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(pred: Tensor, target: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
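Example (an audio reconstruction sketch for SpectralLoss using the forward(pred, target) signature above; raw waveforms of shape (batch, samples) are an assumed input format):
>>> import torch
>>> import torchium
>>> criterion = torchium.losses.SpectralLoss(n_fft=1024)
>>> pred_wave = torch.randn(4, 16000, requires_grad=True)  # one second of audio at 16 kHz
>>> target_wave = torch.randn(4, 16000)
>>> loss = criterion(pred_wave, target_wave)
>>> loss.backward()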
PyTorch Native Losses
For completeness, Torchium also includes all PyTorch native loss functions:
- class torchium.losses.BCELoss(weight: Tensor | None = None, size_average=None, reduce=None, reduction: str = 'mean')[source]
Bases: _WeightedLoss
Creates a criterion that measures the Binary Cross Entropy between the target and the input probabilities:
The unreduced (i.e. with reduction set to 'none') loss can be described as:
\[\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_n \left[ y_n \cdot \log x_n + (1 - y_n) \cdot \log (1 - x_n) \right],\]
where \(N\) is the batch size. If reduction is not 'none' (default 'mean'), then
\[\begin{split}\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{`mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{`sum'.} \end{cases}\end{split}\]
This is used for measuring the error of a reconstruction in, for example, an auto-encoder. Note that the targets \(y\) should be numbers between 0 and 1.
Notice that if \(x_n\) is either 0 or 1, one of the log terms would be mathematically undefined in the above loss equation. PyTorch chooses to set \(\log (0) = -\infty\), since \(\lim_{x\to 0} \log (x) = -\infty\). However, an infinite term in the loss equation is not desirable for several reasons.
For one, if either \(y_n = 0\) or \((1 - y_n) = 0\), then we would be multiplying 0 with infinity. Secondly, if we have an infinite loss value, then we would also have an infinite term in our gradient, since \(\lim_{x\to 0} \frac{d}{dx} \log (x) = \infty\). This would make BCELoss’s backward method nonlinear with respect to \(x_n\), and using it for things like linear regression would not be straight-forward.
Our solution is that BCELoss clamps its log function outputs to be greater than or equal to -100. This way, we can always have a finite loss value and a linear backward method.
- Parameters:
weight (Tensor, optional) – a manual rescaling weight given to the loss of each batch element. If given, has to be a Tensor of size nbatch.
size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
- Shape:
Input: \((*)\), where \(*\) means any number of dimensions.
Target: \((*)\), same shape as the input.
Output: scalar. If reduction is 'none', then \((*)\), same shape as input.
Examples
>>> m = nn.Sigmoid()
>>> loss = nn.BCELoss()
>>> input = torch.randn(3, 2, requires_grad=True)
>>> target = torch.rand(3, 2, requires_grad=False)
>>> output = loss(m(input), target)
>>> output.backward()
- __init__(weight: Tensor | None = None, size_average=None, reduce=None, reduction: str = 'mean') None[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(input: Tensor, target: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.BCEWithLogitsLoss(weight: Tensor | None = None, size_average=None, reduce=None, reduction: str = 'mean', pos_weight: Tensor | None = None)[source]
Bases: _Loss
This loss combines a Sigmoid layer and the BCELoss in one single class. This version is more numerically stable than using a plain Sigmoid followed by a BCELoss as, by combining the operations into one layer, we take advantage of the log-sum-exp trick for numerical stability.
The unreduced (i.e. with reduction set to 'none') loss can be described as:
\[\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_n \left[ y_n \cdot \log \sigma(x_n) + (1 - y_n) \cdot \log (1 - \sigma(x_n)) \right],\]
where \(N\) is the batch size. If reduction is not 'none' (default 'mean'), then
\[\begin{split}\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{`mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{`sum'.} \end{cases}\end{split}\]
This is used for measuring the error of a reconstruction in, for example, an auto-encoder. Note that the targets t[i] should be numbers between 0 and 1.
It’s possible to trade off recall and precision by adding weights to positive examples. In the case of multi-label classification the loss can be described as:
\[\ell_c(x, y) = L_c = \{l_{1,c},\dots,l_{N,c}\}^\top, \quad l_{n,c} = - w_{n,c} \left[ p_c y_{n,c} \cdot \log \sigma(x_{n,c}) + (1 - y_{n,c}) \cdot \log (1 - \sigma(x_{n,c})) \right],\]
where \(c\) is the class number (\(c > 1\) for multi-label binary classification, \(c = 1\) for single-label binary classification), \(n\) is the number of the sample in the batch and \(p_c\) is the weight of the positive answer for the class \(c\).
\(p_c > 1\) increases the recall, \(p_c < 1\) increases the precision.
For example, if a dataset contains 100 positive and 300 negative examples of a single class, then pos_weight for the class should be equal to \(\frac{300}{100}=3\). The loss would act as if the dataset contains \(3\times 100=300\) positive examples.
Examples
>>> target = torch.ones([10, 64], dtype=torch.float32)  # 64 classes, batch size = 10
>>> output = torch.full([10, 64], 1.5)  # A prediction (logit)
>>> pos_weight = torch.ones([64])  # All weights are equal to 1
>>> criterion = torch.nn.BCEWithLogitsLoss(pos_weight=pos_weight)
>>> criterion(output, target)  # -log(sigmoid(1.5))
tensor(0.20...)
In the above example, the pos_weight tensor's elements correspond to the 64 distinct classes in a multi-label binary classification scenario. Each element in pos_weight is designed to adjust the loss function based on the imbalance between negative and positive samples for the respective class. This approach is useful in datasets with varying levels of class imbalance, ensuring that the loss calculation accurately accounts for the distribution in each class.
- Parameters:
weight (Tensor, optional) – a manual rescaling weight given to the loss of each batch element. If given, has to be a Tensor of size nbatch.
size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
pos_weight (Tensor, optional) – a weight of positive examples to be broadcasted with target. Must be a tensor with equal size along the class dimension to the number of classes. Pay close attention to PyTorch's broadcasting semantics in order to achieve the desired operations. For a target of size [B, C, H, W] (where B is batch size), pos_weight of size [B, C, H, W] will apply different pos_weights to each element of the batch, or [C, H, W] the same pos_weights across the batch. To apply the same positive weight along all spatial dimensions for a 2D multi-class target [C, H, W], use [C, 1, 1]. Default: None
- Shape:
Input: \((*)\), where \(*\) means any number of dimensions.
Target: \((*)\), same shape as the input.
Output: scalar. If reduction is 'none', then \((*)\), same shape as input.
Examples
>>> loss = nn.BCEWithLogitsLoss()
>>> input = torch.randn(3, requires_grad=True)
>>> target = torch.empty(3).random_(2)
>>> output = loss(input, target)
>>> output.backward()
- __init__(weight: Tensor | None = None, size_average=None, reduce=None, reduction: str = 'mean', pos_weight: Tensor | None = None) None[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(input: Tensor, target: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.CTCLoss(blank: int = 0, reduction: str = 'mean', zero_infinity: bool = False)[source]
Bases: _Loss
The Connectionist Temporal Classification loss.
Calculates loss between a continuous (unsegmented) time series and a target sequence. CTCLoss sums over the probability of possible alignments of input to target, producing a loss value which is differentiable with respect to each input node. The alignment of input to target is assumed to be “many-to-one”, which limits the length of the target sequence such that it must be \(\leq\) the input length.
- Parameters:
blank (int, optional) – blank label. Default \(0\).
reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the output losses will be divided by the target lengths and then the mean over the batch is taken, 'sum': the output losses will be summed. Default: 'mean'
zero_infinity (bool, optional) – Whether to zero infinite losses and the associated gradients. Default: False. Infinite losses mainly occur when the inputs are too short to be aligned to the targets.
- Shape:
Log_probs: Tensor of size \((T, N, C)\) or \((T, C)\), where \(T = \text{input length}\), \(N = \text{batch size}\), and \(C = \text{number of classes (including blank)}\). The logarithmized probabilities of the outputs (e.g. obtained with torch.nn.functional.log_softmax()).
Targets: Tensor of size \((N, S)\) or \((\operatorname{sum}(\text{target\_lengths}))\), where \(N = \text{batch size}\) and \(S = \text{max target length, if shape is } (N, S)\). It represents the target sequences. Each element in the target sequence is a class index. And the target index cannot be blank (default=0). In the \((N, S)\) form, targets are padded to the length of the longest sequence, and stacked. In the \((\operatorname{sum}(\text{target\_lengths}))\) form, the targets are assumed to be un-padded and concatenated within 1 dimension.
Input_lengths: Tuple or tensor of size \((N)\) or \(()\), where \(N = \text{batch size}\). It represents the lengths of the inputs (must each be \(\leq T\)). And the lengths are specified for each sequence to achieve masking under the assumption that sequences are padded to equal lengths.
Target_lengths: Tuple or tensor of size \((N)\) or \(()\), where \(N = \text{batch size}\). It represents lengths of the targets. Lengths are specified for each sequence to achieve masking under the assumption that sequences are padded to equal lengths. If target shape is \((N, S)\), target_lengths are effectively the stop index \(s_n\) for each target sequence, such that target_n = targets[n,0:s_n] for each target in a batch. Lengths must each be \(\leq S\). If the targets are given as a 1d tensor that is the concatenation of individual targets, the target_lengths must add up to the total length of the tensor.
Output: scalar if reduction is 'mean' (default) or 'sum'. If reduction is 'none', then \((N)\) if input is batched or \(()\) if input is unbatched, where \(N = \text{batch size}\).
Examples
>>> # Target are to be padded
>>> T = 50  # Input sequence length
>>> C = 20  # Number of classes (including blank)
>>> N = 16  # Batch size
>>> S = 30  # Target sequence length of longest target in batch (padding length)
>>> S_min = 10  # Minimum target length, for demonstration purposes
>>>
>>> # Initialize random batch of input vectors, for *size = (T,N,C)
>>> input = torch.randn(T, N, C).log_softmax(2).detach().requires_grad_()
>>>
>>> # Initialize random batch of targets (0 = blank, 1:C = classes)
>>> target = torch.randint(low=1, high=C, size=(N, S), dtype=torch.long)
>>>
>>> input_lengths = torch.full(size=(N,), fill_value=T, dtype=torch.long)
>>> target_lengths = torch.randint(low=S_min, high=S, size=(N,), dtype=torch.long)
>>> ctc_loss = nn.CTCLoss()
>>> loss = ctc_loss(input, target, input_lengths, target_lengths)
>>> loss.backward()
>>>
>>> # Target are to be un-padded
>>> T = 50  # Input sequence length
>>> C = 20  # Number of classes (including blank)
>>> N = 16  # Batch size
>>>
>>> # Initialize random batch of input vectors, for *size = (T,N,C)
>>> input = torch.randn(T, N, C).log_softmax(2).detach().requires_grad_()
>>> input_lengths = torch.full(size=(N,), fill_value=T, dtype=torch.long)
>>>
>>> # Initialize random batch of targets (0 = blank, 1:C = classes)
>>> target_lengths = torch.randint(low=1, high=T, size=(N,), dtype=torch.long)
>>> target = torch.randint(low=1, high=C, size=(sum(target_lengths),), dtype=torch.long)
>>> ctc_loss = nn.CTCLoss()
>>> loss = ctc_loss(input, target, input_lengths, target_lengths)
>>> loss.backward()
>>>
>>> # Target are to be un-padded and unbatched (effectively N=1)
>>> T = 50  # Input sequence length
>>> C = 20  # Number of classes (including blank)
>>>
>>> # Initialize random batch of input vectors, for *size = (T,C)
>>> # xdoctest: +SKIP("FIXME: error in doctest")
>>> input = torch.randn(T, C).log_softmax(1).detach().requires_grad_()
>>> input_lengths = torch.tensor(T, dtype=torch.long)
>>>
>>> # Initialize random batch of targets (0 = blank, 1:C = classes)
>>> target_lengths = torch.randint(low=1, high=T, size=(), dtype=torch.long)
>>> target = torch.randint(low=1, high=C, size=(target_lengths,), dtype=torch.long)
>>> ctc_loss = nn.CTCLoss()
>>> loss = ctc_loss(input, target, input_lengths, target_lengths)
>>> loss.backward()
- Reference:
A. Graves et al.: Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks: https://www.cs.toronto.edu/~graves/icml_2006.pdf
Note
In order to use CuDNN, the following must be satisfied: targets must be in concatenated format, all input_lengths must be T, \(blank=0\), target_lengths \(\leq 256\), and the integer arguments must be of dtype torch.int32.
The regular implementation uses the (more common in PyTorch) torch.long dtype.
Note
In some circumstances when using the CUDA backend with CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. Please see the notes on Randomness for background.
- __init__(blank: int = 0, reduction: str = 'mean', zero_infinity: bool = False)[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(log_probs: Tensor, targets: Tensor, input_lengths: Tensor, target_lengths: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.CosineEmbeddingLoss(margin: float = 0.0, size_average=None, reduce=None, reduction: str = 'mean')[source]
Bases: _Loss
Creates a criterion that measures the loss given input tensors \(x_1\), \(x_2\) and a Tensor label \(y\) with values 1 or -1. Use (\(y=1\)) to maximize the cosine similarity of two inputs, and (\(y=-1\)) otherwise. This is typically used for learning nonlinear embeddings or semi-supervised learning.
The loss function for each sample is:
\[\begin{split}\text{loss}(x, y) = \begin{cases} 1 - \cos(x_1, x_2), & \text{if } y = 1 \\ \max(0, \cos(x_1, x_2) - \text{margin}), & \text{if } y = -1 \end{cases}\end{split}\]
- Parameters:
margin (float, optional) – Should be a number from \(-1\) to \(1\); \(0\) to \(0.5\) is suggested. If margin is missing, the default value is \(0\).
size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
- Shape:
Input1: \((N, D)\) or \((D)\), where N is the batch size and D is the embedding dimension.
Input2: \((N, D)\) or \((D)\), same shape as Input1.
Target: \((N)\) or \(()\).
Output: If reduction is 'none', then \((N)\), otherwise scalar.
Examples
>>> loss = nn.CosineEmbeddingLoss()
>>> input1 = torch.randn(3, 5, requires_grad=True)
>>> input2 = torch.randn(3, 5, requires_grad=True)
>>> target = torch.ones(3)
>>> output = loss(input1, input2, target)
>>> output.backward()
- __init__(margin: float = 0.0, size_average=None, reduce=None, reduction: str = 'mean') None[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(input1: Tensor, input2: Tensor, target: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.GaussianNLLLoss(*, full: bool = False, eps: float = 1e-06, reduction: str = 'mean')[source]
Bases:
_Loss
Gaussian negative log likelihood loss.
The targets are treated as samples from Gaussian distributions with expectations and variances predicted by the neural network. For a
target tensor modelled as having Gaussian distribution with a tensor of expectations input and a tensor of positive variances var, the loss is:
\[\text{loss} = \frac{1}{2}\left(\log\left(\text{max}\left(\text{var}, \ \text{eps}\right)\right) + \frac{\left(\text{input} - \text{target}\right)^2} {\text{max}\left(\text{var}, \ \text{eps}\right)}\right) + \text{const.}\]
where eps is used for stability. By default, the constant term of the loss function is omitted unless full is True. If var is not the same size as input (due to a homoscedastic assumption), it must either have a final dimension of 1 or have one fewer dimension (with all other sizes being the same) for correct broadcasting.
- Parameters:
full (bool, optional) – include the constant term in the loss calculation. Default:
False.eps (float, optional) – value used to clamp
var(see note below), for stability. Default: 1e-6.reduction (str, optional) – specifies the reduction to apply to the output:
'none'|'mean'|'sum'.'none': no reduction will be applied,'mean': the output is the average of all batch member losses,'sum': the output is the sum of all batch member losses. Default:'mean'.
- Shape:
Input: \((N, *)\) or \((*)\) where \(*\) means any number of additional dimensions
Target: \((N, *)\) or \((*)\), same shape as the input, or same shape as the input but with one dimension equal to 1 (to allow for broadcasting)
Var: \((N, *)\) or \((*)\), same shape as the input, or same shape as the input but with one dimension equal to 1, or same shape as the input but with one fewer dimension (to allow for broadcasting), or a scalar value
Output: scalar if
reductionis'mean'(default) or'sum'. Ifreductionis'none', then \((N, *)\), same shape as the input
Examples
>>> loss = nn.GaussianNLLLoss()
>>> input = torch.randn(5, 2, requires_grad=True)
>>> target = torch.randn(5, 2)
>>> var = torch.ones(5, 2, requires_grad=True)  # heteroscedastic
>>> output = loss(input, target, var)
>>> output.backward()
>>> loss = nn.GaussianNLLLoss()
>>> input = torch.randn(5, 2, requires_grad=True)
>>> target = torch.randn(5, 2)
>>> var = torch.ones(5, 1, requires_grad=True)  # homoscedastic
>>> output = loss(input, target, var)
>>> output.backward()
Note
The clamping of
varis ignored with respect to autograd, and so the gradients are unaffected by it.- Reference:
Nix, D. A. and Weigend, A. S., “Estimating the mean and variance of the target probability distribution”, Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN’94), Orlando, FL, USA, 1994, pp. 55-60 vol.1, doi: 10.1109/ICNN.1994.374138.
- __init__(*, full: bool = False, eps: float = 1e-06, reduction: str = 'mean') None[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(input: Tensor, target: Tensor, var: Tensor | float) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.HingeEmbeddingLoss(margin: float = 1.0, size_average=None, reduce=None, reduction: str = 'mean')[source]
Bases:
_Loss
Measures the loss given an input tensor \(x\) and a labels tensor \(y\) (containing 1 or -1). This is usually used for measuring whether two inputs are similar or dissimilar, e.g. using the L1 pairwise distance as \(x\), and is typically used for learning nonlinear embeddings or semi-supervised learning.
The loss function for \(n\)-th sample in the mini-batch is
\[\begin{split}l_n = \begin{cases} x_n, & \text{if}\; y_n = 1,\\ \max \{0, margin - x_n\}, & \text{if}\; y_n = -1, \end{cases}\end{split}\]and the total loss functions is
\[\begin{split}\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{`mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{`sum'.} \end{cases}\end{split}\]where \(L = \{l_1,\dots,l_N\}^\top\).
- Parameters:
margin (float, optional) – Has a default value of 1.
size_average (bool, optional) – Deprecated (see
reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the fieldsize_averageis set toFalse, the losses are instead summed for each minibatch. Ignored whenreduceisFalse. Default:Truereduce (bool, optional) – Deprecated (see
reduction). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average. WhenreduceisFalse, returns a loss per batch element instead and ignoressize_average. Default:Truereduction (str, optional) – Specifies the reduction to apply to the output:
'none'|'mean'|'sum'.'none': no reduction will be applied,'mean': the sum of the output will be divided by the number of elements in the output,'sum': the output will be summed. Note:size_averageandreduceare in the process of being deprecated, and in the meantime, specifying either of those two args will overridereduction. Default:'mean'
- Shape:
Input: \((*)\) where \(*\) means, any number of dimensions. The sum operation operates over all the elements.
Target: \((*)\), same shape as the input
Output: scalar. If
reductionis'none', then same shape as the input
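Examples (a minimal sketch mirroring the torch.nn.HingeEmbeddingLoss calling convention; values are illustrative):
>>> loss = nn.HingeEmbeddingLoss(margin=1.0)
>>> x = torch.randn(4, 8, requires_grad=True)  # in practice, e.g. an L1 pairwise distance
>>> y = torch.randn(4, 8).sign()               # labels must be 1 or -1
>>> output = loss(x, y)
>>> output.backward()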
- __init__(margin: float = 1.0, size_average=None, reduce=None, reduction: str = 'mean') None[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(input: Tensor, target: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.KLDivLoss(size_average=None, reduce=None, reduction: str = 'mean', log_target: bool = False)[source]
Bases:
_Loss
The Kullback-Leibler divergence loss.
For tensors of the same shape \(y_{\text{pred}},\ y_{\text{true}}\), where \(y_{\text{pred}}\) is the
input and \(y_{\text{true}}\) is the target, we define the pointwise KL-divergence as
\[L(y_{\text{pred}},\ y_{\text{true}}) = y_{\text{true}} \cdot \log \frac{y_{\text{true}}}{y_{\text{pred}}} = y_{\text{true}} \cdot (\log y_{\text{true}} - \log y_{\text{pred}})\]
To avoid underflow issues when computing this quantity, this loss expects the argument input in the log-space. The argument target may also be provided in the log-space if log_target = True.
To summarise, this function is roughly equivalent to computing
if not log_target:  # default
    loss_pointwise = target * (target.log() - input)
else:
    loss_pointwise = target.exp() * (target - input)
and then reducing this result depending on the argument reduction as
if reduction == "mean":  # default
    loss = loss_pointwise.mean()
elif reduction == "batchmean":  # mathematically correct
    loss = loss_pointwise.sum() / input.size(0)
elif reduction == "sum":
    loss = loss_pointwise.sum()
else:  # reduction == "none"
    loss = loss_pointwise
Note
As with all the other losses in PyTorch, this function expects the first argument, input, to be the output of the model (e.g. the neural network) and the second, target, to be the observations in the dataset. This differs from the standard mathematical notation \(KL(P\ ||\ Q)\) where \(P\) denotes the distribution of the observations and \(Q\) denotes the model.
Warning
reduction = "mean" doesn't return the true KL divergence value; please use reduction = "batchmean", which aligns with the mathematical definition.
- Parameters:
size_average (bool, optional) – Deprecated (see
reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the fieldsize_averageis set to False, the losses are instead summed for each minibatch. Ignored whenreduceis False. Default: Truereduce (bool, optional) – Deprecated (see
reduction). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average. Whenreduceis False, returns a loss per batch element instead and ignoressize_average. Default: Truereduction (str, optional) – Specifies the reduction to apply to the output. Default: “mean”
log_target (bool, optional) – Specifies whether target is the log space. Default: False
- Shape:
Input: \((*)\), where \(*\) means any number of dimensions.
Target: \((*)\), same shape as the input.
Output: scalar by default. If
reductionis ‘none’, then \((*)\), same shape as the input.
Examples
>>> kl_loss = nn.KLDivLoss(reduction="batchmean")
>>> # input should be a distribution in the log space
>>> input = F.log_softmax(torch.randn(3, 5, requires_grad=True), dim=1)
>>> # Sample a batch of distributions. Usually this would come from the dataset
>>> target = F.softmax(torch.rand(3, 5), dim=1)
>>> output = kl_loss(input, target)
>>>
>>> kl_loss = nn.KLDivLoss(reduction="batchmean", log_target=True)
>>> log_target = F.log_softmax(torch.rand(3, 5), dim=1)
>>> output = kl_loss(input, log_target)
- __init__(size_average=None, reduce=None, reduction: str = 'mean', log_target: bool = False) None[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(input: Tensor, target: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.L1Loss(size_average=None, reduce=None, reduction: str = 'mean')[source]
Bases:
_Loss
Creates a criterion that measures the mean absolute error (MAE) between each element in the input \(x\) and target \(y\).
The unreduced (i.e. with
reductionset to'none') loss can be described as:\[\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = \left| x_n - y_n \right|,\]where \(N\) is the batch size. If
reductionis not'none'(default'mean'), then:\[\begin{split}\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{`mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{`sum'.} \end{cases}\end{split}\]\(x\) and \(y\) are tensors of arbitrary shapes with a total of \(N\) elements each.
The sum operation still operates over all the elements, and divides by \(N\).
The division by \(N\) can be avoided if one sets
reduction = 'sum'.Supports real-valued and complex-valued inputs.
- Parameters:
size_average (bool, optional) – Deprecated (see
reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the fieldsize_averageis set toFalse, the losses are instead summed for each minibatch. Ignored whenreduceisFalse. Default:Truereduce (bool, optional) – Deprecated (see
reduction). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average. WhenreduceisFalse, returns a loss per batch element instead and ignoressize_average. Default:Truereduction (str, optional) – Specifies the reduction to apply to the output:
'none'|'mean'|'sum'.'none': no reduction will be applied,'mean': the sum of the output will be divided by the number of elements in the output,'sum': the output will be summed. Note:size_averageandreduceare in the process of being deprecated, and in the meantime, specifying either of those two args will overridereduction. Default:'mean'
- Shape:
Input: \((*)\), where \(*\) means any number of dimensions.
Target: \((*)\), same shape as the input.
Output: scalar. If
reductionis'none', then \((*)\), same shape as the input.
Examples
>>> loss = nn.L1Loss()
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5)
>>> output = loss(input, target)
>>> output.backward()
- __init__(size_average=None, reduce=None, reduction: str = 'mean') None[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(input: Tensor, target: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.MarginRankingLoss(margin: float = 0.0, size_average=None, reduce=None, reduction: str = 'mean')[source]
Bases:
_Loss
Creates a criterion that measures the loss given inputs \(x1\), \(x2\), two 1D mini-batch or 0D Tensors, and a label 1D mini-batch or 0D Tensor \(y\) (containing 1 or -1).
If \(y = 1\) then it is assumed the first input should be ranked higher (have a larger value) than the second input, and vice-versa for \(y = -1\).
The loss function for each pair of samples in the mini-batch is:
\[\text{loss}(x1, x2, y) = \max(0, -y * (x1 - x2) + \text{margin})\]- Parameters:
margin (float, optional) – Has a default value of \(0\).
size_average (bool, optional) – Deprecated (see
reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the fieldsize_averageis set toFalse, the losses are instead summed for each minibatch. Ignored whenreduceisFalse. Default:Truereduce (bool, optional) – Deprecated (see
reduction). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average. WhenreduceisFalse, returns a loss per batch element instead and ignoressize_average. Default:Truereduction (str, optional) – Specifies the reduction to apply to the output:
'none'|'mean'|'sum'.'none': no reduction will be applied,'mean': the sum of the output will be divided by the number of elements in the output,'sum': the output will be summed. Note:size_averageandreduceare in the process of being deprecated, and in the meantime, specifying either of those two args will overridereduction. Default:'mean'
- Shape:
Input1: \((N)\) or \(()\) where N is the batch size.
Input2: \((N)\) or \(()\), same shape as the Input1.
Target: \((N)\) or \(()\), same shape as the inputs.
Output: scalar. If
reductionis'none'and Input size is not \(()\), then \((N)\).
Examples
>>> loss = nn.MarginRankingLoss()
>>> input1 = torch.randn(3, requires_grad=True)
>>> input2 = torch.randn(3, requires_grad=True)
>>> target = torch.randn(3).sign()
>>> output = loss(input1, input2, target)
>>> output.backward()
- __init__(margin: float = 0.0, size_average=None, reduce=None, reduction: str = 'mean') None[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(input1: Tensor, input2: Tensor, target: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.MultiLabelMarginLoss(size_average=None, reduce=None, reduction: str = 'mean')[source]
Bases:
_Loss
Creates a criterion that optimizes a multi-class multi-classification hinge loss (margin-based loss) between input \(x\) (a 2D mini-batch Tensor) and output \(y\) (which is a 2D Tensor of target class indices). For each sample in the mini-batch:
\[\text{loss}(x, y) = \sum_{ij}\frac{\max(0, 1 - (x[y[j]] - x[i]))}{\text{x.size}(0)}\]where \(x \in \left\{0, \; \cdots , \; \text{x.size}(0) - 1\right\}\), \(y \in \left\{0, \; \cdots , \; \text{y.size}(0) - 1\right\}\), \(0 \leq y[j] \leq \text{x.size}(0)-1\), and \(i \neq y[j]\) for all \(i\) and \(j\).
\(y\) and \(x\) must have the same size.
The criterion only considers a contiguous block of non-negative targets that starts at the front.
This allows for different samples to have variable amounts of target classes.
- Parameters:
size_average (bool, optional) – Deprecated (see
reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the fieldsize_averageis set toFalse, the losses are instead summed for each minibatch. Ignored whenreduceisFalse. Default:Truereduce (bool, optional) – Deprecated (see
reduction). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average. WhenreduceisFalse, returns a loss per batch element instead and ignoressize_average. Default:Truereduction (str, optional) – Specifies the reduction to apply to the output:
'none'|'mean'|'sum'.'none': no reduction will be applied,'mean': the sum of the output will be divided by the number of elements in the output,'sum': the output will be summed. Note:size_averageandreduceare in the process of being deprecated, and in the meantime, specifying either of those two args will overridereduction. Default:'mean'
- Shape:
Input: \((C)\) or \((N, C)\) where N is the batch size and C is the number of classes.
Target: \((C)\) or \((N, C)\), label targets padded by -1 ensuring same shape as the input.
Output: scalar. If
reductionis'none', then \((N)\).
Examples
>>> loss = nn.MultiLabelMarginLoss()
>>> x = torch.FloatTensor([[0.1, 0.2, 0.4, 0.8]])
>>> # for target y, only consider labels 3 and 0, not after label -1
>>> y = torch.LongTensor([[3, 0, -1, 1]])
>>> # 0.25 * ((1-(0.1-0.2)) + (1-(0.1-0.4)) + (1-(0.8-0.2)) + (1-(0.8-0.4)))
>>> loss(x, y)
tensor(0.85...)
- __init__(size_average=None, reduce=None, reduction: str = 'mean') None[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(input: Tensor, target: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.MultiLabelSoftMarginLoss(weight: Tensor | None = None, size_average=None, reduce=None, reduction: str = 'mean')[source]
Bases:
_WeightedLoss
Creates a criterion that optimizes a multi-label one-versus-all loss based on max-entropy, between input \(x\) and target \(y\) of size \((N, C)\). For each sample in the minibatch:
\[loss(x, y) = - \frac{1}{C} * \sum_i y[i] * \log((1 + \exp(-x[i]))^{-1}) + (1-y[i]) * \log\left(\frac{\exp(-x[i])}{(1 + \exp(-x[i]))}\right)\]where \(i \in \left\{0, \; \cdots , \; \text{x.nElement}() - 1\right\}\), \(y[i] \in \left\{0, \; 1\right\}\).
- Parameters:
weight (Tensor, optional) – a manual rescaling weight given to each class. If given, it has to be a Tensor of size C. Otherwise, it is treated as if having all ones.
size_average (bool, optional) – Deprecated (see
reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the fieldsize_averageis set toFalse, the losses are instead summed for each minibatch. Ignored whenreduceisFalse. Default:Truereduce (bool, optional) – Deprecated (see
reduction). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average. WhenreduceisFalse, returns a loss per batch element instead and ignoressize_average. Default:Truereduction (str, optional) – Specifies the reduction to apply to the output:
'none'|'mean'|'sum'.'none': no reduction will be applied,'mean': the sum of the output will be divided by the number of elements in the output,'sum': the output will be summed. Note:size_averageandreduceare in the process of being deprecated, and in the meantime, specifying either of those two args will overridereduction. Default:'mean'
- Shape:
Input: \((N, C)\) where N is the batch size and C is the number of classes.
Target: \((N, C)\), label targets must have the same shape as the input.
Output: scalar. If
reductionis'none', then \((N)\).
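Examples (a minimal sketch mirroring the torch.nn.MultiLabelSoftMarginLoss calling convention):
>>> loss = nn.MultiLabelSoftMarginLoss()
>>> input = torch.randn(3, 5, requires_grad=True)   # raw scores, one column per class
>>> target = torch.randint(0, 2, (3, 5)).float()    # multi-hot labels in {0, 1}
>>> output = loss(input, target)
>>> output.backward()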
- __init__(weight: Tensor | None = None, size_average=None, reduce=None, reduction: str = 'mean') None[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(input: Tensor, target: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.MultiMarginLoss(p: int = 1, margin: float = 1.0, weight: Tensor | None = None, size_average=None, reduce=None, reduction: str = 'mean')[source]
Bases:
_WeightedLoss
Creates a criterion that optimizes a multi-class classification hinge loss (margin-based loss) between input \(x\) (a 2D mini-batch Tensor) and output \(y\) (which is a 1D tensor of target class indices, \(0 \leq y \leq \text{x.size}(1)-1\)):
For each mini-batch sample, the loss in terms of the 1D input \(x\) and scalar output \(y\) is:
\[\text{loss}(x, y) = \frac{\sum_i \max(0, \text{margin} - x[y] + x[i])^p}{\text{x.size}(0)}\]where \(i \in \left\{0, \; \cdots , \; \text{x.size}(0) - 1\right\}\) and \(i \neq y\).
Optionally, you can give non-equal weighting on the classes by passing a 1D
weighttensor into the constructor.The loss function then becomes:
\[\text{loss}(x, y) = \frac{\sum_i w[y] * \max(0, \text{margin} - x[y] + x[i])^p}{\text{x.size}(0)}\]- Parameters:
p (int, optional) – Has a default value of \(1\). \(1\) and \(2\) are the only supported values.
margin (float, optional) – Has a default value of \(1\).
weight (Tensor, optional) – a manual rescaling weight given to each class. If given, it has to be a Tensor of size C. Otherwise, it is treated as if having all ones.
size_average (bool, optional) – Deprecated (see
reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the fieldsize_averageis set toFalse, the losses are instead summed for each minibatch. Ignored whenreduceisFalse. Default:Truereduce (bool, optional) – Deprecated (see
reduction). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average. WhenreduceisFalse, returns a loss per batch element instead and ignoressize_average. Default:Truereduction (str, optional) – Specifies the reduction to apply to the output:
'none'|'mean'|'sum'.'none': no reduction will be applied,'mean': the sum of the output will be divided by the number of elements in the output,'sum': the output will be summed. Note:size_averageandreduceare in the process of being deprecated, and in the meantime, specifying either of those two args will overridereduction. Default:'mean'
- Shape:
Input: \((N, C)\) or \((C)\), where \(N\) is the batch size and \(C\) is the number of classes.
Target: \((N)\) or \(()\), where each value is \(0 \leq \text{targets}[i] \leq C-1\).
Output: scalar. If
reductionis'none', then same shape as the target.
Examples
>>> loss = nn.MultiMarginLoss()
>>> x = torch.tensor([[0.1, 0.2, 0.4, 0.8]])
>>> y = torch.tensor([3])
>>> # 0.25 * ((1-(0.8-0.1)) + (1-(0.8-0.2)) + (1-(0.8-0.4)))
>>> loss(x, y)
tensor(0.32...)
- __init__(p: int = 1, margin: float = 1.0, weight: Tensor | None = None, size_average=None, reduce=None, reduction: str = 'mean') None[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(input: Tensor, target: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.NLLLoss(weight: Tensor | None = None, size_average=None, ignore_index: int = -100, reduce=None, reduction: str = 'mean')[source]
Bases:
_WeightedLoss
The negative log likelihood loss. It is useful to train a classification problem with C classes.
If provided, the optional argument
weightshould be a 1D Tensor assigning weight to each of the classes. This is particularly useful when you have an unbalanced training set.The input given through a forward call is expected to contain log-probabilities of each class. input has to be a Tensor of size either \((minibatch, C)\) or \((minibatch, C, d_1, d_2, ..., d_K)\) with \(K \geq 1\) for the K-dimensional case. The latter is useful for higher dimension inputs, such as computing NLL loss per-pixel for 2D images.
Obtaining log-probabilities in a neural network is easily achieved by adding a LogSoftmax layer in the last layer of your network. You may use CrossEntropyLoss instead, if you prefer not to add an extra layer.
The target that this loss expects should be a class index in the range \([0, C-1]\) where C = number of classes; if ignore_index is specified, this loss also accepts this class index (this index may not necessarily be in the class range).
The unreduced (i.e. with
reductionset to'none') loss can be described as:\[\begin{split}\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \\ l_n = - w_{y_n} x_{n,y_n}, \\ w_{c} = \text{weight}[c] \cdot \mathbb{1}\{c \not= \text{ignore\_index}\},\end{split}\]where \(x\) is the input, \(y\) is the target, \(w\) is the weight, and \(N\) is the batch size. If
reductionis not'none'(default'mean'), then\[\begin{split}\ell(x, y) = \begin{cases} \sum_{n=1}^N \frac{1}{\sum_{n=1}^N w_{y_n}} l_n, & \text{if reduction} = \text{`mean';}\\ \sum_{n=1}^N l_n, & \text{if reduction} = \text{`sum'.} \end{cases}\end{split}\]- Parameters:
weight (Tensor, optional) – a manual rescaling weight given to each class. If given, it has to be a Tensor of size C. Otherwise, it is treated as if having all ones.
size_average (bool, optional) – Deprecated (see
reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the fieldsize_averageis set toFalse, the losses are instead summed for each minibatch. Ignored whenreduceisFalse. Default:Noneignore_index (int, optional) – Specifies a target value that is ignored and does not contribute to the input gradient. When
size_averageisTrue, the loss is averaged over non-ignored targets.reduce (bool, optional) – Deprecated (see
reduction). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average. WhenreduceisFalse, returns a loss per batch element instead and ignoressize_average. Default:Nonereduction (str, optional) – Specifies the reduction to apply to the output:
'none'|'mean'|'sum'.'none': no reduction will be applied,'mean': the weighted mean of the output is taken,'sum': the output will be summed. Note:size_averageandreduceare in the process of being deprecated, and in the meantime, specifying either of those two args will overridereduction. Default:'mean'
- Shape:
Input: \((N, C)\) or \((C)\), where C = number of classes, N = batch size, or \((N, C, d_1, d_2, ..., d_K)\) with \(K \geq 1\) in the case of K-dimensional loss.
Target: \((N)\) or \(()\), where each value is \(0 \leq \text{targets}[i] \leq C-1\), or \((N, d_1, d_2, ..., d_K)\) with \(K \geq 1\) in the case of K-dimensional loss.
Output: If
reductionis'none', shape \((N)\) or \((N, d_1, d_2, ..., d_K)\) with \(K \geq 1\) in the case of K-dimensional loss. Otherwise, scalar.
Examples
>>> log_softmax = nn.LogSoftmax(dim=1)
>>> loss_fn = nn.NLLLoss()
>>> # input to NLLLoss is of size N x C = 3 x 5
>>> input = torch.randn(3, 5, requires_grad=True)
>>> # each element in target must have 0 <= value < C
>>> target = torch.tensor([1, 0, 4])
>>> loss = loss_fn(log_softmax(input), target)
>>> loss.backward()
>>>
>>> # 2D loss example (used, for example, with image inputs)
>>> N, C = 5, 4
>>> loss_fn = nn.NLLLoss()
>>> data = torch.randn(N, 16, 10, 10)
>>> conv = nn.Conv2d(16, C, (3, 3))
>>> log_softmax = nn.LogSoftmax(dim=1)
>>> # output of conv forward is of shape [N, C, 8, 8]
>>> output = log_softmax(conv(data))
>>> # each element in target must have 0 <= value < C
>>> target = torch.empty(N, 8, 8, dtype=torch.long).random_(0, C)
>>> # input to NLLLoss is of size N x C x height (8) x width (8)
>>> loss = loss_fn(output, target)
>>> loss.backward()
- __init__(weight: Tensor | None = None, size_average=None, ignore_index: int = -100, reduce=None, reduction: str = 'mean') None[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(input: Tensor, target: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.PoissonNLLLoss(log_input: bool = True, full: bool = False, size_average=None, eps: float = 1e-08, reduce=None, reduction: str = 'mean')[source]
Bases:
_Loss
Negative log likelihood loss with Poisson distribution of target.
The loss can be described as:
\[\begin{align}\begin{aligned}\text{target} \sim \mathrm{Poisson}(\text{input})\\\text{loss}(\text{input}, \text{target}) = \text{input} - \text{target} * \log(\text{input}) + \log(\text{target!})\end{aligned}\end{align}\]
The last term can be omitted or approximated with the Stirling formula. The approximation is used for target values greater than 1. For targets less than or equal to 1, zeros are added to the loss.
- Parameters:
log_input (bool, optional) – if
Truethe loss is computed as \(\exp(\text{input}) - \text{target}*\text{input}\), ifFalsethe loss is \(\text{input} - \text{target}*\log(\text{input}+\text{eps})\).full (bool, optional) –
whether to compute full loss, i. e. to add the Stirling approximation term
\[\text{target}*\log(\text{target}) - \text{target} + 0.5 * \log(2\pi\text{target}).\]size_average (bool, optional) – Deprecated (see
reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the fieldsize_averageis set toFalse, the losses are instead summed for each minibatch. Ignored whenreduceisFalse. Default:Trueeps (float, optional) – Small value to avoid evaluation of \(\log(0)\) when
log_input = False. Default: 1e-8reduce (bool, optional) – Deprecated (see
reduction). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average. WhenreduceisFalse, returns a loss per batch element instead and ignoressize_average. Default:Truereduction (str, optional) – Specifies the reduction to apply to the output:
'none'|'mean'|'sum'.'none': no reduction will be applied,'mean': the sum of the output will be divided by the number of elements in the output,'sum': the output will be summed. Note:size_averageandreduceare in the process of being deprecated, and in the meantime, specifying either of those two args will overridereduction. Default:'mean'
Examples
>>> loss = nn.PoissonNLLLoss()
>>> log_input = torch.randn(5, 2, requires_grad=True)
>>> target = torch.randn(5, 2)
>>> output = loss(log_input, target)
>>> output.backward()
- Shape:
Input: \((*)\), where \(*\) means any number of dimensions.
Target: \((*)\), same shape as the input.
Output: scalar by default. If
reductionis'none', then \((*)\), the same shape as the input.
- __init__(log_input: bool = True, full: bool = False, size_average=None, eps: float = 1e-08, reduce=None, reduction: str = 'mean') None[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(log_input: Tensor, target: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.SoftMarginLoss(size_average=None, reduce=None, reduction: str = 'mean')[source]
Bases:
_Loss
Creates a criterion that optimizes a two-class classification logistic loss between input tensor \(x\) and target tensor \(y\) (containing 1 or -1).
\[\text{loss}(x, y) = \sum_i \frac{\log(1 + \exp(-y[i]*x[i]))}{\text{x.nelement}()}\]- Parameters:
size_average (bool, optional) – Deprecated (see
reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the fieldsize_averageis set toFalse, the losses are instead summed for each minibatch. Ignored whenreduceisFalse. Default:Truereduce (bool, optional) – Deprecated (see
reduction). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average. WhenreduceisFalse, returns a loss per batch element instead and ignoressize_average. Default:Truereduction (str, optional) – Specifies the reduction to apply to the output:
'none'|'mean'|'sum'.'none': no reduction will be applied,'mean': the sum of the output will be divided by the number of elements in the output,'sum': the output will be summed. Note:size_averageandreduceare in the process of being deprecated, and in the meantime, specifying either of those two args will overridereduction. Default:'mean'
- Shape:
Input: \((*)\), where \(*\) means any number of dimensions.
Target: \((*)\), same shape as the input.
Output: scalar. If
reductionis'none', then \((*)\), same shape as input.
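Examples (a minimal sketch mirroring the torch.nn.SoftMarginLoss calling convention):
>>> loss = nn.SoftMarginLoss()
>>> input = torch.randn(4, 3, requires_grad=True)
>>> target = torch.randn(4, 3).sign()   # targets contain 1 or -1
>>> output = loss(input, target)
>>> output.backward()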
- __init__(size_average=None, reduce=None, reduction: str = 'mean') None[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(input: Tensor, target: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.TripletMarginLoss(margin: float = 1.0, p: float = 2.0, eps: float = 1e-06, swap: bool = False, size_average=None, reduce=None, reduction: str = 'mean')[source]
Bases:
_Loss
Creates a criterion that measures the triplet loss given input tensors \(x1\), \(x2\), \(x3\) and a margin with a value greater than \(0\). This is used for measuring a relative similarity between samples. A triplet is composed of a, p and n (i.e., anchor, positive examples and negative examples respectively). The shapes of all input tensors should be \((N, D)\).
The distance swap is described in detail in the paper Learning shallow convolutional feature descriptors with triplet losses by V. Balntas, E. Riba et al.
The loss function for each sample in the mini-batch is:
\[L(a, p, n) = \max \{d(a_i, p_i) - d(a_i, n_i) + {\rm margin}, 0\}\]where
\[d(x_i, y_i) = \left\lVert {\bf x}_i - {\bf y}_i \right\rVert_p\]The norm is calculated using the specified p value and a small constant \(\varepsilon\) is added for numerical stability.
See also
TripletMarginWithDistanceLoss, which computes the triplet margin loss for input tensors using a custom distance function.- Parameters:
margin (float, optional) – Default: \(1\).
p (int, optional) – The norm degree for pairwise distance. Default: \(2\).
eps (float, optional) – Small constant for numerical stability. Default: \(1e-6\).
swap (bool, optional) – The distance swap is described in detail in the paper Learning shallow convolutional feature descriptors with triplet losses by V. Balntas, E. Riba et al. Default:
False.size_average (bool, optional) – Deprecated (see
reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the fieldsize_averageis set toFalse, the losses are instead summed for each minibatch. Ignored whenreduceisFalse. Default:Truereduce (bool, optional) – Deprecated (see
reduction). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average. WhenreduceisFalse, returns a loss per batch element instead and ignoressize_average. Default:Truereduction (str, optional) – Specifies the reduction to apply to the output:
'none'|'mean'|'sum'.'none': no reduction will be applied,'mean': the sum of the output will be divided by the number of elements in the output,'sum': the output will be summed. Note:size_averageandreduceare in the process of being deprecated, and in the meantime, specifying either of those two args will overridereduction. Default:'mean'
- Shape:
Input: \((N, D)\) or \((D)\) where \(D\) is the vector dimension.
Output: A Tensor of shape \((N)\) if
reductionis'none'and input shape is \((N, D)\); a scalar otherwise.
Examples:
>>> triplet_loss = nn.TripletMarginLoss(margin=1.0, p=2, eps=1e-7)
>>> anchor = torch.randn(100, 128, requires_grad=True)
>>> positive = torch.randn(100, 128, requires_grad=True)
>>> negative = torch.randn(100, 128, requires_grad=True)
>>> output = triplet_loss(anchor, positive, negative)
>>> output.backward()
- __init__(margin: float = 1.0, p: float = 2.0, eps: float = 1e-06, swap: bool = False, size_average=None, reduce=None, reduction: str = 'mean')[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(anchor: Tensor, positive: Tensor, negative: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.TripletMarginWithDistanceLoss(*, distance_function: Callable[[Tensor, Tensor], Tensor] | None = None, margin: float = 1.0, swap: bool = False, reduction: str = 'mean')[source]
Bases:
_Loss
Creates a criterion that measures the triplet loss given input tensors \(a\), \(p\), and \(n\) (representing anchor, positive, and negative examples, respectively), and a nonnegative, real-valued function (“distance function”) used to compute the relationship between the anchor and positive example (“positive distance”) and the anchor and negative example (“negative distance”).
The unreduced loss (i.e., with
reductionset to'none') can be described as:\[\ell(a, p, n) = L = \{l_1,\dots,l_N\}^\top, \quad l_i = \max \{d(a_i, p_i) - d(a_i, n_i) + {\rm margin}, 0\}\]where \(N\) is the batch size; \(d\) is a nonnegative, real-valued function quantifying the closeness of two tensors, referred to as the
distance_function; and \(margin\) is a nonnegative margin representing the minimum difference between the positive and negative distances that is required for the loss to be 0. The input tensors have \(N\) elements each and can be of any shape that the distance function can handle.If
reductionis not'none'(default'mean'), then:\[\begin{split}\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{`mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{`sum'.} \end{cases}\end{split}\]See also
TripletMarginLoss, which computes the triplet loss for input tensors using the \(l_p\) distance as the distance function.- Parameters:
distance_function (Callable, optional) – A nonnegative, real-valued function that quantifies the closeness of two tensors. If not specified, nn.PairwiseDistance will be used. Default:
Nonemargin (float, optional) – A nonnegative margin representing the minimum difference between the positive and negative distances required for the loss to be 0. Larger margins penalize cases where the negative examples are not distant enough from the anchors, relative to the positives. Default: \(1\).
swap (bool, optional) – Whether to use the distance swap described in the paper Learning shallow convolutional feature descriptors with triplet losses by V. Balntas, E. Riba et al. If True, and if the positive example is closer to the negative example than the anchor is, swaps the positive example and the anchor in the loss computation. Default:
False.reduction (str, optional) – Specifies the (optional) reduction to apply to the output:
'none'|'mean'|'sum'.'none': no reduction will be applied,'mean': the sum of the output will be divided by the number of elements in the output,'sum': the output will be summed. Default:'mean'
- Shape:
Input: \((N, *)\) where \(*\) represents any number of additional dimensions as supported by the distance function.
Output: A Tensor of shape \((N)\) if
reductionis'none', or a scalar otherwise.
Examples:
>>> # Initialize embeddings
>>> embedding = nn.Embedding(1000, 128)
>>> anchor_ids = torch.randint(0, 1000, (1,))
>>> positive_ids = torch.randint(0, 1000, (1,))
>>> negative_ids = torch.randint(0, 1000, (1,))
>>> anchor = embedding(anchor_ids)
>>> positive = embedding(positive_ids)
>>> negative = embedding(negative_ids)
>>>
>>> # Built-in Distance Function
>>> triplet_loss = \
>>>     nn.TripletMarginWithDistanceLoss(distance_function=nn.PairwiseDistance())
>>> output = triplet_loss(anchor, positive, negative)
>>> output.backward()
>>>
>>> # Custom Distance Function
>>> def l_infinity(x1, x2):
>>>     return torch.max(torch.abs(x1 - x2), dim=1).values
>>>
>>> # xdoctest: +SKIP("FIXME: Would call backwards a second time")
>>> triplet_loss = (
>>>     nn.TripletMarginWithDistanceLoss(distance_function=l_infinity, margin=1.5))
>>> output = triplet_loss(anchor, positive, negative)
>>> output.backward()
>>>
>>> # Custom Distance Function (Lambda)
>>> triplet_loss = (
>>>     nn.TripletMarginWithDistanceLoss(
>>>         distance_function=lambda x, y: 1.0 - F.cosine_similarity(x, y)))
>>> output = triplet_loss(anchor, positive, negative)
>>> output.backward()
- Reference:
V. Balntas, et al.: Learning shallow convolutional feature descriptors with triplet losses: https://bmva-archive.org.uk/bmvc/2016/papers/paper119/index.html
- __init__(*, distance_function: Callable[[Tensor, Tensor], Tensor] | None = None, margin: float = 1.0, swap: bool = False, reduction: str = 'mean')[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(anchor: Tensor, positive: Tensor, negative: Tensor) Tensor[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- class torchium.losses.AdaptiveLogSoftmaxWithLoss(in_features: int, n_classes: int, cutoffs: Sequence[int], div_value: float = 4.0, head_bias: bool = False, device=None, dtype=None)[source]
Bases:
Module
Efficient softmax approximation.
As described in Efficient softmax approximation for GPUs by Edouard Grave, Armand Joulin, Moustapha Cissé, David Grangier, and Hervé Jégou.
Adaptive softmax is an approximate strategy for training models with large output spaces. It is most effective when the label distribution is highly imbalanced, for example in natural language modelling, where the word frequency distribution approximately follows Zipf's law.
Adaptive softmax partitions the labels into several clusters, according to their frequency. These clusters may contain different number of targets each. Additionally, clusters containing less frequent labels assign lower dimensional embeddings to those labels, which speeds up the computation. For each minibatch, only clusters for which at least one target is present are evaluated.
The idea is that the clusters which are accessed frequently (like the first one, containing most frequent labels), should also be cheap to compute – that is, contain a small number of assigned labels.
We highly recommend taking a look at the original paper for more details.
cutoffs should be an ordered Sequence of integers sorted in increasing order. It controls the number of clusters and the partitioning of targets into clusters. For example, setting cutoffs = [10, 100, 1000] means that the first 10 targets will be assigned to the 'head' of the adaptive softmax, targets 11, 12, …, 100 will be assigned to the first cluster, and targets 101, 102, …, 1000 will be assigned to the second cluster, while targets 1001, 1002, …, n_classes - 1 will be assigned to the last, third cluster.
div_value is used to compute the size of each additional cluster, which is given as \(\left\lfloor\frac{\texttt{in\_features}}{\texttt{div\_value}^{idx}}\right\rfloor\), where \(idx\) is the cluster index (with clusters for less frequent words having larger indices, and indices starting from \(1\)).
head_bias, if set to True, adds a bias term to the 'head' of the adaptive softmax. See the paper for details. Set to False in the official implementation.
Warning
Labels passed as inputs to this module should be sorted according to their frequency. This means that the most frequent label should be represented by the index 0, and the least frequent label should be represented by the index n_classes - 1.
Note
This module returns a
NamedTuplewithoutputandlossfields. See further documentation for details.Note
To compute log-probabilities for all classes, the
log_probmethod can be used.- Parameters:
in_features (int) – Number of features in the input tensor
n_classes (int) – Number of classes in the dataset
cutoffs (Sequence) – Cutoffs used to assign targets to their buckets
div_value (float, optional) – value used as an exponent to compute sizes of the clusters. Default: 4.0
head_bias (bool, optional) – If
True, adds a bias term to the ‘head’ of the adaptive softmax. Default:False
- Returns:
output is a Tensor of size
Ncontaining computed target log probabilities for each exampleloss is a Scalar representing the computed negative log likelihood loss
- Return type:
NamedTuplewithoutputandlossfields
- Shape:
input: \((N, \texttt{in\_features})\) or \((\texttt{in\_features})\)
target: \((N)\) or \(()\) where each value satisfies \(0 <= \texttt{target[i]} <= \texttt{n\_classes}\)
output1: \((N)\) or \(()\)
output2:
Scalar
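Examples (a minimal sketch; sizes and cutoffs are illustrative and assume the torch.nn.AdaptiveLogSoftmaxWithLoss behaviour described above):
>>> asm = nn.AdaptiveLogSoftmaxWithLoss(in_features=64, n_classes=1000, cutoffs=[10, 100, 500])
>>> input = torch.randn(8, 64)
>>> target = torch.randint(0, 1000, (8,))
>>> out = asm(input, target)         # NamedTuple with output and loss fields
>>> out.loss.backward()
>>> log_probs = asm.log_prob(input)  # (8, 1000) log-probabilities
>>> predictions = asm.predict(input)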
- __init__(in_features: int, n_classes: int, cutoffs: Sequence[int], div_value: float = 4.0, head_bias: bool = False, device=None, dtype=None) None[source]
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- tail: ModuleList
- forward(input_: Tensor, target_: Tensor) _ASMoutput[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- log_prob(input: Tensor) Tensor[source]
Compute log probabilities for all \(\texttt{n\_classes}\).
- Parameters:
input (Tensor) – a minibatch of examples
- Returns:
log-probabilities of for each class \(c\) in range \(0 <= c <= \texttt{n\_classes}\), where \(\texttt{n\_classes}\) is a parameter passed to
AdaptiveLogSoftmaxWithLossconstructor.
- Shape:
Input: \((N, \texttt{in\_features})\)
Output: \((N, \texttt{n\_classes})\)
- predict(input: Tensor) Tensor[source]
Return the class with the highest probability for each example in the input minibatch.
This is equivalent to
self.log_prob(input).argmax(dim=1), but is more efficient in some cases.- Parameters:
input (Tensor) – a minibatch of examples
- Returns:
a class with the highest probability for each example
- Return type:
output (Tensor)
- Shape:
Input: \((N, \texttt{in\_features})\)
Output: \((N)\)
Usage Examples
Classification Example
import torch
import torch.nn as nn
import torchium
# Binary classification with class imbalance
criterion = torchium.losses.FocalLoss(
alpha=0.25, # Weight for positive class
gamma=2.0, # Focusing parameter
reduction='mean'
)
# Multi-class with label smoothing
criterion = torchium.losses.LabelSmoothingLoss(
num_classes=10,
smoothing=0.1
)
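Both criteria are assumed to follow the (logits, class-index targets) calling convention of nn.CrossEntropyLoss; check each loss's docstring. A minimal forward/backward sketch with placeholder tensors:
logits = torch.randn(32, 10, requires_grad=True)  # stand-in for model(inputs)
labels = torch.randint(0, 10, (32,))
loss = criterion(logits, labels)
loss.backward()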
Segmentation Example
# Dice loss for segmentation
dice_loss = torchium.losses.DiceLoss(smooth=1e-5)
# Combined loss for better performance
criterion = torchium.losses.CombinedSegmentationLoss(
dice_weight=0.5,
focal_weight=0.5
)
# Tversky loss with custom alpha/beta
tversky_loss = torchium.losses.TverskyLoss(
alpha=0.3, # False positive weight
beta=0.7 # False negative weight
)
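# A sketch of applying these to dense predictions; the shapes and the expectation of
# probabilities (rather than raw logits) are assumptions to verify against each loss's docstring.
logits = torch.randn(4, 1, 64, 64, requires_grad=True)   # stand-in for model output
masks = torch.randint(0, 2, (4, 1, 64, 64)).float()      # binary ground-truth masks
loss = dice_loss(torch.sigmoid(logits), masks)
loss.backward()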
Object Detection Example
# GIoU loss for bounding box regression
giou_loss = torchium.losses.GIoULoss()
# Focal loss for classification
focal_loss = torchium.losses.FocalDetectionLoss(
alpha=0.25,
gamma=2.0
)
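# A sketch of the box-regression term; the (x1, y1, x2, y2) corner format is an assumption,
# so confirm the expected box layout in GIoULoss's docstring.
pred_boxes = torch.tensor([[10.0, 10.0, 50.0, 50.0]], requires_grad=True)
gt_boxes = torch.tensor([[12.0, 8.0, 48.0, 52.0]])
box_loss = giou_loss(pred_boxes, gt_boxes)
box_loss.backward()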
Generative Model Example
# GAN loss
gan_loss = torchium.losses.GANLoss()
# VAE loss
vae_loss = torchium.losses.ELBOLoss()
# Diffusion model loss
diffusion_loss = torchium.losses.DDPMLoss()
Metric Learning Example
# Triplet loss for metric learning
triplet_loss = torchium.losses.TripletMetricLoss(margin=0.3)
# ArcFace loss for face recognition
arcface_loss = torchium.losses.ArcFaceMetricLoss(
num_classes=1000,
embedding_size=512,
margin=0.5,
scale=64
)
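# ArcFace-style criteria usually take raw embeddings plus integer class labels and maintain
# their own class-weight matrix; this sketch assumes that calling convention.
embeddings = torch.randn(16, 512, requires_grad=True)
labels = torch.randint(0, 1000, (16,))
loss = arcface_loss(embeddings, labels)
loss.backward()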
Multi-Task Learning Example
# Uncertainty weighting for multi-task
multi_task_loss = torchium.losses.UncertaintyWeightingLoss(
num_tasks=3
)
# Gradient surgery
pcgrad_loss = torchium.losses.PCGradLoss()
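# Uncertainty weighting folds several per-task losses into one scalar with learned task weights;
# whether multi_task_loss (constructed above) accepts a list of scalar losses is an assumption to verify.
task_losses = [
    (torch.randn(8, requires_grad=True) - torch.randn(8)).pow(2).mean()
    for _ in range(3)
]
total = multi_task_loss(task_losses)
total.backward()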
Factory Functions
# Create loss by name
criterion = torchium.create_loss('focal', alpha=0.25, gamma=2.0)
# List all available losses
available = torchium.get_available_losses()
print(f"Available losses: {len(available)}")
Loss Function Comparison
Advanced Usage Patterns
Combined Losses
class CombinedLoss(nn.Module):
def __init__(self):
super().__init__()
self.dice = torchium.losses.DiceLoss()
self.focal = torchium.losses.FocalLoss()
def forward(self, pred, target):
dice_loss = self.dice(pred, target)
focal_loss = self.focal(pred, target)
return 0.6 * dice_loss + 0.4 * focal_loss
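# A sketch of driving the combined criterion; both sub-losses are assumed to accept the same
# dense (pred, target) pair, as the forward above requires. Shapes are illustrative.
criterion = CombinedLoss()
pred = torch.randn(2, 1, 32, 32, requires_grad=True)
target = torch.randint(0, 2, (2, 1, 32, 32)).float()
loss = criterion(pred, target)
loss.backward()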
Weighted Losses
# Class weights for imbalanced datasets
class_weights = torch.tensor([1.0, 2.0, 0.5]) # Weight for each class
criterion = torchium.losses.FocalLoss(
alpha=class_weights,
gamma=2.0
)
Domain-Specific Selection Guide
- For Computer Vision:
Object Detection: GIoU, DIoU, CIoU, EIoU, AlphaIoU
Segmentation: Dice, Tversky, Lovasz, Boundary
Super Resolution: Perceptual, SSIM, MS-SSIM, LPIPS, VGG
Style Transfer: Style, Content, TotalVariation, NeuralStyle, AdaIN
- For Natural Language Processing:
Text Generation: Perplexity, CRF, StructuredPrediction
Evaluation: BLEU, ROUGE, METEOR, BERTScore
Word Embeddings: Word2Vec, GloVe, FastText
- For Generative Models:
GANs: GAN, Wasserstein, Hinge, LeastSquares, Relativist
VAEs: ELBO, BetaVAE, BetaTCVAE, FactorVAE
Diffusion: DDPMLoss, DDIMLoss, ScoreMatching
- For Metric Learning:
Contrastive: Contrastive, Triplet, Quadruplet, NPair
Angular: Angular, ArcFace, CosFace, SphereFace
Proxy-based: ProxyNCA, ProxyAnchor
- For Multi-Task Learning:
Uncertainty: UncertaintyWeighting, MultiTask
Gradient Surgery: PCGrad, GradNorm, CAGrad
Dynamic: DynamicLossBalancing
- For Domain-Specific Tasks:
Medical Imaging: Dice, Tversky (specialized for medical segmentation)
Audio Processing: Spectral, MelSpectrogram
Time Series: DTW, DTWBar