Contrastive Training for Improved Out-of-Distribution Detection

Paper Summary

Paper

TL;DR

Existing techniques

Idea

Method

CLP score

Conclusion

The authors show that representations obtained through contrastive training improve OOD detection performance beyond what is achievable with purely supervised training. The representations are shaped by joint training: the contrastive loss pushes representations apart, even within each class, while the supervised loss clusters them by class (a sketch of this joint objective follows below).
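
The snippet below is a minimal PyTorch sketch of such a joint objective, assuming a SimCLR-style NT-Xent contrastive term added to a standard cross-entropy term. The module names (`encoder`, `classifier`, `proj_head`) and the weighting `lambda_con` are illustrative placeholders, not the paper's implementation.

```python
# Sketch of a joint supervised + contrastive objective (illustrative, not the
# authors' exact code): cross-entropy clusters features by class, while an
# NT-Xent term pushes embeddings of different samples apart.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.1):
    """NT-Xent loss over two augmented views z1, z2 of the same batch."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                      # (2N, d)
    sim = z @ z.t() / temperature                       # pairwise similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))          # exclude self-pairs
    # positives: row i pairs with row i + n (and vice versa)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def joint_loss(encoder, classifier, proj_head, x1, x2, labels, lambda_con=1.0):
    """Supervised cross-entropy plus contrastive NT-Xent on the same batch."""
    h1, h2 = encoder(x1), encoder(x2)                   # backbone features of two views
    ce = F.cross_entropy(classifier(h1), labels)        # clusters features by class
    con = nt_xent_loss(proj_head(h1), proj_head(h2))    # spreads features apart
    return ce + lambda_con * con
```

In this sketch `x1` and `x2` are two augmentations of the same images; `lambda_con` trades off how strongly the contrastive term spreads the representations against the class-clustering pull of the supervised term.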
