Revisiting Self-Supervised Contrastive Learning for Facial Expression Recognition


Yuxuan Shu   Xiao Gu   Guang-Zhong Yang   Benny Lo

BMVC 2022

Code     Paper


Abstract

The success of most advanced facial expression recognition works relies heavily on large-scale annotated datasets. However, acquiring clean and consistent annotations for facial expression datasets remains a great challenge. On the other hand, self-supervised contrastive learning has gained great popularity due to its simple yet effective instance discrimination training strategy, which can potentially circumvent the annotation issue. However, there remain inherent disadvantages of instance-level discrimination, which become even more challenging when faced with complicated facial representations. In this paper, we revisit the use of self-supervised contrastive learning and explore three core strategies to enforce expression-specific representations and to minimize the interference from other facial attributes, such as identity and face styling. Experimental results show that our proposed method outperforms the current state-of-the-art self-supervised learning methods on both categorical and dimensional facial expression recognition tasks.
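For readers unfamiliar with instance discrimination, below is a minimal sketch of the standard InfoNCE objective that such contrastive methods build on. The function name and the temperature value are illustrative assumptions, not the exact training code of this work.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.07):
    """Standard InfoNCE loss for instance discrimination.

    z1, z2: (N, D) embeddings of two augmented views of the same N instances.
    Each sample's positive is its counterpart in the other view; all other
    samples in the batch serve as negatives.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                      # (N, N) similarities
    targets = torch.arange(z1.size(0), device=z1.device)    # diagonal = positives
    return F.cross_entropy(logits, targets)
```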

Video

Poster


Highlights


We explore three promising strategies for self-supervised learning-based Facial Expression Recognition (FER); a combined sampling sketch follows the list.

·Positives with Same Expression


TimeAug - Apply temporal shifts to generate positive pairs.

FaceSwap - Swap faces to obtain the same expression on different identities at low computational cost.

·Negatives with Same Identity


HardNeg - Sample negatives with a larger time interval to avoid identity-related shortcuts.

·False Negatives Cancellation


MaskFN - Use mouth-eye descriptors to mitigate the effect of false negatives.
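As a rough illustration of how these sampling rules could be wired together, the sketch below pairs frames from a single talking-face clip for contrastive training. The frame-indexing scheme and the `max_shift`, `min_gap`, and `fn_thresh` parameters are illustrative assumptions, not the paper's exact implementation; FaceSwap (transferring the anchor's expression onto another identity) is omitted here.

```python
import random
import numpy as np

def sample_pairs(video_frames, descriptors, max_shift=2, min_gap=30, fn_thresh=0.9):
    """Illustrative sampling following the three strategies above.

    video_frames: list of frames from one identity's video clip (assumed long
                  enough, i.e. len(video_frames) > min_gap).
    descriptors:  per-frame mouth-eye descriptors (1-D arrays).
    """
    # TimeAug: positive = a slightly shifted frame of the same clip, so the
    # pair shares the expression but differs in timing/pose.
    anchor_idx = random.randrange(len(video_frames) - max_shift)
    pos_idx = anchor_idx + random.randint(1, max_shift)

    # HardNeg: negative = a frame of the SAME identity sampled far away in
    # time, so identity cues alone cannot separate positive from negative.
    far_candidates = [i for i in range(len(video_frames))
                      if abs(i - anchor_idx) >= min_gap]
    neg_idx = random.choice(far_candidates)

    # MaskFN: mask out the negative if its mouth-eye descriptor is too close
    # to the anchor's, i.e. it likely shows the same expression (false negative).
    a, n = descriptors[anchor_idx], descriptors[neg_idx]
    sim = float(np.dot(a, n) / (np.linalg.norm(a) * np.linalg.norm(n) + 1e-8))
    keep_negative = sim < fn_thresh

    return anchor_idx, pos_idx, neg_idx, keep_negative
```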

Datasets

We pretrained on the VoxCeleb1 dataset and evaluated on two Facial Expression Recognition datasets, AffectNet and FER2013. We further evaluated on a Face Recognition dataset, LFW. In our experiments, we used the version provided by scikit-learn. The usage can be found here.
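For convenience, a minimal example of loading LFW through scikit-learn is shown below; the choice of split and the `resize`/`min_faces_per_person` values are illustrative, not necessarily those used in our experiments.

```python
from sklearn.datasets import fetch_lfw_pairs, fetch_lfw_people

# Verification pairs (same / different identity), standard 10-fold protocol.
lfw_pairs = fetch_lfw_pairs(subset='10_folds', resize=0.5, color=True)
print(lfw_pairs.pairs.shape)   # (n_pairs, 2, H, W, 3)
print(lfw_pairs.target[:10])   # 1 = same person, 0 = different persons

# Alternatively, individual face images grouped by identity.
lfw_people = fetch_lfw_people(min_faces_per_person=20, resize=0.5)
print(lfw_people.images.shape, len(lfw_people.target_names))
```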

Citation


@inproceedings{shu2022revisiting,
  title={Revisiting Self-Supervised Contrastive Learning for Facial Expression Recognition},
  author={Shu, Yuxuan and Gu, Xiao and Yang, Guang-Zhong and Lo, Benny},
  booktitle={BMVC},
  year={2022}
}

© Yuxuan Shu 2022