On Learning Associations of Faces and Voices

ACCV 2018

Changil Kim1, Hijung Valentina Shin2, Tae-Hyun Oh1, Alexandre Kaspar1, Mohamed Elgharib3, Wojciech Matusik1

1MIT CSAIL, 2Adobe Research, 3QCRI

Abstract

In this paper, we study the associations between human faces and voices. Audiovisual integration, specifically the integration of facial and vocal information, is a well-researched area in neuroscience. It has been shown that the overlapping information between the two modalities plays a significant role in perceptual tasks such as speaker identification. Through an online study on a new dataset we created, we confirm previous findings that people can associate unseen faces with corresponding voices, and vice versa, with better-than-chance accuracy. We computationally model the overlapping information between faces and voices and show that the learned cross-modal representation contains enough information to identify matching faces and voices with performance similar to that of humans. Our representation exhibits correlations with certain demographic attributes and with features obtained from either the visual or the aural modality alone. We release the dataset used in our studies: audiovisual recordings of people reading out short text, along with demographic annotations.
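
For concreteness, the sketch below illustrates the general idea of scoring face–voice matches in a shared embedding space. The encoder sizes, input features, embedding dimension, and cosine-similarity scoring are illustrative assumptions for this page only, not the exact architecture or training loss used in the paper; see the reference implementation under Downloads for the actual model.

    # Illustrative sketch of cross-modal face-voice matching in a shared
    # embedding space. All architectural choices below are assumptions,
    # NOT the paper's exact model.
    import tensorflow as tf

    EMBED_DIM = 128  # assumed size of the shared embedding space

    def make_encoder(input_dim, name):
        """Small MLP mapping a modality-specific feature vector to the
        shared embedding space, followed by L2 normalization."""
        return tf.keras.Sequential([
            tf.keras.layers.Dense(256, activation="relu", input_shape=(input_dim,)),
            tf.keras.layers.Dense(EMBED_DIM),
            tf.keras.layers.Lambda(lambda x: tf.math.l2_normalize(x, axis=-1)),
        ], name=name)

    # Hypothetical input features: pooled face descriptors and averaged
    # audio features (e.g. MFCCs); the dimensions are placeholders.
    face_encoder = make_encoder(512, "face_encoder")
    voice_encoder = make_encoder(40, "voice_encoder")

    def matching_score(face_feat, voice_feat):
        """Cosine similarity between the two embeddings; higher means a
        more likely face-voice match."""
        f = face_encoder(face_feat)
        v = voice_encoder(voice_feat)
        return tf.reduce_sum(f * v, axis=-1)

    def pick_matching_face(voice_feat, face_a, face_b):
        """Forced-choice matching: given one voice and two candidate faces,
        return 0 if face A scores higher, 1 if face B does."""
        score_a = matching_score(face_a, voice_feat)
        score_b = matching_score(face_b, voice_feat)
        return tf.cast(score_b > score_a, tf.int32)

Training such encoders (e.g. with a contrastive or classification loss over matched and mismatched face–voice pairs) is what yields the cross-modal representation referred to in the abstract; the released code and checkpoints implement the paper's actual version.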

Downloads

Code

A reference implementation is available under the permissive MIT License. Please cite the paper if you use the software.

Pre-trained models are available as TensorFlow checkpoints.
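
As a quick sanity check after downloading, a checkpoint can be inspected with TensorFlow as sketched below. The checkpoint path is a placeholder; restoring the weights into a model requires the network definition from the reference implementation.

    # Minimal sketch: list the variables stored in a downloaded checkpoint.
    # CKPT_PATH is a placeholder, not the actual released file name.
    import tensorflow as tf

    CKPT_PATH = "path/to/downloaded/checkpoint"

    reader = tf.train.load_checkpoint(CKPT_PATH)
    for var_name, shape in sorted(reader.get_variable_to_shape_map().items()):
        print(var_name, shape)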

Dataset

The following dataset is licensed under Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0). Please cite the paper if you use any part of this dataset.

The demographic annotations of our dataset and of the VoxCeleb test set each consist of five columns.
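
A minimal sketch for reading one of the annotation files is given below. It assumes the annotations are plain comma-separated text; the file name is a placeholder, and the meaning of the five columns follows the documentation distributed with the dataset.

    # Minimal sketch: read a five-column annotation file.
    # The file name is a placeholder; the column semantics are documented
    # with the dataset itself.
    import csv

    ANNOTATION_FILE = "annotations.csv"  # placeholder path

    with open(ANNOTATION_FILE, newline="") as f:
        for row in csv.reader(f):
            assert len(row) == 5, "each record is expected to have five columns"
            print(row)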

BibTeX

@inproceedings{Kim2018on,
  author    = {Changil Kim and Hijung Valentina Shin and Tae-Hyun Oh and Alexandre Kaspar and Mohamed Elgharib and Wojciech Matusik},
  title     = {On Learning Associations of Faces and Voices},
  booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
  year      = {2018},
}

Acknowledgments

This work was funded in part by the QCRI–CSAIL computer science research program. Changil Kim was supported by a Swiss National Science Foundation fellowship (P2EZP2 168785). We thank Sung-Ho Bae for his help with this work.