Contrastive learning based facial action unit detection in children with hearing impairment for a socially assistive robot platform


Gurpinar C., Takır Ş., Biçer E., Uluer P., Arica N., Köse H.

Image and Vision Computing, vol.128, 2022 (SCI-Expanded)

  • Publication Type: Article
  • Volume: 128
  • Publication Date: 2022
  • Doi Number: 10.1016/j.imavis.2022.104572
  • Journal Name: Image and Vision Computing
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, Applied Science & Technology Source, Biotechnology Research Abstracts, Computer & Applied Sciences, INSPEC
  • Keywords: Contrastive learning, Facial action unit detection, Child-robot interaction, Transfer learning, Domain adaptation, Covariate shift
  • Galatasaray University Affiliated: Yes

Abstract

© 2022 Elsevier B.V. This paper presents a contrastive learning-based facial action unit detection system for children with hearing impairments, to be used on a socially assistive humanoid robot platform. The spontaneous facial data of children with hearing impairments was collected during an interaction study with a Pepper humanoid robot and a tablet-based game. Since the collected dataset consists of a limited number of instances, a novel domain adaptation extension is applied to improve facial action unit detection performance, using well-known labelled datasets of adults and children. Furthermore, since facial action unit detection is a multi-label classification problem, a new smoothing parameter, β, is introduced to adjust the contribution of similar samples to the contrastive learning loss function. The results show that the domain adaptation approach using children's data (CAFE) performs better than using adults' data (DISFA). In addition, using the smoothing parameter β leads to a significant improvement in recognition performance.
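
The abstract does not give the exact form of the β-smoothed contrastive objective. The sketch below is one plausible PyTorch reading, in which β scales how much partially overlapping multi-label action unit annotations count as positive pairs. The Jaccard-style label overlap, the multilabel_contrastive_loss name, and the precise role of β are illustrative assumptions, not the authors' implementation.

    # Minimal PyTorch sketch (assumed, not the authors' code) of a supervised
    # contrastive loss whose positive weights are smoothed by a parameter beta
    # according to multi-label (action unit) overlap between samples.
    import torch
    import torch.nn.functional as F

    def multilabel_contrastive_loss(embeddings, labels, temperature=0.1, beta=1.0):
        """embeddings: (N, D) features; labels: (N, K) binary AU labels."""
        z = F.normalize(embeddings, dim=1)
        sim = z @ z.t() / temperature                                  # pairwise cosine similarities
        logits = sim - sim.max(dim=1, keepdim=True).values.detach()   # numerical stability

        # Label agreement between samples: Jaccard overlap of their AU label sets.
        labels = labels.float()
        inter = labels @ labels.t()
        union = labels.sum(1, keepdim=True) + labels.sum(1) - inter
        overlap = inter / union.clamp(min=1e-8)

        # beta smooths how much a partial label match counts as a positive:
        # large beta keeps only near-identical label sets, small beta treats
        # any overlap as close to a full positive (illustrative interpretation).
        pos_weight = overlap.pow(beta)

        n = z.size(0)
        self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
        pos_weight = pos_weight.masked_fill(self_mask, 0.0)

        exp_logits = torch.exp(logits).masked_fill(self_mask, 0.0)
        log_prob = logits - torch.log(exp_logits.sum(dim=1, keepdim=True) + 1e-8)

        denom = pos_weight.sum(dim=1).clamp(min=1e-8)
        loss = -(pos_weight * log_prob).sum(dim=1) / denom
        return loss.mean()

Under this reading, β = 1 weights each pair by its raw label overlap, while larger values concentrate the positive set on samples whose action unit labels agree almost exactly, which is one way a smoothing parameter could temper the contribution of merely similar samples in a multi-label setting.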