© Springer Nature Switzerland AG 2021.

This paper introduces an emotion recognition system for an affectively aware hospital robot for children, together with LabelFace, a data labeling and processing tool for facial expression recognition (FER) employed within the presented system. The tool provides an interface for automatic and manual labeling and for visual information processing in emotion and facial action unit (AU) recognition, assisted by deep learning-based models. It was developed primarily to support the affective intelligence of a socially assistive robot that supports the healthcare of children with hearing impairments. The proposed approach uses multi-label AU detection models for this purpose. To the best of our knowledge, the proposed children's AU detector is the first model targeting 5- to 9-year-old children. The model is trained on well-known posed datasets and tested on a real-world non-posed dataset collected from hearing-impaired children. For benchmarking, our tool LabelFace is compared with a widely used facial expression tool in terms of data processing and data labeling capabilities, and its AU detector models for children perform better on both posed and non-posed test data.
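To illustrate what "multi-label" means in the AU detection setting described above, the sketch below shows the standard decision rule: each AU is an independent binary decision (a sigmoid over a per-AU logit), so several AUs can be active in the same face image, unlike single-label emotion classification. The AU names, logits, and threshold here are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical AU subset for illustration; the paper's exact AU set is not given here.
AU_NAMES = ["AU1", "AU2", "AU4", "AU6", "AU12", "AU25"]

def sigmoid(x):
    # Standard logistic function, applied element-wise.
    return 1.0 / (1.0 + np.exp(-x))

def detect_aus(logits, threshold=0.5):
    """Multi-label AU detection: each AU gets an independent probability,
    and every AU whose probability clears the threshold is reported active."""
    probs = sigmoid(np.asarray(logits, dtype=float))
    return [name for name, p in zip(AU_NAMES, probs) if p >= threshold]

# Example logits, as a deep model might emit for one face image.
print(detect_aus([2.1, -1.3, 0.4, -0.2, 3.0, -2.5]))
# → ['AU1', 'AU4', 'AU12']
```

Training such a head typically uses a per-AU binary cross-entropy loss rather than a softmax, since the AU labels are not mutually exclusive.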