Browsing by Author "Rodrigues, Ana Sofia Figueiredo"
Now showing 1 - 3 of 3
- Classification of facial expressions under partial occlusion for VR games
  Publication. Rodrigues, Ana Sofia Figueiredo; Lopes, Júlio Castro; Lopes, Rui Pedro; Teixeira, Luís F.
  Facial expressions are one of the most common ways to externalize our emotions. However, the same emotion can manifest differently in the same person and across different people. Based on this, we developed a system capable of detecting a person's facial expressions in real time while occluding the eyes (simulating the use of virtual reality glasses). To estimate the position of the eyes, in order to occlude them, Multi-task Cascaded Convolutional Networks (MTCNN) were used. A residual network, a VGG, and the combination of both models were used to classify 7 different types of facial expressions (Angry, Disgust, Fear, Happy, Sad, Surprise, Neutral) on both the occluded and non-occluded datasets. Using the FER-2013 dataset, the combination of both models achieved an accuracy of 64.9% with occlusion and 62.8% without occlusion. The primary goal of this work was to evaluate the influence of occlusion, and the results show that most of the classification relies on the mouth and chin. Nevertheless, the results fall short of the state of the art and are expected to improve, mainly by adjusting the MTCNN.
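  This entry describes estimating eye positions with MTCNN and masking them to simulate VR glasses. The sketch below illustrates one way such an occlusion step could look, assuming the facenet-pytorch implementation of MTCNN; the `pad` margin and the black-rectangle mask are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch: eye-region occlusion from MTCNN landmarks,
# assuming the facenet-pytorch MTCNN (not necessarily the one used in the paper).
from PIL import Image, ImageDraw
from facenet_pytorch import MTCNN

detector = MTCNN(keep_all=False)  # detect a single face per image

def occlude_eyes(img: Image.Image, pad: int = 15) -> Image.Image:
    """Black out a rectangle covering both eyes, simulating VR glasses.

    `pad` (pixels of margin around the eye landmarks) is a hypothetical
    parameter chosen here for illustration.
    """
    # landmarks=True returns 5 points per face: left eye, right eye, nose, mouth corners
    _, _, landmarks = detector.detect(img, landmarks=True)
    if landmarks is None:
        return img  # no face found; return the image unchanged
    left_eye, right_eye = landmarks[0][0], landmarks[0][1]
    xs = [left_eye[0], right_eye[0]]
    ys = [left_eye[1], right_eye[1]]
    box = (min(xs) - pad, min(ys) - pad, max(xs) + pad, max(ys) + pad)
    occluded = img.convert("RGB").copy()
    ImageDraw.Draw(occluded).rectangle(box, fill=(0, 0, 0))
    return occluded
```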
- Eye Importance in Facial Expression Recognition
  Publication. Rodrigues, Ana Sofia Figueiredo; Lopes, Júlio Castro; Lopes, Rui Pedro
  The human face is a powerful tool for nonverbal communication, capable of conveying a wide range of emotions. Previous research has shown that facial expressions contribute significantly to interpersonal communication, indicating that 55% of information is conveyed through facial expression alone. Despite advancements, Facial Expression Recognition (FER) technology faces challenges, particularly in scenarios involving occlusions. Instances during the COVID-19 pandemic and in Virtual Reality (VR) environments highlight these challenges, where mask usage and head-mounted displays obstruct facial features critical for accurate recognition. This paper aims to investigate the importance of the eyes in facial expression recognition by using four models: ResNet-18, VGG-19, EfficientNet-B1, and an Ensemble model. Utilising the FERPlus dataset, scenarios with and without occlusion were examined. In scenarios without occlusion, ResNet-18 emerged as the top-performing model, achieving 86.1% accuracy. However, when occluded by goggles, the Ensemble model demonstrated superior performance with 82.8% accuracy. Furthermore, in the presence of mask occlusion, EfficientNet-B1 exhibited the most robust performance, achieving an accuracy of 71.2%. Despite challenges, the results of this paper reaffirm the enduring importance of the eyes in facial expression recognition, emphasizing their pivotal role in conveying emotions even amidst technological obstacles.
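  This entry compares ResNet-18, VGG-19, EfficientNet-B1 and an Ensemble model, but does not state how the ensemble fuses the individual networks. The sketch below assumes a simple softmax-averaging ensemble over torchvision backbones whose classifier heads are replaced for the FERPlus label set; the 8-class head and the averaging rule are assumptions for illustration only.

```python
# Hedged sketch of a softmax-averaging ensemble; the fusion rule used in
# the paper is not specified, so plain probability averaging is assumed.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 8  # FERPlus is commonly annotated with 8 emotions; assumed here

def build_backbone(name: str) -> nn.Module:
    """Return a torchvision backbone with its head resized for FER classes."""
    if name == "resnet18":
        m = models.resnet18(weights=None)
        m.fc = nn.Linear(m.fc.in_features, NUM_CLASSES)
    elif name == "vgg19":
        m = models.vgg19(weights=None)
        m.classifier[6] = nn.Linear(m.classifier[6].in_features, NUM_CLASSES)
    elif name == "efficientnet_b1":
        m = models.efficientnet_b1(weights=None)
        m.classifier[1] = nn.Linear(m.classifier[1].in_features, NUM_CLASSES)
    else:
        raise ValueError(f"unknown backbone: {name}")
    return m

class Ensemble(nn.Module):
    """Average the softmax outputs of several fine-tuned backbones."""
    def __init__(self, backbones):
        super().__init__()
        self.backbones = nn.ModuleList(backbones)

    def forward(self, x):
        probs = [torch.softmax(m(x), dim=1) for m in self.backbones]
        return torch.stack(probs).mean(dim=0)

# Usage example (untrained weights, for illustration):
# ensemble = Ensemble([build_backbone("resnet18"), build_backbone("vgg19")])
# scores = ensemble(torch.randn(1, 3, 224, 224))
```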
- Facial expression recognition under partial occlusion
  Publication. Rodrigues, Ana Sofia Figueiredo; Lopes, Rui Pedro
  Facial expressions play a crucial role in conveying emotions, accounting for 55% of communication. Although humans naturally perceive these expressions, individual differences can make this recognition complex. Technological advancements seek to automate the identification of facial expressions, thereby improving interactions. Nonetheless, the obstruction of facial features by elements such as hand movements or hair presents substantial obstacles, complicating the precise recognition of expressions. This study investigates the impact of partial occlusion on facial expression recognition, specifically examining how occlusions from masks and Virtual Reality goggles affect model performance on the FERPlus and FERV39K datasets. The results reveal that occlusion reduces the accuracy of all models. Notably, the performance of EfficientNet-B1 drops significantly from 92.9% to 74% when the mouth is obscured, for the happiness class in the FERPlus dataset, while ResNet-18 performs poorest in recognizing fear, plummeting to 30% with eye occlusion. In the FERV39K dataset, occlusion scenarios have a substantial effect on the accuracy of the neutral class: in VGG-19, for example, accuracy decreases sharply from 94.4% to 31.7% in the goggles-occlusion scenario and to 30.4% in the mask-occlusion scenario. However, grouping the labels into three classes enhances overall performance across the three models on both datasets, indicating the effectiveness of the approach in difficult situations. These findings emphasize the significant challenges occlusion poses for emotion recognition systems, highlighting the need for continued research in this field.
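  This entry reports that a three-class grouping improves performance but does not list which emotions fall into which group. The sketch below shows one plausible positive / negative / neutral regrouping of FERPlus-style labels, purely as an assumption for illustration; the actual grouping used in the study may differ.

```python
# Hedged sketch of collapsing fine-grained emotion labels into three groups.
# The mapping below is an assumption, not the grouping reported in the work.
THREE_CLASS_MAP = {
    "happiness": "positive",
    "surprise": "positive",
    "anger": "negative",
    "disgust": "negative",
    "fear": "negative",
    "sadness": "negative",
    "contempt": "negative",
    "neutral": "neutral",
}

def regroup(label: str) -> str:
    """Map an original emotion label onto its assumed three-class group."""
    return THREE_CLASS_MAP[label]

# Usage example:
# regroup("fear")  -> "negative"
```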