EmoPAtt-Lite is a lightweight facial emotion recognition architecture designed to achieve high accuracy at minimal computational cost, making it suitable for deployment on resource-constrained devices (Ben seddik & Adelekan, 2025).
Figure: Overview of the EmoPAtt-Lite architecture, combining a truncated MobileNetV1 backbone with spatial transformation, channel-wise attention, and an attention-based classifier.
Facial expressions are a fundamental component of human communication, conveying a wide range of emotions. However, automatic facial expression recognition (FER) in the wild remains challenging, particularly under adverse conditions such as pose variation, illumination changes, and occlusion. Recent advances in computer vision have demonstrated the effectiveness of deep neural networks for FER, but their deployment is often constrained by the need for substantial computational resources. To address this, we propose EmoPAtt-Lite, a compact FER model that modifies MobileNetV1 by integrating spatial adaptation and channel-aware recalibration modules. Unlike prior patch-attention methods, our model emphasizes spatial alignment (via a Spatial Transformer Network) and channel weighting (via a Squeeze-and-Excitation block) to enhance lightweight FER performance. Despite its compact size of only 1.3M parameters, EmoPAtt-Lite achieves state-of-the-art performance on the FER2013 benchmark, reaching an accuracy of 79.35% and demonstrating that high recognition accuracy can be attained without heavy computational demands.
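The channel-wise recalibration mentioned above can be illustrated with a minimal NumPy sketch of a Squeeze-and-Excitation step: global-average-pool each channel, pass the result through a small bottleneck MLP with a sigmoid gate, and rescale the channels. The feature-map shape, weight initialization, and reduction ratio (r = 4) here are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def squeeze_excite(feature_map, w1, w2):
    """Sketch of SE-style channel recalibration.

    feature_map: (H, W, C) activations from the backbone.
    w1: (C, C//r) bottleneck weights; w2: (C//r, C) expansion weights.
    (Illustrative shapes; the real block's sizes are not specified here.)
    """
    # Squeeze: global average pooling per channel -> vector of length C
    z = feature_map.mean(axis=(0, 1))
    # Excitation: bottleneck MLP, ReLU then sigmoid gating in (0, 1)
    s = np.maximum(z @ w1, 0.0)
    gate = 1.0 / (1.0 + np.exp(-(s @ w2)))
    # Recalibrate: scale each channel by its learned importance
    return feature_map * gate

# Toy usage with assumed sizes: 7x7 spatial grid, 16 channels, r = 4
rng = np.random.default_rng(0)
x = rng.standard_normal((7, 7, 16))
w1 = rng.standard_normal((16, 4)) * 0.1
w2 = rng.standard_normal((4, 16)) * 0.1
y = squeeze_excite(x, w1, w2)
```

In a trained network, w1 and w2 are learned, so informative channels receive gates near 1 and uninformative ones are suppressed, at negligible parameter cost relative to the backbone.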
@inproceedings{benseddik2025emopattlite,
  title     = {EmoPAtt-Lite: Lightweight Facial Emotion Recognition},
  author    = {{Ben seddik}, Ismail and Adelekan, Adebowale Emmanuel},
  booktitle = {International Conference on Information Technology and Applications (ICITA)},
  series    = {Lecture Notes in Networks and Systems},
  year      = {2025},
  publisher = {Springer}
}