A groundbreaking technology capable of real-time human emotion recognition has been developed by Professor Jiyun Kim and his research team in the Department of Materials Science and Engineering at UNIST. The breakthrough could transform various industries, particularly the evolution of next-generation wearable systems that provide services based on emotional cues.
Because emotions are abstract and ambiguous, understanding and accurately extracting emotional information has long been a challenge. To overcome this hurdle, the research team introduced a multi-modal human emotion recognition system that combines verbal and non-verbal expression data to harness comprehensive emotional information efficiently.
At the heart of this cutting-edge system lies the personalized skin-integrated facial interface (PSiFI) system—a self-powered, facile, stretchable, and transparent technology. Featuring a pioneering bidirectional triboelectric strain and vibration sensor, this system enables the simultaneous sensing and integration of both verbal and non-verbal expression data. The integration with a wireless data transfer-equipped data processing circuit allows real-time emotion recognition.
Harnessing machine learning algorithms, the technology achieves precise, real-time human emotion recognition, even when individuals are wearing masks. The system has been demonstrated in a digital concierge application within a virtual reality (VR) environment.
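The article does not specify which machine-learning method the team used, but the general idea of multi-modal recognition can be sketched simply: features from the two sensing channels (facial-strain and vocal-vibration signals) are fused into one vector and classified against emotion prototypes learned from a few examples. The feature values, emotion labels, and nearest-centroid classifier below are illustrative assumptions, not the published system.

```python
import numpy as np

rng = np.random.default_rng(0)

EMOTIONS = ["happy", "neutral", "angry"]

def synthetic_features(label, n=20):
    # Stand-ins for real sensor features: in the actual system these would
    # come from the strain sensor (facial muscle deformation, non-verbal)
    # and the vibration sensor (vocal cord vibration, verbal).
    center = {"happy": 0.0, "neutral": 2.0, "angry": 4.0}[label]
    strain = rng.normal(center, 0.3, size=(n, 4))
    vibration = rng.normal(center, 0.3, size=(n, 4))
    # "Late fusion" by concatenating the two modalities into one vector.
    return np.hstack([strain, vibration])

# Learn one centroid per emotion from a handful of training samples.
centroids = {e: synthetic_features(e).mean(axis=0) for e in EMOTIONS}

def classify(sample):
    # Nearest-centroid decision over the fused feature vector.
    return min(EMOTIONS, key=lambda e: np.linalg.norm(sample - centroids[e]))

print(classify(synthetic_features("angry", n=1)[0]))  # → angry
```

The nearest-centroid rule is deliberately minimal; it illustrates why fusing both modalities lets a classifier work "with just a few learning steps," as the researchers describe.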
Central to the technology is "friction charging" (the triboelectric effect), in which two surfaces acquire opposite electrical charges through contact and friction. Because the sensor generates its own electrical signal, the system requires no external power source or complex measuring devices for data recognition.
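One consequence of self-powered triboelectric sensing is that the readout can stay simple: each contact-and-separation event produces a voltage spike, so detecting events reduces to counting threshold crossings in the signal. The trace and threshold below are illustrative assumptions, sketching the principle rather than the team's actual readout circuit.

```python
def count_events(voltage_trace, threshold=0.5):
    """Count rising-edge threshold crossings in a sensor voltage trace."""
    events, above = 0, False
    for v in voltage_trace:
        if v >= threshold and not above:
            events += 1  # rising edge: a new friction (contact) event
        above = v >= threshold
    return events

# Synthetic voltage samples with two spikes from two friction events.
trace = [0.0, 0.1, 0.9, 0.8, 0.2, 0.0, 0.7, 0.1]
print(count_events(trace))  # → 2
```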
Professor Kim commented, “Based on these technologies, we have developed a skin-integrated face interface (PSiFI) system that can be customized for individuals.” Employing a semi-curing technique, the team manufactured a transparent conductor for the friction charging electrodes. Additionally, a personalized mask was created using a multi-angle shooting technique, combining flexibility, elasticity, and transparency.
The research team successfully integrated the detection of facial muscle deformation and vocal cord vibrations, enabling real-time emotion recognition. These capabilities were demonstrated in a virtual reality "digital concierge" application that provided customized services based on users' emotions.
Jin Pyo Lee, the first author of the study, highlighted, “With this developed system, it is possible to implement real-time emotion recognition with just a few learning steps and without complex measurement equipment. This opens up possibilities for portable emotion recognition devices and next-generation emotion-based digital platform services in the future.”
The wireless and customizable nature of the system ensures wearability and convenience, making it suitable for a variety of applications. Furthermore, the team applied the system to VR environments, utilizing it as a “digital concierge” for settings such as smart homes, private movie theaters, and smart offices. The system’s capacity to identify individual emotions in different situations facilitates the provision of personalized recommendations for music, movies, and books.
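The concierge concept described above amounts to mapping a recognized emotion to personalized content suggestions. A minimal sketch of that mapping follows; the emotion labels, catalog entries, and fallback behavior are hypothetical examples, not details from the published system.

```python
# Illustrative emotion-to-recommendation lookup for a "digital concierge".
RECOMMENDATIONS = {
    "happy":   {"music": "upbeat pop playlist", "movie": "comedy"},
    "sad":     {"music": "calming ambient playlist", "movie": "feel-good drama"},
    "angry":   {"music": "slow-tempo instrumental", "movie": "nature documentary"},
    "neutral": {"music": "recent favorites", "movie": "trending picks"},
}

def recommend(emotion, category):
    # Fall back to neutral suggestions if the emotion is unrecognized.
    return RECOMMENDATIONS.get(emotion, RECOMMENDATIONS["neutral"])[category]

print(recommend("happy", "music"))  # → upbeat pop playlist
```

In a real deployment the lookup would be replaced by a personalized recommender, but the interface stays the same: emotion in, suggestion out.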
Professor Kim emphasized, “For effective interaction between humans and machines, human-machine interface (HMI) devices must be capable of collecting diverse data types and handling complex integrated information. This study exemplifies the potential of using emotions, which are complex forms of human information, in next-generation wearable systems.” The research was conducted in collaboration with Professor Pooi See Lee of Nanyang Technological University in Singapore and was supported by the National Research Foundation of Korea (NRF) and the Korea Institute of Materials Science (KIMS) under the Ministry of Science and ICT.
By Impact Lab