Deep Learning for Classification of Peak Emotions within Virtual Reality Systems
D. Quesnel, S. DiPaola, & B.E. Riecke (2017)
As we begin to understand the value of transformative emotions like awe, we learn that experiencing situations of awe and wonder brings great benefit to our day-to-day lives. Before we can learn how to reliably elicit awe, we must first learn how to record its presence on a moment-to-moment basis. With such an interpersonal and complex emotion, however, it is challenging to rely only on retrospective reporting methods to record its elicitation; we may learn that awe has been felt, but not know what exactly it was about a stimulus or experience that elicited it, unless we ‘catch it in the act’. This is where biosensing technologies come in. With multiple biosensors, we can apply multimodal fusion through artificial intelligence, in this case deep learning, to understand which bodily responses correlate with sudden feelings of awe. This paper describes the adoption of a multimodal fusion technique called ‘Preference Deep Learning’ (PDL), first coined by Martinez et al. (2013), to identify biosignal markers of affect as they pertain to awe and other profound emotions, such as joy and fear. This paper is a direct result of my work in a graduate-level AI course taught by Prof. Steve DiPaola at SFU and, as my first foray into AI, was accepted to SIGGRAPH Asia 2017.
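To make the preference-learning idea concrete, the core of such a setup is a pairwise ranking objective: the network scores two biosignal windows, and training pushes the score of the window that participants ranked as more awe-inducing above the other. The sketch below is only an illustration of that objective in PyTorch; the network shape, window size, and placeholder data are assumptions, not the architecture or data used in the paper.

    import torch
    import torch.nn as nn

    class AffectScorer(nn.Module):
        # Maps a flattened biosignal window (e.g. EDA + heart-rate samples) to a scalar affect score.
        def __init__(self, n_features: int):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_features, 64), nn.ReLU(),
                nn.Linear(64, 32), nn.ReLU(),
                nn.Linear(32, 1),
            )

        def forward(self, x):
            return self.net(x).squeeze(-1)

    scorer = AffectScorer(n_features=128)            # window size is an assumption
    rank_loss = nn.MarginRankingLoss(margin=1.0)     # preferred window should outscore the other

    # Each pair: participants ranked the first window as more awe-inducing than the second.
    preferred = torch.randn(16, 128)                 # placeholder data standing in for real biosignals
    other = torch.randn(16, 128)
    target = torch.ones(16)                          # +1 means "first input should rank higher"

    loss = rank_loss(scorer(preferred), scorer(other), target)
    loss.backward()

Training on ranked pairs rather than absolute labels is what distinguishes a preference-learning objective from ordinary classification: participants only need to say which of two moments felt more awe-inducing, which is an easier judgment to report reliably.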
ABSTRACT
Research has demonstrated well-being benefits from positive, ‘peak’ emotions such as awe and wonder, prompting the HCI community to use affective computing and AI modelling to elicit and measure these target emotional states. The immersive nature of virtual reality (VR) content and systems can lead to feelings of awe and wonder, especially with a responsive, personalized environment driven by biosignals. However, an accurate model is required to differentiate between emotional states that produce similar biosignal input, such as awe and fear. Deep learning may provide a solution, since it can recognize the subtleties of these emotional states from biosignal data viewed as a time series, allowing researchers and designers to understand which features of the system may have influenced the target emotions. The deep learning fusion system proposed in this paper will use a corpus built from physiological biosignals and ranked qualitative data, and will classify these multimodal signals into target outputs of affect. The model will run in real time, enabling evaluation of the VR system features that influence awe and wonder within a bio-responsive environment. Since biosignal data will be collected through wireless, wearable sensors and modelled on the same computer powering the VR system, the approach can be used in field research and studios.
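As a rough illustration of the multimodal fusion the abstract describes, the sketch below encodes each biosignal channel as a time series with its own branch and classifies the fused features into discrete affect targets. The channel choices (EDA and heart rate), window length, layer sizes, and class set are hypothetical stand-ins for whatever sensors and architecture the final system would use.

    import torch
    import torch.nn as nn

    class BiosignalBranch(nn.Module):
        # 1D convolutional encoder for one biosignal channel sampled as a time series.
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
                nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),          # pool over time: one feature vector per window
            )

        def forward(self, x):                     # x: (batch, 1, time)
            return self.encoder(x).squeeze(-1)    # -> (batch, 32)

    class FusionClassifier(nn.Module):
        # Late fusion: concatenate per-channel features, then classify affect.
        def __init__(self, n_classes: int = 3):   # e.g. awe, joy, fear
            super().__init__()
            self.eda_branch = BiosignalBranch()
            self.hr_branch = BiosignalBranch()
            self.head = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, n_classes))

        def forward(self, eda, hr):
            fused = torch.cat([self.eda_branch(eda), self.hr_branch(hr)], dim=1)
            return self.head(fused)               # unnormalized scores over the affect targets

    model = FusionClassifier()
    eda = torch.randn(8, 1, 256)                  # 8 windows of 256 samples each (assumed length)
    hr = torch.randn(8, 1, 256)
    logits = model(eda, hr)                       # shape (8, 3)

Because each window maps to a prediction independently, a model of this kind can be run on successive windows as they stream from the sensors, which is what makes the real-time, bio-responsive use described above plausible on the same computer that powers the VR system.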
Presented at the Artificial Intelligence Meets Virtual and Augmented Worlds (AIVR) at SIGGRAPH Asia 2017, Bangkok, Thailand. Journal paper: International Series for Information Systems and Management in Creative eMedia (CreMedia), International Ambient Media Association (iAMEA), n. 2017/2, ISSN 2341-5576, ISBN 978-952-7023-17-4, 2017, Available at: www.ambientmediaassociation.org/Journal
Full citation: Quesnel, D., DiPaola, S., & Riecke, B. E. (2017). Deep Learning for Classification of Peak Emotions within Virtual Reality Systems. Presented at the 1st Artificial Intelligence Meets Virtual and Augmented Worlds (AIVR), SIGGRAPH Asia 2017, Bangkok, Thailand.