Multimodal Joint Emotion and Game Context Recognition in League of Legends Livestreams

05/31/2019
by Charles Ringer, et al.

Video game streaming provides the viewer with a rich set of audio-visual data, conveying information both about the game itself, through game footage and audio, and about the streamer's emotional state and behaviour, via webcam footage and audio. Analysing player behaviour and discovering correlations with game context is crucial for modelling and understanding important aspects of livestreams, but comes with significant challenges, such as fusing multimodal data captured by different sensors in uncontrolled ('in-the-wild') conditions. Firstly, we present, to our knowledge, the first dataset of League of Legends livestreams annotated for both streamer affect and game context. Secondly, we propose a method that exploits tensor decompositions for high-order fusion of multimodal representations. The proposed method is evaluated on the problem of jointly predicting game context and player affect, and is compared with a set of baseline fusion approaches such as late and early fusion.
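The abstract's "high-order fusion" of modalities can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the feature dimensions, the rank, and all weight matrices below are hypothetical. The idea is that an outer product of (bias-augmented) modality vectors captures every pairwise cross-modal interaction, and a low-rank decomposition of the weight tensor keeps the parameter count manageable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-frame feature vectors for two modalities
# (dimensions chosen for illustration only).
audio = rng.standard_normal(8)   # e.g. streamer-audio embedding
video = rng.standard_normal(6)   # e.g. webcam/game-footage embedding

# Append a constant 1 to each vector so the outer product also
# retains the unimodal features, not only their interactions.
a = np.concatenate([audio, [1.0]])      # shape (9,)
v = np.concatenate([video, [1.0]])      # shape (7,)

# High-order fusion: the outer product holds every pairwise
# audio-video feature interaction.
fused = np.outer(a, v)                  # shape (9, 7)

# A dense weight tensor over `fused` grows multiplicatively with the
# modality dimensions; a rank-R decomposition replaces it with small
# per-modality factor matrices (all weights random here, untrained).
R, out_dim = 4, 3
Wa = rng.standard_normal((R, a.size))   # audio factors
Wv = rng.standard_normal((R, v.size))   # video factors
Wo = rng.standard_normal((out_dim, R))  # mixes the R rank-1 terms

# Low-rank bilinear fusion: project each modality, multiply
# elementwise, then mix. This equals applying the rank-R weight
# tensor T[k,i,j] = sum_r Wo[k,r] * Wa[r,i] * Wv[r,j] to `fused`.
z = Wo @ ((Wa @ a) * (Wv @ v))          # shape (out_dim,)

print(fused.shape, z.shape)
```

In contrast, early fusion would simply concatenate `audio` and `video` before a shared model, and late fusion would combine per-modality predictions, which is why they serve as natural baselines for this kind of tensor-based approach.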

