Learning Intermediate Features of Object Affordances with a Convolutional Neural Network

02/20/2020
by Aria Yuan Wang, et al.

Our ability to interact with the world around us relies on inferring what actions objects afford – properties often referred to as affordances. The neural mechanisms of object-action associations are realized in the visuomotor pathway, where information about both visual properties and actions is integrated into common representations. However, explicating these mechanisms is particularly challenging in the case of affordances because there is rarely a one-to-one mapping between visual features and inferred actions. To better understand the nature of affordances, we trained a deep convolutional neural network (CNN) to recognize affordances from images and examined the intermediate features it learned, which reflect the underlying dimensionality of affordance representations. Such features form a compositional structure for the general representation of affordances, which can then be tested against human neural data. We view this representational analysis as a first step toward a more formal account of how humans perceive and interact with the environment.
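The abstract does not specify the network architecture or label set, but the setup it describes – a CNN mapping images to inferred affordances – can be sketched as a multi-label classifier, since a single object typically affords several actions at once. The following is a minimal, hypothetical PyTorch sketch; the architecture, the number of affordance labels, and the `AffordanceCNN` name are all illustrative assumptions, not the authors' model.

```python
# Hypothetical sketch of affordance recognition as multi-label image
# classification. The architecture and label count are assumptions;
# the paper does not specify them.
import torch
import torch.nn as nn

NUM_AFFORDANCES = 7  # hypothetical label set, e.g. grasp, push, sit-on, ...

class AffordanceCNN(nn.Module):
    def __init__(self, num_affordances=NUM_AFFORDANCES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool to one 32-dim vector per image
        )
        # The pooled activations play the role of intermediate "affordance
        # features" that could later be compared against neural data.
        self.classifier = nn.Linear(32, num_affordances)

    def forward(self, x):
        feats = self.features(x).flatten(1)
        return self.classifier(feats)  # one logit per affordance

model = AffordanceCNN()
logits = model(torch.randn(1, 3, 64, 64))       # one random RGB image
probs = torch.sigmoid(logits)                    # independent per-action probs
print(tuple(probs.shape))                        # (1, NUM_AFFORDANCES)
```

Because affordances are not mutually exclusive, a sigmoid per label (with a binary cross-entropy loss during training) is the natural choice here rather than a softmax over classes.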
