Representational similarity of actions in the human brain

Urgen B. A., Pehlivan Tort S., Saygin A. P.

6th International Workshop on Pattern Recognition in Neuroimaging, PRNI 2016, Trento, Italy, 22 - 24 June 2016

  • Publication Type: Conference Paper / Full Text
  • DOI Number: 10.1109/prni.2016.7552341
  • City: Trento
  • Country: Italy
  • Keywords: action recognition, neuroimaging, fMRI, computer vision, MVPA, representational similarity analysis, parietal cortex
  • TED University Affiliated: Yes


© 2016 IEEE. Visual processing of actions is supported by a network of brain regions in occipito-temporal, parietal, and premotor cortex in the primate brain, known as the Action Observation Network (AON). What remains unclear are the representational properties of each node of this network. In this study, we investigated the representational content of brain areas in the AON using fMRI, representational similarity analysis (RSA), and modeling. Subjects viewed video clips of three agents performing eight different actions during fMRI scanning. We then computed representational dissimilarity matrices (RDMs) for each brain region and compared them with those of two sets of model representations constructed from computer-vision and semantic attributes. Our findings reveal that different nodes of the AON have distinct representational properties. The posterior superior temporal sulcus (pSTS), the visual area of the AON, represents high-level visual features such as movement kinematics. Representations become more abstract and semantic higher in the AON hierarchy: parietal cortex represents several aspects of actions, including action category, the intention of the action, and the target of the action. These results suggest that during visual processing of actions, pSTS pools information from visual cortex to compute movement kinematics and passes that information on to higher levels of the AON, which code action semantics such as action category, intention, and target, consistent with computational models of visual action recognition.
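The core RSA procedure described in the abstract, computing a brain RDM from voxel patterns and correlating it with a model RDM, can be sketched as follows. This is a minimal illustration with synthetic data, not the authors' actual pipeline: the stimulus count (8 actions × 3 agents = 24 conditions), the 100-voxel region size, and the 10-dimensional model feature space are all hypothetical placeholders.

```python
# Minimal RSA sketch with synthetic data (hypothetical sizes;
# not the authors' actual analysis pipeline).
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical data: 24 stimuli (8 actions x 3 agents), each with a
# response pattern over 100 voxels in one region of interest.
patterns = rng.standard_normal((24, 100))

# Brain RDM: pairwise dissimilarity = 1 - Pearson correlation
# between the voxel patterns of each stimulus pair.
brain_rdm = squareform(pdist(patterns, metric="correlation"))

# Hypothetical model RDM, e.g. built from computer-vision features
# or semantic attribute ratings of the same 24 stimuli.
model_features = rng.standard_normal((24, 10))
model_rdm = squareform(pdist(model_features, metric="correlation"))

# Compare the two RDMs on their upper triangles (excluding the
# diagonal) with Spearman correlation, as is standard in RSA
# because it is rank-based and robust to monotonic rescaling.
tri = np.triu_indices(24, k=1)
rho, p = spearmanr(brain_rdm[tri], model_rdm[tri])
print(f"model-brain RDM correlation: rho={rho:.3f}, p={p:.3f}")
```

In a real analysis, `patterns` would come from GLM beta estimates per stimulus in each AON region, and one model RDM per candidate representation (kinematic, categorical, intentional) would be tested against each region's brain RDM.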