Deep into visual saliency for immersive VR environments rendered in real-time


Çelikcan U., Askin M. B., Albayrak D., Çapın T. K.

COMPUTERS & GRAPHICS-UK, vol.88, pp.70-82, 2020 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 88
  • Publication Date: 2020
  • DOI: 10.1016/j.cag.2020.03.006
  • Journal Name: COMPUTERS & GRAPHICS-UK
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, PASCAL, Aerospace Database, Applied Science & Technology Source, Communication Abstracts, Compendex, Computer & Applied Sciences, INSPEC, Metadex, Civil Engineering Abstracts
  • Page Numbers: pp.70-82
  • Keywords: Visual saliency, Virtual reality, Stereographics, COMPUTATIONAL MODEL, BOTTOM-UP, ATTENTION, COMPONENTS, MAP
  • TED University Affiliated: Yes

Abstract

As virtual reality (VR) headsets with head-mounted displays (HMDs) become more and more prevalent, new research questions are arising. One of the emergent questions is how best to employ visual saliency prediction in VR applications using the current line of advanced HMDs. Due to the complex nature of the human visual attention mechanism, the problem needs to be investigated from different points of view using different approaches. With such an outlook, this work extends a previous effort that explored a set of well-studied visual saliency cues and the saliency prediction methods that use them, with the aim of assessing how applicable they are for estimating visual saliency in immersive VR environments that are rendered in real-time and experienced with consumer HMDs. To that end, a new user study was conducted with a larger sample, revealing the effects of experiencing dynamic computer-generated scenes at reduced navigation speeds on visual saliency. Using these scenes, which provide varying visual experiences in terms of content and range of depth-of-field, the study also compares VR viewing to 2D desktop viewing with an expanded set of results. The presented evaluation offers the most in-depth view of visual saliency in immersive, real-time rendered VR to date. The analysis encompassing the results of both studies indicates that decreasing navigation speed reduces the contribution of the depth cue to visual saliency and boosts cues based solely on 2D image features. While their scores vary with content, the saliency prediction methods based on boundary connectivity and surroundedness work best in general for the given settings. (C) 2020 Elsevier Ltd. All rights reserved.
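Of the two cues singled out in the abstract's conclusion, boundary connectivity has the more compact formal definition: it measures how strongly a region's geodesic neighborhood touches the image border, following the robust-background-detection formulation of Zhu et al. (CVPR 2014). Since this record gives no implementation details, the Python sketch below is only an illustration of that cue under stated assumptions; the function name, the superpixel count, and both sigma parameters are hypothetical choices, not the paper's method.

```python
# Minimal sketch of a boundary-connectivity background prior for saliency,
# in the spirit of Zhu et al.'s robust background detection (CVPR 2014).
# The function name and the sigma parameters are illustrative assumptions.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra
from skimage.color import rgb2lab
from skimage.segmentation import slic

def boundary_connectivity_bg(image_rgb, n_segments=200,
                             sigma_clr=10.0, sigma_bc=1.0):
    """Per-superpixel background probability from boundary connectivity."""
    labels = slic(image_rgb, n_segments=n_segments, start_label=0)
    lab = rgb2lab(image_rgb)
    n = labels.max() + 1

    # Mean Lab color of each superpixel.
    means = np.array([lab[labels == i].mean(axis=0) for i in range(n)])

    # Undirected adjacency between 4-neighboring superpixels,
    # weighted by the distance between their mean Lab colors.
    pairs = set()
    for a, b in zip(labels[:, :-1].ravel(), labels[:, 1:].ravel()):
        if a != b:
            pairs.add((min(a, b), max(a, b)))
    for a, b in zip(labels[:-1, :].ravel(), labels[1:, :].ravel()):
        if a != b:
            pairs.add((min(a, b), max(a, b)))
    rows, cols, wts = [], [], []
    for a, b in pairs:
        d = np.linalg.norm(means[a] - means[b])
        rows += [a, b]; cols += [b, a]; wts += [d, d]
    graph = csr_matrix((wts, (rows, cols)), shape=(n, n))

    # Geodesic (shortest-path) color distances between all superpixels.
    geo = dijkstra(graph, directed=False)

    # Soft connectivity: the "spanning area" of each superpixel and the
    # portion of that area lying on the image boundary.
    conn = np.exp(-(geo ** 2) / (2.0 * sigma_clr ** 2))
    area = conn.sum(axis=1)
    on_border = np.zeros(n, dtype=bool)
    on_border[np.unique(np.concatenate(
        [labels[0], labels[-1], labels[:, 0], labels[:, -1]]))] = True
    len_bnd = conn[:, on_border].sum(axis=1)

    # Boundary connectivity and the resulting background probability;
    # (1 - prob_bg) serves as a crude per-superpixel saliency score.
    bnd_con = len_bnd / np.sqrt(area)
    prob_bg = 1.0 - np.exp(-(bnd_con ** 2) / (2.0 * sigma_bc ** 2))
    return prob_bg, labels
```

The surroundedness cue evaluated alongside it (as in Boolean map saliency) follows a different, thresholding-based construction and is not sketched here.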