While visual saliency has been used for various purposes in virtual reality (VR), efforts to properly understand the saliency mechanisms at work in VR remain insufficient. In this paper, we present an extensive comparative analysis of learning-based and heuristic-based approaches to visual saliency prediction in immersive VR experienced through head-mounted displays, with a particular focus on the contribution of the depth cue. To this end, we apply three learning-based and two heuristic-based RGB-D image saliency detection methods to a VR dataset curated from three distinct virtual environments under two-dimensional and three-dimensional viewing conditions. We further extend the analysis with a heuristic-based RGB video saliency detection method and its depth-infused variant. The results obtained with these seven methods reveal the superiority of the learning-based RGB-D image saliency prediction methods in VR and confirm the importance of the depth cue for saliency prediction in virtual environments.