Metaverse: A dilemma between comfort, immersion, and value (part 2)

Toward a global human perception model; reducing a trillion-dollar question to a few equations

BRELYON
6 min read · May 24, 2022

In order to craft a lasting and engaging metaverse medium, it is essential to deliver comfort and immersion in a drastically better way than current technologies offer (part 1). This requires defining metrics that put human perception at the center of the comparison space, as opposed to the current format-specific comparisons. This is a very small but important part of a trillion-dollar question: what is the human perception model, and how can we match, cover, or capture the broadest sensory aspect of it? As you can guess, with even a small sliver of the answer to this question, we can dramatically improve user experiences and create unimagined, realistic, and meaningful interconnectedness in the digital world. Facebook figured out how users perceive certain positive feedback from their social peers, and leveraged that insight to increase advertisement engagement and create a trillion-dollar industry. The better we can model human perception, the better we can predict human behavior, and this information can be used to predict, analyze, or measure the success of a certain experience or product for the metaverse. Obviously, there is endless literature on this topic [1], all the way from modeling human behavior with mathematical expressions, to the entire fields of psychophysics [2], psychology [3,4], and, more recently, artificial intelligence and computational psychology [5–7]. We would like to categorize different levels of virtual worlds around human perception constructs as shown below.

Figure 3. Five levels of human-computer evolution envisioned by BRELYON; the metaverse is the entry to the age of telepresence.

To put forward a concrete and fair discussion here, we focus on the most basic level of visual perception, one that does not depend on deeper human understanding to create context for the content being shown. We are reducing a trillion-dollar question to a few numbers, but that is the only way to talk specifically about the display format of the metaverse. The idea of turning human sensory information into a bit rate is not new; it goes back to the 1940s, when Claude Shannon published his theory of communication [8]. Putting the user at the center of this comparison, we define a “user’s visual perceived bit rate (PBR)” and analyze how this rate gets saturated (or not) by the maximum bit rate of the projected content. For instance, if the maximum bit rate of the displayed content is higher than the PBR, an increased immersive factor does not necessarily improve the experience; indeed, the extra load may increase the visual fatigue of the user [9–11]. On the other hand, if the bit rate of the projected content is far below the saturation level of the PBR, the immersion score will also be low. Because so many parameters influence perception and make the discussion overly complex, here we only consider a simplified optics perspective. Our goal is to make sure that we are not wasting any bits and that we are getting the best possible image at the back of both eyes in the most natural, least intrusive fashion.
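As a back-of-the-envelope illustration, the Python sketch below compares the raw bit rate of a few display configurations against an assumed PBR ceiling. Both the 5 Gbit/s ceiling and the display list are hypothetical values chosen for illustration; the point is the saturation logic, not the specific numbers.

```python
def display_bit_rate(h_px, v_px, fps, bits_per_px=24):
    """Raw bit rate a display pushes toward the eyes, in bits per second."""
    return h_px * v_px * fps * bits_per_px

# Hypothetical PBR saturation ceiling -- purely illustrative; the true
# value depends on retinal sampling, eccentricity, and attention.
PBR_SATURATION_BPS = 5e9

displays = {
    "1080p monitor @ 60 Hz": (1920, 1080, 60),
    "4K monitor @ 120 Hz": (3840, 2160, 120),
    "VR headset, 2 x 2160x2160 @ 90 Hz": (2 * 2160, 2160, 90),
}

for name, (h, v, fps) in displays.items():
    rate = display_bit_rate(h, v, fps)
    verdict = "saturates" if rate >= PBR_SATURATION_BPS else "undersupplies"
    print(f"{name}: {rate / 1e9:5.2f} Gbit/s -> {verdict} the assumed PBR")
```

With these assumptions, the 1080p desktop monitor undersupplies the eye, while the 4K and VR configurations push past the assumed ceiling, where extra bits stop paying off and may only add load.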

Another factor that we will consider is the eye’s aberration profile. Scenes projected at longer distances create less distortion and induce fewer aberrations at the back of the eye and, therefore, reduce visual fatigue. This is because the cone of rays from a distant projection is narrower (within the paraxial range of the pupil) and tends to focus images more sharply in areas near the fovea [12–15], whereas the cone of rays coming from images projected closer to the eye tends to be wider and is more affected by the natural imperfections of the human eye’s lens. Let’s define a Global Visual Comfort Score (GVCS) and a Visual Immersiveness Score (VIMS) to be able to compare different types of experiences at the hardware level. What is certain is that these parameters are bounded, as all visual perception and human sensory inputs seem to have a saturation point. We can’t perceive, for example, infinitely distinct colors, infinitely fast frame rates, or infinitely many pixels. There are clear saturation points for each of the human sensory parameters, and, therefore, to increase the perception of presence in metaverse-like experiences, we must focus on parameters that are far from their saturation points and stop increasing those that have already surpassed them [16].
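The narrowing of the ray cone with distance is simple geometry: a point source at distance d subtends a bundle of half-angle arctan(r/d) at a pupil of radius r. Here is a minimal sketch of that relationship, assuming a typical 4 mm photopic pupil and illustrative projection distances:

```python
import math

PUPIL_DIAMETER_MM = 4.0  # typical photopic pupil size; an assumption

def ray_cone_half_angle_deg(object_distance_m, pupil_diameter_mm=PUPIL_DIAMETER_MM):
    """Half-angle of the ray bundle that a point source at the given
    distance subtends at the eye's pupil."""
    pupil_radius_m = pupil_diameter_mm / 1000.0 / 2.0
    return math.degrees(math.atan(pupil_radius_m / object_distance_m))

# Illustrative projection distances, from a near-eye panel to a far wall.
for d in (0.1, 0.3, 1.0, 3.0, 10.0):
    print(f"{d:5.1f} m -> {ray_cone_half_angle_deg(d):6.3f} deg half-angle")
```

At 10 cm the bundle is roughly a hundred times wider than at 10 m, which is why near projections sample far more of the lens’s imperfect periphery.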

Table 1. Nine main visual sensory parameters and their saturation points. The curves toward these saturation points are usually sublinear, such that the closer you get to the saturation point, the smaller the improvement you notice for a given increase. That is, these parameters all have diminishing returns. Parameter values extracted from Ultimate Specifications for AR Displays with Passive Optics [16].
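To make that diminishing-returns shape concrete, here is one way to model it: a saturating exponential that maps a raw display parameter to a perceived score. The curve shape and the ~60 pixels-per-degree resolution ceiling (roughly the 1-arcmin limit of 20/20 acuity) are illustrative assumptions, not fitted psychophysical data.

```python
import math

def perceived_score(value, saturation, steepness=3.0):
    """Map a raw display parameter to a perceived score in [0, 1) with a
    saturating exponential -- an illustrative diminishing-returns curve,
    not a fitted psychophysical model."""
    return 1.0 - math.exp(-steepness * value / saturation)

# Example: angular resolution, with an assumed saturation near
# 60 pixels per degree (about 1 arcmin, i.e. 20/20 acuity).
SATURATION_PPD = 60
for ppd in (15, 30, 45, 60, 90):
    print(f"{ppd:3d} ppd -> perceived score {perceived_score(ppd, SATURATION_PPD):.2f}")
```

Under this toy curve, going from 15 to 30 ppd buys more perceived gain than every pixel added beyond 30 combined, which is exactly the behavior Table 1 describes.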

Now that we have laid out a general, higher-level framework for evaluating the quality of a visual experience, in the next article we will go deeper into a human-centric evaluation of comfort, immersiveness, and access. This will help us understand which parameters matter most for making a virtual experience better. Should we focus on comfort? Is it better to have stereo 3D, to focus on image contrast, or to increase the resolution? All coming up next on Brelyon’s Medium channel.

  1. Newman, A. P. Human-computer perception. (Massachusetts Institute of Technology, 2020).
  2. Hofmann, L. & Palczewski, K. Advances in understanding the molecular basis of the first steps in color vision. Prog Retin Eye Res 49, 46–66 (2015).
  3. Wong, S. H. & Plant, G. T. How to interpret visual fields. Pract Neurol 15, 374 (2015).
  4. Bülthoff, I., Bülthoff, H. & Sinha, P. Top-down influences on stereoscopic depth-perception. Nat Neurosci 1, 254–257 (1998).
  5. Laird, J. E., Lebiere, C. & Rosenbloom, P. S. A Standard Model of the Mind: Toward a Common Computational Framework across Artificial Intelligence, Cognitive Science, Neuroscience, and Robotics. AI Mag 38, 13–26 (2017).
  6. Amit, K. Artificial Intelligence and Soft Computing. (Taylor & Francis, 2000). doi:10.1201/9781315219738.
  7. Lu, Y. Artificial intelligence: a survey on evolution, models, applications and future trends. J Management Anal 6, 1–29 (2019).
  8. Markowsky, G. Information Theory: Physiology. Britannica https://www.britannica.com/science/information-theory/Physiology (n.d.).
  9. Gibaldi, A., Canessa, A. & Sabatini, S. P. The Active Side of Stereopsis: Fixation Strategy and Adaptation to Natural Environments. Sci Rep 7, 44800 (2017).
  10. Gibaldi, A. & Banks, M. S. Binocular Eye Movements Are Adapted to the Natural Environment. J Neurosci 39, 2877–2888 (2019).
  11. Lit, A. Depth-Discrimination Thresholds as a Function of Binocular Differences of Retinal Illuminance at Scotopic and Photopic Levels. J Opt Soc Am 49, 746 (1959).
  12. Wang, J.-M., Liu, C.-L., Luo, Y.-N., Liu, Y.-G. & Hu, B.-J. Statistical virtual eye model based on wavefront aberration. Int J Ophthalmol 5, 620–624 (n.d.).
  13. Carvalho, L. A. Accuracy of Zernike polynomials in characterizing optical aberrations and the corneal surface of the eye. Invest Ophthalmol Vis Sci 46, 1915–1926 (2005).
  14. Carvalho, L. A. V., Castro, J. C. & Carvalho, L. A. V. Measuring higher order optical aberrations of the human eye: techniques and applications. Braz J Med Biol Res 35, 1395–1406 (2002).
  15. Navarro, R., Moreno, E. & Dorronsoro, C. Monochromatic aberrations and point-spread functions of the human eye across the visual field. J Opt Soc Am 15, 2522 (1998).
  16. Heshmat, B., Wei, L. & Tian, M. Ultimate augmented reality displays with passive optics: fundamentals and limitations. in Optical Data Science II, vol. 10937, 1093707 (SPIE, 2019).


BRELYON

We are a team of scientists and entrepreneurs focused on the future of human-computer evolution. Our expertise is in display technology and computer science.