Metaverse: A dilemma between comfort, immersion, and value (part 3)

Demystifying the Bermuda triangle of display tech startups: a human-centric evaluation of comfort, immersiveness, and access

In the first two articles of this series (Part One: Mechanics behind Metaverse and Part Two: Modeling human behavior and vision), we discussed the mechanics behind the metaverse and the key role that modeling human behavior and vision plays in creating a sense of presence. Since that last article, we have seen Mark Zuckerberg and the Meta team allude to a rather similar concept, with many common parameters on the visual-perception side. Mark's so-called visual Turing test is fair and rooted in sound perception science, but it puts all the weight on visual performance and does not capture the bigger picture of when a new display format could replace existing ones. In this article, we want to dive a bit deeper into these metrics and dynamics. We propose definitions for a Global Visual Comfort Score (GVCS) and a Visual Immersiveness Score (VIMS), considering the most important factors (both objective and subjective) contributing to visual fatigue and the perception of immersiveness. To compare and evaluate these contributing factors across different display formats, we break each factor into tiers. We also consider accessibility factors in terms of market penetration and user adoption. Table 2 lists the factors used in the evaluation of the GVCS, VIMS, and accessibility scores.

Table 2. Factors considered in the evaluation of a GVCS, VIMS, and accessibility scores.

Before we talk about these parameters, we note that there is a cognitive aspect to each one of them. For instance, if you look at a flashing screen, you will get more tired than you would looking at a dark screen. As another example, if your VR headset constantly pushes your vergence back and forth, it will obviously place a heavy cognitive load on your brain. But here we consider general, all-encompassing content, not such content-dependent edge cases.

3.1. Breaking Down Global Visual Comfort Score (GVCS)

Unlike hearing, vision involves many perceptual parameters and muscle groups, all of which contribute to a total, or Global, Visual Comfort Score; here is a fun, simplified illustration.

Accommodation is the ability of our eyes to focus at different depths. And as you can guess, the human eye has not evolved to focus on a flat surface located 70 cm from the face, 8 hours a day, every day. The human eye needs dynamic variation of accommodation, looking both at the horizon and, occasionally, at close objects, in order to exercise its ciliary muscles [34]. The absence of this dynamic accommodation is, unfortunately, a reason why many children are developing myopia at an early age [35]. To reduce accommodation stress and relax the eyes' ciliary muscles, it is optimal to look at images at farther distances (2 meters and beyond) with occasional variations of accommodation. It is noteworthy that too much accommodation variation can also exhaust the eyes very quickly. This happens when you sit very close to a very large flat monitor: the constant back-and-forth between the (closer) center and the (farther) edges of the display forces those eye muscles to refocus continuously, like watching a tennis match. This is why a curved monitor feels more comfortable than a large flat monitor [36], a fact the display industry now leverages as a marketing feature, with the R1000 radius (1 meter radius of curvature) advertised as the best curvature for a display [37]. As we have shown in our previous academic publications, the optimal curvature is also distance dependent [16].

Considerations for human eye vergence are similar to those for accommodation. Vergence is the coordinated movement of both eyes to look at the same object. Although small saccade movements are natural within a small field-of-view, the extraocular rectus muscle groups become exhausted during large, continuous scans over an extended period of time. This means that if you are playing a VR game where zombies attack from your left and right in your peripheral vision, you are going to exhaust those muscles pretty fast and create eye fatigue. Conversely, fixating the eyes within a very small field-of-view (as with a cell phone) results in minimal saccade movements; doing this for an extended period, day in and day out, is associated with faster macular degeneration [38]. For lateral and vertical eye movement, a large field-of-view, with most content in the foveal region and occasional movements into the peripheral zones, is the ideal scenario.

Vergence, moreover, also varies with depth. Here, you obviously do not want content that is too close to your face (closer than 70 cm is too close). This is one of the reasons your cell phone screen is more tiring than your laptop screen, your laptop screen more tiring than your monitor, and your monitor more tiring than your TV. 3D displays should not constantly force your eye muscles to oscillate rapidly, with 3D objects flying toward your face and receding to the horizon. Depth transitions have to be done in moderation to keep the cognitive load low. Generally speaking, based on research published in the journal Applied Sciences, 2D content requires less cognitive load than 3D content [39]. However, a large field-of-view that is close to your face will, again, force your depth-related vergence to shift back and forth from the center to the edges. A curved screen again helps reduce this depth-related vergence stress when scanning from left to right.

Brightness and hue stress. This metric considers the stress from the intensity of light in different hues. For example, retinal comfort is affected by factors such as blue-light intensity, a very bright screen with sudden jumps in brightness level, and the brightness stress you feel at night when looking at a bright phone screen. That discomfort comes from your rods and cones screaming that the light is too focused and too bright for your pupil size. It is true that HDR can represent the real world much better and create a sense of presence, but if it is not done in moderation, and brightness transitions are not presented to the eye gradually, the pupil simply has no time to adjust, and that is not good news for the retina. The environments, pacing, and brightness transitions of virtual worlds are not the same as in the natural world; imagine how many bright flashes of light a first-person shooter can present to a gamer. Periodic exposure to such sudden brightness flashes is almost a dialed-down version of what a welding operator experiences. There is no doubt that the higher and faster the brightness transitions, the more damage is done to the retina, similar to the hearing loss induced by long use of earbuds.

Convergence and cohesion. When the pixels are blurry or too close to your face, or the parallax (the shift in the image between the left and right eyes) does not match your expectation, you are putting more work on your eyes and brain, and it all translates to more saccade triggers, more ciliary muscle activation, and, therefore, more visual fatigue. For example, one of the challenges with headsets that use two separate displays (one per eye) is that, at lower resolutions, the pixels of an object have a slight shift in one display compared to the other. Typically, the human brain considers those two pixels to correspond to the same point of a 3D object. However, if those pixels are too far off (beyond Panum's fusional range), your subconscious brain wrestles with whether this is really one point, two points, or just a hazy point. This all translates to increased cognitive load and more fatigue.

Circle of confusion. This is the area created at the back of your eye when imaging a point source or a single pixel. This factor determines the depth-of-field and reflects the distortions and aberrations that the optics of your eye imprint on the image on the retina. The human eye is a marvelous camera, but it does not come with the best optics or the best sensor; a lot of the work is actually done in post-processing by your brain. At shorter distances, the eye's lens is more curved and has to bend the light more, which produces more aberration in the image than when viewing objects far away. This, again, means more post-processing work for your brain, and it might be one reason people prefer the TV to be far away, even though the trade-off is a reduced field-of-view and less immersion. This factor becomes more pronounced for people over 40, as they start to develop presbyopia.

Depth-of-field accuracy. This parameter concerns the distance between the nearest and the farthest objects that can be acceptably focused in an image or seen at a certain depth. When you look at things at a close distance, objects located farther away are expected to blur in a certain way. If that does not happen, cognitive load increases. This is a challenge for all-in-focus near-eye displays, as well as super multi-view light-field near-eye displays. In the former case, images that should be blurred are still sharp and, therefore, confuse accommodation depth cues. In the latter case, images that should have a soft blur end up with a patterned or textured blur that drives unwanted accommodation.

Generally speaking, whenever we deviate from the natural way we see and perceive the surrounding world, we are putting more cognitive load on the brain and eye. Here, we want to mention two optical factors that are somewhat independent of the content.

Aberrations and distortions. Aberrations refer to a set of effects that measure how an optical design spreads out light rather than focusing it to a point. Aberrations that affect comfort include astigmatism, coma, field curvature, and horopter and diopter errors. Distortions refer to a set of effects that measure the deviation of an image from a rectilinear projection wrapped around the user. Some of the most common distortions are perspective distortion, barrel distortion, pincushion distortion, and chromatic distortion.

Accommodation-vergence mismatch. The mismatch between depth cues from vergence and from accommodation is one of the main reasons why you get a headache with VR headsets. The effect accounts for the main difference between stereoscopic and natural viewing conditions, and it is one of the most prominent factors in visual fatigue [40-42].

Full body pose stress. This parameter concerns the limitations placed on the user to remain in a certain position or posture for an extended time. For example, flat displays often require the user to be seated at a certain distance and angle, whereas headsets allow the user to move more freely. In general, we can assume that a device that allows the user to move freely is more ergonomically beneficial than a device that requires a specific pose. For this parameter, we can consider five levels of stress:

  • Standing level: The user is required to be standing up.
  • Sitting level: The user is required to be seated.
  • Resting level: The user can stand, sit, or lie down.
  • Localized pose: The user can sit, stand, or lie down but needs to be in some limited region.
  • No limits on location or pose: The user can move freely and be in any pose while using the device.

Shoulder stress. This parameter evaluates the impact of using a device on the shoulders and arms. For instance, devices that need to be held or worn have a clear and direct impact on shoulder stress due to their size and weight. We can consider five levels of shoulder stress:

  • Weight bearing constant: The weight on the shoulders is fixed and constant. This is very tiring, like holding tools, wearing a backpack, or holding your hands in the air with no support.
  • Weight bearing dynamic: The weight on the shoulders changes depending on the movements and pose, as well as the presence of external support. This is like holding a tablet or a joystick.
  • Arms rested but extended: This is like typing on a keyboard.
  • Arms rested close to legs but localized: This is like the user having their hands in their lap or pockets.
  • Fully rested, no forced posture: Arms can be placed freely, and there is no extra weight on the shoulders.

Head and neck stress. This parameter is similar to shoulder stress but focuses on the impact of using the device on the head and neck. The stress is not necessarily caused by having to wear a device; it is mainly due to maintaining a certain head position, angle, and distance while using the device. We can consider five levels of head and neck stress:

  • Localized gaze-fixed head and neck posture with forward pull or push: The user is required to keep their gaze and head fixed. This is like sitting in the first row of a movie theater or looking into a microscope viewer.
  • Head bearing torque with periodic forced neck movement: The user is required to move the head periodically while wearing a device, for example, to acknowledge or execute certain commands.
  • Head bearing no weight but forced periodic neck movements: The user is required to move the head periodically without wearing any device. The movements could be used to acknowledge or execute certain commands. Financial traders usually complain about this type of stress.
  • Head bearing minimal weight: The user does not carry significant weight on the head but is still required to execute certain neck movements. This is the case for ideal AR glasses, for example.
  • No stress and occasional neck movements: The user is not wearing any weight and is not required to perform any neck movement.

Intrusiveness. This parameter combines objective and subjective factors affecting comfort in using a device. Whether or not the user must wear the device, or certain peripherals, has a direct impact on how intrusive the device feels. For example, some devices, such as flat displays or cell phones, do not require the user to wear anything. Others, such as headsets and glasses, must be worn to experience the content.

Localization. This parameter accounts for the zone in which the device can be used or seen. Larger screens usually allow larger movements and larger viewable zones, whereas a cell phone screen is viewable only in a very tight area.

  • Eye boxed: The device needs to be placed close to the eyes (such as goggles or contact lenses; boxed zone < 10 cm³).
  • Head boxed: The device needs to be close to the head, such as cell phones (boxed zone < 10⁴ cm³).
  • Seat bounded: The user needs to be seated to use the device, such as monitors (boxed zone < 10⁶ cm³).
  • Zone bounded: The user needs to be in a limited area to use the device but can move freely within that zone (boxed zone < 10 m³).
  • No bound: The user can move freely and use the device anywhere.

Weight and torque. In the case of devices that need to be worn, this parameter evaluates how heavy they feel on the head and how that weight translates into torque on the neck muscles and arms. For instance, we can consider these weight and torque tiers:

  • Helmet tier: more than 500 g.
  • Brick tier: 150 g to 500 g.
  • Goggle tier: 50 g to 150 g.
  • Glasses tier: 1 g to 50 g.
  • Naked tier: not required to be worn at all.
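As a rough sketch, the weight tiers above can be encoded as a small lookup. The gram boundaries come from the list; the function name and scoring are our own illustration, not part of the framework itself.

```python
def weight_tier(weight_g=None):
    """Map a worn device's weight in grams to the tiers listed above.

    Passing None means the device does not need to be worn (naked tier).
    Tier boundaries follow the article; the function itself is illustrative.
    """
    if weight_g is None:
        return "naked"
    if weight_g > 500:
        return "helmet"
    if weight_g > 150:
        return "brick"
    if weight_g > 50:
        return "goggle"
    return "glasses"

# Example: a ~470 g VR headset lands in the brick tier.
print(weight_tier(470))   # brick
print(weight_tier(None))  # naked
```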

Contact intrusion. This parameter concerns the contact of the device with the body surface. It measures how easy it is for the user to avoid feeling sweaty or hot due to wearing the device or having heat-generating elements near the body. We can consider five tiers:

  • Pierced: like implanted medical electrodes
  • Hard contact: like wearing contact lenses, or wearing goggles with no rubber framing. Not fun!
  • Direct contact with no ventilation: like headsets or some helmets
  • Soft contact with ventilation: like your clothing, or having a phone in your hand or a laptop on your lap. It could still get hot but it has air ventilating around it.
  • Non-contact: the device is away from your body
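To see how these comfort factors could roll up into a single GVCS, here is a minimal sketch that takes per-factor tier scores (0 for the worst tier, 4 for the best) and computes a weighted average. The factor names follow this section, but the equal weights and the example scores for a desktop monitor are placeholder assumptions, not measured values.

```python
# Illustrative only: aggregate per-factor tier scores (0 = worst tier,
# 4 = best tier) into a single 0-100 comfort score. Equal weights are a
# placeholder assumption; a real model would fit them to user studies.
def gvcs(tier_scores, weights=None):
    """Weighted average of tier scores, normalized to a 0-100 scale."""
    if weights is None:
        weights = {k: 1.0 for k in tier_scores}
    total_w = sum(weights[k] for k in tier_scores)
    raw = sum(tier_scores[k] * weights[k] for k in tier_scores) / total_w
    return 100.0 * raw / 4.0  # 4 is the index of the best tier

# Hypothetical tier scores for a desktop monitor:
desktop_monitor = {
    "full_body_pose": 1,   # sitting level
    "shoulder": 2,         # arms rested but extended
    "head_neck": 3,        # minimal weight, some forced posture
    "localization": 2,     # seat bounded
    "weight_torque": 4,    # naked tier (nothing worn)
    "contact": 4,          # non-contact
}
print(round(gvcs(desktop_monitor), 1))  # 66.7
```

Swapping in different weights per factor is how a population-specific or market-specific GVCS could be produced from the same tier data.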

3.2. Visual Immersiveness Score (VIMS): mapping numbers to feelings

Output bit-rate consists of a set of parameters that characterize the performance of a device in projecting bits of information over time (bandwidth). For a visually rich experience, the output bandwidth, or bit-rate, should be high, but higher does not necessarily mean better. The parameters considered include field-of-view span, intensity span, color depth, frame rate, spatial resolution, and depth resolution. However, given that this factor considers a set of parameters that need to work coherently as a whole, the bit-rate number alone does not tell the whole story. Furthermore, research shows that the human eye saturates in all of the parameters contributing to the bit-rate and, therefore, increasing performance hits a physiological limit at some point.

Field-of-view span. This parameter evaluates the range of solid angles into which the device can project content. The human visual field spans approximately 120° both horizontally and vertically [43]. Therefore, projecting beyond this range will not increase the immersive experience or visual comfort.

Intensity span. This parameter measures the number of light intensity levels that a display is capable of projecting. It is often measured in bits. For example, a commonly used intensity span is 8 bits, which is capable of displaying 256 levels.

Color depth. This parameter measures the bit depth of the colors projected in the content. Color depth is measured in bits of intensity for each of the three color channels: red, green, and blue (RGB). It is often expressed as bits-per-pixel (bpp). Common color depths are 24-bit (true color), 30-bit (deep color), and, most recently, 48-bit color palettes [44,45].
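A quick sanity check of the palettes quoted above: 24-, 30-, and 48-bit color correspond to 8, 10, and 16 bits per channel across the three RGB channels.

```python
# Bits-per-pixel is three channels times the per-channel intensity span
# in bits; the number of displayable colors is 2 ** bpp.
for bits_per_channel in (8, 10, 16):
    bpp = 3 * bits_per_channel
    levels = 2 ** bits_per_channel
    colors = levels ** 3           # equivalently 2 ** bpp
    print(f"{bpp}-bit color: {levels} levels/channel, {colors:,} colors")
```

For 24-bit true color this gives 256 levels per channel and about 16.8 million colors.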

Frame rate. The refresh rate measures the frequency at which images are refreshed. The standard rate is 60 frames per second (fps); in gaming applications it can exceed 120 fps.

Spatial resolution. Resolution measures the pixel density of the screen, often expressed in pixels per inch (PPI). Some of the most common vertical resolutions are 1080, 2160, and 4320 pixels (i.e., 1080p, 4K, and 8K) [46].

Depth resolution. Depth resolution measures the number of monocular and stereoscopic depth layers shown by the display. This parameter depends on whether the projected content carries monocular or stereoscopic information, or both [16,47,48]. Depth resolution can be expressed as a number of monocular depth layers and/or angular channels. The bandwidth required to show all the angles and all the depth layers can add up pretty quickly to Tb/s of information. Therefore, it is important to compress the depth information and figure out how it can be delivered to the user effectively.
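As a back-of-the-envelope check on the Tb/s claim, here is the raw, uncompressed bit-rate for a hypothetical light-field display. The 4K, 60 fps, 24 bpp, 100-view numbers are illustrative assumptions, not a specific device.

```python
# Raw output bandwidth = pixels per view * bit depth * frame rate * views.
width, height = 3840, 2160   # 4K spatial resolution per view (assumed)
bpp = 24                     # true-color bit depth (assumed)
fps = 60                     # frame rate (assumed)
views = 100                  # angular/depth channels (assumed)

bits_per_second = width * height * bpp * fps * views
print(f"{bits_per_second / 1e12:.2f} Tb/s uncompressed")  # 1.19 Tb/s
```

Even a single uncompressed 4K/60 view is already ~12 Gb/s, so a modest number of angular channels pushes the total into the Tb/s range, which is why compression of the depth information matters.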

As amazing as the human eye is, it has biological limitations that cause performance to saturate beyond certain thresholds. As a result, the response functions of all the parameters mentioned above follow the shape of a sigmoid [49]. For an increasing sigmoid, performance rises slowly at first, then rapidly up to an inflection point, after which it grows sublinearly until saturation is reached. The complementary, decreasing sigmoid behaves the same way in reverse: a slow decrease, followed by a rapid drop through an inflection point, after which the response levels off toward zero.

For example, Figure 4 shows the decreasing sigmoid behavior of perceived modulation contrast: sensitivity drops to zero near 65 Hz for modulated light that is spatially uniform, but it does not start to drop until around 500 Hz when colored light with spatial edges is used [50].

Figure 4. Left: Modulation contrast response for spatially uniform light. Right: Modulation contrast response for colored light with spatial edges. Bottom: Visual acuity versus background luminance, in which we can see that the cone response saturates at high luminances, whereas rods dominate at low luminances, again resulting in an overall sigmoid-shaped response.

In what follows, we use this sigmoid function to model these performance parameters and seek to maximize them. You can skip the rest of this section to avoid the mathematical details and continue the discussion in Section 3.3.

In general, we can model the sigmoid behavior with the following function [51]:

S(x) = a + \frac{b - a}{1 + e^{-k(x - x_0)}} \qquad [1]

where a is the absolute minimum of the function, b is the absolute maximum of the function, x0 is the location of the inflection point, and k is related to the curve’s steepness near the inflection region (Figure 5). If we know the range of the function (i.e., ymin and ymax), we can always transform S into a function bounded between 0 and 1 as follows:

\hat{S}(x) = \frac{S(x) - y_{\min}}{y_{\max} - y_{\min}} \qquad [2]

Notice that the derivative is maximum when x = x0, that is, at the inflection point.

Figure 5. Sigmoid function with all the fitting parameters.
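For readers who prefer code, here is a small numerical sketch of the sigmoid model described above, using the standard generalized-logistic form that matches the parameter descriptions (a as the minimum, b as the maximum, x0 as the inflection point, k as the steepness). It also verifies numerically that the derivative peaks at the inflection point.

```python
import numpy as np

def sigmoid(x, a=0.0, b=1.0, x0=0.0, k=1.0):
    """Generalized logistic: a = min, b = max, x0 = inflection, k = steepness."""
    return a + (b - a) / (1.0 + np.exp(-k * (x - x0)))

def normalized(x, a, b, x0, k, y_min, y_max):
    """Rescale S into [0, 1] given the known range of the response."""
    return (sigmoid(x, a, b, x0, k) - y_min) / (y_max - y_min)

# The derivative is maximal at the inflection point x = x0:
x = np.linspace(-5, 15, 2001)
y = sigmoid(x, a=0.0, b=1.0, x0=5.0, k=1.2)
dy = np.gradient(y, x)
print(x[np.argmax(dy)])  # ~5.0
```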

We can formulate a perceived bit-rate (PBR) model by defining the perceived bit (PB) as a vector in which each component corresponds to one of the parameters we wish to model. In our case, we would like to model field-of-view, brightness, color, frames-per-second, pixels-per-inch, and depth resolution, expressed by x1, x2, x3, x4, x5, and x6, respectively:

\mathbf{PB} = \big(F_1(x_1),\, F_2(x_2),\, F_3(x_3),\, F_4(x_4),\, F_5(x_5),\, F_6(x_6)\big) \qquad [3]

where F_i is a sigmoid-like function whose argument is parameter x_i. The PBR is the total derivative of PB with respect to time:

\mathrm{PBR} = \frac{d\,\mathbf{PB}}{dt} = \left(\frac{\partial F_1}{\partial x_1}\frac{dx_1}{dt},\, \ldots,\, \frac{\partial F_6}{\partial x_6}\frac{dx_6}{dt}\right) \qquad [4]

Assuming that the parameters are all independent, the maximum occurs when each component is maximized. Because all F_i are modeled as sigmoid functions, the PBR is maximized when every parameter sits at its respective inflection point. We can therefore define a vector corresponding to the set of parameter values that produce the maximum PBR. The challenge then becomes searching the parameter space and finding the real-world data that fit the sigmoid functions, minimizing the error in terms of field-of-view span, intensity span, color depth, frame rate, spatial resolution, and depth resolution for human vision, both physiological and psychological.
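The inflection-point argument can be illustrated numerically: model each of the six parameters with a unit sigmoid and locate where the marginal perceptual gain, the derivative of F_i with respect to x_i, peaks. The (x0, k) values below are made-up placeholders, not fitted human-vision data.

```python
import numpy as np

# Placeholder (x0, k) pairs for the six parameters; a real model would
# fit these to psychophysical data, as the text describes.
params = {            #   x0,     k
    "fov_deg":         (100.0,  0.05),
    "intensity_bits":  (8.0,    1.0),
    "color_bpp":       (24.0,   0.3),
    "fps":             (60.0,   0.08),
    "ppi":             (1080.0, 0.004),
    "depth_layers":    (6.0,    0.8),
}

def marginal_gain(x, x0, k):
    """Derivative of a unit sigmoid F(x) = 1 / (1 + e^{-k(x - x0)})."""
    s = 1.0 / (1.0 + np.exp(-k * (x - x0)))
    return k * s * (1.0 - s)

# Each component's marginal gain peaks at its own inflection point x0,
# so the maximum-PBR parameter vector is the vector of inflection points.
for name, (x0, k) in params.items():
    xs = np.linspace(0.2 * x0, 3 * x0, 4001)
    best = xs[np.argmax(marginal_gain(xs, x0, k))]
    print(f"{name}: gain peaks near x = {best:.1f} (x0 = {x0})")
```

Reweighting the six components (for instance, emphasizing frame rate for gamers) shifts the trade-off but not the inflection-point logic.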

Assuming that we can normalize with respect to the range of each function so that equation [2] applies, we can define the PBR in matrix form (PBRM) as:

[5]

Then, the goal is to minimize:

[6]

where \mathbf{x}^* is the vector representing the position at which the fitting parameters maximize the PBR.

Mathematically, this is the best way to deliver maximum immersion with minimum bandwidth while following human perception models for each of these parameters. Different populations or markets may weigh the parameters differently, and that can easily be incorporated into this calculation.

3.3. Accessibility

This parameter gauges accessibility for different populations. For example, many users may require corrective lenses for a variety of vision conditions, some users may be color blind, and others might not be able to move freely or may have physical limitations that hinder the use of wearable devices, like those used in VR.

Population divide. This parameter considers how the use of a device is affected by population factors such as age/generation, vision anomalies (myopia, presbyopia, astigmatism, etc.), and the intended use/application of the device by a certain population. The five tiers are: Age divide, Vision anomaly divide, Content divide, Application divide, and No divide.

Portability. This parameter evaluates how easy it is to transport a device and set it up. For example, some devices may be used while traveling, fit in a purse or pocket, or even be wearable, whereas other devices can only be used on a desk or in an office. We can consider the following portability tiers:

  • Not portable: The device is not portable and needs to be installed and set up in a specific location.
  • Suitcase portable: The device can be transported by a single person in a large suitcase.
  • Purse or backpack: The device can be carried in a purse or backpack.
  • Pocket: The device can be carried in a pocket.
  • Wearable: The device not only fits in a pocket, but also can be worn by the user.

Price. The price can be a big barrier to entry for users, and we break down the price tiers below (for the US display market):

  • Early adopter enterprise: $15K+,
  • Broad Enterprise: $3K-$15K,
  • Luxury consumer: $3K-$5K,
  • Prosumer: $1.5K-$3K, and
  • Commodity consumer: below $1.5K.

Alright, we finally have the human-centric framework we talked about in previous articles; it allows us to compare devices not only within one format, but also to see when one device can take over another. In the next article, which will be the last piece of this series from Brelyon, we will actually crunch some numbers to see where different formats stand relative to each other. As always, thanks for your attention, and make sure to check out our website: www.brelyon.com. We have job openings, investment opportunities, and immersive products that you can explore. More social handles below:

https://twitter.com/BrelyonHQ

facebook.com/brelyonHQ

https://www.instagram.com/brelyoncommunity/

https://www.linkedin.com/company/brelyon/

https://www.youtube.com/channel/UC9wRknKRClTLuYaEk-LeMmg

http://medium.com/@brelyon

  1. Moody, R. Screen Time Statistics: Average Screen Time in U.S. vs. the rest of the world. comparitech (2021).
  2. Olsen, D. R. et al. Comparing usage of a large high-resolution display to single or dual desktop displays for daily work. Proc Sigchi Conf Hum Factors Comput Syst 1005–1014 (2009). doi:10.1145/1518701.1518855
  3. Stegman, A., Ling, C. & Shehab, R. Human Interface and the Management of Information. Interacting with Information. in HCI International, Symposium on Human Interface 84-93 (2011). doi:10.1007/978-3-642-21669-5_11
  4. Kang, Y. & Stasko, J. Lightweight Task/Application Performance using Single versus Multiple Monitors: A Comparative Study. in Graphics Interface (2008). doi:10.1145/1375714.1375718
  5. Conlon, C. Do Larger Monitors Equal Greater Productivity? (2011).
  6. Anderson, J. A., Hill, J., Parkin, P. & Garrison, A. Productivity, Screens, and Aspects Ratios (2007).
  7. Almagro, M. Key Trends in Immersive Display Technologies and Experiences. Sound & Communications (2020).
  8. Foster, A. IBC2019 Technical Papers: Immersive. Trends (2019).
  9. Zalani, R. Screen Time Statistics 2021: Your Smartphone is Hurting You. Elite Content Marketer (2021).
  10. Morrison, G. 8K TV explained, and why you definitely don’t need to buy one. cnet (2021).
  11. Won, L. S. Samsung to shift some smartphone production at Vietnam to India. TheElect (2022).
  12. Park, S.-Y. LG Display OLED TV panel sales top 20 mn units on strong demand. The Korea Economic Daily (2021).
  13. Heshmat, B. Is Augmented reality doomed? A look into the future of AR. (2019).
  14. Vailshery, L. S. Average session time of virtual reality (VR) users in the United States between the 2nd and 3rd quarter of 2019, by user type. Statista (2021).
  15. Heshmat, B. The first no-headset virtual monitor. (2020).
  16. Aghasi, A., Heshmat, B., Wei, L., Tian, M. & Cholewiak, S. A. Optimal allocation of quantized human eye depth perception for multi-focal 3D display design. Opt Express 29, 9878 (2021).
  17. Zhang, C., Perkis, A. & Arndt, S. Spatial immersion versus emotional immersion, which is more immersive? 2017 Ninth Int Conf Qual Multimedia Exp Qomex 1–6 (2017) doi:10.1109/qomex.2017.7965655.
  18. Newman, A. P. Human-computer perception. (Massachusetts Institute of Technology, 2020).
  19. Hofmann, L. & Palczewski, K. Advances in understanding the molecular basis of the first steps in color vision. Prog Retin Eye Res 49, 46–66 (2015).
  20. Wong, S. H. & Plant, G. T. How to interpret visual fields. Pract Neurology 15, 374 (2015).
  21. Bülthoff, I., Bülthoff, H. & Sinha, P. Top-down influences on stereoscopic depth-perception. Nat Neurosci 1, 254–257 (1998).
  22. Laird, J. E., Lebiere, C. & Rosenbloom, P. S. A Standard Model of the Mind: Toward a Common Computational Framework across Artificial Intelligence, Cognitive Science, Neuroscience, and Robotics. Ai Mag 38, 13–26 (2017).
  23. Amit, K. Artificial Intelligence and Soft Computing. (Taylor & Francis, 2000). doi:10.1201/9781315219738
  24. Lu, Y. Artificial intelligence: a survey on evolution, models, applications and future trends. J Management Anal 6, 1–29 (2019).
  25. Markowsky, G. Information Theory: Physiology. Britannica
  26. Gibaldi, A., Canessa, A. & Sabatini, S. P. The Active Side of Stereopsis: Fixation Strategy and Adaptation to Natural Environments. Sci Rep-uk 7, 44800 (2017).
  27. Gibaldi, A. & Banks, M. S. Binocular Eye Movements Are Adapted to the Natural Environment. J Neurosci 39, 2877–2888 (2019).
  28. Lit, A. Depth-Discrimination Thresholds as a Function of Binocular Differences of Retinal Illuminance at Scotopic and Photopic Levels. J Opt Soc Am 49, 746 (1959).
  29. Wang, J.-M., Liu, C.-L., Luo, Y.-N., Liu, Y.-G. & Hu, B.-J. Statistical virtual eye model based on wavefront aberration. Int J Ophthalmol-chi 5, 620–624 (2012).
  30. Carvalho, L. A. Accuracy of Zernike polynomials in characterizing optical aberrations and the corneal surface of the eye. Invest Ophth Vis Sci 46, 1915–26 (2005).
  31. Carvalho, L. A. V., Castro, J. C. & Carvalho, L. A. V. Measuring higher order optical aberrations of the human eye: techniques and applications. Braz J Med Biol Res 35, 1395–1406 (2002).
  32. Navarro, R., Moreno, E. & Dorronsoro, C. Monochromatic aberrations and point-spread functions of the human eye across the visual field. J Opt Soc Am 15, 2522 (1998).
  33. Heshmat, B., Wei, L. & Tian, M. Ultimate augmented reality displays with passive optics: fundamentals and limitations. in Optical Data Science II (ed. SPIE) vol. 10937 1093707 (SPIE, 2019).
  34. Seltman, W. Eye Exercises. WebMD (2020).
  35. Griff, A. M. Presbyopia. Healthline (2021).
  36. Samsung. How a curved monitor brings ergonomic benefit and productivity. Insights (2021).
  37. Samsung. Samsung Gaming Monitor Odyssey Neo G9 (2021).
  38. Seiple, W., Szlyk, J. P., McMahon, T., Pulido, J. & Fishman, G. A. Eye-Movement Training for Reading in Patients with Age-Related Macular Degeneration. Investigative Ophthalmology Vis Sci 46, 2886 (2005).
  39. Ramadan, M., Alhaag, M. & Abidi, M. Effects of Viewing Displays from Different Distances on Human Visual System. Appl Sci 7, 1153 (2017).
  40. Vienne, C., Sorin, L., Blondé, L., Huynh-Thu, Q. & Mamassian, P. Effect of the accommodation-vergence conflict on vergence eye movements. Vision Res 100, 124–33 (2014).
  41. Banks, M. S., Kim, J. & Shibata, T. Insight into vergence/accommodation mismatch. Proc Spie 8735, 873509–873509–12 (2013).
  42. Hoffman, D. M., Girshick, A. R., Akeley, K. & Banks, M. S. Vergence-accommodation conflicts hinder visual performance and cause visual fatigue. J Vision 8, 33.1–30 (2008).
  43. Hammoud, R. I. Passive Eye Monitoring. (Springer, 2008). doi:10.1007/978-3-540-75412-1.
  44. Sullivan, G. J., Ohm, J.-R., Han, W.-J. & Wiegand, T. Overview of the High Efficiency Video Coding (HEVC) Standard. IEEE T Circ Syst Vid 22, 1649–1668 (2012).
  45. Adobe. Color depth and high dynamic range color. Adobe (2021).
  46. Rusen, C. A. What do the 720p, 1080p, 1440p, 2K, 4K resolutions mean? What are the aspect ratio and orientation? Digital Citizen (2019).
  47. Son, J.-Y., Chernyshov, O., Lee, C.-H., Park, M.-C. & Yano, S. Depth resolution in three-dimensional images. J Opt Soc Am 30, 1030 (2013).
  48. Petikam, L., Chalmers, A. & Rhee, T. Visual Perception of Real World Depth Map Resolution for Mixed Reality Rendering. 2018 Ieee Conf Virtual Real 3d User Interfaces Vr 00, 401–408 (2018).
  49. Han, J. & Moraga, C. From Natural to Artificial Neural Computation, International Workshop on Artificial Neural Networks Malaga-Torremolinos, Spain, June 7–9, 1995 Proceedings.
  50. Davis, J., Hsieh, Y.-H. & Lee, H.-C. Humans perceive flicker artifacts at 500 Hz. Sci Rep-uk 5, 7861 (2015).
  51. Lee, S. W., Lee, S. Y. & Pahk, H. J. Precise Edge Detection Method Using Sigmoid Function in Blurry and Noisy Image for TFT-LCD 2D Critical Dimension Measurement. Current Optics and Photonics 2, 69–78 (2018).
  52. Guo, J. et al. Subjective and objective evaluation of visual fatigue caused by continuous and discontinuous use of HMDs. J Soc Inf Display 27, 108–119 (2019).
  53. Wang, Y. et al. Assessment of eye fatigue caused by head-mounted displays using eye-tracking. Biomed Eng Online 18, 111 (2019).
  54. Han, J., Bae, S. H. & Suk, H.-J. Comparison of Visual Discomfort and Visual Fatigue between Head-Mounted Display and Smartphone. Electron Imaging 2017, 212–217 (2017).
  55. Iskander, J., Hossny, M. & Nahavandi, S. A Review on Ocular Biomechanic Models for Assessing Visual Fatigue in Virtual Reality. IEEE Access 6, 19345–19361 (2018).
  56. Shibata, T., Kim, J., Hoffman, D. M. & Banks, M. S. The zone of comfort: Predicting visual discomfort with stereo displays. J Vision 11, 11–11 (2011).

BRELYON is a team of scientists, engineers, and intellectuals focused on the future of human-computer evolution.
