Turns Out We Don’t Need Holograms!

BRELYON
10 min read · Nov 18, 2022
How are large-scale 3D displays like this one created?

Back to the Future. Star Wars. Iron Man. We have seen all sorts of pop-culture depictions of “holograms” for large-scale three-dimensional displays. We have also seen over and over, maybe every other month, a piece with a title like “Holograms Are Finally Here.” But what are holograms exactly? And why aren’t holograms as pervasive in real life as they are in science fiction and media?

First, a clarification is in order: the word “hologram” is used colloquially to describe any sort of three-dimensional (3D) display. But “hologram”–or “holographic display”–and “3D display” are not interchangeable terms. Rather, a hologram is a physical element that produces a three-dimensional image using the specific physical principle of wave interference. A display that uses such physics is called a holographic display. Now, captivating, eye-popping, static images made from holograms have been around for almost 60 years, and the idea itself goes back even further, to Dennis Gabor’s pioneering work in 1948 [1]. So why haven’t we figured out how to produce a true holographic display or holographic TV yet? If only it were that easy!

To understand how holograms work, imagine the common example of tossing two stones in a pond. Each stone causes ripples, concentric waves called “wavefronts,” that travel outward from the impact. And these waves can overlap and create patterns on the surface of the pond. Analogously, as light waves travel outward from a real three-dimensional scene, they overlap and create such patterns. When we look at those patterns, we perceive the three-dimensionality of the scene through visual cues such as parallax between the eyes, motion parallax, accommodation (focusing) of each eye, and several contextual cues of the image. A hologram contains all that information to replicate those patterns exactly and to trigger those cues, so that when a beam of light passes through it and travels to our eyes, we perceive an exact copy of the 3D scene that was encoded.
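If you like to tinker, the pond picture is easy to simulate. Below is a minimal Python sketch (our own illustrative toy, not a hologram recorder): two point sources emit circular waves, and the bright-and-dark fringes in the summed intensity are exactly the kind of interference pattern a hologram has to encode.

```python
# A toy version of the "two stones in a pond" picture: two point
# sources emit circular waves; summing them and taking the intensity
# reveals the interference fringes a hologram must encode.
import numpy as np
import matplotlib.pyplot as plt

wavelength = 1.0                       # arbitrary units
k = 2 * np.pi / wavelength             # wavenumber

x, y = np.meshgrid(np.linspace(-10, 10, 500), np.linspace(-10, 10, 500))
sources = [(-3.0, 0.0), (3.0, 0.0)]    # where the two "stones" landed

field = np.zeros_like(x, dtype=complex)
for sx, sy in sources:
    r = np.hypot(x - sx, y - sy) + 1e-9        # distance from the source
    field += np.exp(1j * k * r) / np.sqrt(r)   # outgoing circular wave

plt.imshow(np.abs(field) ** 2, cmap="gray", extent=(-10, 10, -10, 10))
plt.title("Interference pattern of two point sources")
plt.show()
```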

Hologram of Dennis Gabor himself, probably recorded in the 1950s or 1960s. Based on analog results like this, it was believed for decades that holographic displays would be the future of displays. But unlike other media, such as sound and video, holograms were fundamentally incompatible with the digital wave.

In fact, recording holograms is quite easy. You can find amazing holographic images in museums. Dennis Gabor’s large self-portrait hologram, for example, is quite a remarkable piece. These holograms are analog devices that use photographic films to encode all that 3D information. Fabricating a holographic display, on the other hand, is extremely hard because of three fundamental problems:

1. Holograms are based on a property of waves called “phase,” which describes exactly what part of the wave you’re looking at: the peak, the trough, or somewhere in between. (Imagine the exact, complex patterns of the ripples created by the stones tossed into the pond.) To control the phase of light waves, holograms require feature sizes far below the wavelength of light. As an example, take blue light, with a wavelength of about 400 nm. Conservatively, if you want a pixel that controls the phase of this blue light with even just 4 bits of precision (that means only 2⁴ = 16 different phase steps), each step corresponds to 400 nm / 16 = 25 nm, so you need pixel features in the nanometer range. To put things in perspective, your smartphone’s pixels today are tens of microns wide, and the smallest camera-sensor pixel is just over 500 nanometers. So even with this conservative estimate, you need a pixel-feature density 10 times higher than the densest camera sensor in the world and 1,000 times denser than your smartphone. That is like fitting an 8K TV into every 100 pixels of your current phone! And even if you could create smaller and smaller features on each pixel to control the phase of the light, the challenges don’t end there.
2. The second major challenge involves sharp boundaries. Remember how easy it was to record an analog hologram? That’s because there is no boundary or sharp edge to perturb the light in an analog medium like photographic film. But as soon as you dice up a screen and cast a mesh on it, as you do for a digital sensor, those sharp, tiny pixel edges become comparable to the wavelength of the light and create an artifact called diffraction. Diffraction is a very real problem in digital holographic displays, because pretty much any electronic element that samples the light is comparable in size to its wavelength and consequently perturbs the phase. You can find lots of images of diffraction effects and many papers describing attempts to get around the problem. But diffraction, and its dependence on color, is not easily resolved. Even the best electronics today are made by lithography and optical writing techniques, so it’s really hard to create something with light that’s invisible to light.
Left: A diffraction artifact from a pixel grid perturbing the phase of the light. Center: A speckle pattern from a laser pointer. Right: Image results from a spatial light modulator (SLM), which is the closest product to a holographic display, show notable speckle artifacts. See more results here: https://www.cs.princeton.edu/~fheide/wirtingerholography.
3. The third major problem is keeping the wavefronts consistent across a large area, and it can’t be overlooked. Imagine a 32″ holographic display trying to show you a star in the sky. This requires all the pixels to emit a cohesive, collimated, unperturbed wavefront of light, held to 10- to 100-nm precision. That’s nanometer precision across 32″: about 1 part in 10 million consistency is needed to avoid pesky “speckles.” Any little inconsistency across that display surface, even a speck of dust, can cause the light to interfere with itself and produce what’s called a speckle pattern. These speckles are what give images from even analog holograms their shimmering, grainy artifacts. The effect is noticeable and distracting, and it reduces the overall contrast and quality of the image. (The sketch below puts rough numbers on all three of these challenges.)
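To make these scales concrete, here is a quick back-of-the-envelope sketch in Python. The specific values (4-bit phase, 400-nm blue light, a 32″ panel, 100-nm tolerance) are the rough estimates used above, not measurements of any real device.

```python
import math

# 1. Feature size for 4-bit phase control of ~400-nm blue light:
wavelength = 400e-9                # meters
steps = 2 ** 4                     # 16 phase levels
print(f"phase-step feature: {wavelength / steps * 1e9:.0f} nm")   # 25 nm

# 2. Diffraction from a pixel grid: sin(theta) = wavelength / pitch.
#    Wavelength-scale pixels spray light across huge angles.
for pitch in (25e-6, 500e-9):      # smartphone-class vs. sensor-class
    theta = math.degrees(math.asin(min(1.0, wavelength / pitch)))
    print(f"pitch {pitch * 1e9:7.0f} nm -> 1st-order angle {theta:4.1f} deg")

# 3. Wavefront consistency across a 32-inch panel:
panel = 32 * 0.0254                # meters
tolerance = 100e-9                 # ~100-nm precision
print(f"flatness: ~1 part in {panel / tolerance:.0e}")            # ~1e7
```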

So, to sum up: you need thousands of times more pixel density than the densest screens today, fabricated with thousands of times more consistency across a large area. For a consumer screen, this means terapixels, not just megapixels. And, somehow, you have to solve the problem of pixel-boundary artifacts; once you do, you arrive at the content issue. The content issue is more addressable: we have been running 3D games for decades already, but feeding all that content into a holographic display is a massive bandwidth bottleneck. Imagine rendering your games from hundreds of different viewing angles simultaneously. Even with machine learning (ML) algorithms calculating all those angles in an efficient, compressive way, there is just too much to show. By now it should be clear that holographic displays simply don’t make sense: fabrication is thousands of times harder, the required bandwidth is tens of times higher, and the resulting glasses-free 3D effect is simply not a good enough return on investment.
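To get a feel for the content bottleneck, here is a small, hedged estimate. Both numbers are our illustrative assumptions: an uncompressed 4K, 60-Hz, 24-bit feed per view, and 100 simultaneous viewing angles.

```python
# Illustrative content-bandwidth estimate; assumed values, not a spec.
views = 100                               # assumed number of viewing angles
per_view = 3840 * 2160 * 24 * 60          # bits/s for one uncompressed view

total = views * per_view
print(f"one view:   {per_view / 1e9:5.1f} Gbit/s")   # ~11.9 Gbit/s
print(f"{views} views: {total / 1e12:5.1f} Tbit/s")  # ~1.2 Tbit/s
```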

But it turns out we don’t need holograms to create really compelling depth effects. Holograms are really, really (over 10,000 times really) over-engineering the problem. In fact, the winning solutions are those that give you maximum immersion and 3D effect with minimum bandwidth and fabrication challenges. Take the spinning light fans and hovering LED meshes: these are now a big business, with hundreds of thousands of units sold annually for different exhibits. There is no fancy physics, no spatial light modulators (SLMs), and no ML algorithms needed. LED meshes are a really interesting emerging solution for creating massive-scale depth effects and illusions. Leading this category of illusions is Hypervision, followed by many Asian manufacturers.

Hypervision’s spinning LED displays give the illusion of images hovering in mid-air, thanks to the human eye’s limited time response and dynamic range. See more examples here.

One of the reasons these solutions work so well is that they exploit the human visual system’s limited time response, along with the brightness of powerful LEDs relative to natural environments. The same goes for mesh screens illuminated by projectors: mesh projection screens and mesh LED walls are especially good for large-scale hovering illusions.
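A quick sketch of why the time response matters for the spinning fans: if the eye fuses flicker at roughly 60 Hz (an approximation), the blade must repaint the whole image within that window. The 60-Hz fusion rate and 256 columns per revolution below are assumed values for illustration.

```python
# Back-of-the-envelope numbers for a spinning LED fan (assumed values).
fusion_hz = 60                     # approximate flicker-fusion rate
min_rpm = fusion_hz * 60           # one revolution per fusion period
print(f"minimum spin rate: ~{min_rpm} RPM")          # 3600 RPM

columns = 256                      # angular "pixel columns" per turn
dwell = 1 / (fusion_hz * columns)  # time budget to strobe each column
print(f"per-column dwell: {dwell * 1e6:.0f} us")     # ~65 us
```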

Tomorrowland showcasing a large hovering screen at its Amsterdam concert. See more examples here.

Although these methods are sufficient to create depth effects, or that hovering holographic illusion, for events and shows, they fall short for smaller-scale, precise representations of 3D, such as monitor use cases or 3D cinema.

This is where a new mode of creating depth effects is being introduced by startups and even bigger companies like Google. These other modes are called “lightfield displays” [2,3], though at this point the term “lightfield” is just as overused, and misused, as the term “hologram.” Lightfield displays are basically displays with depth: instead of reproducing the entire phase of the light, you mimic it with a very, very limited number of samples, of viewing angles or focal distances, just enough to trigger the depth cues of the human eye, relying on the limited time response of our eyes and our inability to sense the phase of light with nanometer accuracy. There are two camps of lightfield displays, one much newer than the other. The first camp is the set of autostereoscopic displays: displays that provide separate left-eye and right-eye images but don’t need you to wear glasses. This camp is led by companies like Alioscopy, Leia, LookingGlassFactory, and others. The idea here is that if you keep increasing the number of viewing angles, you get a better and better representation of the 3D environment. Another example of this idea is Google’s Starline project, in which a large array of projectors is used to provide smooth transitions in the content viewed across a large set of angles.
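For a concrete picture of the autostereoscopic idea, here is a minimal sketch of how such a panel is typically fed: N pre-rendered views are interleaved column by column, and a lenticular lens or parallax barrier steers each set of columns toward a different viewing angle. The shapes and view count are illustrative assumptions, not any particular product’s format.

```python
import numpy as np

n_views, height, width = 8, 1080, 1920
# views[v] is the scene rendered from viewing angle v (dummy data here).
views = np.random.rand(n_views, height, width, 3)

# Interleave: panel column c shows view (c mod n_views); the lenticular
# optics then send each residue class of columns in its own direction.
interleaved = np.empty((height, width, 3))
for col in range(width):
    interleaved[:, col] = views[col % n_views][:, col]
```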

Google’s Starline project uses a large set of angular projections to provide an autostereoscopic screen. Another project in the same category was done by Dr. Paul Debevec at USC.

The second camp uses focal planes, or “monocular” depth layers. Here, Brelyon is one of the first companies to bring larger-scale displays with monocular depth to the consumer market. Studies of the human visual system [7] show there is a bridge between monocular depth and (auto)stereoscopic depth; however, it’s rather hard to cross from one to the other: the number of viewing angles needed to trigger monocular depth is extremely high, as is the number of monocular depth layers that must be stacked to provide smooth autostereoscopic depth transitions. In both cases, novel computational approaches can help fill in those gaps, and that is where the promise lies. These new approaches are like the MP3 format of lightfield representation: they are all approximations of what a true holographic display would be, but, just like the spinning LEDs, they can still provide really compelling results.
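As a rough illustration of the focal-plane budgeting problem studied in [7], consider spacing depth planes uniformly in diopters (inverse meters), since the eye’s blur discrimination is roughly constant on that scale. The 0.3-diopter step below is an assumed just-noticeable difference used only for illustration; the paper derives the optimal allocation more carefully.

```python
import numpy as np

near, far = 0.25, 100.0        # meters: near point to effectively infinity
jnd = 0.3                      # assumed accommodation JND, in diopters

span = 1 / near - 1 / far      # total accommodation range, ~4 diopters
planes = int(span / jnd) + 1
print(f"range: {span:.2f} D -> ~{planes} focal planes")   # ~14 planes

# Plane distances, spaced uniformly in diopters (dense near, sparse far):
for d in np.linspace(1 / near, 1 / far, planes):
    print(f"{1 / d:6.2f} m")
```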

Brelyon Ultra Reality™, capable of showing conformal monocular depth. See more here.

For those of you who are mathematically inclined, there is a lot of interesting work comparing the two types of display devices, but the comparison is not straightforward, as each method comes with its own bag of problems and drawbacks [4–7]. In fact, the response of an optical system to coherent laser light is different from the response of the same optical system to incoherent white light, and even knowing those responses quantitatively, it is difficult to translate that knowledge into a ranking of which approach is better [5]. We believe the solution that is more scalable and presents better image quality will be the winning one; after all, hundreds of prototypes of holographic and lightfield displays have been shown in research without ever making it to the mass market.

Finally, the physics of holography, or more generally interferometry, has phenomenal scientific uses in, for example, metrology, microscopy, and astronomy, where the precision it affords is required for the very sensitive data that needs to be collected. Even for smaller displays, like headsets, some of the challenges above can be partially mitigated, and holographic optical elements (HOEs) are used as components in some near-eye displays. But the information that gets recorded and reproduced by a holographic display is absolute overkill, and really not worth the trouble, for large-scale consumer screens. In other words, just like flying cars, which are neither good cars nor good airplanes, holograms are simply the wrong idea for bringing depth to displays. Sorry, Star Wars fans!

Written by: Dr. Christopher Barsi, Dr. Barmak Heshmat, and Dr. Alok Mehta. Brelyon has been teasing the market with virtual displays featuring monocular depth. See a talk on this here, or check out www.brelyon.com for more information.

From left to right: Dr. Chris Barsi, Dr. Barmak Heshmat, and Dr. Alok Mehta.

References

[1] D. Gabor. A new microscope principle. Nature 161, 777 (1948).

[2] G. Lippmann. Épreuves réversibles. Photographies intégrales. C. R. Acad. Sci. 146, 446 (1908).

[3] F.E. Ives. Parallax stereogram and process of making same. US Patent US725567A (1903).

[4] Z. Zhang, M. Levoy. Wigner distributions and how they relate to the light field. IEEE International Conference on Computational Photography (ICCP), 2009.

[5] J.W. Goodman. Introduction to Fourier Optics, 1st ed. (McGraw-Hill) (1968).

[6] M. Yamaguchi. Light-field and holographic three-dimensional displays. J. Opt. Soc. Am. A 33, 2348 (2016).

[7] A. Aghasi, B. Heshmat, L. Wei, M. Tian. Optimal allocation of quantized human eye depth perception for multi-focal 3D display design. Opt. Express 29, 9878–9896 (2021).


BRELYON

We are a team of scientists and entrepreneurs focused on the future of human-computer evolution. Our expertise is in display technology and computer science.