What comes after the 8K X? (Near Future VR Headsets)

That is one approach. Personally, I’d prefer a full VR headset which blocks light, with cameras which can augment the VR display with the real world as needed. The system would not even need to be limited to visible light. For example: Infrared vision would be helpful in a variety of situations.

5 Likes

Yes, agreed. But why couldn’t the headset darken with a second layer, or even with a quick iris shutter / aperture, to enable both ways of working? And of course, one can add additional camera sensor frequencies like infrared or ultraviolet and convert them.

1 Like

One of the AR Kickstarters had something of a clip-on visor to switch from AR to VR.

Word is the new 8KX+ is to be released in November of this year. PREORDER NOW! /s

1 Like

It certainly could, but dark is not equal to black and I’d like the option to completely block my surroundings. An iris shutter could completely block the light, but would be difficult to implement across the entire field-of-view.

If you’re adding a camera anyway, why bother with the added complexities of a darkening system?

1 Like

Don’t know if you mean a different one than CastAR, but in its case that visor, had they ever built it, could not just have been something to block out the light; it would have had to be the equivalent of the whole “birdbath” optical component used by many other AR glasses, since CastAR relies on reflective surfaces out in the physical environment for its projections in regular operation.

I am reluctant to call the tech concept “AR”, actually, but that does not stop me from eagerly awaiting my Kickstarter “Tilt5” unit – the new-growth offspring of CastAR, which was eventually bankrupted by the hubristic management its investors put in place. :7

1 Like

I didn’t see that one. I think it might have been AntVR.

Please tell me if you know of a light field display concept using existing hardware without one of these bottlenecks…

  • Low resolution. At less than 8kX resolution, vergence-accommodation conflict is not the limiting factor. Microlens arrays dividing the screen into many separate images would require optically impossibly small pixels.
  • Weight. Simply stacking two 8kX panels together might solve vergence-accommodation conflict well enough, but would double the headset’s weight in the worst possible place. For most users, including myself, the extra weight would be far from worthwhile.

Otherwise my guess is some deeply custom not-yet-produced hardware might be required. A single multilayer panel with fused glass might be possible to fabricate. Or, a sweeping laser projector with planar waveguide and LCD light gate might be able to recreate an entire light field (exploiting the wide spectrum between RGB). Or modern semiconductor fabrication resolution could be scaled up to create an outright adaptive optics (holographic) display.

I think I am missing something here.

Smart people in the industry seem to think light-field displays can be produced short of something that could take years and tens of millions of USD, like setting up a dedicated VR headset panel manufacturing line at 5nm photolithographic resolution.

What technique/technology am I not aware of?

1 Like

Of course that is an interesting solution, too. But I expect real mixed / augmented reality to use “see-through” optics, rather than pure VR with a semi-augmented video overlay from a camera system. Maybe it will start with your suggestion, though.

The thread creator asks for improvements and innovations we may expect. Share your own vision instead of tearing down mine. What does your own vision of VR/AR look like?

Possibly only that you took Neal’s statement more seriously than: “I’m dreaming out loud, here”. :7

I too would love to see what tech is growing in secret labs, though. :7

1 Like

No. There is a lot more serious interest in light field displays than just dreaming. Never before have I seen such interest without myself understanding what the technical path to success is.

1 Like

Oh, interest is there, absolutely, and things are being built; I have been yearning for lightfield display HMDs ever since Douglas Lanman demonstrated his little experimental prototype while he was at NVidia, and I have incessantly bothered people with talk about it since. :7

…but I doubt any one of us is expecting proper lightfield displays in wearable form factor for the immediate next batch of new devices after the 8kX… :7

(EDIT: You sounded a bit more dismissive than excited. Glad to hear I got that wrong. :slight_smile:)

My feeling is that the industry is focused enough on light field displays to base the next generation of VR headsets on that technology. In terms of the benefits and technology roadmap, I can see why: physically, it is not possible to make VR headsets comparably comfortable, lightweight, and high-resolution any other way.

What I cannot understand is what technique/technology allows this to be done, commercially, now. Instead of in a few years after the current generation of semiconductor fabrication tools are retired from the front lines making computer chips.

Or maybe it is that the semiconductor fabrication industry is just getting more willing to affordably take on other jobs?

1 Like

I agree. This is unlikely for the next gen headsets. I’m hopeful for the generation after that.

Personally, I think the microlens array is the most feasible solution in the short term, since it avoids the weight issue. I’m not so sure about the “impossibly small pixels” issue: OLED displays with tiny pixels are feasible. Basically, for the microlens display we’d probably need at least an 8x8 mini-image per final pixel, topped with a microlens. Given that silicon chips can be built with far smaller features, this is possible but currently way too expensive; there’s hope for the future, though.
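As a back-of-envelope check on the 8x8 idea: dividing one final pixel into an 8-wide mini-image sets the sub-pixel pitch the fab would have to hit. A minimal sketch, assuming an illustrative ~23 µm final pixel pitch (roughly an 8kX-class panel; not a datasheet value):

```python
# Sub-pixel pitch needed when each final pixel hides an NxN mini-image
# under a microlens. The 23 um lens pitch is an assumed, illustrative figure.

def subpixel_pitch_um(lens_pitch_um: float, views_per_axis: int) -> float:
    """Physical pitch of each sub-pixel in the mini-image under one lens."""
    return lens_pitch_um / views_per_axis

pitch = subpixel_pitch_um(23.0, 8)
print(f"sub-pixel pitch: {pitch:.2f} um")  # ~2.9 um per sub-pixel
```

So an 8x8-view microlens panel at that pitch needs features a few microns across, which is why chip-grade fabrication keeps coming up in this thread.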


2 Likes

Here is the problem with microlens arrays. Let’s say the 8kX panels are 50mm tall; divided by 2160px, that is just 23 microns per pixel. Even supposing the pixels can actually change colors (no subpixels), just to divide the display into only 10 regions, each pixel would have to be considerably less than 2.3 microns.

Just 2.3 microns is already dangerously close to the ~0.5micron wavelength of visible light.

A point source of visible light smaller than about 1 micron is physically impossible; quantum physics dictates that the free-space photon will ‘form’ at comparably random locations below that limit.

At best, there will be little opportunity for further resolution improvement. At worst, there will be severe diffraction artifacts, as were seen on the waveguide AR displays of CES 2019 (but not so much at CES 2020). I would also expect aligning microlens arrays to be challenging at best.

What you are likely to end up with is a diffraction grating – the same thing as a CD-ROM disc, causing a rainbow of colors. Smartphone displays reflecting intense LED light sources already show some diffraction patterns, illustrating just how close these displays already are to that regime.
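For what it’s worth, the arithmetic above, plus the first-order diffraction angle it implies if the sub-pixel grid acts as a grating, can be sketched like this (the 0.5 µm wavelength is an assumed mid-visible value):

```python
import math

# Reproduce the post's pitch arithmetic, then estimate the first-order
# diffraction angle for a grating with that sub-pixel pitch (illustrative).

panel_height_mm = 50.0
rows = 2160
views = 10
wavelength_um = 0.5  # assumed mid-visible wavelength

pixel_pitch_um = panel_height_mm * 1000 / rows  # ~23 um per pixel
subpixel_pitch_um = pixel_pitch_um / views      # ~2.3 um per view cell

# Grating equation, first order: sin(theta) = lambda / d
first_order_deg = math.degrees(math.asin(wavelength_um / subpixel_pitch_um))

print(f"pixel pitch:     {pixel_pitch_um:.1f} um")
print(f"sub-pixel pitch: {subpixel_pitch_um:.2f} um")
print(f"1st-order diffraction angle: {first_order_deg:.1f} deg")  # roughly 12-13 deg
```

A first-order lobe that many degrees off-axis would land well inside the eyebox, which is the rainbow-artifact worry in a nutshell.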

Of the technologies I know about, multilayer displays seem by far the most likely. Just take the existing technology, and stack it up.

But then weight becomes a concern.

As I think about it more, some technology for fusing together thin sheets of hardened glass might make the most sense in the near term: just buy a stack of panels, stick them in a hot press, and done. The failure rate could be several times higher, though rejects due to subpanel cracking could be reassigned to cheaper single-light-field headsets.

EDIT: I think I might have figured out the only way to do this. Thick carbon fiber backplate. As I think about what allows a panel to crack, it is not tensile strength of the back, but compressive strength (being 10x weaker for most materials) of the front. Thick glass is the usual way to protect LCD/OLED panels. But a thick carbon fiber backplate could have the required stiffness to hold the panels flat enough that much thinner layers of composite glass/adhesive would not be subjected to enough bending to break them.
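As a rough illustration of why a thick backplate could carry the bending loads: plate bending stiffness per unit width scales as E·t³/12, so a couple of millimetres of stiff material dwarf a thin glass layer. A sketch with assumed, round-number material values (not datasheet figures):

```python
# Bending stiffness per unit width of a plate: D = E * t^3 / 12.
# Both material values below are rough, assumed figures for illustration.

def flexural_rigidity(e_gpa: float, thickness_mm: float) -> float:
    """Plate bending stiffness per unit width, in N*mm (E converted to N/mm^2)."""
    return e_gpa * 1000 * thickness_mm ** 3 / 12

glass_sheet = flexural_rigidity(e_gpa=70, thickness_mm=0.3)   # one thin glass layer
cfrp_plate = flexural_rigidity(e_gpa=100, thickness_mm=2.0)   # assumed CFRP backplate

print(f"thin glass layer: {glass_sheet:.0f} N*mm")
print(f"CFRP backplate:   {cfrp_plate:.0f} N*mm")
print(f"ratio: {cfrp_plate / glass_sheet:.0f}x")  # hundreds of times stiffer
```

With the t³ scaling, the backplate takes essentially all of the bending, so the laminated glass layers see very little strain – which is the argument above.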

1 Like

Wish I had industry insight to say anything enlightening - I don’t know Jack. :7
Are you saying you could imagine retired chip processes could be repurposed toward producing denser (Q)LED microdisplays, when 5nm nodes supposedly come online next year? :7

For a (much) more armchair “engineer” perspective (that’s me :P): Scott Wilkinson’s old “Home Theater Geeks” podcast was a fantastic place to hear intelligent interviews with (actual) engineers explaining, in layman’s terms, technology they were in the process of bringing to practical use – technology one would recognise years later, when it made it into actual products, including quantum dots, which were expected to go on to discrete LED matrix displays after a brief stint as backlight phosphors and filters. One ancient, apparently bit-of-a-hot-shot guest, whose name escapes me, was adamant that he had holographic displays sorted and in the pipe – don’t know whether we’ll ever see any of that, nor anything about their resolution, nor if the guy is still alive…

For more immediately conceivable low-fi amateur proposals, using more-or-less existing components, there are two “simple” concepts I could envisage. One would be something like what I imagine the working principle might be behind Avegant’s stamp-size-FOV purported lightfield prototype AR headset from a year or two back: projections from multiple points onto a shared reflector. The other: since we are already doing low-persistence and motion-compensating frame synthesis in HMDs anyway, maybe use a very high frame rate screen with a monochromatic LCD overlay acting as an active parallax barrier, and use that to scan through some discrete cuts of theta and phi sequentially – perhaps in combination with a microlens array as well, to offset the multiplexing time slot limitation. Diffraction might be a problem… :7
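A quick sanity check on that time-multiplexing idea: the raw panel refresh rate has to cover every directional cut within each perceived frame. A minimal sketch, with assumed numbers (90 Hz perceived, 4x4 cuts of theta and phi):

```python
# Raw refresh rate budget for time-multiplexing discrete view directions
# through an active parallax barrier. All figures are assumptions.

def required_panel_hz(perceived_hz: int, theta_cuts: int, phi_cuts: int) -> int:
    """Panel refresh needed to show every (theta, phi) cut once per perceived frame."""
    return perceived_hz * theta_cuts * phi_cuts

print(required_panel_hz(90, 4, 4))  # 1440 Hz raw refresh
```

Even a modest 4x4 directional grid pushes the panel well past 1 kHz, which is why combining the barrier with a microlens array (to share the multiplexing load) is attractive.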

We have one upside in the near-eye display context, and that is that the eyebox per x,y cell does not need to be much larger than mandated by the travel range of the pupil of the eye, so θ, ϕ resolution does not need to be wasted on full 180 degree ranges.
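To put a rough number on that upside: under assumed eyebox and pupil figures (both hypothetical), only a handful of view zones per axis are needed to cover pupil travel, rather than a full hemisphere:

```python
import math

# Rough count of view zones per axis when the eyebox only has to cover
# pupil travel. Both millimetre figures below are assumptions.

def views_needed(eyebox_mm: float, pupil_mm: float) -> int:
    """Distinct views so at least one view fills the pupil anywhere in the eyebox."""
    return math.ceil(eyebox_mm / pupil_mm)

print(views_needed(10.0, 4.0))  # 3 views per axis
```

Three-ish zones per axis is a far cry from sampling a 180-degree range of theta and phi, which is exactly the saving being described.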

…or something like that… :stuck_out_tongue:

1 Like

I am curious whether there will be an OLED line update, or whether there are simply no ready-to-use OLED displays available (higher res than the 5K XR)?

2 Likes

Everdisplay demoed a 5.5" (?) 4K OLED with 734 ppi (?) in 2015, with possible VR use. However, nothing has been heard about it since.

3 Likes

AFAIK, it is when 7nm and such is completely retired from Intel/NVIDIA/AMD/etc CPU/GPU/RAM production that the process becomes affordable for things like ASICs and VR displays. And the ‘older’ the ‘technology node’ gets, the cheaper it is for non-computing users of the production lines.

Projections

Projection usually implies a mechanism that takes space or involves planar waveguides. Weight and/or size tend to be problems, especially at larger FOV. AFAIK, this is especially true for lightfield engines.

The prototype you mention looks bulky as I would intuitively expect.

Stacking display layers with adhesive is more low-fi. It just uses a larger number of existing panels.

2 Likes

That doesn’t sound promising… Fingers crossed Samsung releases some refresh of their OLED HMD so that Pimax could use its screens. I wonder if AniMe Matrix displays will be coming to VR.

1 Like