VR Sickness and the Pimax 8K

I am blessed with an iron gut. I experienced a slight tinge of nausea for a few minutes the first time I tried anything with fast movement in the Rift. I took a Gravol and went back in after the feeling subsided, and all was fine after that. I suspect conquering motion sickness at an early age and years of flying simulators in 3D glasses didn’t hurt either. There will likely always be a few who can’t make the adjustment, but most will after some build-up of their VR legs. It is what it is, so people have to be willing to try or go back to a monitor.
Learning to fly and having your brain adjust to things like flying down the Vegas strip, 50 ft off the deck, inverted, goes a long way toward acclimating the brain so that it can ascertain its position in 3D space. VR only makes that task easier, since you have more accurate information.
I can display a rollercoaster on the back wall of my house at 150 inches (in 2D) and have certain people get ill, so it isn’t just VR sickness. No doubt VR will intensify the feeling for many, though.
It would be interesting to hear from other long-term flight simmers as to how quickly they adjusted to VR sickness, or if they had to at all.

1 Like

That is something called accommodation-vergence decoupling. I learned that from Doc-Ok’s blog (he is a VR researcher).

After a longer period of conflict, the accommodation-vergence reflex might just throw up its hands and give up, causing accommodation-vergence decoupling. At that point the other mechanism takes over, and vision tends to improve: while blurriness avoidance is slower to focus, it is, as I mentioned above, not fooled by VR.

However, when the user takes the headset off after a long VR session, accommodation-vergence coupling does not come back immediately. This means that right after using VR for an extended period of time, users might experience strange-feeling vision problems. They might not be able to focus very well, or the real world might look oddly unreal, or they might have slight double vision.

Here is a link to the original article:
Accommodation and Vergence in Head-mounted Displays

3 Likes

Nice read. The Doc-Ok article definitely has great detail. I imagine the article I posted, “Obscure Neuroscience Plaguing VR”, talked about that, but it had the terms swapped as the “vergence-accommodation” conflict.

Doc’s write-up definitely has a lot more depth on the subject.

1 Like

Doc-OK describes the blurriness when objects are brought close to our eyes as “accommodation-vergence conflict”. Strangely, I have the opposite effect to what he describes, so I think it is not a complete science yet. For example, in Lone Echo I have to bring the pop-up monitors right up to my face before I can read the text on them.

I am long-sighted, so that may be a reason, but this “accommodation-vergence conflict” clearly does not affect everyone in the same way. I also remove my reading glasses before putting on the HMD, as I do not want to scratch the lenses.

2 Likes

The “focus” issue, where we naturally and comfortably shift focus between near and distant objects, is quite interesting.

After the link that @Davebobman posted I started to read into the vergence-accommodation conflict some more. I am a complete beginner here, but I gather this is how “focus” works: vergence plus accommodation equals focus, from a bodily-function perspective, and the conflict arises when either vergence or accommodation breaks sync, or does not change even though the visual input from VR changes.

In VR, a single panel per eye can only take us so far toward what our eyes expect for vision; after all, this is faked 3D on a 2D plane (the LCD panel). One answer to a more naturally accepted VR experience is the use of light field displays.

Here is a snip from a poster, tikiman098, on a YouTube video ( The Light Field Stereoscope - SIGGRAPH 2015 - YouTube ) that I thought explains how they work quite well.

A light field display is one that can control light both in position and in angle. A conventional display can only control light by position (think pixel position, x,y, i.e. two dimensions). A light field display can control light by position (x,y) and also by emitted angle (a,b - you can call these angle coordinates whatever you want, but there are 2 coordinates), so that’s the four dimension or 4D part of it. When you have a light field display and you have these four dimensions to play with, and one display for each eye, you can reproduce 3D scenes which when viewed behave exactly like the original 3D scene - meaning a range of depths are simultaneously present (as is the case with natural scenes - a range of depths are always there, you just focus on whatever you choose, and the rest blurs accordingly). Big breath.

OK, so if we’ll agree that once you have a light field display, you can display scenes which span a range of depths (and all depths are simultaneously present), the next question is - how do we make a light field display? One way to make a light field display is … by stacking LCD panels. You can think of it like this: the first LCD panel controls light’s x,y position, and the second LCD panel controls the light’s a,b or emitted angle. Light from the backlight goes through BOTH panels before you see it. Each LCD panel kind-of acts like an old-school transparency film.

The key piece to understand is that the two LCD panels are NOT displaying two flat images, one near, one far. Instead, they are together encoding light’s POSITION AND ANGLE. Once you have control of position and angle, you can reproduce 3D scenes that behave like the natural world, in which MANY depths are SIMULTANEOUSLY present.
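
To make the position-plus-angle idea a bit more concrete, here is a rough Python sketch of my own (not from the video; the panel gap and pixel pitch are made-up numbers): in the multiplicative two-layer model, a ray leaving the front panel at (x, y) with angle (a, b) must also have crossed the rear panel at a position offset by the panel separation, so the pair of pixels it passes through jointly controls that 4D sample.

```python
import math

# Sketch of the stacked-LCD light field idea (illustrative only).
# A ray is described by where it exits the front panel (x, y) and its angle (a, b).
# Both panels attenuate the backlight multiplicatively, so the two pixels the ray
# crosses together control that one (x, y, a, b) sample of the light field.

PANEL_GAP_MM = 5.0  # assumed separation between the two LCD layers

def ray_intensity(front_panel, rear_panel, x_mm, y_mm, a_rad, b_rad,
                  pixel_pitch_mm=0.05):
    """Relative intensity of the ray (x, y, a, b) leaving the two-panel stack."""
    # Where the same ray crossed the rear panel, PANEL_GAP_MM behind the front one.
    x_rear = x_mm - PANEL_GAP_MM * math.tan(a_rad)
    y_rear = y_mm - PANEL_GAP_MM * math.tan(b_rad)

    def sample(panel, px_mm, py_mm):
        # Panels are plain 2D lists of transmittance values in [0, 1].
        i = int(round(px_mm / pixel_pitch_mm))
        j = int(round(py_mm / pixel_pitch_mm))
        if 0 <= j < len(panel) and 0 <= i < len(panel[0]):
            return panel[j][i]
        return 0.0

    # Backlight (taken as 1.0) * rear transmittance * front transmittance.
    return sample(rear_panel, x_rear, y_rear) * sample(front_panel, x_mm, y_mm)
```

Two rays leaving the same front pixel at different angles cross different rear pixels, which is where the extra two dimensions come from.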

I learnt something new today :slight_smile:

3 Likes

Indeed, higher resolution now gives the benefit of a better-quality virtual space. Near and far objects can appear to have proper focus, though due to the way the eyes process light there will still be some form of conflict, which light field displays should fix (in theory, for now).

1 Like

I would guess that you may be having issues with SDE (the screen-door effect) and that bringing things closer to your face is effectively increasing their resolution.

Also, the relative blurriness of objects is proportional to how far they are from the headset’s virtual image. For example, the Rift DK2 focuses at 1.4 m and the Rift DK1 focuses at infinity. As such, objects near the user would be much less blurry on the DK2.
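
As a back-of-the-envelope illustration of that point (the 1.4 m and infinity figures are from the line above; the rest is my own arithmetic), the strength of the mismatch for an object can be expressed as the difference, in diopters, between the object’s apparent distance and the headset’s virtual image distance:

```python
def mismatch_diopters(object_distance_m, virtual_image_m):
    """Gap between where vergence wants to focus and where the screen's light comes from."""
    return abs(1.0 / object_distance_m - 1.0 / virtual_image_m)

# An object held 0.5 m from the face:
print(mismatch_diopters(0.5, 1.4))           # DK2, 1.4 m focus  -> ~1.29 D
print(mismatch_diopters(0.5, float("inf")))  # DK1, infinity     -> 2.0 D
```

The smaller number for the DK2 lines up with near objects feeling less strained on it.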

Regardless, vergence-accommodation decoupling is caused by triangulation problems (getting both eyes to work together properly) with a simple solution: close one eye.

Doc-Ok article: Head-mounted Displays and Lenses

I am definitely not an optometrist, so I can’t tell you what effects this would have. A brief look into lenses should give you the answer, though.

1 Like

That is essentially the issue.

Brief overview of the article:

  • Vergence - both eyes turn to look at the object so that it is centered on the fovea (high-resolution central vision)
  • Accommodation - the eye lens deforms so that the light from the object is focused on the fovea.
  • Vergence-accommodation coupling - reflex built up in early childhood that automatically adjusts accommodation to match your eyes’ vergence (since “real images” always have the light coming from the same distance as the object).
  • Vergence-accommodation decoupling - the reflex is ignored after spending extended periods looking at the “virtual image” and will not come back until you spend some time looking at “real images” again.

The issue with VR is that there are objects at different distances (changing vergence) but the light only comes from one “virtual image”/screen (non-changing accommodation). Since this isn’t how your reflex usually works you suffer from blurriness and eye strain until the reflex becomes uncoupled.
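
As a rough worked example of that (the IPD and distances here are just illustrative numbers I picked): the vergence demand keeps changing with object distance, while the accommodation demand stays pinned to the panel’s virtual image.

```python
import math

IPD_M = 0.063            # assumed interpupillary distance
VIRTUAL_IMAGE_M = 1.4    # assumed fixed focal distance of the headset optics

def vergence_angle_deg(distance_m):
    """Angle between the two eyes' lines of sight for an object straight ahead."""
    return math.degrees(2.0 * math.atan((IPD_M / 2.0) / distance_m))

for d in (0.5, 1.4, 5.0):
    print(f"object at {d} m: vergence {vergence_angle_deg(d):.1f} deg, "
          f"accommodation stuck at {1.0 / VIRTUAL_IMAGE_M:.2f} D")
```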

Going back to Doc-Ok once again, it is possible to simulate proper accommodation. It is imperfect, but the idea is that you find the distance the user’s eyes are verged on and blur everything outside that distance. This is less effective when the object the user is focused on is already blurry.
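
For anyone curious what that depth-keyed blur might look like in practice, here is a speculative sketch (my own, not Doc-Ok’s code; the blur constants are invented): take the depth the eye tracker says the user is verged on, and blur each pixel in proportion to how far its depth is, in diopters, from that focus depth.

```python
import numpy as np

def accommodation_blur_radius(depth_map_m, focus_depth_m,
                              blur_per_diopter=6.0, max_radius_px=8.0):
    """Per-pixel blur radius (in pixels) for a gaze-contingent depth-of-field pass.

    depth_map_m   : 2D array of scene depths from the renderer's depth buffer.
    focus_depth_m : distance the eye tracker reports the user is verged on.
    """
    # Defocus grows with the diopter difference between each pixel and the focus depth.
    defocus_diopters = np.abs(1.0 / depth_map_m - 1.0 / focus_depth_m)
    return np.clip(defocus_diopters * blur_per_diopter, 0.0, max_radius_px)

# A separate blur pass (e.g. a variable-radius Gaussian) would then use these radii.
depths = np.array([[0.5, 1.4], [3.0, 10.0]])
print(accommodation_blur_radius(depths, focus_depth_m=1.4))
```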

Note that this is different from foveated rendering, which blurs all objects in the user’s peripheral vision (regardless of distance).

Doc-Ok article (accommodation simulation): An Eye-tracked Oculus Rift

2 Likes

Yeah, eye-tracked dynamic blur has been discussed for a while now. Maybe even that will cause a conflict, though, as it is still a 2D effect across a flat panel (with simulated 3D depth from software); it is outside our natural automated ability, so some fighting must happen, as our eyes will be trying to calculate too. I suppose a powerful camera (not the tracking camera) is required inside the headset to monitor the cause and effect of all this research.

Just read that article and Doc-OK confirms this suspicion:

And looking at blurry images is really annoying, because it messes with the eyes’ accommodation reflex.

On a side note, I wonder if those holographic bookmark cards you can buy work like this: they have one image at one angle and a totally different image at another angle.

Just a terminology correction: it is not a “blur” effect or some kind of global object blur. Foveated rendering is simply rendering at lower resolution outside the foveal area for the purpose of lowering CPU/GPU demand.
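
To put the distinction in concrete terms, here is a minimal sketch of the foveated side (the region sizes and scale factors are invented for illustration): the resolution is chosen purely from the angular distance to the gaze point, with no reference to scene depth at all.

```python
def shading_scale(eccentricity_deg):
    """Fraction of full resolution to render at, based only on angle from the gaze point."""
    if eccentricity_deg < 5.0:     # foveal region: full resolution
        return 1.0
    if eccentricity_deg < 20.0:    # near periphery: half resolution per axis
        return 0.5
    return 0.25                    # far periphery: quarter resolution per axis

for e in (2.0, 10.0, 40.0):
    print(f"{e} deg from gaze -> render at {shading_scale(e):.2f}x resolution")
```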

1 Like

[quote=“D3Pixel, post:30, topic:5497”]…
Maybe even that will cause a conflict though as it is still a 2D linear effect across a flat panel (with simulated 3D depth from software) but it is outside our natural automated ability so some fighting must happen, our eyes will be trying to calculate too…
[/quote]

It might work better, though, and somebody over at r/oculus just stated an intention to possibly try it: provided you, e.g., physically moved the display panel back and forth (there are other options, such as a flexible lens) and adjusted the barrel distortion accordingly, the virtual distance produced by the optical path would match the one you are converging on.

Then the focal distance of the screen would follow your vergence, and the software bokeh would likely be much easier to accept. (EDIT: Still not perfect, but a considerable potential improvement.)

We are talking about rather small Z movements to produce the required diopter changes, and focus change should be a good bit less speed-sensitive than X and Y tracking…

EDIT: The reddit thread, mentioned above: https://www.reddit.com/r/oculus/comments/86s410/a_simpler_solution_to_vergenceaccomodation/
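
For a sense of how small those Z movements are, here is a rough thin-lens estimate (a single ideal lens is assumed, and the 40 mm focal length is an arbitrary illustrative value): to put the virtual image at the distance the eyes are converging on, the panel has to sit just inside the lens’s focal length, and sweeping the image from infinity to 0.5 m only moves it a few millimetres.

```python
def panel_distance_mm(virtual_image_m, focal_length_mm=40.0):
    """Thin-lens estimate of the panel-to-lens distance that places the panel's
    virtual image at virtual_image_m (the panel sits just inside the focal length)."""
    if virtual_image_m == float("inf"):
        return focal_length_mm  # image at infinity: panel exactly at the focal plane
    d_mm = virtual_image_m * 1000.0
    # Virtual image case of the thin-lens equation: d_panel = f * d / (d + f)
    return focal_length_mm * d_mm / (d_mm + focal_length_mm)

for target in (float("inf"), 2.0, 1.4, 0.5):
    print(f"virtual image at {target} m -> panel {panel_distance_mm(target):.2f} mm from lens")
```

With these assumed numbers the full infinity-to-0.5 m sweep is about 3 mm of panel travel.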

2 Likes

This might actually have the opposite effect. We might suffer more eye fatigue because we can see near and far objects more clearly, so our eyes will be working harder to focus. By focus I mean the eye’s natural reflex to focus when an object is believed to be in the distance, though that might not apply once vergence and accommodation have decoupled due to the fixed distance of the panel.

Play sessions longer than 20 minutes will reveal this in the M1 test.

1 Like

Wow, what an interesting theory. So it would work like a camera lens’s auto-focus?

1 Like

Yep. Hope that person tries it, and publishes the outcomes. :slight_smile:

(EDIT: Camera frusta may need to be adjusted on the fly, too… :7)

2 Likes

Could that not be done in software?

1 Like

That bit is all software; I was referring to the two in-game “virtual cameras”, which need to match the screen/lens setup. When this optical path is no longer fixed, we probably need to be able to render a different FOV from one frame to the next, so that it matches the real-world display. It could be done via post distortion/reprojection, but I am a staunch proponent of always wanting 90 “real”, “proper” frames per second, with reprojection only being allowed in emergencies. :7
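
To make that concrete, here is a rough sketch (idealized thin-lens geometry, my own illustrative numbers, not any actual SDK code) of what rendering a different FOV from one frame to the next could look like: treating the eye as sitting at the lens, the tangent of the apparent half-angle of the magnified screen is roughly panel half-size divided by panel-to-lens distance, so the projection matrix has to be rebuilt whenever the panel moves.

```python
def per_frame_projection(panel_half_w_mm, panel_half_h_mm, panel_to_lens_mm,
                         near=0.1, far=1000.0):
    """OpenGL-style perspective projection matched to the current optical FOV.

    Idealized model with the eye at the lens: the tangent of the apparent half-angle
    of the virtual image is roughly panel_half_size / panel_to_lens_distance, so a
    varifocal panel movement changes the FOV the renderer must use each frame.
    """
    tan_x = panel_half_w_mm / panel_to_lens_mm
    tan_y = panel_half_h_mm / panel_to_lens_mm
    return [
        [1.0 / tan_x, 0.0,         0.0,                          0.0],
        [0.0,         1.0 / tan_y, 0.0,                          0.0],
        [0.0,         0.0,         -(far + near) / (far - near), -2.0 * far * near / (far - near)],
        [0.0,         0.0,         -1.0,                         0.0],
    ]

# Rebuilt every frame as the (hypothetical 120 x 68 mm) panel moves for varifocal focus.
projection = per_frame_projection(60.0, 34.0, panel_to_lens_mm=39.0)
```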

1 Like

Edit: I thought we were talking about making the lens or the panels move (Z) in tiny incremental motorized movements to match the dynamic, eye-tracked software blur, so that our eyes do not have as much accommodation conflict, much like how auto-focus on a DSLR works. You are talking about simulating all that in software? That is the bit I will have to get my head around, as it still causes our eyes to fight what they are seeing.

OR

Are you saying that bit is correct, but the frustum needs to be handled in software? That is the bit I do not understand. DSLR auto-focus does not change the size of the image; only optical zoom does that. But is that because a camera has more than one lens, so auto-focus not only moves forwards and backwards but also moves lenses together, and that is the bit that needs to be simulated in software? (I need to read up on how this works, I guess.)

EUREKA

I know what you mean now. Ignore all that ^
If the view in software does not match the micro motorized movement of the panel (or lenses?), then it will of course look like it is shaking in place (presumably).

[Deleted old text]

1 Like

True, our closest non-VR reference is true 3D movies. Could try a marathon. Lol

Now, with the 4K (60 Hz), I find I can stay in Ethan Carter with the VR DLC for easily a couple of hours; but I might be a mutant or something. Lol

Oculus & Vive users found it quite nausea-inducing, but this might also be down to getting used to “mouse look”, since when walking you move in the direction you’re looking. I adjusted by using a combination of the controller and moving my head.

But it is very nice looking in VR.

1 Like

Something like that, I’d expect. Can’t help but bring to mind the old horror film effect, where the camera crew dollies and zooms on the subject in sync, so that it looks like the corridor a pursuee is fleeing down stretches out ahead of them, negating any ground they try to put between themselves and the monster, and making the door to safety race away into the distance. :7

Sorry about how hard it is for me to get a point across in a clear manner. :7

1 Like

haha, yes. I know what you mean.
You made a lot of sense so I assumed it was my lack of knowledge that had me in a twist. I eventually caught up though :smiley:

Most of my inaccuracies were for the sake of brevity, considering that I was basically writing a TLDR for Doc-Ok’s articles.

At first glance, it sounded to me like accommodation simulation and foveated rendering were doing the same thing. My intention was to briefly point out the difference to anyone who already knew about foveated rendering.

This might be a better synopsis:

  • Blurring is the process by which accommodation simulation is achieved.
  • Blurring is a by-product of foveated rendering, whose process is to render fewer pixels in the periphery.