Brainwarp, skipping an eye vs Interlacing

So Brainwarp works by halving the computational demand: only one eye is shown at a time, alternating between eyes at 80/90 frames per second.

So what happens to the eye that is being skipped? Does it hold the previous frame, or is it blanked?

If it holds the previous frame, doesn't that add two frames of latency to that eye's motion-to-photon time? That would cause a strange effect in fast motion, wouldn't it?

If it displays a black frame, couldn't that create a strobe effect?

Why not just interlace both eyes at the same time for the same performance gain?

Video at 1080i shows few defects, so maybe this would work on the Pimax at higher resolutions and faster frame rates.
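To make the two options concrete, here is a minimal sketch (illustrative only, not Pimax specs) comparing the per-eye update schedules of "skip an eye" versus "interlace both eyes":

```python
# Hypothetical comparison of the two schemes discussed above.
# Numbers (80 Hz) are illustrative, not confirmed Brainwarp behaviour.

def alternating_eyes(n_frames, fps=80):
    """Each refresh updates only one eye; the other holds its last frame."""
    schedule = []
    for i in range(n_frames):
        t_ms = round(i * 1000 / fps, 2)
        eye = "L" if i % 2 == 0 else "R"
        schedule.append((t_ms, eye, "full frame"))
    return schedule

def interlaced_eyes(n_frames, fps=80):
    """Both eyes update every refresh, but only half the scanlines each."""
    schedule = []
    for i in range(n_frames):
        t_ms = round(i * 1000 / fps, 2)
        field = "even lines" if i % 2 == 0 else "odd lines"
        schedule.append((t_ms, "L+R", field))
    return schedule

for row in alternating_eyes(4):
    print(row)
# In the alternating scheme each eye only gets new content every second
# refresh (25 ms at 80 Hz), which is where the held-frame latency concern
# comes from; interlacing touches both eyes every 12.5 ms instead.
```

Both schemes render roughly half the pixels per refresh; the difference is whether the staleness lands on a whole eye or on alternating scanlines.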

Just a thought.

3 Likes

Honestly, we still don't know for a fact how it works. We don't even know if it works.

God knows I'd love to have details on it, but guessing the fate of Brainwarp and how it works (if at all) will have to wait for the end of the NDA.

6 Likes

If it shows the other eye black, then it's like active shutter glasses for 3D monitors/TVs, so it would reduce the perceived brightness. But as long as the frequencies are not too low, there will not be a strobe effect.

Active 3D monitors and TVs are normally 120 Hz to be able to send 60 Hz of image information to each eye.
So I believe it will be a bit different.
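The brightness and flicker points above can be put in simple numbers. A minimal sketch, assuming a shutter-style scheme where each eye is blanked half the time (which is an assumption, not confirmed Brainwarp behaviour):

```python
# Illustrative shutter-glasses arithmetic (assumed scheme, not confirmed):
# if each eye is blanked half the time, its average brightness halves,
# and each eye only sees every other panel refresh.

panel_hz = 120                 # typical active-3D panel refresh rate
per_eye_hz = panel_hz / 2      # each eye sees every other refresh
duty_cycle = 0.5               # fraction of time the eye is unblanked
relative_brightness = duty_cycle

print(per_eye_hz, relative_brightness)  # 60.0 0.5
```

So a 120 Hz panel gives each eye 60 Hz at half the perceived brightness, which is why shutter systems need high refresh rates to stay above flicker-sensitivity thresholds.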

1 Like

Finally someone asking intelligent questions; this is what I have been trying to point out to this forum ever since I came here…

Follow the white rabbit dude, I will not add anything that has not been already said/hinted.

‘Til, of course, you come to the point where you need a little more of a push, and ask… :stuck_out_tongue:

1 Like

My HOPE is that, in the fashion of other VR platforms, there are several variations.

Variation 1: no performance savings, but each eye is displayed at half phase of the other, and the renderer draws frames linearly from the original frames. This means the renderer only has to draw one eye per frame, which is less work, but at twice the speed. The effect is that each eye gets 80 unique, game-originated frames apiece.

Variation 2: each eye is still rendered at half phase of the other, but an always-on reprojection runs at 40 fps, with artificial frames boosting it to 80. Scenery and environments usually look decent, but occluded objects during motion, hands, or room-scale walking around will look juddery.
Effective performance savings, though.

Variation 3: same as 2, except data is grabbed from the depth buffer to do more accurate reprojection, which has fewer issues with occluded objects during motion and unpredictable movements.

That’s my best guess.
Variation 3 is the hardest to do I’d guess.

It'd be interesting if they could do always-on reprojection for only the outer portion of your field of view, since you're usually looking within your central vision when moving. Better yet would be combining this with variation 2 or 3, only "turning on" when frames are dropped. I personally want to see variation 1 come to fruition, because that's what's truly going to feel most like 160 fps VR. I don't think they will blank frames; they will reproject them.
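Variation 2 above is essentially a frame schedule. A tiny sketch of that schedule, with all numbers and names purely illustrative (nothing here is confirmed about Brainwarp):

```python
# Sketch of "variation 2": real frames rendered at 40 fps, with a
# synthetic reprojected frame inserted between each pair so the display
# still presents 80 fps. Illustrative only.

def frame_stream(seconds=0.1, render_fps=40, display_fps=80):
    frames = []
    n = int(seconds * display_fps)
    step = display_fps // render_fps   # every `step`-th frame is real
    for i in range(n):
        t_ms = round(i * 1000 / display_fps, 2)
        kind = "rendered" if i % step == 0 else "reprojected"
        frames.append((t_ms, kind))
    return frames

for f in frame_stream():
    print(f)
# Rendered and reprojected frames alternate: the GPU only draws 40 fps,
# but the display shows something new every 12.5 ms.
```

Variation 3 would keep the same schedule but feed depth-buffer data into the reprojection step, which is why it is harder but handles occlusion better.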

2 Likes

They haven't confirmed how Brainwarp works, and the poor illustration during the Kickstarter has, I think, created more confusion than clarity.

As for the concept that it's supposed to give us the perception of double the refresh rate: I don't see it working the way you and many others describe. I've always thought it would work by splitting the L/R refresh time and offsetting the start of each refresh by half a period, so the R refresh begins halfway through the L refresh. That way I can buy that, technically, 180 refreshes are perceived to occur per second (or 160 on the 8K, if we're stuck with 80 Hz).
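That half-phase-offset idea is easy to check with a few timestamps. A minimal sketch (pure speculation, just formalising the post above):

```python
# The "half-phase offset" idea: both eyes refresh at 80 Hz, but the right
# eye's scanout starts half a refresh period after the left's. Merged,
# *some* eye begins a refresh every 6.25 ms, i.e. 160 times per second.

fps = 80
period_ms = 1000 / fps                              # 12.5 ms per refresh
left  = [i * period_ms for i in range(4)]
right = [i * period_ms + period_ms / 2 for i in range(4)]
merged = sorted(left + right)

print(merged)
# [0.0, 6.25, 12.5, 18.75, 25.0, 31.25, 37.5, 43.75]
print(1000 / (merged[1] - merged[0]))               # 160.0 refresh starts/sec
```

Each individual eye still only refreshes 80 times a second; the claimed benefit is purely in the combined, perceived cadence.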

Someone else did an illustration that matches the way I think it works; I'll try to find it.

Obviously this is pure speculation, since we have nothing to tell us what is happening, but it makes some sense to me. And if I've invented a better method that they then use, I want royalties! :wink:

2 Likes

Found the image someone made.

5 Likes

Probably the best of the three ways, but it still can't be done with the current limitations imposed on commercial hardware and panels.

Nvidia and AMD made a small move in that direction with G-Sync and FreeSync, partially alleviating the software side of the problem and letting the graphics cards pump out as many frames per second as possible. But we're still at the point where the hardware limitations in the panels' control logic need to be removed (and it can already be done…), plus faster, unhindered, new-generation panels must be used (forget anything currently available on the PC market).

And whatever method you use to give the impression of more frames, keep in mind that there is a huge difference between what you consciously perceive with your eyes and what your brain does, and that has HUGE implications.

2 Likes

What you are describing in #3 sounds like Oculus's (probably patented) Asynchronous TimeWarp (ATW).

That is more or less what I am suggesting already: dividing the image up across both eyes, whether that is every other line or 50% of a progressive frame. We are on the same page, I think.

But not one eye at a time (L-on/R-off, R-on/L-off); instead, both eyes running normally but shifted out of sync by 50%.

Yeah. I said that here…

Although not out of sync, just interlaced :slight_smile: so 50% of the rendering, but interlaced.

Scanline interlacing, around since the first home computers…

Though it adds some heavy flickering to the final image unless you refresh the screen at very high speeds, and at that point we're back at the same problem: the refresh speed limits of current panels :slight_smile:

Again… pure speculation.

1 Like

This is a fascinating image that brings up a point about Brainwarp I had never considered: it would only work in the field of stereo overlap. So if you ran at 60 warped to 120, you would have to deal with 60 fps in your extremely motion-sensitive peripheral vision.

5 Likes

I think of interlacing as splitting the frame up into chunks (lines) and alternating them, whereas in this case it can just be the whole frame, still rendered at that moment in time. The PC has to render L and R separately anyway, so if they can be transmitted independently, it should be fine just alternating them.

How would that work in the rendering pipeline? The engine renders a full-size frame at X × Y, and it is that rendering process that costs the most.

As far as I know, those are the parameters of the rendering pipeline. If we could render the top 50% on one frame and then the lower 50% on the next frame, would that require an engine modification? Or is this not what you are suggesting?

In effect, this would require four cameras in the game engine: Left Top, Left Bottom, Right Top, Right Bottom?

If that is possible, then you could also render at full native size but skip every other line; on the next frame you do the same but offset the skipped lines. The result would build up a progressive image, as the interlaced lines fill in every other frame… or something :slight_smile:
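That line-skipping idea can be sketched in a few lines of code. Everything here is illustrative (tiny made-up resolution, stand-in "pixels"), just to show how two complementary half-resolution fields weave back into one progressive frame:

```python
# Sketch of the line-skipping idea above: render only half the rows each
# frame, alternating which rows, then weave the two fields together into
# a full progressive image. Purely illustrative.

HEIGHT, WIDTH = 8, 4

def render_field(frame_index, height=HEIGHT, width=WIDTH):
    """Render even rows on even frames, odd rows on odd frames."""
    offset = frame_index % 2
    # Stand-in for real rendering: tag each row with its source frame.
    return {row: [f"f{frame_index}"] * width
            for row in range(offset, height, 2)}

def weave(field_a, field_b, height=HEIGHT):
    """Combine two complementary fields into one progressive frame."""
    combined = {**field_a, **field_b}
    return [combined[row] for row in range(height)]

full = weave(render_field(0), render_field(1))
print(len(full))  # 8 rows, though each frame only rendered 4 of them
```

The catch, as noted above, is that adjacent rows in the woven frame come from different moments in time, which is exactly where interlacing artifacts and flicker come from during motion.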

No idea :stuck_out_tongue: I don't know enough (anything) about how that bit works… whether it still creates a single frame that is split up (like SBS/OU 3D) or whether it can render them separately. My assumption for VR HMDs with dual displays was that it was done separately?

Panels have moved on since the days of interlaced video, and we now have the bandwidth for progressive. Are today's panels fast enough that interlacing could make a comeback as a speed fallback, at the expense of quality?

My assumption is that the game engine has two cameras side by side; it renders both at the same time and produces a double-wide image that then gets processed for each eye after the double-width image has been rendered. That's why VR is more demanding than single-camera (normal) games.
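That "double-wide" assumption can be sketched as follows. All names, the tiny resolution, and the IPD value are illustrative stand-ins, not how any particular engine actually does it:

```python
# Sketch of the double-wide stereo assumption above: two cameras offset
# by an interpupillary distance each render one half of a single wide
# buffer, which is later split per eye. Illustrative only.

EYE_W, EYE_H = 4, 2
IPD = 0.064  # metres, a commonly cited average

def render_eye(cam_x):
    # Stand-in for a real renderer: tag each pixel with its camera offset.
    return [[cam_x] * EYE_W for _ in range(EYE_H)]

def render_stereo_double_wide():
    left = render_eye(-IPD / 2)
    right = render_eye(+IPD / 2)
    # Concatenate rows horizontally into one double-width image.
    return [l + r for l, r in zip(left, right)]

frame = render_stereo_double_wide()
print(len(frame[0]))  # 8: double the single-eye width
```

Whether an engine emits one wide buffer like this or two independent per-eye buffers is exactly the question raised above; with independent buffers, alternating or phase-shifting the eyes becomes much easier.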