Varjo isn’t in the market for gamers and the product isn’t ‘promoted to the masses’ because the masses wouldn’t be able to afford it nor would they be able to use it.
It's well known that ACC is a worst-case example of a simulation's VR implementation.
Using VR with reprojection is IMO a no-go: run it natively, or buy/wait for a better GPU or rendering technique.
OpenFSR will work on any headset that uses OpenVR (SteamVR) as long as the game is using DX11. It is reported that Alyx and Squadrons don't work, as they don't like openvr.dll being swapped.
It would be interesting to see how StarVR manages one cable per eye while rendering 4 viewports (2 viewports per eye). It must be something at their driver level. Maybe a custom renderer?
…which allows for vendor extensions. If for some reason OpenXR limits a device's potential, that should be worked through by the vendor and the standard holder; I don't think there's any insurmountable difficulty in implementing it. As I see it, they use OpenXR (https://developer.varjo.com/docs/openxr/openxr), so the problem might not be there. Pimax, for example, has a SteamVR extension; I bet Varjo just doesn't want to implement its own SteamVR plugin since that isn't their focus. The reasons don't matter: as it stands, the HMD's performance is garbage. They at least need good reprojection, and it would be good to have something like the FFR Pimax has, and yes, that works fine with SteamVR.
And these extensions do not need to be specifically addressed by application developers? They will automagically retroactively work on older games, which run on a pre-OpenXR API?
Reprojection isn't a game-dependent feature. As for resolution, at least OpenXR-based games should work, but I guess this isn't about OpenXR but about the SteamVR renderer, and Pimax made their own for these purposes.
I think this is a hit-and-miss scenario. The Focus Displays are a hardware FFR concept, so like Nvidia-flavoured FFR it will be hit and miss.
What we know primarily is that it is very early days for their software and driver stack. The headset overall is a great glimpse into future possibilities.
The context displays are already quite amazing on their own. It might be simpler to go back to the original cascaded-display idea and use dual stacked, cloned displays to improve sharpness, clarity and response across the entire view area without the extra overhead, as in Nvidia's original paper.
Varjo was quite clear they're not targeting standard users but enterprise, as its specific intended use at present isn't really gaming.
Actually, I see they have already developed OpenXR extensions for quad views and foveated rendering, so it seems a SteamVR plugin is what's missing to get more potential out of the device. Nowadays VR without reprojection is barely usable; if you could ask whether they have any plans to work on it, that would be interesting to know.
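For what it's worth, here's a minimal sketch (my own, not Varjo's sample code) of what opting into those extensions looks like on the application side with the plain OpenXR C API from C++. The extension names and calls are the ones defined in the OpenXR registry; the application name and overall structure are just placeholders.

```cpp
// Minimal sketch: check whether the runtime exposes the Varjo quad-view and
// foveated-rendering extensions, then create an instance with them enabled.
// Illustrative only -- a real app also needs a session, swapchains and error handling.
#include <openxr/openxr.h>
#include <cstdio>
#include <cstring>
#include <vector>

static bool hasExtension(const std::vector<XrExtensionProperties>& exts, const char* name) {
    for (const auto& e : exts)
        if (std::strcmp(e.extensionName, name) == 0) return true;
    return false;
}

int main() {
    // 1. Ask the runtime which extensions it supports.
    uint32_t count = 0;
    xrEnumerateInstanceExtensionProperties(nullptr, 0, &count, nullptr);
    std::vector<XrExtensionProperties> exts(count, {XR_TYPE_EXTENSION_PROPERTIES});
    xrEnumerateInstanceExtensionProperties(nullptr, count, &count, exts.data());

    std::vector<const char*> enabled;
    if (hasExtension(exts, XR_VARJO_QUAD_VIEWS_EXTENSION_NAME))
        enabled.push_back(XR_VARJO_QUAD_VIEWS_EXTENSION_NAME);
    if (hasExtension(exts, XR_VARJO_FOVEATED_RENDERING_EXTENSION_NAME))
        enabled.push_back(XR_VARJO_FOVEATED_RENDERING_EXTENSION_NAME);

    // 2. Create the instance with whichever Varjo extensions were found.
    XrInstanceCreateInfo ci{XR_TYPE_INSTANCE_CREATE_INFO};
    std::strcpy(ci.applicationInfo.applicationName, "QuadViewDemo");  // placeholder name
    ci.applicationInfo.applicationVersion = 1;
    ci.applicationInfo.apiVersion = XR_CURRENT_API_VERSION;
    ci.enabledExtensionCount = static_cast<uint32_t>(enabled.size());
    ci.enabledExtensionNames = enabled.data();

    XrInstance instance = XR_NULL_HANDLE;
    if (XR_FAILED(xrCreateInstance(&ci, &instance))) {
        std::puts("xrCreateInstance failed");
        return 1;
    }
    std::printf("Instance created with %u Varjo extension(s) enabled.\n", ci.enabledExtensionCount);
    xrDestroyInstance(instance);
}
```

The point being: this is per-application work, not something a runtime can retrofit onto games that never ask for the extension.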
From what I remember, in the StarVR demo with the robot the quad technique was used, but in SteamVR games it looked worse than in the demo, as it seems they took the output from 2 cameras and projected it to 4 views. So indeed it seems they did some trick on the driver side. But isn't Varjo doing the same, since they seem to copy 2 displays to 4 if the resolution is enough? I would definitely wish this device to succeed and bring the first ET-capable device for gaming to the market, but I'm afraid it won't be any different from the HTC Vive Pro Eye; HTC also referred to privacy issues with ET tech and the law, as there are not only tech-related issues.
They're a bit different, in that one is divided into 4 views that align, so to speak, like a panorama picture. But yes, it's playing with camera views.
The Varjo, on the other hand, has a display in the middle of a display, and the resolutions don't match, which creates more complex issues.
From what I've read, I'm personally more interested in a Varjo without the focus displays; if keeping dual displays, then the original Nvidia cascaded displays. But honestly, from what others have said, just the context displays with aspherical lenses and the distortion fixed would be quite sufficient.
Well, yes - reprojection is a post-processing effect (…although a good version of it could very probably benefit greatly from a little extra, normally unavailable, information from the game, such as gameworld motion data and the depth buffer - pretty sure Oculus' ASW2 does lean on those).
…but additional viewports are another matter, and I strongly suspect, sadly, that games will need to be specifically written to use any OpenXR extensions that add them. Would be delighted to be proven wrong. :7
(Personally I will still choose to suffer dropped frames, over synthetic frames (…although I will accept the simple rotational “reprojection” - just not the video extrapolation). Cholera is bad, but nowhere near as bad as plague. :P)
My takeaway from what I read people writing is that:
The StarVR does not have 4 views. It has one screen per eye. You can render 4 views in game and scale and composite them as appropriate to the two screens, in order to not waste undue rendering power on the periphery, and you could do exactly the same with Pimax headsets. Don't know whether Pimax's Unity plugin (…for going directly to Pimax's drivers, without SteamVR) offers the functionality prefabbed… -Pimax techs? -Armin? -Marcin?
You can likewise render 4 views for the Varjos, but they actually have the 4 screens to take those 4 rendered views. From posts in this thread, it appears that if you use an API that does not allow for more than 2 views, and your render targets are too small to actually make use of the resolution the focus displays have, the focus displays are not switched on at all, and the views display fully on the context displays. If your render target size is large enough, the parts of it that go to the focus/context displays are scaled appropriately to each. (A small sketch of querying those four views follows after these points.)
No idea exactly how the focus display is overlaid on the context one. I have been assuming it is done by projection/beamsplitter, but on the other hand: Nokia holds some rather valuable waveguide patents, apparently… :7
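To make the "render targets too small" point concrete: under OpenXR an application that wants the four views has to explicitly ask for the quad view configuration and its recommended per-view sizes. A minimal sketch, continuing the instance sketch earlier in the thread (the view-configuration and struct names come from the OpenXR registry; the wrapper function is mine):

```cpp
// Sketch: given an instance created with XR_VARJO_quad_views enabled, query the
// quad view configuration -- expect 4 views (2 context + 2 focus), each with its
// own recommended render-target size.
#include <openxr/openxr.h>
#include <cstdio>
#include <vector>

void printQuadViewSizes(XrInstance instance) {
    XrSystemGetInfo sysInfo{XR_TYPE_SYSTEM_GET_INFO};
    sysInfo.formFactor = XR_FORM_FACTOR_HEAD_MOUNTED_DISPLAY;
    XrSystemId systemId = XR_NULL_SYSTEM_ID;
    if (XR_FAILED(xrGetSystem(instance, &sysInfo, &systemId))) return;

    uint32_t viewCount = 0;
    xrEnumerateViewConfigurationViews(instance, systemId,
        XR_VIEW_CONFIGURATION_TYPE_PRIMARY_QUAD_VARJO, 0, &viewCount, nullptr);

    std::vector<XrViewConfigurationView> views(viewCount, {XR_TYPE_VIEW_CONFIGURATION_VIEW});
    xrEnumerateViewConfigurationViews(instance, systemId,
        XR_VIEW_CONFIGURATION_TYPE_PRIMARY_QUAD_VARJO, viewCount, &viewCount, views.data());

    // A game written only against the 2-view stereo configuration never gets here,
    // which matches the behaviour described above (focus displays staying off).
    for (uint32_t i = 0; i < viewCount; ++i)
        std::printf("view %u: recommended %ux%u\n", i,
                    views[i].recommendedImageRectWidth,
                    views[i].recommendedImageRectHeight);
}
```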
I’ve been thinking that, too: Just the lenses and the context displays, and none of the other expensive stuff, would make for a rather neat headset, but the question is: For how long. I guess nobody else seems interested in eschewing the fresnels, so the advantage of low glare would probably remain, but how long will it be until resolution like that of the context displays is commonplace, and then exceeded? :7
Just replace the existing openvr.dll file with the mod (there's a small sketch of the swap below). Best to save the old one - I just changed its extension from .dll to .old.
Beyond that, I know nothing! I’m trying to figure out how to assign the hack’s parameters.
It needed nothing in DCS - but I did have to raise pixel density in the VR settings to get a nice compromise between visual quality and FPS.
Cheers!
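In case it helps anyone script that, here's a tiny C++17 sketch of the same backup-and-swap step; every path and file name below is a placeholder for your actual game folder and the mod's DLL.

```cpp
// Minimal C++17 sketch of the swap described above: keep a backup of the game's
// original OpenVR DLL, then copy the mod's DLL into its place.
// All paths and names are placeholders -- adjust to the actual game and mod files.
#include <filesystem>
#include <iostream>

namespace fs = std::filesystem;

int main() {
    const fs::path gameDir  = "C:/Games/SomeSim/bin";             // hypothetical game folder
    const fs::path original = gameDir / "openvr_api.dll";         // whatever OpenVR DLL the game ships
    const fs::path backup   = gameDir / "openvr_api.old";         // keep the stock one, as suggested above
    const fs::path modDll   = "C:/Downloads/OpenFSR/openvr_api.dll"; // the mod's replacement DLL

    try {
        if (!fs::exists(backup))
            fs::rename(original, backup);   // rename .dll -> .old once, so it can be restored later
        fs::copy_file(modDll, original, fs::copy_options::overwrite_existing);
        std::cout << "Modded DLL in place; rename the .old file back to undo.\n";
    } catch (const fs::filesystem_error& e) {
        std::cerr << "Swap failed: " << e.what() << '\n';
        return 1;
    }
}
```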
Unfortunately, while you are 100% right that native frames are always preferable to synthetic ones, they're the reality of modern flat-panel displays for any application that prioritizes low latency or, crucially for VR, low persistence of vision.
The way an LCD displays images often requires synthetic frames, because getting the persistence low enough for the average human not to notice requires either pulsing the backlight, brute-forcing higher and higher refresh rates, or a combination of these together with reprojection.
Even if we want only native frames, consistency of frame-to-frame latency is crucial to perceiving smooth motion and head-tracking "presence," i.e. reprojection gets used as a "bandaid," as you put it. We also have to worry about LCDs' slow pixel transition speed.
For an LCD to naturally have the image persistence of a good fast-decaying phosphor like you would find on a high-end CRT display (less than 0.5 ms), you would need a brute-force refresh rate of roughly 1000 Hz or more to avoid flicker, and even then the panel would need to be able to switch that fast to really see the benefit.
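To put rough numbers on that (my own back-of-the-envelope arithmetic, not measurements):

```cpp
// Back-of-the-envelope sketch: on a plain sample-and-hold panel, persistence equals
// the frame period, so the refresh rate needed for a given persistence target is
//   refresh_hz = 1000 / persistence_ms.
// Strobing the backlight instead keeps the panel at, say, 90 Hz and just shortens
// the lit portion of each frame.
#include <cstdio>

int main() {
    const double targets_ms[] = {2.0, 1.0, 0.5};   // example persistence targets
    for (double p : targets_ms)
        std::printf("%.1f ms persistence -> ~%.0f Hz sample-and-hold refresh\n",
                    p, 1000.0 / p);

    // Same 0.5 ms persistence on a strobed 90 Hz panel: 0.5 ms lit out of an
    // ~11.1 ms frame, i.e. roughly a 4.5% backlight duty cycle.
    const double frame_ms = 1000.0 / 90.0;
    std::printf("strobed 90 Hz: %.1f%% duty cycle for 0.5 ms persistence\n",
                100.0 * 0.5 / frame_ms);
}
```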
Transition speeds are less of a problem for OLED, but OLED has its own issues.
The Pimax pulses the backlight while running the HMD at decently high refresh rates, and as we know even the best GPUs struggle to hit native FPS at the supersampling that gives great quality.
It's 100% true that reprojection is an ugly little crutch, but it's the fault of the display technologies we use.
Just a small correction: the Index has a much lower input resolution, and SS works on top of that.
So even if you push SS to the same resolution as the G2, you will still be supersampling a lower input resolution. Same example with the 8K+ and 8KX: you can SS the 8K+ to the same resolution as the native 8KX, but the image won't be the same, and not even close to native 4K resolution. That's why you will always have better performance on an outdated HMD like the Index. Cheers
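A rough illustration of that point, using the commonly quoted per-eye figures (treat the exact numbers as approximate):

```cpp
// Rough sketch of the "SS on top of a lower input resolution" point.
// Per-eye figures below are commonly quoted ones and only approximate.
#include <cmath>
#include <cstdio>

int main() {
    const int indexW = 1440, indexH = 1600;   // Valve Index panel, per eye
    const int g2W    = 2160, g2H    = 2160;   // HP Reverb G2 panel, per eye
    const int k8pW   = 2560, k8pH   = 1440;   // Pimax 8K+ input, per eye (upscaled to a 3840x2160 panel)
    const int k8xW   = 3840, k8xH   = 2160;   // Pimax 8KX native input, per eye

    // Per-axis supersampling needed for an Index render target to reach the G2's
    // native pixel count -- the panel still only shows 1440x1600 of it per eye.
    const double ssAxis = std::sqrt((double)g2W * g2H / ((double)indexW * indexH));
    std::printf("Index needs ~%.0f%% SS per axis to match the G2's pixel count,\n"
                "but the displayed image stays capped at %dx%d per eye.\n",
                100.0 * ssAxis, indexW, indexH);

    // Likewise, supersampling an 8K+ up to the 8KX's render resolution still feeds
    // the headset only its lower native input per eye before the internal upscale.
    std::printf("8K+ input %dx%d vs 8KX native %dx%d per eye.\n", k8pW, k8pH, k8xW, k8xH);
}
```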
But 90 Hz with a very good LCD like the ones in the VR-3's context displays passes the threshold for me, at least as far as display frequency goes. More is surely better, but 90 Hz works.
And those 90 Hz I personally want matched with 90 fps natively: no frame drops and most definitely no approximated synthetic frames. For some reason I can absolutely tell the difference even with ASW2, and I prefer to use a resolution upscaler, upgrade my GPU, overclock or, there, I said it, turn down some settings before I want to rely on the 'bandaid'.
But yes, with higher and higher refresh rates it's not just the GPUs that are taxed; CPU IPC is becoming the constraining factor again.