Yes, Sweviver for COO of Europe.
Now that I’m thinking about it, when was the last time we saw Norm and Jeremy from Tested do their VR show (Edit: Projections is what it was called)?
I used to love watching their stuff. I don’t think anything from Tested VR has even popped up in my feed for ages?
The last Tested review I saw was around the release of HL:Alyx. I mean, I don’t know if there were more after that; I just haven’t seen any hint of them. Do they even cover VR anymore?
Funny, you must have written this at the exact time I was thinking of the same question.
Another unrelated shower thought: I’m going to check that YouTube stats channel to see how the MRTV podcast has been doing since the FReality podcast went on hiatus.
I never realized how much I liked having that podcast on during my Saturdays.
They still have “The VR Minute!” segment in their “This is Only a Test” discussion show.
HTC Vive Pro 2: maximum FOV? (with measurements in the lab)
Note: turn on the captions to see the English translation.
I can imagine why you would want that, and I can also imagine why you won’t get it. What Doc-ok did is a pretty straightforward, results-oriented, systematic approach, which works when you are able to describe the lens (distortion) by a known equation (hopefully the “usual” one). I understand he did the lens “reconstruction” to get the warping formula and consequently the smooth PPD graphs (which are cool, btw), but technically you do not need to do that (and for a more complex lens you cannot do it anyway). You can still use the rendered (distorted) image to calculate the PPD.
And while you are right that this method relies on how correct the actual undistortion function implemented by the manufacturer is, I believe it is still a better tack than trying to capture a camera feed of the live (rendered) session and derive the PPD from it. The reason is that while you can calibrate the (measuring) camera, it may still be harder to detect the necessary features in the captured shot than when your CV algorithms work directly on the rendered image.
Plus you will be facing the potential (mis)focus of the lens and the unknown impact of a (variable) eye relief.
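A minimal sketch of that idea in Python, with a made-up radial model standing in for the real mapping (which would come from the runtime’s distortion mesh/shader): map panel pixels through the undistortion into tangent space, convert to visual angle, and take PPD as the inverse of the local degrees-per-pixel.

```python
import numpy as np

# Hypothetical undistortion: maps a horizontal panel coordinate u in [0, 1]
# (one eye, centre scanline) to tangent space, i.e. tan(visual angle).
# The cubic radial model and its coefficients are purely illustrative.
def undistort_u_to_tan(u):
    half_fov = np.radians(50.0)   # assumed optical half-FOV
    k1 = 0.25                     # assumed radial distortion coefficient
    r = (u - 0.5) * 2.0           # -1 .. 1 across the panel
    return np.tan(half_fov) * (r + k1 * r**3)

panel_width_px = 2448             # assumed per-eye horizontal resolution

u = np.linspace(0.0, 1.0, panel_width_px)
theta_deg = np.degrees(np.arctan(undistort_u_to_tan(u)))  # visual angle at each pixel

deg_per_px = np.gradient(theta_deg)   # sample spacing is exactly one pixel
ppd = 1.0 / deg_per_px                # local pixels-per-degree profile

print(f"centre PPD ~ {ppd[panel_width_px // 2]:.1f}, edge PPD ~ {ppd[10]:.1f}")
```

Because the mapping is only sampled, never inverted analytically, the same loop works for lenses whose distortion has no closed-form description, which is the point above.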
It might be good to use the camera to verify the geometry of the image, i.e. project a grid in VR and use the camera to verify that it is indeed observed as a grid and not as a squashed-and-stretched patchwork, but using a camera (even in the abstract form of a calibrated pinhole camera) will still leave some questions about whether what you get is really what you want.
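For the grid check itself, a rough sketch of what that verification could look like with OpenCV, assuming the rendered pattern is a chessboard and using a hypothetical file name for the through-the-lens photo: detect the corners, fit a homography to the ideal grid, and treat large residuals as leftover squash/stretch.

```python
import cv2
import numpy as np

# Assumed setup: a chessboard with 9x6 inner corners rendered in the headset and
# photographed through the lens with a calibrated camera; the file name is hypothetical.
cols, rows = 9, 6
img = cv2.imread("through_the_lens.png", cv2.IMREAD_GRAYSCALE)

found, corners = cv2.findChessboardCorners(img, (cols, rows))
assert found, "grid not detected in the capture"
observed = corners.reshape(-1, 2)

# Any projective view of a flat grid is a homography, so the residuals left after
# the best-fit homography measure how "un-grid-like" the observed pattern is.
ideal = np.array([[x, y] for y in range(rows) for x in range(cols)], dtype=np.float32)
H, _ = cv2.findHomography(ideal, observed)
reproj = cv2.perspectiveTransform(ideal.reshape(-1, 1, 2), H).reshape(-1, 2)

residual = np.linalg.norm(reproj - observed, axis=1)
print(f"mean residual {residual.mean():.2f} px, max {residual.max():.2f} px")
```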
After reading and listening to the objective takes on the Vive Pro 2’s FOV, I am leaning toward the conclusion that it is not that the vertical FOV is terrible; it is that the rectangular shape of the lenses creates an unnatural, subliminal effect on the user. Pimax doesn’t fall into that category thanks to its much larger FOV real estate, but the Vive Pro 2 still hovers around a similar FOV to all modern consumer headsets, which makes it quite obvious (that excludes Pimax, as it is an enthusiast HMD).
Just my early assumption.
I’ve read MRTV’s pre-written review, and it is a very good one, showing both the good and the bad. For example, he speaks about and gives examples of the color palette, which beats even the G2 (but not OLED), criticizes the VP2’s microphone quality, and says its audio can’t keep up with the Index/G2. He also gives modding tips and looks at use and upgrade cases: when and why it would make sense, and when and why not.
I mention these as examples: the review is very balanced, which surprised me. It is by far the most thorough I’ve read/watched yet, much to the taste of the enthusiasts here, I imagine, because it answers the questions we’re interested in (sweet spot, FOV, clarity range, mods, etc.).
You’ll see once it’s public.
P.S. 96° vFOV almost matches Pimax’ 100°, am I wrong? We’ll “mod” the VP2 slightly to get there. Very interesting.
Well, I am of course postulating that the measuring equipment would get to put whatever calibration imagery it wants on screen, raw, without interference from the VR runtime and its predistortion shader – possibly painstakingly testing every pixel one by one, with different values. That should also relatively easily let it estimate optical acuity, refractive indices, and internal reflections. :7
…and whilst a camera is of course not an eye, and does not behave like one, I would most certainly expect anybody-who-knows-what-they-are-doing building a distortion profile to have done that with a calibrated camera, as opposed to just relying on their formulas for the lens and/or “eyeballing it”, and then again when calibrating each unit.
Just one comment on the eye relief and its impact on the visible FOV. Moving the eyes closer to the lenses will naturally give a bigger FOV, the same way moving closer to a window gives a bigger view (out of the window). Changing the eye relief also has an impact on the distortion, though; how big depends on the lens itself. Low-FOV lenses can be pretty insensitive to eye relief changes, in which case the closer you can get to the lens, the better.
High-FOV lenses, on the other hand, can be less forgiving, and changing the eye relief may change the distortion in the high-FOV parts to the point that it distorts the geometry of the scene. This is where Pimax (and other high-FOV headsets) suffer the most. Unless you can reliably determine the eye relief of the user (and adapt the undistortion algorithm to that particular value), you risk that the “universal” value will not fit, producing unnatural undistortion and therefore a distraction.
In the end it is all about trade-offs, eye-box (robustness) vs. high FOV (but unstable), etc.
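A toy back-of-the-envelope for the “closer to the window” point, treating the lens as a circular aperture of radius r seen from the eye at distance d, so the aperture-limited half-angle is atan(r/d); the numbers are illustrative, not measured values for any headset:

```python
import math

lens_radius_mm = 20.0   # assumed clear-aperture radius of the lens

for eye_relief_mm in (10, 14, 18, 22):
    half_angle = math.degrees(math.atan(lens_radius_mm / eye_relief_mm))
    print(f"eye relief {eye_relief_mm:2d} mm -> aperture-limited FOV ~ {2 * half_angle:.0f} deg")
```

It captures only the geometric gain from moving closer; the distortion stability described above is exactly what this simple model leaves out.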
I really wish more people would check the stereo overlap when taking their subjective FOV measurements, by the way.
Indeed, that helps promote greater stereoscopy.
With all of this talk of FOV, I feel like revisiting my old wide-FOV headset prototype (2013): revamp the casing and use RGB OLEDs.
I remember seeing this back in the day. So cool, man.
I wish there were a way to use angled mirrors in an HMD for an old-school mirror-box-like effect, so you could use four OLED screens with no bezels, etc.
Right, I wish I had time for prototyping. My recent goal, when considering a return to prototyping, was a new design that would try to eliminate the black border when looking down (where you can see the floor through the gap by the nose area).
While Pimax, Xtal, and StarVR do have excellent wide FOV, they are all severely deficient at getting rid of the lower border of the FOV. That’s why, regardless of how wide the FOV is, you still feel like you are seeing the world through some form of helmet or mask.
I have sketched ideas to solve it but haven’t found anything yet that would work.
96° is indeed close, with a difference of only 6° from Pimax’s rendered FoV. The numbers MRTV posted are the rendered FoV, vertical and horizontal. To that end, a mod will not yield more than what is rendered, as the rendered FoV is the maximum possible perceived FoV.
I am more curious about the stereo overlap, as the per-eye vertical and horizontal FoV should be close to the same value. From my understanding, there are only two ways to achieve a higher total horizontal value: decreased stereo overlap between the eyes, or a distorted, stretched image akin to what Tested initially described as a 4:3 TV image stretched to 16:9. If it is a stretched warp, that might be why some have said it looks blurrier than the G2, due to correcting the stretched image back to the lens-projected image.
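A quick bit of bookkeeping for that trade-off, with illustrative numbers rather than measured VP2 values: total horizontal FoV = 2 × per-eye FoV - binocular overlap, so widening the total at a fixed per-eye FoV necessarily costs overlap.

```python
# total_hfov = 2 * per_eye_hfov - overlap; all values are illustrative guesses.
per_eye_hfov = 96.0   # assumed per-eye horizontal rendered FoV, degrees

for overlap in (96.0, 84.0, 72.0):
    total_hfov = 2 * per_eye_hfov - overlap
    print(f"overlap {overlap:.0f} deg -> total horizontal FoV {total_hfov:.0f} deg")
```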
Indeed, with Vrgineers’ HeroVR the FoV had a range of 150° to 170° horizontal, IIRC, due to the eye relief, so they likely also adjusted the distortion accordingly. Not sure if they continued this feature in the Xtal models.
I literally just realized a new use for a mirror. We are getting to the point now of inkjet-printable OLED screens. You could use mirrors and custom printing (offsetting the pixels of each panel) to get a cascaded OLED display, just like a cascaded LCD, but self-emissive and without the brightness loss.