Why can Pimax do high FOV when Meta can't?

I think Pico Neo 3 used curved displays.

Valve never said that; they have actually never stated the FOV of the Index. They just vaguely alluded to it being 20° larger than the standard of other headsets. I think that's a bit of a cop-out, but it's still better than giving specific numbers that aren't true.

Pimax needs to get on with sending you one!


Yes, that's right. The 120° figure never came officially, only from various specialist channels on the Internet and on YouTube. xD

The first generation of WMR was worse: only Samsung kept to its claimed values (around 100°). The competitors were actually all below 90° (e.g. Lenovo at 78° with the regular face foam, instead of the advertised 93° horizontal / 110° diagonal).

We don't need to say anything about the official Pimax FOVs either. I have the feeling the measurement was always taken from the lens, without ever considering that the eye sits significantly further from the panel.

The real culprit is that there is no standardized procedure, so everyone is right somewhere from their own point of view. That's why @risa2000 tried, with his tool, to at least derive unambiguous values from the render data in order to form a common basis.
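For the curious, the idea is roughly this (a minimal sketch, not risa2000's actual code, assuming the OpenVR SDK and an installed runtime): the runtime reports the tangents of the frustum half-angles it renders with, and the FOV falls out of the arctangents.

```cpp
// Sketch: deriving per-eye FOV from the render data, along the lines of
// what risa2000's tool does. Assumes openvr.h and a running OpenVR runtime.
#include <openvr.h>
#include <cmath>
#include <cstdio>

int main() {
    vr::EVRInitError err = vr::VRInitError_None;
    vr::IVRSystem* sys = vr::VR_Init(&err, vr::VRApplication_Background);
    if (!sys) return 1;

    // GetProjectionRaw returns the tangents of the frustum half-angles.
    // Sign conventions vary, so just sum the absolute half-angles.
    float l, r, t, b;
    sys->GetProjectionRaw(vr::Eye_Left, &l, &r, &t, &b);

    const double rad2deg = 180.0 / 3.14159265358979;
    double hFov = (std::atan(std::fabs(l)) + std::atan(std::fabs(r))) * rad2deg;
    double vFov = (std::atan(std::fabs(t)) + std::atan(std::fabs(b))) * rad2deg;
    std::printf("Left eye: %.1f deg horizontal, %.1f deg vertical\n", hFov, vFov);

    vr::VR_Shutdown();
    return 0;
}
```

Note these are render FOVs; what is actually visible through the lens can be smaller, which is exactly the discrepancy being argued about.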

But all things considered, it has never been claimed that the Crystal's FOV is greater than the Index's, only that it leaves the Aero behind. And that's actually how it is.
@Butters007 - if you want more, you must wait for the 12K.

Thank you for your insights! -Here's hoping more vendors jump on the generic extension bandwagon (…and that the extension is improved, if lacking in any way, to include whatever particularly useful "special sauce" one vendor or another may have in their proprietary API) – I am not only concerned with developers having to support multiple disparate devices today, no matter how simple and similar dealing with each and all may be, but also with next year's devices working, right off the bat, with their existing builds. :7

If you will allow a sort of follow-up question: Among other things, can "basic" OpenXR, without resorting to esoteric extensions, request, with some authority, that an application deliver more than two viewports, or two contiguous viewports segmented into multiple conjoined frustums?

(…Oh, and by the way… since we were talking eye tracking… According to some old study, whose existence dredged itself up from the silt of my mind, there is supposedly a non-negligible benefit in matching game camera offsets (and post-rendering reprojection) with the user's gaze, accounting for the slight shift in the positions of their pupils/eye lenses when looking around… Does OpenXR include any recommendations pertaining to that? (Both for runtime and application developers.))

Sorry for the inquisitiveness. :stuck_out_tongue:

Some immediate benefits that spring to mind, would be:

  • The option to curve one’s display in such a way that it matches the field curvature of the lens used, improving edge-to-edge clarity. -At least on one axis, assuming a flat, rectangular display curved more or less cylindrically.
  • Mitigating any view-angle-dependent colour/luma shifts that may be inherent to the display.
  • Allowing for a contiguous display that curves seamlessly around the user’s eye, instead of using e.g. multiple panels set edge-to-edge, or special optics that can spread the image on a single flat panel out over a larger FOV.

…but as for rendering… I’d assume any rendered projection can be mapped to any view projection, so I’m not sure there are any inherent savings to be made there – I imagine it should be possible to have them anyway… :7


Well, if you curve it in, you would have to render in a severe bow-tie formation, which is precisely what large-FOV rendering does now. I also see a problem with the pupil itself moving, as there may be no such lens that would accommodate the new angle for a given area… I'll leave that to the opticians…

You are trolling! All the time. Always provoking. For me it's disgusting.
here, take a :fish:


he didn't want mine :frowning:


you sure about that?

The 8KX has two 4K panels. 4K is only 2160p.

The concept of "views" is generic. The core standard only specifies mono and stereo, but a runtime can easily advertise more complex modes, like XR_MSFT_first_person_observer or XR_VARJO_quad_views. While these are vendor-specific, all they add is a single definition that lets you identify that the capability of 3 independent views or 4 viewports is available.

However, nothing can be requested "with authority". The application will be offered the option to use either normal stereo or, in this case, quad views. Supporting multiple viewports in a fully generic and optimized way in your app isn't trivial. For example, with the Varjo extension there is an underlying understanding that those viewports overlap, and therefore the app should stencil out the overlapping region to avoid rendering pixels that won't be consumed by a viewport because it is covered by another one. You shouldn't force an app developer to write all this code if they don't care about supporting a $7,000 professional-grade headset.
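To make that concrete, here is roughly how an app discovers the option (a minimal sketch against the OpenXR C API; it assumes the instance and system were already created and that XR_VARJO_quad_views was requested at instance creation):

```cpp
// Sketch: discovering whether the runtime offers quad views next to stereo.
#include <openxr/openxr.h>
#include <vector>

bool RuntimeOffersQuadViews(XrInstance instance, XrSystemId systemId) {
    // Standard two-call idiom: size the buffer, then fill it.
    uint32_t count = 0;
    xrEnumerateViewConfigurations(instance, systemId, 0, &count, nullptr);

    std::vector<XrViewConfigurationType> types(count);
    xrEnumerateViewConfigurations(instance, systemId, count, &count, types.data());

    // The Varjo extension adds just this one enumerant; everything else is
    // the same generic "views" machinery the core spec already defines.
    for (XrViewConfigurationType t : types) {
        if (t == XR_VIEW_CONFIGURATION_TYPE_PRIMARY_QUAD_VARJO) return true;
    }
    return false;
}
```

The app then picks one configuration and sizes its swapchains accordingly; nothing forces it down the quad-view path.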

There are 3 categories of eye tracking extensions so far:

  • Some are specialized for social interactions, with per-eye gaze smoothed out for avatar rendering, plus other face-feature tracking like eye openness
  • Some are specialized for foveated rendering and typically don't expose raw eye data, but rather density maps used for rendering
  • Finally, general-purpose mono gaze tracking for interactions like pointing.

I use the last one for foveated rendering, since it is the cross-vendor one and it provides sufficient data to create my own VRS masks.
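As an illustration of what building your own VRS mask from a gaze point can look like (a sketch of the general technique, not OpenXR Toolkit's actual code; the tile size, rate encoding, and ring radii are all my assumptions):

```cpp
// Sketch: turning a single gaze point into a coarse shading-rate mask with
// concentric rings, one byte per tile as in D3D12-style VRS tier 2.
#include <cstdint>
#include <cmath>
#include <vector>

enum ShadingRate : uint8_t { RATE_1X1 = 0, RATE_2X2 = 1, RATE_4X4 = 2 };

std::vector<uint8_t> BuildVrsMask(int tilesX, int tilesY,
                                  float gazeU, float gazeV) {  // gaze in 0..1 UV
    std::vector<uint8_t> mask(tilesX * tilesY);
    for (int y = 0; y < tilesY; ++y) {
        for (int x = 0; x < tilesX; ++x) {
            // Normalised distance of this tile's centre from the gaze point.
            float du = (x + 0.5f) / tilesX - gazeU;
            float dv = (y + 0.5f) / tilesY - gazeV;
            float d = std::sqrt(du * du + dv * dv);
            // Full rate near the fovea, coarser further out.
            // The 0.15/0.35 radii are made-up tuning values.
            mask[y * tilesX + x] = d < 0.15f ? RATE_1X1
                                 : d < 0.35f ? RATE_2X2
                                 : RATE_4X4;
        }
    }
    return mask;
}
```

The gaze UV itself would come from intersecting the XR_EXT_eye_gaze_interaction pose with the view plane each frame.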

I haven’t heard or seen anything that does what you said, and it seems that head tracking and user IPD are generally sufficient for today’s usage.

That's TV "4K", which is 3840 x 2160, due to the aspect ratio.

Potential 4K per eye for the new Valve or Apple headset would be more like 4000 x 4000, and over a smaller FOV, so we're probably looking at something like 45-50 PPD.
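Quick back-of-the-envelope, ignoring lens distortion and assuming roughly 85-90° of horizontal FOV per eye (my assumption, not a confirmed spec):

4000 px / 90° ≈ 44 PPD
4000 px / 85° ≈ 47 PPD

…which is where a 45-50 PPD ballpark comes from.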

I don’t think that’s true. You can stitch multiple shots together to make a panel that fits your design.

As an example, you could go 5500 x 2880, which is 15,840,000 pixels, right at that ~16 MP ceiling.

The 4K uOLED panels already exist, from a company called eMagin. The project name is 'Steamboat', quite clearly a reference to Valve. I can send on more info later; I'm just at work.

@Atmos

More info here: eMagin Presents 4K OLED Microdisplay On 'STEAMBOAT' Board

I'm not questioning where the panels come from, just your assessment as to the resolution being 4000x4000. I highly doubt that's the resolution Valve will use.

I was just sharing information that is going about, not stating it as definitive.

Why do you think that wouldn’t be the resolution Valve go with? How about Apple?

I don't think that's the rendering target, but I do think it's likely they will be using those 4K-per-eye panels; it has been strongly implied (look into David Heaney, I think it was him, asking eMagin directly in person, and their response).

See, the problems I have with this way of viewing things are that: A) the features of today's $7000 professional-grade headset are those of tomorrow's consumer devices, and B) app developers should not be forced to care about supporting any specific devices, and to write all that support code, in the first place. -Instead, it should be made as easy as possible for them to do things in a generalised manner that is reasonably forward-thinking and future-proof. Unified methods, work pipelines, function libraries, abstraction layers, and so on, should be provided to them by the vendors of VR runtimes, hardware drivers, graphics APIs, graphics engines, and so on, and device-specific concerns should be minimised; those vendors hopefully getting along and co-operating somewhat within the Khronos Group (yeah, I never claimed to set realistic hopes for humanity :P).

I really think that as long as all the good things remain complicated optionals, which may to boot be deprecated tomorrow, nobody (with the notable exception of a few people like yourself) will ever bother with them, and they will wither on the vine, never getting their well-deserved chance to take off.

With something like the Aero or StarVR view solution, one might think that the "hole" in the context view's viewplane, where the focus viewplane is to "slot in", should simply be part of the hidden-area mask already provided by the VR runtime, and that this, along with the camera matrices, should be all the application developer really needs to know. Compositing the two together should fall to the VR runtime, not the app developer (although the runtime could possibly defer this function to the device vendor's driver/support software).
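For what it's worth, half of that plumbing already exists: XR_KHR_visibility_mask hands the app a per-view hidden-area mesh. A minimal sketch of fetching it (the function pointer must be loaded through xrGetInstanceProcAddr since it belongs to an extension; whether a runtime could fold a focus-view "hole" into this mask is purely my speculation):

```cpp
// Sketch: fetching the runtime's hidden-area mesh via XR_KHR_visibility_mask.
#include <openxr/openxr.h>
#include <vector>

void FetchHiddenAreaMesh(XrSession session, uint32_t viewIndex,
                         PFN_xrGetVisibilityMaskKHR pfnGetVisibilityMask) {
    XrVisibilityMaskKHR mask{XR_TYPE_VISIBILITY_MASK_KHR};

    // First call only reports the required buffer sizes...
    pfnGetVisibilityMask(session, XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO,
                         viewIndex, XR_VISIBILITY_MASK_TYPE_HIDDEN_TRIANGLE_MESH_KHR,
                         &mask);

    std::vector<XrVector2f> vertices(mask.vertexCountOutput);
    std::vector<uint32_t> indices(mask.indexCountOutput);
    mask.vertexCapacityInput = (uint32_t)vertices.size();
    mask.vertices = vertices.data();
    mask.indexCapacityInput = (uint32_t)indices.size();
    mask.indices = indices.data();

    // ...second call fills them in. The app stencils these triangles out
    // before rendering; a runtime-defined "hole" would ride along for free.
    pfnGetVisibilityMask(session, XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO,
                         viewIndex, XR_VISIBILITY_MASK_TYPE_HIDDEN_TRIANGLE_MESH_KHR,
                         &mask);
}
```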

Ideally, projections would always go to something like a sphere section, instead of one or several rectangles, and come accompanied by density maps (polar coordinates, maybe…), but that is of course unlikely to happen without a major bottom-up overhaul of some 3D rendering fundamentals. Perhaps any conceivable retiring of rasterisation, in the future, could be a good opportunity to stage such an upheaval. :stuck_out_tongue:

If you don’t mind another question, which has long vexed me:

In layman's terms: how does OpenXR Toolkit "force" game shaders to use VRS? This ignoramus would have thought they'd have to be written with it in mind to begin with. :7

EDIT: Oh, and thanks for your answers and thoughts!