StarVR One - 4 Viewport Rendering Capability

@mbucchia, based on what you posted and looking at how the StarVR One is displaying 4 screens, if I understand what I’m seeing in the documentation, I believe you are absolutely correct that StarVR has baked-in screens like the Varjo XR3/VR3.

The StarVR One specs state there are only two 4.77" AMOLED displays. Do you think they did some type of sandwiched display development all the way back in 2015?

This would explain a lot, because the more I view the StarVR documentation the more it looks exactly like what Varjo did, and why StarVR would target the headset at B2B only. If you look at the two smaller screens as displayed in the documentation, those are focus screens which would be used by a company’s proprietary virtual software to allow detailed viewing of flight sim cockpits, manufacturing/engineering parts inspection, and such. This could be why the headset looks blurry: SteamVR and games do not know how to properly render for the two focus screen portions of each of the StarVR One’s 4.77" screens.
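For reference, here is a minimal sketch of how an application would discover those four viewports through OpenXR’s quad-view path. It assumes the Varjo-style XR_VARJO_quad_views extension (enabled at instance creation); whether StarVR’s runtime exposes anything equivalent is purely an assumption on my part:

    // Minimal sketch: enumerating the four views (2 context + 2 focus) exposed
    // by OpenXR's XR_VARJO_quad_views extension. The "XR_VARJO_quad_views"
    // extension name must have been enabled when creating the XrInstance.
    // Error handling is omitted for brevity.
    #include <openxr/openxr.h>
    #include <cstdio>
    #include <vector>

    void enumerateQuadViews(XrInstance instance, XrSystemId systemId) {
        const XrViewConfigurationType quadConfig =
            XR_VIEW_CONFIGURATION_TYPE_PRIMARY_QUAD_VARJO;

        // Quad views reports 4 views:
        // [0] left context, [1] right context, [2] left focus, [3] right focus.
        uint32_t viewCount = 0;
        xrEnumerateViewConfigurationViews(instance, systemId, quadConfig,
                                          0, &viewCount, nullptr);

        std::vector<XrViewConfigurationView> views(
            viewCount, {XR_TYPE_VIEW_CONFIGURATION_VIEW});
        xrEnumerateViewConfigurationViews(instance, systemId, quadConfig,
                                          viewCount, &viewCount, views.data());

        // The runtime recommends a separate render-target size per view, so the
        // focus views can be smaller in pixels yet denser per degree than the
        // context views.
        for (uint32_t i = 0; i < viewCount; ++i) {
            std::printf("view %u: recommended %ux%u\n", i,
                        views[i].recommendedImageRectWidth,
                        views[i].recommendedImageRectHeight);
        }
    }

An application that only requests the usual stereo configuration never renders the two extra focus views, which would be in line with the blurriness theory above.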

One of the quirks we ran into when we first adopted the VR3 for gaming: in flight sims or cockpit-related games, the focus screen allowed crisp focus on dials, buttons, etc., and we raved about it. Some non-sim games, however, did not know how to handle the focus screen, and sometimes you would get strange behavior out of it, such as slight fuzziness. Varjo made several changes to the firmware and has since added an option in the control software which allows disabling the VR3 focus screen, and I play most of my games with it off because the focus screen makes things kind of weird.

StarVR One and Varjo XR3/VR3 are the only headsets I’m aware of which require two DisplayPort connections. Not even Pimax or Xtal require two DisplayPort connections for their displays.

I know I’m going down the rabbit hole, but when MRTV viewed the special StarVR version of the Showdown demo, he, along with others, didn’t complain about the demo being blurry.

1 Like

Thanks! Lots of interesting info, as always, and I’ve now got the “panini projection” Wikipedia page bookmarked for future perusal. :7

Should you feel so inclined, a few questions that sprung to mind:

  • Would you say there are any noteworthy VRAM savings (for buffers) compared to e.g. VRS – maybe especially with deferred rendering?

  • I take it one still wants to use different LODs, appropriate to each of the context and focus views? (…I presume mipmap selection is handled automatically by either the graphics API or the GPU driver?)

  • When you get to wide-FOV headsets, do you estimate there would be benefit to a limited six-view combination of approaches, where you split the context view, top-to-bottom, into a forward view and a peripheral one, where the latter can possibly be lower resolution still and rotated to frustum symmetry? I figure this need not be a 50-50 split, but could, if so needed, sit just past whatever window the focus view is permitted to move within… I’m reasoning that regardless of whether foveation is dynamic or fixed, there would be lens-matching and projection-related reasons (EDIT: …not to mention retina limitations) you could drop resolution further out there…

  • One I’ve long wondered… You have previously explained some of what it takes to get VRS going: figuring out which shaders to apply it to when “hacking” existing applications; how none of VRS happens in the shaders (from what I understood, the GPU simply does not “foreach” them for skipped pixels, so to speak); and also that OpenXR is designed to let you interpose e.g. OpenXR Toolkit between itself and applications – no “hacks” there. All interesting “aha” moments for me. I digressed a bit, but on VRS, and more specifically the subsequent interpolation stage: can different algorithms for it be plugged in? Can a developer substitute their own, and could a user pick from a selection in e.g. the NVIDIA control panel and have it work with existing applications? (E.g. replacing something that does nearest-neighbour with a hypothetical future VRS-adapted version of DLSS/FSR?)
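To make that last question concrete, here is a minimal sketch of the non-shader side of VRS in D3D12 (Tier 2, shading-rate-image path). This is generic API usage for illustration only, not a claim about how OpenXR Toolkit or any driver actually does it:

    // Minimal sketch of D3D12 Variable Rate Shading (Tier 2): the shading rate
    // is set on the command list, and the rasterizer shades one value per
    // coarse block and replicates it across the block; nothing changes in the
    // pixel shaders themselves. Resource creation and feature checks
    // (D3D12_FEATURE_DATA_D3D12_OPTIONS6) are omitted for brevity.
    #include <d3d12.h>

    void applyFoveatedShadingRates(ID3D12GraphicsCommandList5* cmdList,
                                   ID3D12Resource* shadingRateImage) {
        // Combiner 0 merges the per-draw base rate with any per-primitive rate;
        // combiner 1 merges that result with the screen-space shading-rate image.
        const D3D12_SHADING_RATE_COMBINER combiners[2] = {
            D3D12_SHADING_RATE_COMBINER_PASSTHROUGH, // keep the per-draw base rate
            D3D12_SHADING_RATE_COMBINER_OVERRIDE     // let the image take priority
        };
        cmdList->RSSetShadingRate(D3D12_SHADING_RATE_1X1, combiners);

        // The shading-rate image is a small texture with one texel per screen
        // tile (tile size is reported by the driver, e.g. 16x16 pixels), holding
        // values such as D3D12_SHADING_RATE_2X2 or D3D12_SHADING_RATE_4X4. For
        // foveation, texels near the gaze point stay at 1x1 and the periphery
        // is coarsened.
        cmdList->RSSetShadingRateImage(shadingRateImage);

        // Draws recorded after this point are shaded at the coarse rates. As far
        // as I understand, the single shaded value is simply broadcast across
        // the 2x2 / 4x4 block by the hardware, so a smarter reconstruction
        // (DLSS/FSR-style) would have to be an extra post-process pass rather
        // than a pluggable driver setting.
        // ... record draw calls here ...
    }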

1 Like

Optimization Files

All, my StarVR arrived today and it’s in outstanding condition; it looks like it’s never been used even though I’m the 3rd owner. It still has the protective covering on the head strap.

I’m in the process of setting up the software and thought I would mention it here just in case someone else decides to try out a StarVR One.

  1. Compass installation on my Win10 system went smoothly and quickly.

  2. The Compass software was able to connect to the StarVR Download Server and pull down the Optimization Files for my headset. These are the actual names of the Optimization Files which were pulled down:

    H477.KL04705004837027760601_B.PCD 7/1/2023 1:29 PM 5,017 KB
    H477.KL04705004837027760601_G.PCD 7/1/2023 1:29 PM 4,476 KB
    H477.KL04705004837027760601_R.PCD 7/1/2023 1:29 PM 5,109 KB
    H477.KL04705004838004710601_B.PCD 7/1/2023 1:29 PM 5,104 KB
    H477.KL04705004838004710601_G.PCD 7/1/2023 1:28 PM 4,459 KB
    H477.KL04705004838004710601_R.PCD 7/1/2023 1:28 PM 5,097 KB

1 Like

Nice write-up on the multi-view rendering :+1:.

After looking at the UE docs you linked, I am pretty sure that what I saw (at the time I had a StarVR One in my hands) were exactly those textures in the doc: one with the normal aspect ratio and the other one “compressed” and slightly smaller in resolution. Now, looking at it again, I can finally see (:slight_smile:) what they tried to do. The way they chose to compress the res differently in the horizontal and vertical directions is (or was) an unorthodox and interesting approach.

1 Like

The last 2 prototypes shown at 8:40 in the video are my units.

1 Like
