Yes, if I remember correctly there are construction-site experiences to help people learn to use some machinery. So robotics is one thing it can be used for.
It does seem that only developers/content creators/enterprise customers are being accepted currently. I requested further information and applied to buy, but they declined because, in their own words, the unit is not suitable for consumer use.
No problem, my next HMD will be the 8KX, whenever it rolls off the lines.
Guess that answers the compatibility question.
When did you get the answer? Did you tell them that it was gaming or a project? Yes, they likely don't want gamers now. I started a project in 2018 solely with the StarVR in mind, so it was disappointing when they stopped the project after I made a request the day it was supposed to go on sale.
Any update from StarVR @VR-TECH on your application? Even though consumers can't buy it right now, I'm really curious what the experience will be like for you and you can also use the headset to try and play some games and see if any of them work with that wide FOV.
If you are a company and plan to do some development, you can buy it.
Looks like StarVR is not available after all… at least not for me. What a surprise!
I wish you good luck with your lottery applications, guys
Old news now. And honestly to be expected, as last time it was heading for release it was more or less devs only. It still seems to be at the earlier dev-kit price; if it goes to retail it might be higher.
Pimax already has object-culling issues in many titles at 140° and 160° wide. The StarVR One will likely have this even worse due to an extra ~50° of width.
Confirmed: OpenVR Motion Compensation works with the StarVR; this was verified in a video sent to me by the StarVR team. Good news for motion-rig users like myself.
Sucks that it is not compatible with all SteamVR games though
Will you get the StarVR One?
Have you tried playing these titles with "hidden area mask" disabled in PiTool? It works great in all my sim games and most room-scale games.
If that works, get a hold of those devs, as @StonebrickStudios said that is an issue with the game, and you're losing a lot of performance with the hidden area mask off.
If their support is anything like how they reply to our emails, then it's a bad sign. StarVR doesn't seem to respond to our emails anymore…
So if this is the way a company handles their customers… then don't order here…
Same for me; they are not going to sell to just anyone, I guess.
@jojon The most interesting prospect of Avegant Lightfield is their Unity plugin, which from my understanding almost entirely solves the vergence-accommodation conflict; in other words, when you focus on objects near you they are literally in focus, and the same for objects in the far distance. All without eye tracking. My assumption is that what they have done is taken the depth information from the Z buffer at different distances (planes) and they render multiple planes at the same time, thus allowing you to see the objects in perfect clarity. But this solution would be too costly for VR without eye tracking, because you would essentially be stacking fully rendered viewports on top of one another. Without eye tracking to reduce LOD and cull unnecessary objects, this solution would not work well on current and even upcoming GPUs.

And as for DLP, its benefit is mainly in eye-strain reduction, since they are projecting the image onto your retina vs you looking at a screen. All it really is in Avegant's case is an LED light source with a wheel, shining onto the DMD (millions of mirrors), which then reflects the light and passes it through optics to focus the image into your eye.

@Heliosurge when you said Alegiant I think you meant Avegant? In any case, the reason why you do not see SDE with the Glyph is not because DLP is some kind of sorcery, but rather because they are not zooming into the image as much as a VR headset does. If you take a VR lens and put that on top of the Glyph optics at the right angle, you will see the all-too-familiar SDE again. My 1080p Wemax DLP projector, which uses a very similar DMD chip, has very noticeable pixels at 100 inches, but take that down to 50 or 75 and the quality jumps out.

Now what would be neat is to see the bigger modern 4K DLP chips (not the pixel-shift ones, but actual 4K) used for displays. Their FOV would probably be around 80°, but image quality would be superb and eye strain would be a non-issue with an Avegant-style design. Ideally, though, you would want either 4x DMDs with custom optics for a bigger FOV with stitching, or, like I mentioned before, 3 smaller DMDs per eye, projection-mapping the image onto the retina with eye tracking. It would still be a lightweight HMD with probably around 120° FOV, and I imagine if you added more DMDs along a curve you could achieve StarVR-level horizontal FOV with no distortion, since you would map the VR distortion profile based on the user's gaze. Samsung has already proven that curved optics can work, and with DLP this is entirely achievable. The cost of the DMDs has dropped quite a bit, and with scaled production I can see a device like this selling in the realm of what the Pimax 8K+ sells for now.
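As a rough sketch of that multi-plane idea (purely my guess, not Avegant's actual pipeline; the plane count, depth range and the `slice_into_focal_planes` helper below are all invented for illustration), it could look something like this:

```python
# Hypothetical illustration of the multi-plane idea described above:
# slice a rendered frame into a handful of focal planes using the Z buffer,
# so each plane could later be presented at its own optical depth.
# Plane count and depth ranges are made up for the example.
import numpy as np

NUM_PLANES = 4
NEAR, FAR = 0.1, 100.0  # metres, arbitrary

def slice_into_focal_planes(color, depth):
    """color: HxWx3 float array, depth: HxW linear depth in metres.
    Returns a list of (plane_depth, masked_color) layers."""
    # Log-spaced plane boundaries: near planes packed tighter, like accommodation.
    plane_depths = np.geomspace(NEAR, FAR, NUM_PLANES + 1)
    layers = []
    for i in range(NUM_PLANES):
        lo, hi = plane_depths[i], plane_depths[i + 1]
        mask = (depth >= lo) & (depth < hi)
        layer = np.where(mask[..., None], color, 0.0)
        layers.append(((lo * hi) ** 0.5, layer))  # geometric mid-depth of the slab
    return layers

# Example with a random frame:
h, w = 720, 1280
color = np.random.rand(h, w, 3).astype(np.float32)
depth = np.random.uniform(NEAR, FAR, (h, w)).astype(np.float32)
for d, layer in slice_into_focal_planes(color, depth):
    print(f"plane at ~{d:.2f} m carries {np.count_nonzero(layer)} non-zero samples")
```

The point of the sketch is the cost argument above: every plane is effectively another full-resolution layer to render and composite, which is why eye tracking (or something cleverer) would be needed to keep this affordable.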
Well, it is the lightfield display that enables accommodating naturally (and removes the need for a big lens near your eye, too); The plugin "only" produces the lightfield content. :9
I must preface the following by saying that I have no idea whether Avegant's prototype is actually a lightfield display, or in truth some other solution - I mean; You have things like Magic Leap, who also claim that moniker, but whose "let's-just-get-something-anything-out-on-the-market-while-we-try-to-figure-out-the-real-tech-that-is-yet-to-perform", Theranos-style, current product turned out to in fact "just" be a two-plane jobbie, that switches between the two different depth display panes, depending on your vergence - meh.
As you say: It is not inconceivable that Avegant's Unity plugin is able to populate the lightfield with "good enough" material, using the Z buffer, in lieu of actually rendering from every possible viewpoint. In this case they would probably still need to know your IPD, but all imperfections (such as lighting/perspective/reflections/etc not changing appropriately when you look around, which it doesn't do anyway, with current HMDs) may be worth it, for only needing to render two views, just like today.
The way I imagine it, it wouldn't be a matter of anything "planes", but using the distances acquired from the Z buffer to figure out at which angles the colour value from the normal of each lightfield pixel will radiate out of neighbouring ones - simple trigonometry (EDIT: …which could probably be reduced to something simpler, that has the maths "prebaked" (EDIT2: actually there we could have something like "planes", in the form of e.g. sets of offset LUTs for a range of discrete depths)).
The big question is whether the values in the Z buffer have enough precision for such operations to yield satisfying results. :7
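For illustration only, here is a toy version of that trigonometry, assuming one common [0,1] depth-buffer convention and a made-up 50 µm pixel pitch; the `linear_depth` and `emission_angle` helpers are hypothetical, and the tiny angles at distance hint at why Z-buffer precision matters:

```python
# Toy numbers: how a single surface point, recovered from the Z buffer, maps to
# emission angles at neighbouring light-field pixels. All parameters invented
# for illustration.
import math

NEAR, FAR = 0.1, 100.0            # clip planes, metres
PIXEL_PITCH = 0.05e-3             # 50 µm between light-field pixels (made up)

def linear_depth(d_buf, near=NEAR, far=FAR):
    """One common convention for recovering eye-space depth from a [0,1]
    non-linear depth-buffer value; the exact formula depends on the projection."""
    return near * far / (far - d_buf * (far - near))

def emission_angle(neighbour_offset_px, depth_m):
    """Angle off the display normal at which a neighbouring pixel would have to
    emit the surface point's colour - simple trigonometry."""
    lateral = neighbour_offset_px * PIXEL_PITCH
    return math.degrees(math.atan2(lateral, depth_m))

z = linear_depth(0.995)                      # a far-ish sample
for off in (1, 4, 16):
    print(f"offset {off:>2} px, depth {z:6.2f} m -> {emission_angle(off, z):.5f} deg")
# Note how tiny the angles get at distance: this is where Z-buffer precision
# (and the angular resolution of the display) starts to bite.
```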
Eyetracking could still help, to determine which angles can safely be culled (and of course also for things that it could do for current headsets, such as adjusting for the minute shifting of the observer, caused by moving your eyes around - either by actually moving the game cameras accordingly, or by post warping).
At the scale and low light requirements of Avegant's products, I seriously doubt there is a mechanical colour wheel - more likely a solid state solution, in the form of a tri-colour LED light source.
I don't believe the lightfield unit does the Glyph thing, which requires you to look straight into the lens, and is as such unsuitable. You'll notice from footage, that the prototype has a visor, just like other AR products. I am assuming that they are projecting/waveguiding the DLP imagery onto that, and that either it has micro structures on it, that fan the incoming light out, or an intermediary pane of some sort does (the latter would by necessity need to be rather large, to result in the right vectors, after bouncing off the reflector).
…or something like that…
@cr4zyw3ld3r @jojon As I am only familiar with the term "light field" as in "Google Lightfields", i.e. a technology to record a real scene for later VR viewing, I do not see how anything "light field" should improve the VR tech, in particular the vergence problem or FOV.
There is always the problem that if you want to "project" something into an eye, you need a projector, and unless you put the projector into the exact spot (of the virtual object it should project), you still need optics to "project it there virtually". But unless you have some kind of "xenomorphic" lens which can project objects to different distances based on the angles at which they are viewed (as the angles are basically the only metric the system can use), you are stuck with a "fixed projection plane".
An attempt at a "xenomorphic" lens is actually the varifocal lenses demonstrated by Oculus, but they are still activated over the whole projection plane. It is yet to be seen whether that could actually help in real usage.
Even if you manage to project the light directly into the eye, you would still need the very same "xenomorphic" property on that light source - i.e. being able to refocus or defocus in real time. Plus you would need to solve another problem (which is not so much of a problem for the "static" images used nowadays): how to keep up with the eye's movement - rotation, and change of accommodation.
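To put rough numbers on that real-time refocusing requirement, here is a quick sketch using the ordinary thin-lens relation for a virtual image; the panel distance and focal lengths are invented purely for illustration, not taken from any actual headset:

```python
# Rough numbers for the "refocus in real time" requirement, using the thin-lens
# relation for a virtual image (the display panel sits inside the focal length).
# Display distance and focal lengths below are made up for illustration only.
DISPLAY_MM = 45.0  # fixed panel-to-lens distance

def virtual_image_mm(focal_mm, display_mm=DISPLAY_MM):
    # d_img = d_o * f / (f - d_o), valid while the panel is inside the focal length
    if focal_mm <= display_mm:
        return float("inf")
    return display_mm * focal_mm / (focal_mm - display_mm)

for f in (50.0, 47.0, 46.0, 45.5):
    print(f"f = {f:.1f} mm -> virtual image at ~{virtual_image_mm(f) / 1000:.2f} m")
# A change of a few millimetres of focal length sweeps the apparent image from
# about half a metre out to several metres - and that sweep would have to track
# the eye's accommodation continuously.
```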
Ok, so simplified down to my own level: In a 4D lightfield display, as an equivalent to a regular 2D monitor, each pixel outputs different values depending on which angle you are viewing it from; Its little square cell, out of the 2D cross-section "window" that hangs in the air in the game world, can reproduce all the light that passes through it at every angle - not just the single direction that will intersect the observer in a single projection.
It can contain every projection, for any place the viewer might be standing, in front of it.
This allows for accommodation, if angular resolution is sufficient, because when our eyes' lenses focus, they pretty much select which incoming rays of light to project sharply onto the retina.
Lightfield displays can be built in various different ways, such as applying a sheet with an array of tiny lenses onto a regular screen, so that clusters of pixels, which together make up a single let's-call-it-pixel, are projected out in a scattering fashion, making each such cluster, as a macro picture element, show you a selection of its constituent actual pixels, depending on the place from which you are viewing it. The same effect can be achieved using parallax barriers.
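As a toy model of that lenslet construction (all numbers invented; a 9-pixel cluster per lenslet is just an example), the spatial-for-angular trade-off looks roughly like this:

```python
# A toy model of the lenslet-array construction described above: each lenslet
# covers a small cluster of panel pixels, and a pixel's offset from the lenslet
# centre sets the angle at which its light leaves. Numbers are made up.
import math

SUBPIXELS = 9             # panel pixels per lenslet along one axis (hypothetical)
PANEL_PITCH_MM = 0.02     # 20 µm panel pixel pitch
LENSLET_F_MM = 1.0        # lenslet focal length

def exit_angle_deg(subpixel_index):
    """Emission angle for a sub-pixel, indexed 0..SUBPIXELS-1 across the lenslet."""
    offset_mm = (subpixel_index - (SUBPIXELS - 1) / 2) * PANEL_PITCH_MM
    return math.degrees(math.atan2(offset_mm, LENSLET_F_MM))

angles = [exit_angle_deg(i) for i in range(SUBPIXELS)]
print("view directions per macro-pixel:", [f"{a:+.2f} deg" for a in angles])
# Each macro-pixel here can show 9 different values spread across roughly 9
# degrees of viewing angle: spatial resolution (the cluster) is traded for
# angular resolution, which is exactly the compromise described above.
```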
You could, as a thought experiment, surround yourself with enough tiny projectors, each shooting out a full image, that each projector appears as a single view-dependent pixel to you.
Hmm⌠This day has not been good for my staying below the twenty character limit.
I was totally thinking of my projector when I wrote that. If I recall reading their patent correctly, they use three separate RGB LEDs to form the colors they need for the projection.
But their Lightfield unit is still the same DLP tech, just with a customized combiner lens. They did not actually do anything novel optics-wise here. There are no movable lenses and no eye tracking in the unit that was publicly shown. See WO2015103640A1 - Imaging a curved mirror and partially transparent plate - Google Patents. But Avegant achieved their object depth without a new lens design; it is all achieved in software. If you look at their overviews from The Verge and Tested you will see how they achieved this, since Ed Tang alludes to this solution multiple times.

As for how they did it, well, that is the million-dollar question and the reason their AR solution is superior to everything available consumer-side in AR. The reason to bring it up here is that whatever they used could be applied in similar fashion in VR, with tweaks. Their "lightfield" branding is arguably just buzzwords, but the way they pull angular and depth information is based on the lightfield approach; however, again, they are quite secretive about what makes their solution tick.