Foveated rendering should really make the 8K shine. Might even look as good as the 8K X with high supersampling. Speaking of foveated rendering, can we get an update on the eye tracking? @deletedpimaxrep1
Also, it's not looking like it'll happen, but integrated eye tracking would be a super cool addition. Someone else suggested this as well. In my opinion, Adhawk's eye tracking would be the best for the 8K, as it has low power consumption while still being fast. Oh, and it's REALLY cheap.
Yeah, not including eye tracking from the start will be a big missed opportunity. I don't even know how they would expect to still meet that stretch goal for everyone afterwards. Such a module isn't easily retrofitted by anyone.
Also, the way I see it, the M2 may or may not be the final hardware version at this point.
Well, I'm sure all the time is being used well (no sarcasm intended).
I "got" the argument as such, but I do not buy it. If the static upscaling algorithm (working with fixed target and source sizes, and therefore also a fixed scale factor of 1.5, which one would hope it is optimised for) somehow makes a "bad" or "OK" image look worse than it started out, it would do exactly the same to a good one. There are no extra pixels to draw from; those are already "baked in" to the delivered ones. The "shit in: shit out" principle is true enough, but so is generation loss, and a better antialiased source image could at best "smooth over" the blemishes, on account of being softer to begin with, whereas normally, with a decent scaling routine, scaling up introduces some softening of its own, as it preserves proportional uniformity.
The 5K, meanwhile, would still get an image size that is optimal for it, with or without supersampling, and benefit either way - looking every bit as much better than OK with higher supersampling as the 8K does, regardless of its blockier resolution, the accompanying SDE, and its other weaknesses.
If we were talking about, say, simple nearest-neighbour scaling, anything other than an integer scale factor would introduce periodic distortions, as some pixels are doubled and others are not; sharpest possible outcome, no softening anywhere, but the downsides are inevitable.
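Just to illustrate the "some pixels are doubled, and others are not" point, a toy sketch (nothing to do with Pimax's actual scaler, purely the principle):

```python
# Toy nearest-neighbour index mapping at a 1.5x scale factor.
def nearest_neighbour_indices(src_width, scale):
    dst_width = int(src_width * scale)
    # Each destination pixel simply copies the nearest source pixel.
    return [min(int(x / scale), src_width - 1) for x in range(dst_width)]

print(nearest_neighbour_indices(8, 1.5))
# [0, 0, 1, 2, 2, 3, 4, 4, 5, 6, 6, 7]
# Source pixels 0, 2, 4, 6 get doubled, 1, 3, 5, 7 do not -> periodic unevenness.
```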
Now, one could pre-arrange the spatial distribution within the sent bitmap, in software, to overcome (or rather: "play to") the perfectly known and predictable deficiencies in the on-HMD upscaling algo - kind of like we already have for lens distortion - but I wouldn't count on that.
I'll say this: if (and that's a big "if") the transport image is fully utilised, and "blank spots" that are not visible through the lenses are used to carry content that is then scaled to within the visible areas, then we're looking at a way to carry more information to the 8K (…and especially if the scaler could treat different regions of the picture as having different resolutions), because then you'd be utilising more of the resolution of the two 4K display panels, whereas the 5K does not have the physical pixels to put such extra information into, and would have to throw it away (…or scale it down, if it could).
Oh man… I ended up armchair "lecturing" an experienced software developer… I'll crawl back under my rock now.
Maybe it is not just a software problem but a hardware one. There are still some open questions:
When exactly is the SS applied?
There is the SS which is normally applied (by every other HMD) at scene rendering to compensate for the pre-lens-warp distortion. I guess Pimax may need to do this one as well (unless they invented some breakthrough optics).
But from the description @deletedpimaxrep1 gave, it is not clear whether we are talking about pre-lens-warp rendering SS or some additional SS applied to the image after the lens warp.
The HW scaler may work with a very simple algorithm, without having full info about the neighboring pixels (as they may not have been transmitted yet), since anything more complex might introduce additional latency.
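Purely as an illustration of what such a simple, low-latency scaler could look like (my own sketch, not Pimax's hardware): a 1.5x vertical upscaler that buffers only one previous scanline, so it can start emitting output before the full frame has arrived:

```python
# Speculative sketch of a stream-oriented 1.5x vertical upscaler with a
# single-scanline buffer: 3 output rows for every 2 input rows.
def upscale_rows_1_5x(rows):
    prev = None
    for i, row in enumerate(rows):
        if prev is not None and i % 2 == 1:
            # Every second input row, insert an extra row averaged from the
            # buffered previous row and the current one.
            yield [(a + b) // 2 for a, b in zip(prev, row)]
        yield row
        prev = row

# Example: 4 greyscale scanlines in, 6 scanlines out.
frame = [[0, 0], [100, 100], [200, 200], [50, 50]]
print(list(upscale_rows_1_5x(frame)))
# [[0, 0], [50, 50], [100, 100], [200, 200], [125, 125], [50, 50]]
```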
My speculation is that Pimax opted for low SS for the pre-lens warp, which led to aliasing artifacts in the perceived image, which were then further amplified by the upscaler. The same artifacts (assuming the rendering resolution is the same for the 5K and 8K) are probably not that disturbing on the 5K.
On the other hand, if the 5K runs at 90 Hz, it may still need ~10% more bandwidth than the 8K at 80 Hz at the same transmitted resolution. So it is business as usual at Pimax: the answers just produce more questions.
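For what it's worth, that figure roughly checks out if you assume (my assumption) that both headsets receive the same 2560x1440-per-eye transport resolution and only the refresh rate differs:

```python
# Rough link bandwidth estimate, ignoring blanking and encoding overhead.
def link_bandwidth_gbps(w, h, eyes, hz, bits_per_pixel=24):
    return w * h * eyes * hz * bits_per_pixel / 1e9

bw_5k = link_bandwidth_gbps(2560, 1440, 2, 90)  # 5K panels at 90 Hz
bw_8k = link_bandwidth_gbps(2560, 1440, 2, 80)  # 8K transport image at 80 Hz
print(bw_5k / bw_8k)  # 1.125 -> ~12% more, in the ballpark of the ~10% above
```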
This may sound pessimistic, but do we know whether the meetups will take place during August, or whether they are instead simply being planned during August so that they take place later (after August)?
I'm excited to visit a meetup and I hope they include London, UK.
People without experience in computer graphics processing and display technology shouldn't try to get their heads around the question of the more demanding 8K vs the 5K at the same input resolution. The explanation we got is a reasonable one, and the devil might very well hide in the details…
Put in a very simple way: we can assume a high-quality scaler is too expensive in terms of delay, power and heat to have inside an HMD. So it's very likely only capable of duplicating pixels or averaging neighbours for 2x scaling, or producing a single averaged pixel value for 1.5x scaling. But then, for content without very good anti-aliasing or supersampling, the scaling does the high-res display no good. It would just show the jaggies and artifacts in very sharp detail on the high-res display, whereas the lower-res display would otherwise mask them.
(It should be comparable to the way badly scaled TV channels looked on early HDTVs, in case you remember that from some 10 years ago. It was even worse with badly compressed input instead of uncompressed source material. With the switch from HDTV to 4K TVs, the industry didn't make similar mistakes, not to my knowledge at least.)
In the end, with a better input signal the scaling will work well, since fine details get magnified. So over time, combined with faster GPUs, there is nothing to worry about. All that is needed for the 8K to shine is higher-quality rendering.
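To make the "simple scaler amplifies jaggies" point above concrete with a toy example (just the principle, not the actual hardware): a hard, aliased edge stays razor sharp after naive 2x pixel doubling, while an anti-aliased edge keeps its gradient, which is why better AA/SS in the source helps the upscaled image so much:

```python
# Naive 2x upscaling by pixel doubling (one greyscale scanline).
def double_pixels(row):
    out = []
    for p in row:
        out += [p, p]  # each source pixel becomes two identical output pixels
    return out

aliased_edge     = [0, 0, 255, 255]    # hard jaggy edge
antialiased_edge = [0, 64, 192, 255]   # same edge with AA applied

print(double_pixels(aliased_edge))      # [0, 0, 0, 0, 255, 255, 255, 255] - stays hard
print(double_pixels(antialiased_edge))  # [0, 0, 64, 64, 192, 192, 255, 255] - gradient preserved
```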
Sorry Xunshu, but this is a totally nebulous answer again. Your answer implies there is once more doubt that the M2 is a good and final version. On one side you start mass production, on the other side there is this answer and another round of testing. I fear this will go on for eternity. And did the M1 testers also test the M2 version? And did they give the green light for mass production???
No, testers don't have the M2 yet. They haven't even been shipped yet.
I'm hoping that the M2 tests are just a final check, and that the Pimax team is already sure enough that the M2 version is ready for production…?
That is exactly the question!! There would be no testing if they were sure about their product. And normally you test before production. That is why "I have a bad feeling about this". So Pimax, what's going on? Make a clear statement please. THX.
Hardware is more important than software, as once it is shipped it can't be updated without a mass recall. Take your time. While it is probably an unpopular opinion here, I will wait until Christmas if I have to in order to get high-quality hardware. Software can always be updated afterwards.
If the M2 is not good enough for backers/consumers after all, they probably have business customers for whom it is good enough. So in case an M3 is required, the M2 units produced will be shipped to business customers (and the odd backer who wishes to get a slightly flawed 8K to save 2-3 months of waiting).
Just be glad that they actually intend to listen to the feedback of the testers. If the testers say the hardware is not good enough, it is what it is, and I would rather wait for an M3 in that case, because if it is not much fun or breaks easily, what is the point in having it delivered at all?
My understanding of supersampling / scaling
Supersampling's main purpose is to reduce aliasing artifacts. It achieves this quite simply by telling the game engine to render at a larger frame size than the native panel size. Scaling (in the HMD), on the other hand, just stretches the image to fill the panel size?
In addition, I am not sure whether the supersampled image is simply scaled down to native size (which improves sharpness compared to rendering at native size in the first place), or whether the supersampled image is used as a lookup table of screen-space pixels for its own scaling process. If the latter, could that also be used by the hardware scaler in the HMD?
Undersampling or downsampling (lowering the supersample value) does the opposite: it renders at a smaller size than the native panel. This is purely for reducing load, as the engine has to render fewer pixels. The downside is that the image is not as clear as native, akin to changing a 2D game's resolution from 1080p to, say, 800x600.
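For reference, a minimal sketch of the render-size part of this (how an SS multiplier changes the frame size the engine is asked to produce; runtimes differ on whether the value is per-axis or per-total-pixel-count, so treat the numbers as illustrative only):

```python
import math

# Illustrative only: how a supersampling factor changes the requested render size.
def render_target_size(native_w, native_h, ss, per_axis=False):
    # Some runtimes treat the SS value as a per-axis multiplier, others as a
    # total-pixel-count multiplier; pick the interpretation with per_axis.
    scale = ss if per_axis else math.sqrt(ss)
    return round(native_w * scale), round(native_h * scale)

print(render_target_size(2560, 1440, 1.0))  # (2560, 1440) - native
print(render_target_size(2560, 1440, 2.0))  # ~(3620, 2036) - twice the pixels
print(render_target_size(2560, 1440, 0.5))  # ~(1810, 1018) - undersampling
```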
Finally the question
So, this brings me back to your quoted point. You say that latency can be introduced because neighboring pixels might not have been transmitted to the scaler yet. Why would the scaler work on a stream rather than on full frame buffers? How can it scale at all on a per-pixel basis?
My understanding is that the HMD scaler would have a fixed and known time to scale a full frame. If it goes beyond its allotted scale time (e.g. 11 ms), then Brainwarp would need to kick in and replace the partially scaled frame with the previous full frame and the new incoming frame. That is assuming Brainwarp monitors the hardware scaler too.
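Written out, my mental model of that deadline idea looks roughly like this (pure speculation about how a scale-time budget plus a previous-frame fallback might interact, not how Brainwarp actually works):

```python
import time

FRAME_BUDGET_S = 0.011  # ~11 ms, as in the example above

def present_frame(scale_frame, new_frame, last_good_frame):
    """Speculative sketch: scale within the budget, else reuse the last good frame."""
    start = time.monotonic()
    scaled = scale_frame(new_frame)              # stand-in for the hardware scaler
    if time.monotonic() - start > FRAME_BUDGET_S:
        # Deadline missed: show the previous fully scaled frame rather than a
        # partially scaled one.
        return last_good_frame
    return scaled
```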
Speaking of which, why is there no option to automate supersampling, lowering or raising it as needed? The engine knows how much extra time it needs to process the scaling load, so this would seem an ideal place to automate SS towards the optimum value, so that the HMD scaler can work efficiently without resorting to Brainwarp.
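Something like this in principle (a hypothetical controller of my own, not an existing Pimax or SteamVR feature): nudge the SS value up or down depending on how the measured frame time compares to the budget:

```python
def adjust_supersampling(ss, frame_time_ms, budget_ms=11.1,
                         step=0.05, ss_min=0.5, ss_max=2.0):
    """Hypothetical auto-SS: raise SS when there is headroom, lower it when over budget."""
    if frame_time_ms > budget_ms:           # missing the deadline -> reduce load
        ss -= step
    elif frame_time_ms < 0.8 * budget_ms:   # plenty of headroom -> spend it on SS
        ss += step
    return max(ss_min, min(ss_max, ss))

# Example: GPU takes 13 ms -> SS drops; 7 ms -> SS rises.
print(adjust_supersampling(1.5, 13.0))  # 1.45
print(adjust_supersampling(1.5, 7.0))   # 1.55
```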
It's only about the point that testing and development have to come to an end at some time. Because then you have to wonder how serious Pimax's idea to develop such an HMD and start a Kickstarter campaign was. Did they estimate their goals, and especially their SKILLS, realistically??