Display Stream Compression, Human Visual Hyperacuity, VR

For the sake of anyone else thinking about what DSC means for VR, I want to be absolutely unambiguous on the logic behind using compression for VR.

tldr
Any compression artifacts can reasonably be expected to be totally drowned out in VR environments, which are always somewhat natural, even for the Pimax Vision 8kX. Pardon me if this could have been better written; it is a rather exhaustive summary of common sense and an obvious convergence of evidence.

_
VR is a huge pipeline of mathematical compromises, forced by legacy software, lack of processing power, physical transistor (clock rate) limitations, physical display limitations, and manufacturing cost.

DSC, as currently described and used, is the very least of these.

Compression ratios used over DisplayPort, HDMI, and similar are extremely low: tens of gigabits per second of raw video compressed to roughly a third of that, maybe 3:1. Compression ratios used for lossy image/video/audio encoding (JPEG/H.264/OGG/MP3) tend to be far higher: gigabits per second of raw video down to megabits.
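A rough back-of-the-envelope (purely illustrative numbers, not the specs of any particular headset) shows why the link-level ratio stays so modest:

```python
# Illustrative link-bandwidth estimate: one 3840x2160 panel at 90 Hz,
# 10 bits per RGB channel, ignoring blanking. Not any specific headset's specs.
width, height, refresh_hz, bits_per_pixel = 3840, 2160, 90, 3 * 10

raw_gbps = width * height * refresh_hz * bits_per_pixel / 1e9
dsc_gbps = raw_gbps / 3          # DSC runs at modest fixed ratios, e.g. ~3:1

print(f"raw video : ~{raw_gbps:.1f} Gbit/s")   # ~22.4 Gbit/s
print(f"after 3:1 : ~{dsc_gbps:.1f} Gbit/s")   # ~7.5 Gbit/s, within a cable's reach
# A lossy codec like H.264 would take the same raw stream down to tens of Mbit/s,
# a ratio hundreds of times more aggressive.
```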

Consequently, the opportunity for artifacts is relatively minimal.

_
VR physical resolution, 3D rendering, and lighting, naturally makes it unlikely that low-compression artifacts are observable, or especially reduce actual information, at all.

Artifacts seen under obviously sensitive synthetic flicker/panning tests probably require a static reference and human visual sub-pixel hyperacuity to discern. Even then, these artifacts are not usually detected by study participants. Breakdown of the simple algorithm is another possibility, but rare.

Static reference - VR environments are not static images. Making a statistically significant detection of which of two headsets is or is not using DSC would require discerning artifacts that are effectively dithered away by even especially small head movements, which are difficult to restrain adequately.

Sub-pixel hyperacuity - even the new Pimax Vision 8kX headset is still 10x short of meeting human visual hyperacuity limits, barely approaching normal human visual acuity of 20/20. Human ability to align sub-pixel objects at this boundary of display technology is already reasonably expected to be quite poor.
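To put rough numbers on that (the per-eye resolution and field of view below are assumptions for illustration, not exact specs): 20/20 acuity corresponds to about 1 arcminute of resolvable detail, roughly 60 pixels per degree, while vernier hyperacuity thresholds are on the order of a few arcseconds, about ten times finer.

```python
# Rough angular-resolution comparison. Assumed, illustrative numbers only.
horizontal_pixels_per_eye = 3840       # assumed per-eye horizontal resolution
horizontal_fov_deg = 140.0             # assumed per-eye horizontal field of view

ppd = horizontal_pixels_per_eye / horizontal_fov_deg
print(f"headset:      ~{ppd:.0f} pixels per degree")            # ~27
print(f"20/20 acuity: ~60 pixels per degree equivalent")        # 1 arcminute
print(f"hyperacuity:  ~600+ pixels per degree equivalent")      # a few arcseconds

# Any head rotation larger than one pixel's angular width re-samples the scene,
# which is what dithers fixed per-pixel artifacts away.
print(f"one pixel spans: ~{1.0 / ppd:.3f} degrees of head rotation")
```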

Breakdown - DSC does use a relatively discrete algorithm for compression. Consequently, it theoretically could break down with somewhat unnatural inputs, such as text/graphics, if its hard limits were exceeded. This is unlikely, and even more unlikely in VR, where lighting dithers the colors in both time and space.

_
DSC works as a relatively discrete process (a toy sketch of the per-line delta idea follows the list below).

  1. Color space is converted from RGB to something more efficient (unimportant).
  2. Horizontal lines of pixels are encoded as how much one pixel changes to the next, only when enough change is detected, allowing instant transition from solid black to solid white.
  3. Color depth of a horizontal line of pixels is chosen by how random the differences are. Pure random color dots, less depth. Large patches of similar colors, more depth. Allows the most gradual changes in color to be seen without banding.
  4. Exact color values are given for a few chosen pixels. Usually allows unnatural images of any color to use exactly the correct color.
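A minimal sketch of just that per-line delta idea, in Python. This is not the real DSC algorithm (prediction, rate control, and the indexed color history are all omitted); it only illustrates why hard edges survive while tiny variations get rounded:

```python
def encode_line(pixels, step=4):
    """Toy delta coder for one horizontal line of grayscale values (0-255).

    Each pixel is sent as a quantized difference from the previously
    reconstructed pixel. NOT the real DSC bitstream, just the general idea.
    """
    deltas, prev = [], 0
    for p in pixels:
        d = round((p - prev) / step)   # coarse difference, cheaper to transmit
        deltas.append(d)
        prev += d * step               # track what the decoder will reconstruct
    return deltas

def decode_line(deltas, step=4):
    out, prev = [], 0
    for d in deltas:
        prev += d * step
        out.append(max(0, min(255, prev)))
    return out

line = [0, 0, 255, 255, 128, 130, 131, 129]   # hard black-to-white edge plus a gentle ramp
print(decode_line(encode_line(line)))         # the sharp edge survives; the gentle ramp gets rounded
```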

By contrast, something like JPEG, H.264, or even OGG/MP3 audio compression uses a less discrete process - frequency filtering (sketched after the list below).

  1. A line of pixels, and/or a pixel’s value over a few frames, is plotted like an oscilloscope line.
  2. Basic low-pass, high-pass, and band-pass filters are applied.
  3. Amplitude of each frequency bin is taken and coarsely rounded; this rounding is where information is actually discarded.
  4. Decompression reverses the filtering, but cannot undo the rounding.
  5. Somewhat more complicated things may be done as well, such as dividing the frame into sub-blocks, reducing noise to improve compression, or some of the steps used by processes like DSC.
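A toy sketch of that frequency-filtering idea (not any specific codec): transform one line of pixels to the frequency domain, coarsely round the amplitudes, then transform back. The rounding, not the transform, is what discards information.

```python
import numpy as np
from scipy.fft import dct, idct

# One line of pixels: a smooth ramp followed by a flat bright region (illustrative only).
line = np.concatenate([np.linspace(20, 120, 32), np.full(32, 240.0)])

coeffs = dct(line, norm="ortho")           # "oscilloscope" view: amplitude per frequency bin
q = 25.0
quantized = np.round(coeffs / q) * q       # coarse rounding, where information is discarded
restored = idct(quantized, norm="ortho")   # decompression reverses the transform, not the rounding

print("coefficients kept:", int(np.count_nonzero(quantized)), "of", line.size)
print("worst-case pixel error:", round(float(np.max(np.abs(restored - line))), 1))
```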

At very low compression ratios, DSC may break down a bit with synthetic images combining exactly solid colors, high-entropy noise, and periodic patterns. Frequency filtering, by contrast, allows much higher compression ratios before approaching a risk of artifacts, at the expense of far more computation.

_
For DSC artifacts to be observable, images would almost certainly need to be sent to the framebuffer directly, and approach the limits of normal human visual acuity, with none of the rendering in 3D space typical of VR.

Some psychometric researchers using VR headsets to present raw stereo pairs may need to consider how known artifacts could impact their study. Worst case, their images could be run through the same algorithm, highly magnified, and viewed in slow/fast motion, before performing the study. Whether these limits will be encountered in any case for which current VR headsets are in fact the best available technology seems doubtful.

[Image: synthetic test pattern - various HTML colors casually selected, random RGB noise added, checkerboard pattern added. Absolutely no coincidental resemblance to any real-world symbols intended, and was actively avoided.]
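For anyone wanting to reproduce that kind of stress image, a sketch along these lines (assuming numpy and Pillow; the colors, sizes, and layout are arbitrary) mixes flat patches, full-range noise, and a one-pixel checkerboard, roughly the content mix a line-based coder likes least:

```python
import numpy as np
from PIL import Image

rng = np.random.default_rng(0)
h, w = 512, 768
img = np.zeros((h, w, 3), dtype=np.uint8)
third = w // 3

# Left third: flat patches of arbitrary solid colors.
colors = [(255, 0, 0), (0, 128, 255), (0, 255, 0), (255, 255, 0)]
band = h // len(colors)
for i, c in enumerate(colors):
    img[i * band:(i + 1) * band, :third] = c

# Middle third: full-range random RGB noise (high entropy).
img[:, third:2 * third] = rng.integers(0, 256, (h, third, 3), dtype=np.uint8)

# Right third: one-pixel checkerboard (worst-case periodic pattern).
yy, xx = np.mgrid[:h, :w - 2 * third]
img[:, 2 * third:] = (((yy + xx) % 2) * 255).astype(np.uint8)[..., None]

Image.fromarray(img).save("dsc_stress_pattern.png")
```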

_
For the future, within 10 years of development…

  • Graphics hardware will barely begin to computationally accommodate raster resolutions approaching human visual hyperacuity limits.
  • Improved semiconductor, cabling, and fiber optic technology, will allow higher clock rates and more comprehensive parallelism, negating the need for DSC.

_
Acknowledgement


Dang, my head can’t seem to understand any of these. I guess you’re basically saying Pimax 8KX still sucks! LOL

I don’t think so. I think his point is that DSC is likely to be undetectable in VR, because…


Thanks for this, very interesting and exactly the kind of thing I always like to learn more about.

It’s at the limits of my understanding but appreciate the comparison to audio and video codecs :+1:


Wow that some very interesting information, thanks Matthew!


Ultimately, my point is that there is overwhelmingly greater pressure on almost every other step in the entire VR pipeline than to get rid of DSC.

Aside from that, I should also point out that the human visual processing system operates on much the same principles as video compression does, and this is due to fundamental mathematical limitations that apply to image processing as inflexibly as the laws of thermodynamics, the speed of light, etc.

There is no ‘magic evil’ about DSC. The few academic research applications I can think of that would plausibly be impacted, would not be using VR, will carefully consider the impact of known artifacts on their studies, and could use specialized display technology that has been available since long before personal computers existed.

Bottom line. If someone starts trying to sell VR headsets to anyone outside certain academic research institutions with various kinds of Fear-Uncertainty-Doubt about that headset being “DSC-Free!”, their customers should be pointed here for a more balanced perspective.


Sony clearly demonstrated there is a lot of inefficient coding in gen-1 VR with how capable PSVR is.

Even if the software is fully optimized, we are looking at a 10x improvement in GPU capability, plus whatever new loads are imposed by higher-quality things along the way like more shaders, raytracing, etc, before getting rid of DSC is an issue.

The reason DSC happened seems to be that transistors are already getting down to tens of atoms, and at that point more transistors are a lot more achievable than higher clock rates (which would mean smaller transistors) or more data lines through the entire display electronics pipeline right to the panels.


Remember, drivers are part of the code efficiency.

I remember Attrib.exe from DOS; it demonstrated just how inefficient code can be. Someone coded their own version that was 1/10 of the size.

And I could do another long topic going over…

All the hardware calculations to show…
Whether raytraced or approximated…
Computational cost of achieving the kind of realism done by CryEngine 2 is within about the same order of magnitude either way…
Modern GPU hardware genuinely barely has the operations-per-second to render all these objects, textures, and pixels…

Ultimately the hardware must improve before DSC will become a relevant bottleneck.

There is only one way that extent of improvement will happen quickly - multi-GPU.


Or GPU cloud farms, like AMD has stated for a raytracing boost. It still comes down to inefficiencies in coding and hardware power. Now bear in mind how powerful consoles have become with low-spec CPUs and a custom, efficient OS.

While I like the cutting edge of, say, Android, Apple’s iOS on the iPhone is more efficient at using its restricted hardware profile.

It has been a problem for a long time now: just throw a stronger CPU/GPU and more memory at it.

GPU cloud farms obviously require video compression to deliver the results to end users…

To an extent yes, especially with the former OnLive, GeForce Now (Nvidia's cloud gaming), and Stadia.

But what AMD is talking about is augmenting. Steam has a kind of option for this with shader caching.

I have a hard time imagining how any kind of ‘augmenting’ would either not introduce artifacts (ie. DLSS), or would actually improve GPU pixel/texture/object rendering performance locally by 10x.

Maybe stuff like pre-tracing some rays and whatever else requires heavy computation but is not terribly timing-critical, then using those calculations combined with some dynamic ray-tracing on the GPU to extrapolate a result and speed up the render of the final frame… but it’s starting to sound like a bunch of compromises by then.


‘VR is a huge pipeline of mathematical compromises’ … ‘DSC, as currently described and used, is the very least of these.’

I’m not arguing that, I was just trying to imagine what could be offloaded to the cloud without actually introducing extra latency / compression.

Yeah, multi GPU is coming down the tracks. Need to move sideways. It’s the only way. All the big manufacturers know it and are beavering away on a solution to the problem.
