For the sake of anyone else thinking about what DSC means for VR, I want to be absolutely unambiguous on the logic behind using compression for VR.
Any compression artifacts can reasonably be expected to be totally drowned out in VR environments, which are always somewhat natural, even on the Pimax Vision 8kX. Pardon me if this could have been better written; it is a rather exhaustive summary of common sense and an obvious convergence of evidence.
VR is a huge pipeline of mathematical compromises, forced by legacy software, lack of processing power, physical transistor (clock rate) limitations, physical display limitations, and manufacturing cost.
DSC, as currently described and used, is the very least of these.
Compression ratios used over DisplayPort, HDMI, and similar links are extremely low: tens of gigabits of raw video compressed to perhaps a third of that, roughly 3:1 at most. Compression ratios used for lossy image/video/audio encoding (JPEG/H.264/OGG/MP3) tend to be far higher: gigabits of raw video down to megabits.
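To put rough numbers on that ratio, here is a back-of-the-envelope link budget. The figures are illustrative assumptions (a Pimax Vision 8kX-class signal of 2 x 3840x2160 at 90 Hz, 24 bpp, and DSC's typical 3:1 maximum), not official specifications:

```python
# Illustrative link-budget arithmetic (assumed figures, not official specs):
pixels_per_frame = 2 * 3840 * 2160            # two 4K panels
raw_bits_per_second = pixels_per_frame * 90 * 24   # 90 Hz, 24 bpp
print(f"raw: {raw_bits_per_second / 1e9:.1f} Gbit/s")        # -> raw: 35.8 Gbit/s

# DSC at its typical maximum 3:1 ratio (24 bpp down to 8 bpp):
compressed = raw_bits_per_second / 3
print(f"with DSC 3:1: {compressed / 1e9:.1f} Gbit/s")        # -> with DSC 3:1: 11.9 Gbit/s
```

For comparison, a DisplayPort 1.4 HBR3 link carries roughly 25.9 Gbit/s of payload, so the raw signal does not fit but the DSC stream does, with headroom to spare.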
Consequently, the opportunity for artifacts is relatively minimal.
VR physical resolution, 3D rendering, and lighting naturally make it unlikely that low-compression artifacts are observable, or meaningfully reduce actual information, at all.
Artifacts seen under obviously sensitive synthetic flicker/panning tests probably require a static reference and sub-pixel human visual hyperacuity to discern. Even then, these artifacts are not usually detected by study participants. Breakdown of the simple algorithm is another possibility, but rare.
Static reference - VR environments are not static images. Making a statistically significant detection of which of two headsets is or is not using DSC would require discerning artifacts that are effectively dithered away by even very small head movements, which are difficult to restrain adequately.
Sub-pixel hyperacuity - even the new Pimax Vision 8kX headset is still roughly 10x short of human visual hyperacuity limits, barely approaching normal 20/20 visual acuity. Human ability to align sub-pixel objects at this boundary of display technology can already be expected to be quite poor.
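As a rough sanity check on that "10x short" figure: 20/20 acuity resolves about 1 arcminute, i.e. about 60 pixels per degree, and vernier hyperacuity is on the order of 10x finer still. The panel and field-of-view numbers below are assumptions for illustration (3840 horizontal pixels per eye over an assumed ~90 degree per-eye horizontal FOV; actual optics vary):

```python
# Rough pixels-per-degree estimate (assumed figures; actual optics vary).
ppd = 3840 / 90                    # horizontal pixels / assumed per-eye FOV
print(f"~{ppd:.0f} pixels per degree")                        # -> ~43

ppd_2020 = 60                      # ~1 arcminute detail, normal 20/20 acuity
ppd_hyperacuity = ppd_2020 * 10    # order-of-magnitude hyperacuity figure
print(f"shortfall vs hyperacuity: ~{ppd_hyperacuity / ppd:.0f}x")   # -> ~14x
```

So the headset sits near, but still below, 20/20 territory, and an order of magnitude short of hyperacuity, consistent with the claim above.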
Breakdown - DSC uses a relatively discrete compression algorithm. Consequently, it could theoretically break down with somewhat unnatural inputs, such as text/graphics, if its hard limits were exceeded. This is unlikely, and much more unlikely in VR, because lighting dithers the colors in both time and space.
DSC works as a relatively discrete process.
- Color space is converted from RGB to something more efficient (YCgCo; unimportant here).
- Horizontal lines of pixels are encoded as how much each pixel changes from the previous one, with larger steps taken only when enough change is detected, allowing an instant transition from solid black to solid white.
- The effective color depth of a horizontal line of pixels is chosen by how random the differences are: purely random color dots get less depth; large patches of similar colors get more. This allows the most gradual changes in color to be shown without banding.
- Exact color values are stored for a few chosen pixels (an indexed color history), which usually lets unnatural images of any color use exactly the correct color.
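The pixel-to-pixel difference step above can be sketched in a few lines. This is a loose, simplified illustration of DPCM-style delta coding, NOT the real DSC algorithm, which layers rate control, block prediction, and the indexed color history on top of this basic idea:

```python
# Loose sketch of delta (DPCM-style) line coding -- not the real DSC algorithm.

def encode_line(pixels, q=4):
    """Encode one horizontal line as quantized pixel-to-pixel differences."""
    out, prev = [], 0
    for p in pixels:
        delta = p - prev
        # Small differences are quantized; large jumps (e.g. black -> white)
        # pass through coarsely but instantly, as described above.
        out.append(round(delta / q))
        prev += out[-1] * q          # track what the decoder will reconstruct
    return out

def decode_line(deltas, q=4):
    vals, prev = [], 0
    for d in deltas:
        prev += d * q
        vals.append(prev)
    return vals

line = [0, 0, 10, 200, 201, 203, 255]   # a sharp edge plus a gentle gradient
rec = decode_line(encode_line(line))
print(rec)   # close to the input; error bounded by the quantizer step
```

With the quantizer step set to 1 the round trip is exact; with a coarser step, each pixel lands within one step of the original, which is the flavor of trade-off the rate control above is managing.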
By contrast, something like JPEG, H.264, or even OGG/MP3 audio compression, uses a less discrete process - frequency filtering.
- A line of pixels, and/or a pixel’s value over a few frames, is plotted like an oscilloscope line.
- Basic low-pass, high-pass, and band-pass filters are applied.
- Amplitude of each frequency bin is taken.
- Decompression reverses the process, minus whatever fine detail the quantization discarded.
- Somewhat more complicated things may be done as well, such as dividing the frame into sub-blocks, reducing noise to improve compression, or some of the steps used by processes like DSC.
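The frequency-filtering idea can also be sketched briefly. Below, a short run of pixels is transformed with a 1-D DCT (the transform JPEG applies to 8x8 blocks), small high-frequency coefficients are discarded, and the result is inverted. This is illustrative only, not any specific codec:

```python
import math

# Minimal 1-D DCT-II / DCT-III pair (illustrative, not any specific codec).

def dct(x):
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for n in range(N)) * (2 / N) for k in range(N)]

def idct(X):
    N = len(X)
    return [X[0] / 2 + sum(X[k] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                           for k in range(1, N))
            for n in range(N)]

line = [100, 102, 104, 110, 120, 118, 116, 115]
coeffs = dct(line)
kept = [c if abs(c) > 2.0 else 0.0 for c in coeffs]   # discard tiny detail
rec = idct(kept)
print([round(v) for v in rec])   # near the original, minus fine detail
```

Without the thresholding step the round trip is exact; the compression (and the artifacts) come entirely from which coefficients are thrown away, which is why higher ratios degrade so gracefully on natural imagery.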
At very low compression ratios, DSC may break down a bit on synthetic images combining exactly solid colors, high-entropy noise, and periodic patterns. Frequency filtering, by contrast, allows much higher compression ratios before approaching any risk of artifacts, at the expense of far more computation.
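Why high-entropy noise in particular stresses delta coding can be shown with a quick count: code each pixel-to-pixel difference exactly and tally the bits. This is a simplified stand-in for DSC's rate-control pressure, not the real algorithm:

```python
import random

# Count bits needed to code each pixel-to-pixel difference exactly
# (a simplified stand-in for rate-control pressure, not real DSC).

def delta_bits(line):
    prev, total = 0, 0
    for p in line:
        d = abs(p - prev)
        total += max(1, d.bit_length()) + 1   # magnitude bits + sign bit
        prev = p
    return total

random.seed(0)
gradient = list(range(0, 256, 2))                      # smooth ramp, 128 px
noise = [random.randrange(256) for _ in range(128)]    # random dots, 128 px
print(delta_bits(gradient), "bits vs", delta_bits(noise), "bits")
```

The smooth ramp costs a few bits per pixel; the noise line costs several times more, which is exactly the situation where a fixed per-line bit budget forces the encoder to coarsen its quantization.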
For DSC artifacts to be observable, images would almost certainly need to be sent to the framebuffer directly, and approach the limits of normal human visual acuity, with none of the rendering in 3D space typical of VR.
Some psychometric researchers using VR headsets to present raw stereo pairs may need to consider how the known artifacts could impact their studies. Worst case, their images could be run through the same algorithm, highly magnified, and viewed in slow/fast motion before performing the study. Whether these limits will be encountered in any case where current VR headsets are in fact the best available technology seems doubtful.
(Various HTML colors casually selected, random RGB noise added, checkerboard pattern added. Absolutely no coincidental resemblance to any real-world symbols intended, and was actively avoided.)
For the future, within 10 years of development…
- Graphics hardware will barely begin to computationally accommodate raster resolutions approaching human visual hyperacuity limits.
- Improved semiconductor, cabling, and fiber optic technology, will allow higher clock rates and more comprehensive parallelism, negating the need for DSC.
Thanks to @Fabrizio for referencing a relevant psychometric study on DSC artifact visibility.