Since playing around with supersampling, I’ve noticed big differences in quality and performance even at the “same” settings (at least by the numbers), depending on whether it’s set in-app or outside the app.
E.g. it feels like Arizona Sunshine and Eleven Table Tennis look better and run more smoothly when SS is set via the SteamVR settings (Vive) or even the Oculus Tray Tool than from within the app.
With video apps it seems to be the other way round.
Am I missing something? Is there some general rule of thumb, is it simple trial and error, an odd coincidence, or is my perception just off?
I thought I had a basic understanding of how SS works, but these observations do not add up, and I did not find anything like “supersampling in-app vs outside-app” on Google. Any thoughts on this?
There are many ways in which implementations can vary. Three well-known issues are:
- SteamVR settings were changed some time back, so that SS 2.0 (in the SteamVR settings, and nowhere else) no longer means twice the width and twice the height, but twice the total pixel count for the entire screen, so the actual per-dimension SteamVR supersampling nowadays, for any given number, is only the square root of what it used to be (see the sketch after this list). This change was made because it was deemed easier for the user if the number more directly reflected the increase in workload as you supersample. I tend to disagree, but, oh well… :7
(EDIT: At about the same time, the sampling method was changed to one that produces a smoother image. On popular request, an option was soon thereafter added to switch back to the old one, because whilst it exhibits more aliasing, the picture is much sharper and richer in detail, which many, myself included, prefer. Find this option in the developer section of the SteamVR settings: “Enable Advanced Supersampling Filtering” (default: ON, which is the newer algorithm), just below the reprojection options.)
- In something like Elite Dangerous, you have two in-app supersampling options. There is “Supersampling”, which works entirely internally: a larger image is rendered and then downsampled before it is handed over to the target runtime (SteamVR or Oculus); …and “HMD Quality”, which hands over the larger rendered image without downsampling it first, which means the runtime gets the whole thing to work with when distorting for the lens, making for a better result.
In the case of Elite, “external” SS and “HMD Quality” stack, so twice the size per dimension in both places means 4 times SS per dimension, and 16 times the workload. Other titles may treat things differently.
- At least until recently, Unreal Engine 4 titles universally ignored any supersampling you asked for from SteamVR. Not only that, but I suspect a resolution was “hardcoded” into the engine build that did not include the base 1.4x oversampling used with the Vive (to fully utilise the resolution in the centre of the lens). This meant everything was extremely blurry, and your only hope of improving matters was if the app developer included a supersampling option, or if you increased the “screen percentage” (and at the same time got rid of the ugly post-effect blur that passed for “antialiasing”) in the application’s .ini files - something that does not necessarily always work.
Today UE4 has added automatic screen-percentage scaling to the developers’ tool belt, along with a retrofitted forward-rendering path option (an option for the developer, that is; the game must be designed with the chosen tech in mind, so… the older “forward” method is less powerful, but allows for better AA algorithms), which should hopefully improve matters. :7
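To put rough numbers on the two SteamVR conventions and the Elite-style stacking mentioned above, here is a small sketch (my own illustration in Python, not actual SteamVR or Elite code; the 1080x1200-per-eye Vive panel and the 1.4x base oversample are just the figures referenced above):

```python
# Rough illustration of the two SS conventions (not SteamVR code).
# Vive panel per eye: 1080x1200; SteamVR renders ~1.4x per dimension as a baseline.
BASE_W, BASE_H = int(1080 * 1.4), int(1200 * 1.4)   # ~1512 x 1680 per eye

def old_steamvr_ss(setting):
    """Old meaning: the setting scales each dimension directly."""
    return int(BASE_W * setting), int(BASE_H * setting)

def new_steamvr_ss(setting):
    """New meaning: the setting scales the total pixel count,
    so each dimension only grows by sqrt(setting)."""
    scale = setting ** 0.5
    return int(BASE_W * scale), int(BASE_H * scale)

print(old_steamvr_ss(2.0))   # ~(3024, 3360) -> 4x the pixels, 4x the work
print(new_steamvr_ss(2.0))   # ~(2138, 2375) -> 2x the pixels, 2x the work

# Elite-style stacking: 2x per dimension in-app ("HMD Quality") on top of
# 2x per dimension externally = 4x per dimension = 16x the pixel count.
per_dimension = 2.0 * 2.0
print(per_dimension ** 2)    # 16.0
```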
Awesome indeed. Though we’d need to find out how Pimax’s driver handles supersampling (Render Multiplier). I believe it still functions the way you said the SteamVR driver used to. But because Pimax is using its own driver, I believe the Steam SS setting doesn’t do anything, since SteamVR is not doing the rendering.
Wow, thank you so much for taking the time to lay it out for me like that, appreciate it!
I believe @Enopho can give you some information on where to adjust your settings (at least for ED). I don’t have the thread handy, but I know that there is one specific to ED and I have already committed to working with him to find settings appropriate for ED with the 8k on AMD (Vega 56) hardware when the time arrives.
Hope this helps.
The thread is in the VR Content section, I believe… I’m at my daughter’s birthday party right now, so I will find it later.
Eno
** EDIT
Here is the link for ED:
http://community.openmr.ai/t/additional-testing-with-elite-dangerous/3203/27
Since it’s obvious that you are well familiar with the different concepts and variations of SS, I’ll try to take advantage of you and clarify some more stuff, if I may.
I get the idea of downsampling as a form of superior antialiasing. I did that a lot with my full HD panel, so I could make good use of all the surplus GPU power (I don’t do it anymore because I’m running a 4K screen now).
So I understand that rendering a game at a higher res than the panel actually supports ends in a form of downsampling.
This also makes sense for e.g. watching 720p movies (upscaled to 2K/4K in post), then downsampled by the limitations of a 1080p screen, just like madVR does.
So supersampling in a game basically just upscales the content (in post) instead of actually rendering it “in-game” (which would be more demanding), like the movie example above.
So now to the point:
If a movie is already in 4K/5K, upscaling seems useless, apart from the downsampling by the screens of the HMD. Except if it’s a shitty player app (unfortunately most of them) that is limited internally to, let’s say, 1K. So the chain looks like:
4K footage → player limit 1K → upscaled to whatever is set in the Steam settings → downsampled to the limited res of the screen.
If it’s theoretically a good player that passes the res 1:1 internally, the upscaling would not be needed? Or would there still be a noticeable picture improvement if 4K footage is upscaled to 8K, then downsampled to the (as of yet) rather poor HMD res? MPC has some nice diagnostic tools built in to check what goes in and out. Is there a tool out there to measure the actual output res of a VR player? I suppose that actual supersampling of 4K footage is just a waste of resources, which ends in less smooth playback and no noticeably better picture IF the player is capable of passing 4K through.
Sorry for my confusing choice of words since I’m not a pro.
Umm… Doubt I can meet your expectations, but…
I’ll go out on a limb, and opine that there is never a point to upscaling, other than to fit an image to a frame. Video is already finalised to the resolution it comes in, and scaling it up can not conjure detail into existence. Rather you’ll get into the aliasing issues from spotty sampling I mentioned, especially since a video texture is unlikely to have mipmaps, but I’ll allow that the averaging that has at that point already occurred through the upscaling, could itself be said to have a bit of an “antialiasing” effect (blur, essentially).
You master content at a high resolution, to preserve as much detail as possible for as long as possible, and then, as the last step, downsample it to the target resolution once. After that, things can only degrade. Scaling in whole multiples is easy, but if I scale a 100x100 picture up to 101x101, that sounds like a tiny change, yet with a quality focus it still means every pixel will need to change in order to maintain proportions. Since we are working with discrete picture elements, that means a value change rather than a position change, resulting in slight softening: moving a black pixel, surrounded by white, half a step to the side famously turns it into two medium grey ones – the same applies in proportion, universally.
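To make that “two medium grey pixels” bit concrete, here is a tiny sketch (my own illustration, assuming NumPy is available; nothing runtime-specific) of what a half-pixel shift does under plain linear resampling:

```python
import numpy as np

# A 1-D row of white pixels (1.0) with a single black pixel (0.0) in the middle.
row = np.array([1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0])

# Resample the row at positions offset by half a pixel, using plain linear interpolation.
positions = np.arange(len(row) - 1) + 0.5
shifted = np.interp(positions, np.arange(len(row)), row)

print(row)      # [1. 1. 1. 0. 1. 1. 1.]
print(shifted)  # [1.  1.  0.5 0.5 1.  1. ]  -> the black pixel has become two mid-grey ones
```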
So every time we rescale the picture, it will be blurred a little, and therefore we typically only want to do it once, and that one time is the VR runtime’s lens distortion compensation stage. With the Pimax 8k there will be an additional upscaling after this, when the picture arrives on the HMD, which is one of the reasons I posit that the 8kX should look slightly better than the 8k, even if rendering both at the exact same resolution – we’ll see about that. :7
Hope something in there more or less answered your question…
Thanks again for the clarification; I had to read it twice, though, but that’s also because English is, obviously, not my native language.
Honestly I don’t play ED (will have a look soon though), but now I grasp the differences between games and the mipmaps principle. I also very much like that “other than to fit an image to a frame” point of view, which is similar to audio editing (I do a lot of audio engineering), but honestly I have had remarkable results with madVR upscaling low-res footage. So there are probably rare cases where it may actually make sense. But of course altering anything substantially will always come with compromises, sometimes negligible if they result in a healthy trade-off.
I was also asking about a test tool because I don’t get, and this is just theory, why a VR movie player would come with built-in SS while being limited internally to, let’s say, full HD res. Instead it could just pass the footage through (e.g. 4K), let the lens distortion do its job, and achieve a much better result than, as you said, going back and forth multiple times. I’m still testing various players and trying to get a more in-depth understanding of why the results (in terms of picture quality) differ that much. I think it’s safe to say that if you play 4K footage and get much better results using SS, there must be something wrong with the app, and I can assume that the app is actually downscaling before applying SS, which makes no sense at all.
Not a native English speaker myself, and I tend to write rambling, confusing nonsense even in what is my native language – sorry about that. :7
Hmm, I googled “madvr”, and got to a page which I did not find particularly informative…
I take it the application takes the output from any of the media players listed (none of which I have ever heard of. :D) and displays it in VR in some manner…? Is this manner “gluing the video to your face”, so to speak, or does it put it on a screen in a virtual environment? Or is it actually not a VR app at all, other than in name, or even real-time – maybe in effect an image processing program? Or does it go between the video and the listed players, rather than the other way around (EDIT: …or maybe both!)?
How does it interact with the listed calibration packages (…of which I know of Calman and LightSpace at least)?
Either way, we are talking about upscaling the video content prior to rendering it into the HMD, are we, and not just supersampling the intermediate image (“the game”, if you will) into which the video is rendered?
With the latter, you will use more of the pixels from the video (as a motion texture), even if the texture filtering method is poor, and definitely get a better result.
The former might help, by producing half-tones between pixels, for the filter to sample, but it should create those by itself anyway, so… I can only imagine you’d get sharp video, but with quite a bit of aliasing.
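To illustrate why sampling a detailed texture more densely and then averaging (the latter case) helps so much, here is a toy sketch (my own illustration in NumPy; the sine pattern just stands in for fine video detail, nothing player-specific):

```python
import numpy as np

# A pattern much finer than the output grid, standing in for high-frequency video detail.
def pattern(x, y):
    return 0.5 + 0.5 * np.sin(40 * x) * np.sin(40 * y)

out_res = 8
xs = (np.arange(out_res) + 0.5) / out_res

# One sample per output pixel: the fine detail aliases into spurious blotches.
single = pattern(xs[:, None], xs[None, :])

# 4x4 samples per output pixel, then averaged ("supersampled"): much closer to the
# true local average of the underlying detail.
sub = 4
fine = (np.arange(out_res * sub) + 0.5) / (out_res * sub)
dense = pattern(fine[:, None], fine[None, :])
supersampled = dense.reshape(out_res, sub, out_res, sub).mean(axis=(1, 3))

# The supersampled image shows far less spurious contrast (aliasing) than the single-sample one.
print(single.std(), supersampled.std())
```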
The madvr thing sounds very focussed on quality, so I would have expected it to forgo any “workaround” style trickery, and instead go straight for using a high quality (and processing intensive) resampling method, to render the original video to the desired output itself, rather than relying on something like game engine defaults, but I have no idea. :7
The 200% scaled-up image on the page nicely demonstrates a few different resampling algorithms, by the way. When looked at using a screen magnifying tool (search for “Magnifier” on your desktop, and crank it up high; this tool magnifies “nearest neighbour”, without smoothing, so that one can clearly see the pixels blown up), one can see how Lanczos (a bicubic-like profile) produces a nicer result than bilinear, and how the NGU filter smooths out edges spatially, rather than just “working in place”, at the cost of giving things a bit of a “water colour” painted look.
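If you want to reproduce that kind of side-by-side yourself, here is a quick sketch (assuming Pillow is installed; the file name is just a placeholder, and madVR’s NGU is proprietary, so only the standard filters are shown):

```python
from PIL import Image

# Hypothetical test image; swap in any small, detailed picture.
img = Image.open("test_pattern.png")

# 200% upscale, as on the madVR comparison page.
target = (img.width * 2, img.height * 2)

for name, method in [("nearest", Image.NEAREST),
                     ("bilinear", Image.BILINEAR),
                     ("lanczos", Image.LANCZOS)]:
    img.resize(target, resample=method).save(f"upscaled_{name}.png")
```

Viewed through the Magnifier as described, nearest-neighbour keeps the hard pixel blocks, bilinear softens everything, and Lanczos stays noticeably crisper (at the cost of a little ringing).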
One can definitely see the effect of downscaling, as you mention, when watching videos on YouTube with the default interface, where the video is something of a thumbnail image at the top left. I notice more and more moiré patterning in video there, for example whenever somebody wears something with high-contrast, dense-detail (high-frequency) patterning, such as a two-tone knit jumper or a pinstripe suit, which could be a sign of a rather simple filtering method being used, but there are other possibilities, too, involving things such as sharpening being applied in post.
In all, everything depends on the order in which things happen, and how things are done in each step. Maybe Mathias (the madVR developer) could be persuaded to reveal his secrets… :7
Honestly, madVR does not directly deal with VR, but it’s an awesome tool to get the best out of movies in general.
I don’t dare to explain chroma upscaling to you because I just follow guides and fiddle around; the results are amazing though.
I roughly know how it works but I’m not fit to put it into words.
Using an LG 4K 75 Hz screen, I usually have this chain:
Video Footage 1080p 24 fps
reclock 24 to 25 fps
madVR chroma upscaling (NGU sharp) to double or quadruple
S(mooth)V(ideo)P(roject) set to 3x (25) to match the 75Hz of the monitor
or 2x30 on my old Plasma TV
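For what it’s worth, here is a little sketch of the arithmetic behind picking that combination (my own helper in Python, nothing to do with how ReClock or SVP actually work internally): list the reclock targets whose integer multiples land exactly on the display refresh, then pick the one with the smallest speed change.

```python
# Assumed helper (not ReClock/SVP code): list reclock targets whose integer
# multiples land exactly on a given display refresh rate.
def reclock_options(display_hz, source_fps, max_factor=4):
    options = []
    for factor in range(1, max_factor + 1):
        target_fps = display_hz / factor
        speed_change = (target_fps - source_fps) / source_fps * 100.0
        options.append((target_fps, factor, round(speed_change, 1)))
    return options

# 24 fps film on a 75 Hz monitor: reclocking to 25 fps and letting SVP interpolate 3x
# gives the smallest speed change (+4.2%), matching the chain above.
for fps, factor, change in reclock_options(75, 24):
    print(f"{fps:.2f} fps x {factor} = 75 Hz (speed change {change:+}%)")
```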
I also tried this with vorpX and MPC-HC streaming directly to the HMD (including madVR/ReClock/SVP), reclocked to 30 fps x3, and I like the results. It’s a bit of fiddling, since the footage may vary in fps and in how demanding the codecs are. So you need to check which reclock to choose based on the footage, the display target, and how much the GPU can take with the realtime upscaling demands.
It’s great for watching 2D movies and “non-VR” (no 180/360) 3D movies.
Unfortunately I don’t like the results with 180/360 footage.
Ok, so something that takes colour into account, I surmise, if the “chroma” naming is anything to go by… :7
I could imagine the distinct edges produced by the algorithm (as opposed to the gradients you get from ordinary resampling) could constitute some really nice and recognisable anchors for SVP’s motion analysis (and also Oculus’ asynchronous space warp), to work with.
Never tried SVP; how does it do when there is rapid motion (with associated blur)?