Aliasing Problem with Elite Dangerous - Pimax 8K X

Dear Community and wide FOV Fans ;),

I am a very active E:D player, and in VR it is a quite unique experience.
But the aliasing in E:D VR is a bit of a problem for me.

Using anti-aliasing in Elite makes both the image and the performance worse (I tried every option).
I am using the 8K X at 3800x3300 resolution in SteamVR
and a 1.25 render scale in PiTool 268 (as recommended in the spreadsheet).
The in-game settings are HMD Quality at 1.5 and supersampling at 1.0.
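As a side note, how these three multipliers stack is not obvious. A small sketch, assuming (and this is an assumption, not confirmed behavior) that PiTool, SteamVR, and the in-game HMD Quality each scale the render target linearly per axis, so the pixel count grows with the square of each factor:

```python
# Sketch: how per-axis render-scale multipliers stack.
# Assumption: each factor scales width and height linearly, so the
# total pixel count grows with the square of each factor.

def effective_resolution(base_w, base_h, *scales):
    """Apply each per-axis scale factor to a base resolution."""
    w, h = base_w, base_h
    for s in scales:
        w, h = round(w * s), round(h * s)
    return w, h

# Settings from the post: 3800x3300 reported by SteamVR (PiTool's 1.25
# already reflected in that target), then HMD Quality 1.5 and SS 1.0.
w, h = effective_resolution(3800, 3300, 1.5, 1.0)
print(w, h)  # 1.5 per axis means 1.5^2 = 2.25x the pixels to shade
```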

So I would be very thankful for any suggestions on how I can improve the aliasing!

Best,
Pascal

2 Likes

You can't; your resolution is OK, and the experience is either a bit blurry with AA or jagged without it. Even on a 2D screen it is jagged without strong SS. Get used to it :slight_smile: Cobra engine.

3 Likes

Unfortunately, ED has some of the worst aliasing of any modern game. It uses a “deferred renderer”, which means that “real” antialiasing techniques don’t work. Worse still, the stations have many closely-spaced, thin, parallel objects like grates. It really is a worst-case scenario.

What you need to do is set your supersampling to as high a setting as your system can handle.

Personally, when playing ED, I now use Small FOV on my 8KX, so that I can get the maximum supersampling per-pixel that my lowly RTX 2080 can handle. I really want to upgrade my GPU, but current availability and prices are terrible.

4 Likes

@Yata_PL

Thanks for your suggestions.
I already suspected that the aliasing cannot be improved any further.
Now I have confirmation. And yes, I am mostly used to the aliasing, but it is such wasted potential for such an awesome space simulator. Graphically, the game in VR is very beautiful, if it were not for the aliasing. But I have to accept it completely as it is.

@neal_white_iii At this point I would wait for the next GPU generation, à la Lovelace or RDNA 3. That will be an even bigger boost in performance than going from a 2080 to a 3080 or 3090. As long as your 2080 is still under warranty, hold onto it.
In 2022 the GPU shortage "should" improve as silicon production ramps up,
so your chances of getting the new generation should be higher.

1 Like

For the graphics laymen among us: here's a great thread on what forward rendering actually is.

I think we would all benefit from an Elite Dangerous 2 about now. The engine is old.

1 Like

@drowhunter

Interesting link and information, thanks for posting :grinning:

1 Like

I must admit that after reading this I was not much wiser, so I went looking for more info about the difference. To my surprise, the comparisons I found were mostly superficial, and some were downright wrong. I am not a game developer, but I believe I would be able to follow a technical explanation; I just did not find one.

Anyway, all the articles I read mentioned that a deferred renderer is unable to do proper anti-aliasing (though I am not sure why), so I wonder how forward rendering would solve the problem with ED.

As it happens, I believe I heard something in the distant past to the effect that ED, at least at that time, used deferred rendering for some things and forward rendering for others.

I have no idea how many separate layers the game renders and composites together, but it is always safe to assume that UI elements are done on their own. I also quite suspect planet surfaces may be done in one pass, and every object that may appear on that surface in another, because if you push graphics fidelity to the point where the frame rate gets really low, you can move your head around and actually see motion lag between surface rocks, buildings, SRVs, and so on, and the ground they are supposed to be... ehm... "grounded" to.

That could mean the layers have been set up in such a way that their rendering is allowed to cross frame boundaries, rather than it being only a floating-point precision issue between recursions of coordinate-system parenting, which Elite also exhibits a lot of. (Take, for example, a trip with the vanity camera, e.g. when docked at a station, and note how the camera being parented to the ship, which is parented to the landing pad, which is parented to the station, leads to whole sections jittering relative to one another when viewed up close and the mantissa is not fine-grained enough.) :stuck_out_tongue:

Planet surfaces have also so far exhibited less aliasing than "loose" objects, to my eye, which could be taken to suggest they are rendered forward and receive the proper multisample anti-aliasing one cannot have with deferred rendering. But of course, it could also simply be that the heightmap topography tends to be kind of soft and rounded, with texturing (with anisotropic filtering) carrying most of the detail while being similarly soft and low in contrast, which would attenuate how much any jaggies stand out.

As it happens, it has been noticed (well, it stood out to me, at least) that the new, more detailed planet features seen in ED: Odyssey promo videos exhibit a lot more aliasing than what we have previously seen on planets. This could simply be the inherent effect of the aforementioned difference between old low-frequency detail and new high-frequency detail, BUT one could also couple it with something said in a Q&A with one of the planet generation/rendering developers the other week: more things will now have "physically based rendering" materials. Spinning on from that thread, one could conceive the notion that planet rendering is about to transition from forward rendering to deferred, in line with the rest of the game, which would subject it to the same drawbacks as all the "hard" objects already in that pipeline.

That, however, should potentially come with the associated upsides of deferred rendering, and maybe we could finally get lighting from multiple stellar bodies (a long-desired fundamental feature), even if shadow maps could be presumed to remain from a single global source (plus one local one) for performance reasons. On the PBR side, we might also get shinier ice, to offset the additional specular aliasing that is bound to come on the other side of that coin, as you get into recursion and resolution-sampling matters between normal maps and reflection maps... :7

An old engine that is modularly designed, can always have bits swapped out and updated - even should it turn out to be Gamebryo. :7

(EDIT: So if things are about to go from bad to worse, jaggie-wise, maybe we could hope for some good implementation of TAA or DLSS to soften the blow, although I am not convinced, neither of the prospect of FDev doing anything of the sort, nor of the output quality of such algorithms... :7)

This is overly simplified, but a forward renderer draws sorted triangles from back to front. When antialiasing, the edge of each triangle is combined with previously drawn pixels.

No such sorting is required for a deferred renderer, which prevents “overdraw”, so it is a more efficient algorithm. The first pass sets pixels in the Z-buffer (depth) and subsequent passes don’t draw anything that is behind the Z-buffer, which speeds things up. The problem is that because everything happens at the discrete pixel level, antialiasing doesn’t work (because the “behind” pixels are not drawn and there is nothing to blend the “front” pixels with).
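The depth test described above can be sketched in a few lines. This is a toy model, not engine code: a fragment survives only if it is closer than whatever the depth buffer already holds, so occluded fragments are discarded outright and there is nothing left to blend edges against.

```python
# Toy sketch of a Z-buffer test: fragments behind the stored depth
# are rejected, so nothing "behind" survives to blend with.

import math

WIDTH, HEIGHT = 4, 4
depth = [[math.inf] * WIDTH for _ in range(HEIGHT)]   # far plane = infinity
color = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]

def write_fragment(x, y, z, rgb):
    """Depth-test a fragment; return True if it was written."""
    if z < depth[y][x]:           # closer than current contents?
        depth[y][x] = z
        color[y][x] = rgb
        return True
    return False                  # behind: discarded, never blended

write_fragment(1, 1, 5.0, (255, 0, 0))   # far red fragment -> written
write_fragment(1, 1, 2.0, (0, 255, 0))   # nearer green -> overwrites
write_fragment(1, 1, 9.0, (0, 0, 255))   # behind -> rejected
print(color[1][1])  # (0, 255, 0)
```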

Supersampling solves the deferred rendering / anti-aliasing problem, at a great cost (drawing many more pixels). That's because several pixels (each with its own depth) are combined for the final output, which smooths the edges. I've experimented with ED, and to look good (because of all the grates) you need at least 3x scaling, which means drawing 9 times the pixels (3 times in both the x and y dimensions). 4x supersampling looks great, but that's 16 times as many pixels (and on my system, it draws about 1 frame per second).
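The cost figures above follow from simple arithmetic: an NxN scale multiplies the pixel count by N squared. A quick sketch (the base resolution here is a made-up per-eye target, just for illustration):

```python
# Cost of supersampling: scaling the render target by N in each
# axis multiplies the number of drawn pixels by N squared.

def supersample_cost(scale):
    """Pixel-count multiplier for an NxN supersampling scale."""
    return scale ** 2

base_w, base_h = 3840, 2160  # hypothetical per-eye render target
for scale in (1.5, 2, 3, 4):
    pixels = (base_w * scale) * (base_h * scale)
    print(f"{scale}x scale -> {supersample_cost(scale):.2f}x the pixels "
          f"({pixels / 1e6:.1f} MPix)")
# The post's figures fall out directly: 3x -> 9x, 4x -> 16x.
```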

TAA would also solve the problem, but it's very complicated to do (since motion data from previous frames must be retained and incorporated into the new frame). TAA can also introduce drawing artifacts.
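The core accumulation step of TAA is simple to sketch; what's hard (and omitted here) is the per-pixel motion-vector reprojection of the history buffer that the post alludes to. A toy version, assuming a plain exponential blend:

```python
# Toy sketch of TAA's accumulation step: blend the current frame
# into a history buffer. Real TAA also reprojects the history using
# per-pixel motion vectors, which is the complicated part.

def taa_resolve(history, current, alpha=0.1):
    """Exponential blend: mostly history, a little current frame."""
    return [(1 - alpha) * h + alpha * c for h, c in zip(history, current)]

# A jagged edge flickering between 0.0 and 1.0 across frames settles
# toward its average over time, softening the aliasing.
history = [0.0, 0.0, 0.0]
for frame in ([1.0, 0.0, 1.0], [0.0, 1.0, 0.0], [1.0, 0.0, 1.0]):
    history = taa_resolve(history, frame)
print([round(v, 3) for v in history])
```

The artifacts mentioned above come from exactly this blending: if the reprojection is wrong (disocclusion, fast motion), stale history values smear into the new frame as ghosting.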

3 Likes

Well, my understanding is that with deferred rendering you hold all polygons in memory until it's time to draw them. This has the benefit that you can know what the entire scene looks like, including occluded objects.

Real anti-aliasing is probably only possible because this data exists, whereas with forward rendering each polygon is rendered without knowledge of its neighbors, which renders certain anti-aliasing techniques useless.

1 Like

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.