Tech Talk #1: Why Unreal Engine 5 is a GAME CHANGER

Dear all,

Hope you guys had a relaxing and cherished Easter holiday break!

We are about to start a brand new project “Tech Talk” on OpenMR.

The goal of the project is to foster an intellectual atmosphere where people debate and discuss their ideas, helping different views and opinions spread.
Each Monday, we will propose a new topic here, or if you guys have any topic in mind, please reach out to us!

So, let’s introduce the first topic, which has been rapidly shared and gone viral on the net.

“Why is Unreal Engine 5 a game changer?”

In the official trailer and the material shown on the official site, they’ve brought up some good new features, such as the new modeling tools and World Partition.

Is this gonna give our devs better tools to develop exciting VR software and games?

What will unfold in the coming years? We shall wait and witness the greatness.

Leave a comment below if you have something to share with us!


I watched the Unreal presentation and it looks mind-blowing; even for regular games this is going to be next level :ok_hand:t2::ok_hand:t2:
Also very excited that Crystal Dynamics started using Unreal Engine 5 to make a new Tomb Raider game :partying_face:


Well, we’ll see how many of the new and flashy features developers will actually be able to afford to use in VR titles…

I downloaded the engine when it was released (…to the public), to have a look around, and maybe see if I could drop a VR pawn into the Matrix city demo (…which I have not figured out how to do yet, so that it shows up in the HMD and not just on-screen. I’d have to watch some weeks’ worth of tutorials, I suppose, just to find my way around the editor, much less get a basic understanding of the structuring… I could load the VR plugin, which unlocked the VR preview menu item, but selecting that only rewarded me with a crash on startup :7).

The city does look quite spectacular. Right from the start you can free-camera beneath the wooden benches that line the top edges of some planters, and there is texturing, and nuts and bolts and stuff modelled there, inside the inch-wide gap between stone and plank, that nobody will ever normally see :9. Plenty of fine detail everywhere.

…but it is also rather demanding, performance-wise, and leans heavily on temporally sampled upscaling: turning around produces this grainy blurring effect, with pixelly fringes around everything, which resolve to something better over a few frames after one stops turning. As a visual effect, this actually rather fits The Matrix’s art direction/world building, though :7.

Lumen produces some mighty fine global illumination, but it is a low-res lighting model, and it lags behind by a second or two.

Lots and lots of detailed and destructible (…at which point cars have their static Nanite parts swapped out on the fly for regular ones…) mobile actors traversing the streets and roads of the city. The much-hyped “MetaHumans” are of course not of their highest-fidelity kind when there are that many on screen at the same time. It is going to be interesting to see how they look in the sort of titles where there is only likely to be a single one on at any time, only to deliver a little monologue and then sod off, and where the developer may not always be that rigorous about optimisation, like Cyan’s “Firmament”.

The arguably minor little thing I was really quite curious about was how well the parallax shader, which is used to quite spectacular effect on all the windows in every building in the city, works in VR. Still wondering - if somebody else, with more wherewithal than I, has checked, please do share your findings. :7


It seems to me that the advances in Unreal Engine 5 are aimed at flat gaming and not really VR. They’re focused on bigger and better with even more detail. That’s great for flat gaming where modern GPUs are increasingly overkill. But those same GPUs struggle with even moderately complex environments in VR. So I expect that most of what’s been added isn’t very meaningful from a VR perspective.

VR remains a sideline of their development. Note that Lumen doesn’t even work in the VR rendering pipeline yet. They say they’re going to keep working on Lumen, and VR support will be added sometime in the future.

VR is taking off, and game engine developers need to hop on that train. But so far it doesn’t seem like any major priority for them.

What I’m really interested in seeing is the development of rendering pipelines designed explicitly for VR from the ground up, such that they’re not rendering as if for a flat game and then doing additional post-processing to fix it up for VR. VR gear does not display on flat planes like a monitor does, and it should theoretically be possible to eliminate that interim step. That would eliminate a lot of wasted processing.


No? None of these features are even available in VR…


Semi-amusingly, what I see as the right way forward for VR rendering inherently means heavier processing load :stuck_out_tongue:: Raytracing - ground-up raytracing, that is - none of this hybrid nonsense.

In VR, all these things, like realistic, accurate realtime lighting, shadows, reflections, and so on, make a hell of a lot more difference than they do in a picture frame on one’s desk, IMHO. It’s the difference between a human character in-game that looks like a picture scissored out of a magazine and pasted on top of the background, and somebody who is actually there and has a little ambient-occlusion interaction with the environment.

Thing is, although raytracing is a whole lot more work for the computer, it operates more or less per-pixel, and if the renderer were written with the right sort of flexibility, RT would lend itself much better to foveation than rasterised graphics:

- It is parallelisable.
- You could theoretically have a spherical (…or parabolic) viewplane, so that you don’t waste resources on a stretched rectangular one.
- You could adapt the density of your rays directly to the user’s gaze.
- You could take the effects of the lens into account in the same adaptation, again concentrating your resources where they do the most good.
- You could take the lens distortion into account at the moment the rays are cast, so that the frame comes out counter-distorted right away, eliminating the need to do that in post.
- You could dynamically reduce the amount of complexity per pixel in many different ways, such as reducing how many bounces are allowed.
- You could structure your rendering of the frame so that samples are taken all over it, density still guided by foveation, and quality refined with multiple passes, so that it builds up over time, all the way up until the moment the frame needs to be handed off to the runtime to make the next refresh, at which point rendering can instantly be aborted and the frame constructed from what you have so far, making dynamic-quality rendering inherent to the system. (That said: I also think that HMDs should be able to refresh at arbitrary times, so that the displaying of a 0.005 ms delayed frame can simply be delayed by that much as well, throwing off the yoke of a fixed frame rate. :7)
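To make the gaze-adaptive ray density idea concrete, here is a minimal Python sketch of splitting a per-frame ray budget by angle from the gaze point. The falloff shape and every constant in it (peak density, peripheral floor, 15° e-folding angle) are made-up illustrative numbers, not measurements, and `allocate_rays` is a hypothetical helper, not any engine's API:

```python
import math

def foveal_density(angle_deg, peak=4.0, floor=0.25, falloff_deg=15.0):
    # Rays per pixel as a function of angular distance from the gaze
    # direction: high at the fovea, decaying exponentially toward a
    # low peripheral floor. All constants are illustrative only.
    return floor + (peak - floor) * math.exp(-angle_deg / falloff_deg)

def allocate_rays(angles_deg, gaze_deg, ray_budget):
    # Split a fixed per-frame ray budget across view directions,
    # weighted by the foveal density at each direction.
    weights = [foveal_density(abs(a - gaze_deg)) for a in angles_deg]
    total = sum(weights)
    return [max(1, round(ray_budget * w / total)) for w in weights]
```

Directions near the gaze end up with several times more rays than the periphery, which is the whole point: the same budget goes much further when it follows the eye.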


If UE5 doesn’t have baked-in support for Parallel Projections Off, epic fail… :rofl:

Yeah, if only they could do that thing Sony is doing, where geometric complexity is reduced for areas outside your FOV.

I would prefer a topic on why “OpenXR” is a game changer. And why Pimax doesn’t support it.


Well, I guess if Nanite and Lumen work like DLSS and just improve things, those improvements get ported to VR anyway. For example, Luke Ross’s mods work fine with DLSS. I understand rendering pipelines matter for VR (deferred vs forward, etc.). Asking what this means for VR is a bit like asking what DLSS means for VR. It’s probably not bad. But not every UE5 game will fully take advantage of all the features. Others will take advantage of the features but be crap games. A 4090 will still be needed before the 12K becomes viable.

If a game is written from the ground up to find out what the eyes need to see first and foremost, VR will make flatscreen look like trash. Dynamic foveated rendering is that first step. Nanite, Lumen, and UE5 in general don’t address the needs of VR properly.
Ray tracing is the first step to simply asking “what does this part of my retina see?” a million times per millisecond. To nobody’s surprise, that’s hard for a 4K screen, but our eyes don’t see that way: the detail (in pixels per degree) is only a little more in the fovea than a 4K screen at a nominal distance, and a lot, lot less everywhere else (plus the added FOV too, of course).
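As a rough sanity check on that pixels-per-degree comparison, here is a small back-of-the-envelope calculation. The panel width, viewing distance, and the commonly cited ~60 ppd figure for 20/20 foveal acuity are all assumptions; the exact comparison depends entirely on how far you sit:

```python
import math

def pixels_per_degree(px_width, screen_width_m, distance_m):
    # Approximate angular pixel density at the centre of a flat screen:
    # horizontal pixel count divided by the horizontal angle subtended.
    fov_deg = 2 * math.degrees(math.atan(screen_width_m / (2 * distance_m)))
    return px_width / fov_deg

# A 4K panel roughly 0.6 m wide, viewed from 0.6 m away:
ppd = pixels_per_degree(3840, 0.6, 0.6)  # ≈ 72 pixels per degree
```

That works out to roughly 72 ppd at screen centre, in the same ballpark as foveal acuity, while peripheral vision needs far less; hence the appeal of spending rendering effort only where the fovea is actually looking.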

In the end, Tobii “Spotlight” may be our quiet friend in VR, at least in theory. Regrettably, Tobii has always been in the background as a cute “3D addon”, as that’s where it got its roots… they may have no idea that they are at a nexus of where VR must go in the future.

UE5 does only a little to directly address the needs of VR.

I agree wholeheartedly. Dynamic frame-time-based rendering has been the “fusion power” of graphics for too long. I simply don’t believe it’s going to be a reality anytime soon. So which is it? Tell the game “time’s up, give me what you got”, or tell the display “hold on, almost done”? Ideally the only slider in the graphics menu would be “permitted frametime”. You tell the software “you’ve got 12 ms to render”. Go. If you’re running a potato, you get choppy frametimes OR bad graphics. I mean, that’s what we do right now, except we have 20 levers to balance (30 in the case of MSFS2020, hahah), but ultimately that’s the balancing act with each game every time.
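The “time’s up, give me what you got” half of that trade-off is easy to sketch: run refinement passes against a fixed deadline and hand off whatever has been accumulated when it expires. This is a toy illustration of the idea, not any engine’s scheduler; `refine_passes` is a hypothetical stand-in for progressively refining render work:

```python
import time

def render_with_budget(refine_passes, budget_s):
    # Run successive refinement passes until the frame-time budget
    # expires, then hand off whatever has been accumulated so far.
    deadline = time.monotonic() + budget_s
    completed = 0
    for render_pass in refine_passes:
        if time.monotonic() >= deadline:
            break  # "time's up, give me what you got"
        render_pass()
        completed += 1
    return completed
```

With a 12 ms budget, a fast machine completes every pass and a potato completes fewer; either way the frame ships on time, which is exactly the one-slider behaviour described above.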


Thanks for the weekly discussions. I think that’s a good idea to engage enthusiasts - as that’s what most Pimax users are!


Haha, I really like the way you talk! That’ll be the topic of next week!!


Awesome! @mbucchia should see this!

My understanding is that Nanite does not work for VR. Instead, the geometry needs to use meshes, just like Unreal 4. They did say that they were investigating the possibility of using Nanite with VR, but that it wasn’t a priority.

Did they say what the technical blocker is? I have a hard time imagining one.

It’s OK, Luke Ross will mod it after the fact and make anything that can be flatscreen 3D, hahah.

They did not say. Pure speculation:

  1. Since Nanite renders polygons using the CPU, it might not run at an acceptable framerate in VR.
  2. It might use the previous frame data to render the next. Keeping both left and right eye frame data might require too much system memory or the renderer isn’t designed to switch between sets of frame data.
  3. There might be artifacts due to lack of synchronization between left and right eye rendering data.

That heavy on the CPU, huh? Well, that’s unfortunate. Do we know whether there is any significant preprocessing and/or data-size overhead when making an asset into a Nanite one? (I’m assuming a large part of it would lie in how the geometry data is structured.) :7
