PiTool feature request: Smart Smoothing with 2-of-3 or 3-of-4 interpolated frames

I do not know if a new topic is the best way to push a request. I already added this request in the “PiTool feature request” topic, but got no feedback.
Flight sims like DCS cannot provide more than 45-50 fps, so it is not possible to use the 120 or 144 Hz modes, as the current Smart Smoothing (SS) needs at least 60 or 72 fps.
If SS could interpolate 2 frames out of 3, or 3 out of 4, we might be able to use those modes with 40 or 36 fps and so have a more fluid experience. However, I do not know whether it is possible…
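To make the numbers concrete, here is a rough sketch of the arithmetic behind the request (a hedged illustration only; the function name is made up):

```python
# How many real frames per second the game must deliver if the runtime
# synthesizes `synthesized` out of every `group` displayed frames.
def required_real_fps(refresh_hz: float, synthesized: int, group: int) -> float:
    real_per_group = group - synthesized
    return refresh_hz * real_per_group / group

# Current Smart Smoothing: 1 of every 2 frames synthesized.
print(required_real_fps(120, 1, 2))  # 60.0 fps
print(required_real_fps(144, 1, 2))  # 72.0 fps

# Requested modes: 2 of 3 and 3 of 4 frames synthesized.
print(required_real_fps(120, 2, 3))  # 40.0 fps
print(required_real_fps(144, 3, 4))  # 36.0 fps
```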

2 Likes

Why would the experience be more fluid? The same number of real frame updates would be pushed to the headset, with smart smoothing artifacts present in all intermediate frames.

If anything, this would simply multiply and soften the smart smoothing artifacts over more pixels. I doubt that would be an improvement.

What we really need is one of three things.

  • ASW-like Smart Smoothing, as with Oculus, but only if the remaining artifacts are not obvious even at Pimax resolution.
  • Multi-threading and SLI support. Double the performance and disable Smart Smoothing entirely.
  • A multi-chip GPU card (or one otherwise 2x as performant).

Seriously. If you are serious enough to care about DCS rendering <50FPS, you should be serious enough to get the extra GPU.

I would get one immediately if it worked.

3 Likes

Oculus has been working on synthesizing 2 of 3 frames, but I’m not sure of their progress. This is really hard to do without bad artifacts.

The problem is that the future frame is unknown, so the algorithm has to predict the future. The more extrapolated (not interpolated, details below) frames which are inserted, the further each frame will drift from the future “real” frame, which leads to weird visual artifacts as the image “snaps” to the next real frame. Personally, I find the current “1 of 2” Smart Smoothing to be disturbing, so I must leave it off.
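A minimal sketch of why predicting further ahead drifts more (assuming a made-up sinusoidal head motion and a naive constant-velocity predictor; no actual runtime works exactly like this):

```python
import math

REFRESH_HZ = 90.0
DT = 1.0 / REFRESH_HZ  # one display period

def true_yaw(t: float) -> float:
    """Made-up head motion: yaw oscillating a few degrees at ~1 Hz."""
    return 5.0 * math.sin(2.0 * math.pi * t)

def extrapolate(t_last: float, ahead: float) -> float:
    """Constant-velocity prediction from the last two real frames (rendered at 45 fps)."""
    v = (true_yaw(t_last) - true_yaw(t_last - 2 * DT)) / (2 * DT)
    return true_yaw(t_last) + v * ahead

t_last = 0.2  # time of the most recent real frame
for steps in (1, 2, 3):  # display periods ahead of the last real frame
    ahead = steps * DT
    err = abs(extrapolate(t_last, ahead) - true_yaw(t_last + ahead))
    print(f"{steps} period(s) ahead: prediction error ~ {err:.3f} degrees")
```

The further a synthetic frame sits from the last real frame, the larger the error, which is exactly the “drift and snap” effect described above.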

One solution would be to wait until both the current and future frames are generated and then interpolate between them. That would solve the artifacts, but would introduce major lag and would probably induce VR sickness.
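As a back-of-the-envelope check on the latency cost of true interpolation (assuming a 90 Hz headset and a game holding 45 FPS; illustrative numbers only):

```python
# True interpolation needs the *next* real frame before it can blend, so the
# whole displayed stream has to be delayed by at least one real-frame interval.
REFRESH_HZ = 90.0
REAL_FPS = 45.0

display_interval_ms = 1000.0 / REFRESH_HZ   # ~11.1 ms between displayed frames
real_interval_ms = 1000.0 / REAL_FPS        # ~22.2 ms between real frames

print(f"Extra motion-to-photon latency: at least {real_interval_ms:.1f} ms")
```

An extra ~22 ms on top of the usual pipeline latency is the kind of delay that is hard to miss in VR, hence the sickness concern.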

Another possibility would require the game itself to provide motion vectors of all objects, so that the prediction algorithm could have more fidelity with the future frame. Unfortunately, that requires the game developer to do extra work, which is unlikely to happen until VR becomes a major market segment. Our best hope for this solution would be for major platforms like Unreal and Unity to integrate this into their game engines, but that wouldn’t help current games or ones which use in-house engines, like DCS, Elite D, Half Life, Quake, Doom, Crysis, Far Cry, Microsoft Flight Sim, etc.
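To illustrate the motion-vector idea only (a toy sketch; the buffer layout and function are hypothetical and not any engine’s real API):

```python
import numpy as np

def extrapolate_with_motion_vectors(color: np.ndarray,
                                    motion: np.ndarray,
                                    periods_ahead: float) -> np.ndarray:
    """Toy per-pixel extrapolation: push each pixel along the motion vector
    the game reported for it, scaled by how far ahead we must predict.
    Real implementations also handle depth, disocclusion and filtering."""
    h, w, _ = color.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # motion[..., 0] and motion[..., 1] are per-pixel (dx, dy) in pixels/frame.
    tx = np.clip((xs + motion[..., 0] * periods_ahead).astype(int), 0, w - 1)
    ty = np.clip((ys + motion[..., 1] * periods_ahead).astype(int), 0, h - 1)
    out = np.zeros_like(color)
    out[ty, tx] = color[ys, xs]  # forward-splat pixels to their predicted spots
    return out
```

With game-supplied motion vectors, the predictor no longer has to guess motion from two past images, which is a big part of why purely image-based extrapolation produces artifacts.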

1 Like

Hello,
Globally, you would still compute real frames at the same timestamps, but display 2 interpolated frames instead of 1 in the other timestamps. Would it be so different to display 1 extrapolated frame every 2/60 of a second versus 2 extrapolated frames (with the same parameters as for one) in the same interval?

It’s not so different, but it yields twice as much error. (You’re taking “2 steps in the dark”, instead of only 1.)

One and a half steps, to be more accurate.
I wonder what the impact would be for games that support SS well, such as DCS…
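For reference, the “one and a half steps” figure is just the average prediction distance (a sketch, assuming evenly spaced display slots):

```python
# 1-of-2 synthesis: the single synthetic frame is 1 display period ahead.
# 2-of-3 synthesis: the two synthetic frames are 1 and 2 periods ahead (avg 1.5).
# 3-of-4 synthesis: 1, 2 and 3 periods ahead (avg 2.0).
def average_periods_ahead(synthesized_per_group: int) -> float:
    steps = range(1, synthesized_per_group + 1)
    return sum(steps) / synthesized_per_group

print(average_periods_ahead(1))  # 1.0  (current Smart Smoothing)
print(average_periods_ahead(2))  # 1.5  (proposed 2-of-3)
print(average_periods_ahead(3))  # 2.0  (proposed 3-of-4)
```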

Interesting idea, but I would like to see another approach, where only 1 of 3 frames (or some other ratio) is extrapolated, instead of every second frame. E.g., at 90 Hz the game would render 60 real frames per second instead of 45.

@VR-TECH
There is a general thread with all feature requests.
This one is about Smart Smoothing.

1 Like

… Fair enough …

I guess the problem is that the extrapolation (or time-warp, or space-warp) is not free. So, following your reasoning, if I can extrapolate one frame at T+t, why should I not extrapolate N frames at T+t/N, T+2t/N, …, T+t? Because usually you do not have the GPU budget to run those extrapolations. The idea is that the engine only extrapolates when it becomes evident that the next frame will not make it on time.

Plus, as others said, the extrapolation is subject to errors which increase rapidly with how far into the future you need to extrapolate.

Basically, the extrapolations (in all their incarnations) are just a band-aid for poor performance and a last resort to at least show something to the user, but they are definitely not the solution to the problem.

If your game (engine) can render a frame every 1/90 of a second, you can have it render at the full frame rate without injecting any artificial frames. If, on the other hand, it can only render a frame every 1/60 of a second, then there is no way it has the resources to squeeze something in between two consecutive frames, regardless of whether that happens only every other (or 3rd, or 4th) period.
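A rough frame-time budget sketch of that argument (assuming a 90 Hz display; the numbers are only illustrative):

```python
def frame_budget_ms(fps: float) -> float:
    """Milliseconds available per frame at a given rate."""
    return 1000.0 / fps

print(f"90 Hz display slot:       {frame_budget_ms(90):.1f} ms")  # ~11.1 ms
print(f"Game rendering at 60 fps: {frame_budget_ms(60):.1f} ms")  # ~16.7 ms
print(f"Game rendering at 45 fps: {frame_budget_ms(45):.1f} ms")  # ~22.2 ms

# If a real frame already costs ~16.7 ms but a display slot is only ~11.1 ms,
# the GPU has no spare time between two real frames to render anything extra,
# no matter which display slots you would like to fill artificially.
```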

3 Likes

Then it shouldn’t work at 1/45 of a second either, but it does…

Nobody is talking about the situation where the game engine can only handle 60 fps or less at 99% GPU usage. But right now, even when a stable 70 fps is possible, Smart Smoothing will drop to 45 real + 45 artificial frames, instead of a possible 60 real + 30 artificial frames.

It works if you have parity between synthesized and rendered frames. So if the headset runs at 90 Hz refresh, and the game cannot make it, it drops to 45 FPS and the VR runtime synthesizes every other frame to fill in the gap to make it back to 90.
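For that parity case, a simple timestamp sketch (assuming a 90 Hz display and 45 FPS rendering; purely illustrative):

```python
REFRESH_HZ = 90.0
slot_ms = 1000.0 / REFRESH_HZ  # ~11.1 ms per displayed frame

# 1-of-2 synthesis: every other display slot shows a frame synthesized
# from the most recent rendered frame.
for slot in range(6):
    kind = "R (rendered)" if slot % 2 == 0 else "S (synthesized)"
    print(f"t = {slot * slot_ms:5.1f} ms  {kind}")
```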

Could you draw a simple schematic of how you imagine it should work with 1 frame of 3 synthesized, with timestamps showing when the frames are rendered and when the synthesized frame is inserted?

1 Like

Nope, I can’t do that. :)
The only thing I can do is give an idea or make a feature request (no matter how stupid it looks in someone else’s eyes), and then someone from the developer team can leave a response like “we won’t implement this - it’s hard/impossible because…” or “yes, we will implement it”.

“One solution would be to wait until both the current and future frames are generated and then interpolate between them. That would solve the artifacts, but would introduce major lag and would probably induce VR sickness.”

I think that with modern multi-threaded CPUs, rendering ahead at a very low resolution (say 540p) and then using a DirectML algorithm to scale the images up could be a path forward for non-synthetic frame interpolation, possibly using the iGPU and a number of dedicated threads to run a second concurrent instance of the game while you are playing.

We know that people can run two Vives or Rifts off one box, so rendering a second instance at low fidelity while in VR should not be that difficult, in theory.

Unfortunately, that won’t help DCS, which was the game mentioned in the original post. DCS is CPU-limited, so a low-res image will likely take nearly as long to generate as a higher-res one.

Also, this would require changes to the game itself, which means it’s unlikely that a game developer would invest the time to attempt your suggested approach.

But not all DCS workloads are CPU bound. And if GPU performance were doubled, at least those workloads could be used for practicing basic instrument flying at higher resolution.

1 Like
