Agreed. PiTool .132 may not allow us to trigger the performance boost, but that doesn’t mean PiTool causes the low GPU utilization issue.
Yesterday I stopped all “Pi” processes and services and tested rFactor 2 on a 1080p monitor. I obtained the exact same GPU utilization as with the Pimax: between 51% and 71% while racing. I tried with low details and got an average of 380 fps. With all details and eye-candy features set to max, I got an average of 150 fps. The detail level did not affect GPU utilization; it always averaged around 60%.
The more I use the “Tweak” the more I learn how to better use it (in rFactor 2):
I’m never able to trigger it when the CPU frame time is high, e.g. spikes over 10ms. When that happens, removing a few AI cars usually does it.
I can adjust the tweaked GPU utilization by changing the super sampling (SS) level, e.g. if the game is stuttering because the GPU hits 95%+, I reduce the SS by 10% for my next race. This usually brings it down and keeps it in the 75% to 85% range (roughly the heuristic sketched below).
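For what it’s worth, here is a minimal C++ sketch of that adjustment rule; the function name and thresholds are mine, nothing here comes from PiTool or rFactor 2:

```cpp
// Minimal sketch of the supersampling adjustment heuristic described above.
// All names and thresholds are illustrative, not from any real PiTool API.
#include <iostream>

double adjustSupersampling(double ss, double gpuUtilization) {
    // If the GPU is pegged (95%+) the game stutters, so back SS off by 10%
    // for the next race; the goal is to settle in the 75-85% band.
    if (gpuUtilization >= 0.95) {
        ss *= 0.90;
    }
    return ss;
}

int main() {
    double ss = 1.5;  // current SS level
    ss = adjustSupersampling(ss, 0.97);
    std::cout << "Next-race SS: " << ss << '\n';  // prints 1.35
    return 0;
}
```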
Nvidia ticket status: They are trying to reproduce the issue. I usually receive a set of questions per day.
Have you tested this in the end? As a returning user who is a bit careful about changing settings, is your trick still valid?
I’m a 5K+ user
Headset firmware: 2.1.255.183
PiTool 1.0.1.91
^ As you can see, I haven’t updated since receiving the Pimax…
I’m seeing this with a VEGA64 as well. Frame rate not hitting 90 and yet GPU usage in the 70-85% range. This is in SkyrimVR.
Latest beta PiTool, latest AMD drivers; 1600X CPU nowhere near max on any core and plenty of RAM left.
Haven’t found a way of doing a “reset” (mainly as I use the GPU fan header for its water pump control, so the fan profile needs to stay custom). The GPU is kept below 50C.
Will try loading profiles and see if I can get it to hit 100% utilisation.
I think it’s possible that the CPU cannot feed commands to the GPU any faster. It’s actually possible for a game to be bottlenecked by both the CPU and the GPU, at different parts of the game frame update cycle. (Sometimes the CPU is waiting on the GPU and at other times the GPU is waiting on the CPU.) Improved APIs, like Vulkan, help to reduce this interdependency.
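To illustrate the point, here is a self-contained C++ sketch of a serial frame loop; all the functions are stubs I made up to stand in for real engine and driver work, not any specific graphics API:

```cpp
// Simplified frame loop illustrating how the CPU and the GPU can each
// stall the other at different points in the frame cycle. The sleeps
// stand in for real work; nothing here is a real driver call.
#include <chrono>
#include <thread>

using namespace std::chrono_literals;

void simulateGameState()   { std::this_thread::sleep_for(4ms); } // CPU work
void buildCommandBuffers() { std::this_thread::sleep_for(3ms); } // CPU work
void submitToGpu()         { /* queue the frame for the GPU */ }
void waitForGpuFence()     { std::this_thread::sleep_for(5ms); } // GPU busy
void present()             { /* swap buffers */ }

int main() {
    for (int frame = 0; frame < 3; ++frame) {
        simulateGameState();    // GPU may sit idle here -> CPU-bound phase
        buildCommandBuffers();  // still CPU-bound
        submitToGpu();
        waitForGpuFence();      // CPU now waits on the GPU -> GPU-bound phase
        present();
        // Neither processor reaches 100% utilization, because each spends
        // part of every frame waiting on the other. Explicit APIs like
        // Vulkan let engines overlap these phases across frames.
    }
    return 0;
}
```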
Which is fine, unless you can trick the GPU into giving you 90%+ like we can. Clearly there is an issue somewhere, but a CPU bottleneck in these cases, which many users reproduced with .129, is not it.
I have gotten in touch with Nvidia about this as well. They FINALLY contacted me regarding this issue, so I will update you as soon as I get more information about the progress!
I’m convinced the GPU utilization is being controlled somewhere to avoid stuttering. Maybe the problem is just that it’s not perfectly calibrated and is too conservative.
The reason I say that is that when I unlock the performance using the “tweak” with graphical settings too demanding for the GPU to deliver enough fps to stay locked at the set 90Hz, 72Hz or 64Hz, I get major stutter as soon as utilization goes above 95%.
So it seems the artificial fps limit is necessary, but it would be nice if it activated only when GPU utilization reaches a certain threshold, to avoid spikes above 9x% (something like the sketch below).
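A rough C++ sketch of that idea, with hysteresis so the cap doesn’t flap on and off; the thresholds and the utilization readings are made up, and I’m not claiming this is how PiTool actually works:

```cpp
// Hedged sketch of a frame cap that engages only once GPU utilization
// crosses a threshold, instead of limiting all the time. All names,
// thresholds and readings are illustrative; no PiTool or driver API
// is implied.
#include <iostream>
#include <string>

struct LimiterState {
    bool   capActive = false;
    double targetFps = 90.0;   // headset refresh: 90, 72 or 64 Hz
};

// Returns 0.0 for "uncapped", otherwise the fps cap to apply this frame.
double chooseFrameCap(LimiterState& s, double gpuUtilization) {
    const double engageAt  = 0.90;  // start limiting near saturation
    const double releaseAt = 0.80;  // release lower to avoid oscillation

    if (!s.capActive && gpuUtilization >= engageAt)  s.capActive = true;
    if ( s.capActive && gpuUtilization <= releaseAt) s.capActive = false;

    return s.capActive ? s.targetFps : 0.0;
}

int main() {
    LimiterState state;
    // Simulated per-frame utilization readings climbing past the threshold.
    for (double util : {0.75, 0.88, 0.93, 0.91, 0.79, 0.76}) {
        double cap = chooseFrameCap(state, util);
        std::cout << "util " << util << " -> cap "
                  << (cap > 0.0 ? std::to_string(cap) : std::string("none"))
                  << '\n';
    }
    return 0;
}
```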
Hey @SweViver, got any news on this?
In my case I am sure I am not CPU bound - it always outruns my GPU in latency and such.
Might be multithreading-related issues and the whole syncing stuff - not trivial. Would be nice to know more, or even to get the GPU to go past 90% utilization.
Kind regards!
No news at all from Nvidia. Last thing I heard from the Nvidia guy Manuel (through private message on Twitter) was: “Sounds like Windows DWM is acting wonky. I’ll let you know when I have further info”
Thanks @SweViver for the update!
If it is PiTool-related, I wonder if they could, instead of inhibiting it, also trigger it…
Would be nice to know what NVIDIA and, in that case, MS know… but it was also the same case for AMD, if I read that correctly, so it might indeed be on the MS side.
Anybody know if it works with Windows Mixed Reality Headsets?
Interesting. I have a highly OCed custom-board 2080 Ti from EVGA, and I can’t get the boost. If the voltage is related, that could be interesting - then again, is it also there for the AMD cards?
I guess syncing headroom seems plausible; it’s just a good question why it leaves so much unused, if something like 6-10% seems to be enough.
As for you not getting the boost, note that the boost trick has been reported as universally not working since PiTool .132 (i.e. still not working in .144). We know for a fact that the PiTool team is working on resolving the issue once and for all, or at the very least on making the boost trick work again, as per SweViver.
Anyone reading who wants the boost: it works on PiTool .129.