Next-gen DisplayPort 1.5 will allow the 8KX to use one cable, but still no GPU will run it (it would take a card twice as powerful as a 2080 Ti, or 2080 Ti SLI, just for 80 Hz), so an 8K+ should be offered as well.
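
For a rough sense of why one cable is the sticking point today, here’s some back-of-the-envelope bandwidth arithmetic (a sketch only, assuming 24 bpp RGB and roughly 20% blanking/encoding overhead, both illustrative figures):

```python
# Back-of-the-envelope link bandwidth for dual 3840x2160 panels.
# Assumptions (illustrative only): 24 bits per pixel, ~20% overhead
# for blanking and link encoding.
def required_gbps(width, height, refresh_hz, panels=2, bpp=24, overhead=1.2):
    return width * height * refresh_hz * panels * bpp * overhead / 1e9

for hz in (80, 90):
    print(f"Dual 3840x2160 at {hz} Hz: ~{required_gbps(3840, 2160, hz):.0f} Gbit/s")
# DisplayPort 1.4 carries roughly 25.9 Gbit/s of video data (32.4 Gbit/s raw),
# so without compression (DSC) or a faster next-gen link, one cable can't do it.
```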

I never said it did; I support a 90 Hz 8KX.

I never meant this to be so offensive; if you weren’t so defensive it wouldn’t be. You’re so sensitive.

1 Like

Copper (electrical) cables able to carry such a wide bandwidth are going to require pure-copper conductors of very thick AWG, and considering that the average prices of DP 1.4 cables (real ones…) are already high, these new ones are going to cost 60-80 USD. How stupid is that??

There are already Thunderbolt 3 specs for optical, and an improved one would be easy to do; with fiber-optic prices going down year after year, it would surely be the cheaper move to finally go optical for the video signal.

Sometimes I really don’t understand the marketing and tech choices the big giants and consortiums make -_-

1 Like

Heliosurge said GPU power was a good reason to reduce the headset to 80 Hz; I said it wasn’t, as it makes no real difference. You then started replying to me with loads of calculations without stating an opinion on 80 vs 90 Hz, so I could only assume you were supporting Heliosurge’s assertion that 80 Hz was for our GPUs’ benefit, when it makes almost no difference to the GPU power required.

I’d rather have an 8K-X that CAN use the panels’ full resolution than have an 8K and be permanently stuck undersampling at 2560x1440.

man are you fried?

You asked me what my opinion was, so I gave it to you. You don’t even remember what you asked; maybe phrase it better next time. If you don’t want facts, then don’t ask my opinion, cause I base it off the facts.
Or maybe you just can’t handle the truth.

“so if you are saying we need 2.5 times a 2080ti to run the 8K-X at 90hz, do you honestly think 80hz makes any difference at all in this equation, like 2.2 times required, so why even bother reducing it.”
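
To put numbers on that quote: dropping from 90 Hz to 80 Hz only shaves about 11% off the pixel rate, so whatever multiple of a 2080 Ti the 8K-X needs at 90 Hz, it needs nearly the same at 80 Hz. A quick sanity check on raw pixel throughput alone (nothing GPU- or title-specific):

```python
# Raw pixel throughput for the 8K-X (3840x2160 per eye, two eyes) at 90 vs 80 Hz.
per_eye = 3840 * 2160
rate_90 = per_eye * 2 * 90
rate_80 = per_eye * 2 * 80
print(f"90 Hz: {rate_90 / 1e9:.2f} Gpixels/s")
print(f"80 Hz: {rate_80 / 1e9:.2f} Gpixels/s")
print(f"Saving from dropping to 80 Hz: {(1 - rate_80 / rate_90) * 100:.0f}%")  # ~11%
```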

1 Like

That’s very logical reasoning; I said the same when some people started wanting to switch their 8K pledge to the 8KX.
And AMD is not gonna change anything: their next best GPU will be on par with the 2080 Ti AT BEST, @Heliosurge, and it’s the same story on the CPU front.
We have seen this so many times; there won’t be any magical leaps in performance. Nvidia makes, AMD follows, and rarely makes a slightly faster GPU.
Maybe a bit faster in some “AMD optimized” games, but overall the same, or a bit slower than Nvidia.

1 Like

Yes, exactly: I asked your opinion on 80 Hz and you didn’t give one at all.

1 Like

“to answer your question, it’s x2 the performance of the 2080ti required to run the 8KX at 80hz in current rasterised titles with the same performance as the 5k+ at 90hz on the 2080ti.”

You didn’t ask me my opinion on 80 Hz vs 90 Hz; you asked me whether or not I thought 80 Hz would make a difference. lmao

1 Like

and you started going on about the 5K+, which is not what I asked about

Read my whole post; you can’t cherry-pick the middle bit and ignore the rest. My whole post is about 90 vs 80 on the 8K-X exclusively.

No I didn’t, and I only ever mentioned the 5K+ to use it as a baseline to compare the pixels required to push the 5K+ vs the 8KX. You have no idea what you’re even talking about.

1 Like

Cherry picking, huh? That’s you, because you just said I started going on about the 5K+ when I didn’t at all. And regardless, this is a thread I started, so if you want to talk about something else you’re just derailing the subject of this thread.

Dude, in your own quote of yourself you are talking about the 5K+.

Yeah, because you asked me if 80 Hz would make a difference compared to 90 Hz. In my original post I gave performance figures for the 5K+ as a benchmark, to show the pixels required to push a 5K+ at 90 Hz, the fact that a 2080 Ti can only barely manage it on the 5K+ at 90 Hz in most titles, and the pixel count required for the 8KX at 90 Hz. You are on a totally different tangent to the topic of this thread.
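
For reference, that baseline comparison works out roughly like this in raw native pixel rates (actual GPU cost per pixel of course varies by title, so this is only the starting point of the argument):

```python
# Native pixels per second: 5K+ (2560x1440 per eye) at 90 Hz as the baseline,
# vs the 8K-X (3840x2160 per eye) at 80 and 90 Hz.
fivek_plus = 2560 * 1440 * 2
eight_kx = 3840 * 2160 * 2
baseline = fivek_plus * 90

for name, pixels, hz in [("5K+  @ 90 Hz", fivek_plus, 90),
                         ("8K-X @ 80 Hz", eight_kx, 80),
                         ("8K-X @ 90 Hz", eight_kx, 90)]:
    rate = pixels * hz
    print(f"{name}: {rate / 1e9:.2f} Gpixels/s ({rate / baseline:.2f}x the 5K+ baseline)")
```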

This topic was just intended to shed light on the truth of things; if you can’t handle the truth, I can’t help you.

For the record, I initially brought up the 80 Hz vs 90 Hz discussion in relation to the screens used in the 8K, because the supposed refresh rate of the 8KX is 80 Hz, which is an indication it could be using the same PenTile screen as the 8K.

I brought this up because ultimately I think everyone who wants the 8KX should be pushing Pimax to find a 4K full-RGB-stripe screen to use for the 8K (a “plus” version) and the 8KX, which apparently isn’t available right now.
Whilst 90 Hz is important and I agree with why you raised it as an issue, I feel the screen used for the 8KX is far more important. A three-subpixel-per-pixel, non-diamond display should be used over a two-subpixel display; otherwise the 8KX will probably have a blurry image like the 8K.

2 Likes

Yeah, and as I said earlier in this thread in response to Heliosurge:

2 Likes

2x brute-force GPU increments are not going to happen within 2 years. GPU performance scaling is not linear, probably because thermal and power limits are already being hit, which is why the 20 series is a sidestep onto a new architecture with a poor performance boost over last gen.

IMO the answers are elsewhere.

Foveated rendering is one.

Multi-GPU (not SLI), which is already used in real-time ray tracing and scales linearly as you add more cards, is another.

Then we have AMD, who are interesting to watch as they have worked out how to bond chiplets together using I.F. (Infinity Fabric), so we might even see a single GPU card that has twice or even 3x the performance of last gen.

Per-eye GPU rendering, maybe. I assume it has sync issues like SLI though, so I’m not 100% on that one.

In fact, resolution was mostly important for SDE reduction, but with new tech, as demonstrated by Samsung, that is not so much of a necessity any more; it shows that you can improve the image without changing resolution.

3 Likes

Infinity Fabric AMD chiplet multi-GPUs would be interesting over xGMI, if xGMI scales linearly too.

Foveated rendering needs to be implemented at the hardware level, not as a software implementation, in order for mass adoption to occur, because otherwise every game will require its own software coding to get it to work.

VR is going to need software-independent, hardware-integrated foveated rendering.
Foveated rendering integrated into GPU firmware, driven by an eye-tracking module sending a signal to the GPU so that the GPU knows what to render at what quality before the frame leaves the GPU (a rough sketch of the idea is below). This should be offered as a mode you can put your display driver into when connected to an HMD, similar to other technologies seen in the Turing whitepaper, rather than relying on individual software integration per program and game.

Or a VRLink/NVLink that gives real, cumulative gains (not minimal ones) when stacking multiple GPUs, to achieve the performance required for higher-grade resolutions or ray-traced titles.

@Sean.Huang @Matthew.Xu @deletedpimaxrep1 @anon74848233 @Pimax-Support @PimaxVR Seeing as you’re having discussions and collaborating with AMD and Nvidia, you guys should definitely put this suggestion through to them as additional software integration and functionality in their next-generation, post-Turing RTX graphics cards and/or the Vulkan API.
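
To make the idea concrete, here is a minimal sketch (plain Python with made-up tile sizes, thresholds and function names, not any vendor’s actual API) of how a gaze point from an eye tracker could be turned into a per-tile shading-rate map before a frame is rendered:

```python
import math

def foveated_shading_rates(gaze_x, gaze_y, width=3840, height=2160, tile=256):
    """Map a gaze point (in pixels) to a grid of shading rates per screen tile.
    Rate 1 = full-resolution shading, 2 = half rate, 4 = quarter rate."""
    cols = (width + tile - 1) // tile
    rows = (height + tile - 1) // tile
    grid = []
    for r in range(rows):
        row = []
        for c in range(cols):
            # Distance from this tile's centre to the gaze point,
            # normalised by the larger screen dimension.
            cx, cy = (c + 0.5) * tile, (r + 0.5) * tile
            d = math.hypot(cx - gaze_x, cy - gaze_y) / max(width, height)
            if d < 0.10:
                row.append(1)   # fovea: shade every pixel
            elif d < 0.25:
                row.append(2)   # near periphery: half rate
            else:
                row.append(4)   # far periphery: quarter rate
        grid.append(row)
    return grid

# Example: eye looking at the centre of one 4K panel.
rates = foveated_shading_rates(1920, 1080)
full = sum(row.count(1) for row in rates)
total = sum(len(row) for row in rates)
print(f"{full} of {total} tiles shaded at full rate")
```

In a driver/firmware-level version, a map like this would be applied by the GPU itself (e.g. via variable-rate shading) without the game knowing anything about it, which is the whole point of the suggestion.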

2 Likes

Also, I think most manufacturers are planning to jump straight to PCIe 5.0, which reportedly comes only a few months after PCIe 4.0.

I also asked why foveated rendering could not be done at the driver level, and the answers seemed to suggest that the engine has to come first due to the bidirectional aspect of VR: you move your head, that gets sent back, the frame updates with the new camera coords, etc. Game engines like Unity and Unreal seem to be dragging their feet on coming up with a standard for eye-tracked foveated rendering, but I guess VR is still a tiny portion of their user base, so it’s not in demand yet. I thought the GPU manufacturers would have offered to do this too, but again, demand is not helping a standard come together fast enough. Just my opinion, of course.

1 Like

I’m still of the view that AMD chiplet multi-GPUs, and later RTX-based GPU architectures, will be inferior in rasterisation performance, and inferior in older DX9-DX11 games, which rely solely on single-core performance and clock speed and only utilise 1-2 cores.

If that’s the case then the 1080ti and 2080ti may always be the best GPUs for playing a Pimax 5k+ on titles from 2016 and before.

Probably because there is no easy way forward without having every developer write code for its integration into their software on an individual basis, for every game; however, I recall reading one VR developer saying the code would be incredibly easy, only 1-2 lines.

My suggestion would require one or a few of the eye-tracking module makers (Adhark/aGlass/Tobii) to partner with AMD/Nvidia engineers and see whether eye-tracking signals can be sent to the GPU and that data used to reduce the rendering and signal quality the further you get from the point the eye is looking at, as inferred from the eye-tracking data, increasing performance and removing any coding requirement in the program’s software.

This should be offered as a mode you can put your display driver into when connected to an HMD, similar to other VR software technologies implemented in Turing’s API.
@Sean.Huang @Matthew.Xu @deletedpimaxrep1 @anon74848233 @Pimax-Support @PimaxVR

1 Like

Based on what? History? I think that is finally changing, especially given the console world of multi-core CPUs/GPUs, which has a huge influence on developers; strangely enough, AMD happens to be one of the major driving forces of gaming technology. Multi-core is AMD’s ace against Intel and Nvidia, and Microsoft and Sony are committed to AMD and their multi-core approach. Six, maybe eight cores will be the norm in the next couple of years IMO, as higher clocks have hit their limit unless you want to use liquid nitrogen or a generator :smiley:

PS: I am not trying to sound like an AMD fanboi here, but I see the scales tipping in their favor ahead.

2 Likes

I’m not saying multi-core won’t have its benefits for next-gen games in the future, not at all.

What I mean is that AMD’s multi-core processors have never been as good in single-core performance (SPC) as Intel’s offerings. Clock speed may be as high or higher in Zen 2 than in Intel’s parts, but if SPC is lower, the performance won’t be as good in games made to take advantage of only 1-2 cores.

There is a chance the Zen 2 3700X could hit a 5 GHz base clock, though; seeing as the engineering sample runs at 4.5 GHz, it’s not unreasonable to assume. In the event that’s the case, the 3700X may be marginally better than the 9900K/9700K overall, even with slightly lower SPC.

Most older games only use 2 cores, and only a few use 4. For those older games, multi-core CPUs will not improve performance if the SPC is lower. Battlefield or Battlefront will benefit for sure; Skyrim won’t.
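
The underlying arithmetic is just single-core performance being roughly IPC times clock; with purely made-up numbers to illustrate (these are not benchmarks of any real chip):

```python
# Single-core performance is roughly IPC x clock. The IPC values below are
# placeholders for illustration only, not measurements of any real CPU.
def single_core_score(ipc, clock_ghz):
    return ipc * clock_ghz

hypothetical = {
    "CPU A (higher IPC, lower clock)": single_core_score(1.10, 4.7),
    "CPU B (lower IPC, higher clock)": single_core_score(1.00, 5.0),
}
for name, score in hypothetical.items():
    print(f"{name}: {score:.2f}")
# A game pinned to 1-2 cores only ever sees this per-core number; extra cores don't help it.
```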

In terms of multi-GPU chiplet designs, most older games have been designed for monolithic-die GPUs.

Even though AMD’s chiplet approach isn’t exactly the multi-GPU we’ve seen before in previous Nvidia Titan cards, and software may be optimised to take better advantage of it, AMD has never been able to match Nvidia’s single-core performance or clock speed with their GPUs.

It’s not impossible that AMD may win out in new games, and even in old games, on the strength of a better architecture with their chiplet designs, but it won’t be because of sheer per-core horsepower; rather, it will be because they have a lot of less efficient cores making up the difference in multi-GPU loads.

Seeing as Ryzen chips don’t perform better in older games than Intel chips, I’d be surprised if the new GPU architecture could be taken advantage of in games and software that are optimised for single, monolithic GPUs.

If that’s the case, Navi 20 is going to be a lot more powerful than people expect, even if it’s at the supposed performance of the 1080 Ti (Navi 12 supposedly being the 1080-performance equivalent for $250).

1 Like