Will the 12K be wireless? What does physics say?

Why don't you link it, then?

Thanks for the link.

Thanks. That clears the split rendering issue up for me.

No, I mentioned the chip and specifically called it out in my post, leaving that open.

Thanks, that's helpful. I'm a pharmacist by trade, so all I can understand is the numbers, never having had to make such things work. I thought data was data and throughput was throughput, but I certainly had a sense that 7 Gbps was not truly attainable; I first wanted to expose the theoretical gap before even treading into reality.

I think it means "Candy-Games Only" on wireless - just like with Oculus. Anyway, I have a 200 Mbit connection to my router and gigabit internet, so clearly I'm not using my knowledge well in practice, hahah. Have fun.


Yes, you mentioned the chip, just not in the right context.

So you are aware of it, but not of its capabilities.
Just as I wrote.

What are the capabilities then?

Yep, no idea how the chip works, but from what you said it seems a Pimax will be able to play "candy games" natively on the headset or wirelessly, as with the Meta, and that's a far cry from what people may expect, I think… fair enough.

It's an SoC, so a CPU and a GPU on one chip.

It provides enough power to run less demanding games and absolutely sufficient power to run any and all entertainment applications. This is entirely beside the point when it comes to wireless streaming.

What happens in that case is that your PC-side CPU/GPU renders the entire application; the audio-visual data is then heavily compressed by your PC and transmitted as a super small package to your HMD. On the HMD, the computational power of the SoC is then used to very quickly decompress the audio-visual stream and display it.
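
To make the flow concrete, here's a minimal sketch of that pipeline in Python. The zlib compressor, the synthetic frame, and the transmit stub are all stand-ins of my own: real solutions use hardware video codecs (H.264/HEVC/AV1) on the PC and the decoder block of the HMD's SoC over a Wi-Fi link, but the shape of the pipeline is the same.

```python
# Minimal sketch of the PC -> HMD streaming pipeline described above.
# zlib and the synthetic frame are stand-ins for hardware video codecs
# and real rendered content; only the pipeline shape is the point here.
import zlib

WIDTH, HEIGHT, BYTES_PER_PIXEL = 3840, 2160, 3        # one 4K RGB frame

def render_frame() -> bytes:
    """Stand-in for the PC GPU rendering one frame of the application."""
    return bytes(WIDTH * HEIGHT * BYTES_PER_PIXEL)     # flat synthetic image

def transmit(payload: bytes) -> bytes:
    """Stand-in for the wireless link carrying the compressed package."""
    return payload

raw = render_frame()                     # PC: render
packed = zlib.compress(raw, level=1)     # PC: heavily compress
received = transmit(packed)              # link: send the small package
unpacked = zlib.decompress(received)     # HMD SoC: decompress
assert unpacked == raw                   # HMD: display (here: just verify)

print(f"raw frame: {len(raw) / 1e6:.1f} MB, transmitted: {len(packed) / 1e3:.1f} kB")
```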

You can think of it a bit like using TeamViewer or Citrix (or GeforceNow or any other remote computing solution): of course the raw data stream of e.g. a 4K 60 Hz desktop is way, way bigger than the bandwidth of your internet connection. But the goal isn't to brute-force replace your HDMI/DP cable with an internet connection; the goal is to compress all the data as quickly as possible, transmit it via your internet connection, and then use a client-side application to decompress and display it.
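
To put rough numbers on that 4K 60 Hz example: the raw-stream figure below follows directly from resolution, bit depth and refresh rate, while the 100 Mbit/s encoded bitrate is just an assumed, typical figure for remote streaming, not a spec for any particular product.

```python
# Back-of-the-envelope bandwidth numbers for the 4K 60 Hz desktop example.
# The compressed bitrate is an illustrative assumption, not a measured value.
width, height = 3840, 2160       # 4K desktop
bits_per_pixel = 24              # 8-bit RGB
refresh_hz = 60

raw_bps = width * height * bits_per_pixel * refresh_hz
compressed_bps = 100e6           # assumed ~100 Mbit/s encoded stream

print(f"raw stream:        {raw_bps / 1e9:.1f} Gbit/s")      # ~11.9 Gbit/s
print(f"compressed stream: {compressed_bps / 1e6:.0f} Mbit/s")
print(f"required compression ratio: ~{raw_bps / compressed_bps:.0f}:1")
```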

The biggest hurdle in this sequence isn't bandwidth once a sufficient threshold is reached; it's latency within the entire pipeline. It still holds true that there is no such thing as a free lunch: if you don't have the raw stream bandwidth, you need to compensate with compute power to compress and decompress, and you need a perfectly tuned software stack to achieve acceptable results.

I hope that made it clearer.


You shouldn't feel dumb; you came into the discussion with good information and an open mind. Absolutely nothing dumb about that. The only time to feel dumb is when you learn new information but deny it because you've already decided you want to hold a certain opinion. You've been anything but dumb in this discussion!


It did. TeamViewer is a good example. Latency, though… ugh, in VR. A game would have to be designed for that, I would think, or at least to be processed on the SoC. Guess that's for meta designers (small-m) to figure out.


You are right, latency is a very big deal in VR because of two major factors beyond the wireless link itself:

  1. The high refresh rate required to avoid motion sickness: at 90 Hz, that already limits the entire envelope to 11.1 ms per frame.
  2. There is also latency in the display's switching time and in all the sensors involved, depending on their polling rates.

So if you add all of those up, you get an incredibly small timeframe in which a motion has to be turned into photons and perceived by your eyes.
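
Roughly in numbers, following the two points above (the per-stage costs below are illustrative assumptions of mine, not measurements of any particular headset or streaming stack, and real pipelines overlap several of these stages and use tricks like reprojection):

```python
# Rough motion-to-photon budget at 90 Hz. All per-stage numbers are
# illustrative assumptions; the point is only how tight the window is.
refresh_hz = 90
frame_budget_ms = 1000 / refresh_hz          # ~11.1 ms per frame

stages_ms = {                                # hypothetical per-stage costs
    "sensor poll / pose update": 1.0,
    "encode on the PC":          3.0,
    "Wi-Fi transmission":        2.0,
    "decode on the HMD SoC":     3.0,
    "display switching":         1.5,
}

used_ms = sum(stages_ms.values())
print(f"frame budget at {refresh_hz} Hz: {frame_budget_ms:.1f} ms")
print(f"streaming pipeline alone:  {used_ms:.1f} ms")
print(f"margin left over:          {frame_budget_ms - used_ms:.1f} ms")
```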

But this sort of super-fast stream encoding is well established and performs very well; even a small developer such as Guy Godin has implemented it very well in Virtual Desktop as a streaming solution.

That's why I consider client-side compute to be the most intelligent solution to wireless VR.


Why link something that jumps in your face with a huge banner on its start page?

Yeah, Oculus have John Carmack, the king of code efficiency, directing all that. Plus Facebook's billions.
What do Pimax have? Let's just say not that.

Yet you have one single person, Guy Godin, making Virtual Desktop, which does the same as Airlink but arguably better (and he doesn't even have the level of access to the hardware and system that Oculus has).

Your point is valid but isn't the be-all and end-all. Small teams can still do amazing things, and we actually don't know what talent Pimax have working on these issues. They could be better or worse than you think; let's just wait and see. I understand your reservation, and I share it, but I don't think it's a useful mindset, except perhaps for curbing expectations.


Yeah, no doubt small teams can do amazing stuff.

I'm just going on Pimax's track record. I've been here a while. They push boundaries but rarely back that up with polish or consistency. What you get are glitchy cutting-edge features. I've been egging them on to step up to the next level for years.

How’s the original eye tracking working out for everyone?

Art Armin and Sweviver achieved a lot with Pimax Experience, mind.

I would love for them to have hired some more talented staff, or to work with the talented open-source community. Too often they cut corners, though. It's budgetary and cultural, I think.

More than happy to be proved wrong; I'll be first in line when/if I am.


Oh, my ad blocker must be killing it. Or did I unpin it?

Partnerships with Qualcomm and Tobii, at a minimum… so I think this is very doable.


The Tobii partnership: after all this, I wonder, are they using the Tobii transport codec just for the wireless throughput? Or are they using it for the rendering side as well (or whatever they would call it, as I'm sure it's not "transport" or "codec")? I mean, the SIGGRAPH video is damn impressive to me, and the new 12K promises to come with eye tracking. I'm shocked that the Varjo Aero has eye tracking but fails to connect it to foveated rendering, from what I've seen/read/heard. I hope it's not another feature that is woefully unsupported.

I feel like too much innovation goes unused by the software, but then, so were TAA and DLSS just a few years ago.

With all due respect, I think you need to do a little more research into how the conversation has been going and some of the claims you're making. I get having questions, and nobody knows everything, but Varjo does support eye tracking and DFR.

Secondly, in a way it's already supported. Anything which supports Variable Rate Shading can support DFR. It's not the most optimized use of it, and there'll be room to refine and improve in the future, but we already have applications with "Fixed" Foveated Rendering, including Pimax's very own PiTool. The only difference between that and Dynamic Foveated Rendering is, well… the dynamic part: fixed is static, while dynamic follows your gaze. Hell, you can already use DFR with the Pimax eye tracker, though it was awful before Guppy's fix for it.

It isn't universal, but a decent chunk of games and applications already support VRS. On top of that, things like UE5 include an eye-tracking plugin, while Sony is also making it a cornerstone of their headset, so anything which ships on PSVR2 will probably end up supporting it, unless it's a port of a simpler, older game.

The gains from FFR/DFR on even the 8K X are pretty impressive in GPU-bound scenarios; I've seen around 30%+ in some of my own testing.
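
For what it's worth, here's a toy estimate of why foveated rendering pays off in GPU-bound scenarios. The zone sizes and shading-rate reductions are my own illustrative assumptions, not measured values for the 8K X or any specific VRS implementation, and since only part of a frame is pixel-shading work, real-world gains come out lower than the raw shading saving.

```python
# Toy estimate of the shading-work saving from foveated rendering.
# Zone sizes and coarse-shading rates are illustrative assumptions only.
zones = [
    # (fraction of screen area, relative shading cost per pixel)
    (0.25, 1.00),   # foveal region: full shading rate
    (0.35, 0.50),   # mid periphery: e.g. 2x1 coarse shading
    (0.40, 0.25),   # far periphery: e.g. 2x2 coarse shading
]

relative_cost = sum(area * cost for area, cost in zones)
print(f"shading cost vs. full rate: {relative_cost:.2f} "
      f"(~{(1 - relative_cost) * 100:.0f}% of shading work saved)")
```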