For practical use of the full claimed FOV with the 12k, there are going to have to be changes engine-, driver-, HAL/gfxAPI-, and runtime-side, and some on the hardware side, too. Unfortunately history, both older and very recent (e.g. new UE5 shinies not yet working in VR, and dragging even screen-based gaming right back to “cinematic” 30fps), is not filling me with confidence that the things needed will come to be.
I put the “practical” qualifier there because, with that claimed FOV, we’re talking 159x135 degrees per eye, which is geometrically possible on a single rectangular viewplane, since both dimensions are under 180° (…or 2 times 90°), as long as you can render canted; it just won’t be remotely efficient, even then (…and 180° or more with PP (parallel projection), on a single flat rectangle per eye, is a plain geometrical impossibility).
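To put a rough number on the inefficiency: holding the centre pixel density constant, a flat viewplane’s half-width grows as tan(FOV/2), so the oversampling toward the edges blows up as you approach 180°. A quick back-of-envelope sketch (my own toy calculation, not anything from a real runtime):

```python
import math

def flat_plane_waste(fov_deg):
    """Per-axis linear oversampling factor of one flat viewplane
    covering fov_deg, versus an ideal uniform pixels-per-degree
    target, with centre pixel density held constant.  The plane's
    half-width grows as tan(fov/2), so this diverges toward 180."""
    if fov_deg >= 180:
        raise ValueError("a single flat plane cannot cover 180 degrees or more")
    half = math.radians(fov_deg) / 2
    return math.tan(half) / half  # 1.0 would mean no waste

for fov in (90, 120, 159):
    print(f"{fov:>3} deg: {flat_plane_waste(fov):.2f}x per-axis oversampling")
```

For 159x135° the factors multiply across the two axes (roughly 3.9x horizontally and 2.0x vertically, so on the order of 8x the pixels of a uniform-density target), which is exactly where “possible” and “efficient” part ways.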
Games really need to start rendering, or be “tricked” (e.g. by the APIs they work through, and/or drivers) into rendering, in ways that optimise workload and memory use for each segment of the FOV, and for lens and foveation properties, instead of the current wasteful practices, which were considered acceptable for <100° FOVs – there are many ways this could be done, including, to a degree, the things NVidia introduced with their 10x0 series (Simultaneous Multi-Projection and Lens Matched Shading, in their typical proprietary manner), but which (effectively) nobody ever adopted, so… (EDIT2: Heck – there was a 360° version of Quake aaaages ago, way before the VR renaissance; just six 90x90° views on a cube, which is pretty much what you have with dynamic envmaps anyway…)
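For that cube trick, the lookup from a view direction back to one of the six 90x90° faces is trivial. A minimal sketch (this ignores the exact per-face orientation conventions real graphics APIs use for cubemaps, which differ between them):

```python
def cube_face_uv(x, y, z):
    """Map a non-zero view direction to one of six 90x90-degree cube
    faces plus an in-face UV in [0, 1] -- the same layout a dynamic
    envmap (or the old 360-degree Quake hack) renders into.
    Per-face axis orientation here is simplified, not API-accurate."""
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:        # dominant axis picks the face
        face, major, u, v = ("+x" if x > 0 else "-x"), ax, z, y
    elif ay >= az:
        face, major, u, v = ("+y" if y > 0 else "-y"), ay, x, z
    else:
        face, major, u, v = ("+z" if z > 0 else "-z"), az, x, y
    # perspective divide onto the unit cube face, remap [-1,1] -> [0,1]
    return face, (u / major + 1) / 2, (v / major + 1) / 2
```

Each face is a plain 90° frustum, so no single view ever gets near the tan() blow-up, and the worst-case oversampling stays bounded no matter how wide the total FOV is.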
I really feel that the sooner game graphics switch over to full raytracing, without any rasterisation vestiges, the readier we become to take the necessary steps forward, given how much better it can potentially lend itself to foveation, if the implementer has any forward thinking; even if there will be a significant lapse in generational performance progress during the switchover to the more work-heavy methodology. It will never happen, of course, but…
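To illustrate why per-ray budgeting suits foveation: with raytracing you can spend rays following (a crude model of) visual acuity falloff, instead of shading every pixel of a full-density raster. A toy 1D comparison – the peak density and falloff constant are made-up illustration values, not measured ones:

```python
def rays_per_degree(ecc_deg, peak=60.0, e2=2.3):
    """Toy foveated ray budget: density follows a simple hyperbolic
    acuity falloff, peak / (1 + eccentricity / e2).  Both peak and
    e2 are illustration values, not measurements."""
    return peak / (1.0 + ecc_deg / e2)

# 1D strip out to ~67 degrees eccentricity (half of a 135-degree field)
uniform = 60.0 * 68
foveated = sum(rays_per_degree(e) for e in range(68))
print(f"foveated budget is ~{uniform / foveated:.0f}x cheaper than uniform")
```

A rasteriser can only approximate this with a handful of fixed-rate zones; a ray budget can follow the falloff curve per sample, which is the “lends itself to foveation” point.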
On the brute-force approach side: Given how much better a potential RT workload lends itself to being distributed, I would hope for multi-GPU to make a proper comeback, but with a main card that is like a regular card today, which can run all those “legacy” rasterised games, plus an arbitrary number of “farm” daughter boards with purely RT- and parallelisation-optimised cores. Another thing that is unlikely to happen… (EDIT: …but if it were to happen, after all, it would need to be through a standard. As long as every solution provider guards their own little kingdom, nothing will ever become properly adopted – maybe not even over in Apple-land, which is its own little galactic empire, apart from all the rest of humanity.)