Getting the most out of a G/RB half-4K panel

Pimax started the 8K campaign on the basis of new panels and a decision to go for a wide field of view. There was notable difficulty with the new panels, leading to the 80Hz and now the subpixel pattern debacles. Let’s see what can be done with what we do have.

First, a description of what can be physically displayed: There exists a 3840x2160 grid. Within this grid, no pixels have full colour. From what I’ve been able to make out, it appears to have checkerboard patterns for all colours; effectively, every second pixel is absent. The lines of same-coloured pixels are diagonal, so the actual display pixel is a “diamond” shape: a normal square rotated 45 degrees and scaled up by √2. It is probable that the subpixels are displaced in relation to each other, but the chromatic aberration of the lens makes that shift vary all over the screen; it averages out.
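
A quick sketch (Python) of the pattern I’m describing, with green on one checkerboard and red/blue sharing the other; this is my reading of the shots, not vendor data:

def subpixels_at(x, y):
    # Which subpixel(s) the panel can physically light at grid position (x, y).
    return ("G",) if (x + y) % 2 == 0 else ("B", "R")

# Print a small patch; same-coloured positions line up diagonally.
for y in range(4):
    print("  ".join("".join(subpixels_at(x, y)) for x in range(6)))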

The first problem is that we’re feeding the panel 4K RGB when it can only display half of that. This is currently done by a separate upscaler chip, converting 2560x1440 into 3840x2160, and somewhere along the way hitting a rate limit such that it can only operate at 80Hz. So we want to find a way to have the panel show every byte of input data.

If the panel is something like the AU Optronics H546UAN01.0, there is a bit of built-in processing to support 1080 input. What we’d be looking for is a way to enable pixel doubling in one axis. So look for pixel doubling, nearest neighbour, 3840x1080 or 1920x2160. If we find such a mode, it cuts the bitrate from the scaler to the panel in half and gets rid of the blurring the panel adds when it blends in the pixels it is unable to display.

At that point, we have full RGB, but every other scanline is offset and our pixels are double width. The combination is not something the scaler can handle. However, we’re dealing with 4.1Mpixel instead of 8.3Mpixel (the rounding obscures that it’s exactly half). Our source image is currently 3.7Mpixel. This is the level at which we really want to play porch tricks; our scaler chip should be capable of placing a smaller image within the display region, and we don’t use the full extent of the panel. Let’s optimistically assume this is possible, and our image region actually shrinks, perhaps to 3456x1080 or 1728x2160. This puts us at 3.7Mpixel, and suddenly the scaler is working for us, not against us!
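
To make the pixel budgets concrete, a quick sanity check of the arithmetic above:

full_grid   = 3840 * 2160     # ~8.3Mpixel addressed, only half physically present
displayable = full_grid // 2  # ~4.1Mpixel of actual subpixel positions
source      = 2560 * 1440     # ~3.7Mpixel, the current scaler input
shrunk      = 3456 * 1080     # ~3.7Mpixel, same count as 1728x2160
for name, n in [("full", full_grid), ("displayable", displayable),
                ("source", source), ("shrunk", shrunk)]:
    print(f"{name}: {n / 1e6:.2f} Mpixel")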

That our red, green and blue channels are differently placed is not a big issue. This can be compensated for by simply shifting the lens center in the lens warp, because it already processes the channels individually to compensate for chromatic aberration. That the pixels are a diamond shape, and offset every other line, is a little trickier.

One way to see our subpixel layout is as overlapping ordinary grids; this is the approach taken in e.g. Bayer decoding as G1-R-B-G2. Overlapping images are expensive to render. Another view is that it is really a regular grid that’s rotated 45 degrees, with 45-degree slanted pixel sides. GPUs are great at rotating things.

So, we rotate our render buffer. Our formerly largest dimension, the horizontal, is now diagonal. So we’ve in general enlarged our render buffer by something like √2. But now that we’re properly aligned, what looked like full-pixel gaps has become just √2-larger pixels. We’re at a more native size. Also, supersampling has started hitting the correct region. We have much larger unused regions in our render buffer, but we cover those up with the occlusion mesh, which was rotated along with the rendering perspective. Our lens warp will reverse the rotation and apply the per-scanline offset for the readout buffer format.

So we’ve redefined our render buffer from something approximating 2560x1440 (with some factor to compensate for the lens) to something related to 1920x2160 rotated 45 degrees, fitting in a buffer about 2885 pixels across but not using the corners. We’re likely paying a little overhead from cut-off tiles at the edges, but that already happens with the occlusion mesh.
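
The 2885 is just the bounding square of the rotated rectangle; a one-liner to check the arithmetic:

import math

# A w x h rectangle rotated 45 degrees fits in a square of side (w + h) / sqrt(2).
w, h = 1920, 2160
print(round((w + h) / math.sqrt(2)))   # 2885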

This is all very hacky, but the end result is a clean rendering with no extra blur, if it can be deployed at all; that depends on properties of the scaler, the panel, and their setup sequences.

22 Likes

Nicely formulated and expressed. You give up the additional diagonal resolving power of rendering the 2160 half pixels as full ones, but avoid the resample blur that was introduced to prevent information getting lost in the missing subpixels.

As I suspect the latter is more an attempt to claim a marketing spec, I believe your solution would work better, if it can be implemented.

I do however doubt the panel controller will let you do that. It’s likely supplied as a module that takes a native 4k input and performs the reinterpretation itself. Worth a look though.

3 Likes

Unless there is an uneven distribution of subpixels per colour, I don’t give up any resolving power. The render buffer rotation expands the rendered pixels so that supersampling does the right thing. Before that, those extra invisible pixels would be spread out in all directions, instead of going to the subpixel closest to the particular sample.

The main downside I see is that the desktop mirror will suddenly be tilted. This will cause more consternation than even the ordinary perspective rendering of wide fields of view for angled panels did.

3 Likes

Hmm, tricky to mentally map once you bring the partial subpixels into it, but I suspect it depends on how the pixel resampling works presently. If it spreads the missing data to all adjacent pixels, it will reduce resolving power below your solution. If it has a baked-in preferred direction, then it would maintain higher resolving power at the cost of losing position accuracy by half a pixel. Given how apparent the blur effect is, it’s probably the former, so I believe you are completely correct.

2 Likes

There’s a third reasonably likely option: all the blur is from the first scaler, and the panel outright discards half the data it receives. If so, the scaler is really tuned to blur a lot, but on the plus side the panel is already in nearest neighbour (just subsampling instead of pixel doubling).

1 Like

Also plausible. Well, I suppose we will find out when the 8KX comes along. I’ll be interested to see whether we get a somewhat less blurry 8K, or native clarity with lower vertical but higher horizontal resolution than the 5K+. I’ll hold out hope for the latter I suppose.

Or they could knock it out of the park and find a new panel for us, but I must admit I’ll not hold my breath.

1 Like

The same-coloured pixels make diagonal lines, because they are shifted by one “slot” each line. But the “full RGB pixel” can be formed only in the horizontal or vertical direction if you pair the closest G and R/B pairs, not diagonally. At least in @SweViver’s recent shots, G and R/B form a pretty clear horizontal/vertical matrix.

G     B/R   G
B/R   G     B/R
G     B/R   G

Just to clarify my understanding, you suggest using the scaler chip not for upscaling, but for the particular placement of a 1:1 mapped source?

Why is it important to reach the same bandwidth as the original solution? Is it because the scaler is bandwidth limited? (I have no idea what the “typical” features of a scaler are.)

1 Like

I agree about the subpixel pattern you describe. Now look at what the closest same-colour subpixel is for any subpixel. That forms the 45-degree rotated grid. Thus the borders between the closest subpixels capable of taking over form rhombuses.

G  BR  G
  /  \
BR G  BR
  \  /
G  BR  G

The rotation trick is to make those borders line up with the GPU’s calculations, no matter how much we super- or subsample. (Admittedly, we’d need to supersample 2x2 to get perfect rendering where the pixels are lined up, but that is a corner case given how huge the panels are and the lens CA.)
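
To make the rotated grid concrete, here is a sketch of the change of basis (the u, v names are mine): subpixels on the even checkerboard map to integer points of an ordinary square grid, which is what the GPU rasterises against.

def to_rotated(x, y):
    # Only checkerboard positions (x + y even) land on integer grid points.
    assert (x + y) % 2 == 0
    return (x + y) // 2, (y - x) // 2

def from_rotated(u, v):
    # Inverse mapping back to panel coordinates.
    return u - v, u + v

# The diagonal neighbours of (2, 2) become unit steps on the (u, v) grid:
print(to_rotated(2, 2), to_rotated(3, 3), to_rotated(1, 3))   # (2, 0) (3, 0) (2, 1)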

Yes, I am suggesting using the scaler for placement. This is merely a trick to avoid the pixels we don’t use.

And yes, we do have limited bandwidth. The DP receiver is capable of a 720MHz pixel clock, typically with some overheads for margins, metadata, sound etc., so two screens at 3.7Mpixel and 90Hz end up at 666Mpixel/s, or a 714.2MHz pixel clock with CVT 1.2 reduced blanking standard timings (5120x1440 at 90Hz).

With standard timings (which aren’t strictly necessary), 3456x2160 is a 733.5MHz pixel clock, but 3456x2120 slips just under the 720MHz limit. This is a concept draft, not final specs for anything.
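
For anyone who wants to check those numbers, a rough sketch of the CVT 1.2 reduced blanking arithmetic (assuming the RBv1 constants of a 160-pixel horizontal blank and a 460 µs minimum vertical blank; a sanity check, not a full implementation of the VESA standard):

def cvt_rb_pixel_clock(h_active, v_active, refresh_hz, h_blank=160):
    # Estimate the line period from the frame period minus the vertical blank.
    frame_us = 1_000_000 / refresh_hz
    line_us = (frame_us - 460) / v_active
    v_blank_lines = int(460 // line_us) + 1   # lines needed to cover 460 us
    return (h_active + h_blank) * (v_active + v_blank_lines) * refresh_hz

for mode in ((5120, 1440, 90), (3456, 2160, 90), (3456, 2120, 90)):
    print(mode, round(cvt_rb_pixel_clock(*mode) / 1e6, 1), "MHz")
# -> 714.2, 733.5 and 719.9 MHz

(Passing h_blank=80 gives the RBv2 variant of the same arithmetic.)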

2 Likes

That’s a clever idea, but unless Pimax open-sources the drivers and the firmware it will be difficult to prove/prototype. Open-sourcing the drivers is possible, but the firmware is unlikely; I tried to contact Analogix to get some info about the supposed 8K scaler and they didn’t want to release anything without phone calls and an NDA.

2 Likes

Or Pimax could evaluate and/or develop the concept in house or using a consultant. That’s a decision for @Sean.Huang.

4 Likes

Minor thought process update: I’m not sure the rhombus pattern is actually ever the most correct. It’s the one the GPU will approach with higher sampling, though. For a regular delta pattern, hexagons should be the right shape. There’s a correspondence with sphere packing here; the patterns depend on what angle you cut a plane through a dense sphere packing. I may try to illustrate it later this evening.

3 Likes

I am thinking about a different approach, as the rotation seems quite convoluted to me.
Let’s imagine the matrix partitioned in this way:

G   | B/R | G
B/R | G   | B/R
---------------
G   | B/R | G

This will give “logical” pixels made of 1x2 physical pixels. The subpixel colors in those logical pixels flip around, but the overall pixel color is fully defined (RGB). The partitioning above suggests creating a virtual display of 3840x1080 which is fully RGB. It can be done in the horizontal direction as well, in which case it would define 1920x2160; either way this provides a full RGB matrix with pixels of size 1x2 or 2x1.
Those pixels will be easy to compute, simply by setting the normal rendering resolution (i.e. 3840x2160) and then squishing the picture vertically (to 3840x1080) or horizontally (to 1920x2160). This can be done after or before the pre-lens warp, as long as the pre-lens warp operates on the same display geometry (i.e. 2x1 or 1x2 pixel ratio), and no info will be lost.
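
A minimal sketch of that squish, assuming the panel really accepts a full RGB 3840x1080 mode (numpy stands in for the real pipeline):

import numpy as np

# Pretend render target: 3840x2160 RGB, as the engine would normally produce.
frame = np.zeros((2160, 3840, 3), dtype=np.float32)

# Collapse each vertical pair of rows into one 1x2 logical pixel by averaging.
squished = frame.reshape(1080, 2, 3840, 3).mean(axis=1)
print(squished.shape)   # (1080, 3840, 3), ready for a 3840x1080 mode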

The only requirement is that the panel itself supports this particular mode, i.e. partitioning the display into a half-res panel either vertically or horizontally, with full RGB control over the individual subpixels.

This would not directly fit into the DP bridge clock limit (7680x1080 at 90Hz gives a 787.1 MHz pixel clock), but cutting down the unused parts of the display should reduce it to a manageable level: a 10% reduction of the horizontal res (6912x1080 at 90Hz is a 709.2 MHz pixel clock).

EDIT: I forgot that the 8K runs at 80 Hz, so it will fit directly into the bridge limit, as 7680x1080 at 80Hz gives a 696.54 MHz pixel clock.

2 Likes

Well, if you can make the panel avoid all its pixel resampling jazz and take a half-res image, using two lines as one line at full RGB, then you can just directly render at 3840x1080@80 Hz and be done with it. If you are losing half your vertical resolution anyway, you might as well get double the performance and native-resolution clarity while you are at it.

3 Likes

Not that simple; you still need the full resolution to drive the panel.
It’s subpixels that are missing, not pixels.
The pixels are still there, just mixed up.

1 Like

Marketing spin aside, pixel-structure-wise the panel is more like a 3840x1080 panel with adjacent pixels in the same row alternating and spreading out their subpixel structure to achieve 4K-panel-like SDE, then addressing the subpixels as separate ones and applying a blur effect to prevent loss of information from the compromise. If we CAN drive it like the 3840x1080 panel it really is, rather than persist with treating it like a 3840x2160 one, we will get better results, because we get rid of the need for the deliberate blur process. That it allows native resolution on the single DisplayPort cable is a nice bonus; they wouldn’t even need the upscale chip anymore. It just depends on the flexibility of the interface chip built into the LCD module.

If it can’t be done in software, then I imagine the panel manufacturer would be able to make the alteration on their end and supply them as a different SKU easily enough, which could then be used for the 8KX. It still wouldn’t be a true 4K panel, but it would at least be a real native-resolution panel run that way. You’d then have a true 3840x1080 panel with a subpixel structure optimised for reducing SDE. Much better marketing spin than a 4K panel with half the correct subpixels! You’d lose nothing over the current approach and gain in clarity. Vertical resolution will still be lower than the 5K+’s, but the much higher horizontal resolution will probably result in a slightly higher level of clarity, and you’ll keep all the SDE benefits this panel has now.

I’d take it. Design for what you have, not what you wish you had.

Edit: If I were Pimax, I’d have a go at pushing the LCD manufacturer to take back their current panel stock and change them to the above, as it’s their word games over the spec of the panel that caused this problem in the first place.

5 Likes

Note that it’s the predictable mapping of subpixels that helps us render. We can feed the flipping pattern into the sampler during the lens warp stage, so this 2D subpixel pattern does not mean we need to lose the vertical resolution.

I mulled it over and sketched the relation in FreeCAD, and the rectangle pattern is indeed a corner case where one of the hexagon sides is reduced to 0. It happens only when there’s a 90 degree corner in the triangle formed by three neighbouring points on the grid (the diamond/square case is when the two catheti are also equal length). The other interesting case is when all corners are 60 degrees, in which case the hexagon is regular. Equilateral triangles also lead to hexagons symmetric in the row direction.

So only the 90-degree grid case has this opportunity to optimize, but it gets warped with the CA anyway, so in the end what truly matters is having predictable subpixel positions. Whatever the pattern, a pixel shader can map it, in the worst case by using a displacement texture matching the subpixel tiling.
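
As an illustration of that worst case, the displacement texture only needs to be as big as the repeating tile. A sketch with placeholder offsets (the real values would come from measured subpixel positions):

import numpy as np

# One repeating 2x2 tile of per-channel sample offsets (dx, dy), in pixels.
TILE = {
    "G": np.array([[[0.0, 0.0], [0.5, 0.5]],
                   [[0.5, 0.5], [0.0, 0.0]]]),   # G on the even checkerboard
    "R": np.array([[[0.5, 0.5], [0.0, 0.0]],
                   [[0.0, 0.0], [0.5, 0.5]]]),   # R/B on the opposite one
}
TILE["B"] = TILE["R"]

def sample_pos(x, y, channel):
    # Where the lens warp pass should sample for this output subpixel.
    dx, dy = TILE[channel][y % 2, x % 2]
    return float(x + dx), float(y + dy)

print(sample_pos(0, 0, "G"), sample_pos(1, 0, "G"))   # (0.0, 0.0) (1.5, 0.5)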

4 Likes

Right :slight_smile: . I was just too deep in square pixels and did not realize that you probably can instruct the graphics card to render directly into 3840x1080 with a 1:2 pixel ratio and save half of the perf.

4 Likes

Took me a while to realise it as well. Simplest possible solution, and it’s as good as any of the other compromises. Lonetech’s solution is a very clever way to get correct rendering through the scaler process, but given the panel’s limitations the upscaler is completely redundant. Just respin the board with no upscale chip, direct connect, drive it as 3840x1080 at 80Hz and you are done; the screen is looking as good as it possibly can and you’ll get better performance by a large margin.

The current solution is a result of a set of compromises that are no longer valid so we don’t need to make them anymore.

Also, the 8K will now be 8 thousand properly rendered horizontal pixels, so the name will finally make sense lol

4 Likes

Just reread everything, you covered my question already…

I very much think pimax should look at this thread…

1 Like

DisplayPort 1.4 can do 4K 120Hz, so yes, it can do up to 120Hz at this resolution. The bridge chip can do 4K@90Hz, as that is the plan for the 8KX, so that’s the same bandwidth, and as far as I can tell it should work?

Basically this all hinges on the LCD module having a mode where you drive it as a full RGB panel of half the vertical resolution, or on the manufacturer being able to add that. If it can, then all this is good to go.

1 Like