So I think conventional wisdom would state that, of the 9 output pixels, 4 complete pixels come straight from the input and the remaining 5 require complete interpretation by the upscaler. How about if, instead of 4 complete pixels, you spread the RGB subpixel values of the input pixels over the 9 output pixels, so that 6 output pixels each carry 2 known subpixel values (with 1 left to interpret) and the remaining 3 pixels require complete interpretation? (A rough code sketch of this follows the diagrams below.)
I.e. instead of:
000 RGB 000
RGB 000 RGB
000 RGB 000
Do:
000 RG0 R0B
RG0 000 0GB
R0B 0GB 000
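To make the layout concrete, here is a minimal Python/NumPy sketch of the scattering step. The diagram fixes which output positions carry 2 subpixels, but not which input pixel each scattered subpixel comes from, so the particular assignment below (each input pixel keeps 2 subpixels at its own position and donates the third to a corner) is just one assumption that reproduces the pattern.

```python
import numpy as np

R, G, B = 0, 1, 2

def scatter(top, left, right, bottom):
    """Scatter 4 input RGB pixels (named after their positions in the
    first diagram) into a 3x3 output block laid out as in the second
    diagram. Returns a (3, 3, 3) uint8 array; 0 means 'not transmitted'."""
    out = np.zeros((3, 3, 3), dtype=np.uint8)

    # Assumed assignment: each input pixel keeps two subpixels in place
    # and donates its third subpixel to a corner position.
    out[0, 1, R], out[0, 1, G] = top[R], top[G]        # RG0
    out[0, 2, B] = top[B]                              # ..B of R0B
    out[1, 0, R], out[1, 0, G] = left[R], left[G]      # RG0
    out[2, 0, B] = left[B]                             # ..B of R0B
    out[1, 2, G], out[1, 2, B] = right[G], right[B]    # 0GB
    out[0, 2, R] = right[R]                            # R.. of R0B
    out[2, 1, G], out[2, 1, B] = bottom[G], bottom[B]  # 0GB
    out[2, 0, R] = bottom[R]                           # R.. of R0B
    return out

# Four arbitrary input colours, one per input pixel position.
print(scatter(np.array([200, 40, 90]), np.array([10, 250, 120]),
              np.array([60, 60, 60]), np.array([255, 0, 0])))
```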
Then use some ‘slight’ colour shifts in the known subpixels to signal some of the missing information. For instance, at the dark end: if the missing subpixel belongs to a black pixel, the 2 known subpixels stay at 0; if the missing subpixel would have a pure RGB value of ‘significant’ size, the first known subpixel could be nudged up to a minimal value of ‘1’, and the second known subpixel set to ‘1’ for a high missing value or ‘0’ for a ‘mid’ missing value (a low missing value would just default to ‘black’).
At the bright end: if the missing subpixel belongs to a white pixel, the 2 known subpixels stay at max value (255?); if the pixel is not white (the missing subpixel differs significantly), subtract 1 from the first known subpixel, then subtract 1 from the second known subpixel for a ‘low’ missing value or 0 for a ‘mid’ value (a high missing value would just default to ‘white’).
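Here is a rough sketch of that signalling in Python. The cut-offs for ‘low’, ‘mid’, ‘high’ and ‘significant’ are my own guesses (they aren’t pinned down above), so treat the thresholds as placeholders.

```python
# Assumed cut-offs, purely illustrative; 'low', 'mid' and 'high' are not
# defined precisely in the scheme above.
LOW, HIGH = 64, 192

def encode_dark_end(missing):
    """Hint encoding when the two known subpixels would otherwise be 0."""
    if missing < LOW:
        return 0, 0                        # (near) black pixel: no hint
    first = 1                              # nudge first known subpixel up to 1
    second = 1 if missing >= HIGH else 0   # 1 = 'high', 0 = 'mid'
    return first, second

def encode_bright_end(missing):
    """Hint encoding when the two known subpixels would otherwise be 255."""
    if missing >= HIGH:
        return 255, 255                    # (near) white pixel: no hint
    first = 254                            # subtract 1 from first known subpixel
    second = 254 if missing < LOW else 255 # -1 = 'low', -0 = 'mid'
    return first, second
```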
These deviations could then be used to choose whether an ‘average’, ‘min neighbour’ or ‘max neighbour’ rule is used to fill in the missing subpixel.
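On the decoder side, the mapping from hint to reconstruction rule might look something like the sketch below; which hint maps to which of ‘average’, ‘min neighbour’ and ‘max neighbour’ is again my own guess, matched to the hypothetical encoder above.

```python
def reconstruct_missing(first, second, neighbours):
    """Pick the missing subpixel from the deviation pattern in the two
    known subpixels. 'neighbours' holds the known values of the same
    channel in neighbouring output pixels."""
    average = sum(neighbours) // len(neighbours)
    if (first, second) == (0, 0):
        return 0                   # black / low: default to black
    if (first, second) == (255, 255):
        return 255                 # white / high: default to white
    if (first, second) == (1, 1):
        return max(neighbours)     # dark end, 'high': max neighbour
    if (first, second) == (1, 0):
        return average             # dark end, 'mid': average
    if (first, second) == (254, 254):
        return min(neighbours)     # bright end, 'low': min neighbour
    if (first, second) == (254, 255):
        return average             # bright end, 'mid': average
    return average                 # no recognised hint: plain average
```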
The pixels that carry no known information at all would still have at least 2 known values for each subpixel channel among their neighbouring pixels (and 4 known values for one of the channels).
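That claim is easy to check mechanically. Assuming the 3×3 pattern tiles across the image and ‘neighbouring’ means the 8 surrounding pixels, this little script counts the known values per channel around each fully-unknown position:

```python
# The pattern from the second diagram; '000' marks a fully-unknown pixel.
PATTERN = [["000", "RG0", "R0B"],
           ["RG0", "000", "0GB"],
           ["R0B", "0GB", "000"]]

def known_neighbour_counts(y, x):
    """Count known subpixel values per channel in the 8-neighbourhood,
    with the pattern assumed to repeat (tile) in both directions."""
    counts = {"R": 0, "G": 0, "B": 0}
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            cell = PATTERN[(y + dy) % 3][(x + dx) % 3]
            for channel in "RGB":
                counts[channel] += channel in cell
    return counts

for y in range(3):
    for x in range(3):
        if PATTERN[y][x] == "000":
            print((y, x), known_neighbour_counts(y, x))
```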
Would this be a conceivable method of transmission, and would the small colour sacrifice be worth it for the extra ‘neighbouring information’?