I spent a few hours hacking on the original QOI image format, adding combined index+diff and index+luma opcodes.
From what I've seen (on my images), the run-length encoding very rarely exceeds 10.
From my understanding of the format, the diff or luma opcodes can only apply to the previously seen pixel, which I believe is good for gradients. If these two opcodes cannot be applied, the only solution is to push a new pixel color with the RGB or RGBA opcode.
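For reference, the three opcodes in question decode roughly as follows. This sketch follows the published QOI specification (2-bit diffs biased by 2, a 6-bit green delta biased by 32 with 4-bit red/blue deltas biased by 8, and the `(r*3+g*5+b*7+a*11) % 64` index hash); `decode_chunk` is an illustrative helper, not code from the reference implementation:

```c
#include <stdint.h>

typedef struct { uint8_t r, g, b, a; } qoi_px;

#define QOI_COLOR_HASH(p) (((p).r * 3 + (p).g * 5 + (p).b * 7 + (p).a * 11) % 64)

/* Decode one INDEX, DIFF or LUMA chunk relative to the previous pixel. */
static qoi_px decode_chunk(const uint8_t *bytes, qoi_px prev, qoi_px index[64])
{
    uint8_t b1 = bytes[0];
    qoi_px px = prev;

    if ((b1 & 0xc0) == 0x00) {            /* QOI_OP_INDEX: recall table slot */
        px = index[b1 & 0x3f];
    } else if ((b1 & 0xc0) == 0x40) {     /* QOI_OP_DIFF: 2-bit deltas, bias 2 */
        px.r += ((b1 >> 4) & 0x03) - 2;
        px.g += ((b1 >> 2) & 0x03) - 2;
        px.b += ( b1       & 0x03) - 2;
    } else if ((b1 & 0xc0) == 0x80) {     /* QOI_OP_LUMA: dg bias 32, dr-dg/db-dg bias 8 */
        int vg = (b1 & 0x3f) - 32;
        px.r += vg - 8 + ((bytes[1] >> 4) & 0x0f);
        px.g += vg;
        px.b += vg - 8 + ( bytes[1]       & 0x0f);
    }

    index[QOI_COLOR_HASH(px)] = px;       /* every decoded pixel refreshes the table */
    return px;
}
```

Note that DIFF and LUMA start from `prev`, while INDEX replaces the pixel wholesale; the combined opcodes proposed here would instead start the delta from a table entry.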
I had the feeling (admittedly biased) that combining index+diff or index+luma (applying the diff/luma opcode to a value from the indexed pixel) could result in better compression.
That's what I've done, and I've confirmed that it indeed "works" (to be taken with a grain of salt).
It also doesn't increase decoding complexity much.
My current implementation is naive: it goes through all 64 pixels in the "hash" table and tries to find a pixel that's close enough.
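A minimal sketch of what that brute-force lookup could look like. `find_close_index` is a hypothetical helper (the linked branch isn't quoted here), and the acceptance window reuses QOI_OP_DIFF's 2-bit signed range of -2..1 per channel, which may not match the actual encoder:

```c
#include <stdint.h>

typedef struct { uint8_t r, g, b, a; } qoi_px;

/* Hypothetical: scan the whole 64-entry index for a pixel whose wrapped
 * channel deltas to `px` all fit QOI_OP_DIFF's 2-bit signed range (-2..1)
 * and whose alpha matches. Returns the slot index, or -1 if none is close
 * enough. O(64) per pixel, hence "stupid" but simple. */
static int find_close_index(const qoi_px index[64], qoi_px px)
{
    for (int i = 0; i < 64; i++) {
        int dr = (int8_t)(px.r - index[i].r);  /* wrap-around like QOI */
        int dg = (int8_t)(px.g - index[i].g);
        int db = (int8_t)(px.b - index[i].b);
        if (px.a == index[i].a &&
            dr >= -2 && dr <= 1 &&
            dg >= -2 && dg <= 1 &&
            db >= -2 && db <= 1)
            return i;
    }
    return -1;
}
```

An index+luma variant would only need a wider window here (the 6-bit/4-bit luma ranges instead of -2..1).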
Another possibility would be to use a second table of pixels with their values truncated/aligned (not sure how), with the lower bits encoded in the "diff" opcode.
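One speculative way to read that idea (the original post leaves the details open): key the second table on each channel's high 6 bits, so a combined opcode only needs to carry the low 2 bits per channel as its "diff". Both helpers below are hypothetical:

```c
#include <stdint.h>

typedef struct { uint8_t r, g, b, a; } qoi_px;

/* Speculative: truncate each color channel to its high 6 bits before
 * hashing/storing, so the low 2 bits can travel in the diff part of a
 * combined index+diff opcode instead of needing an exact table hit. */
static qoi_px truncate_px(qoi_px p)
{
    p.r &= 0xfc;  /* drop the low 2 bits of each channel */
    p.g &= 0xfc;
    p.b &= 0xfc;
    return p;
}

/* Slot in the hypothetical truncated table, reusing QOI's hash weights. */
static int truncated_slot(qoi_px p)
{
    qoi_px t = truncate_px(p);
    return (t.r * 3 + t.g * 5 + t.b * 7 + t.a * 11) % 64;
}
```

The upside would be an O(1) lookup: colors that differ only in their low bits land in the same slot by construction, so no linear scan over the table is needed.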
edit: the branch can be found here: https://github.com/jmaselbas/qoi/tree/qoi_idiff_iluma