When compressing in RDO-UASTC mode, some selector data (which seems to correspond to the texel weight data in an ASTC block?) in UASTC blocks gets replaced with selector data from previous blocks. When saving to a .basis file, those UASTC blocks are written directly into the file's payload, so the GPU can consume the texture through its built-in ASTC decoder. However, for a .ktx2 file, zstd is used to further compress the blocks losslessly, and the zstd-compressed texture data is what gets written into the .ktx2 file, which the GPU can't decode directly. Is my understanding correct?
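If I understand the RDO pass correctly, the point of replacing selectors with previously-seen ones is to create repeated byte runs that the later LZ stage (zstd/deflate) can exploit. Here is a toy sketch of that idea, assuming a simple squared-error threshold and a fixed search window; this is NOT the actual basisu algorithm (the real encoder uses a proper rate-distortion tradeoff), just an illustration:

```python
import random
import zlib


def rdo_match_selectors(blocks, max_err):
    """Toy RDO pass: replace a block's selector bytes with an earlier
    block's selectors when the substitution error is small enough.
    The repeated byte patterns then compress better under LZ coding."""
    out = []
    for sel in blocks:
        best = sel
        for prev in out[-32:]:  # search a small window of previous blocks
            err = sum((a - b) ** 2 for a, b in zip(sel, prev))
            if err <= max_err:
                best = prev  # reuse earlier selectors -> cheaper to encode
                break
        out.append(best)
    return out


random.seed(0)
# 256 fake 8-byte "selector" payloads drawn from a few base patterns,
# each with one low bit flipped to simulate per-block noise
base = [bytes(random.randrange(256) for _ in range(8)) for _ in range(4)]
blocks = []
for _ in range(256):
    b = bytearray(random.choice(base))
    b[random.randrange(8)] ^= 1
    blocks.append(bytes(b))

plain = b"".join(blocks)
rdo = b"".join(rdo_match_selectors(blocks, max_err=16))
print(len(zlib.compress(plain)), len(zlib.compress(rdo)))
```

Running this, the RDO-matched stream compresses noticeably smaller than the original, which (if my reading is right) is exactly why RDO-UASTC pairs with a lossless supercompressor like zstd.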
How is the ETC1S stream supercompressed? I saw that basisu_frontend::compress() turns the source into ETC1S blocks, but I don't understand why they can be supercompressed after init_etc1_images(). Is there any doc explaining how the ETC1S endpoints & selectors are quantized?
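From what I can tell from the .basis format, ETC1S gets its compression by quantizing per-block endpoints and selectors into shared codebooks (vector quantization), then entropy-coding the small codebook indices instead of raw block data. Below is a toy k-means-style sketch of quantizing selector blocks into a codebook; the actual encoder's clustering is far more elaborate, and the names here are made up:

```python
import random

random.seed(1)


def quantize(vectors, k, iters=10):
    """Naive k-means vector quantization: cluster the per-block vectors
    into k codebook entries, then encode each block as a codebook index."""
    codebook = [list(v) for v in random.sample(vectors, k)]
    for _ in range(iters):
        # assign each vector to its nearest codebook entry
        groups = [[] for _ in range(k)]
        for v in vectors:
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(v, codebook[j])))
            groups[i].append(v)
        # move each entry to the centroid of its assigned vectors
        for i, g in enumerate(groups):
            if g:
                codebook[i] = [sum(c) / len(g) for c in zip(*g)]
    indices = [min(range(k),
                   key=lambda j: sum((a - b) ** 2
                                     for a, b in zip(v, codebook[j])))
               for v in vectors]
    return codebook, indices


# fake 4x4 selector blocks (16 weights in 0..3) drawn from a few patterns
pats = [tuple(random.randrange(4) for _ in range(16)) for _ in range(8)]
vectors = [random.choice(pats) for _ in range(200)]
codebook, indices = quantize(vectors, k=8)
print(len(codebook), len(indices))
```

With a codebook of 8 entries, each 16-texel block is reduced to a 3-bit index plus a shared table, which is presumably what makes the stream so compressible afterward. If someone could confirm whether the frontend's endpoint/selector clustering works along these lines, that would help a lot.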
This repo is amazing, but I'm too naive to understand all the amazing things inside. Even the ASTC bit format alone can beat me easily.