Iray Renderer Memory Use vs Texture Size
lilweep
Posts: 2,487
There was a thread on here where someone posted the following.
When the renderer reads the image into memory, it is decompressed and stored at raw resolution and bit depth. So it will always use the same amount of memory for a given pixel resolution and bit depth, regardless of the size of the input file. E.g., an 8-bit RGB 4096x4096 texture will always use 49152MB of memory.
Surely they mean 49152KB of memory... Anyway, if true, is there a graph showing the increase in memory usage per texture size and bit depth? I couldn't find one.
Is there a hard limit on the texture size we can use?
Comments
Their math was way off to begin with.
To get the MB usage of a file: (((X*Y*B)/8)/1024)/1024
X and Y are the horizontal and vertical sizes in pixels, e.g. 4096x4096, 2048x2048, etc.
B = bit depth, e.g. 8-bit, 16-bit, 32-bit, etc.
8 is the number of bits in a byte
1024 = the conversion factor for each step up the scale: bytes to KB, KB to MB, MB to GB, etc.
In the case of the example the equation looks like this: (((4096*4096*8)/8)/1024)/1024
((134217728/8)/1024)/1024
(16777216/1024)/1024
16384/1024
16
So it should be 16MB, not 49152MB.
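For anyone who wants to plug in other sizes, here's that formula as a quick Python sketch (the helper name is just mine for illustration). Note that, as the replies further down point out, it leaves out the number of color channels, so it really gives the size of a single channel:

def mb_per_channel(x, y, bit_depth):
    # memory in MB for one channel of an x-by-y image at the given bit depth
    return x * y * bit_depth / 8 / 1024 / 1024

print(mb_per_channel(4096, 4096, 8))  # -> 16.0, matching the working above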
I've attached an example to show what IrfanView reports vs. what I got in the calculator using the information from the 'Properties' dialogue in Windows Explorer.
The right-hand number in the red box in IrfanView shows the estimated RAM usage of that particular texture.
The image is marked up so as to render it useless and, hopefully, avoid a copyright claim.
As far as a graph goes, I couldn't find anything useful with a quick Google search, but it wouldn't be particularly difficult to do in Excel, or in a few lines of Python, as sketched below.
If nobody else picks up the gauntlet, I'll try to get around to it this week.
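In case Excel doesn't appeal, here's a minimal Python/matplotlib sketch of the graph idea. It assumes uncompressed storage at 3 channels per pixel (the channel count gets debated further down the thread):

import matplotlib.pyplot as plt

sizes = [512, 1024, 2048, 4096, 8192]  # pixels per side
channels = 3                           # plain RGB; 3 vs 4 is argued below

for bits in (8, 16, 32):
    mb = [s * s * channels * bits / 8 / 1024 / 1024 for s in sizes]
    plt.plot(sizes, mb, marker="o", label=f"{bits}-bit per channel")

plt.xscale("log", base=2)
plt.yscale("log", base=2)
plt.xlabel("texture size (pixels per side)")
plt.ylabel("uncompressed memory (MB)")
plt.legend()
plt.show()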
In regard to a 'hard' image size limit, I haven't found anything on this either, in documentation or in practical work, although I will caveat that the largest images I've worked with are only in the 16K range, of the HDRI type.
If one of the devs, or somebody with more intricate knowledge of Iray and DS, is lurking in here, they might be able to answer.
Huh?
NO. NO. NO.
Unless you're talking about ancient software, 8-bit color depth means 3 bytes of color per pixel: 8 bits for red, 8 bits for green, and 8 bits for blue.
So a standard 8-bit 4K image takes up 50,331,648 bytes, or 49,152KB, or 48MB. That means a 4K texture uses 48MB for every map it uses, and a material could use several maps per surface (diffuse, displacement, etc.), so it is not as simple as saying each texture uses X amount of memory; you'd also need to figure out how many maps are used.
For instance, base Victoria 8 uses, on the surfaces I checked, 4 maps per surface: base color, translucency color, dual lobe specular reflectivity, and base bump. So a figure that uses 4K maps for all of those would use 192MB per surface, and that applies to the arms, legs, torso, face, eyes, and teeth surfaces in most (all?) G8 figures.
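Put as a quick sketch (the same per-pixel math, with the channel count included; the helper name is just for illustration):

def texture_mb(side, bit_depth=8, channels=3):
    # bytes per pixel = channels * bit_depth / 8
    return side * side * channels * bit_depth / 8 / 1024 / 1024

per_map = texture_mb(4096)   # 48.0 MB for one 8-bit RGB 4K map
per_surface = 4 * per_map    # 192.0 MB for the 4 maps per surface above
print(per_map, per_surface)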
So the original quote is correct except for the units slip: it should be 49,152KB (48MB), not 49,152MB.
As to limits, most of us want to render on the GPU so a scene doesn't take days to complete. Beyond 4K you're getting into map sizes even an RTX 8000 would struggle with, so just don't. 4K is overkill in almost all uses as well.
One issue with the original quote, though: maps are not always loaded into VRAM uncompressed. There is a setting that controls which maps are compressed and by how much.
Your calculation is good, but you just forgot to multiply by the number of color channels, which is 4, not 3. Even if the image is just RGB and not RGBA, the driver lays the image out in physical memory like this because digital computers (at least AMD64s) can multiply integers by 4 twice as fast as they can multiply by 3, which they have to do in order to calculate the offset of the relevant pixel in the image.
The memory used by the example is not 16MB, not 48MB, but 64MB, and that's for just a single diffuse map.
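In numbers, that's the same per-pixel math as before but with 4 channels instead of 3:

# 4 channels per pixel (the padded RGBA layout described above)
print(4096 * 4096 * 4 * 8 / 8 / 1024 / 1024)  # -> 64.0 MB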
But it gets worse. It is not uncommon for the driver to impose all sorts of restrictions in the name of speed. This is why map dimensions have to be a power of two, even if most of the image will be empty: it allows for a powerful optimization when allocating and deallocating memory for textures, and makes it easier to prevent memory fragmentation.
Any strategy for making scenes smaller simply has to use UDIM for this reason.
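To illustrate the rounding being described (a hypothetical sketch of the idea, not anything pulled from an actual driver):

def next_pow2(n):
    # round a texture dimension up to the next power of two
    return 1 << (n - 1).bit_length()

print(next_pow2(3000))  # -> 4096: a 3000px-wide map would be padded out to 4096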
Again huh?
RGB is always 3 bytes, not 4. RGBA is used in PNG and not many other places. Shader maps make no allowance for an alpha channel at all; what would an alpha channel even do in a map? Also, if every color channel had to occupy a double word, then an 8-bit color pixel would occupy 12 bytes, not 4, because each color channel is handled separately and pretty much never as a single unit.
BTW, multiplying by a power of two is just a bit shift: to multiply a number by 2, shift it one place to the left. A multiply by 3 compiles down to a shift plus an add, which is nearly as cheap, and on a modern CPU an integer multiply takes the same few cycles for any pair of operands that fit in registers, so multiplying by 3 is no slower in practice than multiplying by 4. 64-bit CPUs have 64-bit registers (that's what makes them 64-bit).
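A quick demonstration of the shift-and-add point:

n = 12345
assert n << 1 == n * 2        # multiply by 2 is a shift left by 1
assert (n << 1) + n == n * 3  # multiply by 3 is a shift plus an add
assert n << 2 == n * 4        # multiply by 4 is a shift left by 2
print("all three equivalences hold")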
To bring this back to address the OP's question, this is from a post on the Foundry Forum.
4096 * 4096 = 16,777,216 pixels
8 bits per channel * 4 channels = 32 bits = 4 bytes
4 * 16,777,216 = 67,108,864 bytes = 64 megabytes
And it needs to exist both on the GPU and in main memory.
And also this from Stack Exchange. Note my added emphasis below on 1, 2, or 4 components; nowhere does anyone discuss 3, for the reasons I have tried to explain. The correct memory usage for a 4K map is 64 megabytes, not 48.
Let's keep the discussion civil and on the substance of the topic, please.
This was one of the most informative threads I've read in a while.
Of course, I always value solutions and how to get there, as opposed to just getting an answer; that way I know WHY the answer is what it is. I wish you guys would start a thread just for this discussion and finish the conversation there! Also, I appreciate the source citations.
Thanks guys!