Testing Iray VRAM usage
I've been doing some testing to find out what happens in terms of memory usage when rendering in Iray.
System: i7-5820K, X99, 64 GB RAM, RTX 2070 Super (8 GB VRAM), Windows 7 Ultimate, DS 4.15.0.2, and NVIDIA driver 456.38.
Case A) was one G8F figure loaded into an empty scene with clothes and hair, lit with three point lights.
Case B) was four G8 figures (SubD 2) with clothing, hair, and architecture, lit with three point lights.
Case C) was four G8 figures (SubD 4) with clothing, hair, and architecture, lit with three point lights.
Case D) was four G8 figures (SubD 5) with clothing, hair, and architecture, lit with three point lights.
The last one dropped to CPU. The log file says "CUDA device 0 (GeForce RTX 2070 SUPER) ran out of memory and is temporarily unavailable for rendering", but it reports geometry using 3.8 GiB before stating that it can't fit 7.7 GiB into the available 3.1 GiB.
Interesting how VRAM is not released even when opening a new scene...
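For anyone wanting to watch this for themselves, here's a minimal sketch of how the VRAM figures can be monitored from outside DS, assuming Python and nvidia-smi are available (the query flags are standard nvidia-smi options; adjust -i if you have more than one GPU):

    import subprocess
    import time

    # Poll nvidia-smi once per second and print VRAM use, so you can
    # watch what happens while loading scenes and rendering in DS.
    while True:
        used, total = subprocess.check_output(
            ["nvidia-smi", "-i", "0",
             "--query-gpu=memory.used,memory.total",
             "--format=csv,noheader"],
            text=True,
        ).strip().split(", ")
        print(f"{used} / {total}")  # e.g. "2140 MiB / 8192 MiB"
        time.sleep(1)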
Comments
I need to reboot often when using DAZ Iray.
That's a known issue with Iray.
As soon as you run the first render, Iray won't fully release VRAM between renders, even if you load a new scene or clear the scene.
You'll either have to close DS or, in the worst case, as wendyluvscatz says, restart the system.
I've personally never had to restart my system, but I attribute that more to running dual GPUs, one for video and one for rendering, as well as not using "consumer"-class GPUs. As soon as I close DS, the VRAM is fully released.
---------------------------------------------------------------------------------------------------------
Since you seem to be testing Sub-d, I'll share some of my recent insights.
Each increase in Sub-d level multiplies the polycount by 4x, with a matching increase in VRAM used (see the sketch after the list below).
Base G8F: ~16k polys.
Sub-d 1: 65k polys, increasing VRAM utilization by ~11-12 MB.
Sub-d 2: 261k polys, ~46-48 MB.
Sub-d 3: 1m polys, ~178-180 MB.
Sub-d 4: 4m polys, ~698 MB.
Sub-d 5: 16m polys, ~2.7-2.8 GB.
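If you want to play with those numbers, here's a quick back-of-the-envelope sketch of the scaling; the 16,384 base count is an illustrative approximation, not the exact G8F figure:

    # Each Sub-d level quadruples the quad count of the base mesh.
    BASE_POLYS = 16_384  # ~16k; illustrative approximation

    def subd_polys(level: int, base: int = BASE_POLYS) -> int:
        """Polygon count after subdividing the base mesh to a given level."""
        return base * 4 ** level

    for level in range(6):
        print(f"Sub-d {level}: ~{subd_polys(level):,} polys")

Running it gives ~65k at Sub-d 1 up to ~16.8m at Sub-d 5, matching the measured counts above.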
I tested with a base G8F, no clothes or hair, no additional lighting, the default view position, and default render settings at 1k x 1k.
Across three different versions of DS (4.12.1.117, 4.12.1.118, and 4.15.0.2) and three different GPUs, a P106-100 (GTX 1060 mining card), a Tesla M40 (12 GB), and a Tesla M20 (5 GB), the VRAM use was consistent.
The weird one was Sub-d 4, at 698 MB, being constant across everything.
Sub-d 5 did have a variation of ~100 MB between 4.12 and 4.15, with 4.12 being the larger.
I was not actually testing SubD, just using it to fill the VRAM while everything else in the scene stayed the same, since I wanted to see what would happen when the render dropped to CPU.
The result was on par with what I already suspected: some of the VRAM was still reserved, even when the GPU didn't take part in rendering.
VRAM being used was claimed to be proof of the GPU being involved, just at a "slower pace"... and since I couldn't get a log file to prove otherwise, I had to recreate the scenario myself.
That dead horse, again.
When you close a render, VRAM does not get released; I guess it remains cached, because if you restart rendering it starts a bit faster than if you start it for the first time.
But this doesn't seem to cause any problems, because the memory is released when it is needed.
I was left wondering what it actually leaves in memory; the amounts seem to be:
1. The base load from the OS = 200 MB
2. The base load from DS = 170 MB
3. The base load from the scene = XX (52 to 470 MB in my tests)
4. Something quite consistent = ~2140 MB
The last one gets lower only when the overall VRAM load is about to exceed, or has exceeded, what's available on the GPU.
I can understand keeping geometry- and texture-related stuff in cache, but it is of no use for a new scene or when you open a different scene.
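To make that bookkeeping concrete, here's a small sketch of the subtraction; the MB figures are the ones measured in my tests above, not universal constants:

    # Subtract the known baselines from the measured leftover VRAM to
    # isolate the consistent residual. Figures are from my tests only.
    def residual_mb(measured: float, os_base: float = 200,
                    ds_base: float = 170, scene: float = 52) -> float:
        """VRAM (MB) that remains unexplained after the known baselines."""
        return measured - (os_base + ds_base + scene)

    # e.g. with 2562 MB still held after rendering the smallest test scene:
    print(residual_mb(2562))  # -> 2140.0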