Nvidia RAM questions

For render purposes you can get some decent prices on Tesla cards on eBay. Has anyone had any experience with these?
I know RAM does not add with multiple cards, only CUDA cores.
If two cards have different amounts of RAM, will the render engine use the one with the larger amount, or will it always default to one and ignore the RAM in the other? Is there a way to define this?
If a Tesla card has no video out, will Iray choose to use that card's RAM if it is greater than the GeForce card's?
Has anyone had any issues with driver conflicts when using a GeForce card (with monitor outputs) in combination with Tesla cards for CUDA and RAM?
What if the motherboard has onboard video for the monitors (shared RAM) and a Tesla card with no video out is added? Will Iray use the Tesla's RAM?
Thanks.
Comments
Iray will use all Nvidia cards you've tagged in Studio as available. If a scene is too large for the VRAM in a card, that card will be dropped and the remaining card(s) will be used. If the scene fails to fit in any card, Iray will fall back to CPU rendering even if the CPU is not tagged for use.
So if I get this right: card A has 2 GB and card B has 4 GB. If the scene is 1.5 GB, Iray will just grab one and go. If the scene is 2.5 GB, Iray will ignore the 2 GB card's VRAM and use the 4 GB card. All cores on both cards will still be used in the render process.
If the scene needs 1.5 GB, both cards will be used; if it needs 2.5 GB, only the 4 GB card will be used; if it needs 4.5 GB, neither card will be used. Think of each card as a separate computer: it has to be able to load the entire scene to participate.
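The per-card rule above can be sketched as a few lines of logic. This is only an illustrative simulation of the behavior described in this thread (the function name, card list, and numbers are made up for the example), not anything from Iray itself:

```python
# Sketch of the selection rule described above: a card participates only
# if the whole scene fits in its VRAM; if no card qualifies, the render
# falls back to the CPU. Card names and sizes are hypothetical examples.

def select_devices(scene_gb, cards):
    """Return the cards whose VRAM can hold the entire scene."""
    usable = [name for name, vram_gb in cards if vram_gb >= scene_gb]
    return usable if usable else ["CPU"]  # fallback when nothing fits

cards = [("Card A", 2), ("Card B", 4)]

print(select_devices(1.5, cards))  # ['Card A', 'Card B'] - both fit
print(select_devices(2.5, cards))  # ['Card B'] - only the 4 GB card fits
print(select_devices(4.5, cards))  # ['CPU'] - neither card fits
```

Note that a dropped card contributes nothing: neither its VRAM nor its CUDA cores are used for that render.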
So basically what I was saying holds true. In this configuration the largest scene I could render would be 4 GB, because the 4 GB is not added to the 2 GB to give 6 GB.
Yes, but CUDA cores don't add if one card is not used; they only add if both cards are used. So if you have a 2 GB card and a 4 GB card and you want the combined CUDA cores, you are limited to 2 GB scenes. Also, if there's a big difference in speed between the two cards, the CUDA drivers may have problems combining them. The best configuration is two cards of the same kind, or very similar in memory and speed.
I seem to be getting conflicting information on this score, and I don't believe you are correct. Many render machines use multiple cards to achieve a large number of CUDA cores, and if what you say holds true, these mega machines would be limited to rendering only a scene that fits in the VRAM of just one card, basically making render farms impossible.
https://developer.nvidia.com/cuda-faq According to Nvidia you can cascade tens of thousands of CUDA cores. This would all be pretty pointless if, in the end, one were limited by the amount of RAM available on a single card.
https://helpdaz.zendesk.com/hc/en-us/articles/207530513-System-Recommendations-for-DAZ-Studio-4-
"each card has to hold the entire scene" .. "So GPU memory is what determines if the GPU can be used or not, if the file fits, it will be used and if the file doesn’t fit, it won’t."
I've got a Pascal Titan X (12 GB), a GTX 980 (4 GB), and a Titan Z (12 GB). Iray commonly uses 6 GB or more of the Titan X, 3+ GB of the 980, and (due to some sort of error with Windows not recognizing both halves of the Titan Z) the same amount of RAM as the 980, to within 20 MB.
The scene is evenly distributed among all 3 cards' VRAM.
However, if the scene does exceed the 980's 4 GB, Iray dumps the whole job off to the CPU rather than putting more of the work on the Pascal, which could certainly handle it. Perhaps that's something either Daz or Nvidia could look into.
For a while I ran a 780 and a 1080 Ti, using the Daz Octane plug-in rather than Iray. Octane uses the CUDA cores from all of the cards, but only the amount of VRAM of the smallest card. So my renders benefited from the cores on both cards (faster) but were memory-limited by the 780. However, Octane can then use out-of-core memory, so I wasn't limited to 3 GB scenes.
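To make the contrast with Iray concrete, the Octane behavior described above can be sketched the same way: cores from all cards accumulate, while the in-VRAM limit is set by the smallest card. The function is a made-up illustration; the core counts and memory sizes are the published specs for those two cards:

```python
# Sketch of the Octane multi-GPU behavior described above: CUDA cores
# from all cards add together, but the in-VRAM scene limit is the
# smallest card's memory (out-of-core can then go beyond that limit).

def octane_effective(cards):
    cores = sum(c for _, c, _ in cards)       # cores accumulate
    vram_limit = min(v for _, _, v in cards)  # smallest VRAM sets the cap
    return cores, vram_limit

cards = [("GTX 780", 2304, 3), ("GTX 1080 Ti", 3584, 11)]
cores, limit = octane_effective(cards)
print(cores, limit)  # 5888 combined cores, 3 GB in-core limit
```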
You might try disabling the 980 for rendering and updating the Titan Z drivers to get the full 12 GB. I guess the 980 will be great for the viewport anyway if you set it as the primary card in Windows.
I fear that if you leave the 980 enabled, Iray will revert to CPU when the scene exceeds 4 GB because it can't use all the GPUs you selected. But I'm not sure, so test it yourself.
EDIT: since the Titan Z has 2 GPUs and 12 GB, I guess you have 6 GB per GPU. If that's the case, Iray will revert to CPU when the scene exceeds 6 GB. Maybe from the drivers you can disable just one GPU to get the full 12 GB for the other one, but I fear you can't.