Adding an Nvidia graphics card (or two) (or three)
Hello,
First post, but been lurking around the forums for a couple of months. I currently have an nvidia 970 4GB card installed. I know iRay can take advantage of additional nvidia cards, if installed. My question is really two parts, though the second half may not be appropriate for here (as opposed to asking nVidia directly).
- If I add another 970 4GB, will iRay utilize all 8GB of graphics memory? Or will it still be limited to just the 4GB of the original card? What happens if I install a third nvidia card? Will iray aggregate the available RAM?
- Since I am not interested in utilizing SLI (I don't game on my graphics desktop), is there any issue, or any advantage, to going with a different model of card (e.g. the 780, if I can find one reasonably priced, or a 980)?
Thanks in advance for any insight!!
Comments
Hi,
Each card only works with its own memory (and they don't have to match for our purposes).
You don't have to activate SLI on SLI cards.
3D apps generally require SLI to be turned off, but they don't have to specifically be non-SLI cards. (You'll want to turn SLI back on for Minecraft.)
Thanks prixat! Yeah, I'm not worried about SLI at all since I have no plans to use it. I guess my main concern was whether iRay would leverage all of the available graphics RAM. I'm actually pretty happy with my render speeds right now for most basic scenes. However, I have created some more complex scenes that are pretty hefty and zipping past the 3.5 GB that the 970 can actually address (yes, it's a 4GB card, but its architecture limits it to only having 3.5 available for use). This is obviously causing iRay to fall back to CPU rendering, which still isn't terrible, but it completely bogs me down from doing anything else while even some spot renders are progressing. So while adding more CUDA cores would be a huge bonus, taking advantage of the additional graphics RAM is really my end goal in adding another card.
Thanks again for the quick feedback!
I see your problem; unfortunately, the current answer is more memory on each card. The memory doesn't aggregate.
Ok, not sure I'm following. One card in the system now with 4GB. If I add a second nvidia card that also has 4GB, are you thinking iRay would only use the 4GB on the original card for the entire render, and would not utilize the 4GB on the second board? In other words, if the scene tops out at needing 6GB of video RAM, iRay is going to fall back to CPU-only mode. I am very sorry if I'm not explaining correctly, or if I am just being extraordinarily dense on this Friday morning. Been a long week. :)
Iray does not combine the memory. If you add a second 4 GB card it will be used, and all the cores of both cards will participate as long as the scene fits in 4 GB. But if the scene you are rendering takes more than 4 GB of VRAM, both cards will drop out and the render will finish CPU only.
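If you want to double-check what each card actually has free before you hit render, here's a rough Python sketch using NVIDIA's NVML bindings (I'm assuming you'd install them with pip as nvidia-ml-py); it just lists each GPU's free and total memory so you can eyeball whether a scene will fit on a single card:

```python
# Rough sketch: list each GPU's free/total memory. Iray needs the whole
# scene to fit in ONE card's memory for that card to participate.
# Assumes the NVML Python bindings are installed (pip install nvidia-ml-py).
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):   # older bindings return bytes
            name = name.decode()
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"GPU {i}: {name} - "
              f"{mem.free / 1024**2:.0f} MB free / {mem.total / 1024**2:.0f} MB total")
finally:
    pynvml.nvmlShutdown()
```

GPU-Z will show you the same numbers if you'd rather not script it.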
Thanks namffuak. That's what I was afraid of, and what prixat was saying, I believe.
Ugh. In reality, my wallet is happy, though my heart has sunk a bit.
Thanks again to each of you. Hope to be a little more active around the forums.
Not quite...it kind of addresses 3.5 GB in a very fast, typical video card fashion and reserves the other half gig that can be accessed for graphics applications, if needed, but in a much slower fashion. For gaming purposes this royally sucks...probably knocks about 5 to 10% off your frame rates. For rendering...it just doesn't matter. And if it's the only card in your system that's probably why you can't fit a full 4 GB scene on the card...you are using between 150 and 512 MB (minimally) just to run your monitor.
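To put very rough numbers on that (ballpark only, using the figures above):

```python
# Back-of-the-envelope headroom on a GTX 970, using the rough figures
# above (assumptions, not measurements): 3.5 GB fast segment minus
# whatever the desktop/monitors are already holding.
fast_segment_mb = 3.5 * 1024                # the "quick" 3.5 GB partition

for display_overhead_mb in (150, 512):      # rough low/high overhead quoted above
    headroom = fast_segment_mb - display_overhead_mb
    print(f"{display_overhead_mb} MB display overhead -> ~{headroom:.0f} MB left for the scene")
```

So in practice you're working with roughly 3 to 3.4 GB for the scene itself on a 970 that's also driving monitors.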
Interesting. I hadn't investigated what the actual issue was as I found out about it after buying the card, so it didn't matter much at that point. Too much of a pain to return it, and since I don't game on that workstation, it wasn't a huge deal. Of course, that's before I got hooked on messing around w/3D.
I'm running two monitors off of it now because the integrated graphics card doesn't have a display port, and my monitors are resolution-limited with HDMI. Long story short, I'm aware of that limiting the size of a scene I can render via GPU.
On the topic of scene sizing, I'm under the assumption that texture sizes have as much or more impact on how much RAM the scene takes as poly count does. Is that a correct assumption? I understand the higher the poly count, the more calculations required, but it seems to me that large textures would impact graphics memory more. I'm a noob, though, so what do I know? lol
PS: I humbly bow to your script-writing prowess. Insanely useful things you have come up with.
Scripts aren't me...that's MCJ.
But, yes, Iray uses about 3 bytes per pixel, color/b&w, doesn't matter. So a 4K texture will eat about 50 MB. Figure out how many unique texture maps are on a typical model (about a dozen...give or take). So, yeah, textures will eat a lot of memory, quickly...
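Back-of-the-envelope, if you want to run the numbers yourself (ballpark only; actual Iray usage will vary):

```python
# Ballpark texture VRAM, assuming roughly 3 bytes per pixel as described above.
BYTES_PER_PIXEL = 3

def texture_mb(width, height):
    """Approximate VRAM for one texture map, in MB."""
    return width * height * BYTES_PER_PIXEL / 1024**2

one_4k = texture_mb(4096, 4096)                    # ~48 MB per 4K map
print(f"one 4K map:       ~{one_4k:.0f} MB")
print(f"a dozen 4K maps:  ~{12 * one_4k:.0f} MB")  # roughly one figure's worth
```

So the textures for one fully mapped figure can run past half a gig before hair, clothes, or the environment even come into it.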
Ooops, sorry for the confusion. Great info, and thank you for taking the time to help out a new guy.
It may be useful to add an inexpensive NVIDIA card to run your monitors. For example a GT 740 with 4GB of RAM. If that is in the first slot and your monitors are plugged into it, then it will offload the viewport draws, OpenGL, etc. from the 970, making the GTX 970 fully available for rendering. (Note: if you get the 4GB version of the 740, it is a low-powered card that can also add to the render power, though it won't add as much as adding a second GTX 970, or adding a 980Ti (6GB) or Titan X (12GB).)
I was actually looking at a 750 (it has a built-in DisplayPort) over the weekend, and thinking then the 970 could just be dedicated to rendering. I'm just sort of torn between spending $125-$150 on that now, or being patient and saving up some cash to get a Titan at some point down the road. Problem is I have little patience, or grasp of delayed gratification.
A question came to mind though while looking at the 750. If I have both cards, with the 750 just running the displays, would iRay draw mode in the Aux viewport (which I use a lot when tweaking scenes) just utilize the 750, so it would be slower than now? Or would it be able to use the 970?
I wonder if there is a reliable way to determine how much system RAM Iray is using on a CPU-only render? We can easily tell the memory requirements when running GPU-Z on a cooperative nVidia card during a render. I can see, for example, that most of my scenes -- 1 G2F with 4K textures, clothes and hair, an HDR on the dome, and a couple of photometric lights -- consume about 1200MB. (I seldom do more than one figure, and typically don't render scenes -- just a transparent background.) I've used that to determine memory requirements for upgrading.
But is there a way if someone doesn't have an nVidia card to begin with?
Speaking of the 970, does anyone have that for Iray, and testing >3.5GB scenes? I know for games there can be a tremendous slowdown, but I'm not sure how that might relate to Iray. According to GPU-Z, the memory controller load is often <40% when all GPUs are firing.
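Not aware of a built-in readout for that, but one rough way to ballpark it without an nVidia card is to watch the Studio process's memory while a CPU-only render runs and note the peak. Here's a minimal Python sketch (assuming the psutil module is installed and that the process shows up as "DAZStudio.exe"; adjust the name for your install). It's not a one-to-one stand-in for what Iray would upload to a card, but it gives you a ceiling to plan around:

```python
# Rough sketch: poll the DAZ Studio process during a CPU-only render and
# report the peak resident memory. Assumes psutil is installed and that
# the process name is "DAZStudio.exe" (an assumption; adjust as needed).
import time
import psutil

PROCESS_NAME = "DAZStudio.exe"

def find_process(name):
    """Return the first running process whose name matches, or None."""
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == name:
            return proc
    return None

proc = find_process(PROCESS_NAME)
if proc is None:
    raise SystemExit(f"{PROCESS_NAME} not found; is DAZ Studio running?")

peak_mb = 0.0
try:
    while True:                                   # stop with Ctrl+C
        rss_mb = proc.memory_info().rss / 1024**2
        peak_mb = max(peak_mb, rss_mb)
        print(f"current: {rss_mb:6.0f} MB   peak: {peak_mb:6.0f} MB", end="\r")
        time.sleep(2)
except KeyboardInterrupt:
    print(f"\npeak observed during the render: {peak_mb:.0f} MB")
```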
If you go for the 750, be sure it's the 2GB version. A 1GB card may not fit anything but simple test scenes. And then it's just a plain video card.
Personally, I'd consider a 4GB 740. Though it has slightly fewer CUDA cores than the 750, the additional memory means it's more likely to participate in the render of larger scenes. Amazon has the GDDR5 4GB 740 for around $120.
If you are just going to use the 700 series card for display duties, then it doesn't really matter. But a 4 GB card that can join in when the scene fits makes more sense to me, cost-wise, than a card that can only join in every once in a while.