Are two Titan RTX's better than 1 if they are bridged?

My MB has four pci slots... Is it pointless to bridge two Titan RTX cards? Or should one run them separately? Or is two stupid and I should give one to a friend? Thanks, always wondered this. Nehl


Comments

  • Gordig Posts: 9,893

    Even unlinked, the second card would improve your render times. If you can successfully link them (I don't remember what the verdict was about DS Iray supporting that), it would allow memory pooling, increasing the size of scenes that you could render. 

  • nehleo Posts: 20

    Thanks Gordig. So... what if I kind of had a 3070 laying around... does that help too? Two Titans bridged and a single 3070. Nehl

  • Gordig Posts: 9,893

    The more (CUDA cores) the merrier. 

  • nehleo Posts: 20

    Sweet. Thanks Gordig. Was always afraid that the extra cards were a goofball idea. 

  • Gordig Posts: 9,893

    This thread tracks benchmarks, so you can compare the results of different combinations of cards. Maybe you can contribute your own results if this all works out. 

  • nehleo Posts: 20

    Ok.... Can I just untick one inside the Daz render settings, or do you want me to unplug one at a time? 

  • Cenobite Posts: 206

    You need more VRAM, which means a better video card, not more cards. The extra card doesn't add to your total VRAM, nor does it really add anything but power draw and wasted running time. Dual cards are a gimmick; don't bother making these useless setups that only multiple-monitor rigs make use of, and you don't really need that either, because you can set up more than one display on one card. This all generates too much heat for no benefit. I advise not wasting time & money so-called crossfiring cards, it's total BS.

  • nehleo Posts: 20

    Well....Sorry Cenobite. If you have a test you want me to try, then let me know. As for now... Having three cards is way faster. Like, holy balls faster.

    I waited until the system had cooled down to idle temps on both tests: 38 C.

    Test #1: two Titan RTX in bridge plus an RTX 3070, using Studio drivers. Render time of 27 minutes and 28 seconds.

    Test #2: two Titan RTX in bridge, using Studio drivers. Render time of 1 hour, 3 minutes and 38 seconds.
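    For anyone curious, the speedup between those two timings is easy to work out (my own sanity check on the numbers above, nothing more):

```python
# Rough speedup estimate from the two render times above.

def to_seconds(h=0, m=0, s=0):
    """Convert an hours/minutes/seconds render time to seconds."""
    return h * 3600 + m * 60 + s

three_cards = to_seconds(m=27, s=28)    # 2x Titan RTX (NVLink) + RTX 3070
two_cards = to_seconds(h=1, m=3, s=38)  # 2x Titan RTX (NVLink) only

speedup = two_cards / three_cards
print(f"Adding the 3070 made the render {speedup:.2f}x faster")  # about 2.32x
```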

    Nehl

    2021-06-02 (5).png
    445 x 1561 - 150K
    2021-06-02 (3).png
    441 x 1116 - 143K
  • Cenobite Posts: 206

    So they do improve render times, having more than one card; I was told by so-called experts that's a load of BS and a waste of money. My system doesn't go above 30, liquid cooled; in winter it stays under 20 while rendering, unless I'm doing really high-quality renders. The graphics card runs a little hotter, around 30 to 40, because I haven't liquid-cooled my 2080 Ti (11 GB VRAM); it's only using the 3 fans cooling its processors.

    I was told that the extra card never added to overall VRAM and was a waste of time and power, so I put money into CPU processors and RAM instead; my CPU has 10 physical and 20 logical 4.5 GHz processors, which makes up for the so-called extra cards. How does VRAM bridge when the highest cards available are 12 GB VRAM, unless they have done something new? I have a full 256 GB on the mainboard for my cores to use at maximum processing power; my cores are all fully fed if I need to pipe up to 4.5 GHz on all banks.

  • Cenobite Posts: 206

    Besides, I notice you have zero use on one card and it's 44 degrees; that means it's not doing anything, just sitting there idling! So I fail to see how it added anything. Your CPU is quite good; maybe the extra card boosted your core RAM and your CPU is running better, but it still shows 0% in your picture, so that means it's not doing anything but drawing power.

  • ebergerly Posts: 3,255
    edited June 2021

    CPUs generally provide little benefit to render speeds compared to additional GPUs, especially now that RTX technology has dedicated hardware designed to address ray-tracing problems and speed up renders of complex scenes with a lot of ray-tracing calculations. However, the amount of benefit depends on your scene and on the extent to which the ray-tracing renderer software is written to take advantage of all that hardware. There are thousands of cores in a GPU, which means it can hypothetically do thousands of tasks simultaneously, compared to, say, a 20-logical-processor CPU that can only do 20 chores simultaneously. Again, it all depends on the scene and the software. 

    Also, if you're rendering Iray with a GPU, you only need system RAM that is maybe 3 times the amount of VRAM the GPU has. When you hit "Render", the scene gets loaded into system RAM, then Iray processes it and compresses it into special GPU code that it sends to the GPU. So for a 12 GB VRAM GPU you may only need a total of 48 to 64 GB of system RAM to handle the rendering plus all the other apps that may be running while you're rendering. 

    Post edited by ebergerly on
  • ebergerly Posts: 3,255

    Also, I wouldn't worry much about GPU temperatures. One of the nice things about those GPU drivers we're always updating is that they include software that measures the actual temperature of the GPU hardware as it renders (or whatever it's doing) and senses when the temps are getting too high for the hardware to handle. If the temps get up into the 80-100 C range (generally around 100 C is the point where electronics start to overheat), the software cranks up the fans and may even put the brakes on the rendering process until the temps start dropping. But generally, as long as you clean your PC of dust and make sure the fans are working, there shouldn't be a need for additional cooling. I have two GPUs sitting next to each other, air cooled, and even with both rendering at max the temps are fine. As long as they're in the 60-80 C range worst case, I'm happy. 

  • nehleo Posts: 20
    edited June 2021

    [Bother]. I will try to keep up with what you two are saying. Thanks for the response. Learning a lot here. 

    Cenobite, that's awesome on your temps. What's your method for cooling? I have to do something better.

    I don't use my CPU when rendering. Should I give it a shot? See how it affects times?

    As for the Task Manager image, that was taken while prepping the scene. I didn't want to touch anything while running the test. While running the test, Task Manager never reads a percent on the cards, however the temps on the Titans hit around 77 C to a max of 80 C. I think this is the thermal throttling Ebergerly is speaking of. The render will hit 77 C in about six minutes at 75% rendered, leaving the final 25% to take 21 minutes. All while Windows reads 0% in Task Manager. Very strange. Maybe Task Manager only reads... game-style graphics and not CUDA cores. 

    Ebergerly, my CPU sits around 30% while rendering. 

    I noticed that when I ran Game drivers my Titan temps hit 90 C. Once I switched to Studio drivers my temps are capped at 80C on the Titans and 68 C on the 3070.

    Post edited by Richard Haseltine on
  • PerttiA Posts: 10,014
    edited June 2021

    Cenobite said:

    I was told that the extra card never added to overall Vram and was a waste of time and power,

    Unless you have identical GPUs connected to each other with an NVLink, the available VRAM is still only the amount available on one card. If you have NVLinked cards, the VRAM is combined between cards for textures but not for geometry.

    On the 30xx series, NVLink is only available on the 3090; on the 20xx series, NVLink can be found on the RTX 2070 Super, 2080, 2080 Super, 2080 Ti, and Titan RTX cards.

    Multiple cards, even without NVLink, will add to the processing power (if the scene can fit in the VRAM of each card separately), but they do not enable you to render bigger scenes (as in scenes requiring more VRAM).
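    As a toy illustration of that rule (my own sketch, not anything Iray reports; the scene sizes are hypothetical), the fit test looks something like this:

```python
# Toy model of the NVLink VRAM rule described above: textures can be
# pooled across bridged cards, but geometry cannot.

def fits(texture_gb, geometry_gb, card_vram_gb, cards=2, nvlink=False):
    """True if the scene should fit in VRAM under this toy model."""
    if nvlink:
        # Textures can be spread across the bridged cards, but each
        # card still needs its own full copy of the geometry.
        per_card = geometry_gb + texture_gb / cards
    else:
        # Without a bridge, every card needs the entire scene.
        per_card = geometry_gb + texture_gb
    return per_card <= card_vram_gb

# Hypothetical scene: 30 GB of textures, 6 GB of geometry, 24 GB Titans.
print(fits(30, 6, 24))               # False: too big for one card
print(fits(30, 6, 24, nvlink=True))  # True: pooled textures fit
```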

    https://www.daz3d.com/forums/discussion/341041/daz-studio-iray-rendering-hardware-benchmarking/p1

    Post edited by PerttiA on
  • nehleo Posts: 20

    PerttiA, I promise to check out that link, I promise. Looks very.... yeah... I will check it out after I mow the yard. I promise.... So... textures vs. geometry. I have the two Titans NVLinked together. How would one know if the geometry in a scene is too large, making the bridge pointless? Nehl

  • PerttiA Posts: 10,014

    nehleo said:

    PerttiA, I promise to check out that link, I promise. Looks very.... yeah... I will check it out after I mow the yard. I promise.... So... textures vs. geometry. I have the two Titans NVLinked together. How would one know if the geometry in a scene is too large, making the bridge pointless? Nehl

    The log (Help > Troubleshooting > View Log File) will tell you how much VRAM was reserved for textures and how much was used for geometry; the same can also be seen on the pop-up thingy while rendering. 
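    If you render often, those numbers can even be pulled out of the log automatically. A minimal sketch (the log wording below is invented for illustration; check your own log file for the exact phrasing Iray uses):

```python
# Sketch of pulling texture/geometry VRAM figures out of a render log.
# The line format here is made up for illustration -- the real Iray log
# wording may differ, so adjust the pattern to match your own log.
import re

sample_log = """\
IRAY rend info: Geometry memory consumption: 5.1 GiB
IRAY rend info: Texture memory consumption: 14.7 GiB
"""

usage = dict(re.findall(
    r"(Geometry|Texture) memory consumption: ([\d.]+) GiB", sample_log))
print(usage)  # {'Geometry': '5.1', 'Texture': '14.7'}
```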

  • ebergerly Posts: 3,255

    My point was that your CPU will give very little improvement in render times compared to your GPUs. You can try it yourself by doing a render with only the GPUs, then the identical render with the CPU and GPUs enabled in Render Settings/Advanced. And a downside is that if you include the CPU in your renders it will lock up your computer, unless you allow CPU rendering on only a few of its cores. 

    Also, just because your temps might be flattening at a certain value doesn't mean it's throttling performance. It's more likely that it's just cranking up the fans to bring the temps down, and continually controlling the fan speed to maintain a certain temperature. 

    And if you want to see a graph of what the CUDA cores are doing (utilization) during an Iray render, go to the Task Manager Performance tab, select a GPU on the left, click the dropdown arrow in the upper left corner of any of the four graphs, and select CUDA. If that graph is near zero during a render, your GPU isn't being utilized. 

     

  • ebergerly Posts: 3,255

    The way to find out if the NVLink is pointless is to try to render your scene with only the smallest-VRAM GPU enabled for rendering. If the scene can't fit on the smallest-VRAM GPU, then that GPU won't participate in the render and the NVLink will be of little use.

    And to see if the smallest-VRAM GPU is actually rendering when you try this, go to the Task Manager Performance tab, select that GPU on the left-hand side, and in any of the four graphs select the small dropdown in the upper left corner and choose "CUDA". During a render that graph should show near-full utilization. Since Windows' job is to allocate hardware resources like VRAM, it's the only source for correct information on how much VRAM is being used in your GPU. The Iray log file will only tell you a small subset of that allocation, and personally I've found it to be of little use. 

  • nehleo Posts: 20

    PerttiA, Awesome. Thanks

    I know Ebergerly says not to worry about GPU temps, and I don't, well, I don't stress about it, however it would be nice to stretch my 6 minutes a bit longer. Cenobite keeps his frosty. I kind of want to keep mine frosty. Is liquid cooling a bad idea? It looks pricey. But if I could extend that 6 minutes to 12 minutes, I could finish a render before I hit the temp ceiling.

    Is this considered a taxing scene?

    Any advice on keeping her cool? I have to keep the Titans together due to the nvlink.

    2021-06-03 (3).png
    2619 x 1383 - 4M
    2021-06-03 (4).png
    1832 x 1341 - 4M
    2021-06-03 (5).png
    1798 x 1376 - 4M
  • PerttiA Posts: 10,014

    nehleo said:

    Is this considered a taxing scene?

    No way to tell from the picture, unless one can tell which products are used and how taxing they are on the system. There are big architectural products that are light on the system, while at the same time a pair of plain-looking boots can bring a high-end system to its knees.

  • ebergerly Posts: 3,255

    Not sure what you mean by extending 6 min to 12 min. But personally I think liquid cooling is a waste of time and money. Electronics equipment is designed to operate within a certain temperature range, and if it does it should very likely last for the entire life it was designed for. And that's probably a lot longer than you or I will own our computers. While "frosty" temperatures might feel like a good thing, the actual hardware doesn't care whether it's running at 30C or 70C. Cooler is not necessarily better. Just as long as the GPU drivers can control the temps so they don't get past the 100C region you're fine. 

    That doesn't mean that your particular installation doesn't have some issues that are preventing the GPUs from being cooled as they should. If you're overclocking or something else is going on then of course you may be getting up in the 100C range and causing problems. 

    And keep in mind that temps are pretty much independent of the scene. Typically, when you start a render the temps start climbing, and after a few minutes the controller software goes into action and plays with the fan speeds to maintain the temps in the "normal" range that GPU was designed for. And engineers went to a lot of effort to figure out what that safe range is, and set the controllers to maintain that range for as long as you render. So if it's 80C for 10 minutes or for 3 hours, it doesn't matter. It's in the safe operating range and all is well.  

  • nehleo Posts: 20

    Perttia, Wow, thanks, I need to go down the rabbit hole of.... asset management. 

  • TheKD Posts: 2,677

    ebergerly said:

    While "frosty" temperatures might feel like a good thing, the actual hardware doesn't care whether it's running at 30C or 70C. Cooler is not necessarily better. Just as long as the GPU drivers can control the temps so they don't get past the 100C region you're fine. 

    That doesn't mean that your particular installation doesn't have some issues that are preventing the GPUs from being cooled as they should. If you're overclocking or something else is going on then of course you may be getting up in the 100C range and causing problems. 

    And keep in mind that temps are pretty much independent of the scene. Typically, when you start a render the temps start climbing, and after a few minutes the controller software goes into action and plays with the fan speeds to maintain the temps in the "normal" range that GPU was designed for. And engineers went to a lot of effort to figure out what that safe range is, and set the controllers to maintain that range for as long as you render. So if it's 80C for 10 minutes or for 3 hours, it doesn't matter. It's in the safe operating range and all is well.  

    That's not true. I don't know about Intel; it's been a while, my last two builds have used AMD. Ryzen CPUs, as well as Nvidia GPUs, will throttle their chips, as well as ramping the fans up, if they reach higher temps. That means if you don't have good cooling, performance goes up and down, up and down. It's not very stable. If you can keep them cool, they just run at 100% instead of fluctuating. Personally, water cooling scares me; I imagine nightmare scenarios where the tube busts open and hoses my mobo and everything fries, lol. So I stick with huge towers with lots of vents and fans, and a Noctua cooler for the Ryzen beast.

  • ebergerly Posts: 3,255
    What's not true?
  • nehleo said:

    My MB has four pci slots... Is it pointless to bridge two Titan RTX cards? Or should one run them separately? Or is two stupid and I should give one to a friend? Thanks, always wondered this. Nehl

    Unless you know you're going to be rendering scenes that require more than 24 GB of VRAM, I wouldn't bother with an NVLink bridge. Also, if I recall correctly, it's faster using both cards without a bridge than using both cards NVLinked when Iray rendering in Daz. Just make sure you have enough system RAM (128 GB) to support it. They recommend having twice the system RAM to VRAM, so 48 GB of VRAM equates to needing at least 96 GB of RAM, which means 8 sticks of 16 GB or 4 sticks of 32 GB depending on your motherboard.
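    That sizing rule is simple enough to write down (a sketch of the guideline quoted above, not an official requirement):

```python
# Rule of thumb from the post above: system RAM >= 2x total VRAM.

def min_system_ram_gb(total_vram_gb, ratio=2):
    """Suggested minimum system RAM for a given amount of VRAM."""
    return total_vram_gb * ratio

# Two 24 GB Titan RTX cards = 48 GB of VRAM.
print(min_system_ram_gb(48))  # 96 -> in practice, a 128 GB kit
```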

  • TheKD said:

    ebergerly said:

    While "frosty" temperatures might feel like a good thing, the actual hardware doesn't care whether it's running at 30C or 70C. Cooler is not necessarily better. Just as long as the GPU drivers can control the temps so they don't get past the 100C region you're fine. 

    That doesn't mean that your particular installation doesn't have some issues that are preventing the GPUs from being cooled as they should. If you're overclocking or something else is going on then of course you may be getting up in the 100C range and causing problems. 

    And keep in mind that temps are pretty much independent of the scene. Typically, when you start a render the temps start climbing, and after a few minutes the controller software goes into action and plays with the fan speeds to maintain the temps in the "normal" range that GPU was designed for. And engineers went to a lot of effort to figure out what that safe range is, and set the controllers to maintain that range for as long as you render. So if it's 80C for 10 minutes or for 3 hours, it doesn't matter. It's in the safe operating range and all is well.  

    That's not true. I don't know about Intel; it's been a while, my last two builds have used AMD. Ryzen CPUs, as well as Nvidia GPUs, will throttle their chips, as well as ramping the fans up, if they reach higher temps. That means if you don't have good cooling, performance goes up and down, up and down. It's not very stable. If you can keep them cool, they just run at 100% instead of fluctuating. Personally, water cooling scares me; I imagine nightmare scenarios where the tube busts open and hoses my mobo and everything fries, lol. So I stick with huge towers with lots of vents and fans, and a Noctua cooler for the Ryzen beast.

    Yes, too great a temperature fluctuation range, too frequently, is never a good thing, which can happen with gaming cards. My new Threadripper system is still in the process of being built, waiting on a few other parts (just fans and a couple of other peripherals). Originally I was going to throw in an RTX A6000 with the Kingpin 3090, but I'm probably going to get an A5000 instead and NVLink it with an additional one later down the road if I find I need more VRAM (doubtful). They're not the fastest for rendering compared to their gaming counterparts, but they're great for long renders without having to worry as much about thermals.

    I don't like liquid cooling either, but the case I'll be using isn't the most ideal for air cooling, so I opted for AIO liquid-cooling solutions for the 3090 (it comes with one) and the CPU. No need to worry about refilling/changing fluid every 6 months.

  • ebergerly Posts: 3,255

    I don't understand why "temperature fluctuation" is a concern. Attached is a graph showing the temperature and fan speed of my RTX 2070 Super during an 11-minute render. Note that as the temperature (on the left) rises, the fan speed (on the right) also rises in response. And thanks to the GPU driver's controller software, which monitors the temps and adjusts the fan speeds accordingly, the GPU temp flattens out at around 83C, with virtually zero fluctuation, as well as zero performance throttling, since throttling doesn't occur until the 100C region. 

    Now if somebody's cooling system isn't working for some reason, and temperatures do get into the 100C region, then yeah, the controller might throttle GPU performance, but all that causes is a slower render. 
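    The closed-loop behaviour described above can be mimicked with a toy simulation; every constant here is invented for illustration, but it shows why the temperature flattens out instead of fluctuating:

```python
# Toy simulation of the fan-control loop described above: a simple
# proportional controller raises fan speed as the GPU heats up.
# All constants are invented for illustration only.

TARGET_C = 83.0            # temperature the controller tries to hold
HEAT_PER_STEP = 5.0        # heating per step from the render load
COOLING_PER_FAN_PCT = 0.1  # cooling per percent of fan speed

def fan_speed(temp_c, gain=10.0):
    """Proportional controller: fan ramps up with temp above target."""
    return max(0.0, min(100.0, gain * (temp_c - TARGET_C)))

temp = 40.0  # idle temperature
for _ in range(50):
    fan = fan_speed(temp)
    temp += HEAT_PER_STEP - COOLING_PER_FAN_PCT * fan

# The temperature settles safely below the ~100 C throttle point.
print(f"settled at {temp:.1f} C with fan at {fan_speed(temp):.0f}%")
```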

    By the way, on the question about CPUs and render times: my RTX did this scene in 11 minutes, while my 8-core, 16-thread Ryzen took 120 minutes (2 hours). 

    GPU.PNG
    1910 x 888 - 84K
  • nehleo Posts: 20

    Ebergerly, I love what you're saying. I hope it's true. It would solve a lot of problems for me. Any chance I can get a screenshot of your 11-minute render? 

    Here is why we think that temp fluctuation is a concern. On a render, the progress goes 0% - 25% - 50% - 75% Holdup! TEMPS ARE MAXED! 

    So from 0% to 75% system was running a sprint race. And on my system that is around 6 minutes. 

    The problem is that I still have 25% of my render left.

    And because my PC is no longer sprinting, the remaining 25% takes an additional 21 minutes.

    So the hope is that if I could keep my PC sprinting past the 6 minute mark.... maybe I could finish a render before she gets.... slow.

  • ebergerly Posts: 3,255

    So you're saying that when temps of your RTX card(s) reach the high 70's (takes about 6 min to get there), suddenly the rendering slows down? 

    Did you do as I suggested and check Task Manager as it's rendering: under Performance, select the GPU, then select CUDA in the dropdown to see the graph of CUDA activity? Maybe for some reason the GPU (or both GPUs) are quitting at the 6-minute mark. If so, you'll see the CUDA graph drop to zero. Also look at the CPU activity, because if the GPUs shut down, the rendering should fall to the CPU, and all the CPU cores should be maxed out. 

    If it's happening after 6 minutes, then it probably has nothing to do with VRAM because the VRAM was filled with the scene at the start. 

    In any case, I don't think what you're experiencing is normal, and may have nothing to do with temperatures. But to figure it out you'll need to get data to see what's happening. Look at what the GPUs are doing in Task Manager, check the Iray log to see if it has any hints, check the Reliability History to see if it holds any clues, etc. 

    I'm not sure of exactly what you're running or how you're configured, but it may even be a driver issue or hardware issue, or maybe even some strange render settings. But it's important to see if the GPU's are still running until the end of the render and look at the data, not just speculations. 

  • ebergerly Posts: 3,255

    Also, one thing to consider if you're going to debug this, is that the more complex your setup the more difficult it may be to find the culprit. I'd suggest that you simplify and isolate, the first step in any debugging process. You might want to disconnect the NVLINK and just use one GPU and see if it exhibits the same behaviour. 

    And honestly I can't figure out why you're assuming that temperatures have anything to do with this. Like I said, temps in the 70s are fine and should have no effect on GPU performance. It sure sounds like there's something else going on, and my first hunch is that the NVLink complexity might have something to do with it. 
