2 GPUs Configuration: Should I Mix RTX 3000 and 2000? Which 3000 to buy? When?

Hi everyone!

Like everyone else, I'm happy about the RTX 3000 release.
But there's a problem: it means that RTX 2000 GPUs have suddenly drastically depreciated.

I am going to buy an RTX 3080.
My system is ready for 2 GPUs (I built it for this purpose, with an X570 motherboard, a 1300W PSU, etc., but I've only ever had one GPU installed).

My question is, should I use the RTX 2070 Super I already have together with the new 3080, or should I sell it ASAP, save a bit more, and get a 2nd 3080?
How much do you think the performance would change between those two situations?

This is the thread with Nvidia GPUs benchmarks: https://www.daz3d.com/forums/discussion/341041/daz-studio-iray-rendering-hardware-benchmarking/p1#Section 3.1

  • An RTX 2070 Super does about 6.2 iterations/s.
  • An RTX 3080 should do around 10 (5.5 is the average for a 2080, and the 3080 should be roughly twice as powerful).
  • I'm actually currently using an RTX 2060 (because of covid... a long story), which does 3.8 iterations/s.
  • It's hard to guess the performance of a 3090, but it should be about 20% faster than a 3080, so around 12 iterations/s.

This means that:

  • 1x2070S + 1x2060 should give me 10 iterations/s, about 2.6× what the 2060 does alone.
    Cost? 0€.
    But, for value calculations, I'll consider the 2060 worth 250€ and the 2070S worth 400€. So, 650€.
  • 1x3080 should give me the same ~2.6× improvement.
    Cost? 50-200€. A 3080 will cost around 750€ (I'm from Italy, and I don't know yet whether I'll buy a reference model). I'd probably recover around 550-750€ selling both the 2060 and the 2070S.
    For value calculations: 750€.
  • 1x3080 + 1x2070S should give me 16.2 iterations/s, about 4.3× the 2060. But would the 2070S slow down the 3080? I guess it would still be faster than a single 3080.
    Cost? 450-550€. I'd probably recover around 200-300€ selling just the 2060.
    For value calculations: 1150€.
  • 2x3080 should produce 20 iterations/s, about 5.3× the 2060.
    Cost? 750-950€: 1500€ for the 2x3080, minus 550-750€ from the old GPUs.
    For value calculations: 1500€.
  • 1x3090 should give me about 3.2× the 2060.
    Cost? 750-950€, because it should cost around 100€ more than 2x3080.
    For value calculations: 1600€.

So, let's compare the cost of upgrading (there's a short sketch of the same math right after the table).

GPUs            Cost (€)   Iter/s   Value (€ per iter/s)
2070S + 2060         650     10.0     65
3080                 750     10.0     75
3080 + 2070S        1150     16.2     71
2x3080              1500     20.0     75
3090                1600     12.0    133
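
For anyone who wants to tweak the assumptions (prices, or the estimated iterations/s, which are my guesses rather than measured 3000-series numbers), here is a quick Python sketch of the same value calculation:

```python
# Rough value comparison for the configurations above.
# The iterations/s figures are estimates from the benchmark thread,
# not measured numbers for the 3000 series.
options = {
    # name: (value in €, estimated iterations/s)
    "2070S + 2060": (650, 10.0),
    "3080":         (750, 10.0),
    "3080 + 2070S": (1150, 16.2),
    "2x3080":       (1500, 20.0),
    "3090":         (1600, 12.0),
}

baseline = 3.8  # my current RTX 2060, iterations/s

for name, (value_eur, ips) in options.items():
    speedup = ips / baseline          # how many times faster than the 2060
    eur_per_ips = value_eur / ips     # € per iteration/s
    print(f"{name:<14} {ips:>5.1f} it/s   {speedup:.1f}x vs 2060   {eur_per_ips:>4.0f} €/(it/s)")
```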

I know there are other factors to consider, like the fact that the 2060 only has 6GB of VRAM, and sometimes that's not enough for me. That will be even more the case after I switch to a 4K monitor plus a 1080p one.
So, even if the 2070S+2060 configuration looks like the best value, maybe it would be better to get a 3080 and keep the 2070S, provided the 2070S doesn't slow down the 3080.
I don't need the 24GB that the 3090 has, and it costs too much IMHO.

I know we'll have to wait for the actual benchmarks to make precise calculations. Still, I'd like to know this beforehand to budget and to plan a bit.

I've read about a future release of 3070/80 Ti/Super... but I'd rather get the performance upgrade ASAP: for a 1500-iteration benchmark render, I'd go from 6:34 to 2:30 with a 3080 alone.
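
Those two times are just the benchmark's 1500 iterations divided by the estimated speeds; a tiny snippet to reproduce them:

```python
# Time for a 1500-iteration benchmark render at a given iterations/s.
def render_time(iterations: int, iters_per_sec: float) -> str:
    seconds = iterations / iters_per_sec
    return f"{int(seconds // 60)}:{int(seconds % 60):02d}"

print(render_time(1500, 3.8))   # RTX 2060 -> 6:34
print(render_time(1500, 10.0))  # RTX 3080 (estimated) -> 2:30
```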

So, what do you think?
What GPU/GPUs should I buy?
And when?

Thanks in advance!

Comments

  • Buy another 2070 Super and an NVLink bridge. You'll get more VRAM.

    The only 3000 GPUs with NVLink are the 3090s, and it looks like $3000 is out of your price range.

  • LenioTG Posts: 2,118
    edited September 2020

    Buy another 2070 super and an NVLink bridge. You'll get more VRAM.

    The only 3000 GPU's with NVLink are the 3090's and it looks like $3000 is out of your price range.

    Maybe if I can find a used one at a good price...it's definitely not worth it at 500€.

    Are you absolutely sure memory pooling works fine in Daz?

    There's also the fact that an NVLink bridge costs more than 100€, and that I would only get to 16GB of VRAM, which is not that far off from the 10GB of the 3080.

    I have used a 3GB VRAM card for my first 600 renders, a 6GB one for the following 1000, and I rarely feel the need for more. So I guess 10GB should be enough for me!

    Post edited by LenioTG on
  • PerttiA Posts: 10,014
    LenioTG said:

    Buy another 2070 super and an NVLink bridge. You'll get more VRAM.

    The only 3000 GPU's with NVLink are the 3090's and it looks like $3000 is out of your price range.

    Maybe if I can find a used one at a good price...it's definitely not worth it at 500€.

    Are you absolutely sure memory pooling works fine in Daz?

    There's also to consider that a NVLink bridge costs more than 100€, and that I would get to 16GB of VRAM, that's not very far off from the 10GB of the 3080.

    I have used a 3GB VRAM card for my first 600 renders, a 6GB one for the following 1000, and I rarely feel the need for more. So I guess 10GB should be enough for me!

    Consider this... The 2070 Super is on the market now, it has proven to work fine, and it has NVLink. There are no 30XX cards to be bought yet, and the people who already have them are not allowed to say a word about them.

    The one coming first will be the 3080 10GB: affordable, but only 10GB, which is not a lot more than the 2070 Super already has, and the 3080 doesn't have NVLink...

    Even alone the 2070 Super is quite good, and I would value VRAM over speed at this point.

    Of course two 2080 Tis + NVLink would be better, but when you already have one 2070 Super, getting another one + NVLink would get you 16GB (minus something) and somewhat better performance than a single 2070 Super.

    In my mind, the 10GB 3080 without NVLink was a disappointment: too little, and no possibility to expand it further...

  • LenioTG Posts: 2,118
    edited September 2020
    PerttiA said:

    Consider this... The 2070 Super is on the market now, it has proven to work fine, and it has NVLink. There are no 30XX cards to be bought yet, and the people who already have them are not allowed to say a word about them.

    The one coming first will be the 3080 10GB: affordable, but only 10GB, which is not a lot more than the 2070 Super already has, and the 3080 doesn't have NVLink...

    Even alone the 2070 Super is quite good, and I would value VRAM over speed at this point.

    Of course two 2080 Tis + NVLink would be better, but when you already have one 2070 Super, getting another one + NVLink would get you 16GB (minus something) and somewhat better performance than a single 2070 Super.

    In my mind, the 10GB 3080 without NVLink was a disappointment: too little, and no possibility to expand it further...

    Thanks for your answer.

    But does NVlink work seamlessly in Daz Studio?

    I think 10GB should be more than enough for me. I've been using a 6GB card for a lot of time, and I rarely go out of VRAM, without any optimization, even when using the Iray viewport.

    What concerns me the most now is rendering speed, since I need around 40 minutes for every single render, and my readers are happier the more pages I make! :)

    My intention would be to get another 3080 down the line, just not right now.

    I mean, if the 3090 really were such a big deal, I could consider it, but I'd prefer not to spend that much money.
    Also because I'll probably upgrade my Ryzen 5 3600 to a Ryzen 7 4700X/Ryzen 9 4900X, and I'll add 32 more GB of system RAM.

    Why are you disappointed by the 3080? It looks very powerful and cheap.
    Is it because of the VRAM? They're probably waiting to release a 3080 Super with 12GB after Big Navi.

    But, again, who knows when that will be out, while I could use the performance increase now and benefit from it for many months.
    I could still get a 3090 down the line, for the few scenes that require more VRAM, if my earnings increase; you never know.
    I don't think 3090 will get cheaper, considering that 2080 Tis are still being sold for 1200€ after 2 whole years.

    Post edited by LenioTG on
  • We don't have an enormous number of people with two matching 20x0s and an NVLink, but we have had several reports that the combination does work, and none that I recall saying it doesn't.

  • LenioTG Posts: 2,118
    edited September 2020

    We don't have an enormous number of people with two matching 20x0s and an NVLink, but we have had several reports that the combination does work, and none that I recall saying it doesn't.

    Thank you for the info Richard!

     

    The problem with the 2x2070S solution is that I'd get around 12 iterations/s, so just 20% more than a single 3080, and I couldn't upgrade that any further without spending a huge amount of money.

    So, since VRAM is important, but it's not my main concern, I still prefer going for the 3080!

    It would be nice if it supported NVlink, but the 3090 is pretty expensive.

    Post edited by LenioTG on
  • PerttiA Posts: 10,014
    LenioTG said:

    Thanks for your answer.

    But does NVlink work seamlessly in Daz Studio?

    I think 10GB should be more than enough for me. I've been using a 6GB card for a lot of time, and I rarely go out of VRAM, without any optimization, even when using the Iray viewport.

    What concerns me the most now is rendering speed, since I need around 40 minutes for every single render, and my readers are happier the more pages I make! :)

    It would be my intention to take another 3080 down the line, just not right now.

    I mean, if the 3090 really was such a big deal, I could consider it, but I'd prefer not to spend that much amount of money.
    Also because I'll probably upgrade my Ryzen 5 3600 to a Ryzen 7 4700X/Ryzen 9 4900X, and I'll add 32 more GB of system RAM.

    Why are you disappointed by the 3080? It looks very powerful and cheap.
    Is it because of the VRAM? They're probably waiting to release a 3080 Super with 12GB after Big Navi.

    But, again, who knows when it'll be out! While I could use the performance increase now, and benefit from it for many months.
    I could still get a 3090 down the line, for the few scenes that require more VRAM, if my earnings increased, you may never know.
    I don't think 3090 will get cheaper, considering that 2080 Tis are still being sold for 1200€ after 2 whole years.

    The missing NVLink...

  • I look at it this way: VRAM is insurmountable. If a scene doesn't fit, the only option is optimization, which may be impossible or may affect quality in ways you find unacceptable. A slower render, given the render speed of these cards in the first place, is just not that big a deal.

    You've got a 2070 Super; if you're not satisfied with the speed at which it renders, I'm shocked. I've got a 2070 and it renders roughly as fast as my 1080 Ti. The two together are amazing. For $600, less on the used market, I just don't see why you wouldn't look at that as the way to go.

  • LenioTG Posts: 2,118
    edited September 2020

    I look at it this way: VRAM is insurmountable. If a scene doesn't fit, the only option is optimization, which may be impossible or may affect quality in ways you find unacceptable. A slower render, given the render speed of these cards in the first place, is just not that big a deal.

    You've got a 2070 Super; if you're not satisfied with the speed at which it renders, I'm shocked. I've got a 2070 and it renders roughly as fast as my 1080 Ti. The two together are amazing. For $600, less on the used market, I just don't see why you wouldn't look at that as the way to go.

    But I don't need more VRAM: I'm currently using 6GB and I almost never feel the need for more, so, again, 10GB should be more than enough.
    Also, if things changed a lot in the future and I needed more than 10GB of VRAM, I could always buy a 3090, and it wouldn't slow down the 3080, since it's of the same generation. And, really, I don't think I'll ever need more than 24GB in the upcoming years.

    Plus, I don't live in the US, and in Italy the used market is much smaller. An RTX 2070S costs around 400€ right now. Since it's used, there's no warranty (in Europe you get at least 2 years when buying new), and I could get scammed. To that, add 130€ for an NVLink bridge.
    Given that I don't care about the extra 6GB of VRAM... why would I settle for 12 iter/s, if I could get 16 (with better future-proofing) for about 170€ more?

    At 6 iterations/s a 2070S is not slow, but it is compared to a 10 iter/s 3080.
    I make a lot of renders every single day, so rendering speed is what's keeping me back.

     

    My original question is more about: "Is it okay to mix different generations? Would I get, with a 3080 and a 2070S together, a performance that's equal to summing those two up? Or would the 2070S slow down the 3080?"

    Post edited by LenioTG on
  • PerttiA Posts: 10,014
    LenioTG said:

    But I don't need more VRAM: I'm currently using 6GB and I almost never feel the need for more, so, again, 10GB should be more than enough.
    Also, if things changed a lot in the future and I needed more than 10GB of VRAM, I could always buy a 3090, and it wouldn't slow down the 3080, since it's of the same generation. And, really, I don't think I'll ever need more than 24GB in the upcoming years.

    Plus, I don't live in the US, and in Italy the used market is much smaller. An RTX 2070S costs around 400€ right now. Since it's used, there's no warranty (in Europe you get at least 2 years when buying new), and I could get scammed. To that, add 130€ for an NVLink bridge.
    Given that I don't care about the extra 6GB of VRAM... why would I settle for 12 iter/s, if I could get 16 (with better future-proofing) for about 170€ more?

    At 6 iterations/s a 2070S is not slow, but it is compared to a 10 iter/s 3080.
    I make a lot of renders every single day, so rendering speed is what's keeping me back.

     

    My original question is more about: "Is it okay to mix different generations? Would I get, with a 3080 and a 2070S together, a performance that's equal to summing those two up? Or would the 2070S slow down the 3080?"

    6GB might seem good enough now, but texture sizes have been climbing at an alarming rate, and with the new cards now being released, it's only going to cause more texture bloat.

    Nobody knows yet what the actual performance will be, since there are no 30XX cards on the open market yet, and the people who already have them are not allowed to say a word about them.

    "why would I settle for 12 iter/s, if I could get 16 (with better future-proofing)"

    Better future-proofing?... In what sense? The only upgrade path for the 3080 is buying another card, either to replace it or to sit next to it, but with no NVLink, having two cards still doesn't increase the available VRAM.

  • nicstt Posts: 11,715
    edited September 2020

    If you can justify the cost, a 3090.

    I don't see why it shouldn't work with earlier versions.

    Until they are released, we are all guessing.

    I'd use one card (2000 series) to drive monitors; I'd use the other card to do rendering.

    If you are able due to workflow, you can add the 2000 series as appropriate to assist.

    Edit:

    Personally, I would NOT judge what the performance might be until we see benchmarks; gaming benchmarks could also be misleading, so look for specific rendering benchmarks. Blender Cycles is often used during reviews as one of the benchmarking metrics.

    Post edited by nicstt on
  • LenioTG Posts: 2,118
    nicstt said:

    If you can justify the cost, a 3090.

    I don't see why it shouldn't work with earlier versions.

    Until they are released, we are all guessing.

    I'd use one card (2000 series) to drive monitors; I'd use the other card to do rendering.

    If you are able due to workflow, you can add the 2000 series as appropriate to assist.

    Edit:

    Personally, I would NOT judge what the performance might be until we see benchmarks; gaming benchmarks could also be misleading, so look for specific rendering benchmarks. Blender Cycles is often used during reviews as one of the benchmarking metrics.

    If the 3090 has the performance I imagine it has, and its only selling point is the VRAM, there's no way I would spend 1500€ on it!

    I know I have to wait for real benchmarks, as I said in the first post, but money doesn't grow on trees, so I prefer to plan a bit in advance! :)

    PerttiA said:

    6GB might seem good enough now, but texture sizes have been climbing at an alarming rate, and with the new cards now being released, it's only going to cause more texture bloat.

    Nobody knows yet what the actual performance will be, since there are no 30XX cards on the open market yet, and the people who already have them are not allowed to say a word about them.

    "why would I settle for 12 iter/s, if I could get 16 (with better future-proofing)"

    Better future-proofing?... In what sense? The only upgrade path for the 3080 is buying another card, either to replace it or to sit next to it, but with no NVLink, having two cards still doesn't increase the available VRAM.

    Future-proofing because I could always add a 2nd RTX 3000 GPU, maybe even a 3090, while if I get another 2070S, chances are I'd be stuck with that configuration for a long time.

    Thank you for your opinion PerttiA, but I'm not interested in that solution! :)

    Can you tell me if using an RTX 3000 together with an RTX 2000 would give less performance than the two added up?

  • TheKD Posts: 2,677

    I doubt mixing would be an issue. I would think that if there were a big problem, it would have popped up with a GTX and RTX mix, not an RTX and newer RTX mix. But that is an assumption; it seems logical to me though. I have been mixing generations for a long time now: I pop out the older card and move the card I am replacing to the secondary slot when upgrading. Started with a 960 and a 660, then a 960 and a 1070, then a 2080 Super and a 1070. Never any issues, other than the secondary dropping out due to VRAM.

  • LenioTG Posts: 2,118
    TheKD said:

    I doubt mixing would be an issue. I would think that if there were a big problem, it would have popped up with a GTX and RTX mix, not an RTX and newer RTX mix. But that is an assumption; it seems logical to me though. I have been mixing generations for a long time now: I pop out the older card and move the card I am replacing to the secondary slot when upgrading. Started with a 960 and a 660, then a 960 and a 1070, then a 2080 Super and a 1070. Never any issues, other than the secondary dropping out due to VRAM.

    Thanks! There are people on the forum saying that older CUDA cores will slow down newer ones.

    For example, if you use only your 2080 Super, then only your 1070, and sum the two up, do you get the same iterations/s you would get using them both at the same time?
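
    To be concrete about the comparison I mean, here it is with made-up numbers (the per-card figures would come from running the benchmark scene on each card alone, then on both together):

```python
# How much of the summed single-card speed do the two cards keep
# when they render the same scene together? (Numbers below are hypothetical.)
def scaling_efficiency(combined_ips: float, single_ips: list[float]) -> float:
    return combined_ips / sum(single_ips)

# e.g. 2080 Super alone: 7.0 it/s, 1070 alone: 2.5 it/s, both together: 9.0 it/s
print(f"{scaling_efficiency(9.0, [7.0, 2.5]):.0%}")  # -> 95%
```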

  • fastbike1 Posts: 4,077

    A 3090 will take up 3 slots of space. It is a huge card, quite a bit bigger than a 3080.

  • LenioTG Posts: 2,118
    fastbike1 said:

    A 3090 will take up 3 slots of space. It is a huge card, quite a bit bigger than a 3080.

    I hadn't considered that, thank you!

    Maybe the partner designs will be smaller?

    This is one more argument for getting 2x3080: it's hard to fit a 2nd 3090, and two 3080s should be much faster than a single 3090.

    Yep, VRAM is the problem.

    My concern is that Nvidia is going to release a 3080 Super with 12GB of VRAM or something like that after Big Navi release. But it'll probably cost more as well.

  • JTL2 Posts: 46
    edited September 2020
    LenioTG said:

    I think 10GB should be more than enough for me. I've been using a 6GB card for a lot of time, and I rarely go out of VRAM, without any optimization, even when using the Iray viewport.

    I'm running a GTX 1060 and whenever I try to dForce I get this error and Daz crashes. I don't know if it's because my card isn't as fast as yours, but this seems to suggest I'm running out of memory. This is one scene, two characters fully clothed with fibermesh hair in an HDRI environment (no props), on Windows 7 64-bit SP1, Intel Core i7-3770K @ 3.5GHz, Asus Z77 motherboard, Nvidia GeForce GTX 1060 6GB, 32GB RAM:

    2020-08-19 23:40:44.456 WARNING: ..\..\..\src\dzopenclkernelfactory.cpp(32): Open CL notify: CL_MEM_OBJECT_ALLOCATION_FAILURE error executing CL_COMMAND_MAP_BUFFER on GeForce GTX 1060 6GB (Device 0).
    2020-08-19 23:40:44.456 WARNING: ..\..\..\src\dzdynamicsengine.cpp(2489): ERROR: clEnqueueMapBuffer (-4)
    2020-08-19 23:40:44.558 Total Simulation Time: 3.35 seconds
    LenioTG said:

    Maybe the partner designs will be smaller?

    If you have a water cooler, EVGA's Hydro Copper is two slot.
    nicstt said:

    I'd use one card (2000 series) to drive monitors; I'd use the other card to do rendering.

    What is the benefit of this? I hadn't planned on keeping my 1060, but if there's a serious benefit, I might consider it. Also, how does that affect gaming, if you know?

    Post edited by JTL2 on
  • LenioTG Posts: 2,118
    edited September 2020

    @JTL2 using a GPU for your monitor occupies VRAM, so it makes sense to leave the main one free.

    I have air cooling!

    Yes, 6GB is not a ton of VRAM, but it's manageable.

    The thing is, I just have three alternatives:

    • Buying a 3090. It costs too much for my budget (my Patreon earnings are public, convert to euro, remove commissions and taxes, and you'll see xD)
    • Having 2x2070S. That way I'd lose potential performance (with both my slots occupied, I'd get 12 iter/s instead of 20).
    • Going for the 3080, hoping the 10GB of VRAM will last as long as possible.
    Post edited by LenioTG on
  • I would wait a while. NVIDIA may release new variants of the 3xxx series with more RAM. I think they deliberately reduced the RAM on these cards at the initial launch to keep the price competitive with AMD. Once they've gone through that comparison, and if AMD are reasonably competitive in performance and price, they'll have to bump the RAM (and the price) with new "Super" or "Ti" variants. You may get, for example, a 3070 with 16GB. We should know by Q1 2021, I think.

  • LenioTG Posts: 2,118
    Robinson said:

    I would wait a while. NVIDIA may release new variants of the 3xxx series with more RAM. I think they deliberately reduced the RAM on these cards at the initial launch to keep the price competitive with AMD. Once they've gone through that comparison, and if AMD are reasonably competitive in performance and price, they'll have to bump the RAM (and the price) with new "Super" or "Ti" variants. You may get, for example, a 3070 with 16GB. We should know by Q1 2021, I think.

    I'd like that, but the problem is that I need it for work, and I'm stuck with a 2060 at the moment, which is a lot slower! :(

    I've read about that rumor, but it feels unlikely to me that a 3070S model would have more VRAM than the 3080.
    I would believe it if it were a 3080S, but at that point it'd cost around 1100€, which is a bit too much compared to the 700€ of a 3080!

  • nicstt Posts: 11,715
    LenioTG said:

    If the 3090 has the performance I imagine it has, and its only selling point is the VRAM, there's no way I would spend 1500€ on it!

    I know I have to wait for real benchmarks, as I said in the first post, but money doesn't grow on trees, so I prefer to plan a bit in advance! :)

    Future-proofing because I could always add a 2nd RTX 3000 GPU, maybe even a 3090, while if I get another 2070S, chances are I'd be stuck with that configuration for a long time.

    Thank you for your opinion PerttiA, but I'm not interested in that solution! :)

    Can you tell me if using an RTX 3000 together with an RTX 2000 would give less performance than the two added up?

    I can understand that, but it is the only card (currently) in the 3000 lineup that also allows NVLink.

  • nicstt Posts: 11,715
    edited September 2020
    JTL2 said:
     
    nicstt said:

    I'd use one card (2000 series) to drive monitors; I'd use the other card to do rendering.

    What is the benefit of this? I hadn't planned on keeping my 1060, but if there's a serious benefit, I might consider it. Also, how does that affect gaming, if you know?

    You can choose not to connect monitors to a card, so in effect it is used for rendering only. Windows reserves VRAM on consumer cards, but it shouldn't on the 3090, as Nvidia stated it was the Titan replacement; what the actuality is remains to be seen.

    You wouldn't be using it for gaming, but plugging in the cabling would allow you to do so if you wished.

    I use my 980 Ti for rendering and my 970 for driving 3 monitors - and it is really time to upgrade both; in the short term I'll be switching the 980 Ti to the monitor card.

    I'll make a decision on what card to replace that with after AMD announce their lineup.

    Post edited by nicstt on
  • Even on a Titan-class card, not plugging in a cable is not enough to avoid having some VRAM reserved. You can actually plug a monitor into a video out while the computer is on; it's not recommended, but it can be done. You have to actually put the card into TCC mode, which disables the video outs.
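
    If you want to check how much VRAM the desktop and drivers are already holding on each card, something like this reads it from nvidia-smi (assuming the tool is on the PATH, which the driver installer normally takes care of):

```python
# Print per-GPU VRAM usage as reported by the NVIDIA driver via nvidia-smi.
import subprocess

result = subprocess.run(
    ["nvidia-smi", "--query-gpu=name,memory.used,memory.total", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
for line in result.stdout.strip().splitlines():
    name, used, total = (field.strip() for field in line.split(","))
    print(f"{name}: {used} used of {total}")
```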

  • You can mix GPU generations, but I believe that the card with less memory will be the limiting factor unless you disable the smaller-memory card in DS and only check the higher-memory card for rendering.

    Also, something that I do not believe anyone has mentioned in this thread yet: how long will it take DS to be able to use the RTX 3000 cards to render? It was a long time between the GTX 10 series and the RTX 20 series, but it may be shorter this time around, or maybe just a driver update. I would also wait for actual rendering benchmarks before making a final decision.

    If there is a higher-memory release of an RTX 3080, it will probably be 20GB, not 12. All of the rumors were for 4 cards: 10GB and 20GB versions of the RTX 3080, and 8GB and 16GB versions of the RTX 3070. I was really hoping for a 16GB version of the 3070, as that is all I need right now as a hobbyist.

    If I were doing this for a living, I would be doing everything I could to get a 3090 and really future-proof my system. Yeah, I know $1500 is nuts for a video card, but Nvidia decided to release it into the wild this time due to the high demand for the RTX Titan card. $1500 is still out of my price range too, but the Titan is still $2500 according to Amazon and really out of my price range.

  • kenshaw011267 Posts: 3,805
    edited September 2020

    No card will in any way limit the rendering of any other card unless the cards are NVLinked.

    Post edited by kenshaw011267 on
  • JTL2 Posts: 46
    nicstt said:

    I use my 980 Ti for rendering and my 970 for driving 3 monitors - and it is really time to upgrade both; in the short term I'll be switching the 980 Ti to the monitor card.

    I'm only running a single monitor. Do you think I would notice a difference if I ran my 1080p monitor off a 1060 and rendered off a 3090, vs the 3090 only? And what about the motherboard's onboard GPU? Wouldn't that be similar, except using my system RAM (in my new system I'll have 64 or 128 gigs, depending on my budget at the end of this)?

  • No card will in any way limit the rendering of any other card unless the cards are NVLinked.

    I did not say that one card would limit another card per se. The issue is that DAZ Studio will only load data up to the memory limit of the card with the lowest memory amount, because there is no way to pool memory unless you have the same model cards that support memory pooling via NVLink. Both cards will still use all of their CUDA cores, but if you have an 8GB GTX 1070 and a 4GB GTX 960, DS will use all of the CUDA cores from both cards while the data loaded into memory is limited to the 4GB of the smaller card. If the image being rendered has data that exceeds 4GB, it drops to CPU. I know because I tried running both cards in my last system when I got my GTX 1070. If you don't believe me, get two of your older cards with different memory amounts and then try to render an image that you know exceeds the memory available on the smaller-memory card and see what happens. Be sure to check both boxes for both cards in Hardware Devices when you do.

  • No card will in any way limit the rendering of any other card unless the cards are NVLinked.

    I did not say that one card would limit another card per se. The issue is that DAZ Studio will only load data up to the memory limit of the card with the lowest memory amount, because there is no way to pool memory unless you have the same model cards that support memory pooling via NVLink. Both cards will still use all of their CUDA cores, but if you have an 8GB GTX 1070 and a 4GB GTX 960, DS will use all of the CUDA cores from both cards while the data loaded into memory is limited to the 4GB of the smaller card. If the image being rendered has data that exceeds 4GB, it drops to CPU. I know because I tried running both cards in my last system when I got my GTX 1070. If you don't believe me, get two of your older cards with different memory amounts and then try to render an image that you know exceeds the memory available on the smaller-memory card and see what happens. Be sure to check both boxes for both cards in Hardware Devices when you do.

    No, it will not. Each card is independent unless they are NVLinked, in which case they share VRAM. I have a 1080 Ti with 11GB and a 2070 with 8GB. I exceed the 2070's VRAM and the 1080 Ti still works fine.
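
    Putting the last few posts together, the behaviour most reports in this thread describe is that each card decides independently whether the scene fits in its own VRAM: cards that can't hold it drop out, the ones that can keep rendering, and only if none fit does the render fall back to CPU. A rough sketch of that model (my reading of the thread, not documented Iray behaviour; the post above the last one describes it differently, so treat it as unconfirmed):

```python
# Rough model: cards whose VRAM can hold the scene render together;
# a card that can't hold the scene simply drops out of the render.
def effective_rate(scene_gb: float, cards: list[tuple[str, float, float]]) -> float:
    """cards = [(name, vram_gb, iterations_per_sec), ...]; 0.0 means CPU fallback."""
    return sum(ips for _, vram_gb, ips in cards if scene_gb <= vram_gb)

cards = [("RTX 3080", 10.0, 10.0), ("RTX 2070 Super", 8.0, 6.2)]
print(effective_rate(7.0, cards))   # fits both cards -> 16.2
print(effective_rate(9.0, cards))   # only the 3080   -> 10.0
print(effective_rate(11.0, cards))  # neither card    -> 0.0 (CPU)
```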
