Threadripper for IRAY or RTX?

So I'm looking to build a new system in a few months to do some really heavy G8 Iray animations for a game I'm making, and I'm trying to get the most performance I can for as little money as possible. Right now I have a Ryzen 3 1200 with a GTX 960, and I can't get the GPU to render anything unless I strip the scene down with a scene optimizer; if I render just the characters in the scene, though, I have found that my CPU absolutely dominates my GPU on render times. So that leaves me wondering if I should spend $1,000 on an RTX 2070 SLI config plus another $700 or more on something like a Ryzen 9 3900X, or just drop in a single card like a 2060 and jump straight into something like a Threadripper 3970X with 32-64 GB of RAM. Anyone have any experience with which one would win on render times? Both setups can be built for around the same cost if I'm careful about sales and such. I just haven't been able to find anything on deliberately rendering Iray on the CPU, let alone a Threadripper vs. RTX comparison.


Comments

  • kenshaw011267 Posts: 3,805

    Nothing at $1,000 would be good for Iray animation; that is just a fact.

    You flat out cannot get 2070s into NVLink (the RTX cards' equivalent of SLI) because they don't have the connector.

    You have to strip scenes to get them to fit on the 960. Everything else runs on the CPU only.

    I have no idea where you are, but you could not build a workable system in the US, even on sale, with the parts you're talking about for $1k.

    If your current rig has a big enough PSU, which it likely doesn't, and 2 full-length PCIe slots, you could spend $1,100 on 2 2070 Supers and an NVLink bridge and get enough VRAM for quite large scenes. Since you're still learning how to optimize scenes, that would save you a lot of time, and it would give you a lot of speed. If not, just get something like a 2060 Super and live with it till you have enough cash for a serious production system. Animation is never going to be fast in Iray no matter what hardware you throw at the problem.
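    To put rough numbers on that last point, here is a minimal back-of-the-envelope sketch in Python; the frame rate, clip length, and minutes-per-frame figures are made-up assumptions, not benchmarks from anyone in this thread:

        # Back-of-the-envelope animation render budget. All inputs are assumptions.
        def total_render_hours(fps: int, seconds: float, minutes_per_frame: float) -> float:
            frames = fps * seconds
            return frames * minutes_per_frame / 60.0

        # A 10-second clip at 24 fps is 240 frames.
        for mpf in (2, 5, 15):  # hypothetical minutes per frame: fast GPU, slow GPU, CPU-only
            hours = total_render_hours(fps=24, seconds=10, minutes_per_frame=mpf)
            print(f"{mpf:>2} min/frame -> {hours:5.1f} hours for a 10 s clip")

    Even at an optimistic 2 minutes per frame, a 10-second clip is a full working day of rendering, which is why per-frame speed matters far more for animation than for stills.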

  • I can go up to around $3,000 for the whole system, including a dual-monitor setup; I'm just not sure how much speed I can get while trying to avoid spending the whole 3 grand.

  • kenshaw011267 Posts: 3,805


    Give me an honest budget, minus what you want for monitors, keyboards and mice, and I'll give you some builds when you're ready to build. Things are always changing so wait till you're actually ready to start buying parts.

  • Yeah, I'm aiming to build in March; the budget is going to top out at $2,500 for the tower, since I also need to buy a dual-monitor stand and 2 matching monitors for it.

  • if I render just the characters in the scene, though, I have found that my CPU absolutely dominates my GPU on render times.

    If the scene fits in VRAM, there is no CPU anywhere at any price that wouldn't be several times slower than a decent card like a 2070. As a reference point, a single 2080 Ti is about 6 times faster than a 1950X.

  • Yeah, I'm seriously considering 2 RTX 2070 or 2080 Supers over NVLink.

  • kenshaw011267 Posts: 3,805


    There is absolutely no point in talking now about a build for March. Ampere could be out by then, as well as Ryzen 4000. That could completely change the entire landscape for PC building. Don't even start thinking about hardware choices until January; until then, stick to broad strokes.

  • I'm only putting things together now in case the build I have my eye on drops enough in price by March; I'm also pricing 2 EPYC builds and a Threadripper setup.

  • TheMysteryIsThePoint Posts: 2,923
    edited July 2020

    Unless you need the PCIe lanes, and you don't :), forget about those high-end CPUs and put the money into your GPUs. They don't help at all in single-threaded apps like Daz.

    I upgraded from an AMD Phenom II X6 to a TR 1950X and, for Daz, didn't even notice any practical difference at all. I basically donated $1000 to AMD.

  • kenshaw011267 Posts: 3,805

    Even if you want to go CPU, there is still no point in doing more than basic research. You're talking 8 months out. There will be at least one major CPU or GPU launch in that time, if not both. That will upend everything.

    AMD says Ryzen 4000 will be out by then, and with the way stocks of high-end RTX cards are disappearing, Ampere may be here by Thanksgiving in some form.

    By March, absolutely any advice anyone gives here now would just be wrong, and if you took it you'd be mad later on.

  • takezo_3001 Posts: 1,957
    edited July 2020

    I have a 3900X, which is great for multitasking, as you have enough CPU threads to do other things like surfing/video production/Blender/gaming etc. while waiting for your render; also, I am saving up for the 3080 Ti, both for gaming and for its rumored 12 GB of VRAM... (I would much rather have the 3080 Titan for that sweet 24 GB!)

    Also, 64 GB of RAM is perfect for multitasking those RAM-hungry video encoding programs while waiting on renders as well; the only thing holding me back is having only 8 GB of VRAM!

  • JP Posts: 60

    kenshaw011267 said:

    ... you could spend $1,100 on 2 2070 Supers and an NVLink bridge and get enough VRAM for quite large scenes ...

    8 GB of GPU RAM is not enough. I have a 1080 Ti and the RAM is useless for anything but a basic scene. I would get the faster CPU with 64 GB of RAM. The GPU performance is overrated, in my opinion.

    I have used V-Ray Next with 3ds Max, and it has an optional GPU rendering engine bundled with the standard CPU-based engine.

    V-Ray supports hybrid rendering, where both the CPU and GPU function as CUDA devices. I can confirm this because both devices run at max usage and my power draw reflects it! I did a test render of a car to see which was faster. The performance was about the same, with the GPU edging out the CPU by a few passes per minute, so they were just about equal. And when I enabled both devices, the number of rendering passes combined to match the sum of the passes from each separate run. However, the CPU has access to 128 GB of RAM and the GPU only 11 GB!

    Obviously Iray can also execute on the CPU; otherwise we would not get any rendering results when the GPU runs out of RAM. What I wonder is: can they both work together to speed up the rendering of scenes that fit within the video RAM, like V-Ray does?

    https://www.chaosgroup.com/blog/understanding-v-ray-hybrid-rendering

    CUDA Rendering on CPUs

    GPU code can be difficult to debug. When the code crashes, as it inevitably does, it may only return a kernel dump, with no information about which part of the code actually caused the crash. To uncover the cause, a developer will comment out each section of the code until the culprit is found. This process can be tedious and time consuming.

    To make GPU debugging easier, our lead GPU developer Blago Taskov had the idea to port the CUDA code over to the CPU, where he could use better debugging tools. Once he had it working, Blago could identify exactly which line of code caused the crash. But this also led to a bigger discovery. Now that V-Ray CUDA was rendering on both CPUs and GPUs, and producing the exact same results, V-Ray Hybrid rendering was officially born.

    V-Ray Hybrid Benchmarks

    To find out the speed boost we get by adding CPUs to the GPU mix, we benchmarked two V-Ray CUDA scenes from our friends at Dabarti Studio.

    For these scenes, the addition of CPUs helped reduce render times by 13% and 25%. It’s a welcome speed boost, rather than leaving these powerful CPUs idle.
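    Whatever the engine, the expected gain from adding a second device follows simple arithmetic: if the two devices contribute independent samples, the combined rate is roughly the sum of their rates. A small sketch, using placeholder passes-per-minute figures rather than anything measured in this thread:

        # Rough model of hybrid rendering: combined rate = sum of device rates.
        # The passes-per-minute figures below are placeholders, not measurements.
        def hybrid_speedup(gpu_passes_per_min: float, cpu_passes_per_min: float) -> float:
            return (gpu_passes_per_min + cpu_passes_per_min) / gpu_passes_per_min

        # A test where the CPU roughly matches the GPU -> combined ~2x the GPU alone.
        print(hybrid_speedup(gpu_passes_per_min=30.0, cpu_passes_per_min=28.0))   # ~1.93
        # Scenes where the GPU is much faster -> only a 13-25% style boost.
        print(hybrid_speedup(gpu_passes_per_min=100.0, cpu_passes_per_min=20.0))  # 1.2

    That is consistent with both observations above: when the CPU and GPU trade passes evenly you see roughly double, and when the GPU dominates, the CPU only adds a modest percentage.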

  • kenshaw011267 Posts: 3,805

    I use a 2070 and it renders many not-basic scenes every day. I simply do not know what you guys who cannot make 8 GB cards work are doing. Either you think 10-character scenes are basic or you're doing something else that I don't. I've got scenes with 5 G8s plus lots of props plus an 8K HDRI plus some base environment. Renders on 8 GB, no problem.

    As to the rest, HUH? What did that have to do with anything at all?

  • JPDAZ said:

    8 GB of GPU RAM is not enough. I have a 1080 Ti and the RAM is useless for anything but a basic scene. I would get the faster CPU with 64 GB of RAM.

    I agree with @kenshaw011267; while 8 GB is not enough for everything, there is a lot you can do about that. First and foremost, don't use 4K textures for anything that isn't very close to the camera, and even then maybe only for the character's face, or even just the eyes. There's a product called Scene Optimizer that is supposed to help with that (see the sketch after this comment for the basic idea).

    JPDAZ said:

    The GPU performance is overrated in my opinion.

    Go to the Daz Studio Iray - Rendering Hardware Benchmarking thread and look at the actual data from those who have posted both CPU-only and GPU results. I don't see how anyone could look at the multiple-times speedup and say it is "overrated". There is simply not a CPU anywhere, for any amount of money, that can compare to a middling GPU.
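    For anyone curious what the texture-shrinking step boils down to, here is a minimal sketch of the idea using Pillow. This is not Scene Optimizer's actual code, and the folder path and 2048 px cap are arbitrary placeholder values:

        # Minimal illustration of "shrink textures that aren't close to the camera".
        # Not Scene Optimizer itself; requires Pillow (pip install Pillow).
        from pathlib import Path
        from PIL import Image

        def downscale_textures(folder: str, max_side: int = 2048) -> None:
            for path in Path(folder).glob("*.jpg"):
                with Image.open(path) as img:
                    w, h = img.size
                    if max(w, h) <= max_side:
                        continue  # already small enough, leave it alone
                    scale = max_side / max(w, h)
                    resized = img.resize((int(w * scale), int(h * scale)), Image.LANCZOS)
                    # Write a copy so the originals stay untouched.
                    resized.save(path.with_name(path.stem + "_small" + path.suffix))

        # downscale_textures("./textures")  # hypothetical texture folder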

  • JP Posts: 60

    kenshaw011267 said:

    ... I've got scenes with 5 G8s plus lots of props plus an 8K HDRI plus some base environment. Renders on 8 GB, no problem.

    I usually go with 16K HDRIs. Maybe I should do a test and jump down to 8K and see if I can really notice the quality drop.

    One thing I noticed is that the progress bar for GPU rendering moves slowly, but the amount of remaining noise is not that bad. So a render that shows only 20% complete on the GPU can be cancelled and the little remaining noise removed via denoising.

    All I know is that the moment I load up a few more characters and render at 10,000 pixels horizontal, the GPU drops out and the CPU ends up rendering the image.
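    For what it's worth, the HDRI alone can account for a lot of that. A rough footprint estimate, assuming the renderer keeps the map as uncompressed 32-bit float RGB (an assumption; actual storage varies by engine):

        # Rough uncompressed footprint of an equirectangular HDRI, assuming
        # 32-bit float RGB storage (an assumption; real engines may differ).
        def hdri_gib(width: int, channels: int = 3, bytes_per_channel: int = 4) -> float:
            height = width // 2  # typical 2:1 equirectangular layout
            return width * height * channels * bytes_per_channel / 2**30

        for w in (4096, 8192, 16384):
            print(f"{w:>5}-px-wide HDRI ~ {hdri_gib(w):.2f} GiB")
        # ~0.09 GiB at 4K, ~0.38 GiB at 8K, ~1.5 GiB at 16K

    On those numbers, dropping from a 16K to an 8K HDRI frees on the order of a gigabyte on an 8-11 GB card before any characters even load.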

  • JP Posts: 60

    I bought SO and tried it, and it just took too long rescaling the texture maps. I watched it run for a while and just cancelled it. Maybe I'll look at it again. I just figured that by the time Scene Optimizer is done, I'd already have a decent amount of rendering completed.

  • kenshaw011267 Posts: 3,805

    I've got entire racks of servers in the datacenter I run. Literally thousands of CPU cores, with plenty of room, power, and cooling for more. I have no plans to add more because the demand isn't there. Where the demand is, and I'm filling it as fast as the parts come in, is GPGPUs. If Nvidia would double or triple their production of Quadros, I could fill all that rack space. Sure, you can run AI and physics sims and all the other stuff that gets run on Quadros on CPUs, but at 1/4 to 1/10 the speed or worse. I just got an order in yesterday: some company, can't name them, sorry, wants 72 Quadro 8000s running fluid dynamics ASAP. It's not like I can just call anyone and get those delivered next week. And I do not even want to think about how many dual-socket 7402s it would take to do that work at the same speed. I am pretty sure it would not fit in our available floor space.

    All those companies building self-driving vehicles? None that I've seen are using CPUs. They're all using GPGPUs. Some guys are trying to roll their own, which is cray cray, but most are just slapping a Quadro, or 4, into it and calling it a day.

    Some outfit just sent me some literature on a nuts rack they hand-build with 8, IIRC, water-cooled Quadros and some desktop CPU in a 2U chassis as some sort of ultimate baller rendering machine. Apparently they'll build whole racks of them into a container, generators etc. included, and deliver the whole thing to any shooting location you want.

    https://grando.ai/en/solutions/

    So yeah, anyone who is betting against GPGPU performance is in the minority. They might be right in the long run, but that hasn't been the right way to bet for a long time, and more and more it doesn't look like there is any reasonable way to reverse it.

  • kenshaw011267 Posts: 3,805
    JPDAZ said:

    ... All I know is that the moment I load up a few more characters and render at 10,000 pixels horizontal, the GPU drops out and the CPU ends up rendering the image.

    Why are you rendering a 10k-wide image? Are you printing at 600 dpi? If so, it is time to get professional hardware.

    If not, there is no monitor in existence that can display an image at that scale.
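    The arithmetic behind that question is simple: print width in inches is pixels divided by dpi. A quick sketch, using common print resolutions:

        # Print width = pixels / dpi. 600 and 300 dpi are common print targets.
        width_px = 10_000
        for dpi in (600, 300):
            print(f"{width_px} px at {dpi} dpi -> {width_px / dpi:.1f} inches wide")
        # ~16.7" at 600 dpi, ~33.3" at 300 dpi -- poster sizes, and far wider
        # than a 4K monitor (3840 px) can show at 1:1.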

  • kenshaw011267 Posts: 3,805
    JPDAZ said:

    I bought SO and tried it, and it just took too long rescaling the texture maps. I watched it run for a while and just cancelled it. ...

    When I get a scene that drops to CPU, I run it through SO. I start it, get up, and do something: go to the toilet, make a sandwich, whatever. When I come back it will be done, or I'll see from the progress bar roughly how long it needs. I think I had one take maybe 15 minutes. Compared to CPU renders, that's nothing.

  • JP Posts: 60
    kenshaw011267 said:

    Why are you rendering a 10k-wide image? Are you printing at 600 dpi? If so, it is time to get professional hardware. If not, there is no monitor in existence that can display an image at that scale.

    It is a hobby. I have a 4K monitor and like to zoom in and see the detail in big scenes. My computer is fully capable of handling it: I have two workstations with 64-128 GB of RAM and powerful Intel CPUs. I set my rendering time limit to 2 hours, which is usually sufficient to render many characters, then run the Intel denoiser on the rendered file. I should probably start saving my renders in EXR format via canvases. I render to EXR with 3ds Max / V-Ray and it is very useful for tweaking various levels.

    8K monitors will be around sooner rather than later, though. I actually use a 40" 4K TV as a monitor and it is good enough.

    8K TVs are already available! But like I said, my renders are done at that size so I can zoom in and out as I prefer. It also leaves cropping options for later.

    https://www.whathifi.com/news/samsung-8k-tv-deal-samsungs-2020-8k-range-is-now-10-off-limited-time-only

  • JP Posts: 60
    edited July 2020


    I have more than 40 GPUs with 8 GB each. They were used for cryptocurrency mining in 3 rigs. I am going to test how they do with rendering in Daz. I know that each GPU requires a CPU thread, so a 2-core / 4-thread processor will only let Daz see 3 GPUs, since one thread is required for the GUI; I have confirmed this already (see the sketch below). I am curious to see how many GPUs V-Ray will see with the 4-thread processor. If I see a big rendering improvement, then it will be time to replace the CPU with a 6-8 core with 12-16 threads, if the motherboard can handle it - that is, if V-Ray also needs the better processor. The only thing that sucks is that 18 of those GPUs are AMD RX 580 cards. They work with Blender but obviously not Iray. The RX 580 cards should work with V-Ray, though, I believe - need to test it.

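    A small sketch of the thread math described above, purely as an illustration of that reported observation, not documented Daz behaviour; the reserved-thread count is the figure quoted in the post, and the per-rig GPU count assumes the 40 cards are split roughly evenly across the 3 rigs:

        # Illustration of the reported rule of thumb: one CPU thread per active GPU,
        # with one thread kept back for the GUI. Not documented Daz behaviour.
        def usable_gpus(cpu_threads: int, installed_gpus: int, reserved_for_gui: int = 1) -> int:
            return max(0, min(installed_gpus, cpu_threads - reserved_for_gui))

        print(usable_gpus(cpu_threads=4, installed_gpus=13))   # 3, matching the 2c/4t rig
        print(usable_gpus(cpu_threads=16, installed_gpus=13))  # 13, with an 8c/16t upgrade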
  • takezo_3001 Posts: 1,957

    Yes, I do use 2-3 SubD levels, as the low-poly artifacts are pretty ugly. I also use the HDRI pics as a background, hence the 4-8K resolution, not to mention my 4K multi-texture maps. I do multiple renders that have close-ups and far shots (about 2-3 feet for the far shots), as I post in a "photo-shoot styled" series of pics; however, for my action/fantasy compositions/animations I can get away with lighter texture/SubD requirements.

     

    Thanks for the suggestions, though!

    Again, this will be moot once I can get the alleged 12 GB Ti, though I won't know the true specs for sure until August/September, assuming the announcements happen around then; who knows at this point? I hate the wait!

    As in my last post discussing VRAM usage, I use 3 levels of SubD because of some horrible artifacts in the render, as pictured below, and because this product mandates high SubD levels if you want more detail; and as we know, SubD uses up VRAM. Even if I keep my VRAM usage low, I still want to be able to use my computer for other things while my 2.5-hour animation renders, like watching a movie, gaming, surfing online, etc.!

    If I use scenes with simple lights and simple one-slot textures, sure, I can have tens of low-res G8s in the scene, so 8 GB is plenty; but as a visual artist I don't want to be limited, which is why I'm getting a card with more than 8 GB of VRAM. I'd prefer a 24 GB RTX Titan, but I can only save up for a 12 GB RTX 3080 Ti...

    Attachments: SUB-D 1.png (1404 x 810, 728K), SUB-D 2.png (1415 x 778, 1014K)
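    To make the SubD cost concrete: each subdivision level roughly quadruples the face count. The ~16,000-quad base mesh below is only an approximate figure used for illustration, not an exact Genesis 8 spec:

        # Each SubD level roughly quadruples the face count.
        # The 16,000-quad base mesh is an approximation for illustration only.
        def subd_faces(base_faces: int, level: int) -> int:
            return base_faces * 4 ** level

        for level in range(4):
            print(f"SubD {level}: ~{subd_faces(16_000, level):,} faces")
        # SubD 0: ~16,000    SubD 2: ~256,000
        # SubD 1: ~64,000    SubD 3: ~1,024,000 faces per figure, before textures

    That geometry lives in VRAM alongside the textures, which is why render SubD 3 on several figures adds up quickly on an 8 GB card.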
  • JP Posts: 60
    edited July 2020


    There are rumors of up to 20 GB for the 3080.

    https://www.tweaktown.com/news/70053/nvidia-geforce-rtx-3080-3070-leaked-specs-up-20gb-gddr6-ram/index.html

  • takezo_3001 Posts: 1,957
    edited July 2020

    Thanks for the info, but that info is too old, as it's from January; I get my current news from Gamers Nexus... But yeah, the current rumor is that the 3080 Ti is 11, maybe 12 GB, the 3080 is 8, maybe 10 GB, and the RTX Titan is 24 GB. All of this is pure speculation until we get official confirmation in August or September, and even that timing is speculation.

    EDIT: Sweet-looking house ya got there!

  • JP Posts: 60

     


    Thanks! The room is a lot messier at the moment. I have been remodelling part of the house and the tools found their way into the home office, lol! And the chairs have seen better days; the leather has peeled on both. They were cheap, though, and still comfortable. I can't bring myself to throw them out, and reupholstering is not worth it. I'll get Herman Miller next time!

    I really hope the 3080 Ti will have more than 12 GB! I have run out of memory with my 1080 Ti's 11 GB in both Daz and 3ds Max, with Iray and V-Ray.

  • kenshaw011267 said:

    I use a 2070 and it renders many not-basic scenes every day. ... I've got scenes with 5 G8s plus lots of props plus an 8K HDRI plus some base environment. Renders on 8 GB, no problem.

    See, that's what I wanted to find out. I want a dual-monitor setup so that I can render on one side and still be able to play a light game like League of Legends on the other monitor. I'm not looking to jump straight into next-gen hardware, since that stuff is always buggy as heck for at least a year after release. I just need something current-gen that can handle a G8 animation render while I'm gaming. The idea is that current-gen setups will tank in cost when RTX 3000 and AMD 4000 launch, and I can swoop in and get a decent system at a fraction of today's cost.

  • kenshaw011267 Posts: 3,805
    JPDAZ said:

    It is a hobby. I have a 4K monitor and like to zoom in and see the detail in big scenes. ... 8K TVs are already available! But like I said, my renders are done at that size so I can zoom in and out as I prefer. It also leaves cropping options for later.

    Then you need dual Quadro RTX 8000s in NVLink. You want pro-level images, get pro-level gear. Stop calling them basic images when you mean absolute top-level Pixar/ILM results.

  • kenshaw011267 said:

    ... So yeah, anyone who is betting against GPGPU performance is in the minority. They might be right in the long run, but that hasn't been the right way to bet for a long time ...

    CPUs will beat GPUs at some point; we just saw Linus Tech Tips running Crysis on a Ryzen 9 3950X not that long ago. It didn't run smoothly overall, but the very fact that it could hit smooth stretches in bursts shows that CPUs are closing the gap.

  • kenshaw011267 Posts: 3,805
    JPDAZ said:

    I have more than 40 GPUs with 8 GB each. They were used for cryptocurrency mining in 3 rigs. I am going to test how they do with rendering in Daz. ...

    You cryptomine with GPUs? In 2020? Why not just throw money out the window? It's cheaper.

    Please tell me you're in Iceland or someplace where the electricity is almost free. If you're in the US, omg.

    I get being a hardware enthusiast, I do. My team at work put together a cabinet of folding racks from our own gear back in March; we had a blast. But GPU-mining Ethereum is at best pocket change if not a negative return, all the rest are really scammy, and ASICs will just crush any price spikes in Ethereum.
