RTX/Iray Performance

A couple of years ago, when I last checked and the RTX cards were just coming out, the latest testing of Iray and RTX ended with the following conclusion:
"Iray RTX speed-up is highly scene dependent but can be substantial. If your scene has low geometric complexity then you are likely to only see a small improvement. Larger scenes can see multiples of about a 2x speed-up while extremely complex scenes can even see a 3x speed-up."
So does anyone know if there's been a better definition of exactly what Iray/RTX performance depends on? They were pretty much saying that benchmark scenes were of little use, since the performance was so dependent on scene complexity, which varies for each user and even each scene. And I recall their example of a "complex" scene (with maximum RTX/Iray benefit) was a bunch of trees with many thousands of individual leaves. It sounded like the number of individual objects was the big factor, though with all of the material, ray-tracing, physics, etc. features in Iray/RTX I imagine that's a gross oversimplification.
I also recall at the time Iray's future was a bit of a question, but apparently those fears were unfounded?
I'm curious if the (very) expensive new cards are indeed worth the price for the average user, or only for those with super complex scenes.
Thanks.
Comments
On one hand, I would not spend money on a GPU now, unless you are swimming in money and a few G's is just chump change. I am not lol. On the other hand, who knows what will happen.... Prices might keep rising instead of going down, so waiting might be bad if you can eke out buying one now.
For example: if you have a simple Genesis 1 figure, clothed and posed, with a simple environment, and you render with an RTX 3090, you will probably get 4,500 iterations in less than 2 minutes.
Take the same graphics card, but now you are using six Victoria 8.1 figures at SubD 4 with PBR Skin, all dForce clothed with dForce hair, plus heavy props like the Stonemason City series, on a 6500x4600 px image: you won't get the same iterations and it will take much more time to process. And we are not counting the bottleneck, because even with the ultimate Nvidia RTX 30 series, Daz Studio still depends on the CPU and DDR4 RAM just to initiate your GPU render, and good luck if it doesn't crash.
It is a myth that having an "RTX-any model" can give you a render in 0.01 seconds and 99,999,999 iterations.
Of course more CUDA cores give you better performance, but what is your goal with your renders? Making 200 renders daily, for what?
NOW, if your goal is Iray + animation, GO for IT!
Just an FYI... more CUDA cores do NOT necessarily give you better performance. It's far more complicated than that. In fact, the opposite can be true.
My 1080 Ti has over 3,500 cores and my RTX 2070 Super has about 2,300, yet the 2070 Super was slightly faster than the 1080 Ti in a recent render. That's especially true when you're talking about RTX technology together with Iray, where performance depends on scene complexity, Iray's implementation of the RTX hardware, scene requirements, materials, physics, and on and on. Hence my original question.
As I found out, RTX cores essentially offload certain tasks from the CUDA cores: they do the geometry ray-tracing job while the CUDA cores process the hits against materials.
But for the normal user, the main issue is GPU memory (which is also vital for complex scenes), not rendering performance, as performance is already pretty high. So I would rather choose a 1080 Ti than a 2080 Super even if the latter is faster; if you don't have enough GPU memory you can't have any scene complexity either.
RTX 3090 cards are of course the best but way too expensive. Their main advantage is still the very large memory size of 24 GB, and that fact alone makes them very desirable, because you can render extremely complex scenes that are just impossible on any other card.
Onix,
The RTX technology employs specific hardware components designed to solve specific aspects of a ray-traced render:
I view it a bit like having a big problem, breaking it up into individual sub-problems, and assigning a team of experts in each of those fields to get together and solve the problem quicker. But if the problem doesn't fit what the experts are good at, there's not much you can do.
Kinda like saying "I have this huge problem, how fast can you solve it?"
"Well, does it require a ton of ray tracing so I can assign my ray tracing expert?"
"Hmm, well, not so much..."
"Well, we're really good at physics sims, does it have a lot of that?"
"Hmm, well, not so much..."
And so on.
It seems that you already know so much about Nvidia technology that it is unnecessary to ask others for answers you apparently have already confirmed as true.
Thank God (or my money) that I have my trusty GTX 1060, plus a GT 720 hibernating somewhere in my attic, and because of your savviness I prefer not to buy a newer card.
I have money to spend on an RTX, but reading all the answers to the OP it looks like it is not a good option.
(And a slower Nvidia card is better for my YT channel: longer videos, more minutes watched, everything is aligned!)
Like I said, I've been away from the RTX/Iray scene for a couple of years, so I only know the basics that were described when RTX was introduced. What I am asking is whether those who have been dealing with RTX/Iray for the last couple of years have seen information that pins down its performance more definitively than the earlier "it's good with complex scenes" and "benchmarks are of little use".
For example, some of you may be familiar with the realtime interactive feature in Blender called Eevee, which does some very nice and fast noise reduction to make the realtime preview look almost like a final render, with little or no lag when moving around the scene. Since RTX has AI noise-reduction features for that very purpose, I'm wondering if anyone knows to what extent Iray has implemented that; maybe I'm just missing a setting in D|S to get something similar. And also, what happened to the rumors of Iray not being supported by RTX hardware, since they seemed to be focusing on OptiX, etc.?
Stuff like that.
I don't think DS has put AI denoise in yet, but I really haven't used the new version a lot. I know the last version I was using didn't. Cycles has added OptiX AI denoise; the difference was huge the last time I compared Iray denoise in DS with AI denoise in Blender. The DS denoise wipes out a lot of details unless you did a ton of iterations, which defeats the point of using denoise IMO. I have not really played much with Eevee yet. Lately I've been trying to learn Houdini again with all my free time. I must be a masochist lol.
Solomon Jagwe uses it.
Cool, thanks. Those "Filtering/Post Denoiser" settings were around before the RTX came out weren't they? Or am I mis-remembering?
You're right and nVidia's denoiser has always been AI based. I let it kick in early on the Iray preview and quite late on actual renders.
Also, OptiX was updated to make full use of RTX if you have it, while still providing OptiX Prime if you don't.
Cycles' AI denoiser can use the render + normal + albedo maps as data for the denoising, while DS Iray only uses the render (unless they added that in the last DS update). That just makes it more accurate with fewer iterations; it can see skin blemishes and surface details and take them into account.
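For what it's worth, here's a toy sketch (plain Python/NumPy, nothing to do with the actual OptiX or Iray denoiser code) of why those extra buffers help: if the filter only averages a pixel with neighbours whose albedo and normal are similar, noise gets smoothed while texture detail and geometry edges survive. The guided_denoise function and its parameters are made up for the example.

```python
import numpy as np

def guided_denoise(color, albedo, normal, radius=3, sigma_a=0.1, sigma_n=0.2):
    """Blend each pixel with neighbours that share a similar albedo and normal,
    so noise is smoothed but texture and geometry edges are kept.
    color/albedo/normal: float arrays of shape (H, W, 3)."""
    h, w, _ = color.shape
    out = np.zeros_like(color)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            da = albedo[y0:y1, x0:x1] - albedo[y, x]   # albedo difference to centre pixel
            dn = normal[y0:y1, x0:x1] - normal[y, x]   # normal difference to centre pixel
            wts = np.exp(-(da ** 2).sum(-1) / (2 * sigma_a ** 2)
                         - (dn ** 2).sum(-1) / (2 * sigma_n ** 2))
            out[y, x] = (wts[..., None] * color[y0:y1, x0:x1]).sum((0, 1)) / wts.sum()
    return out

# Example on random data; in practice the three buffers come from the renderer's passes.
rng = np.random.default_rng(0)
denoised = guided_denoise(rng.random((8, 8, 3)), rng.random((8, 8, 3)), rng.random((8, 8, 3)))
```

Feed the same filter only the beauty pass and it has nothing but noisy color to compare against, which is roughly why a render-only denoiser blurs skin blemishes and fine surface detail away.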
Sincere question!
When you get an RTX card, is there still a need to use a denoiser feature? I mean, do you buy an expensive card just to use the same filters that in some way accelerate and fake the iterations you'd get with your current low-CUDA card?
Oh, and still leaving the firefly filter and quality settings ON? ...pretty pointless from my perspective.
Faster may still be relatively slow on some scenes. It depends on the goal - for an animation in particular any measure that can reduce render time may be welcome.
Both Solomon Jagwe and Free Nomon over on YouTube stress the importance of the denoiser: it lets you use fewer samples and less time per frame, 15 to 30 seconds, which is needed in animation, and it is welcome, as Richard said.
For me personally, my biggest annoyance is not being able to manipulate objects in real time while having a fairly accurate Iray-type render to look at. I'm constantly jumping back and forth between Iray preview and Texture Shaded preview. Constantly. With denoisers you get a quick (and fairly blurred) rendition, and the ability to rotate and move the view quickly. But if you want to see details of the shader you just applied (sparkles, bumps, etc.), you're still gonna sit there and wait. And if you want to rotate an arm or move a character, forget about doing it in Iray mode, denoiser or not.
Now of course, if you have one character with no background and a denoiser, you can get some fast response when rotating the scene view. But if you want to modify anything, forget about it. To me, THAT is the biggest annoyance and the thing I've been hoping for vast improvements in for a long time. Eevee looked like a start, and I was hoping that RTX and Iray would be a major improvement, but apparently not.
Again, I mostly use scene management along with compositing to get some very lightweight scenes to render. But still, while it's nice to rotate a scene quickly in Iray preview, making realtime changes is still, from what I've seen, a long way away. I assume most of us spend the vast majority of our time modifying the scene characters, poses, materials, etc. And that is SLOW.
Oh, and an indoor scene? Forget about it. All the denoiser stuff goes out the window.
Final scene render time, to me, is almost a non-issue compared to realtime preview while modifying scene components. The time I waste in jumping between Iray and Texture Shaded and back, always waiting for the Iray preview to restart, is FAR more of an issue.
Ah, I know what denoise does and how it accelerates your images, that is not in question; I do that when I post bigger images for sale on dA.
But I would turn off any "accelerating" feature if I'm buying a card that costs the same as a used car.
What about Quadro cards? Yes, they have fewer CUDA cores compared to gaming cards, but apparently Quadros handle geometry better in realtime, or did Linus Tech Tips lie to us many months ago?
You're probably not looking at the equivalent generation... the Ampere Quadro cards are the same gen as the 30xx-series RTX cards, and both the A6000 (top Quadro) and the 3090 have over 10,000 CUDA cores.
The 3090 has 10,496 and the A6000 has 10,752 CUDA cores. The A6000 starts at over $4,500 while the 3090's MSRP is $1,499. Another noteworthy thing is the 48 GB of RAM on the A6000.
Probably shouldn't ask us to make a value judgement for you. I don't know if it's worth it for you. I know it isn't for me. Most of the new cards are being bought either by scalpers or miners. I'm pretty disgusted with NVIDIA for allowing this to happen, actually. I keep reading about Ti this and VRAM that, "new release" razzmatazz, but when I go to their site, guess what: "Out of Stock". I suppose the marketing droids are still getting paid, releasing products nobody can buy without remortgaging their house, if they can even be bothered to refresh a store page for the many days it takes to find one in stock.
So here's the deal: don't waste your money. In twenty years' time 95% of all energy generated on Earth will be used to mine Bitcoin, and the tiny bit left for us we can use to render Daz characters on our Raspberry Pis. And in true Douglas Adams style, we'll switch to using leaves as currency and then burn down all the forests.
My point was merely this:
Prior to RTX, you could get a reasonable, ballpark estimate of benefit versus cost for the new GPUs. Take a scene (pretty much any scene), render it with card A, note the final render time, then do the same with card B. With that you could make a simple "improvement in render time / cost" ratio for each of the cards.
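To spell that ratio out in code (a trivial sketch; the times and price below are placeholder numbers, not measurements):

```python
# Pre-RTX style comparison: render the same scene on both cards, then compare
# the speed-up you bought against what the new card costs. Numbers are made up.
def speedup_per_dollar(old_seconds, new_seconds, new_card_price):
    speedup = old_seconds / new_seconds      # e.g. 600 s -> 240 s is a 2.5x speed-up
    return speedup / new_card_price

print(speedup_per_dollar(600, 240, 700))     # ~0.0036 "x per dollar"
```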
My sense was that, due to the RTX complexities, that's pretty much out the window. My only question is whether that is still a reasonable assumption, or has something else arisen in the last two years? Like, "hey, with RTX you can now do some insane realtime stuff in the Iray preview, which makes it worth its weight in gold!!".
Nvidia sells; if someone pays for a lot of their cards, they sell to them. They don't hold back one card waiting for some 3D artist to come and buy just one on monthly payments on Newegg (dunno if that still exists, actually).
Mine Bitcoin? It's impossible to mine it with Nvidia cards, and even ASIC miners are getting into trouble... but you can mine Vertcoin (VTC) with your Nvidia and AMD cards right now, overnight, and make some cash.
It could be that the implementation in DS is the main cause of slow Iray previewing, and not the Nvidia card itself, because my old GTX 1060 previews so fast using realtime Eevee in Blender.
My point is that basing the generation of the money supply on simply burning as much energy as possible is one of the most idiotic concepts mankind has ever come up with.
So true, and it is disturbing to see that nobody really makes a big story out of it - probably they all just participate in some form or other, like "Mr. Green Energy" Musk even propagating this utter madness... I just read that mining (which basically means "securing single transactions in the blockchain") already consumes as much energy as an entire country like Norway does. And then they debate becoming climate neutral here in the EU until... when?
If you consider what goes on when a ray is fired into the scene and hits a surface:
The ray has to be evaluated and 'hit shaders' are initiated.
[RTX] trace ray till it hits a surface
[CUDA] hit_Alpha
[CUDA] hit_Transparency
[CUDA] hit_Reflection
[CUDA] hit_Refraction
[CUDA] hit_Scattering
[CUDA] etc...
[CUDA] etc...
...repeat for next ray.
You can see that much of the activity is CUDA based. Even if we accept Nvidia's claim that RTX is 10 times faster than CUDA ray tracing, it's not going to deliver the magical speed increase that was slightly overhyped.
The more surfaces the ray hits, the more noticeable the RTX contribution becomes. That's why hundreds of leaves on a tree, with all those surfaces to hit, is such a good demonstration of RTX.
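A rough CPU-only Python sketch of that loop (purely illustrative; the Sphere and trace names are mine, not Iray internals) makes the split visible: the closest-hit search is the part the RT cores accelerate, while all the per-hit material work stays on the CUDA cores.

```python
import math, random

class Sphere:
    def __init__(self, center, radius, albedo, emission=0.0):
        self.center, self.radius = center, radius
        self.albedo, self.emission = albedo, emission

    def intersect(self, origin, direction):
        # Ray/sphere hit test - the traversal/intersection work the RT cores take over.
        oc = [o - c for o, c in zip(origin, self.center)]
        b = sum(o * d for o, d in zip(oc, direction))
        c = sum(o * o for o in oc) - self.radius ** 2
        disc = b * b - c
        if disc < 0:
            return None
        t = -b - math.sqrt(disc)
        return t if t > 1e-4 else None

def trace(origin, direction, spheres, bounces=4):
    radiance, throughput = 0.0, 1.0
    for _ in range(bounces):
        # [RTX] find the closest surface the ray hits.
        hits = [(s.intersect(origin, direction), s) for s in spheres]
        hits = [(t, s) for t, s in hits if t is not None]
        if not hits:
            break
        t, sphere = min(hits, key=lambda h: h[0])
        # [CUDA] evaluate the material at the hit point (emission, albedo, ...).
        radiance += throughput * sphere.emission
        throughput *= sphere.albedo
        # [CUDA] pick the next bounce direction (crude diffuse bounce here).
        origin = [o + t * d for o, d in zip(origin, direction)]
        direction = [random.gauss(0, 1) for _ in range(3)]
        norm = math.sqrt(sum(d * d for d in direction)) or 1.0
        direction = [d / norm for d in direction]
    return radiance

# One ray into a toy scene: a grey sphere in front of an emissive one.
scene = [Sphere((0, 0, 5), 1.0, 0.6), Sphere((0, 0, 9), 2.0, 0.0, emission=3.0)]
print(trace((0, 0, 0), (0, 0, 1), scene))
```

Every [CUDA] step runs once per hit, per ray, which is why a shading-heavy scene can still spend most of its time outside the RT cores.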
When I sold a 4-year-old AMD Radeon RX 570 8GB MK2 for $350 two weeks ago, having bought it brand new for $130 on Amazon in December 2019, let's just say I realized there is too much money in a few hands cornering the market and artificially driving up prices. I have no idea when I'll get an Nvidia RTX 3000 series in these circumstances, as I'm not going to compete with the deficit-financed financial markets, that is for sure.