An odd note on iRay render speed
I recently used a 1600/900 image size for a render I was doing. I'd had trouble with fireflies and grain, and the only way to get a clean render was to let it run about 50 minutes, give or take (and this was at a 1280/720 image size). Someone suggested I try rendering a larger image and then downsizing it in Photoshop to sharpen it up and remove the grain, which works fairly well, incidentally.
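For what it's worth, the trick behind that suggestion is plain supersampling: render more pixels than you need, then average them down, which cancels some of the per-pixel Monte Carlo grain. Here's a toy sketch of the idea using synthetic noise in place of an actual Iray render (the gradient "scene", the noise level, and the 2x factor are all made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a grainy render: supersample at 2x (2560x1440)
# instead of rendering straight to the 1280x720 target.
h, w = 1440, 2560
signal = np.tile(np.linspace(0.2, 0.8, w), (h, 1))  # smooth gradient "scene"
grain = rng.normal(0.0, 0.1, size=(h, w))           # Monte Carlo noise
render = signal + grain

# Downscale by averaging each 2x2 block (what Photoshop's resample
# does in spirit); averaging n pixels cuts noise by roughly sqrt(n).
small = render.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

noise_before = grain.std()
noise_after = (small - signal[::2, ::2]).std()
print(small.shape, round(noise_before, 3), round(noise_after, 3))
```

In this sketch the 2x downscale roughly halves the residual noise; a real render won't behave quite this cleanly, since the grain isn't perfectly independent per pixel.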
The odd thing, however, is that when I upsized my render to 1600/900, the image that had previously taken 50 minutes finished in just over 15 minutes. Crystal clear and sharp as could be. I thought this was a fluke, so I tried it repeatedly with different scenes and images, using the exact same iRay render settings and light setup. I consistently get much, much faster renders by upsizing the rendered image to 1600/900.
Does anybody have an explanation for this? In my thinking, the larger image should take longer, not less time - but here we are. Can someone shed some light on this oddity?
Comments
Am I invisible? Should I use my powers for good or evil? Wow...does nobody really have any input on this or is my post just being utterly overlooked? It's been days and no one has replied. Very unusual.
Because this issue couldn't be reproduced, and it doesn't make sense if you really changed nothing but the Pixel Size in the very same scene... At least I've never experienced such an issue. Let's see if other folks have ever had the same issue, and found a solution as well.
You're probably not getting any response as none of us have any idea what it could be, nor are able to replicate the results you're getting.
I tried a few tests of varying complexity, and out of a dozen tests, only once was the time for 1600*900 lower than for 1280*720.
The difference was negligible, less than 60 seconds and barely above the margin of error: 12m14s (720) vs. 11m45s (900) were the actual numbers.
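For context on those numbers: 1600*900 has about 56% more pixels than 1280*720, so even a roughly equal render time already means the per-pixel cost dropped. A quick check of the arithmetic:

```python
# Pixel counts for the two render sizes being compared in this thread.
px_720 = 1280 * 720   # 921,600 pixels
px_900 = 1600 * 900   # 1,440,000 pixels
ratio = px_900 / px_720
print(px_720, px_900, ratio)  # 921600 1440000 1.5625
```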
The only thing I can theorize is that there's something bugged in your scenes that increasing the render size is clearing.
Could be a driver related issue, a material issue, geometry issue, a render setting issue, heck it could just be a light that's causing the problem.
Do you get similar results if you increase the render size further, say to 1080 or 4k?
Would you mind posting an example file so we might be able to test it out and see if we can replicate your results?
Please don't post just to bump. If you are not getting a response, look to see if there is relevant information you could have added; posting to add that is legitimate.
I'll postulate:
In Windows System / Display
What's the recommended resolution of your display? (Is it 1600:900 ?)
Approximately 1,000,000 years ago, "graphically intensive games" [and such] ran "faster" if you set the graphics to match your windows screen res (and that both were at your "recommended" screen res too).
NB: I'd be very surprised if your "recommended" was 1600:900
You could use this. https://www.openimagedenoise.org/
https://sites.google.com/site/mcasualsdazscripts9/mcjdenoise <<<< Mcasual writes very useful scripts
KPR - that weird point caught my attention. My displays are both set to 1920/1080...so out of curiosity I'm going to adjust the render size even bigger and test the speed.
FOLLOW UP:
I did run a render at 1600/900 and it finished in about 15 minutes. I upped the same scene's render size to 1920/1080 and the render completed in about 11 minutes. So apparently the closer I get to that recommended size, the faster my card renders. I'm running a two-screen workstation with Windows 7 Pro and an Nvidia card. It is very odd, but I'm not complaining. I would suggest everyone try this and see if they get any improvement. I change nothing in the scenes but the render pixel size. I also use HDRI lighting with no other lights involved - maybe that has an effect? Anyway, thanks for that point... I'm rendering even faster now!
I don't know, and I'm not really qualified to guess further, since you've just said you're using Windows 7... I started using DS on Windows 10...
But I may guess something ~
- you rendered your scene with a single Nvidia card
- you turned on "Rendering Quality Enable" in Render Settings rather than using pure Max Samples to render
- if you render at 2560 x 1440, you probably won't get an equivalently shorter rendering time... If you did, that would be a miracle.
When I have a little while to test, I'll find a sufficiently long-to-render scene and give it a whirl.
(I also note crosswind's point:- that you're running DS on Windows 7... Maybe the "suggested screen resolution" thing is only present on "older versions of Windows"?)
Crosswind - yes, I am running on a single card. I will check on that render setting though and see.
UPDATE: I actually do NOT have RENDERING QUALITY ENABLED turned on. I'm using MAX SAMPLES. The RQE box shows "OFF."
Some other random notes that might mean something:
1. I have both the CPU and my GeForce GTX 1650 set as rendering devices. The card is handling the display, but I think the CPU might be handling the actual render.
2. Using the Lanczos pixel filter and Rec. 709 for spectral conversion
3. In the progressive rendering options, I've got Max Samples set to 2500 (Min Samples = 10) and Max Time set to zero. Post SSIM is also OFF.
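As background for those Max Samples numbers: Iray renders progressively, and for Monte Carlo renderers the per-pixel noise generally falls off as roughly 1/sqrt(samples), so going from 10 to 2500 samples buys a large but diminishing improvement. A toy simulation of that convergence (the noise level 0.2 and the trial counts here are arbitrary stand-ins, not Iray values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy progressive renderer: each "sample" is a noisy estimate of a
# pixel's true value; the running mean converges as samples accumulate.
TRUE_VALUE = 0.5

def pixel_noise(samples: int, trials: int = 2000) -> float:
    # Simulate many pixels, each averaging `samples` noisy estimates,
    # and report the spread of the averaged results.
    estimates = rng.normal(TRUE_VALUE, 0.2, size=(trials, samples)).mean(axis=1)
    return float(estimates.std())

for n in (10, 100, 1000, 2500):
    print(n, round(pixel_noise(n), 4))
```

The takeaway is that doubling Max Samples only cuts the remaining grain by about 30%, which is why denoisers or the supersampling trick above often pay off faster than brute-force sample counts.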
Not sure if that means much to anyone.
It's possible that you restarted DS before rendering the larger image. If the smaller image took a long time, DS may have been holding enough memory to slow down the render (for example, forcing the render to the CPU rather than the GPU).