What things have you found speed up IRay renders without new HW and is there a way to test/confirm?

My system isn't the beefiest out there; heck, it actually isn't super great at all. I would like to find ways to speed up renders within the program and the scene, and I would like to test and confirm them objectively.
Step one, I think, is to gather several suggested ways to improve render performance, based on what we logically think will help and on anecdotal suggestions (this is where you all come in).
Step two would be to create a base scene that renders in Iray on my computer (so something that doesn't take more than my 4 GB of video RAM, so the GPU is tested). This scene would need to have enough in it for all of the suggestions to be tested individually.
Step three would be to find out whether there is any additional way to monitor the render process to give us more in-depth metrics (this is where I could also use your suggestions).
Step four is to reboot the computer, open Daz, open any monitors, open the scene, render the scene, and get a baseline.
Step five through whatever is basically a redo of step four, but after opening the scene apply one proposed improvement, so each is independently tested.
So I would like as many suggestions as you have got on ways to improve render times for an Iray render, other than changing the hardware, and any way you have found to monitor the process. I will then proceed to test each and document my findings. My hope is that we can find what does and doesn't actually work to optimize a scene in Daz's Iray render engine. The suggestions should not be limited to things like hiding items off screen; they should also include things like increasing or decreasing the number of lights in a scene, using spot vs. emissive lights, or using tone mapping to allow lower lights to look brighter or vice versa. Make the suggestion and I will try it, so long as it can for the most part create the same scene.
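To keep step four and the step-five variants comparable, it helps to log every trial the same way. Here's a minimal sketch in Python of what I have in mind (the helper names and the numbers are invented for illustration; the times and iteration counts would be read off the render window by hand after each run):

```python
# Hypothetical trial logger: the first entry is the baseline; each later
# entry is one proposed improvement. Numbers below are made-up examples.

def summarize(trials):
    """trials: list of (label, seconds, iterations). Returns a list of
    (label, iterations_per_second, pct_time_change_vs_baseline)."""
    base_secs = trials[0][1]
    out = []
    for label, secs, iters in trials:
        rate = iters / secs                          # raw throughput
        pct = 100.0 * (secs - base_secs) / base_secs  # negative = faster
        out.append((label, round(rate, 2), round(pct, 1)))
    return out

trials = [
    ("baseline",       600.0, 500),  # example figures, not real data
    ("hide offscreen", 540.0, 500),
    ("fewer lights",   660.0, 500),
]
for row in summarize(trials):
    print(row)
```

Keeping the raw numbers around (rather than just "felt faster") is what lets anyone else re-check the conclusions later.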
Comments
What you're asking is something you must do by changing settings for the same scene and then recording the time to finish a render to 10%. You're not going to like hearing that, to do that, you need to set the maximum render time to infinite and the render convergence at 100%, and maybe let your machine render one scene for weeks. Well, 10% convergence is unrealistic for some scenes in DAZ with your HW, so try 3%.
So set up your scene, render to 3%, record the time and number of Iray iterations, make the alterations that you think will speed rendering up while keeping the same artistic aesthetic, render to 3% again, and repeat...
Here is a thread that attempts to explain how to speed up Iray renders via various techniques:
http://www.daz3d.com/forums/discussion/129971/iray-shadow-noise#latest
They all amount to keeping your lighting in the same locations and relative strengths, but increasing its overall strength to overwhelm the scene with light; then you render, and then you use something like Gimp's or Photoshop's 'Auto-Fix', 'Auto-Levels', and similar tools to wring out the excess light.
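For the curious, the "wring out the excess light" step is essentially a linear levels stretch. A minimal sketch of what an Auto-Levels-style remap does on one grayscale channel (pure Python, illustrative only; real editors work per channel and often clip a small percentile first):

```python
def auto_levels(pixels, lo_out=0, hi_out=255):
    """Linearly remap values so the darkest pixel maps to lo_out and the
    brightest to hi_out -- roughly what 'Auto-Levels' does per channel."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:                      # flat input: nothing to stretch
        return [lo_out] * len(pixels)
    scale = (hi_out - lo_out) / (hi - lo)
    return [round(lo_out + (p - lo) * scale) for p in pixels]

# An overexposed strip (values bunched near white) regains contrast:
print(auto_levels([180, 200, 220, 255]))  # -> [0, 68, 136, 255]
```

This is why the trick works: the extra light buys faster convergence, and the stretch restores the intended tonal range afterwards.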
@theweezel_c08014c91f All of these recommendations have been discussed extensively. GPU-Z will give you plenty of info on the GPU. The Studio log file will give you any other useful metrics. Yes, I know the forum search function is substandard, but Google's isn't. Use that to find your threads. Tone mapping is used to get the look you want. When people suggest more lighting for Iray, they mean/should be saying lights or light levels. Tone mapping doesn't do that; it makes the scene appear brighter to the eye, but not to the render engine. Otherwise, you've named the things that work. The science experiments have been done.
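Besides GPU-Z's sensor logging, NVIDIA cards can also dump similar data from the command line with nvidia-smi (something like `nvidia-smi --query-gpu=timestamp,utilization.gpu,memory.used --format=csv -l 5 > gpulog.csv` during a render). A minimal sketch of pulling the peak VRAM figure out of such a CSV log; the sample text and column layout are assumptions, so check the header of your own log:

```python
import csv
import io

# Assumed shape of an nvidia-smi CSV log; verify against your own file.
SAMPLE_LOG = """timestamp, utilization.gpu [%], memory.used [MiB]
2017/01/01 12:00:00, 98 %, 3512 MiB
2017/01/01 12:00:05, 99 %, 3710 MiB
2017/01/01 12:00:10, 97 %, 3695 MiB
"""

def peak_vram_mib(log_text):
    """Return the highest memory.used value (MiB) seen in the log."""
    rows = csv.reader(io.StringIO(log_text))
    next(rows)  # skip the header line
    return max(int(r[2].strip().split()[0]) for r in rows)

print(peak_vram_mib(SAMPLE_LOG))
```

Watching peak VRAM matters here because exceeding the card's 4 GB would silently drop the render to the CPU, which would wreck any timing comparison.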
@nonesuch00 you don't necessarily need to go to PShop/Gimp. You can use tone mapping to get the look you want in the render. You can also change the tonemapping while the render is running.
Change something in DAZ Studio while a render is running? I can't do anything in DAZ Studio while a render is running except minimize, restore, maximize, and stop/pause the render. Maybe you need an nVidia video card to change things in DAZ Studio while a render is running.
I've tried different tone mapping suggestions and experiments of my own, but in a render with 'pixely shadows' the suggestion that works best for me is still from fishtales: change F-Stop to 4.6 and ISO to 400 (or you can do 800 and higher too, but I haven't tried higher yet).
@nonesuch00
Look for a small rectangle at the midpoint of the left-hand border of the render window. Click that to open the tone mapping settings.
Hi,
as I demonstrated with a reproducible test, this is no use!
I think I would probably want to reproduce this test using more than just the sun light, with a more complex scene. I am not doubting your findings, but for my own peace of mind I would like to confirm that the 15-second difference (which is almost a 3% change) was not due to the change in lights.
The changes you have suggested may produce a more absolutely correct test; however, I think the additional accuracy is unlikely to produce data that is useful in a practical sense. If the convergence and sample size are held consistent and the render time is what we record, then we should be able to see whether there is an overall change between renders based on the single variable we are looking at in each test. I will grant you that many renders seem to slow down or speed up throughout their run, but we should also consider the practicality of the tests given the equipment they are run on. Now, if we had someone with a high-end Quadro or a couple of really good 1080s, they could run such tests more effectively (alternately, if you wish to provide me with better equipment, I would of course not decline :) ). I was planning on running with a 95% convergence and a 15000-sample max; the scene I plan should not require more than that, and it should allow me to run enough tests to actually figure out whether X or Y actually changes render times. I am also planning to make a copy of the initial scene save file and always run off a new copy of that file, so that there is no change to the base state.
Actually, it is not wise, for these kinds of tests, to use "optimal hardware", since the point is to optimize it for "less than optimal hardware".
What you want is the slowest damn computer you can find... That is where you see the large and small changes. On a fast computer that is using "tricks" to simulate speed by taking shortcuts, or that has optimized things no one else has, the test will not tell you anything other than how well THAT scene renders on YOUR computer.
Unfortunately, the only real test for how fast something will render is to render isolated and specific things. One scene with multiple specific things will not give you a measure of anything important. It's like measuring how fast "cars travel" by observing them in a demolition derby. You need one race car on a closed-circuit track, then another race car, and another one, and a dump truck, and a go-cart... etc...
Once you isolate the biggest speed-killer, and isolate the things which have no real impact on speed, then you advance to the indy-500, but not a demolition-derby again.
1: Polygons (Daz's biggest hurdle)
2: Texture-layers (within shaders)
3: Deformable and reflective/refractive/translucent surfaces (light redirection consumes a lot, lengthening paths to the max)
4: Quantity of reflections, in abundance of multi-point and strong lights with low fall-off, rendered in infinite bounces
5: Lack of light for convergence
6: Multiple lights, without enough for convergence
7: VRAM (Falling back to RAM, for nVidia cards)
8: RAM (Falling back to virtual-memory and swap-memory)
9: Cores and g-flops ratings of said cores (Counting virtual-threads)
10: Background and foreground running operations (Virus scanners, GPU hungry browsers, animated windows junk.)
Still, at best, you can shave a little off the best times, unless you sacrifice something related to quality. Which, in all honesty, is not an equal comparison of results. You cannot get the same results with lower settings. The rest is external, and beyond measure.
The only exception to that rule, for instance, is when rendering 10,000 polygons into a single pixel, for the world's most accurately rendered colorized pixel in 3D. That is a factor of poor design, and not within the scope of tests like this. As is rendering a 2048x2048 image for an iris, with 4x layers of shaders of equal size, into an 8x8-pixel area on a model. (Stuff that Daz should be auto-reducing prior to a render, so it never sends those large images to be rendered, but it doesn't.)
JD_Mortal,
Could not agree more... Sometimes a 4K map is nonsense. We need a background set of materials ASAP, and that for all we buy (for one, I'm going to make that a rule with all I produce). Using a 4K normal and bump map on a 300-pixel, background-ish Vick7 is like shooting sparrows with an A-6 Gatling gun... the sparrows will drop, but so will the trees in the orchard.
Greets, ArtisanS
Worse is that the default HDR environment map has less detail than the default iris.
That's why I don't use them... but even more detailed HDR maps will fail if you don't render them into the background (out of focus when rendering a portrait). To be fair, when I shoot a portrait with my Canon 550D and kit lens at 55mm, that happens too, and things would have been worse had I bought a 1D Mark whatever and an 85 or 105 portrait lens. If you shoot a portrait, you should be able to shoot some parts of the world in focus and others not. But an eye reflection is a reflection on a curved surface... so it should be sharp. Now, in the Genesis 2 set (and some G3 sets as well) there are special eye-reflection fakes, but these are, as said, fakes... but a good HDR IBL set, like the ones that come with the cool prop Terradome 3, is sharp enough to give a sharp reflection... and of course you can make your own, for a few dollars of investment, if you own a DSLR or even a compact camera.
The cheapest (and, don't tell anyone this, most versatile) panoramic head is made in the US of A (now, is the Donald listening?) and is called the Panosaurus...
http://gregwired.com/pano/Pano.htm
I own one and self-built a motorised panohead (using some Canadian-machined motor mounts with anti-backlash gears and all, 2 stepper drivers and an Arduino... and some software I wrote), but it was no match for the ease of use of the manual Panosaurus.
Ah, and for a camera I use the 550D with a 10-18 lens... and Magic Lantern... a hacked OS that allows it to shoot HDR like you wanna... 16 shots x 7 makes about 10 minutes' work for a full dome, and that's about 25,000 pixels wide... if memory serves.
http://www.vrwave.com/panoramic-lens-database/canon/
Greets, ArtisanS
I say, for your testing... Use specific things.
One light, one untextured plane with one triangulated surface.
Then another plane, scaled to the same size, with 10x10 triangulated surfaces, giving you a poly-load value, with one light.
Then two lights, with each of those two setups, playing with the light values.
Then both again, with one and two lights, with a textured surface... and again with a bump-mapped textured surface, and again with normal mapping...
Then you upgrade to adding another plane, with all those tests, where one is translucent... then two are translucent... then a solid cube is translucent... then a solid sphere is translucent...
Now you can see what, in all of that, is most demanding. However, trying to micro-manage billions of potential changes is futile in an actual project. This is why those duties should be auto-managed by the rendering devices and project managers (Daz and Iray), not Joe Smith, sitting in a chair, fiddling with millions of settings and crossing his fingers in the hope that it actually has an impact that is positive and not negative.
For a developer... you should be doing your part to make sure your items are not being excluded due to being the source of major render-stalling. (Which includes making low-poly versions and low-texture versions to include in a package, with HD being a third option, not the default option.)
First, I think Daz has to remove much of this overhead junk and organize the program better, so it actually facilitates ease of use for those of us who have to "sit behind a desk" and manually make all of these adjustments. (Honestly, I think they need to completely isolate 3Delight stuff from Iray stuff, pick one, use it, and never see things related to the other rendering engines again... like the lights, which are still a hack of 3Delight lights with Iray values stacked onto them... yet they render twice in Iray, due to the hack: once with the Iray settings and again with the hacked 3Delight values.)
P.S. Don't use the "icons" to add lights to an Iray scene. Make your own lights with the Iray emissive controls that Iray expects to be used for light. (That includes setting "Never use camera overhead light".)
That is not valid...
The 'icons' will create photometric lights (Iray lights) if Iray is the selected renderer when they are created.
Hmm. What I am getting from all the posts is that, because I am not going even further with my tests and am trying to test a reasonable scene, any conclusions I come up with will be invalid. This of course isn't a problem, because others have discussed this at length and already come up with the answers. So, because of these facts, there is no reason for me to post any results; they will provide no insight into how to improve performance for those looking to do so.
Of course, I could be reading this wrong, and if that is the case, please let me know whether you would like me to post results as I get them; otherwise I will just do it for myself and keep my findings for personal use and improvement.
Ignore all the 'validity' discussions for a moment and look at your methodology...
1. Is what you are doing going to be repeatable?
2. Is what you are doing translatable to other hardware configurations?
3. Is what you are doing easy to follow, i.e. does making a change to a specific parameter have a clear cause/effect chain?
If you can answer yes to those questions, then your results will have meaning and should be posted.
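One way to make the repeatability question concrete: run each configuration several times and compare the difference in means against the run-to-run spread. A minimal sketch (the timings below are invented for illustration, not real measurements):

```python
from statistics import mean, stdev

# Three runs per configuration, seconds each, same scene and settings.
baseline = [612.0, 598.0, 605.0]  # untouched scene
tweaked  = [560.0, 571.0, 566.0]  # same scene, one setting changed

diff = mean(baseline) - mean(tweaked)          # average time saved
noise = max(stdev(baseline), stdev(tweaked))   # run-to-run spread

print(f"saved {diff:.1f}s on average; run-to-run spread ~{noise:.1f}s")
# Only trust the change if the saving is several times the spread.
```

If the saving is on the same order as the spread, the "improvement" may just be measurement noise, which answers question 1 above.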
I beg to differ... They don't just "create photometric lights"; they ADD photometric lights, in addition to whatever is being used. (Or they are incorrectly "creating photometric lights" which have no matching render values, or additional unmanaged values within a scene.)
They do not operate like the lights they are intended to "replace", and they have higher demands and graphic issues, unlike emissive light sources, which is all they actually are, or should be, but are not.
I can throw up many examples of how this severely kills rendering times, quality, and light control, with a direct comparison to a "non-converted light", a standard emissive element, which, by what is implied, is the exact same thing. (Though it is not, at all, nor remotely close.)
--------------------
I was not implying NOT to post anything. I was simply saying that you should not be attempting to use a "scene of random stuff" as a measure or test of "what is faster" or "what is better".
I was also saying that "faster" is not usually "equal". I can rip out all the lights and my scene will render 100x faster. However, it obviously will not look the same. The same goes if I replace all the 2048 PNG images with 256x256 JPGs at 90% compression, or decimate all my models into 30-polygon figures... all much faster, but you lose, no matter which path you take.
Inherently, it renders as fast as it renders, based on hardware and settings, being equal, for equal output. The slower the hardware, the longer it takes. The only option for equal output is bluntly, faster hardware. (or, as I also stated... hardware that is not being "slowed down".)
The rare cases are like the ones above: removing the "rendering nothing over and over again", the "rendering unseen things", and the "rendering mismanaged resources that yield the same output in less time". These often all result in the same exact output, which is how you know it is mismanaged somewhere. (This is like adding normals to a "decal", which does not affect render output but still wastes time loading and rendering.)
Initially, Iray lights and 3Delight lights were separate things; they are now represented by a single node, the behaviour of which depends on the render engine. They are not 3Delight lights with add-ons; 3Delight code simply wouldn't work in Iray. If you are seeing the 3Delight options, that's because you have Show Hidden Properties on; the hidden properties won't do anything, which is why they are auto-hidden depending on the render engine. Photometrics are an option with Iray lights, so they do add extra features and have a performance impact (which is why they are an option).
Please do... but maybe in another thread, because if that's what you are experiencing, it is not what the Nvidia docs or the devs have said is the case.
I am aware of what the devs said. :P
I will post raw examples, which, at the moment, look like this...
Scene: the default sphere and a plane with one triangulated surface. Neither has a texture. The sphere is slightly translucent, refractive, back-scattered, reflective, etc... The plane is essentially a pure mirror with a fractional diffusion, to hint at an actual mirrored surface, as opposed to a logic-mirror that just reflects everything 100% (found nowhere in nature, or even in science fiction).
Rendered at 500x458 px. All lights have the same exact values, locations, and directions (where possible). Rendered to default convergence, with caustics and architectural sampler on, and 0.0 for pixel filter radius. Also, no tone mapping settings or filters are used. Just the scene and each individual light.
Daz linear point light... 150 iterations, 2min:50sec
Daz spot light... 530 iterations, 14min:10sec
Daz point light... 150 iterations, 2min:40sec
Daz distant light... Still running... 3902 iterations, 1hour:40min:12sec (only 92.79% converged) Has not gotten better looking in the past 3000 iterations, looks worse, honestly. (Stopped rendering... Would render for another two hours, at this rate. 0.01% per 60 iterations. The noise you see has not changed in 3000 iterations.)
Next I will do "sun" then "environment", then a custom "emissive light", which I normally use as a light source.
None of the pictures, so far, look the same. Similar, but all with "issues", due to the "default lights" and wonky cross-settings, which I will also demonstrate, showing the "second source within".
Attached are the first four files. Will post the fifth later. Not sure I will post the sun and environment, as only the times are needed.
In fairness, the ones that resolved below 300 iterations never got to a few of the rendering passes... They resolved before doing some of the reflection and caustics passes... I usually force rendering to 600+ iterations. The one that hit 530 obviously started to do them.
EDIT: Added sun-sky and dome (Same settings as above.)
Sun-sky, 1710 iterations, 30min:00sec (86.25% converged. Got tired of waiting.) {blue is the sky reflecting, it was not drawn}
Dome, 1250 iterations, 21min:30sec (100%) {Same here, with the sky, not drawn}
Now for the emissive.
3" sphere, 8 sections 16 sides. (Could have been a bit brighter, meh)
Emissive, 2120 iterations, 40min:04sec
Not an exact match for the "directional light" (sun), due to it being in the same location as the other lights. (Trying to match the sun's angles and brightness now, to make it 100% fair.)
However, at half the time, it rendered more than half the iterations and was 100% complete. The other one would have hit well over 8000 iterations if I had let it go... completely saturating the scene with blown-out light. Note that those light values were super low, just so it would render something other than a pure white screen. (In Iray, not GL; in GL the scene was perfect, but black once I lowered the lights to a normal value, making editing impossible unless you use the camera light to see the screen. That is a setting in GL which alters the Iray output too, when it should not. They essentially have two sliders for the same exact thing, and they are not... one adjusts the hidden light in the render.)
{Note: the point lights purposely have super-fast fall-off and low light. They explained that these were intended to be like candles. They should have called them candle lights. "Point" is normally used to describe emittance from a singular point in space, in all directions. They both fake it well, by default. If I cranked up those lights, they would have blown out the scene and still not rendered past 150 iterations.}
The whole point being... the output, even when all numbers/values are exactly the same, is horribly different, and most of it is inaccurate. (Only one can be accurate/correct, obviously. The rest are simply faster due to shortcuts or sacrifices, usually in quality.)
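For what it's worth, the timings posted above reduce to iterations per second. Raw throughput ignores how far each render actually converged, so treat it as one metric among several, not a verdict. A quick sketch using the figures as posted (times converted to seconds):

```python
# Iterations-per-second from the timings posted in this thread.
timings = {
    "linear point": (150, 170),    # 2min:50sec
    "spot":         (530, 850),    # 14min:10sec
    "point":        (150, 160),    # 2min:40sec
    "distant":      (3902, 6012),  # 1h:40min:12sec, stopped at 92.79%
    "sun-sky":      (1710, 1800),  # 30min:00sec, stopped at 86.25%
    "dome":         (1250, 1290),  # 21min:30sec, 100% converged
    "emissive":     (2120, 2404),  # 40min:04sec, 100% converged
}

for name, (iters, secs) in timings.items():
    print(f"{name:12s} {iters / secs:.2f} it/s")
```

Note how the point lights and the emissive source sustain similar throughput, while the distant light both iterates more slowly and needs far more iterations to converge.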
Oddly, it looks like the spotlight is actually missing a whole rendering pass... It doesn't "transmit refraction". That might be because it is not a light, but a projected greyscale image-map emitting light only on the projected path. Absolutely NOT an Iray photometric light there.
A linear pointlight should behave as a regular pointlight in Iray, since the whole point of the linear light is that it has non-physical fall-off.
Should... but the test above shows that it acts the same exact way. :P
Literally, 0 diff, between the two images, with the same exact noise and everything.
Here are the two last renders with just the emissive source as a light. I adjusted as best I could so that the refractions, reflections, and light volume were as equal as possible when compared to the "Distant Light". (I had to hike the light up about 1500 units on Y, a few hundred on X, and about 700 on the Z axis. Still not perfect.)
The first render was with the light source as a sphere, scaled up to about 1/2 meter in diameter. The second is with the light moved a bit more, but reduced to 0.001% of 1 meter.
Ran both for 1h:22min roughly...
First converged to around 98% in 3340 iterations
The second converged to 92% in 3620 iterations (more like the first one, with the "Distant Light"). I am honestly torn by the last set of results, especially since I like the look of that nested light element within the other light. :P (Seen in the bottom reflection, through the refraction, in the sphere's shadow, in the post above.)
Bear in mind that the values for the other lights were turned down to near zero; the last image shows the "default" setting of 1500 lumens. For the other renders I used "10" for the lumens. It literally stayed stuck at 20.85% convergence for 20 minutes and 825 iterations... I got tired of waiting. More light volume, in this case, equals longer rendering times and near-zero convergence.
Just to clarify a few of the points in your test conditions...
Why is the architectural sampler being used?
What's the purpose of a pixel filter setting of 0.0?
So are you rendering to canvasses or exr?
I won't get a chance to try my own runs until later...
Well, as far as the OP is concerned, I just spent a few different render attempts adjusting the Tone Mapping settings to increase the brightness via the Exposure Value, F-Stop, ISO, and other such settings, and in my opinion the results are so washed out that it's better to use the light levels of a correctly lit scene and either tolerate the graininess of the renders or let the renders run much longer, and plan to buy a new nVidia video card in the future and hope for rendering and speed improvements in Iray.
A common trick that is sometimes discussed is running a render at UHD size and then using Gimp or other image-editing software to shrink the render to HD or FHD size. You keep the lighting that looks the way you wanted, but much of the graininess disappears, because that detail is lost when you shrink the picture.
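The reason the trick works is that averaging neighbouring pixels cancels out uncorrelated grain. A minimal sketch of a 2x2 box-filter downscale on a grayscale image stored as a list of rows (pure Python for illustration; real editors use better filters such as Lanczos, but the noise-averaging principle is the same):

```python
def downscale_2x(img):
    """Halve width and height by averaging each 2x2 block of pixels."""
    out = []
    for y in range(0, len(img) - 1, 2):
        row = []
        for x in range(0, len(img[y]) - 1, 2):
            block = (img[y][x] + img[y][x + 1] +
                     img[y + 1][x] + img[y + 1][x + 1])
            row.append(block // 4)  # block average kills the grain
        out.append(row)
    return out

# A grainy 2x2 patch of a nominally flat grey area collapses to its mean:
print(downscale_2x([[96, 160],
                    [160, 96]]))  # -> [[128]]
```

Rendering at 2x the target size costs roughly 4x the pixels, so this only pays off when the grain would otherwise demand far more iterations to clear.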