How to make an environment map sharp (HDRI map)
rotmensen
Posts: 5
When I render the scene, the dome is too blurry while the HDRI is a very sharp image.
I am using Daz Studio 4.14; where can I find the setting to change the blur? I cannot find any option that does the job.
Thank you in advance for your answer.
Comments
You're probably trying to render too large an image. Most HDRIs are only about 8000 x 8000 pixels, and that only works for a render about 800 pixels across.
A rough rule is to divide everything by ten: if you want to render an image that is 1000 x 1000, you need an HDRI that is at least 10,000 x 10,000, and so on.
This is because you are seeing only a section of the HDRI. How much depends on your camera settings: the wider the angle, the more of the image you will see and the sharper it will be.
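As a sanity check on the divide-by-ten rule (a sketch of my own, not anything built into Daz): an equirectangular HDRI spans 360 degrees horizontally and the camera sees only its field of view, so for roughly 1:1 sampling the HDRI needs about render width x 360/FoV pixels across. The factor is exactly ten at a 36-degree FoV:

```python
# Sketch: minimum HDRI width for ~1:1 sampling at a given field of view.
# An equirectangular HDRI spreads its width over 360 degrees; the camera
# only ever samples a fov_deg slice of that.

def hdri_width_needed(render_width_px: int, fov_deg: float) -> float:
    return render_width_px * 360.0 / fov_deg

print(hdri_width_needed(1000, 36))  # 10000.0 -- the rule of ten assumes a ~36 deg FoV
print(hdri_width_needed(1000, 17))  # ~21176  -- a long lens needs far more
```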
I'm using one of the HDRIs from https://www.daz3d.com/iradiance-hdri-variety-pack-one in the examples below. This is RoomC and it is an 8K HDRI (8192 pixels in 360 degrees). I reset all the environment parameters and put the camera at (0,0,0) rotated (-10,148,0) degrees. This is the base Iray HDRI render at 640x640 with the "standard" camera, 65mm equivalent lens, and no DOF:
So that is a 65mm-equivalent lens, which therefore has a field of view of 31 degrees. That works out at 704 pixels of the HDRI; the image is rendered at 640 pixels, so it is pretty much an exact match for the HDRI resolution.
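For anyone who wants to reproduce those numbers, here's the arithmetic as a short sketch; it assumes Daz's focal lengths are full-frame equivalents (a 36mm-wide sensor), which is my assumption rather than anything documented:

```python
import math

def horizontal_fov_deg(focal_mm: float, sensor_mm: float = 36.0) -> float:
    # Horizontal field of view of a rectilinear lens on a sensor of the given width.
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

def hdri_pixels_in_view(hdri_width_px: int, fov_deg: float) -> float:
    # An equirectangular HDRI spreads hdri_width_px pixels over 360 degrees.
    return hdri_width_px * fov_deg / 360.0

fov = horizontal_fov_deg(65)                 # ~31 degrees
print(fov, hdri_pixels_in_view(8192, fov))   # ~31 deg, ~704 px of the 8K HDRI
```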
Firstly, if DOF is switched on in the camera it affects the HDRI too: if the HDRI is of a room but the environment setting "Dome Mode" is "Infinite Sphere", the HDRI will be falsely out of focus. Here's the same scene with the camera DOF turned on and the focal distance set to 2m. Because this is meant to be a room scene I set the f/ to 1.8. Notice that this is the camera's f/, not the one in the Render Settings pane, which is just bling and doesn't do anything useful:
Yep, that really is pretty close to what an infinitely large teddy bear at infinite distance looks like through a camera focused at 2m with an f/1.8 lens... It's not actually blurred, it's just out of focus. A very partial workaround is to set the HDRI to "Finite Dome" with a dome radius of 2m. Do this either by setting the "Dome Radius" field in the Environment settings to the radius in metres or, more consistent with Daz, by setting the "Dome Scale Multiplier" to 1.0 and then setting the "Dome Radius" in cm (the standard Daz unit).
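For a sense of just how out of focus that is, here's a rough thin-lens estimate of the blur of a point at infinity with those camera settings (my own back-of-the-envelope optics, assuming an ideal thin lens and a full-frame 65mm; Daz's actual camera model may differ):

```python
# Blur-circle diameter of a point at infinity when focused at 2 m:
# b = f^2 / (N * (s - f)) on the sensor, for focal length f, f-number N,
# focus distance s (thin-lens approximation).
f_mm, f_number, focus_mm = 65.0, 1.8, 2000.0
blur_on_sensor_mm = f_mm**2 / (f_number * (focus_mm - f_mm))  # ~1.2 mm
blur_px = blur_on_sensor_mm / 36.0 * 640   # 36 mm sensor width, 640 px render
print(round(blur_px, 1))  # ~21.6 px -- the infinite dome can't help but look soft
```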
The finite dome is only a partial solution because, unless you use the Daz "spherical" lens distortion, parts of the HDRI will still be out of focus. Using f/1.8 makes this obvious if you compare the first image with this one: only the very center is in focus. (The default Daz camera has a flat focal plane, the behaviour of most real-world camera lenses.) This could be fixed further by using "Finite Box" mapping of the HDRI, but I have yet to work out how to set that up correctly :-(. In any case it doesn't work for a scene with furniture in the HDRI, as in my example scene: the furniture is closer than the walls, so you can't get the DOF right if the furniture in the HDRI is visible in the rendered scene.
The second possible problem is field of view; basically the problem @Hera and Richard commented on. While a 65mm lens has a 31-degree FoV, the 80mm lens typically used for portraits only runs to 25 degrees, and a telephoto shot with a 120mm lens only runs to 17 degrees. For an 8K HDRI those numbers equate to 577 pixels and 388 pixels. The latter is enough to show perceptibly lower resolution even in a 640x640 image, though the image still looks correct: scaling a modern digital image up by a factor of 2 almost never produces a perceptually significant loss of resolution, so long as the image is presented 1:1 on the screen, i.e. with no further scaling:
That image is generated with no DOF; otherwise I would be conflating two different problems. Teddy's new friend is at 2m and the dome is still a finite sphere of radius 2m, but that doesn't matter because I haven't used DOF. The camera is at (0,0,0), so even with a finite dome I can rotate it. I set the lens to 120mm, then rotated the camera to get both teddies in the shot. The Daz teddy reveals the lower resolution of the HDRI by comparison; our brains can't tell that the original teddy has lower resolution until presented with a new teddy which is, effectively, sharpened by the Daz rendering process. A partial solution to this is to increase the "Pixel Filter Radius" in Render Settings > Filtering; there's more about this in the discussions on photorealism. I increased the radius to 2.2, a value recommended elsewhere, for the first image, then set it to 3.6 for the second to account for the fact that the HDRI is being upsampled (3.6 is 640/388 x 2.2):
The effect is subtle, but for me the last rendering of the new teddy is a better match for the first rendering of the HDRI. This illustrates a bug in Daz: the HDRI is being double-[re]sampled somehow. It looks like it is upsampled to the output resolution and then sampled by the pixel filter radius, and I believe sampling an already-sampled image produces the wrong answer. Another partial answer is to go to a 16K HDRI, which gives you 1409 pixels (wide) with a 65mm lens. So long as the HDRI is separated from rendered objects (i.e. the HDRI really is the background) that is likely to be sufficient up to 4K output (3840 px wide on monitors). In any case, careful composition so that the HDRI really is outside the focal range, combined with deliberate use of DOF, is the best answer.
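To tie the numbers together, here is the arithmetic behind the 577, 388, 1409, and 3.6 figures above as one sketch (again assuming full-frame-equivalent focal lengths; scaling the filter radius by the upsampling factor is my reading of the reasoning above, not an official Daz formula):

```python
import math

def hdri_px_in_view(hdri_width_px: int, focal_mm: float) -> float:
    # Pixels of an equirectangular HDRI covered by a rectilinear lens
    # on a full-frame (36 mm wide) sensor.
    fov = math.degrees(2 * math.atan(36.0 / (2 * focal_mm)))
    return hdri_width_px * fov / 360.0

print(hdri_px_in_view(8192, 80))    # ~577 px (80mm, 8K)
print(hdri_px_in_view(8192, 120))   # ~388 px (120mm, 8K)
print(hdri_px_in_view(16384, 65))   # ~1409 px (65mm, 16K)

# Pixel filter radius scaled by the upsampling factor:
print(2.2 * 640 / 388)              # ~3.6
```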
A further, more obvious, problem arises when using Filament PBR rendering (i.e. "viewport" rendering with the viewport mode set to "Filament"). Both images below use a 120mm lens, an infinite sphere, and a reset pixel filter radius of 1.5. The left (first) image is Iray, the second is Filament:
So there is an exposure difference, which can be corrected by altering the "EV" in Render Settings > Tone Mapping (BTW, EV is the only tone-mapping setting that affects Filament). However, the sampling of the HDRI is completely wrong; it looks like it might even be pixel replication (the infamous "box" filter). Please Daz, at least use bilinear. (Fant's algorithm works too, though Fant might still hold the patent. Fant is bilinear when scaling up but degrades to a box filter when scaling down; however, in the scale-down case the order of the computation with respect to the input image dimensions is much better behaved than bilinear or bicubic.)
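For anyone unsure what the difference looks like in code, here's a minimal sketch of nearest-neighbour ("pixel replication") versus bilinear sampling at a fractional position. This is purely illustrative; it is not how Iray or Filament are actually implemented:

```python
import math

def sample_nearest(img, x, y):
    # Pixel replication: snap to the closest texel, giving hard blocky edges.
    h, w = len(img), len(img[0])
    return img[min(h - 1, max(0, round(y)))][min(w - 1, max(0, round(x)))]

def sample_bilinear(img, x, y):
    # Weighted average of the four surrounding texels.
    h, w = len(img), len(img[0])
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bot * fy

img = [[0.0, 1.0], [0.0, 1.0]]          # a hard vertical edge
print(sample_nearest(img, 0.4, 0.0))    # 0.0 -- replication keeps the hard step
print(sample_bilinear(img, 0.4, 0.0))   # 0.4 -- bilinear blends across the edge
```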