Quick question about SubD and detail maps.
SolitarySandpiper
[Quick question about SubD and detail maps.] For example... if I use an HD normal map, would increasing the SubD have any effect on that particular map (or any map), or does SubD only affect the available resolution of the geometry?
Comments
SubD is a geometric operation - its only connection with maps is displacement, where the map controls the movement of the vertices created by SubD. Without displacement, the only effect of increasing the SubD level beyond that needed for any HD morphs will be a (generally very) small amount of smoothing.
Thank you Richard... I've been raising my awareness of the numerous detail options before trying to get my head around the what, when and why.
1/ Subdivision of the whole wireframe. Modifies the geometry as long as Resolution Level is set to High Resolution. Visible in any shading and iRay.
Parameters > Mesh Resolution > Subdivision Level : subdivides polygons by dividing each quad in 4. Useful only for HD versions of Genesis, jewelry, etc., i.e. anything sold by a Published Artist who has access to HD morphs. Other than that, it indeed "smooths" curves and makes a low poly mesh look less polygonal.
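To see why raising Subdivision Level gets expensive fast: each level splits every quad into four, so polygon counts grow by a power of four. A minimal sketch (the function name and the base quad count are illustrative, not anything from Daz):

```python
def subd_quad_count(base_quads: int, level: int) -> int:
    """Quads after `level` rounds of subdivision (each quad splits into 4)."""
    return base_quads * 4 ** level

# e.g. a hypothetical base mesh of 16,000 quads:
for level in range(4):
    print(level, subd_quad_count(16_000, level))
# level 3 already yields over a million quads.
```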
2/ Partial Subdivision of a given Surface. Subdivides portions (Surfaces) of the geometry, visible only in iRay.
Surfaces Tab (iRay) > Your_Object > Surfaces > Geometry > SubD Displacement Level (Displacement Strength needs to be activated by a map). Also modifies geometry, but using info coming from a black and white displacement map (aka height map) loaded in Displacement Strength. SubD from a texture will be visible at render time, regardless of whether the Genesis (or any object) has SubD activated in Parameters. Wireframe Subdivision for HD morphs (PAs only) and Surface subdivision are two different things.
3/ Normal Maps. Does not modify the geometry at all. No relation whatsoever with subdivision. Visible only in iRay.
The normal map is generated from a high poly version of the object (baking process), or using 3D painting tools such as Substance Painter. This map will fake details and enable tons of great options when pixel painting. Once a Normal Map is finished and imported into Daz as a texture, it acts by replacing the normal direction of every polygon with much more complex normal info coming from the high poly version (baked) or from manually painted normal details. A normal map will look perfectly identical whether it is projected on a single quad or on a subdivided quad. The equivalent (let's say) of SubD for a normal map would be to generate the texture in higher resolution.
In a nutshell :
1/ Parameters SubD : smooths polygonal aspect and enables true high poly details (aka sculpted details). Absolute control over all vertices. Visible at all times.
2/ Surface SubD : enables high poly details using a displacement (height) map. More limited than the preceding method, as the subdivided surface can only be pulled or pushed along a single axis. Visible in iRay.
3/ Normal Map : replaces low poly quads normals with normals from a high poly sculpt, or from manually painted normal information (Normal is a polygon's surface orientation). Does not modify the geometry at all. Visible in iRay.
Each of those 3 methods has its own advantages and disadvantages. Combined, they all aim to add details to a final object, each in its own way.
Thank you so much for taking the time to post such a useful explanation... it makes a lot of sense as I pull up Vicky9 and Vicky9HD and look at the relationships between the mesh, the SubD and the various normal maps, both specific and generic.
No problem. Glad if it helps you understand. I just finished something that'll make it easier to explain than words. I wanted to make a quick example... but ended up working on it for about 3 hours (ᴗ˳ᴗ) So I hope it'll help some.
In the following example, I use a Base plane and an HD subdivided plane. It'd be the exact same for a Base Genesis and an HD subdivided Genesis.
I started with a simple plane in Daz. I subdivided it and exported that plane to ZBrush. I sculpted a Lion head into it, then used that sculpt to showcase the 3 main options available (in Daz) to get more details onto a Base object : 1/ HD Morph 2/ Displacement Map 3/ Normal Map.
And again, all 3 methods can be combined. None eclipses the other. They are all very useful and complementary.
Once again, thank you so much...
1, I think HD is the most straightforward to get my head around... no problems there.
2, Displacement (I think) is the most difficult to understand... are all of the vertices being pulled or pushed along the same axis? And the amount relative to a (kind of) greyscale?
3, Normal maps (I thought) I had a reasonable understanding of until you mentioned they react to light... in my head they are details that were (kind of) superimposed onto the surface of the UV map. Is the light reacting to the colors in the normal map? Why are they that color?
Edit: No need to answer the 'Why are they that color?'... I've just done some reading and I don't want to burden you with that lot!
1/ HD Morph
A Published Artist exports the Genesis as an OBJ with some subdivisions, adds details with Blender or ZBrush without modifying the polygon count, and imports that back as an HD morph. That's the most basic thing : HD detailing.
2/ Displacement Map
You got that right. A 2D image (displacement or height map) will affect 3D vertices. An image = pixels. 3D = vertices. If pixels are 50% gray, nothing happens. If pixels are 100% black, vertices are pushed down to the max. If pixels are 100% white, vertices are pulled up to the max. The higher the Surface Tab > My_Object > Geometry > SubD Displacement Level, the more closely your vertices will follow the pixels.
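The gray-to-offset mapping above can be sketched in a few lines. This is a hedged illustration of the common convention (50% gray = no change), not Daz's actual internals; `strength`, `minimum` and `maximum` stand in for Displacement Strength and its min/max displacement values:

```python
def displace(gray: float, strength: float = 1.0,
             minimum: float = -1.0, maximum: float = 1.0) -> float:
    """Map a 0..1 gray value to an offset along the vertex normal."""
    # Linear remap: gray 0 -> minimum, gray 0.5 -> midpoint, gray 1 -> maximum.
    return strength * (minimum + gray * (maximum - minimum))

print(displace(0.0))   # black    -> -1.0 (pushed down)
print(displace(0.5))   # mid gray ->  0.0 (no change)
print(displace(1.0))   # white    -> +1.0 (pulled up)
```

Every vertex gets the offset of the pixel under it, which is why a higher SubD Displacement Level (more vertices) reproduces the image more faithfully.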
3/ Normal Map.
In Normal Maps (I answer quickly anyway) each 2D color means a direction in the X,Y,Z environment/space of a 3D model. My Lion has many polygons going in multiple directions. Those directions (aka normal of the polygon) are recorded (baked) onto an image and become colors. And that image, once applied on anything, will have color info about the orientation of all polygons from the sculpted object. Orientation that Iray or any other path/ray tracing engine, or real-time game engine, can use to fake the light hitting on objects... that are not there. Fooling our brains.
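The color-to-direction encoding can be shown concretely. This is a sketch of the usual tangent-space convention (each 0..255 channel remapped to -1..1 to give an X, Y, Z direction); the function name is illustrative, and it also explains why untouched normal maps look mostly light blue:

```python
def decode_normal(r: int, g: int, b: int) -> tuple[float, float, float]:
    """Decode one normal-map pixel (0..255 per channel) to a direction vector."""
    to_axis = lambda c: (c / 255.0) * 2.0 - 1.0
    return (to_axis(r), to_axis(g), to_axis(b))

# The typical "flat" normal-map blue (128, 128, 255) decodes to
# a normal pointing almost exactly straight out of the surface:
print(decode_normal(128, 128, 255))
```

The renderer uses the decoded vector instead of the flat polygon's own normal when shading, so light appears to hit bumps and creases that aren't in the geometry.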
The brilliant John Carmack, whom I had the chance to work for a while ago, revolutionized 3D forever with Doom 3 around 2004. Since then, Normal Maps are widely used in all 3D games and movies... because it's such an amazingly powerful technique, for skin pores, the weft of a fabric, small rocks on the ground, etc., and tons of other macro and micro detailing of surfaces without the need to add any geometry.
Romans were using the "trompe l'oeil" technique in their walls paintings. 3D artists use Normal Maps. Similar concept, except Normals catch light sources, create ultra believable details that don't exist, and can be seen from any angle. That's pure genius.
I'm gonna stop there. I never know how much people know before answering. My kind advice is : if you love 3D, forget Daz for a few months. Install Blender and learn the basics of a 3D workflow :
3D blockout > 3D modeling / sculpting > 3D retopology > Defining UV Seams > Unwrapping > Baking > 3D hand painting / procedural texturing.
There are many steps in a 3D modeling workflow nowadays, and if you practice them a bit (avoid the terrible f@$%king donut tutorial...) they will make you see Daz on a whole new level. If you don't spend that time learning the behind-the-scenes of how Daz assets are made, you're gonna end up drowning in tons of blurred technical info and wrong interpretations. Better to invest time learning than lose time trying to figure things out without the necessary keys to understand them.
Cheers. I really gotta go back to work ;)
P.S : Polycount Wiki is getting old, but it still has a lot of well-explained info about all those technical 3D terms.
What started as a quick question has turned into an invaluable resource,
Thank you Hansolocambo.