MeshPhongMaterial: added support for vertex displacement map #7107
Conversation
@mrdoob Oh, man, we were working on this at the same time... :/ In either case, shadows need to be addressed separately.
The displacement map, like a light map or ao map, should use the 2nd set of UVs -- independent of the offset/repeat settings of the primary UV set. This is how I have implemented it. The displacement map should also have both a scale and a bias setting, so displacements can vary over any range and allow for indentations. Again, that is how I implemented it. As far as the normals go, I do not think we have to worry about them. Users can specify a normal map, too, that is created to be consistent with the displacement map, scale, and bias.
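A minimal sketch of that design as a vertex-shader chunk (GLSL in a JS string; the chunk and variable names here are illustrative, not necessarily the ones in this PR):

```js
// Displace each vertex along its normal. With texel values in [0, 1],
// scale stretches the range and bias shifts it, so indentations
// (negative offsets) are possible. Samples the 2nd UV set (uv2).
var displacementChunk = [
	'#ifdef USE_DISPLACEMENTMAP',
	'	float texel = texture2D( displacementMap, uv2 ).x;',
	'	transformed += normalize( objectNormal ) * ( texel * displacementScale + displacementBias );',
	'#endif'
].join( '\n' );
```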
Yep! Seems like we did the same. I did a bit of clean-up around it though 😇 Regarding normals: that would be nice, but we would need tangents for that, I think... Low-quality bump/normal maps look like this otherwise (terrain example using …) 😅 If we wanted to automatically compute tangents, the geometry needs to be indexed though. I wonder if the derivatives bump/normal code can be improved so it is less pixely/blocky...
@mrdoob wrote:
My theory is that you cannot without more per-vertex information like tangents/bitangents.
One option is to override the attribute normal with the face normal, computed via screen-space derivatives in the fragment shader. The problem with that, though, is that the faces will appear faceted.
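For reference, the derivative trick looks like this (GLSL in a JS string; vViewPosition is the view-space position varying, and WebGL 1 needs the OES_standard_derivatives extension):

```js
// Every fragment of a triangle computes the same cross product of the
// screen-space position derivatives, so the whole face shares one
// normal -- hence the faceted appearance.
var faceNormalChunk = [
	'vec3 fdx = dFdx( vViewPosition );',
	'vec3 fdy = dFdy( vViewPosition );',
	'vec3 normal = normalize( cross( fdx, fdy ) );'
].join( '\n' );
```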
The terms 'scale' and 'bias' are quite common. I believe those are the terms @bhouston uses.
Ok. Merging the …
Sorry for the duplicated efforts! 😅
@mrdoob wrote:
I chose to assume skinning/morph would not be implemented concurrently with a vertex displacement map. In any event, if you want to go your route, it is not clear to me if displacement should be the first or the last transform in object space. (In the case of morphs without morph-normals, I would think first, since the normals would not be changing with the positions, and we don't want to displace in the wrong direction. In the case of skinning, I would think last, since skinning is based on raw positions -- not displaced ones.) If it were me, I would not try to combine these. Your code: …
Also, as I said previously, …
I recommend that you merge the UV logic in this PR, too. Note that I was careful to set the if-defs so there would be no shader warnings about unused UVs, regardless of which textures were included in the model.
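The guards in question look roughly like this (a sketch; USE_AOMAP/USE_DISPLACEMENTMAP follow the existing define naming):

```js
// Declare the 2nd UV set only when a map actually samples it, so the
// compiler never warns about unused attributes or varyings.
var uv2ParsVertexChunk = [
	'#if defined( USE_AOMAP ) || defined( USE_DISPLACEMENTMAP )',
	'	attribute vec2 uv2;',
	'	varying vec2 vUv2;',
	'#endif'
].join( '\n' );
```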
Do you know if we're uploading …
I mean, if we don't, I think we should, so we don't have to do this: geometry.faceVertexUvs[ 1 ] = geometry.faceVertexUvs[ 0 ]; // 2nd set of UVs required for aoMap and displacementMap with MeshPhongMaterial
I just checked, and I do not see it. I think your suggestion is a good idea. But I would suggest a console warning, too.
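Something along these lines in the renderer, with the warning suggested above (the exact hook point and message are assumptions):

```js
// Fall back to the 1st UV set when a material needs uv2 but the
// geometry does not provide one, and let the user know.
if ( ( material.aoMap || material.displacementMap ) &&
		geometry.faceVertexUvs[ 1 ] === undefined ) {

	console.warn( 'THREE.WebGLRenderer: geometry has no 2nd set of UVs. Falling back to the 1st set.' );
	geometry.faceVertexUvs[ 1 ] = geometry.faceVertexUvs[ 0 ];

}
```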
@mrdoob wrote:
Might be easier to just assign uv2 from uv at the beginning of the shader like this to save compute and memory:
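The snippet itself is not preserved here; the idea would presumably look something like this (a sketch, with hasUv2 standing in for however the presence of a 2nd UV set is detected):

```js
// If the geometry provides no uv2 buffer, alias uv2 to uv at shader
// compile time instead of uploading a duplicate buffer.
var prefix = hasUv2 ? '' : '#define uv2 uv\n';
var vertexShader = prefix + vertexShaderBody;
```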
By the time we're composing the program we don't know anything about the geometry though.
@mrdoob There are two unresolved issues here. The first is the issue of combining displacement, morph, and skinning as you have done. I expressed my view in #7107 (comment), but I expect you want to leave it the way you have it, and I will not fight that battle. :-) But I feel strongly about the UV issue I mentioned in the same comment. Are you willing to make that change?
Yes. I would prefer if someone bumps into an issue with the current setup rather than trying to guess what the right setup is.
I think I want to fix the …
Can one map the same data buffer to multiple attributes? Like mapping the uv buffer to both of the attributes uv and uv2 if there is no uv2 buffer?
It is currently a requirement that the user provide a second set of uvs for aoMap and displacementMap.
Based on the above, I am not sure you should... @bhouston What is the use case for two uv sets? Can we get away with supporting just one uv set and instead have an …
Wouldn't that be a bit wasteful? I don't think we can refer to an already uploaded buffer at the moment?
I believe you can in OpenGL, but I've never done it -- some web searches suggested it is possible. I think it is less wasteful than uploading a second buffer? It would incur less memory transfer to the shader. The best solution, honestly, is to allow for specifying which UV set should be used with which texture, instead of a fixed UV or UV2 channel.
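At the raw WebGL level this does work -- two attribute locations can read from one buffer without a second upload (variable names assumed):

```js
// One VBO, two attribute bindings: uv and uv2 both read the same data.
gl.bindBuffer( gl.ARRAY_BUFFER, uvBuffer );

gl.enableVertexAttribArray( uvLocation );
gl.vertexAttribPointer( uvLocation, 2, gl.FLOAT, false, 0, 0 );

gl.enableVertexAttribArray( uv2Location );
gl.vertexAttribPointer( uv2Location, 2, gl.FLOAT, false, 0, 0 );
```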
Our version of Three.JS supports an offset/repeat per texture map, rather than on a per-UV-set basis. I believe that is the right thing to do for the Clara.io use case. It is useful when, for example, you want the gloss map to have a different repeat than the underlying diffuse map -- the staggered wrapping makes the texture look unique for longer, even though both are repeating, just at different points.
I guess the main one is the one you know -- the second UV map is used for unwrapped textures, usually used for aoMap, lightMap, and other baked results that require a unique texel on each surface. BTW, it looks like Exocortex may be embarking on an attempt to write a layered BRDF with graph-based inputs that would allow for flexible arbitrary UV usage (any number), procedural textures, and all the rest. We are exploring a design for this over the next few weeks and will implement it if possible. Basically, we are hitting the limit of what we can achieve with the current GLSL snippet + #define based shader composition strategy. So maybe one solution would be to go for the good-enough near-term solutions using the current approach and leave the crazy flexibility for later, when one has a flexible shader graph compiler.
I think the displacement map often follows the same UV mapping as the main diffuse texture, sort of like bump maps/normal maps -- really, displacement is just a continuation in that vein -- it is surface micro-structure represented as texture. If you have brick, the displacement/normal/bump repeats with the brick texture. If you have a face, the displacement/normal/bump is usually the same as the diffuse map. It generally isn't unwrapped, in my opinion. PS. I am not an artist, so my experience is limited to seeing what artists do. So maybe sometimes displacement is unwrapped, but I haven't yet seen that be the case.
Well, in http://threejs.org/examples/webgl_materials_displacementmap.html, the vertex displacements are closely coupled with the geometry. There is no way repeat-wrapping makes sense in this case. Sure, repeat-wrapping is appropriate for normal/bump maps. If you say there are use cases for repeat-wrapping with vertex displacement, then we need to find a solution that is more flexible than the one we have now.
The mesh in this example is misleading. Character heads often do not use any repeating/offsets in their textures. Look at the skin example in ThreeJS as well. With character heads, all textures are unwrapped -- diffuse, bump, normal, displacement, etc. But they generally all share the first UV channel even though they are unwrapped. In this case, e.g. character heads, displacement is following the bump/normal map usage pattern and thus can share UV rather than use the separate UV2 channel.
@bhouston Thank you for your reply. So in the ninja example, the … And what if we want a marble ninja with a repeating diffuse map? We can't do that in three.js because the …
UV usually is not unwrapped but sometimes is. It usually drives diffuse/specular/bump/normal/displacement. UV2 is always unwrapped and drives aoMap/lightMap and other textures that are generally always unwrapped. Not everything is possible with the above setup, that is true. You are right that you will have to have bump/normal/displacement follow the diffuse texture -- which I think is generally the desired case, but not always. I do think that any example you can make for displacement not following the diffuse map can equally be made for bump/normal, as they are all just varying degrees of sub-face surface detail. So the limitation you mention is already being imposed on bump and normal, and if it isn't a huge problem already it isn't likely to become huge -- but it is a real limitation.

BTW, did you know that there exists a higher-quality displacement called "vector displacement"? Details: http://docs.pixologic.com/user-guide/3d-modeling/exporting-your-model/vector-displacement-maps/ Thus the sequence from low quality to high quality is "bump" -> "normal" -> "displacement" -> "vector displacement". This progression is a bit more clear when one supports geometry tessellation, because just moving the original low-resolution vertices can make "displacement" seem less useful than normal maps.

This is not a general solution. The high-end solution is to allow an arbitrary number of UV channels which can be arbitrarily used by any texture. That is how production renderers work: V-Ray/Arnold/RenderMan/etc. But implementing this is challenging given the current #define + GLSL snippet structure we use.
BTW, a simple way to hack in support for specifying which map uses which UV: one could create defines like AOMAP_UV which resolve to "vUv2"/"uv2" or something like that. Thus one uses a define per texture to specify which UV set it reads from.
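A sketch of that define-per-texture routing (AOMAP_UV as named above; the wiring is illustrative):

```js
// Material-level choice of UV channel, resolved at compile time.
var defines = {
	AOMAP_UV: aoMapUsesSecondChannel ? 'vUv2' : 'vUv'
};

// The fragment chunk then reads the define instead of a fixed varying:
//   float ao = texture2D( aoMap, AOMAP_UV ).r;
```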
@bhouston OK, I will have to defer to you to suggest a new approach if you feel that we should improve how … Currently, …
We modified our branch of Three.JS to honor it on a per-texture basis via a function like this:
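The function itself is not preserved here; a plausible shape for such a per-texture transform (names hypothetical) would be:

```js
// Pack offset and repeat into one vec4 uniform per texture, instead of
// a single offsetRepeat shared by every map on the material.
function getOffsetRepeat( texture ) {

	return new THREE.Vector4(
		texture.offset.x, texture.offset.y,
		texture.repeat.x, texture.repeat.y
	);

}

// e.g. uniforms.glossMapOffsetRepeat.value = getOffsetRepeat( material.specularMap );
```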
OK. If/when we implement uniform packing of some sort, perhaps we can add this feature, too.
Agreed!
This sounds good to me.
Do you hit uniform limits with that setup? Honoring offset/repeat per texture would be great. We may also want to implement the …
The examples of this pull request are now built and visible in threejsworker. To view them, go to the following link: …
I've been using this new feature and I think the … As it is implemented now, we can only have an offset from [0 to …
You mean combining bias and scale into a min/max range?
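If both a scale and a bias uniform are exposed, the two parameterizations convert into each other trivially (with texel values in [0, 1]):

```js
// displacement = texel * scale + bias
// A desired [min, max] displacement range maps onto those uniforms as:
var displacementBias = min;          // displacement at texel == 0
var displacementScale = max - min;   // displacement span across [0, 1]
```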
If we move towards @sunag's Node-based material graphs (which are already incredibly powerful), displacement would be an input and you could modify it in any way you want with any equation. It would simplify things tremendously.