Marching Directly #3
As a quick example of rendering the distance field directly: https://github.com/Kethku/rusty-marcher is a project I built before, which has a "shader" crate that is compiled to a SPIR-V shader using rust-gpu, plus a normal library crate for computation on the CPU. This means I can render the app efficiently in a pixel shader while also querying the distance field for physics and such, using the exact same Rust code. Unfortunately the app doesn't have much in the way of documentation, as it was just a test, but I may convert it into something similar to a CAD system in the near future.
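To sketch what "the exact same Rust code" on CPU and GPU looks like in practice (names and scene here are hypothetical, not taken from rusty-marcher): a plain function evaluates the distance field, and a setup like rust-gpu can compile that same source to SPIR-V for the pixel shader while the host crate calls it directly for physics queries.

```rust
/// Length of a 3-vector; kept free-standing so the SDF below has no
/// dependencies and could live in a shared "shader" crate.
fn length3(p: [f32; 3]) -> f32 {
    (p[0] * p[0] + p[1] * p[1] + p[2] * p[2]).sqrt()
}

/// A single SDF evaluator shared by renderer and physics.
/// Hypothetical scene: a unit sphere sitting above a ground plane.
pub fn scene_sdf(p: [f32; 3]) -> f32 {
    let sphere = length3(p) - 1.0; // unit sphere at the origin
    let ground = p[1] + 1.0;       // ground plane at y = -1
    sphere.min(ground)             // CSG union is just `min`
}
```

On the GPU side this would be called per-pixel from a ray-marching shader; on the CPU side, the same `scene_sdf` answers queries like "how far is this rigid body from the nearest surface".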
Hey @Kethku, thank you for your interest!
Yes, I have, and indeed earlier (non-public) iterations have used ray marching (which I think is the same as sphere marching?) to render the SDFs. I stopped doing that for multiple reasons:
So I decided to use SDFs for the modeling, and convert them into a triangle mesh for interaction with the outside world (i.e. rendering and file export). That sounded very appealing to me, for the following reasons:
The short answer to your question of why I turned towards other approaches is that, with more experience with SDFs, all of those supposed advantages have been negated. Let me elaborate.

**1. SDFs aren't that neat**

Actually, it turns out the math isn't that elegant. For more information, see the discussion about "exact" and "bound" SDFs in this article by Inigo Quilez, or his article on interior distance. Learning that contributed to me kinda falling out of love with SDFs, which made me want to explore other approaches. That isn't rational decision-making, of course, but there are more rational reasons here too:
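To make the "exact" vs "bound" distinction concrete, here is a minimal sketch (my own illustrative example, using the terminology from Quilez' article): a sphere primitive returns the true distance, but CSG intersection via `max` only returns a lower bound.

```rust
/// Exact SDF of a sphere: returns the true distance to the surface.
fn sd_sphere(p: [f32; 3], center: [f32; 3], r: f32) -> f32 {
    let d = [p[0] - center[0], p[1] - center[1], p[2] - center[2]];
    (d[0] * d[0] + d[1] * d[1] + d[2] * d[2]).sqrt() - r
}

/// CSG intersection via `max`: only a *bound*, not an exact distance.
fn op_intersect(d1: f32, d2: f32) -> f32 {
    d1.max(d2)
}
```

For example, take two unit spheres centered at x = ±2: their intersection is empty, so the true distance to it is infinite everywhere, yet `op_intersect` evaluated at the origin reports 1.0. The bound never overestimates, so sphere marching still works, but anything that relies on true distances breaks.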
**2. Good isosurface extraction is crazy hard**

True, you "just" need the one algorithm, but most of them aren't that good around sharp edges. Those that are, are complex and thus hard to implement. And sharp edges are just one problem. Check out these notes for more information. As far as I can tell (and I'm not an expert), there's no perfect algorithm.

If you look at CAD programs based on SDFs out in the wild, most just don't produce geometry that is that good. libfive comes closest to perfection (using Manifold Dual Contouring). The closest I ever got was a simplified implementation of Dual Contouring, and the results weren't that great. libfive proves you can get it under control, but putting in the effort of doing so seemed not worth it, given the other disadvantages. And note that libfive, for all its qualities, is still very basic in its capabilities, as far as CAD goes.

**3. Non-CSG CAD is possibly a no-go**

All my ideas came to nothing. I couldn't come up with anything that held up under closer inspection, and never attempted any implementations. Plus, all of these ideas would have required more code on a per-primitive basis, further negating the "you just need the one algorithm" benefit. As far as I can tell (and again, I'm far from an expert), feature detection in SDFs is just an unsolved problem. I saw no evidence that it can be solved. And while CSG is neat, it's not enough, in my opinion.

So yeah, there we go. I hope that wasn't too long, but it's certainly good to have that written down somewhere, for future reference.
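As a concrete anchor for point 2: Marching Cubes, Surface Nets, and Dual Contouring all start from the same primitive step of locating where the field changes sign along a grid edge. A minimal sketch of that step (my own illustration, not code from libfive):

```rust
/// Locate the zero crossing along a grid edge by linear interpolation.
/// `d0` and `d1` are SDF samples at the edge's two endpoints; the
/// result is the parameter t in [0, 1] where the surface cuts the
/// edge, or None if both samples lie on the same side.
fn zero_crossing(d0: f32, d1: f32) -> Option<f32> {
    if (d0 < 0.0) == (d1 < 0.0) {
        return None;
    }
    Some(d0 / (d0 - d1))
}
```

Everything that distinguishes the algorithms, including everything that is hard about sharp edges, happens after this step, in how the crossings are turned into vertices and connected into faces.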
I'm closing this issue, as I believe the questions have been answered. But please don't let that deter anyone from engaging here further.
Lots of useful links and details here. I will go over them in the coming days. One point though: I'm not sure 3D printing requires triangle meshes. In fact, if a slicer could interface with an isosurface directly, it could extract slices that are exact rather than approximations of a mesh, instead of having to do interior-vs-exterior calculations all the time. For SLA printing in particular, extracting image slices out of a distance field is trivial :) That said, these are my back-of-the-napkin thoughts, as opposed to the actual work you have obviously put into this area, so I may have to just go try it myself and come to the same conclusions. Thanks for taking the time to respond.
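The "trivial" SLA case can be sketched in a few lines: sample the field on a pixel grid at the slice height and threshold at zero (function names, resolution, and extent here are illustrative).

```rust
/// Rasterize one horizontal slice of an SDF into an inside/outside
/// bitmap: a pixel is "solid" where the field is non-positive.
/// The slice covers [-half, half]^2 at height `z`, with n*n pixels.
fn slice_bitmap(sdf: impl Fn([f32; 3]) -> f32, z: f32, n: usize, half: f32) -> Vec<bool> {
    let mut img = Vec::with_capacity(n * n);
    for j in 0..n {
        for i in 0..n {
            // Sample at pixel centers.
            let x = -half + 2.0 * half * (i as f32 + 0.5) / n as f32;
            let y = -half + 2.0 * half * (j as f32 + 0.5) / n as f32;
            img.push(sdf([x, y, z]) <= 0.0);
        }
    }
    img
}
```

Each pixel is an independent field evaluation, which is exactly why this maps so well onto a GPU fragment shader.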
Ah yes! I see you mentioned generating G-code directly. This is what I get for skimming. Again, though, I think I need to try it myself in order to fully understand the problems you ran into. Brb while I go waste a month.
That's correct, but I was speaking in more practical terms. As in, the slicer I'm using consumes triangle meshes, and I want to use that soon-ish instead of reinventing not just CAD, but everything else besides 😁
That's totally right, and a great insight. Never thought of that!
To be clear, my ideas (that kinda failed before they got to implementation) were about generating motion directly from the SDF. There might be good ways that go through some intermediate representation. I don't know.
It's the way of our people 😄 Please let me know how it goes, especially if you prove me wrong about something! Despite going in a different direction myself, I'm still very interested in this topic.
On the topic of creating a mesh from an SDF, I've been thinking about whether treating the model as a series of 2D slices helps the problem at all. I don't know if it does, but I figure it might, because building 2D polygons from implicit surfaces is a simpler problem than creating 3D meshes from them. I figure you could split the mesh problem like this:
Such an algorithm is probably more complicated than I'm making it out to be, but it is possibly good enough for 3D printing purposes, and maybe doesn't have the same drawbacks that grid-based mesh generation would have. But I guess my question is this: does such an idea seem worth investigating? Am I missing something from your experience going down this path?
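As a rough sketch of the first step such a slice-based approach would need (entirely my own illustration, with hypothetical names): sample one 2D slice of the field on a grid and flag the cells whose corners straddle the surface. A real implementation would then trace those cells into closed polygons, and all the feature-preservation difficulty would live in that tracing step.

```rust
/// Find grid cells crossed by the contour of a 2D implicit function:
/// a cell is flagged when its four corner samples have mixed signs.
/// The grid covers [-half, half]^2 with n*n cells.
fn contour_cells(f: impl Fn(f32, f32) -> f32, n: usize, half: f32) -> Vec<(usize, usize)> {
    let coord = |i: usize| -half + 2.0 * half * i as f32 / n as f32;
    let mut cells = Vec::new();
    for j in 0..n {
        for i in 0..n {
            let corners = [
                f(coord(i), coord(j)),
                f(coord(i + 1), coord(j)),
                f(coord(i), coord(j + 1)),
                f(coord(i + 1), coord(j + 1)),
            ];
            let inside = corners.iter().filter(|d| **d < 0.0).count();
            if inside > 0 && inside < 4 {
                cells.push((i, j));
            }
        }
    }
    cells
}
```

This is essentially the cell-classification half of marching squares; the per-cell lookup table that turns flagged cells into line segments is the other half.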
I'm still very interested in this idea. Given that I can compute image slices of the model for any height on the GPU very quickly, it seems to me that a slicer which runs on bitmap slices would be faster, because it wouldn't have to do computations on collections of triangles. I don't know the slightest thing about how tool paths are generated, but I may spend some time looking at the open-source slicers out there to see if I could hack an SDF-based backend onto one of them. At the very least it'd be fun to play around with.
Oh, that's an impressive result! And nice GUI! I'm also planning to look into egui (#4).
Interesting idea! It sounds workable, but I doubt there's much of a point to it, compared to the established algorithms. The reason is simply that the complicated algorithms are complicated precisely because preserving features of the geometry (like sharp edges) makes them complicated. Here are the problems I see:
Maybe I'm wrong here, but this algorithm with good feature preservation sounds like a PhD-thesis-level problem to me. This algorithm without feature detection still sounds much more complicated than Surface Nets or Marching Cubes. I don't mean to discourage you. If you see something there, go for it. Just keep in mind, this field of research is 30 years old, and there don't seem to be any really great solutions, which is one of the things that turned me away from SDFs in the first place.
This seems like a much more promising approach to me. My (decidedly non-expert) opinion is that you're much more likely to contribute something worthwhile in that area than by trying to reinvent isosurface extraction.
Have you considered rendering the model by sphere marching, rather than building a mesh and using traditional rendering techniques? Doing so could push much of the computation to the GPU, speeding up the process, while preserving the desirable aspects of signed distance field "isosurfaces", such as domain/boolean operations.
I stumbled upon your project while contemplating building a version of my own, in order to model some things for 3D printing. I really like jscad, but found it to be way too unstable, I think in part because the boolean operations are all done on the mesh rather than on some implicit function. In the past I've done rendering similar to Inigo Quilez' signed distance field ray tracing (https://www.iquilezles.org/www/articles/distfunctions/distfunctions.htm) and figured a similar approach could be used to make 3D prints, so long as a good meshing algorithm could be found. Which led me here.
I'd be interested if you could elaborate on what is pushing you away from distance fields in favor of "simpler" algorithms. They seem very appealing at first glance, especially compared to other existing CAD approaches.
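For reference, the sphere-marching loop in question is short; here is a minimal sketch (illustrative names, with arbitrary iteration and distance limits):

```rust
/// Minimal sphere-marching loop: step along the ray by the distance
/// the SDF reports. This is safe because an SDF (even a lower-bound
/// one) never overestimates the distance to the nearest surface.
fn sphere_march(origin: [f32; 3], dir: [f32; 3], sdf: impl Fn([f32; 3]) -> f32) -> Option<f32> {
    let mut t = 0.0;
    for _ in 0..128 {
        let p = [
            origin[0] + dir[0] * t,
            origin[1] + dir[1] * t,
            origin[2] + dir[2] * t,
        ];
        let d = sdf(p);
        if d < 1e-4 {
            return Some(t); // close enough: hit at ray distance t
        }
        t += d;
        if t > 100.0 {
            break; // left the scene without hitting anything
        }
    }
    None
}
```

Running this per pixel in a fragment shader, with `dir` derived from the camera, is the whole rendering technique; shading then only needs a surface normal, usually estimated from finite differences of the same SDF.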