
Marching Directly #3

Closed
Kethku opened this issue Nov 10, 2021 · 10 comments

@Kethku

Kethku commented Nov 10, 2021

Have you considered rendering the model by sphere marching rather than building a mesh and using traditional rendering techniques? Doing so could push much of the computation to the GPU, speeding up the process while preserving the desirable aspects of signed distance field "iso surfaces", such as domain/boolean operations.

I stumbled upon your project while contemplating building a version of my own, in order to model some things for 3D printing. I really like jscad, but found it to be way too unstable, I think in part because the boolean operations are all done on the mesh rather than on some implicit function. In the past I've done rendering similar to Inigo Quilez's signed distance field ray tracing (https://www.iquilezles.org/www/articles/distfunctions/distfunctions.htm) and figured a similar approach could be used to make 3D prints, so long as a good meshing algorithm could be found. Which led me here.

I'd be interested if you could elaborate on what is pushing you away from distance fields and towards "simpler" algorithms. They seem very appealing at first glance, especially when compared to other existing CAD approaches.
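For context, the core sphere-marching loop is tiny. Here is a minimal, self-contained sketch (hypothetical code, not taken from any of the projects mentioned here): step along the ray by exactly the distance the SDF reports, which is guaranteed safe, until you're within tolerance of the surface.

```rust
// Simplest possible SDF: a sphere at the origin.
fn sdf_sphere(p: [f32; 3], radius: f32) -> f32 {
    (p[0] * p[0] + p[1] * p[1] + p[2] * p[2]).sqrt() - radius
}

/// Returns the distance along the ray at which the surface was hit, if any.
fn sphere_march(origin: [f32; 3], dir: [f32; 3], radius: f32) -> Option<f32> {
    let mut t = 0.0;
    for _ in 0..128 {
        let p = [
            origin[0] + dir[0] * t,
            origin[1] + dir[1] * t,
            origin[2] + dir[2] * t,
        ];
        let d = sdf_sphere(p, radius);
        if d < 1e-4 {
            return Some(t); // close enough to the surface
        }
        t += d; // safe step: the SDF guarantees no surface within `d`
        if t > 100.0 {
            break; // ray escaped the scene
        }
    }
    None
}

fn main() {
    // Ray from z = -5 straight toward a unit sphere at the origin:
    // the hit should land near t = 4.
    let hit = sphere_march([0.0, 0.0, -5.0], [0.0, 0.0, 1.0], 1.0).unwrap();
    assert!((hit - 4.0).abs() < 1e-2);

    // A ray pointing away from the sphere never hits.
    assert!(sphere_march([0.0, 0.0, -5.0], [0.0, 0.0, -1.0], 1.0).is_none());
}
```

In a pixel shader this loop runs once per pixel per frame, which is where the per-frame cost discussed below comes from.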

@Kethku
Author

Kethku commented Nov 10, 2021

As a quick example of rendering the distance field directly: https://github.com/Kethku/rusty-marcher is the project I built before. It has a "shader" crate which is compiled to a SPIR-V shader using rust-gpu, and a normal library crate for computation on the CPU. This means I can render the app efficiently in a pixel shader while also querying the distance field for physics and such, using the exact same Rust code.

Unfortunately the app doesn't have much in the way of documentation, as it was just a test, but I may convert it into something resembling a CAD system sometime in the near future.

@hannobraun
Owner

Hey @Kethku, thank you for your interest!

Have you considered rendering the model by sphere marching rather than building a mesh and using traditional rendering techniques?

Yes, I have, and indeed earlier (non-public) iterations have used ray marching (which I think is the same as sphere marching?) to render the SDFs. I stopped doing that for multiple reasons:

  1. Performance. It got slow pretty quickly as SDFs became more complex, but also because some edge cases are inherently slow to render using ray marching (think about what happens if the camera points in a direction that's parallel or almost parallel to a surface: lots of small steps until the surface ends). The main problem here was that the performance cost had to be paid per frame, making interactions with the scene (i.e. zooming, moving, rotating) noticeably slow.
  2. Complexity. I don't remember the details, but I think my first approach, generating the shader code from the model, got unwieldy for some reason. I experimented with building a kind of interpreter on the GPU, passing instructions via the uniform buffer, but that was super-slow. (I've since learned that this approach can work, but that insight comes too late. I've already moved on.)
  3. Saving work. One of my use cases is 3D printing, and I need to generate triangle meshes for that anyway (for export to 3MF, STL, etc.). If I'm going to write and maintain that code anyway, why not re-use it for the rendering too? Yes, isosurface extraction is also slow, but it's a one-time cost. After that, interaction with the scene is instant, at least for the number of triangles I've worked with so far.

So I decided to use SDFs for the modeling, and convert them into a triangle mesh for interaction with the outside world (i.e. rendering and file export). That sounded very appealing to me, for the following reasons:

  1. SDFs are just neat. The math seems elegant and CSG is trivial.
  2. You just need a single algorithm to interface the elegant SDF world with the nasty reality of triangle meshes. Once you've covered that base, you can add new SDF-based geometry without having to write more complicated code.
  3. I had some ideas on how to integrate more advanced CAD features, like selecting and manipulating existing features, like chamfering an edge, or extruding a sketch from a face. Working on that sounded fun.
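To illustrate the "CSG is trivial" point: the boolean operations really are one-liners on distance values. A generic sketch (not this project's actual code):

```rust
// CSG on SDFs: union = min, intersection = max, difference = max(a, -b).
fn sphere(p: [f32; 3], center: [f32; 3], r: f32) -> f32 {
    ((p[0] - center[0]).powi(2) + (p[1] - center[1]).powi(2) + (p[2] - center[2]).powi(2))
        .sqrt()
        - r
}

fn union_sdf(a: f32, b: f32) -> f32 { a.min(b) }
fn intersection_sdf(a: f32, b: f32) -> f32 { a.max(b) }
fn difference_sdf(a: f32, b: f32) -> f32 { a.max(-b) }

fn main() {
    // Two unit spheres: one at the origin, one offset along x so they
    // don't overlap the test point.
    let p = [0.0, 0.0, 0.0];
    let a = sphere(p, [0.0, 0.0, 0.0], 1.0); // -1.0: p is inside
    let b = sphere(p, [2.0, 0.0, 0.0], 1.0); //  1.0: p is outside
    assert!(union_sdf(a, b) <= 0.0);       // p is in the union
    assert!(intersection_sdf(a, b) > 0.0); // but not in the intersection
    assert!(difference_sdf(a, b) <= 0.0);  // and in a-minus-b
}
```

The catch, as the rest of this comment explains, is that the *results* of these operations are no longer exact distance fields.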

The short answer to your question of why I turned towards other approaches is that, with more experience with SDFs, all of those supposed advantages have been negated. Let me elaborate.

1. SDFs aren't that neat

Actually, turns out the math isn't that elegant. For more information, see the discussion about "exact" and "bound" SDFs in this article by Inigo Quilez, or his article on interior distance.

Learning that contributed to me kinda falling out of love with SDFs, which made me want to explore other approaches. This isn't rational decision-making, of course, but there are more rational reasons here too:

  • For one, I didn't learn about this by carefully reading those articles. I first stumbled upon it when I observed a bug (the spacer suddenly grew a bottom at low resolutions), and later when I followed up on weird readings from my debug output. I think there are ways to work around those specific problems, but I fear they were just signs of more problems to come.
  • I had some ideas about bypassing the slicer/CAD and generating G-Code directly from an SDF. Those ideas can't work if the gradients in the SDF are all messed up (as they invariably become when using even the most basic CSG operations).
  • It might be possible to work around all that on the user's side when modeling, but then the CAD program becomes something that you have to hold just right, and that fails in mysterious ways if you don't.
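To make the "not an exact distance" problem concrete, here's a standalone 2D sketch (circles instead of spheres, purely illustrative): inside the overlap of two unioned circles, the min() combination reports the distance to the nearest primitive surface, even where that surface has been swallowed by the union, so the reported value is only a bound.

```rust
fn circle(p: [f32; 2], c: [f32; 2], r: f32) -> f32 {
    ((p[0] - c[0]).powi(2) + (p[1] - c[1]).powi(2)).sqrt() - r
}

fn main() {
    let (a, b) = ([0.5f32, 0.0], [-0.5f32, 0.0]);
    let p = [0.0f32, 0.0]; // deep inside the overlap of both unit circles

    // What min-union reports: the surface is 0.5 away.
    let reported = circle(p, a, 1.0).min(circle(p, b, 1.0));
    assert!((reported + 0.5).abs() < 1e-6);

    // True distance to the union's boundary, by brute force: sample both
    // circles and keep only points NOT inside the other circle.
    let mut true_dist = f32::MAX;
    for i in 0..10_000 {
        let t = i as f32 / 10_000.0 * std::f32::consts::TAU;
        for (c, other) in [(a, b), (b, a)] {
            let q = [c[0] + t.cos(), c[1] + t.sin()];
            if circle(q, other, 1.0) >= 0.0 {
                true_dist = true_dist.min((q[0] * q[0] + q[1] * q[1]).sqrt());
            }
        }
    }
    // The boundary is actually ~0.866 away, not 0.5: the interior
    // distance (and its gradient) is wrong after the union.
    assert!((true_dist - 0.866).abs() < 1e-2);
}
```

Anything that relies on accurate interior distances or gradients, like the G-Code idea above, inherits this error.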

2. Good isosurface extraction is crazy hard

True, you "just" need the one algorithm, but most of them aren't that good around sharp edges. Those that are, are complex and thus hard to implement. And sharp edges are just one problem. Check out these notes for more information. As far as I can tell (and I'm not an expert), there's no perfect algorithm.

If you look at CAD programs based on SDFs out in the wild, most just don't produce geometry that is that good. libfive comes closest to perfection (using Manifold Dual Contouring). The closest I ever got was a simplified implementation of Dual Contouring, and the results weren't that great.

libfive proves you can get it under control, but putting in the effort of doing so seemed not worth it, given the other disadvantages. And note that libfive, for all its qualities, is still very basic in its capabilities, as far as CAD goes.

3. Non-CSG CAD is possibly a no-go

All my ideas came to nothing. I couldn't come up with anything that worked on closer inspection, and never attempted any implementations. Plus, all of these ideas would have required more code on a per-primitive basis, further negating the "you just need the one algorithm" benefit.

As far as I can tell (and again, I'm far from an expert), feature detection in SDFs is just an unsolved problem. I saw no evidence that it can be solved. And while CSG is neat, it's not enough, in my opinion.


So yeah, there we go. I hope that wasn't too long, but it's certainly good to have that written down somewhere, for future reference.

@hannobraun
Owner

I'm closing this issue, as I believe the questions have been answered. But please don't let that deter anyone from engaging here further.

@Kethku
Author

Kethku commented Nov 10, 2021

Lots of useful links and details here. I will go over them in the coming days.

One point though: I'm not sure 3D printing requires triangle meshes. In fact, if a slicer could interface with an isosurface directly, it could extract slices that are more accurate (not approximations of a mesh), rather than having to do interior-vs-exterior calculations all the time.

In fact, for SLA printing in particular, extracting image slices out of a distance field is trivial :)
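For what it's worth, that really is just a thresholded sampling pass. A sketch with made-up names: evaluate the SDF on an x/y grid at a fixed z, and a pixel is solid wherever the distance is non-positive.

```rust
fn sdf_sphere(p: [f32; 3]) -> f32 {
    (p[0] * p[0] + p[1] * p[1] + p[2] * p[2]).sqrt() - 1.0
}

/// One bool per pixel: true = inside the model at this layer height.
fn slice_layer(z: f32, res: usize, half_extent: f32) -> Vec<bool> {
    let mut pixels = Vec::with_capacity(res * res);
    for yi in 0..res {
        for xi in 0..res {
            // Map pixel indices (sampled at pixel centers) to model space.
            let x = (xi as f32 + 0.5) / res as f32 * 2.0 * half_extent - half_extent;
            let y = (yi as f32 + 0.5) / res as f32 * 2.0 * half_extent - half_extent;
            pixels.push(sdf_sphere([x, y, z]) <= 0.0);
        }
    }
    pixels
}

fn main() {
    // The equator slice of a unit sphere fills ~pi/4 of a [-1, 1]^2 image...
    let slice = slice_layer(0.0, 256, 1.0);
    let solid = slice.iter().filter(|&&s| s).count() as f32;
    let fill = solid / (256.0 * 256.0);
    assert!((fill - std::f32::consts::FRAC_PI_4).abs() < 0.01);
    // ...and a slice above the sphere is empty.
    assert!(slice_layer(2.0, 256, 1.0).iter().all(|&s| !s));
}
```

On a GPU this is one cheap fragment shader pass per layer, which is what makes the SLA case so attractive.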

That said, these are my back-of-the-napkin thoughts, as opposed to the actual work you have obviously put into this area, so I may just have to go try it myself and come to the same conclusions. Thanks for taking the time to respond!

@Kethku
Author

Kethku commented Nov 10, 2021

Ah yes! I see you mentioned generating G-Code directly. This is what I get for skimming. Again though, I think I need to try it myself in order to fully understand the problems you ran into. Brb while I go waste a month.

@hannobraun
Owner

One point though: I'm not sure 3D printing requires triangle meshes. In fact, if a slicer could interface with an isosurface directly, it could extract slices that are more accurate (not approximations of a mesh), rather than having to do interior-vs-exterior calculations all the time.

That's correct, but I was speaking in more practical terms. As in, the slicer I'm using consumes triangle meshes, and I want to use that soon-ish, instead of reinventing not just CAD, but everything else besides 😁

In fact, for SLA printing in particular, extracting image slices out of a distance field is trivial :)

That's totally right, and a great insight. Never thought of that!

I see you mentioned generating G-Code directly.

To be clear, my ideas (that kinda failed before they got to implementation) were about generating motion directly from the SDF. There might be good ways that go through some intermediate representation. I don't know.

Again though, I think I need to try it myself in order to fully understand the problems you ran into. Brb while I go waste a month.

It's the way of our people 😄

Please let me know how it goes, especially if you prove me wrong about something! Despite going in a different direction myself, I'm still very interested in this topic.

@Kethku
Author

Kethku commented Nov 29, 2021

[GIF: DIYCad, live rendering and manipulation]
I've been making slow but steady progress. So far I've just been working on the modeling half, but I was able to adapt your threemf library to export an admittedly blocky, but printable, model.

I took a detour for a bit around rendering GUI overlays for manipulating, and more easily understanding, what each constant does to the resulting model. Above is a gif demonstrating that live rendering and manipulation, very much like what I describe above about marching the SDF to get a fast render out. Because the shader compiles to exactly the same code on the CPU, I am then able to march over the field using a simple Marching Cubes implementation to create the actual mesh.

Actually creating the final mesh takes a while because the model is pretty complex, but I think with some doing I could do the actual interior/exterior calculation on the GPU as well, so that the resulting mesh is created just by indexing into a 3D texture during the Marching Cubes run.
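The "index into a 3D texture" idea can be sketched on the CPU, with a plain Vec standing in for the texture (hypothetical names throughout): bake one distance sample per grid point up front, and the Marching Cubes pass then only does lookups instead of re-evaluating the SDF.

```rust
/// A baked grid of SDF samples; on the GPU this would be a 3D texture.
struct SdfGrid {
    res: usize,       // samples per axis
    values: Vec<f32>, // res^3 distance samples, x-fastest order
}

impl SdfGrid {
    fn bake(res: usize, half_extent: f32, sdf: impl Fn([f32; 3]) -> f32) -> Self {
        // Map a sample index to a model-space coordinate in [-half_extent, half_extent].
        let to_coord = |i: usize| i as f32 / (res - 1) as f32 * 2.0 * half_extent - half_extent;
        let mut values = Vec::with_capacity(res * res * res);
        for zi in 0..res {
            for yi in 0..res {
                for xi in 0..res {
                    values.push(sdf([to_coord(xi), to_coord(yi), to_coord(zi)]));
                }
            }
        }
        Self { res, values }
    }

    /// What a Marching Cubes pass would call per cell corner.
    fn inside(&self, x: usize, y: usize, z: usize) -> bool {
        self.values[(z * self.res + y) * self.res + x] <= 0.0
    }
}

fn main() {
    let grid = SdfGrid::bake(16, 2.0, |p| {
        (p[0] * p[0] + p[1] * p[1] + p[2] * p[2]).sqrt() - 1.0
    });
    // The grid center is inside the unit sphere, the corner is not.
    assert!(grid.inside(8, 8, 8));
    assert!(!grid.inside(0, 0, 0));
}
```

Keeping the raw distances (rather than just inside/outside bits) also lets the meshing pass interpolate edge crossings from the same data.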

@Kethku
Author

Kethku commented Nov 29, 2021

On the topic of creating a mesh from an SDF, I've been thinking about whether treating the model as a series of 2D slices helps the problem at all. I don't know if it does, but I figure it might, because building 2D polygons from implicit surfaces is a simpler problem than creating 3D meshes from them. I figure you could split the mesh problem like this:

  1. Scan over the model, collecting a 2D slice at each layer-height interval.
  2. Turn those slices into polygons (I suppose this is a question mark atm, but I have interacted with pretty good image-to-polygon systems in gamedev contexts which may be applicable).
  3. Either render those slices into images for SLA printing, or bridge each polygon to the next with a triangle strip (this is more hand-wavy than I'd like).

Such an algorithm is probably more complicated than I'm making it out to be, but it's possibly good enough for 3D printing purposes, and maybe doesn't have the same drawbacks that grid-based mesh generation has. But I guess my question is this: does such an idea seem worth investigating? Am I missing something from your experience going down this path?
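Steps 1 and 2 of that pipeline can be sketched with 2D Marching Squares. This is a simplified, assumption-laden sketch (linear edge interpolation, no sharp-feature preservation, a circle SDF as the stand-in model): sample one slice onto a grid, then emit contour segments per cell, which a later pass would chain into polygons.

```rust
type Pt = [f32; 2];

fn circle(p: Pt) -> f32 {
    (p[0] * p[0] + p[1] * p[1]).sqrt() - 1.0
}

/// Returns contour line segments for one slice, in model coordinates.
fn marching_squares(res: usize, half: f32, sdf: impl Fn(Pt) -> f32) -> Vec<(Pt, Pt)> {
    let coord = |i: usize| i as f32 / res as f32 * 2.0 * half - half;
    // Where the contour crosses an edge, by linear interpolation.
    let cross = |a: Pt, b: Pt| {
        let (da, db) = (sdf(a), sdf(b));
        let t = da / (da - db);
        [a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])]
    };
    // For each of the 16 corner-sign cases: which edge pairs hold a segment.
    // Edges: 0 = bottom, 1 = right, 2 = top, 3 = left.
    const CASES: [&[(usize, usize)]; 16] = [
        &[], &[(3, 0)], &[(0, 1)], &[(3, 1)],
        &[(1, 2)], &[(3, 0), (1, 2)], &[(0, 2)], &[(3, 2)],
        &[(2, 3)], &[(0, 2)], &[(0, 1), (2, 3)], &[(1, 2)],
        &[(1, 3)], &[(0, 1)], &[(3, 0)], &[],
    ];
    let mut segments = Vec::new();
    for yi in 0..res {
        for xi in 0..res {
            let c = [
                [coord(xi), coord(yi)],
                [coord(xi + 1), coord(yi)],
                [coord(xi + 1), coord(yi + 1)],
                [coord(xi), coord(yi + 1)],
            ];
            // 4-bit case index: bit i set if corner i is inside.
            let case = (0..4).fold(0, |bits, i| bits | (((sdf(c[i]) <= 0.0) as usize) << i));
            let edges = [
                cross(c[0], c[1]), cross(c[1], c[2]),
                cross(c[2], c[3]), cross(c[3], c[0]),
            ];
            for &(e1, e2) in CASES[case] {
                segments.push((edges[e1], edges[e2]));
            }
        }
    }
    segments
}

fn main() {
    let segs = marching_squares(64, 1.5, circle);
    // The total contour length should approximate the unit circle's 2*pi.
    let len: f32 = segs
        .iter()
        .map(|(a, b)| ((a[0] - b[0]).powi(2) + (a[1] - b[1]).powi(2)).sqrt())
        .sum();
    assert!((len - std::f32::consts::TAU).abs() < 0.1);
}
```

Note this has exactly the feature-rounding behavior discussed below: corners of a square SDF would come out chamfered, which is where the 2D Dual Contouring comparison in the next reply comes in.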

@Kethku
Author

Kethku commented Nov 29, 2021

One point though: I'm not sure 3D printing requires triangle meshes. In fact, if a slicer could interface with an isosurface directly, it could extract slices that are more accurate (not approximations of a mesh), rather than having to do interior-vs-exterior calculations all the time.

I'm still very interested in this idea. Given that I can compute image slices of the model for any height on the GPU very quickly, it seems to me that a slicer which runs on bitmap slices would be faster, because it wouldn't have to do computations on collections of triangles. I don't know the slightest thing about how tool paths are generated, but I may spend some time looking at the open source slicers out there to see if I could hack an SDF-based backend onto one. At the very least it'd be fun to play around with.

@hannobraun
Owner

Oh, that's an impressive result! And nice GUI! I'm also planning to look into egui (#4).

does such an idea seem worth investigating? Am I missing something from your experience going down this path?

Interesting idea! It sounds workable, but I doubt that there's much of a point to it, compared to the established algorithms. The reason is simply that the complicated algorithms are complicated because preserving features of the geometry (like sharp edges) makes them complicated.

Here are the problems I see:

  1. If you want to preserve sharp features when polygonizing the layers, you'll probably end up implementing an easier-to-reason-about-because-2D version of (Manifold) Dual Contouring. And I don't think any of the other problems that the established algorithms have, like generating manifold geometry, go away in 2D either.
  2. For connecting the layers, you'll probably run into enough corner cases to make that a headache. It doesn't sound impossible by any means, but there'll probably be enough situations where you don't know how to connect your n layer-1 polygons to your m layer-2 polygons.
    You also can't just connect the vertices of adjacent layers. If thorn-like features grow out of a flat surface, then you need to consider the vertices of the thorns' bases to triangulate the flat surface, meaning you need to triangulate a polygon with holes, which is a non-trivial problem in its own right.
  3. Even if 2. doesn't turn into too much of an issue, you'll still have the problem of preserving geometry between your connected layers. I don't even know how you'd attempt this. I suspect it won't be any easier than just implementing (Manifold) Dual Contouring in the first place.

Maybe I'm wrong here, but this algorithm with good feature preservation sounds like a PhD-thesis-level problem to me. Even without feature preservation, it still sounds much more complicated than Surface Nets or Marching Cubes.

I don't mean to discourage you. If you see something there, go for it. Just keep in mind, this field of research is 30 years old, and there don't seem to be any really great solutions, which is one of the things that turned me away from SDFs in the first place.

I'm still very interested in this idea. Given that I can compute image slices of the model for any height on the GPU very quickly, it seems to me that a slicer which runs on bitmap slices would be faster, because it wouldn't have to do computations on collections of triangles. I don't know the slightest thing about how tool paths are generated, but I may spend some time looking at the open source slicers out there to see if I could hack an SDF-based backend onto one. At the very least it'd be fun to play around with.

This seems like a much more promising approach to me. My (decidedly non-expert) opinion is that you're much more likely to contribute something worthwhile in that area than by trying to reinvent isosurface extraction.
