Raytracing Renderer using Web Workers Sandbox #7671
Conversation
@zz85 can you also check in raytracing_sandbox_workers.html?? http://ci.threejs.org/api/pullrequests/7671/examples/index.html#raytracing_sandbox_workers |
ah yes, good catch @gero3 ! |
cace5a7 to 0895798 (Compare)
- Removes examples/raytrace_scene.js
- In a way this is more efficient because free workers can be tasked to take on more jobs and load would be more balanced
var scene = new THREE.ObjectLoader().parse( json ); 😊 |
It doesn't detect the number of cores or allow one to change the number of workers. It would be cool to do that, to see the speed difference with more or fewer workers. Maybe just a dropdown that changes the number of workers and re-triggers a render. BTW, modern raytracers are moving towards GPU compute these days -- see Blender's Cycles, NVIDIA's Iray, Redshift and Otoy's Octane for example. But I think without compute shaders in WebGL, it isn't yet feasible to go that route. Funny thing: I bet that if it did all the hard work on the GPU, a JavaScript-based raytracer would actually be competitive with a pure C++ raytracer or even a C++/CUDA-based one. |
@bhouston you could append a hashtag, e.g. |
Had no idea that you could query browsers for that kind of information. Very interesting. Is it some kind of estimation? I should read up on the Navigator API I guess. |
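For reference, the API being discussed is navigator.hardwareConcurrency, which reports the number of logical processors the browser is willing to expose (some browsers clamp or round it for privacy, so it is indeed an estimate of sorts). A minimal sketch of picking a worker count, where the fallback value of 4 is an arbitrary assumption for browsers without the API:

```javascript
// Pick a worker count from navigator.hardwareConcurrency, with a fallback.
// The reported value is the number of logical cores the browser exposes
// (possibly clamped); the fallback of 4 is an arbitrary assumed default.
function pickWorkerCount(fallback = 4) {
  const cores =
    typeof navigator !== 'undefined' && navigator.hardwareConcurrency;
  return cores || fallback;
}
```

A dropdown could then offer anything from 1 up to pickWorkerCount() workers and re-trigger the render when changed.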
@zz85 sweet! It works. BTW a hashtag may not be the best way to go because by default we use hashtags in the example listing, e.g... http://ci.threejs.org/api/pullrequests/7671/examples/index.html#raytracing_sandbox_workers |
- in JSON format
- unfortunately object3d.toJSON doesn't seem to serialize positions
- kinda works but something is still wrong
@bhouston I know.. that's why workers are now elastic ;) |
I initially thought it'd be nice if the state (scene + camera) could be shared easily across all the workers. That would allow a scene to be modified from one main place and the web workers would magically render it. Unfortunately that might have to wait. One easy workaround for now to add interactivity would be to send update code snippets to the workers to be eval-ed. Anyways, code is now ready for review / merging :) @mrdoob |
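The "elastic" worker behavior mentioned in the commits can be sketched as a shared job queue: each free worker pulls the next block of scanlines, so faster workers naturally take on more jobs and the load balances itself. This is an illustrative sketch, not the PR's code; the workers here are hypothetical stand-ins with a renderBlock(block, done) method, where a real version would postMessage to an actual Web Worker:

```javascript
// "Elastic" scheduling sketch: one shared queue of scanline blocks; each
// free worker pulls the next block, so faster workers take on more jobs.
// The worker objects are hypothetical stand-ins with renderBlock(block, done).
function renderWithWorkers(workers, totalRows, blockSize, onComplete) {
  const queue = [];
  for (let y = 0; y < totalRows; y += blockSize) {
    queue.push({ start: y, end: Math.min(y + blockSize, totalRows) });
  }
  let pending = queue.length;
  function dispatch(worker) {
    const block = queue.shift();
    if (!block) return; // nothing left; this worker idles
    worker.renderBlock(block, () => {
      pending--;
      if (pending === 0) onComplete();
      else dispatch(worker); // a freed worker immediately grabs the next block
    });
  }
  workers.forEach(dispatch);
}
```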
@zz85
Is it using a spatial hierarchy yet? Without one, we won't see it scale to real-world scenes, regardless of the implementation. To compete with anything "serious", the per-ray complexity must be cut to O(log N)! I guess it'd be easiest to get that part in place before even considering a GPU port. There are two parts involved: constructing the hierarchy and traversing it for rendering. A construction algorithm that is both simple to implement and fairly efficient is given by Eisemann 2011. More papers on BVH construction that should work for background information:
NVIDIA's Optix SDK implements several different construction algorithms. Unfortunately, that's the closed-source part of it. Binary BVH stack-based traversal works as follows:
Important optimization: Whenever a triangle hit is encountered, truncate the maximum length of the ray (we are not interested in farther away triangles, we are looking for the nearest). We probably can't implement a traversal stack in GLSL ES 1, therefore the hierarchy traversal must be done with a state machine for a WebGL1 GPU implementation, see Hapala et al 2011. |
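To make the stack-based traversal and the tmax truncation concrete, here is a minimal sketch in plain JavaScript. It is illustrative only, not the PR's code: the node layout is assumed, spheres stand in for triangles to keep the primitive test short, and no near-child ordering is done.

```javascript
// Stack-based traversal of a binary AABB BVH (illustrative sketch only).
// Assumed layout -- inner nodes: { min, max, left, right }; leaves: { min, max, prims }.
// Spheres stand in for triangles; a real tracer would use a ray-triangle test.

// Slab test: does the ray segment [0, tmax] overlap the node's box?
function intersectAABB(node, ro, rd, tmax) {
  let t0 = 0, t1 = tmax;
  for (let i = 0; i < 3; i++) {
    const inv = 1 / rd[i];
    let tNear = (node.min[i] - ro[i]) * inv;
    let tFar = (node.max[i] - ro[i]) * inv;
    if (tNear > tFar) { const tmp = tNear; tNear = tFar; tFar = tmp; }
    t0 = Math.max(t0, tNear);
    t1 = Math.min(t1, tFar);
    if (t0 > t1) return false;
  }
  return true;
}

// Nearest positive hit distance against a sphere { c, r }, or Infinity on miss.
// Assumes rd is normalized.
function intersectSphere(s, ro, rd) {
  const oc = [ro[0] - s.c[0], ro[1] - s.c[1], ro[2] - s.c[2]];
  const b = oc[0] * rd[0] + oc[1] * rd[1] + oc[2] * rd[2];
  const c = oc[0] * oc[0] + oc[1] * oc[1] + oc[2] * oc[2] - s.r * s.r;
  const disc = b * b - c;
  if (disc < 0) return Infinity;
  const t = -b - Math.sqrt(disc);
  return t > 0 ? t : Infinity;
}

function traverseBVH(root, ro, rd) {
  const stack = [root];
  let tmax = Infinity, nearest = null;
  while (stack.length) {
    const node = stack.pop();
    // The truncation pays off here: boxes farther than tmax are pruned.
    if (!intersectAABB(node, ro, rd, tmax)) continue;
    if (node.prims) {
      for (const p of node.prims) {
        const t = intersectSphere(p, ro, rd);
        if (t < tmax) { tmax = t; nearest = p; } // shorten the ray on every hit
      }
    } else {
      stack.push(node.left, node.right);
    }
  }
  return nearest && { prim: nearest, t: tmax };
}
```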
@tschw I would not encourage people to make the huge sacrifices necessary to make this work under WebGL 1.0. I've seen well-funded, very talented people struggle under those types of limitations -- specifically I am thinking of NVIDIA's Gelato renderer - https://en.wikipedia.org/wiki/Gelato_(software) Yes, it worked (I used it back in 2004) but not that well, given how much talent and effort was spent on it. GPU rendering has really only had real success very recently (Redshift, Octane, Iray), with people using compute shaders and arbitrary output buffers and all that nice stuff that should come with WebGL 2.0. |
BTW @tschw a lot of CPU-based production path tracers (V-Ray, Cinema 4D, Thea, Corona) are adopting Intel's Embree library to do BVH construction and coherent raycasting: https://embree.github.io/ Not sure if it is possible to compile Embree to JavaScript with Emscripten... |
Side topic: once in a while the topic of raytracing on the GPU comes up... :) I believe @kig has spent some amount of time experimenting with raytracing on the GPU http://fhtr.org/ - he may have things to chip in on that... |
- randomized painting behind a flag
I don't think I am. The state machine algorithm for rendering is really quite simple, just not as fast as the stack-based variant. In WebGL 2, dynamic data structures are hopefully implementable, and traversal would then end up ~twice as fast - so it's just something to look forward to, not a requirement. Even half the speed will still be a lot faster than JavaScript on the CPU. But, as said, there's really no point in porting it to the GPU until we get rid of brute force intersection.
OK, 2004 is pretty much the stone age for GPGPU applications. No talent is wasted when the work gets reused (even if that one product performed badly), and parallelism is the future. Clock rates won't get much higher, but, as we all know, no computer, renderer, or graphics card can ever be fast enough. Put differently: talent must be "spent" on mapping problems to manycore architectures!
Oh yes, there should be some BVH construction algorithms in it somewhere. In fact, the list of contributors overlaps with the authors of above papers. Sven Woop is also well-known for his thesis on a ray tracing hardware architecture.
Probably won't work: Emscripten's target is JavaScript, not Intel assembly. Even if it compiles, I doubt it will perform all too well on that "platform" (single-threaded, plus some overhead, 4-way SIMD at best). Probably several minutes for a high quality BVH build - clearly too much to run in a browser. I guess one could use it in a Node.js tool to precalculate static BVH layers from a Three.js scene, or just one static BVH if all we want is to path trace a scene while moving the camera...

A simple BVH builder will already be a huge advantage, for a start. Note that a BVH is a spatial database index. The quality of the index therefore determines how fast the ray intersection queries are, and higher quality indexes take more time to build. It has absolutely nothing to do with image quality: the result of a query must always be correct; the better the index, the quicker we get an answer.

For real-time ray tracing of Three.js scenes, including ones that can contain moving objects (maybe as part of more complex, partly rasterized renders), we'd need a very fast, probably GPU-based, low quality BVH builder. This is the part that may have to wait for WebGL 2. High quality BVH construction can be performed offline and loaded.
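As a concrete starting point for "a simple BVH builder", here is a minimal top-down median-split builder sketch in plain JavaScript - an assumption-laden illustration, nowhere near Embree quality. It assumes each primitive carries a precomputed min/max box and a centroid, and splits along the widest axis at the median centroid:

```javascript
// Naive top-down median-split BVH builder (illustrative sketch only).
// Assumed primitive shape: { min: [x,y,z], max: [x,y,z], centroid: [x,y,z] }.
function buildBVH(prims, leafSize = 2) {
  // Compute the bounding box of all primitives in this node.
  const min = [Infinity, Infinity, Infinity];
  const max = [-Infinity, -Infinity, -Infinity];
  for (const p of prims) {
    for (let i = 0; i < 3; i++) {
      min[i] = Math.min(min[i], p.min[i]);
      max[i] = Math.max(max[i], p.max[i]);
    }
  }
  if (prims.length <= leafSize) return { min, max, prims };
  // Split along the widest axis at the median centroid.
  let axis = 0;
  for (let i = 1; i < 3; i++) {
    if (max[i] - min[i] > max[axis] - min[axis]) axis = i;
  }
  const sorted = prims.slice().sort((a, b) => a.centroid[axis] - b.centroid[axis]);
  const mid = sorted.length >> 1;
  return {
    min, max,
    left: buildBVH(sorted.slice(0, mid), leafSize),
    right: buildBVH(sorted.slice(mid), leafSize),
  };
}
```

A median split builds a mediocre index quickly; a surface area heuristic (SAH) builder would trade build time for faster queries, which is exactly the index-quality trade-off described above.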
Commercial success and technical feasibility are two very different pairs of shoes: Iray is built on top of Optix and I evaluated that technology over three years ago. BVH construction was already extremely fast, even with the high quality algorithms. It also allowed multiple construction algorithms to be combined for real-time rendering - similar to what I have sketched out above. |
The best triangle-soup ray tracer I've seen for WebGL is the (sadly bitrotted) http://cg.alexandra.dk/?p=1748 It uses a stackless multipass approach to trace an AABB BVH that's constructed in JS. It doesn't seem too complicated: there's about 2000 lines of JS and 1000 lines of GLSL (most of which is the BVH intersection test). They quote performance at roughly 40% of a stack-based OpenGL 3.3 renderer, which is not too shabby given the limitations. |
I agree with most of your comment @tschw, and in the places where I don't, it is debatable. :) I wrote a production quality GPU-based renderer that has been used on quite a number of Hollywood films. It sucked trying to make it work with low-end versions of OpenGL as well as with the memory and PCIe speed constraints of older GPUs - lots of time was wasted on that stuff. But your suggestion of a BVH and the idea of static/dynamic objects brings up two ideas for improvements to ThreeJS beyond just raytracing (which I am not sure is going to be useful in the near term):
ThreeJS as a game engine is missing hit testing in a robust fashion that is fast. Ignoring the ray tracing part, it is useful for doing particle systems, debris, physics and vehicle dynamics and mouse interactions. It usually consists of static objects (level elements) and dynamic objects (characters, vehicles, items, destructables, etc.). This sounds like the BVH you describe that has both a moderate quality precompute and a dynamic element (which can often just be lists if the number of items is small, with elements easily removable from the precompute when necessary but slow to add back.)
Also, having objects (meshes and lights) declarable as static can allow for a lot of cool things. Physics systems can keep static objects immobile, which helps integration with Ammo.js. Light baking of static lights on static objects is very useful in improving quality. Light baking can be done in a number of ways: it can be precomputed in high quality using high resolution shadow maps upon level load and mapped onto an unwrapped texture, or it can be done via a ray tracer/path tracer offline. I am sure there are a ton more opportunities to use both the idea of explicitly static objects and a fast BVH. :) Thus I think that your idea of adding static object support (add |
How about just replacing the old |
yes, I could do that. Initially I thought of keeping the original example so they could be compared. I was also thinking about browsers that don't support web workers... which turns out to be the minority now... http://caniuse.com/#feat=webworkers |
- also raytracing_sandbox_workers.html -> raytracing_sandbox.html
✋ ok, old |
@bhouston A ray tracing renderer with a full shading stage will need someone to give it serious love - no doubt about that. It's not a trivial task, but it isn't impossible either. Should this be the goal? I think it should be up to whoever picks up the project. It would be nice to have - and save you some server time, I guess. Probably not to an extent worth the investment, but let's say someone came along and built it as a Ph.D. thesis - it'd be awesome, wouldn't it? For other tasks, the way I see it, primary rays are rather uninteresting - those can also be found via rasterization. The secondaries are the interesting ones for all kinds of hybrid rendering (this demo part & the author's blog should give an inspiring example). Distance to point / cone / frustum queries are very interesting too: these allow AO (in 3D, not the screen space hack) and plausible soft shadows for spherical / box-shaped area light sources at fairly low sample counts.
Yes, blending static & (then much faster to render) dynamic environment maps, for one instance. But there's probably a lot more :).
Very nice! Just basic shading, but very much proves the point.
The data occupies too many samplers for practical use. Did you look into that Eisemann 2011 paper above? The algorithm is really quite easy. Even more interestingly, it describes a method to encode the entire BVH as the order in which the triangles and their vertices are stored :-).
Yeah, amazing enough! |
Nice. The link to the post of interest is https://directtovideo.wordpress.com/2013/05/08/real-time-ray-tracing-part-2/
Interestingly, I was also reading directtovideo's blog yesterday because of this cool WebGL demo... http://dev.miaumiau.cat/rayTracer/ https://vimeo.com/109401743 @bhouston sounds like a good thing to have :) |
Thanks thanks! |
Wow! It looks terrific on video, but it refuses to run correctly on my box :-(. Particles show up in Chrome but the meshing glitches out completely. Guessing GLSL variables lack initialization - at least it has that typical look... |
@zz85 @tschw WebGL 2.0 is sort of available in Firefox now. :) https://wiki.mozilla.org/Platform/GFX/WebGL2 I think it may be backwards compatible; maybe we could request a WebGL 2.0 context if it is available... |
Is this useful? |
Raytracing Renderer with Web Worker Support.
Live Example
Improved performance for the raytracing_sandbox example. This is a slightly cleaned up version of the initial prototype I've done at http://jabtunes.com/labs/3d/raytraceworkers/#2
I've a couple more considerations + things to do in this PR:
- There are now RaytracingRenderer and RaytracingWorkerRenderer (I imagine the names to be confusing to anyone reading them); while it's possible to share some code, we could either decide to modularize it or keep them separate.
- raytrace_scene.js - ideally I want to keep the scene code as similar to raytracing_sandbox.html and other three.js scenes as possible. That means we could send eval code over, or serialize scenes and cameras with toJSON and send them to the workers. I'm not too sure how to do JSON to scene though..