Geometric Algorithms implemented in Java and Processing v3. Algorithm details can be found in: Computational Geometry - Algorithms and Applications, 3rd Ed. I also used Real-Time Collision Detection as a reference, to at least know what exists in the field.
- QuadTree
- Pseudo-Triangulation by Brute Force method or using QuadTree search
- Gilbert Johnson Keerthi collision detection algorithm
- Simple polygon triangulation
- Marching Squares for Metaballs/MetaDiamonds
- Spatial Hashing or Spatial Partitioning Bin
- Find line segment intersections by Brute Force method
- Plane sweep to find segment intersections by the Bentley-Ottmann algorithm (not reliable, faulty)
One of the first data structures for orthogonal range searching was the quadtree, which the book discusses in Chapter 14 in the context of meshing. Unfortunately, the worst-case behavior of quadtrees is quite bad, but since the Coding Train made its implementation and explanation trivial, I don't care, it works: CodingTrain QuadTree. This stuff is so useful that I even fixed an issue on their code base so it works properly, and then they deleted the Processing version. So much for making someone do work for free for humanity, but I guess it was worth a try. I also hear R-trees are better.
The implementation here is a better version than the Coding Train's QuadTree Processing version was. I used a binary tree (TreeSet) for the internal search, so we get log(n) speed on get, add, and remove operations. I also included a circular range search besides the regular rectangular range search. But don't expect this to magically make your program run super fast. The scenarios where a QuadTree actually increases performance are not many: it depends on the particle distribution and the size of the query range, so it can very easily end up slower than an efficient brute force method.
(canvas size: 1000x400) 1000 points created
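Side note on the circular range search mentioned above: it only needs one extra pruning test compared to the rectangular one, namely whether the query circle overlaps a node's boundary rectangle at all. A minimal sketch of that test, assuming the boundary is stored as a centre plus half-width/half-height (the actual field names in the code may differ):

```java
// Sketch only: circle-vs-rectangle overlap test used to decide whether a
// QuadTree node is worth descending into during a circular range query.
boolean circleIntersectsRect(float cx, float cy, float r,
                             float rx, float ry, float halfW, float halfH) {
  // clamp the circle centre to the rectangle, then compare that distance to the radius
  float nearestX = Math.max(rx - halfW, Math.min(cx, rx + halfW));
  float nearestY = Math.max(ry - halfH, Math.min(cy, ry + halfH));
  float dx = cx - nearestX;
  float dy = cy - nearestY;
  return dx * dx + dy * dy <= r * r;
}
```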
I know, so much work, so many classes for shit improvements, if any. All these algorithms are really getting on my nerves. They promise super efficiency but deliver less than the trivial implementation. I feel like I'm wasting my life on this.
Pseudo-Triangulation, better known as a proximal network connection or something like that. After wasting 3 weeks studying Delaunay Triangulation, I just stopped and started from an easier point. The Pseudo-Triangulation is done by an efficient brute force method, where a line is drawn between two points only when the distance between them is less than some maxDist. If there are n points, this search is done in n*(n-1)/2 steps. The whole explanation can be found at the end of this video: Coding Math: Episode 58 - Array Math. This is far from a true triangulation, as the points don't act as true vertices.
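For reference, a minimal sketch of that pair loop in Processing syntax, assuming an ArrayList<PVector> called points and a float maxDist (the names in the actual sketch may differ):

```java
// Each pair (i, j) is visited exactly once, giving n*(n-1)/2 distance checks.
for (int i = 0; i < points.size() - 1; i++) {
  for (int j = i + 1; j < points.size(); j++) {
    PVector a = points.get(i);
    PVector b = points.get(j);
    if (PVector.dist(a, b) < maxDist) {
      line(a.x, a.y, b.x, b.y);   // connect nearby points with a line segment
    }
  }
}
```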
The sketch looks like it should be pretty inefficient, but it is actually very good. At 150 points and maxDist=100, we get around 60 frames/s. At 300 points with the same maxDist, 25 frames/s; at 1000 points, 2 frames/s. Probably using some QuadTree structure we could improve the performance.
Visualization of this Pseudo-Triangulation is below:
So by plugging in a QuadTree structure I expected at least some minor improvement, but no. Even using every trick I could come up with, the QuadTree search only reached the same level of performance as the efficient Brute Force method, or worse. As a reminder, the efficient Brute Force searches each pair of points only once, and since there are no extra data structures being created, deleted, or inserted, it is quite fast. For the QuadTree to do the same, even if its query returns only the nearby points, we still have to filter out repeated pairs. To do that we need a list to catalogue those points, and between creating the QuadTree object, running the internal search, and cataloguing the pairs to decide whether to add a line or not, those operations are quite expensive even when they only run on a small subset of points. And QuadTrees have the tendency of working with optimal efficiency only for a particular query range size and point distribution. So if the query size cannot be changed, like in this sketch, you get stuck with an inefficient search.
For comparison, if we start with 1000 points and maxDist = 40, we get some 10~15 frames/s. The same as brute force, maybe a little slower. Also, the brute force method needs 499500 inexpensive checks to generate 3427 unique line segments. For the same point set, the QuadTree system does 7886 checks to generate 3433 lines. I don't get this discrepancy for the same point set, but it is not worth investigating 6 extra lines out of a thousand points (most probably it is the way I am generating the ids of each line). For this animation in particular, creating the lines every frame is more computationally expensive than searching for nearby pairs of points. So the only way I can see for the QuadTree to beat the efficient Brute Force method is by merging the Node class with the Point class, so we don't need to create new Points every frame, only update their location. But that would make the QuadTree less generic, and I'm not sure we would really get that much better performance, though it probably would help. I should put out some graphs comparing the QuadTree version with the vanilla one, but I am getting a bit disheartened by the lack of gains for so much inserted code; so many classes for zero. Just watch the video, enjoy, and never underestimate the power of efficient Brute Force.
Nothing fancy here, just the function of the GJK algorithm that returns true/false for collision detection between any convex shapes of any size (polygons, lines, points). This is one of the most efficient algorithms for collision detection, and in its full form it can even compute the minimal distance between the shapes (not done here). A detailed explanation can be found here: A Strange But Elegant Approach to a Surprisingly Hard Problem (GJK Algorithm). Other resources used as reference for my code: https://github.com/kroitor/gjk.c, https://observablehq.com/@esperanc/2d-gjk-and-epa-algorithms.
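To give an idea of the core building block, here is a hedged sketch of the GJK support function (the furthest point of the Minkowski difference A − B along a direction), which the simplex loop is built on top of. Shapes are plain arrays of PVector vertices here; the names and structure in the actual code differ.

```java
// Furthest vertex of a shape along a direction (largest dot product).
PVector furthestPoint(PVector[] shape, PVector dir) {
  PVector best = shape[0];
  float bestDot = best.dot(dir);
  for (PVector v : shape) {
    float d = v.dot(dir);
    if (d > bestDot) { bestDot = d; best = v; }
  }
  return best;
}

// Support point of the Minkowski difference A - B along dir:
// furthest point of A along dir minus furthest point of B along -dir.
PVector support(PVector[] shapeA, PVector[] shapeB, PVector dir) {
  PVector a = furthestPoint(shapeA, dir);
  PVector b = furthestPoint(shapeB, PVector.mult(dir, -1));
  return PVector.sub(a, b);
}
```

GJK then keeps picking support points to build a simplex (up to a triangle in 2D) and asks whether that simplex can enclose the origin; if it can, the shapes overlap.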
So if you need reliable collision detection for polygons, this is the one you need. As a test, I built an Asteroids game with minimal working physics using only this for collision detection, and it is pretty robust. Besides the GJK algorithm, the game also has functions to calculate: polygon area, mass moment of inertia, polygon centre, polygon rotation, screen space-warping (toroidal mapping) without artifacts, elastic collision linear momentum for velocity updates, angular momentum, and more.
A simple triangulation of any convex/non-convex polygon with ordered clockwise/anti-clockwise points. It creates an array of triangles using only the vertex points of the polygon. Best used if more/finer processing is required on the polygon.
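The usual approach for this kind of triangulation is ear clipping; I'm not claiming the code here follows this exact structure, but a minimal sketch of the idea, assuming counter-clockwise ordering, a simple polygon, and no holes, looks like this:

```java
import java.util.ArrayList;
import java.util.List;

// Ear-clipping sketch: repeatedly cut off a convex "ear" that contains no other vertex.
class EarClippingSketch {

  // z-component of (b - a) x (c - a); > 0 means the turn a->b->c is counter-clockwise.
  static float cross(float ax, float ay, float bx, float by, float cx, float cy) {
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
  }

  // true if p lies inside (or on the border of) the CCW triangle (a, b, c)
  static boolean pointInTriangle(float px, float py, float ax, float ay,
                                 float bx, float by, float cx, float cy) {
    return cross(ax, ay, bx, by, px, py) >= 0
        && cross(bx, by, cx, cy, px, py) >= 0
        && cross(cx, cy, ax, ay, px, py) >= 0;
  }

  // Returns triangles as {ax, ay, bx, by, cx, cy}; vertices must be ordered counter-clockwise.
  static List<float[]> triangulate(float[] xs, float[] ys) {
    List<Integer> idx = new ArrayList<>();
    for (int i = 0; i < xs.length; i++) idx.add(i);
    List<float[]> tris = new ArrayList<>();

    while (idx.size() > 3) {
      boolean clipped = false;
      for (int i = 0; i < idx.size(); i++) {
        int p = idx.get((i - 1 + idx.size()) % idx.size());
        int c = idx.get(i);
        int n = idx.get((i + 1) % idx.size());
        // the candidate ear must be convex...
        if (cross(xs[p], ys[p], xs[c], ys[c], xs[n], ys[n]) <= 0) continue;
        // ...and must not contain any other remaining vertex
        boolean ear = true;
        for (int j : idx) {
          if (j == p || j == c || j == n) continue;
          if (pointInTriangle(xs[j], ys[j], xs[p], ys[p], xs[c], ys[c], xs[n], ys[n])) {
            ear = false;
            break;
          }
        }
        if (!ear) continue;
        tris.add(new float[]{xs[p], ys[p], xs[c], ys[c], xs[n], ys[n]});
        idx.remove(i);        // clip the ear
        clipped = true;
        break;
      }
      if (!clipped) break;    // degenerate input; bail out instead of looping forever
    }
    int a = idx.get(0), b = idx.get(1), c = idx.get(2);
    tris.add(new float[]{xs[a], ys[a], xs[b], ys[b], xs[c], ys[c]});
    return tris;
  }
}
```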
A great algorithm to generate contours/isolines for a two-dimensional scalar field on a rectangular grid of numerical values. This version of Marching Squares can efficiently fill with gradient colours using vertex shading, hiding the low resolution of the field values, for some impressive performance. It is an upgraded version of the Coding Train's Marching Squares, which only produces the isoline/contour. See the video tutorial and the rest of the references inside the program to know how it works. Since this is a place for geometric algorithms, I used metaballs/meta-diamonds flying around a pseudo-metaball plasma field generated by the scalar grid field, because why not?! Metaballs in metaballs. This algorithm is far faster than a regular metaball/meta-diamond implementation, and the loss of resolution is hardly noticeable with the gradient colour filling.
[Metaballs/Meta-diamonds with Marching Squares Algorithm video demonstration]
Limitations: this only produces a single isoline for one given threshold value. But this can be easily fixed by creating a loop over multiple threshold values to draw multiple isolines, which is important for height-map isoline sketches. The colouring just needs to be tweaked for each threshold value.
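For anyone wanting to extend it that way, the per-cell core is tiny: classify the four corners of a grid cell against the threshold and interpolate where the isoline crosses each edge. A hedged sketch (field layout and names are illustrative, not the ones in this repo):

```java
// 4-bit case index for one grid cell: which corners are at or above the threshold.
// The 16 possible values select which cell edges the isoline crosses.
int cellCase(float tl, float tr, float br, float bl, float threshold) {
  int c = 0;
  if (tl >= threshold) c |= 8;   // top-left corner
  if (tr >= threshold) c |= 4;   // top-right corner
  if (br >= threshold) c |= 2;   // bottom-right corner
  if (bl >= threshold) c |= 1;   // bottom-left corner
  return c;
}

// Fraction along an edge (from corner value a to corner value b) where the isoline
// crosses; used to place the contour point by linear interpolation instead of mid-edge.
float edgeCrossing(float a, float b, float threshold) {
  return (threshold - a) / (b - a);
}
```

Running this over the grid once per threshold value gives the multiple isolines mentioned above.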
Spatial Hashing, or Spatial Partitioning Bin
This type of Spatial Partitioning method is mentioned in Chapter 7 of Real-Time Collision Detection. I tried the methods described there, but they didn't work the way I expected. The way I implemented it was based on a blog post from Conkerjo, Spatial hashing implementation for fast 2D collisions. I expect this to be a bit faster than the QuadTree implementation I did. From what I have read, spatial hashing is a very reliable detection method, be it for collisions or whatever else needs to detect nearby stuff, and it is what the gaming industry uses.
Spatial Hashing is already better than the QuadTree because now I can actually detect line intersections per cell and have some chance of beating the efficient Brute Force method. For me to do the same using a QuadTree I would probably have to turn it into an R-tree, and I don't even know what those are. Also, Spatial Hashing is way easier to understand and implement.
The way it works, you can extract the points from each individual cell (or bin, or bucket, or whatever the naming convention is) or use a rectangular/circular range to get the points. The way I wrote it is pretty much independent from Processing, so it is easily convertible to plain Java (I know I am speaking to myself here).
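A minimal sketch of the idea, independent of Processing (class and field names are illustrative, not the ones in this repo): each point lands in a bucket keyed by its integer cell coordinates, and a rectangular query just walks the overlapped cells.

```java
import java.util.ArrayList;
import java.util.HashMap;

class SpatialHashSketch {
  float cellSize;
  HashMap<Long, ArrayList<float[]>> buckets = new HashMap<>();

  SpatialHashSketch(float cellSize) { this.cellSize = cellSize; }

  // pack the two integer cell coordinates of (x, y) into one 64-bit key
  long key(float x, float y) {
    long cx = (long) Math.floor(x / cellSize);
    long cy = (long) Math.floor(y / cellSize);
    return (cx << 32) ^ (cy & 0xffffffffL);
  }

  void insert(float x, float y) {
    buckets.computeIfAbsent(key(x, y), k -> new ArrayList<>()).add(new float[]{x, y});
  }

  // gather all points stored in the cells overlapped by a rectangular query range
  ArrayList<float[]> query(float x0, float y0, float x1, float y1) {
    ArrayList<float[]> out = new ArrayList<>();
    for (long cx = (long) Math.floor(x0 / cellSize); cx <= (long) Math.floor(x1 / cellSize); cx++) {
      for (long cy = (long) Math.floor(y0 / cellSize); cy <= (long) Math.floor(y1 / cellSize); cy++) {
        ArrayList<float[]> bucket = buckets.get((cx << 32) ^ (cy & 0xffffffffL));
        if (bucket != null) out.addAll(bucket);
      }
    }
    return out;
  }
}
```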
- Spatial Hashing for segment detection
A small code increment in the spatial hash class makes it possible to detect segments. It works by assigning to each hash cell a point of the segment line that falls inside that cell, and each point references its segment. The next step is to see if, using this method, we can find segment intersections more efficiently than brute force. I still have to figure out how to filter duplicate segment checks coming from different bucket cells, as in: 1st bucket cell, check the intersections of its segment lines; 2nd bucket cell, still the same lines, it will check them again; 3rd bucket, the same lines, and so on. In the worst-case scenario there will be more checks than with efficient brute force. Possible ways to address this are shrinking the cell size, which also has a cost as more hash cells are created, or some clever way to detect whether a check was already performed (a sketch of the latter is below).
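One cheap way to do that last option, assuming each segment carries a unique int id (the id field is an assumption of this sketch, not necessarily how the repo stores it): keep a HashSet of order-independent pair keys and skip a pair the second time it shows up in another bucket.

```java
import java.util.HashSet;

HashSet<Long> tested = new HashSet<>();

// Returns true if this segment pair was already checked in some other bucket.
// The key is order-independent, so (a, b) and (b, a) map to the same value.
boolean alreadyTested(int idA, int idB) {
  long lo = Math.min(idA, idB);
  long hi = Math.max(idA, idB);
  long pairKey = (hi << 32) | lo;
  return !tested.add(pairKey);   // HashSet.add() returns false if the key was already present
}
```

Each bucket's pair loop would call alreadyTested(...) before running the intersection test, and the set gets cleared whenever the hash is rebuilt.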
This started as the first step towards an efficient Plane Sweep Line Segment Intersection. The visualization program shows how the brute force works its way through a 2D loop to find all the intersections of the lines formed from some given points. Basically, every line is checked against the others for an intersection point. The text file in the data directory contains the points that form the lines, in the following format, separated only by spaces. Example:
x0 | y0 | x1 | y1 |
---|---|---|---|
91 | 179 | 760 | 353 |
874 | 890 | 648 | 114 |
687 | 715 | 939 | 747 |
(The x0, y0, x1, y1 header row doesn't exist in the file, just the numbers.)
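In Processing, reading that file back takes just a couple of lines; a hedged sketch (the file name and variable names are illustrative):

```java
String[] rows = loadStrings("points.txt");   // Processing looks for the file in the data directory
for (String row : rows) {
  String[] t = splitTokens(row, " ");
  float x0 = float(t[0]);
  float y0 = float(t[1]);
  float x1 = float(t[2]);
  float y1 = float(t[3]);
  // ...build a line segment from (x0, y0) to (x1, y1)
}
```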
If you run the program, you can either choose the txt file containing the points or just cancel the file selection to automatically generate a random number of points (from 2~100). Running the visualization will produce a file "output.txt" with the number of points, intersections, processing time, and other relevant data. The algorithm itself is found in the Processing setup() function and is general enough to be used in any Java application. The visualization just shows how the algorithm works in a slow, comprehensible manner. See the visualization below:
Visualization of Line Intersections by brute force search algorithm
(25 lines, 60 intersections found, 300 checks)
Although it looks very inefficient, and the number of checks in the brute force grows as O(n²) in the number of segments (to be exact n(n-1)/2, as each line is checked against every other line only once), it is surprisingly efficient for a small quantity of segments compared with my implementation of the Bentley-Ottmann algorithm.
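For completeness, a hedged sketch of the per-pair test such a brute force relies on (the orientation/cross-product method; the routine in this repo may handle touching or collinear cases differently):

```java
// z-component of the cross product (b - a) x (c - a): its sign tells on which
// side of the directed line a->b the point c lies.
float orient(PVector a, PVector b, PVector c) {
  return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// Proper crossing test: the endpoints of each segment lie strictly on opposite
// sides of the other segment. Touching/collinear cases are not counted here.
boolean segmentsIntersect(PVector p1, PVector p2, PVector q1, PVector q2) {
  float d1 = orient(q1, q2, p1);
  float d2 = orient(q1, q2, p2);
  float d3 = orient(p1, p2, q1);
  float d4 = orient(p1, p2, q2);
  return d1 * d2 < 0 && d3 * d4 < 0;
}
```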
This implementation is heavily based on valenpe7's bentley-ottmann. Yeah, after the first two weeks I gave up trying to do this by myself (I realised I was writing the compareTo() of the segment class in a very naive way, so statusT was going bananas), so kudos to user valenpe7. I remember being the guy who lent his homework to others. Now I lurk GitHub for people who are better than me when I fail. And don't use this method for real applications. It is bad.
- pseudo code: http://geomalgorithms.com/a09-_intersect-3.html
- pseudo code: https://en.wikipedia.org/wiki/Bentley%E2%80%93Ottmann_algorithm#Detailed_algorithm
- http://www.cs.cmu.edu/afs/cs/academic/class/15451-s16/www/lectures/lec26-segments-intersect.pdf
Visualization of Sweep Line Intersections search by Bentley-Ottmann algorithm
(50 lines, 213 intersections found, 313 events)
Three weeks trying to do this, only to discover that it is only more efficient than the brute force method after about 200 randomly generated segment lines. And by doing efficiency measurement tests I discovered that this implementation of Bentley-Ottmann does not work properly when the segments get too dense: it starts to miss intersections. Really fucked up, but I am tired of fixing this shit for a 4th week. It is probably in how the sorting of segment lines on the 'value' field is done, but I don't know how to fix it. It still works for not-so-dense lines, like fewer than 100 random-length lines in a 700x700 canvas, but I would not trust this implementation. I passed the Coursera geometric-algorithms course (max case was 25 segment lines) using it, but don't use it without knowing where the problem is and how to fix it. This will probably rot in this repository.
People|Bots of the future: if you are so kind as to find where the error is and how to fix it, just tweet me. My user name is the same, volfegan.
All the time measurements come from running the program 1000 times in a loop, to average out any error. Time is in milliseconds. So right when this Bentley-Ottmann algorithm is supposed to become better and less demanding than brute force is also when it starts to stop working well. What a crap. But I guess the findings I'm typing below are still valid points (maybe). Cases 02 to 05 use segments created from an initial point (x,y), with the second point in ([x−r,x+r],[y−r,y+r]) for some length r.
To have a better idea of these evaluations, see the image below to get an idea of how sparse the segments become for 500 lines generated by the different methods used in each test. For the test case of segments generated by 2 random points in the canvas and the one using the rule (x,y)->([x−r,x+r],[y−r,y+r]), r = random(0, 1*canvas length), the appearance is pretty much the same, although the second one is a bit more sparse.
1. No. of segment lines vs processing time for random-length segments (2 points generated randomly in the canvas space) |
---|
In the graph, we see the Bentley-Ottmann algorithm is less efficient than the brute force search until around 200 segment lines (measurements are spaced by 100). The number of checks in the brute force increases as the expected O(n²) (see the 1st graph above); on the other hand, for the Bentley-Ottmann algorithm the number of events grows somewhat linearly in this case. An event consists of handling segment start points, end points, and intersections. But since a lot happens for each event (write/delete/get operations in the eventQ and statusT), an event is much more demanding than a simple intersection check in the brute force.
2. No. of segment lines vs processing time for random segments (x,y)->([x−r,x+r],[y−r,y+r]), r = random(0, 1*canvas length) |
---|
The Bentley-Ottmann algorithm only became faster than the brute force somewhere after 1000 segments and below 1500 segments. Its efficiency growth is really not consistent, as we can see in the graph.
3. No. of segment lines vs processing time for random segments (x,y)->([x−r,x+r],[y−r,y+r]), r = random(0, canvas length*n^(−1/4)) (n = number of segment lines) |
---|
The Bentley-Ottmann algorithm became faster than the bruteforce somewhere after 300 segments. The segments are less dense, so fewer intersections and fewer event points.
4. No. of segment lines vs processing time for random segments (x,y)->([x−r,x+r],[y−r,y+r]), r = random(0, canvas length*n^(−1/2)) (n = number of segment lines) |
---|
The algorithm became faster than the bruteforce somewhere after 200 segments. The segments are even less dense, so even fewer intersections and fewer event points.
5. No. of segment lines vs processing time for random segments (x,y)->([x−r,x+r],[y−r,y+r]), r = random(0, canvas length*n^(−1)) (n = number of segment lines) |
---|
With this r, the length of the segment lines is so small (near zero) that they are basically dots scattered on the canvas, and there is never any intersection. So, of course, the Bentley-Ottmann algorithm will perform better. The only events that occur are the start and end events.
-
Conclusion on Bentley-Ottmann algorithm vs brute force to find intersections: if you are going to handle fewer than 500 lines, or polygons made of lines for collision detection, use Brute Force. The loop I did in the brute force was the most efficient way to do the search, n(n-1)/2 checks: each line is checked against the others only once. This video shows how to do it: Coding Math: Episode 58 - Array Math. The brute force method now uses the loop with no redundant searches, and so do the graphs. In this stackoverflow question (https://stackoverflow.com/questions/5029882/simple-somewhat-efficient-algorithm-for-finding-intersections-between-one-serie), a user states that Bentley & Ottmann is hard to implement because of the edge cases (I totally agree) and that R-trees are better for this task, as is spatial hashing. If you have to use the Bentley-Ottmann algorithm for some kind of homework, don't waste your time trying to implement it from scratch to understand the process. It is a shit thing. Go for what the industry uses and understand that instead.
[ASCII art of a tree] ...and plant a tree. Geometry everywhere, because the roots of knowledge are deep.
- - -
[ all code available under MIT License - feel free to use. ]