RAH v4 #737
Conversation
Paging @blitzmann and @MrNukealizer for code review and comments/thoughts. Maybe this can get snuck into the next build?
This is definitely improving, but reducing the resistanceShiftAmount between cycles does not give accurate results. It can get fairly close, but it will never be correct. To give an accurate result when the RAH loops through a series of profiles, it needs to give the average of all the profiles in the loop.
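The averaging this comment argues for can be sketched in a few lines. This is a hypothetical illustration (function name and resonance values invented for the example), not pyfa's actual code:

```python
# Hypothetical sketch of the averaging described above: once the RAH is
# known to cycle through a fixed series of resist profiles, the accurate
# result is the element-wise average of every profile in the loop.
def average_loop_profiles(profiles):
    """profiles: list of (em, therm, kin, exp) resonance tuples in one loop."""
    n = len(profiles)
    return tuple(sum(p[i] for p in profiles) / n for i in range(4))

# Example: a two-profile loop bouncing between these resonances
loop = [(0.85, 0.97, 0.85, 0.73), (0.91, 0.91, 0.79, 0.79)]
avg = average_loop_profiles(loop)  # close to (0.88, 0.94, 0.82, 0.76)
```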
# If damage pattern is even across the board, we "reset" back to original resist values.
logger.debug("Setting adaptivearmorhardener resists to uniform profile.")
# Do nothing, because the RAH gets recalculated, so just don't change it
runLoop = 0
Why use the default profile for uniform damage instead of actually seeing how the RAH changes?
We could, but the primary issue is that there is no easy way to distinguish between the default damage profile and a damage profile that has uniform damage.
I want to be able to get back to the default, so that people who don't use the damage profiles get the behavior they have come to expect. It also lets them see the difference in how the damage reduction looks when using RAH with defaults (old version) and the new reactive version.
Not to mention people compare Pyfa and EFT all the time, so this lets you keep the EFT-like behavior (and better shows just how awesome Pyfa is).
If there was a clear indication which damage profile you had selected (and not just what the percentages are), then I'd be more inclined to do exactly this.
Until that happens though, it's very difficult to see at a glance which damage profile you have selected.
I don't understand why someone would want to see it at 15/15/15/15. Maybe it's just the way I use pyfa, but I can't think of any situation where you'd be looking at your ship's defenses and not want the reactive hardener to react.
# Most likely the RAH is cycling between two different profiles and is in an infinite loop.
# Reduce the amount of resists we shift each loop, to try and stabilize on a more average profile.
if resistanceShiftAmount > .01:
    resistanceShiftAmount = resistanceShiftAmount - .01
This causes inaccurate results. It can get fairly close to the actual results, but it will almost never be correct.
Anything we do will be inaccurate. Why? Because the resists are changing in a loop, and it's impossible to show a fluctuating value in a static field.
You went with the route of averaging the resists. There are a few problems with this.
One, it's not particularly clean. You end up with values like .8933473434. It's harder for an end user to grok 89.3% vs 89.5%.
Two, because the UI rounds it off at 3 digits, you can end up with rounding issues, where the displayed resists don't add up to 100% (or add up to more than 100%).
Three, you have to store and loop through all the resists you've done previously. This is very expensive execution-wise, as you end up with loops within loops. With this method we only care about the outer loop.
Unfortunately, we can't show that one tick you'll get hit for 533 damage, and the next tick will get hit for 574. If we reduce the swings (by reducing the transfer amount) we end up with something very close to the average resists without introducing complicated numbers.
It's a compromise, but any solution chosen will be one. This just happens to be the fastest, cleanest, and fairly close to accurate (of the approaches demonstrated so far).
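The damping trade-off described above can be illustrated with a toy loop (illustrative only, not the actual pyfa code): a value that overshoots its target while the step is large settles once the step shrinks, just as v4 reduces the resist shift between cycles.

```python
# Toy illustration of v4's damping idea: shrink the per-cycle shift so an
# oscillating value settles near its target instead of bouncing forever.
def damped_approach(target, value=0.0, shift=0.06, floor=0.01, step=0.01):
    for _ in range(100):
        if abs(target - value) < floor:
            break                       # close enough: treat as stabilized
        value += shift if value < target else -shift
        if shift > floor:
            shift -= step               # reduce the swing each cycle
    return value
```

With `shift` fixed at 0.06 the value would bounce around a target like 0.33 indefinitely; with damping it comes to rest within `floor` of it.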
One, it's not particularly clean. You end up with values like .8933473434. It's harder for an end user to grok 89.3% vs 89.5%.
Can you give an example of that? Out of all my tests (many try to be as complicated as possible), only one gives results ending in 0.75%/0.25%. I haven't seen any values more complicated than that, and the vast majority are multiples of 1% or 0.5%.
Two, because the UI rounds it off at 3 digits, you can end up with rounding issues, where the displayed resists don't add up to 100% (or add up to more than 100%).
I haven't seen any examples of that either. It's rare to get two decimal places, and I'm not sure if there's any situation that would give 3, let alone more.
Unfortunately, we can't show that one tick you'll get hit for 533 damage, and the next tick will get hit for 574. If we reduce the swings (by reducing the transfer amount) we end up with something very close to the average resists without introducing complicated numbers.
Or we could use the actual averages, which aren't complicated. That also takes into account the fact that a 6% swing can be enough to keep changing while 5% could get stuck on a value that's not near the average.
Ultimately it comes down to this. Using your code for averages significantly slows down the cycling. There is a 3x speed difference, which is not insignificant.
Basically the difference ends up being spending a lot of processing time to record and average out the last few results... or do this:
http://www.lightandmatter.com/html_books/lm/ch18/figs/sc-strongly-damped.png
The end result is very similar. If one is three times as fast, why not use it?
In theory that should work something like this: http://imgur.com/le3bwsg
If it worked that way it would be a great solution. The problem is that most of the time when the RAH enters a loop it's bouncing around in such a way that it can't converge on the average point. Reducing the shift amount between cycles tends to result in the RAH getting stuck at a point completely unrelated to the average of the loop cycles, or even any of the cycles themselves. I imagine it as normally skipping over a point of resistance, but bouncing off when the shift amount is reduced, kind of like this: http://imgur.com/6VxrL3Z
/r/theydidthemath is all I can think of when reading these comments.
@MrNukealizer are you actually plugging these things into an application and graphing them? That is awesome, and helps visualize what the hell is going on.
No, I don't graph it. That was just my attempt to illustrate the effect I've noticed by looking at the numbers.
adaptiveResists_tuple = [0.0] * 12
for damagePatternType in damagePattern_tuple:
    attr = "armor%sDamageResonance" % damagePatternType[0].capitalize()
Why the call to capitalize() when it's already capitalized?
Why the call to capitalize() when it's already capitalized?
To normalize data.
CCP is really bad about this, and it doesn't help that you'll find examples of this scattered around.
For example, you'll find EM, Em, and em, but for this particular case we have to have Em or it won't read properly.
It's just a good coding practice.
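A quick illustration of the normalization being discussed (the attribute-name pattern is from the code under review; the loop is just for demonstration):

```python
# "armorEmDamageResonance" needs "Em" exactly, but damage-type strings
# show up in the data in mixed casings like "EM", "Em", or "em".
# str.capitalize() maps all three spellings to "Em".
for raw in ("EM", "Em", "em"):
    attr = "armor%sDamageResonance" % raw.capitalize()
    assert attr == "armorEmDamageResonance"
```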
The thing is you're not normalizing CCP data, you're normalizing hard-coded strings that are defined in the same file.
Okay, that's a fair point. Mostly it was just a carryover from what I've seen done elsewhere.
I don't think normalizing the string is going to cause significant delays, however, so maybe a bit picky?
Fair enough. I'm not very familiar with Python so I was just wondering if that had a purpose and what it was.
damagePattern.explosiveAmount,
damagePattern.explosiveAmount*fit.ship.getModifiedItemAttr('armorExplosiveDamageResonance'),
module.getModifiedItemAttr('armorExplosiveDamageResonance'),
module.getModifiedItemAttr('armorExplosiveDamageResonance')])
It's not a big deal, but initially sorting the damage types Em, Thermal, Kinetic, Explosive causes incorrect sorting of equal values later. The game seems to sort Em, Explosive, Kinetic, Thermal.
How does one go about getting the right section of code for these comments?
It's not a big deal, but initially sorting the damage types Em, Thermal, Kinetic, Explosive causes incorrect sorting of equal values later. The game seems to sort Em, Explosive, Kinetic, Thermal.
Populating the initial tuple doesn't really matter, because the second thing we do every loop is to sort the tuple by modified damage first, name second. So if we end up in a scenario where two resists have identical damage done (post resists), it'll sort it properly no matter what loop it's on.
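That sort can be sketched with a plain `sorted` key (the damage values here match the Gnosis log quoted later in the thread; variable names are illustrative):

```python
# Sort by modified damage first, name second: equal-damage types fall
# back to alphabetical order no matter what order they started in.
damage = [("Em", 67.5), ("Thermal", 68.175),
          ("Kinetic", 68.175), ("Explosive", 68.175)]
damage.sort(key=lambda d: (d[1], d[0]))
print([name for name, _ in damage])
# ['Em', 'Explosive', 'Kinetic', 'Thermal']
```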
Populating the initial tuple doesn't really matter, because the second thing we do every loop is to sort the tuple by modified damage first, name second. So if we end up in a scenario where two resists have identical damage done (post resists), it'll sort it properly no matter what loop it's on.
That's where you're wrong. It will properly sort types that have different amounts of damage, but it won't touch the order of types that have the same amounts of damage. That doesn't matter for the amount of damage taken, thus why it's not a big deal, but it can cause the RAH numbers to appear significantly different than what happens in game. I prefer to be accurate if possible, and changing the order you add the data to the tuple is a simple way to make it more accurate.
That's where you're wrong. It will properly sort types that have different amounts of damage, but it won't touch the order of types that have the same amounts of damage. That doesn't matter for the amount of damage taken, thus why it's not a big deal, but it can cause the RAH numbers to appear significantly different than what happens in game. I prefer to be accurate if possible, and changing the order you add the data to the tuple is a simple way to make it more accurate.
No, it sorts it correctly.
Here's the Gnosis with only a RAH, damage pattern is 100 | 101 | 101 | 101
Let's dump the resists in order (left being 0, right being 3):
2016-09-15 22:15:31,423 eos.effects.adaptivearmorhardener DEBUG Presort (low -> high): Em | Thermal | Kinetic | Explosive
You can see the first time we dump resist names, the order matches how we populate the tuple. We haven't sorted it yet (since it's the very first loop), so we don't actually know what's the lowest resist.
We then run through our logic (including the sort, which happens as one of the first things):
2016-09-15 22:15:31,424 eos.effects.adaptivearmorhardener DEBUG Adaptive Resists, Ship Resists, Modified Damage (EM|The|Kin|Exp) : 0.910000 | 0.790000 | 0.790000 | 0.910000 || 0.614250 | 0.533250 | 0.533250 | 0.614250 || 67.500000 | 68.175000 | 68.175000 | 68.175000
Now next loop lets dump the resist names again to see how it sorted them:
2016-09-15 22:15:31,424 eos.effects.adaptivearmorhardener DEBUG Presort (low -> high): Em | Explosive | Kinetic | Thermal
Hey, look at that. EM is lowest (as it should be, since it took the least damage). The other three take equal modified damage, and it sorted them alphabetically as expected.
So. All in all, working properly.
adaptiveResists_tuple = [0.0] * 12
# Apply module resists to the ship (for reals this time and not just pretend)
for damagePatternType in damagePattern_tuple:
    attr = "armor%sDamageResonance" % damagePatternType[0].capitalize()
Same as above.
I did a little benchmark to compare the performance of V4 and V3. I removed all logging and added timing calls. The test was when first loading pyfa with a fit that has three boosters with different RAH setups, and its own RAH that ends up in a 4 cycle loop. All of the boosters get the "uniform profile" with V4 because they have the uniform damage profile. For V3 the first booster's RAH changes to 30/0/0/30, the second stops at 42/0/0/18, and the third booster does a 3 cycle loop. V3:
V4:
It seems like there's something going on in the background that severely slows down access to ship/module attributes for a moment. It seems to add about 15ms to the post-loop part of the first booster for V4 or the initialization part of the main fit for V3, and about 2ms to the same parts of the next calculation for both. Besides that spike, it seems like V3 and V4 have fairly similar performance. V3 seems to run a bit faster on my machine though, especially for situations where the RAH loops. Since you seem to have much worse performance when you test things, perhaps you could do something similar? Just import
I know that boosters were a serious issue when we first cracked this, but they're going to be gone here in a month or so... so... screw 'em. :) The booster code significantly slows down the execution of Pyfa (just because we're doing so many recalcs), so I would say just go ahead and do testing without it.
It gets complicated because of the surrounding code, but as mentioned in the #688 thread, while the total execution time is similar, v4 runs through many more loops than v3 in that time. v4 looped 18 times in 26 ms, or 1.4 loops per ms. This is just because of the extra complexity and additional loops that v3 has; if you removed those, I'm sure that would largely go away.
I've actually started a new branch to go through the code and look for pain points where code execution slows down. I already found 5 lines of code that (by themselves!) add 4 seconds to the time Pyfa takes to launch. Some pseudo-code to hopefully clear up some confusion with how this works (also, because pseudo-code is fun!).
18/26 is 0.69, not 1.4. That's also another example of how you keep getting astoundingly slow execution; my tests show v4 doing 100 loops in 0.7 ms. Each loop in v3 takes longer the more loops have been done before it, so it's hard to compare loops per ms. It generally does take longer per loop, but it generally doesn't need to do as many loops as v4, resulting in similar or less total time. If the RAH reaches a profile where it stops changing, v3 will do the same number of loops as v4 (except when v4 reduces the shift amount prematurely and stops short) but a little slower. That generally takes only 4-6 loops, and the v3 code won't be much slower. If the RAH keeps cycling in a loop, v3 will do the absolute minimum number of loops (I've never seen more than 11, though theoretically up to 14), whereas v4 can keep going for 20-100 loops. I don't deny that looping through past results each loop is inefficient, but it's the most efficient way I know of to determine when the RAH enters a loop, and the worst-case performance should still be better than the current v4 code's worst-case performance. It also stays accurate in the process. The pseudocode is a good idea, so here's how my method works:
It should work the same except for storing past results and checking if the current cycle matches any of them, then averaging the results since the cycle that matched.
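A hedged sketch of that approach (names are invented for illustration, not the actual v3 code): record each cycle's profile, detect when one repeats, then average everything from the first occurrence of the repeat onward.

```python
# Sketch: detect a repeating cycle of resist profiles and return the
# average of the profiles inside the loop. `next_profile` stands in for
# one RAH cycle; it is a placeholder, not a pyfa function.
def simulate_rah(next_profile, start, max_cycles=100, tol=1e-6):
    history = [start]
    for _ in range(max_cycles):
        current = next_profile(history[-1])
        for i, past in enumerate(history):
            # a profile we've seen before means we've entered a loop
            if all(abs(a - b) < tol for a, b in zip(past, current)):
                loop = history[i:]
                return tuple(sum(p[j] for p in loop) / len(loop)
                             for j in range(len(start)))
        history.append(current)
    return history[-1]  # hit the cycle limit; fall back to the last profile

# Example: a profile that bounces between two states each cycle
def flip(profile):
    a, b = (0.0, 0.3, 0.3, 0.0), (0.3, 0.0, 0.0, 0.3)
    return b if profile == a else a

avg = simulate_rah(flip, (0.0, 0.3, 0.3, 0.0))  # (0.15, 0.15, 0.15, 0.15)
```

For a two-profile loop like 0/30/30/0 vs 30/0/0/30 the repeat is found on the second cycle and the 15/15/15/15 average falls out directly.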
And this is why I let computers do math for me... The problem is that with multiple tables, multiple loops, it gets progressively worse. With v4, it's a single loop (the subloops aren't actually looping multiple times, it's just for flow control). Anyway, we're going around in circles now. Overall the v4 code is quite a bit faster, and we could speed it up even further by reducing the number of loops allowed. I don't see any easy ways to speed up the v3 code. @blitzmann you want to chime in here?
Do you have any benchmarks that show that? In my tests the v3 RAH effect as a whole consistently runs in the same or less time compared to the v4 effect, as reflected in the timings a few comments above and the very example you chose to demonstrate loop speed. Sure, the loops themselves take longer, but fewer loops and more efficient pre- and post-loop code always seem to make up for that. I agree that v4 could be made faster, but as far as I know v3 is as fast as possible while being accurate.
@MrNukealizer @Ebag333 Can you guys post a few very specific test fittings / scenarios so that I can start testing this PR and v3? Also, someone write a bulleted summary of the current issues that may or may not need to be addressed that cannot be agreed upon. Thanks!
To summarize the current discussion: in game, the resist profile will switch between 2+ profiles depending on incoming damage. There are 3 different ways of picking the resist profile:
The theory is that solutions 1 and 2 will require caching and loops within loops, while solution 3 can be run as part of the normal loop. There's some other debate about how to handle uniform damage profiles, but since we should keep the existing/historic/EFT behavior (and if you REALLY want to see how it reacts to a uniform damage profile, you can create one that's
I'll post some tests later. Here are the current issues as I see them:
Ok, here are a few tests. v3 and v4 seem to give identical results for any damage profile with 1-2 types and the performance shouldn't vary much across fits, so I haven't included any such situations here. Just set the damage profile on any random fit to one or two types.
Damage profile 99/40/40/0 (purely theoretical). Let's start out strong with the worst case scenario for both algorithms. For this test v3 hits the cycle limit (it would need roughly 65 cycles depending on floating point accuracy) and gives an inaccurate result. v4 gives the most inaccurate result I've seen, actually changing the ship's armor EHP by 6.9% compared to v3. Other damage profiles cause similar situations but aren't quite bad enough for v3 to hit the cycle limit, such as 90/40/40/0, 80/30/30/0, and 50/20/20/0.
Damage profile 0/3/2/3 (Depleted Uranium). This one is interesting because very tiny changes can cause different results. The RAH cycles in game are 9/9/21/21, 3/3/27/27, 0/0/30/30, then a loop of 0/3/33/24, 0/0/34.5/25.5, 0/3/37.5/19.5, 0/0/39/21.
Damage profile 1/2/2/1 (lowsec sentry guns). This is an example of the RAH entering a loop without ever reducing a resistance to 0. It goes to 21/9/9/21 then bounces between 15/3/15/27 and 9/9/9/33.
Damage profile 3/2/2/0 (done with smartbombs). This test partially shows how the algorithm sorts damage types that are equal. Similar tests can be done with other configurations. In game if Thermal and Kinetic are equal, it will always prefer to take resistances from Kinetic and give them to Thermal. The cycles go 21/21/9/9, 27/15/15/3, 31.5/19.5/9/0, then loop through 34.5/13.5/12/0, 37.5/7.5/15/0, and 40.5/10.5/9/0. Interestingly for the loop the average Kinetic resistance ends up higher than the average Thermal resistance despite the tendency to prefer Thermal when the two are equal.
This is a weird one, where EM gets boosted almost every cycle, but then occasionally the damage changes enough to throw a little bit of resists back at therm/kin for one cycle. Basically no approach we take will be accurate unless we let it cycle for a long time. It took 14 cycles before it got reasonably close to what it roughly stabilizes at (about 4/5ths of the time, then there's that one wild card of 6% where it steals from EM). This is also a good example of what I was talking about earlier, where because of rounding and uneven numbers the resists get messed up. After 100 loops the resists show as 59.4% EM and .572% Therm, which adds up to 59.972%. The longer we let this run, the more resists we drop and it starts to get further away from being accurate. After 100 cycles (of doing what it would in game, always transferring 6%), the RAH ended up at: v4 ended up at: Which is relatively close to being an accurate representation, given how devilish this particular one is and we don't let it cycle for very long. So with this particular scenario in mind, the different approaches end up as such.
This becomes a really fine line, because after just 11 cycles (at 6%) the resists went to hell and we started getting oddball numbers. Just 2 cycles later, we started dropping resists. I shifted the number of times v4 is allowed to run; it's really dangerous at the full 6% because just a couple cycles one way or the other and the numbers get mad. By reducing the resist transfer down to 1%, we can transfer resists much more safely and be less likely to get bad numbers (though it's still possible if we go long enough). I added a check to make sure we don't cycle fewer than 7 times before reducing resists, and bumped the number of times we transfer resists after the reduction up to 10 (from 5). This slightly increases our chances of hitting weird resist numbers, but should better handle particular scenarios like this one. So, going back to the original behavior:
v4 (with modifications) ended up at: Is it perfectly accurate? No. Is it fairly reasonably close to being accurate? I think so. I've submitted the changes to the RAH cycling. It'll take slightly longer to run now, but the biggest change is that it'll always run a minimum of 7 times (unless we run out of resists to steal).
Now that I think of it, I probably should've mentioned more about the ideal results from that 99/40/40/0 test. As the RAH cycles it converges on 57.656/1.172/1.172/0. On the subject of weird numbers, one of the other examples I gave (I think 5/2/2/0) shows as something like "60%/0%/5.273e-08%/0%" in v3. Those other similar situations are interesting because they converge on the end point fast enough that before hitting the limit there are two cycles within v3's tolerance for equality, so it detects a loop and gives a fairly good result. I'm going to play around with the cycle limit and equality tolerance to see if it can handle weird situations like these.
That adds up to 61.096%. Double check my math though. :)
Okay, so I went through these and basically simulated what the RAH does if we let it run long enough (setting performance aside). For each of these, I let it run 100 times, but basically truncate the log when it stops changing.
So we talked about this one before, but basically after loop 11 it starts going off the rails. If we let it run long enough, it eventually gets to essentially So, the various approaches we've discussed.
This one basically repeats the last 5 lines repeatedly. So, the various approaches we've discussed.
This one loops the last two.
This one loops the last three.
So, to go over the four options we have...
I have an idea that might address this, going to go see if I can play around with it and make it work...
According to my calculator that adds up to 60%. I'm not sure where you got the extra 1.096% from...
Added a new check to bail if our bottom two resists get too small (less than the transfer amount). Now for that first example that gets so nasty we end up with:
Which is basically the perfect result! For the Megathron Navy (example 2), averaging the cycle of 5 we get:
And with v4 we get:
Our error is just over 1%. For
And with v4 we get:
Our error is 3% on this one. Finally for
And with v4 we get:
2% error on this one. I'm rather happy with the results of the latest tweak to v4. I'm still a bit worried about the performance overall, but I think it's at a reasonable point where it is now. We can also investigate lowering the maximum number of cycles (currently set to 100, which is rather long). 50 is probably a more reasonable number.
Doh. What happens when you're doing it on your mobile and put the decimal in the wrong spot...
I made a couple tweaks to v3 to improve accuracy when hitting the cycle limit as well as rounding the output to After those changes I combined v3 and v4 so they run together and report their results and run times. Here's what that showed for the test cases I listed above as well as a few real fits I had open.
I'm not sure why, but it seems like the loop in v4 is much slower than it used to be. I might have messed up the timing part, but I don't think so. Maybe applying changes to the module stats and reloading them is just slower than my previous benchmarks indicated. Also, what happened to checking for a 2-type damage profile and going to 30/30 on the first cycle?
Running them together isn't likely going to work for showing the times properly. We MIGHT be able to do something with threading, but since Python's scheduler isn't atomic we couldn't get accurate results anyway for sub-1-second times. I'm planning on setting up some profiling and running numbers (basically forcing both to loop many times); I might be able to do that today. V4 IS slower as it cycles to the maximum number of runs more often than it used to. This was so that we could properly calculate it when we run into weird scenarios like that
We can still do this, but I was more concerned with the complex 3/4 damage type profiles. Plus, it only takes 3 cycles to get there, so I'm not really sure that it's worth the extra complexity to save a handful of milliseconds. Once I do some profiling, I'll have a better idea of what that time savings might be.
It should work. It basically runs the pre-loop part of v3, pre-loop v4, v3's loop, v4's loop, post-loop v4, then post-loop v3, marking the start and end times of each section. I tried reordering the sections and running each version alone, and that made no difference to most of the results. The only exception is that the second time applying resists to the ship (yes, it's unusable that way but it was for timing purposes) takes a tiny bit longer because of calculating stacking penalties.
Well one problem is that v3 uses force. This is no bueno; as I spent a week figuring out, we can ONLY use it when those values will never be touched again. Basically it's fairly dangerous to use. It also means that anything run after v3 won't work correctly. You can use
Good point. I copied that from some old version and didn't think much about it, but that does seem like a problem.
Just a wee bit. You can find all the module modifiers in The initial goal of the rewrite was to update the ship in essentially real time (each loop) to see if we could get it to recalculate for stacking penalties and whatnot. It seems to, but the catch is that the stacking penalty ends up applying multiple times and it all goes to hell. I tried every method we have, none of them work. For the RAH, you probably want to use That is one interesting side effect of v4, each loop shows up in the affected by tab.
Since two damage types will always result in a 50/50 split, this speeds up processing a bit.
Unused stuff, formatting, PEP8 standards (most of them), etc
Most of the changes are just formatting and layout. I did add a condition for 2 damage type profiles, to skip looping through those. (Makes sense since most ammo is 2 damage types.)
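The 2-damage-type shortcut can be sketched like this (a hedged illustration with invented names; the 60% total pool and even split follow the 30/0/0/30 results quoted earlier in the thread):

```python
# If only two damage types are present, the RAH always settles at an even
# split of its total resistance between them, so the simulation loop can
# be skipped. `rah_two_type_shortcut` is illustrative, not pyfa's API.
def rah_two_type_shortcut(damage_profile, total_resist=0.60):
    """damage_profile: (em, therm, kin, exp) incoming damage amounts."""
    active = [i for i, dmg in enumerate(damage_profile) if dmg > 0]
    if len(active) != 2:
        return None  # 1, 3, or 4 damage types: run the full loop instead
    resists = [0.0] * 4
    for i in active:
        resists[i] = total_resist / 2  # e.g. 30/0/0/30
    return resists

print(rah_two_type_shortcut((100, 0, 0, 100)))  # [0.3, 0.0, 0.0, 0.3]
```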
@blitzmann it's okay, I'm planning on rewriting it from the ground up in Gnosis anyway. :D
FYI, this comes off as extremely petty. Circumventing the work of another contributor because yours did not make the cut (smh)
Apologies to @MrNukealizer. His PR is good and fits Pyfa, and his work fully deserves to get merged. I did not intend it that way at all, but I should have seen that it would be read that way. Entirely my bad. To try and explain what I mean by that bad joke... It does need to be written for new Eos/Pyfa, and it has been my intent to rewrite it from scratch since Pyfa-NG became a thing. But that won't ever go into current Pyfa, so @MrNukealizer's work won't be circumvented.
I didn't think it was intended, just came off that way as you noted. As long as we're all on the same page. pyfa-ng will indeed need it, and I think having it bundled with your simulation collection would be a good idea. :)
Carry over from v2. See the following for history:
#680
#688
#689
Merging from pyfa-org/master to the RAH v2 branch failed spectacularly, so creating a new clean branch.