
Various Gen refactors for better performance. #575

Closed · wants to merge 4 commits

Conversation


@non non commented Sep 28, 2019

There are five changes which I've made to help ScalaCheck generate
values faster. They are:

  • Use custom buildableOfN implementation instead of sequence
  • Use Gen.stringOf and Gen.stringOfN instead of generic builders
  • Rewrite Gen[Char] instances to be faster
  • Remove as much indirection as possible
  • Stop using sieve/sieveCopy internally

There are also some changes that clean things up:

  • Add Gen.unicodeChar and Gen.unicodeStr
  • Add a Buildable[T, Seq[T]] instance
  • Add type annotations on some methods without them
  • A few indentation and formatting changes

The first four optimizations should not change any behavior users see
(except possibly evening out some unbalanced character distributions
we had previously). The last change is a bit more controversial, and
is discussed below.

Here are our benchmarks before the changes:

Benchmark                   (genSize)  (seedCount)  Mode  Cnt     Score      Error  Units
GenBench.asciiPrintableStr        100          100  avgt    3  1707.101 ± 2575.431  us/op
GenBench.const_                   100          100  avgt    3     3.431 ±    1.417  us/op
GenBench.double_                  100          100  avgt    3    13.103 ±   42.670  us/op
GenBench.dynamicFrequency         100          100  avgt    3  1804.406 ±  753.188  us/op
GenBench.eitherIntInt             100          100  avgt    3    42.499 ±   14.959  us/op
GenBench.identifier               100          100  avgt    3  3493.749 ± 1272.181  us/op
GenBench.int_                     100          100  avgt    3    13.364 ±    6.165  us/op
GenBench.listOfInt                100          100  avgt    3  1536.995 ±  370.880  us/op
GenBench.mapOfIntInt              100          100  avgt    3  3375.948 ±  588.872  us/op
GenBench.oneOf                    100          100  avgt    3    19.542 ±   14.573  us/op
GenBench.optionInt                100          100  avgt    3    52.750 ±    2.814  us/op
GenBench.sequence                 100          100  avgt    3   215.349 ±    0.736  us/op
GenBench.staticFrequency          100          100  avgt    3  1472.478 ±  655.671  us/op
GenBench.zipIntInt                100          100  avgt    3    26.897 ±    5.523  us/op

And after:

Benchmark                   (genSize)  (seedCount)  Mode  Cnt     Score      Error  Units
GenBench.asciiPrintableStr        100          100  avgt    3   204.613 ±  208.688  us/op
GenBench.const_                   100          100  avgt    3     2.856 ±    6.079  us/op
GenBench.double_                  100          100  avgt    3    11.058 ±   17.209  us/op
GenBench.dynamicFrequency         100          100  avgt    3   498.426 ±  306.193  us/op
GenBench.eitherIntInt             100          100  avgt    3    33.225 ±   10.462  us/op
GenBench.identifier               100          100  avgt    3    96.309 ±    4.903  us/op
GenBench.int_                     100          100  avgt    3     9.213 ±    2.990  us/op
GenBench.listOfInt                100          100  avgt    3   448.545 ±  228.448  us/op
GenBench.mapOfIntInt              100          100  avgt    3  1778.107 ± 1532.473  us/op
GenBench.oneOf                    100          100  avgt    3    18.858 ±   24.069  us/op
GenBench.optionInt                100          100  avgt    3    45.695 ±   18.507  us/op
GenBench.sequence                 100          100  avgt    3   198.399 ±  164.050  us/op
GenBench.staticFrequency          100          100  avgt    3   449.571 ±   97.772  us/op
GenBench.zipIntInt                100          100  avgt    3    24.025 ±   27.260  us/op

Some notable speed increases include:

  • asciiPrintableStr (8.5x faster)
  • dynamicFrequency (3.6x faster)
  • identifier (36.4x faster)
  • listOfInt (3.4x faster)
  • mapOfIntInt (1.9x faster)
  • staticFrequency (3.3x faster)

The speed improvements mostly affect collections, particularly
strings. Since these represent a significant portion of the values
ScalaCheck users are likely to generate, this will probably help
shorten test runtimes for our users. For example, running the Paiges
tests on this branch resulted in a roughly 2x speedup (as measured by
sbt).

The one optimization we might choose to forgo is removing sieves.
Sieves were introduced to allow filtering predicates (introduced by
calling .suchThat or .filter on Gen[T] values) to be preserved in the
generated results. One strange consequence of that is that Gen.R holds
onto values which might be filtered out -- the actual filtering is
done when .retrieve is called. Another is that many of the collection
combinators include .forall checks to try to verify that their
elements are legitimate.
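To make the mechanism concrete, here is a minimal sketch of the sieve behavior described above, using illustrative types (this is not ScalaCheck's actual Gen.R): the result keeps both the raw value and the predicate, and filtering only happens when the value is retrieved.

```scala
// Minimal sketch (illustrative types, not ScalaCheck's actual Gen.R) of the
// sieve behavior: the result remembers the predicate, and filtering is
// deferred until retrieval.
final case class R[T](value: T, sieve: T => Boolean) {
  def retrieve: Option[T] = if (sieve(value)) Some(value) else None
}

object SieveDemo {
  def main(args: Array[String]): Unit = {
    val even = (n: Int) => n % 2 == 0
    println(R(41, even).retrieve)  // None: the held value fails its own sieve
    println(R(42, even).retrieve)  // Some(42)
  }
}
```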

The reason sieves were added was to support shrinking. The shrinking
code uses the sieve to filter the stream of shrunken values to try to
ensure that only "valid" values are produced. Since users tend to
avoid using filter (because of the issues around flaky test failures
due to too many discarded cases) and since most actual Gen instances
fail to shrink properly anyway, these sieves have likely not benefited
too many users. However, there's a risk that for some users removing
sieves will exacerbate shrinking issues they already have.

I'd like to consider removing sieves, since as it stands I'm not sure
they are consistently used, and using them doesn't solve the problems
we have with shrinking. However, I'll also benchmark this branch with
sieves put back in, and if the difference in performance is minor we
may want to leave sieves alone for now.

implicit lazy val arbShort: Arbitrary[Short] = Arbitrary(
Gen.chooseNum(Short.MinValue, Short.MaxValue)
)
implicit lazy val arbShort: Arbitrary[Short] =
Contributor

why are these lazy?

Contributor Author

The existing code was lazy and I didn't change it.

It's possible there are initialization issues? If we don't need lazy I'm fine removing it.


/** Absolutely, totally, 100% arbitrarily chosen Unit. */
implicit lazy val arbUnit: Arbitrary[Unit] = Arbitrary(const(()))
implicit lazy val arbUnit: Arbitrary[Unit] =
Contributor

why lazy?

r(None, seed).copy(l = labels)
case Some(t) =>
val r = f(t)
r.copy(l = labels ++ r.labels, sd = r.seed)
Contributor

would labels | r.labels be clearer that this is set union (and possibly faster)?

Contributor Author

Yes and possibly -- seems like a good idea.

gen { (p, seed0) =>
var seed: Seed = seed0
val bldr = evb.builder
val allowedFailures = Integer.max(10, n / 10)
Contributor

Kind of a bummer that this is totally ad hoc. I wonder if it should be in p? Maybe as some kind of filter/retry strategy? It might require more thought, but maybe add a TODO. It would be interesting to think about filter-failure strategies in the context of flatMap/zip: large flatMap/zip chains cause an exponential decrease in filter success rates if you retry the whole chain, but if you retry after each step, you can keep the success rate constant.

E.g.: if you zip N things, each successful with probability p, the full zip succeeds with probability p^N, so the expected number of retries to get a single item is (1/p)^N = exp(N ln(1/p)). But if you retry each step in the chain K times for a success, you get p_step = 1 - (1 - p)^K, and 1/p_step^N = 1/(1 - (1 - p)^K)^N ~ exp(N (1 - p)^K). If you choose K ~ log N then exp(N (1 - p)^K) ~ exp(N exp(-pK)) = exp(N (1/N)^p) ~ 1.

So, what this means is if you have a zip chain N long, and you retry each step O(log N) times, you get O(1) total success rate, which means we only do N log N work.

For flatMap, we don't know how deep we are going to go (indeed, it can be dynamic), but for zip we know the width statically, so we could just do log_2(zip width) retries.

By the way, this analysis suggests that if you know you have N items, like we do in this combinator, we want to retry O(log N) times, not c*N times. I think with linear retries, which you have here, failure of the entire generator should be exponentially unlikely. Maybe linear is actually fine: since you already have to do linear work, the total work here is still linear.
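The trade-off described above can be checked numerically. This is a small sketch (editorial, with hypothetical values for the zip width and per-step success probability) comparing the two retry strategies:

```scala
object RetryMath {
  // Expected attempts when retrying the whole chain of n generators,
  // each succeeding with probability p: one attempt succeeds with p^n.
  def wholeChainAttempts(n: Int, p: Double): Double =
    math.pow(1 / p, n)

  // Expected attempts when each step is retried up to k times:
  // per-step success rises to 1 - (1-p)^k, so the chain succeeds
  // with (1 - (1-p)^k)^n per attempt.
  def perStepAttempts(n: Int, p: Double, k: Int): Double = {
    val pStep = 1 - math.pow(1 - p, k)
    1 / math.pow(pStep, n)
  }

  def main(args: Array[String]): Unit = {
    val n = 20                                      // hypothetical zip width
    val p = 0.5                                     // hypothetical success rate
    val k = (math.log(n) / math.log(2)).ceil.toInt  // K ~ log2(N) = 5
    println(f"whole chain: ${wholeChainAttempts(n, p)}%.0f expected attempts")
    println(f"per step (k=$k): ${perStepAttempts(n, p, k)}%.2f expected attempts")
  }
}
```

With these hypothetical numbers the whole-chain strategy expects about a million attempts, while the per-step strategy expects fewer than two.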

Contributor Author

Yeah I would love to be threading through something like a total retries parameter and then a used retries count. In the absence of that I've sort of erred on the side of minimizing filtering in the library itself while also adding retries.

I'm a bit nervous about adding retrying to flatMap/zip in this PR, so I'd rather wait and think about how best to do that. I do like the idea you had for zip of apportioning the retries across all the underlying generators, for example.


/** Generates a map with at most the given number of elements. This method
* is equal to calling <code>containerOfN[Map,(T,U)](n,g)</code>. */
def mapOfN[T,U](n: Int, g: Gen[(T,U)]) = buildableOfN[Map[T,U],(T,U)](n,g)
def mapOfN[T,U](n: Int, g: Gen[(T, U)]) = buildableOfN[Map[T, U],(T, U)](n, g)

/** Generates an infinite stream. */
Contributor

isn't this a lie? Doesn't it stop at the first failure?

Again, here is a place for a retry strategy: we may try to retry some number of times.

Or: we could just call unfold(p, r.seed) on the None branch: presumably we will hit more items... It isn't clear to me why a single filter should kill the infinite stream.

Contributor Author

Yeah, the comment should have said "Generates a potentially infinite stream".

Retrying seems fine to me. I didn't change the existing behavior.

Contributor Author

(Sorry, Github's layout confused me and I left some comments here that didn't make sense, which I've since deleted.)

new Gen[Char] {
def doApply(p: P, seed0: Seed): Gen.R[Char] = {
val (x, seed1) = seed0.long
val i = ((x & Long.MaxValue) % cs.length).toInt
Contributor

this is biased. Do you care? It won't produce evenly over cs.length since x is uniform over longs.

Contributor Author

The bias is very slight, since we have a 31-bit Int denominator and a 63-bit Long numerator. In the worst case we'll produce some values with about one billionth more frequency than others. I can add a comment, but it seemed more straightforward not to add retry logic for this case (as opposed to Int % Int, where the bias can be significant in the worst case).
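The worst-case bias claimed above can be computed directly. A quick editorial sketch: with x uniform over the 2^63 nonnegative longs, x % n hits the residues below (2^63 mod n) one extra time each, so the worst relative overrepresentation is 1 / floor(2^63 / n).

```scala
object ModBias {
  // Worst-case relative overrepresentation of a residue when reducing a
  // uniform 63-bit value mod n: the "lucky" residues get one extra hit
  // out of floor(2^63 / n) hits.
  def worstCaseBias(n: BigInt): BigDecimal = {
    val perResidue = BigInt(2).pow(63) / n
    BigDecimal(1) / BigDecimal(perResidue)
  }

  def main(args: Array[String]): Unit = {
    // Worst case for this generator: an Int-sized alphabet.
    println(worstCaseBias(BigInt(Int.MaxValue)))  // on the order of 2e-10
  }
}
```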


/** Generates an ASCII character, with extra weighting for printable characters */
def asciiChar: Gen[Char] = chooseNum(0, 127, 32 to 126:_*).map(_.toChar)
val asciiChar: Gen[Char] =
choose(0, 127).map(_.toChar)
Contributor

why a different strategy here than say charSample?

Contributor Author

I think charSample is best when you need to allocate a bunch of non-continuous Char ranges that would require a lot of branching to support. For asciiChar and asciiPrintableChar there's a single contiguous range.

That said, since each only uses around 100 characters, there's no real problem if we did want to allocate a static array to sample from.
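For illustration, the lookup-table approach discussed above looks roughly like this (an editorial sketch, not ScalaCheck's actual charSample code): precompute the allowed characters once, then index into the array with a uniformly drawn long.

```scala
object CharTable {
  // Precomputed table of the 95 printable ASCII characters (32 to 126).
  val printableAscii: Array[Char] = (32 to 126).map(_.toChar).toArray

  // Same reduction as the doApply shown above: mask the long to a
  // nonnegative value, then take it mod the table size.
  def sampleChar(x: Long): Char = {
    val i = ((x & Long.MaxValue) % printableAscii.length).toInt
    printableAscii(i)
  }
}
```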


/** Generates an ASCII printable character */
def asciiPrintableChar: Gen[Char] = choose(32.toChar, 126.toChar)
val asciiPrintableChar: Gen[Char] =
choose(32.toChar, 126.toChar)
Contributor

why different from charSample?

case Some(c) =>
sb += c
case None =>
()
Contributor

this is going to do a string of at most n, not exactly n, right? That's a change isn't it? Again, here is a place for a retry strategy.

Contributor Author

@non non Sep 28, 2019

Yeah that's a good point. In practice our built-in character generators don't fail but it's good to be defensive about user-provided generators.
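A bounded-retry version of this loop could look like the following sketch (hypothetical names, not the actual implementation): keep drawing characters until we have n of them or a failure budget, like the allowedFailures above, is spent, and fail the whole generation rather than silently return a shorter string.

```scala
object StringFill {
  // Draw characters until we have n, tolerating a bounded number of
  // failed draws; the budget mirrors the max(10, n / 10) heuristic above.
  def fillN(n: Int, draw: () => Option[Char]): Option[String] = {
    val allowedFailures = math.max(10, n / 10)
    val sb = new StringBuilder
    var failures = 0
    while (sb.length < n && failures <= allowedFailures) {
      draw() match {
        case Some(c) => sb += c        // keep the generated character
        case None    => failures += 1  // spend one retry and keep going
      }
    }
    // All-or-nothing: never hand back a string shorter than requested.
    if (sb.length == n) Some(sb.toString) else None
  }
}
```

For example, StringFill.fillN(3, () => Some('a')) produces Some("aaa"), while a draw that always fails exhausts the budget and produces None.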

def builder = new ArrayListBuilder[T]
}

implicit def buildableSeq[T]: Buildable[T, Seq[T]] =
Contributor

can't we do something like implicit def canBuild[A, B](implicit cbf: CanBuildFrom[Nothing, A, B]): Buildable[A, B] and let CanBuildFrom do the work?

Contributor Author

Maybe? I just chose to do the simplest thing in line with the existing code. It's worth experimenting with.

Contributor Author

After looking into this for a second I think the 2.13 collections makes relying on CanBuildFrom for this a bit dicey. We only need to support Seq and this is really only for internal use so I'd rather not try to do something more general until we need it.
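The Seq-only instance being discussed needs very little machinery. This is a simplified sketch modeled on ScalaCheck's Buildable (the trait and object names here are illustrative): all an instance has to provide is a fresh Builder.

```scala
import scala.collection.mutable

// Simplified Buildable-style typeclass: a source of fresh builders for C.
trait SeqBuildable[T, C] {
  def builder: mutable.Builder[T, C]
}

object SeqBuildable {
  // The Seq instance just delegates to the standard library's builder,
  // which works on both the 2.12 and 2.13 collections.
  implicit def buildableSeq[T]: SeqBuildable[T, Seq[T]] =
    new SeqBuildable[T, Seq[T]] {
      def builder: mutable.Builder[T, Seq[T]] = Seq.newBuilder[T]
    }
}
```

Compared with a CanBuildFrom-based derivation, this compiles identically on 2.12 and 2.13, which is the portability concern raised above.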

In particular, this adds retrying to the infiniteStream, stringOf, and
stringOfN combinators. It also slightly optimizes the loop conditions
on buildableOfN, removes lazy from most arbitrary definitions, and
cleans up a few other things.

This commit does appear to have made some benchmarks slower, although
it's possible my machine is just more busy than it was. I've also
added a few more benchmarks.

Benchmark                   (genSize)  (seedCount)  Mode  Cnt     Score      Error  Units
GenBench.arbitraryString          100          100  avgt    3   600.706 ±  569.014  us/op
GenBench.asciiPrintableStr        100          100  avgt    3   432.235 ±  229.533  us/op
GenBench.const_                   100          100  avgt    3     2.775 ±    8.017  us/op
GenBench.double_                  100          100  avgt    3     9.941 ±    3.020  us/op
GenBench.dynamicFrequency         100          100  avgt    3   481.478 ±  253.262  us/op
GenBench.eitherIntInt             100          100  avgt    3    30.911 ±   13.071  us/op
GenBench.identifier               100          100  avgt    3   186.688 ±  327.920  us/op
GenBench.int_                     100          100  avgt    3    11.266 ±    8.500  us/op
GenBench.listOfInt                100          100  avgt    3   445.506 ±  403.799  us/op
GenBench.mapOfIntInt              100          100  avgt    3  1910.653 ± 2974.722  us/op
GenBench.oneOf                    100          100  avgt    3    15.945 ±   10.462  us/op
GenBench.optionInt                100          100  avgt    3    42.815 ±   18.030  us/op
GenBench.sequence                 100          100  avgt    3   205.571 ±   42.976  us/op
GenBench.staticFrequency          100          100  avgt    3   510.956 ±  111.016  us/op
GenBench.testFilter               100          100  avgt    3  1081.890 ±  607.106  us/op
GenBench.zipIntInt                100          100  avgt    3    27.987 ±   22.614  us/op
@non
Contributor Author

non commented Sep 28, 2019

@johnynek Thanks for the thorough review! I just pushed some code that addresses many of your comments, but not all of them. I do seem to have made some combinators a bit slower so I want to consider how to proceed next.

@non
Contributor Author

non commented Sep 28, 2019

Apparently changing lazy val to val breaks binary compatibility. Who knew?

@ashawley
Contributor

I ran the benchmark for the first commit, 126168f.

Before (current master):

GenBench.arbitraryString    avgt    3   4191.12 ± 14057.61  us/op
GenBench.asciiPrintableStr  avgt    3   2254.36 ±  2951.37  us/op
GenBench.const_             avgt    3      4.39 ±     0.60  us/op
GenBench.double_            avgt    3     13.53 ±     3.97  us/op
GenBench.dynamicFrequency   avgt    3   2159.29 ±   347.15  us/op
GenBench.eitherIntInt       avgt    3     58.70 ±    12.89  us/op
GenBench.identifier         avgt    3   4862.64 ±   493.61  us/op
GenBench.int_               avgt    3     17.55 ±     2.89  us/op
GenBench.listOfInt          avgt    3   2008.92 ±   561.94  us/op
GenBench.mapOfIntInt        avgt    3   4607.00 ±   687.68  us/op
GenBench.oneOf              avgt    3     26.04 ±     2.70  us/op
GenBench.optionInt          avgt    3     95.90 ±   440.56  us/op
GenBench.sequence           avgt    3    324.61 ±    77.22  us/op
GenBench.staticFrequency    avgt    3   2081.64 ±   536.27  us/op
GenBench.testFilter         avgt    3  23920.79 ± 10953.43  us/op
GenBench.zipIntInt          avgt    3     37.93 ±     0.85  us/op

After (126168f):

GenBench.arbitraryString    avgt    3    438.94 ±   539.99  us/op
GenBench.asciiPrintableStr  avgt    3    316.78 ±   568.60  us/op
GenBench.const_             avgt    3      3.05 ±     0.24  us/op
GenBench.double_            avgt    3     13.20 ±     8.80  us/op
GenBench.dynamicFrequency   avgt    3    748.58 ±   702.64  us/op
GenBench.eitherIntInt       avgt    3     48.74 ±    41.64  us/op
GenBench.identifier         avgt    3    224.31 ±    89.49  us/op
GenBench.int_               avgt    3     14.27 ±     0.45  us/op
GenBench.listOfInt          avgt    3    623.88 ±    20.94  us/op
GenBench.mapOfIntInt        avgt    3   2536.78 ±   117.44  us/op
GenBench.oneOf              avgt    3     24.77 ±    13.41  us/op
GenBench.optionInt          avgt    3     65.45 ±    32.17  us/op
GenBench.sequence           avgt    3    273.03 ±    17.93  us/op
GenBench.staticFrequency    avgt    3    654.64 ±   130.07  us/op
GenBench.testFilter         avgt    3   1424.34 ±   282.77  us/op
GenBench.zipIntInt          avgt    3     34.77 ±    33.82  us/op

This confirms the improvements Erik observed above, differing only by some run-to-run variation:

  • arbitraryString (9.5x faster)
  • asciiPrintableStr (7x faster)
  • dynamicFrequency (3x faster)
  • identifier (21x faster)
  • listOfInt (3.2x faster)
  • mapOfIntInt (1.8x faster)
  • staticFrequency (3.2x faster)

@non
Contributor Author

non commented Sep 30, 2019

The outstanding work on this branch is:

  1. Try restoring the sieves and see how much of the improvement we lose, if any.
  2. Consider whether removing sieves is on the table or not.
  3. Think about taking a more principled stance on retries.
  4. Try doing some more profiling to see if there's anywhere else we can save a lot of time.

I'd also like to try out this branch on other repositories to see what the total wall-clock-time improvement from this branch is.

@ashawley
Contributor

ashawley commented Jan 8, 2020

I'm going to work on cherry-picking this apart so that it can get closer to crossing the finish line. I noticed a lot of the improvements here are not very controversial. Isolating them out and merging them will make that case clearer.

This exercise also helps me study the changes. I don't have good intuition about this aspect of the library. Erik is likely the sole expert on the seed implementation he contributed to ScalaCheck.

Merging the changes separately will also make it easier to troubleshoot any potential regressions. Any bugs are most likely due to the challenge of improving an old but popular property-testing library, not sloppy coding. Since the Travis build publishes snapshots, merging the individual changes is also worth it should we need to verify them.

@larsrh
Contributor

larsrh commented Oct 26, 2020

I think all of these changes have now been incorporated. Feel free to reopen if this isn't the case.
