avoid unnecessary divisions and calls to gcd #38924
base: develop
Conversation
Documentation preview for this PR (built with commit a8e2ca1; changes) is ready! 🎉
This reduces the time for the example from #38108 and some others significantly. The old timing is
The new timing is about 5 seconds. With a more aggressive patch we could reduce it to 3.6 seconds.
The new timing is about 15 seconds. The drawback is that some computations give less beautiful (but of course, equivalent) results. For example:
instead of
The failures are a bit puzzling. In
which gives with this patch
These are the same elements, just represented differently. In
we now get
I don't know whether this is really the same thing. I guess they come from the fact that
Yes, that is the same morphism, since it is projective space (and thus you can scale the coordinates freely).
I am pretty sure that b divides d in the last part, because d is the gcd of the coefficients of the initial A and B; we then reduce B by some common factor with A and take the gcd of the resulting coefficients. So it should be safe to do return (d // b) * B. Although perhaps we should cc someone who is more of an expert to triple-check this.
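The divisibility argument can be sanity-checked on toy data. The following sketch uses plain Python with a hypothetical coefficient list standing in for B (b and d are stand-ins, not the actual variables in Sage's code): when b divides d, the scaling (d // b) * B is exact integer arithmetic and introduces no fractions.

```python
from math import gcd
from functools import reduce

# Hypothetical coefficient list standing in for the polynomial B above;
# the real b and d come from Sage's gcd code and are not reproduced here.
B = [8, 20, 28]
b = reduce(gcd, B)   # content of B: here 4
d = 3 * b            # any multiple of b can play the role of d

# When b divides d, floor division d // b is exact, so (d // b) * B
# equals (d * B) / b coefficient-wise without leaving the integers.
assert d % b == 0
scaled = [(d // b) * c for c in B]
assert scaled == [d * c // b for c in B]
```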
It needs work, because I want to incorporate the much faster version I now have, and I think the current version has some weak spots. While my implementation is much faster on some examples, there are others where it leads to problems, and I have to investigate this.
Indeed, the gcd is only defined up to a unit.
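This can be illustrated in plain Python (a toy illustration, not Sage code): over the integers the units are 1 and -1, and over a field every nonzero element is a unit, so a gcd is only determined up to such a factor.

```python
from math import gcd
from fractions import Fraction

# Over ZZ both 2 and -2 are valid gcds of -4 and 6; math.gcd merely
# picks the nonnegative representative.
assert gcd(-4, 6) == 2

# Over QQ every nonzero element is a unit, so any nonzero scalar
# multiple of a gcd is again a gcd; different (equally valid) choices
# of unit are what makes some results look less "beautiful" than others.
g = Fraction(3, 7)   # a hypothetical gcd
u = Fraction(7, 3)   # any nonzero rational is a unit
assert u * g == 1    # u * g is an equally valid gcd, normalized to 1
```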
With your push, it would be good to add some tests for these cases (at least, given the bot results, it seems they are not already included).
It may be unrelated, but it is possible to compute the
Sorry for the noise; lcm is slow.
I think the patch as is is quite good, so I am setting it to needs-review; I will also advertise it on sage-devel. The results are not as beautiful as with the current version, but it is significantly faster.

I have an additional improvement, which I am not so sure about. It avoids constructing a new polynomial ring for each variable. Instead, it constructs a univariate ring over the polynomial ring once, and reuses it until all coefficients are constants. I am sure this could be further improved and streamlined, but it is unclear whether it is worth the effort.
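The idea of viewing a multivariate polynomial as a univariate one over a smaller polynomial ring can be sketched in plain Python (a toy model using dicts keyed by exponent tuples, not Sage's data structures):

```python
# View a multivariate polynomial, stored as {exponent_tuple: coefficient},
# as a univariate polynomial in its first variable whose coefficients are
# polynomials in the remaining variables; recurse until the coefficients
# are constants.
def to_recursive(poly, nvars):
    if nvars == 0:
        return poly[()]                      # constant term
    layers = {}
    for exps, c in poly.items():
        layers.setdefault(exps[0], {})[exps[1:]] = c
    return {e: to_recursive(p, nvars - 1) for e, p in layers.items()}

# x^2*y + 3*y^2, with exponent tuples (x_exp, y_exp)
p = {(2, 1): 1, (0, 2): 3}
r = to_recursive(p, 2)
assert r == {2: {1: 1}, 0: {2: 3}}
```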
This turns out not to be the case:
at the end of the algorithm; I have not yet checked whether this is to be expected.
I actually do not quite understand what determines the beauty of the results. For example, even in develop we have
That's a good example. Can you add it as a test and a note in the code? I don't know why that isn't reducing itself properly. That would be a bug IMO. |
Co-authored-by: mathzeta <[email protected]>
I'm not sure where, but yes.
I don't think it's a bug: the gcd is one:
We could require fractions of polynomials to have monic leading terms, but I don't think we always want that:
In fact:
@@ -652,7 +657,7 @@ def gcd(self,other):
     from sage.rings.integer_ring import ZZ
     try:
         return P(ZZ(self).gcd(ZZ(other)))
-    except TypeError:
+    except (TypeError, ValueError):
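The effect of the widened except clause can be mimicked in plain Python (int_gcd_or_none is a hypothetical helper, not Sage code): conversion to an integer may fail with either TypeError or ValueError depending on the input, and both should trigger the generic fallback.

```python
from math import gcd

def int_gcd_or_none(a, b):
    """Try the fast integer gcd; return None when either argument
    does not convert to an integer (the generic-algorithm fallback)."""
    try:
        return gcd(int(a), int(b))
    except (TypeError, ValueError):
        return None

assert int_gcd_or_none(8, 12) == 4
assert int_gcd_or_none("8", 12) == 4       # int("8") succeeds
assert int_gcd_or_none("x", 12) is None    # ValueError from int("x")
assert int_gcd_or_none(None, 12) is None   # TypeError from int(None)
```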
Possibly this is wrong. Should we always raise a TypeError in _integer_? Currently we don't, for example, in QmodnZ_Element, ComplexIntervalFieldElement, ComplexBall, RealBall, LaurentPolynomial, LocalizationElement, AlgebraicNumber, AlgebraicReal, RealIntervalFieldElement, pAdicZZpXCAElement, pAdicZZpXFMElement, pAdicZZpXCRElement, pAdicCappedRelativeElement. Or should it actually always be a ValueError?
I think it is good to widen what we catch. Yes, we probably should be more consistent, but that isn't a reason for this to fail. Do you have an example where it was breaking before this change?
Okay, but only because 7 is a unit. (Although if you do the gcd computation in QQbar, you get 7.) It is kind of a bug since it isn't detecting that the unit in the rational function is the leading coefficient of both...
Yes, but then
is also a bug. Put differently, it is a decision to take in multivariate polynomial normalization, I think.
This is an interesting inconsistency (tested on 10.5.beta3):
Personally, I feel that normalizing at least the leading coefficient of the denominator to 1 when working over a field is desirable to normalize the elements. Well, I guess this is off topic for this PR at the end of the day.
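The normalization convention discussed here can be sketched in plain Python over QQ, with polynomials as coefficient lists in increasing degree (make_denominator_monic is an illustrative helper, not Sage's implementation):

```python
from fractions import Fraction

def make_denominator_monic(num, den):
    """Normalize a fraction of polynomials over QQ (coefficient lists,
    lowest degree first) so the denominator is monic.  A sketch of the
    convention discussed above, not Sage's actual code."""
    lc = den[-1]                          # leading coefficient of denominator
    num = [Fraction(c) / lc for c in num]
    den = [Fraction(c) / lc for c in den]
    return num, den

# (2x + 4) / (2x + 6)  becomes  (x + 2) / (x + 3)
num, den = make_denominator_monic([4, 2], [6, 2])
assert num == [2, 1] and den == [3, 1]
```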
OK, I'll adapt the doctests then. I really dislike
but I got no reaction on sage-devel.
So, I just checked how univariate polynomial rings are treated specially. There is an implementation of the method
In an effort to make #38108 more usable, we optimize the computation of gcds in generic polynomial rings.
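One standard ingredient of such optimizations is splitting off the content, so that coefficient gcds are computed once per polynomial instead of repeatedly. A minimal sketch in plain Python over ZZ (content and primitive_part are illustrative helpers, not the PR's code):

```python
from math import gcd
from functools import reduce

def content(p):
    """Gcd of the integer coefficients of p (given as a list)."""
    return reduce(gcd, p)

def primitive_part(p):
    """Divide out the content, leaving coprime coefficients."""
    c = content(p)
    return [x // c for x in p]

# gcd(f, g) factors as gcd(content(f), content(g)) * gcd(pp(f), pp(g)),
# so the coefficient gcd is computed once per polynomial up front,
# avoiding repeated divisions and gcd calls during the main algorithm.
f = [6, 12, 18]
assert content(f) == 6
assert primitive_part(f) == [1, 2, 3]
```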