
Lifetime and DA calculations for pyAT #351

Merged
merged 63 commits into from
Apr 25, 2022

Conversation

swhite2401
Contributor

@swhite2401 swhite2401 commented Jan 18, 2022

This branch allows computing the dynamic aperture and lifetime of a pyAT lattice.
Below is a script with example usage.
To be noted: use_mp=True has crashed in the past with MacOS; to be tested again with the new patpass #347

import at
import matplotlib.pyplot as plt
from at.plot import plot_acceptance
import multiprocessing
import time
import numpy
from at.physics import chromaticity

path = '../lattice/'
filename = 'S28F.mat'
key = 'LOW_EMIT_RING_INJ'
latticef = path+filename
ring = at.load_lattice(latticef,key=key)
ring.radiation_off()

'''
a,b,c = chromaticity(ring, method='linopt', dpm=0.02, npoints=11, order=3)
plt.plot(b,c)
plt.show()
'''


planes = ['x','y']
npoints = [21,21]
amplitudes = [15e-3,0.006]

# example computing 1D acceptance using multi-processing
# *vertical* and *momentum* variants exist; the first argument is the desired precision
# default grid mode is GRID (a vector in 1D)
t0 = time.time()
b,s,g = ring.get_horizontal_acceptance(0.1e-3, amplitudes[0], use_mp=True)
print(time.time()-t0)

# example computing 1D acceptance using a single processor
# *vertical* and *momentum* variants exist; the first argument is the desired precision
# default with a single processor is the recursive search
t0 = time.time()
b2,s2,g2 = ring.get_horizontal_acceptance(0.1e-3, amplitudes[0], grid_mode=at.GridMode.RECURSIVE, use_mp=False)
print(time.time()-t0)

print(b, b2)


# example computing 1D acceptance using multi-processor
# default is radial grid
bf,sf,gf = ring.get_acceptance(planes, npoints, amplitudes, use_mp=True)

# same thing with a plot at the end, warning: plot is non blocking
ring.plot_acceptance(planes, npoints, amplitudes, use_mp=True)

#  Now example lifetime calculation for 1 cell 
emity = 10e-12
bunch_curr = 200e-3/992
zn = 0.7
ids = ring.get_refpts('ID*')
quads = ring.get_refpts(at.Quadrupole)
dips = ring.get_refpts(at.Dipole)
ncells = 1
refpts = numpy.sort(numpy.concatenate((ids[:ncells], 
                                       quads[quads<ids[ncells]],
                                       dips[dips<ids[ncells]])))
tlt, momap, refpts = ring.get_lifetime(emity, bunch_curr, zn = zn, refpts=refpts,
                                       use_mp=True, verbose=True)
print(tlt/3600)


# some figures
plt.figure()
plt.plot(*gf,'.')
plt.plot(*sf,'.')
plt.plot(*bf)

plt.figure()
plt.plot(momap[:,0])
plt.plot(momap[:,1])

plt.show()

TODO:
1. different radii distributions: Done
2. divider in the recursive search (defaults to 2 now): Done
3. search for max range in 2D to use as a starting point: tested, no performance gain
4. cluster submission (separate PR)

@swhite2401
Contributor Author

I am converting this to a draft since there might be many further iterations before converging on the final version.

@lfarv
Contributor

lfarv commented Jan 28, 2022

@swhite2401: works nicely as expected, but there is a problem:

Instead of your awful refpts combination:

ids = ring.get_refpts('ID*')
quads = ring.get_refpts(at.Quadrupole)
dips = ring.get_refpts(at.Dipole)
ncells = 1
refpts = numpy.sort(numpy.concatenate((ids[:ncells], 
                                       quads[quads<ids[ncells]],
                                       dips[dips<ids[ncells]])))

I'm using a simpler combination with boolean refpts:

ncells = 1
ids = ring.get_cells(at.checkname('ID*'))
quads = ring.get_cells(at.checktype(at.Quadrupole))
dips = ring.get_cells(at.checktype(at.Dipole))
# Combine references
refpts= ids | quads | dips
# Set cells > ncells to False
cellstart = ring.uint32_refpts(ids)
refpts[cellstart[ncells]:] = False

Using booleans is usually more efficient, but get_acceptance crashes with boolean refpts.

At least at line 98 of acceptance.py:
rp = numpy.atleast_1d(refpts)
should be replaced by
rp = ring.uint32_refpts(refpts)

But there may be other problems…
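The crash reported above can be illustrated with plain numpy: a boolean refpts mask passed through numpy.atleast_1d keeps its boolean dtype, so code that expects element indices breaks, while an integer conversion (what ring.uint32_refpts does) behaves as intended. This is a minimal sketch with a toy 10-element mask, not the pyAT code itself:

```python
import numpy

# Toy stand-in for a boolean refpts mask over a 10-element ring
mask = numpy.zeros(10, dtype=bool)
mask[[2, 5, 7]] = True

# atleast_1d keeps the boolean dtype: iterating yields True/False,
# not element indices, which is why index-based code crashes
rp_bad = numpy.atleast_1d(mask)
print(rp_bad.dtype)  # bool

# Converting to integer indices (as uint32_refpts does) fixes this
rp_good = numpy.flatnonzero(mask).astype(numpy.uint32)
print(rp_good)  # [2 5 7]
```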

@lfarv
Contributor

lfarv commented Jan 28, 2022

class GridMode(Enum):
    """
    Enum defining the grid mode used when searching
    for the boundary
    """
    RADIAL = 0
    GRID = 1
    RECURSIVE = 2

CARTESIAN instead of GRID?
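The suggested rename could even keep old code working: Python's Enum treats a second member with an equal value as an alias of the first, so GRID could remain as a deprecated alias of CARTESIAN. A minimal sketch (the alias is my suggestion, not part of the PR):

```python
from enum import Enum

class GridMode(Enum):
    """Grid mode used when searching for the boundary."""
    RADIAL = 0
    CARTESIAN = 1   # suggested rename of GRID
    GRID = 1        # equal value: GRID becomes an alias of CARTESIAN
    RECURSIVE = 2

# Old code referring to GridMode.GRID still resolves to the renamed member
print(GridMode.GRID is GridMode.CARTESIAN)  # True
print(GridMode.GRID.name)                   # CARTESIAN
```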

@lfarv
Contributor

lfarv commented Jan 28, 2022

By analogy with get_optics, the processing of dp should be:

  • if ring is 6D, take the ring as it is, dp is ignored,
  • if ring is 4D, dp is honoured.

It makes things simpler:

  • in *boundary_search, use dp=None as the default value. None is interpreted as 0 in 4D
  • set_ring_orbit becomes simpler: forget the frequency, forget the warnings.
  def set_ring_orbit(ring, dp, refpts, orbit):
      """
      Return a ring starting at refpts and the initial
      closed orbit
      """
      newring = ring
      if refpts is not None:
          assert numpy.isscalar(refpts), 'Scalar value needed for refpts'
          newring = ring.rotate(refpts)
      if orbit is None:
          orbit, _ = ring.find_orbit(dp=dp)
      return orbit, newring

Now, if orbit is the initial closed orbit of the non-rotated ring, why compute it again for each starting point? It could be done only once. Or should there be a newring.find_orbit instead of ring.find_orbit? Or could you compute the orbit at all refpts once and for all, and rotate this orbit array simultaneously with the ring?
Just looking to minimise the closed-orbit searches, though it is very fast compared to the rest…
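The last idea above, computing the closed orbit once at all refpts and rotating the array together with the ring, can be sketched with plain numpy. The `orbits` array (one 6D orbit per refpt) is a toy stand-in; `ring.rotate` itself is not reproduced here:

```python
import numpy

n_refpts = 5
# Hypothetical precomputed closed orbits: one 6D vector per reference point
orbits = numpy.arange(n_refpts * 6, dtype=float).reshape(n_refpts, 6)

def rotated_orbit(orbits, start):
    """Orbit array seen from a ring rotated to start at refpt `start`:
    the same rows, cyclically shifted so that row 0 is the new origin."""
    return numpy.roll(orbits, -start, axis=0)

rot = rotated_orbit(orbits, 2)
# The orbit at the new origin is the original orbit at refpt 2
print(numpy.array_equal(rot[0], orbits[2]))  # True
```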

Contributor

@lcarver lcarver left a comment


fine for me (once it passes all the checks)

@swhite2401
Contributor Author

Latest improvements:
-add a small epsilon to the lines y(x=0) and x(y=0) to avoid an artificially large DA on these lines
-improve the boundary search in the 2D cartesian grid
-improve the help

@swhite2401
Contributor Author

@lfarv I had to exclude the Python 3.7 Windows test again, but I saw it was commented out on master... how do you want to handle this?

@lfarv
Contributor

lfarv commented Apr 5, 2022

@swhite2401 : the master branch has modifications in at.c and some .py files to allow the tests again.
I'm not happy with these, since apparently they are linked to specific problems with the GitHub Windows runners, which may change without notice (this happened last week).

For the moment, take the python-test.yml from master (or revert to the previous version): the tests will fail, but you will not disturb the master.

In the future, I would like to remove the workarounds in the pyat code and keep a clean version. With that, the tests failed for Windows/3.7 in the old GitHub configuration, and would fail for Windows/3.6 in the present one (hence their exclusion from the test sequence). But the AT code would be clean.

@swhite2401
Contributor Author

For the moment, take the python-test.yml from master (or revert to the previous version): the tests will fail, but you will not disturb the master.

Ok, restored

@lfarv
Contributor

lfarv commented Apr 6, 2022

All the tests are now running? Very strange… Apparently GitHub changed again, and maybe corrected the bugs. I'll look at that. Anyway, it looks ok for merging.

@lfarv
Contributor

lfarv commented Apr 6, 2022

@swhite2401 : now I understand: you merged the changes in master, which allows the tests to run. But the problem is still there, like for instance in the wake_factory_method branch…

@swhite2401
Contributor Author

Final verification regarding the lifetime calculation for @simoneliuzzo.
I used the ESRF lattice and computed the momentum aperture over 2 cells with 248 turns, which gives:

[image: momentum aperture plot]

Then I computed the lifetime using the default method in Matlab; only one method is provided in python. The results are:

Python MA + integral: 36.41h
Matlab MA + python integral: 38.51h
Matlab MA + Matlab integral: 38.51h

For me this is good; the small difference in MA can be explained by differences in the algorithm and the default tolerances.

To be noted:
In python I had originally implemented the lifetime as the weighted sum of each element, where the weight was defined as the distance to the next element.
However, in matlab the weight is the length of the element.

I have corrected the python to match the matlab, but I am not convinced this is correct: naively I would have thought that the distance between elements makes more sense, as it implicitly assumes a constant MA between elements. Adding elements would refine this approximation.
On the other hand, using the length of the element does not seem very logical to me...

@simoneliuzzo, @carmignani, @lfarv, your insight would be much appreciated on this.

Besides that I think all functionalities are validated and this is ready for the merge.
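The two weighting schemes under discussion can be contrasted with a toy numpy sketch. The arrays (`s_pos`, `lengths`, `loss_rate`) are made-up illustrative values, not pyAT data; the point is only that the two averages generally disagree when not all elements are selected:

```python
import numpy

# Toy entrance s-positions of the selected refpts, element lengths,
# and a hypothetical per-refpt local loss rate
s_pos = numpy.array([0.0, 1.0, 2.5, 4.0, 5.0])
lengths = numpy.array([0.5, 1.0, 0.0, 1.0, 0.5])   # a marker has length 0
loss_rate = numpy.array([1.0, 2.0, 1.5, 3.0, 2.0])

# Original python weighting: distance to the next selected refpt
w_dist = numpy.diff(s_pos)                          # one fewer weight than points
avg_dist = numpy.sum(loss_rate[:-1] * w_dist) / numpy.sum(w_dist)

# Matlab weighting: the element's own length (zero-length markers drop out)
avg_len = numpy.sum(loss_rate * lengths) / numpy.sum(lengths)

print(avg_dist, avg_len)  # the two schemes give different averages
```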

@swhite2401
Contributor Author

@swhite2401 : now I understand: you merged the changes in master, which allows the tests to run. But the problem is still there, like for instance in the wake_factory_method branch…

Ah yes you are right! I made a mess at some point...sorry for the confusion

@lfarv
Contributor

lfarv commented Apr 6, 2022

@swhite2401

    #spos = numpy.diff(ring.get_s_pos(refpts))
    spos = numpy.array([e.Length for e in ring[refpts]])

These are equivalent if refpts points to all the elements. If not, I agree that your 1st idea is correct (especially if you select only 0-length elements like markers!).

However, the MA and beam dimensions are computed at the entrance of the selected element, and the values are applied over the following length. Wouldn't it be better to compute the weight (or length) as going from half-way from the previous element to half-way to the new element?
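The half-way weighting proposed above can be sketched with numpy: each refpt gets the span from the midpoint to the previous point up to the midpoint to the next point, wrapping around the ring circumference. The positions are toy values and the helper name is mine:

```python
import numpy

# Toy entrance s-positions of the selected refpts on a ring of circumference C
s_pos = numpy.array([0.0, 1.0, 2.5, 4.0])
C = 5.0

def midpoint_weights(s_pos, circumference):
    """Weight of each refpt: from half-way to the previous point
    to half-way to the next point, wrapping around the ring."""
    ext = numpy.concatenate(([s_pos[-1] - circumference], s_pos,
                             [s_pos[0] + circumference]))
    return (ext[2:] - ext[:-2]) / 2.0

w = midpoint_weights(s_pos, C)
print(w, w.sum())  # the weights exactly cover the full circumference
```

A nice property of this scheme is that the weights always sum to the circumference, whatever subset of refpts is selected.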

@swhite2401
Contributor Author

@swhite2401

    #spos = numpy.diff(ring.get_s_pos(refpts))
    spos = numpy.array([e.Length for e in ring[refpts]])

These are equivalent if refpts points to all the elements. If not, I agree that your 1st idea is correct (especially if you select only 0-length elements like markers!).

However, the MA and beam dimensions are computed at the entrance of the selected element, and the values are applied over the following length. Wouldn't it be better to compute the weight (or length) as going from half-way from the previous element to half-way to the new element?

Yes, you are right, this would be even better! (The selection of zero-length elements is properly discarded in matlab; I had not done it because it was not necessary with my initial idea...)

So how would you like to proceed? This has consequences for the matlab that everyone is using... and, from the tests I have done, results in an underestimated lifetime. This certainly goes beyond this PR.
I can propose to merge the python with the correct version, and then we could propose this bugfix for matlab in a separate PR?
What do you think?

@lfarv
Contributor

lfarv commented Apr 6, 2022

@swhite2401 :

  • ok for reverting python to the "correct" version and merging,
  • ok for a bug fix in Matlab (with the usual consequences that "the results will change"). Who volunteers?

Concerning the possible improvement, up to you to see if it's worth going to that in another PR.

@swhite2401
Contributor Author

I am clearly more comfortable with python, but I can give it a try for matlab (in a separate PR).
Concerning the improvement, I have to think about the boundary conditions... but I would like this to be the final version.

@swhite2401
Contributor Author

Ok, so discussing with @carmignani we thought of a better solution, which is to use the optics calculated at each element plus the interpolated momentum aperture (the MA calculation is the time-consuming part) and the distance between interpolation points.

This is what we think would be the least approximate solution. I will make a proposal for the python in this direction.
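The interpolation idea can be sketched with numpy.interp: the expensive MA is computed only on a coarse subset of s-positions, then interpolated onto every element where the optics are already known. All values here are toy numbers for illustration:

```python
import numpy

# Toy s-positions: optics known at all 21 elements,
# MA computed only on every 4th one (the expensive part)
s_all = numpy.linspace(0.0, 10.0, 21)
s_coarse = s_all[::4]
ma_coarse = 0.03 + 0.005 * numpy.sin(s_coarse)   # toy momentum aperture

# Interpolate the coarse MA onto every element
ma_all = numpy.interp(s_all, s_coarse, ma_coarse)

# At the coarse points the interpolation reproduces the computed values
print(numpy.allclose(ma_all[::4], ma_coarse))  # True
```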

@swhite2401
Contributor Author

Final check on the lifetime, using either the interpolated method discussed above or the previous per-element method from matlab.
The lifetime is calculated from a single EBS standard cell.

Each point represents a different sampling rate (1: all elements; otherwise every n-th element). Both methods give identical results when all elements are used; for fewer elements the interpolated method fluctuates less.
I have set the interpolated method as default; the matlab method is still accessible with the interpolate=False argument.

For me this is ready to be merged. Any objection?

[image: lt_checks plot]

@swhite2401 swhite2401 merged commit 56b6157 into master Apr 25, 2022
@swhite2401 swhite2401 deleted the tltda branch April 25, 2022 11:40
Labels: Python (For python AT code)