Adjoint solver #795
Conversation
# finally, specification of what gets installed in the meep python
# module directory of the python site-packages installation
# Q: Why is this not redundant since e.g. the HL_IFACE files should
# already be installed by virtue of being in pkgpython_PYTHON
This section assembles the meep package hierarchy in the build directory so that the normal make / make check / make install workflow tests the built package before installing anything. If there were any adjoint solver tests, make check would currently be unable to import the meep.adjoint module.
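(For what it's worth, here is a minimal sketch of the kind of smoke test make check could run once adjoint tests exist; the file name and its placement under python/tests are my assumptions, not part of this PR.)

```python
# Hypothetical smoke test (not part of this PR): checks that the meep.adjoint
# subpackage staged in the build directory is importable before installation.
import unittest

class TestAdjointImport(unittest.TestCase):
    def test_import_adjoint(self):
        # Fails if Makefile.am did not assemble meep/adjoint in the build tree.
        import meep.adjoint
        self.assertTrue(hasattr(meep.adjoint, '__file__'))

if __name__ == '__main__':
    unittest.main()
```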
######################################################################
def get_EH_slice(self, c, nf=0):
    EH = self.sim.get_dft_array(self.dft_obj, c, nf)
    return EH if np.ndim(EH)>0 else 0.0j*np.zeros(self.slice_dims)
It seems like this is a bugfix that should be incorporated into get_dft_array itself?
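Something along these lines, perhaps; a rough sketch only, where the helper name and the fallback_dims argument are my inventions rather than anything in get_dft_array today:

```python
import numpy as np

def get_dft_array_or_zeros(sim, dft_obj, c, nf=0, fallback_dims=None):
    # Hypothetical wrapper: when the requested DFT slice is empty on this
    # process, get_dft_array comes back rank-0, so substitute a complex zero
    # array of the expected shape to keep downstream slicing code uniform.
    arr = sim.get_dft_array(dft_obj, c, nf)
    if np.ndim(arr) > 0:
        return arr
    return np.zeros(fallback_dims, dtype=np.complex128)
```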
EH_forward=cell.get_EH_slices(nf,label='forward')
EH_adjoint=cell.get_EH_slices(nf) # no label->current simulation
self.dfdEps=np.sum( [EH_forward[nc]*EH_adjoint[nc]
                     for nc,c in enumerate(cell.components) if c in Exyz], 0 )
Doesn't a comprehension like this produce a Python list and not a numpy array? On a Python list, my recollection is that you might as well use the builtin sum; np.sum offers no advantage and may actually be slower. Not that it matters here, since the dominant cost will be building the list in the first place.
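To make the point concrete (a standalone snippet, not code from this PR): the comprehension yields an ordinary Python list of arrays, and the builtin sum already adds those arrays elementwise.

```python
import numpy as np

# Stand-ins for the per-component products EH_forward[nc]*EH_adjoint[nc].
slices = [np.full((4, 4), 1.0 + 2.0j), np.full((4, 4), 3.0 - 1.0j)]

total_builtin = sum(slices)       # builtin sum: elementwise over the arrays
total_numpy = np.sum(slices, 0)   # np.sum along axis 0: same result
assert np.allclose(total_builtin, total_numpy)
```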
def eval(self, p=[0.0,0.0], beta_vector=None):
    bv = beta_vector if beta_vector else self.beta_vector
    return sum( [ bv[idx]*val for idx,val in self.contributors(p) ] )
Eventually we will want a native C++ implementation of this sort of finite-element basis (or just bilinear interpolation on a grid, rather than triangles), since for optimization you will eventually want to use something like this eval for a material function, but a pure-Python implementation like this will be pretty slow when setting up ε on a lot of grid points. (Especially since in practical cases one will want even more complicated basis functions, like the SIMP method with a smoothing filter, in order to do optimization with a "binarization" pass and a resolution constraint.)
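For illustration, a rough sketch of the bilinear-on-a-Cartesian-grid alternative (my own sketch, not code from this PR; the design-region bounds and the (Nx, Ny) layout of the design array are assumptions):

```python
import numpy as np

def make_bilinear_eval(beta, xmin, xmax, ymin, ymax):
    """Return eval(p) that bilinearly interpolates the design array beta
    (shape (Nx, Ny)) at a point p = (x, y) inside the design region."""
    Nx, Ny = beta.shape

    def eval(p):
        # Map p to fractional grid coordinates, clamped to the last cell.
        fx = (p[0] - xmin) / (xmax - xmin) * (Nx - 1)
        fy = (p[1] - ymin) / (ymax - ymin) * (Ny - 1)
        ix = int(np.clip(fx, 0, Nx - 2))
        iy = int(np.clip(fy, 0, Ny - 2))
        tx, ty = fx - ix, fy - iy
        return ((1 - tx) * (1 - ty) * beta[ix, iy]
                + tx * (1 - ty) * beta[ix + 1, iy]
                + (1 - tx) * ty * beta[ix, iy + 1]
                + tx * ty * beta[ix + 1, iy + 1])

    return eval
```

Such an eval could be handed to Meep as a material function today, though a C++ version would avoid the per-point Python call overhead.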
FYI, I want to be cautious about incorporating finite-element modules for interpolation of the degrees of freedom from an arbitrary mesh (as opposed to simple bilinear interpolation from a Cartesian grid), for three reasons:
Regarding the choice of a basis mesh/method: I've run various adjoint experiments with the newer adjoint module that uses fenics to mesh and interpolate, trying to verify that the adjoint gradient approaches the finite-difference gradient. It seems that the gradient w.r.t. the design variables (p) is very sensitive to the projection operation needed to go from ∂J/∂ϵ to ∂J/∂p (where J is the objective function). Is this behavior expected (I would assume so, since the projection is a least-squares estimation problem)? Would certain bases approximate the finite-difference gradient more closely than others?
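For reference, the comparison I have been running is essentially a central-difference check of this shape (schematic only; objective(p) stands in for a full forward simulation plus objective evaluation and is not part of the adjoint module):

```python
import numpy as np

def finite_difference_gradient(objective, p, h=1e-4):
    # Central-difference estimate of dJ/dp, compared entry by entry against
    # the adjoint gradient; each component costs two forward solves.
    grad = np.zeros_like(p, dtype=float)
    for i in range(p.size):
        dp = np.zeros_like(p, dtype=float)
        dp[i] = h
        grad[i] = (objective(p + dp) - objective(p - dp)) / (2 * h)
    return grad
```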
* add --without-scheme option to configure to bypass building the scheme interface
* revised normalization of eigenmode sources to yield unit power flux (in the CW case); added discussion to FAQ section of documentation
* updates
* update FAQ entry regarding normalization of eigenmode sources
* updates
* purged extraneous adjoint-related content from master; also, renamed 'omega->freq' in python/source.py/fourier_transform
* purged extraneous adjoint-related content from master; also, renamed 'omega->freq' in python/source.py/fourier_transform
* added meep.adjoint python module for adjoint solver; added python/examples/adjoint_optimization with two examples so far; rearranged python/Makefile.am to accommodate adjoint module; update python/Makefile.am to byte-compile mpb python source files that were previously not getting byte-compiled (intentionally?)
* updates
* updates
* updates
* updates
* updates
* updates
* updates
* updates
Adjoint solver module.
The official documentation aspires to be thorough and comprehensive, though it currently has some incomplete sections. I will create a separate PR for the source of this documentation, since it uses a mkdocs theme that is not immediately compatible with the existing MEEP doc tree.
A sketchier documentation page, the quickstart, is included in this PR as a single file within the existing MEEP doc tree: AdjointQuickStart.
Closes #600.