Latent space documentation #15
Conversation
This PR will be rebased and merged after PR #11.
Force-pushed from 773df78 to ddbac34.
@punkduckable, you did not resolve any conflicts while you were rebasing. See, there are a bunch of conflict messages everywhere in the code. This won't happen if you rebase the branch with only your commits. You needed to run the following git commands:
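A minimal sketch of that workflow, assuming the feature branch (here called latent-space-docs, name illustrative) should carry only your commits on top of main:

```sh
# Fetch the latest main from the remote.
git fetch origin main

# Replay only your commits on top of the updated main; git stops at each conflict.
git checkout latent-space-docs
git rebase origin/main

# At each stop: edit the conflicted files, remove the <<<<<<< / ======= / >>>>>>>
# markers, then continue.
git add <resolved-files>
git rebase --continue

# Once the rebase finishes cleanly, update the PR branch.
git push --force-with-lease origin latent-space-docs
```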
But since you already rebased without resolving, rebasing again may still leave those unresolved conflict messages, which need to be resolved manually. If you need help with any of this, I can help you over WebEx.
combination and then return a list housing the latent space ICs.
-----------------------------------------------------------------------------------------------
I think this format is not recognized by sphinx and makes the param_grid description hidden on the API website (see this webpage).
Removing line 51, shortening line 53 to the same length as line 52, and writing lines 55-60 as below will resolve the issue:
Arguments
---------
param_grid: :obj:`numpy.ndarray`
    A 2d numpy.ndarray object of shape (number of parameter combinations) x (number of
    parameters).
...
It'd be great if you could also specify the type, but I don't think it's necessary.
I thought I was following numpy style in src/lasdi/timing.py, but it was actually google style. Since your comment is already similar to numpy, we can stick to numpy style. I'll fix src/lasdi/timing.py in the future.
autoencoder: The actual autoencoder object that we use to map the ICs into the latent space.
-----------------------------------------------------------------------------------------------
Same as above; change lines 63-66 as below:
Returns
-------
Z0 : :obj:`numpy.ndarray`
    A list of numpy ndarray objects whose i'th element holds the latent space initial condition
    ...
I also removed the multihead attention activation type (plus the associated apply_attention function) because they did not make sense in this class. I have now documented the MLP class, but need to add documentation to the Autoencoder class.
I added comments + doc strings to the Autoencoder class.
gplasdi had an instance of np.Inf (which is deprecated). I also fixed a typo in latent_spaces.py.
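For reference, the np.Inf fix amounts to a one-line change; a minimal sketch (the variable name is illustrative, not taken from gplasdi.py):

```python
import numpy as np

# np.Inf was a deprecated alias for np.inf and has since been removed in NumPy 2.0;
# np.inf is the supported spelling.
best_loss = np.inf   # previously: best_loss = np.Inf
```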
Force-pushed from cc77c3f to 782f57a.
# -------------------------------------------------------------------------------------------------
# initial_conditions_latent function
In order to be parsed into the API document as a one-line description, this line can be moved to line 43 with an additional empty line, something like below:
def initial_condition_latent(param_grid : np.ndarray,
                             physics : Physics,
                             autoencoder : torch.nn.Module) -> list[np.ndarray]:
    """initial_conditions_latent function

    This function maps a set of initial conditions for the fom to initial conditions for the
    ...
# -------------------------------------------------------------------------------------------------
# MLP class
Similar to initial_condition_latent. Move line 99 below line 102 with a """ ... """ comment, as below:
class MultiLayerPerceptron(torch.nn.Module):
    """ MLP class """

    def __init__( self,
self.fcs += [torch.nn.Linear(layer_sizes[k], layer_sizes[k + 1])]
self.fcs = torch.nn.ModuleList(self.fcs)
self.init_weight()
-------------------------------------------------------------------------------------------
Same as above for the function arguments.
below the threshold.
-------------------------------------------------------------------------------------------
Same as above for returns.
""" | ||
|
||
# Run checks. | ||
# Make sure the reshape index is either 0 (input to 1st layer) or -1 (output of last |
I don't think we need this long an explanation for two lines (172-173). Lines 160-163 simply repeat what the code is doing. I suggest:
If the reshape index and shape are provided for the input (index 0) or output (index -1),
we will either flatten the input ndarray or reshape the output 1d array into the provided ndarray shape.
Either way, the specified ndarray shape (reshape_shape) should match the size of the input/output layer.
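A self-contained sketch of the behavior described above (apply_reshape is a hypothetical helper; the actual code in latent_space.py inlines this logic and may differ in detail):

```python
import torch

def apply_reshape(x: torch.Tensor, reshape_shape: list[int], reshape_index: int) -> torch.Tensor:
    """Flatten (index 0) or un-flatten (index -1) the trailing dimensions of x."""
    k = len(reshape_shape)
    if reshape_index == 0:
        # Input side: the last k dimensions of x must match reshape_shape;
        # flatten them into one dimension so x fits the first layer.
        assert list(x.shape[-k:]) == list(reshape_shape)
        return x.view(list(x.shape[:-k]) + [-1])
    if reshape_index == -1:
        # Output side: split the last dimension of x back into reshape_shape.
        return x.view(list(x.shape[:-1]) + list(reshape_shape))
    return x

# e.g. a batch of 10 frames on a 4 x 5 grid -> (10, 20) for the first layer, and back.
x    = torch.zeros(10, 4, 5)
flat = apply_reshape(x, [4, 5], 0)       # shape (10, 20)
back = apply_reshape(flat, [4, 5], -1)   # shape (10, 4, 5)
```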
-------------------------------------------------------------------------------------------
Arguments
Same as above for arguments.
-------------------------------------------------------------------------------------------
Returns
Same as above for returns.
-------------------------------------------------------------------------------------------
Arguments
Same as above for arguments.
-------------------------------------------------------------------------------------------
Returns
Same as above for returns.
# -------------------------------------------------------------------------------------------------
# Autoencoder class
Same as for the MLP class: move line 305 below line 308 with a """ ... """ comment.
self.use_multihead = False
super(MultiLayerPerceptron, self).__init__()

# Note that layer_sizes specifies the dimensionality of the domains and co-domains of each
This comment describes a member variable, but it is not parsed by the API document. It can be parsed if the comment is written after the code as a """ ... """ comment, as below:
self.n_layers : int = len(layer_sizes) - 1;
"""
Note that layer_sizes specifies the dimensionality of the domains and co-domains of each
layer. Specifically, the i'th element specifies the input dimension of the i'th layer,
...
"""
# while the final element specifies the dimensionality of the co-domain of the final layer.
# Thus, the number of layers is one less than the length of layer_sizes.
self.n_layers : int = len(layer_sizes) - 1;
self.layer_sizes : list[int] = layer_sizes
It'd be great if you could add a description for each class member variable, but it's not a must.
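A sketch of what that could look like, using attribute docstrings that sphinx picks up (the descriptions here are illustrative, not proposed verbatim):

```python
self.n_layers : int = len(layer_sizes) - 1
"""Number of layers; one less than the length of layer_sizes."""

self.layer_sizes : list[int] = layer_sizes
"""The i'th element gives the input dimension of the i'th layer."""
```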
if (self.reshape_index == 0):
    # make sure the input has a proper shape
    # Make sure the input has a proper shape. There is a lot going on in this line; let's
I think most of these comments somewhat repeat what is written earlier in the arguments/returns section. I think we can reduce them to one sentence, as below:
For a k-dimensional self.reshape_shape, make sure the last k dimensions of x have the same shape.
# we use torch.Tensor.view instead of torch.Tensor.reshape,
# in order to avoid data copying.

# Now that we know the final k dimensions of x have the correct shape, let's squeeze
Same as above; how about this sentence?
For a k-dimensional self.reshape_shape, reshape and flatten the last k dimensions of x into a 1d array that fits the first layer.
Note that we use torch.Tensor.view instead of torch.Tensor.reshape in order to avoid data copying.
if (self.reshape_index == -1):
    # we use torch.Tensor.view instead of torch.Tensor.reshape,
    # in order to avoid data copying.
    # In this case, we need to split the last dimension of x, the output of the final
Maybe use keywords only?
Split the last dimension of x, the output of the final layer, to match reshape_shape.
Note that we use torch.Tensor.view instead of torch.Tensor.reshape in order to avoid data copying.
class Autoencoder(torch.nn.Module):
    def __init__(self, physics : Physics, config : dict) -> None:
        """
Currently this is somehow not parsed by the API. I found that we need to turn on an option. We can just leave it as is and wait until I push a new PR for it, but it'd be great if you could add a one-liner at line 28 of docs/source/conf.py:

autoapi_python_class_content = 'both'
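For context, a sketch of where that line sits in a sphinx-autoapi configuration (the surrounding values are illustrative, not copied from this repo's conf.py):

```python
# docs/source/conf.py (sketch)
extensions = [
    'autoapi.extension',
]
autoapi_dirs = ['../../src']            # illustrative source path
autoapi_python_class_content = 'both'   # render both the class and __init__ docstrings
```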
-------------------------------------------------------------------------------------------
Arguments
Same as above for arguments.
physics: A "Physics" object that holds the fom solution frames. We use this object to
determine the shape of each fom solution frame. Recall that each Physics object has a
corresponding PDE. We
Is there a sentence missing here at the end?
-------------------------------------------------------------------------------------------
Returns
Same as above for Returns.
# each point).
self.qgrid_size : list[int] = physics.qgrid_size;

# The product of the elements of qgrid_size is the number of dimensions in each fom
Move these comments after line 364 with a """ ... """ comment so that they can be parsed into the API.
self.decoder = MultiLayerPerceptron(layer_sizes[::-1], act_type,
                                    reshape_index=-1, reshape_shape=self.qgrid_size,
                                    threshold=threshold, value=value, num_heads=num_heads)
# A Physics object's qgrid_size is a list of integers specifying the shape of each frame of
Move these comments after line 364 with a """ ... """ comment so that they can be parsed into the API.
I personally think these belong in the Physics class comments, but we can leave them here. It'd be great if we could refer readers to the Physics class.
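One lightweight way to add that pointer is a sphinx cross-reference in an attribute docstring (the module path for Physics is assumed here, not taken from the repo):

```python
self.qgrid_size : list[int] = physics.qgrid_size
"""
Shape of each fom solution frame; see :class:`lasdi.physics.Physics` for how the
grid is defined.
"""
```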
-------------------------------------------------------------------------------------------
Arguments
Same as above for arguments.
-------------------------------------------------------------------------------------------
Returns
Same as above for returns.
-------------------------------------------------------------------------------------------
Arguments
Same as above.
-------------------------------------------------------------------------------------------
Returns
Same as above.
-------------------------------------------------------------------------------------------
Arguments
Same as above.
-------------------------------------------------------------------------------------------
Returns
Same as above.
@punkduckable, I reviewed all the comments. Most of them look good; they just need some format changes to be parsed into the API.
I thought I commented src/lasdi/timing.py per numpy style, but it was google style. I'll change this later.
I added documentation (comments + doc strings + type hints + formatting) to latent_space.py. The code is functionally almost identical, save for a few minor tweaks; otherwise, all changes are cosmetic (e.g., comments + formatting).
I also fixed a small bug in gplasdi.py: np.Inf is deprecated and no longer exists, so I replaced it with np.inf.