diff --git a/README.md b/README.md
index 3ef6ae9..8b60449 100644
--- a/README.md
+++ b/README.md
@@ -28,12 +28,12 @@ Clone the package using:
 
 > git clone https://github.com/deepskies/AdaptiveMVEforLensModeling
 
-into any directory. No further setup is required once environments are installed.
+into any directory. Then, install the environments.
 
 #### Environments
 
 This works on Linux, but has not been tested on macOS or Windows.
 
-Install the environments in `envs/` using conda with the following command:
+We recommend using conda. Install the environments in `envs/` using conda with the following command:
 
 > conda env create -f training_env.yml
 
@@ -42,6 +42,42 @@
 The `training_env.yml` environment is required for training the PyTorch model, and `deeplenstronomy_env.yml` for simulating strong lensing datasets using `deeplenstronomy`. Note that there is a sky-brightness-related bug in the PyPI 0.0.2.3 version of deeplenstronomy; updating to the latest version is required to reproduce the results.
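The two-environment workflow above can be sketched as the following commands. This is a sketch, not part of the diff: the `envs/` paths come from the README, the `deeplens` activation name is the one the quickstart uses, and the `pip` upgrade addresses the sky-brightness bug noted for the PyPI 0.0.2.3 release.

```shell
# Create both environments from the files in envs/ (paths per the README)
conda env create -f envs/training_env.yml
conda env create -f envs/deeplenstronomy_env.yml

# The PyPI 0.0.2.3 deeplenstronomy release has a sky-brightness bug,
# so upgrade it inside the simulation environment ("deeplens" is the
# environment name the quickstart refers to):
conda activate deeplens
pip install --upgrade deeplenstronomy
```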
+### Repository Structure
+
+AdaptiveMVEforLensModeling/
+│
+├── src/
+│   ├── sim/
+│   │   └── # Contains all information required to generate the dataset
+│   │
+│   ├── data/
+│   │   └── # Data should be stored here after download or generation
+│   │
+│   └── training/
+│       ├── MVEonly/
+│       │   ├── paper_models/
+│       │   │   └── # Final PyTorch models used in the MVEonly model, along with training information
+│       │   │
+│       │   └── # Notebooks and code specific to the MVEonly model
+│       │
+│       ├── MVEUDA/
+│       │   ├── paper_models/
+│       │   │   └── # Final PyTorch models used in the MVEUDA model, along with training information
+│       │   │
+│       │   ├── figures/
+│       │   │   └── # Figures generated for the paper
+│       │   │
+│       │   └── ModelVizPaper.ipynb  # Notebook used to generate the paper figures in MVEUDA/figures/
+│       │
+│       └── # Other notebooks and resources related to training
+│
+├── envs/
+│   └── # Environment-specific files
+│
+└── # Additional files and directories as needed
+
 ### Quickstart
 
 In order to reproduce the results, you will first need to generate or download the datasets. To generate them, activate the `deeplens` environment, navigate to `src/sim/notebooks`, and run `gen_sim.ipynb` to produce a source–target dataset pair in the `src/data` directory; the config files for these datasets are specified in `src/sim/config`. Alternatively, you can download the data from Zenodo here: . Place the folders `mb_paper_source_final` and `mb_paper_target_final` into the `src/sim/data` directory and continue to the next step.
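Before training, it can help to confirm the downloaded folders landed where the quickstart expects them. The folder names below come from the README; the helper itself (`missing_datasets`) and the data-directory argument are illustrative, not part of the repository.

```python
from pathlib import Path

# Dataset folders the quickstart expects (names taken from the README).
EXPECTED = ("mb_paper_source_final", "mb_paper_target_final")

def missing_datasets(data_dir):
    """Return the expected dataset folders not present under data_dir."""
    root = Path(data_dir)
    return [name for name in EXPECTED if not (root / name).is_dir()]

if __name__ == "__main__":
    # Adjust the path to wherever you placed the data (the quickstart
    # mentions src/sim/data; the tree above also shows src/data).
    gaps = missing_datasets("src/sim/data")
    if gaps:
        print("Missing datasets:", ", ".join(gaps))
    else:
        print("All datasets in place.")
```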