Clone the package using:

> git clone https://github.com/deepskies/AdaptiveMVEforLensModeling
into any directory. Then, install the environments.

#### Environments

This setup works on Linux but has not been tested on macOS or Windows. We recommend using conda; install the environments in `envs/` with the following command:

> conda env create -f training_env.yml
The `training_env.yml` environment is required for training the PyTorch model, and `deeplenstronomy_env.yml` for simulating strong-lensing datasets with `deeplenstronomy`. Note that there is a sky-brightness-related bug in the PyPI 0.0.2.3 version of `deeplenstronomy`; updating to the latest version is required to reproduce the results.
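The simulation environment is created the same way. As a sketch of the remaining setup, assuming the environment defined in `deeplenstronomy_env.yml` is named `deeplens` (matching the name used in the Quickstart below) and that the bug fix is published on PyPI:

> conda env create -f deeplenstronomy_env.yml

> conda activate deeplens

> pip install --upgrade deeplenstronomy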


### Repository Structure

```
AdaptiveMVEforLensModeling/
├── src/
│   ├── sim/
│   │   └── # Contains all information required to generate the dataset
│   ├── data/
│   │   └── # Data should be stored here after download or generation
│   └── training/
│       ├── MVEonly/
│       │   ├── paper_models/
│       │   │   └── # Final PyTorch models used in the MVEonly model, along with training information
│       │   └── # Notebooks and code specific to the MVEonly model
│       ├── MVEUDA/
│       │   ├── paper_models/
│       │   │   └── # Final PyTorch models used in the MVEUDA model, along with training information
│       │   ├── figures/
│       │   │   └── # Figures generated for the paper
│       │   └── ModelVizPaper.ipynb  # Notebook used to generate the paper figures in MVEUDA/figures/
│       └── # Other notebooks and resources related to training
├── envs/
│   └── # Environment-specific files
└── # Additional files and directories as needed
```


### Quickstart

To reproduce the results, you will first need to generate or download the datasets. To generate them, navigate to `src/sim/notebooks` and run `gen_sim.ipynb` to produce a source/target dataset pair in the `src/data` directory; the config files for these datasets are in `src/sim/config`. You will need to use the `deeplens` environment to do so (see the sketch below). Alternatively, you can download the data from Zenodo here: . Place the folders `mb_paper_source_final` and `mb_paper_target_final` into the `src/data` directory and continue to the next step.
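As a sketch of the generation step, assuming Jupyter is available inside the `deeplens` environment:

> conda activate deeplens

> jupyter notebook src/sim/notebooks/gen_sim.ipynb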