Commit 121b240

Updated README to near-final version.
ShrihanSolo authored Sep 3, 2024
1 parent 6a6084f commit 121b240
Showing 1 changed file with 4 additions and 5 deletions: README.md

#### Acquiring The Dataset

* __Option A: Generate the Dataset__
    * Navigate to `src/sim/notebooks/`.
    * Generate a source/target data pair in the `src/data/` directory:
        * Run `gen_sim.py` on `src/sim/config/source_config.yaml` and `target_config.yaml` (see the sketch below this option).
    * A source and a target data folder should then be present in `src/data/`.
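
A minimal shell sketch of Option A: a hypothetical invocation, assuming `gen_sim.py` accepts a config path on the command line and that `config/` sits beside `notebooks/` under `src/sim/`; check the script's actual interface before relying on it.

```bash
# Run from the repository root; gen_sim.py's CLI is assumed here and
# may differ (e.g. it might use flags or expect to run from its own dir).
python src/sim/notebooks/gen_sim.py src/sim/config/source_config.yaml
python src/sim/notebooks/gen_sim.py src/sim/config/target_config.yaml

# A source and a target folder should now be present under src/data/.
ls src/data/
```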

* __Option B: Download the Dataset__
    * Zip files of the dataset are available at https://zenodo.org/records/13647416.
    * Download the source and target data and place the folders `mb_paper_source_final` and `mb_paper_target_final` into the `src/data/` directory (see the sketch below).
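
A hedged download sketch for Option B. The archive filenames below are assumptions (guessed from the folder names); verify them on the Zenodo record page before running.

```bash
# Assumed filenames -- confirm at https://zenodo.org/records/13647416.
wget https://zenodo.org/records/13647416/files/mb_paper_source_final.zip
wget https://zenodo.org/records/13647416/files/mb_paper_target_final.zip

# Extract both archives into the directory the training code expects.
unzip mb_paper_source_final.zip -d src/data/
unzip mb_paper_target_final.zip -d src/data/
```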

#### Running Training

* __MVE-Only__
    * Navigate to `src/training/MVEonly/MVE_noDA_RunA.ipynb` (or Run B, C, D, or E).
    * Adjust the filepaths to the dataset if necessary.
    * Activate the `neural` conda environment.
    * Run the notebook to start training (one launch option is sketched below).
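
One way to launch Run A, assuming Jupyter is installed in the `neural` environment (a sketch, not the only workflow):

```bash
conda activate neural

# Open the notebook interactively in the browser...
jupyter notebook src/training/MVEonly/MVE_noDA_RunA.ipynb

# ...or execute it end-to-end headlessly, writing results back in place.
jupyter nbconvert --to notebook --execute --inplace \
    src/training/MVEonly/MVE_noDA_RunA.ipynb
```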

* __MVE-UDA__
    * Follows the same procedure as above, using the notebooks in `src/training/MVEUDA/`.

#### Visualizing Paper Results
* New runs by a user will be stored in the adjacent `models/` directories.

<br>

<div style="display: flex; justify-content: space-between;">
<img src="./src/training/MVEUDA/figures/residual.png" alt="Residual Plot" style="width: 70%;"/>