Create the_last_metric.py #413

Open · wants to merge 4 commits into base: main
Conversation

@xiaolng (Contributor) commented Jun 17, 2024:

An information-based metric comparing the recoverability of redshift information across simulated OpSim runs. It returns a single number as the Figure of Merit for an OpSim run. Reference: A. I. Malz et al., "An information-based metric for observing strategy optimization, demonstrated in the context of photometric redshifts with applications to cosmology", https://arxiv.org/abs/2104.08229

@rhiannonlynne (Member) commented:
Hi @xiaolng my apologies on the delay - this has been a very busy season for our team.

I appreciate the contribution of this metric, but as it sits right now, this isn't the API we would expect for a Metric.
The Metric class itself doesn't usually have anything to do with the opsim database or with reading data from it (as your metric does in the `get_coaddm5` method).

Can we schedule a time to talk about how to reformat this?

When reading through the metric, what it looks like to me (please correct me where I have misinterpreted what's happening) is that the steps are approximately --

  • get the coadded extragalactic m5 (i.e. m5 with extragalactic dust extinction) in all filters -- does this need to be a (single) value averaged over the whole sky, a value per point on the sky, or values for all of the sky at the same time (i.e. the whole healpix array)?
  • run a test_and_train step to set up mock catalogs for testing and training -- are these the same for every simulation, or do they differ from run to run? how large are they? I assume these cover the whole sky, but would it be useful to generate catalogs for each healpixel individually (could you use them per healpixel)?
  • then this catalog is sent to pzflow to calculate some redshift information numbers (a rough sketch of what I'm picturing for that call is below this list) -- again, I assume this is the entire catalog over the whole sky?
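For my own understanding, this is roughly what I picture for that pzflow step -- just a sketch, and the column names, redshift grid, and random placeholder catalogs are my guesses (the real training/test sets would come from your test_and_train step):

```python
import numpy as np
import pandas as pd
from pzflow import Flow

# Placeholder catalogs standing in for the mock training/test sets; the real
# ones would be degraded to the per-healpixel coadded depths of the opsim run.
columns = ("redshift", "u", "g", "r", "i", "z", "y")
rng = np.random.default_rng(42)
train_df = pd.DataFrame(rng.random((1000, len(columns))), columns=columns)
test_df = pd.DataFrame(rng.random((100, len(columns))), columns=columns)

# Train a normalizing flow over (redshift, photometry) ...
flow = Flow(data_columns=columns)
losses = flow.train(train_df, epochs=50, verbose=True)

# ... then evaluate per-object redshift posteriors on a grid; the
# information-based figure of merit would be computed from these PDFs.
grid = np.linspace(0.0, 3.0, 301)
pdfs = flow.posterior(test_df, column="redshift", grid=grid)
```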

If so, I can see why it is difficult to squish this into our framework ..
you need to calculate something on a healpix basis over the sky (the exgal coadd m5), then calculate another thing over the whole sky that needs all of the information from those previous per-healpix metric outputs (generating the catalogs), and then use those catalogs in one further step to generate a final set of (scalar?) values.
Does this seem right?
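For context, the first stage on its own maps cleanly onto the framework -- something like the following, assuming the current snake_case rubin_sim API (the opsim database filename is a placeholder):

```python
import rubin_sim.maf as maf

# Coadded extragalactic m5 per healpixel, per filter -- the standard way to
# express stage 1 in the framework.
nside = 64
bundles = {}
for f in "ugrizy":
    bundles[f] = maf.MetricBundle(
        maf.ExgalM5(),                      # m5 with extragalactic dust extinction
        maf.HealpixSlicer(nside=nside),
        constraint=f"filter = '{f}'",
    )

group = maf.MetricBundleGroup(bundles, "opsim_run.db", out_dir="temp")
group.run_all()

# bundles[f].metric_values is then a masked array over healpixels, which is
# what the catalog-generation step would consume.
```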
I'd like a little more information about

  • how big are the catalogs that are generated .. do they need to be written to disk? do they need to be saved after running the metric?
  • confirm what the final metric outputs look like
  • how long do you expect this to require to run?
  • you can run the coadd m5 metric without having to read the opsdb from disk (I can show you how) -- this means that if you run the "LastMetric" with a unislicer, you already have all of the opsim information available and can get the coadd m5 metric results back without reading from or writing to disk; a rough sketch of that pattern is below this list
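To make that last point concrete, here is the rough shape I have in mind. It's only a sketch: the coadd is done by hand so the example stays self-contained (the real metric would call ExgalM5 for the dust-corrected depth), and the usual opsim column names are assumed.

```python
import healpy as hp
import numpy as np
import rubin_sim.maf as maf


class LastMetricSketch(maf.BaseMetric):
    """Run under a UniSlicer, so data_slice already holds every visit and the
    per-healpixel work happens in memory -- no re-reading the opsim database,
    no catalogs written to disk."""

    def __init__(self, nside=64, m5_col="fiveSigmaDepth", filter_col="filter",
                 ra_col="fieldRA", dec_col="fieldDec", **kwargs):
        self.nside = nside
        self.m5_col = m5_col
        self.filter_col = filter_col
        self.ra_col = ra_col
        self.dec_col = dec_col
        super().__init__(col=[m5_col, filter_col, ra_col, dec_col], **kwargs)

    def run(self, data_slice, slice_point=None):
        # Hand-rolled coadded m5 per healpixel and filter; the real metric
        # would call maf.ExgalM5 here so the dust correction is consistent.
        pix = hp.ang2pix(self.nside, data_slice[self.ra_col],
                         data_slice[self.dec_col], lonlat=True)
        coadd = {}
        for f in np.unique(data_slice[self.filter_col]):
            in_f = data_slice[self.filter_col] == f
            depth = np.full(hp.nside2npix(self.nside), np.nan)
            for p in np.unique(pix[in_f]):
                m5 = data_slice[self.m5_col][in_f & (pix == p)]
                depth[p] = 1.25 * np.log10(np.sum(10.0 ** (0.8 * m5)))
            coadd[f] = depth
        # ... generate the mock test/training catalogs from `coadd`, run the
        # pzflow step, and reduce to a single figure of merit here ...
        return float(np.nanmedian(coadd["i"]))  # placeholder scalar
```

Run with `maf.UniSlicer()` and an empty constraint, and every visit in the run is handed to `run()` at once, so nothing needs to be written to or re-read from disk.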

@rhiannonlynne (Member) left a review comment:


See comments above.
One final question though -- does this need to run on a machine with a GPU?

@rhiannonlynne (Member) commented:

Are you still working on this PR? I would like it if the metric did not write the sub-selected stellar catalog to disk (and then read it back in during the "run" method).
