
Spherical average doesn't interact well with baselines that have different times #399

Open
jsdillon opened this issue Mar 13, 2024 · 0 comments

After I've done incoherent time averaging on a UVPSpec object, my blts axis has the same length as the number of baselines (each of which has one "time"). However, if those times are not all identical, spherical averaging doesn't combine them correctly. For example, if I do this:

import copy
import numpy as np
from tqdm import tqdm
import hera_pspec as hp

uvp_tavg = copy.deepcopy(uvp_tavg_1_to_5)
sph_avgs = []
for spw in tqdm(uvp_tavg.spw_array):
    dk_multiplier = 2.0  # size of each spherical k bin (Delta k) in units of the natural k_para spacing
    k_start_multiplier = .75  # center of the first spherical k bin in units of Delta k
    dk = dk_multiplier * np.median(np.diff(uvp_tavg.get_kparas(spw)))
    kbins = np.arange(k_start_multiplier * dk, 2.5, dk)  # evenly spaced k bins
    uvp_tavg_this_spw = uvp_tavg.select(spws=[spw], inplace=False)
    sph_avgs.append(hp.grouping.spherical_average(uvp_tavg_this_spw, kbins, dk, error_weights='P_N'))
    print(uvp_tavg.data_array[spw].shape, sph_avgs[-1].data_array[0].shape)

I get

(801, 99, 2) (30, 45, 2)
(801, 91, 2) (30, 38, 2)
(801, 105, 2) (30, 40, 2)
(801, 135, 2) (30, 43, 2)
(801, 89, 2) (30, 27, 2)
(801, 81, 2) (30, 23, 2)
(801, 87, 2) (30, 24, 2)
(801, 130, 2) (30, 34, 2)
(801, 114, 2) (30, 29, 2)
(801, 139, 2) (30, 34, 2)
(801, 116, 2) (30, 27, 2)
(801, 61, 2) (30, 14, 2)

The input number of blts is the number of unique baselines (801), but the output doesn't reduce to 1 blt... it reduces to 30.

Probably related: np.unique(uvp_tavg_1_to_5.time_1_array).shape == (15,).
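
To check this, a quick diagnostic sketch (the attribute names are the standard UVPSpec time arrays; the expectation that the output blt count matches the number of unique times is my inference, not something I've confirmed in the code):

import numpy as np

# Count the distinct times left after the incoherent time average; the
# hypothesis is that spherical_average keeps one output blt per unique
# time instead of collapsing them all into a single blt.
print(np.unique(uvp_tavg.time_1_array).size)    # 15, per the above
print(np.unique(uvp_tavg.time_2_array).size)
print(np.unique(uvp_tavg.time_avg_array).size)  # presumably 30, matching the output blts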

However, if I force all the integrations to have the same time in a rather hacky way by doing the following:

# Hack: overwrite every time and LST array with a single common value
uvp_tavg.time_avg_array[:] = np.median(uvp_tavg.time_avg_array)
uvp_tavg.time_1_array[:] = np.median(uvp_tavg.time_avg_array)
uvp_tavg.time_2_array[:] = np.median(uvp_tavg.time_avg_array)
uvp_tavg.lst_avg_array[:] = np.median(uvp_tavg.lst_avg_array)
uvp_tavg.lst_1_array[:] = np.median(uvp_tavg.lst_avg_array)
uvp_tavg.lst_2_array[:] = np.median(uvp_tavg.lst_avg_array)
# Re-do the incoherent time average now that all blts share one time
uvp_tavg.average_spectra(time_avg=True, error_weights='P_N', error_field=['P_SN'], inplace=True)

and then rerun the spherical averaging loop above, I get

(801, 99, 2) (1, 45, 2)
(801, 91, 2) (1, 38, 2)
(801, 105, 2) (1, 40, 2)
(801, 135, 2) (1, 43, 2)
(801, 89, 2) (1, 27, 2)
(801, 81, 2) (1, 23, 2)
(801, 87, 2) (1, 24, 2)
(801, 130, 2) (1, 34, 2)
(801, 114, 2) (1, 29, 2)
(801, 139, 2) (1, 34, 2)
(801, 116, 2) (1, 27, 2)
(801, 61, 2) (1, 14, 2)

which is what I expect. It also runs much faster.

There should probably be an option on hp.grouping.spherical_average to handle this case, even if it's not the default behavior. Some sort of assume_time_averaged=True...
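
In the meantime, here's a minimal sketch of what that could look like as a user-side helper. The function name is hypothetical and not part of hera_pspec; it just packages the hacky workaround above, and it's only sensible if the input has already been time-averaged:

import copy
import numpy as np
import hera_pspec as hp

def spherical_average_assume_tavg(uvp, kbins, dk, **kwargs):
    """Hypothetical helper mimicking an assume_time_averaged=True option:
    overwrite all time/LST arrays with a single common value so that
    hp.grouping.spherical_average collapses everything into one blt.
    Only sensible if uvp has already been incoherently time-averaged."""
    uvp = copy.deepcopy(uvp)
    for attr in ['time_avg_array', 'time_1_array', 'time_2_array']:
        getattr(uvp, attr)[:] = np.median(uvp.time_avg_array)
    for attr in ['lst_avg_array', 'lst_1_array', 'lst_2_array']:
        getattr(uvp, attr)[:] = np.median(uvp.lst_avg_array)
    return hp.grouping.spherical_average(uvp, kbins, dk, **kwargs)

Dropping that into the loop above in place of hp.grouping.spherical_average should then give one output blt per spw.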
