Bug fix/storm speed #104
Changes from all commits: 5e7b4ca, d43edea, 90679be, de477ed, 90a1eaa, d67e3d5, 0f93840
```diff
@@ -1072,40 +1072,60 @@
     for advisory in pandas.unique(data["advisory"]):
         advisory_data = data.loc[data["advisory"] == advisory]

-        indices = numpy.array(
-            [
-                numpy.where(advisory_data["datetime"] == unique_datetime)[0][0]
-                for unique_datetime in pandas.unique(advisory_data["datetime"])
-            ]
-        )
+        indices = advisory_data.index
         shifted_indices = numpy.roll(indices, 1)
-        shifted_indices[0] = 0
-
-        indices = advisory_data.index[indices]
-        shifted_indices = advisory_data.index[shifted_indices]
+        shifted_indices[0] = indices[0]
+
+        # check for negative time shifts which indicate new forecasts
+        # and update this with the last previously available time
+        for counter, ind in enumerate(zip(indices, shifted_indices)):
+            this_time = advisory_data.loc[ind[0], "datetime"]
+            shift_time = advisory_data.loc[ind[1], "datetime"]
+            if shift_time > this_time:
+                # update shift index
+                if (advisory_data["datetime"] < this_time).sum() == 0:
+                    shifted_indices[counter] = advisory_data["datetime"][
+                        advisory_data["datetime"] > this_time
+                    ].index[0]
+                else:
+                    shifted_indices[counter] = advisory_data["datetime"][
+                        advisory_data["datetime"] < this_time
+                    ].index[-1]

-        _, inverse_azimuths, distances = geodetic.inv(
+        forward_azimuths, inverse_azimuths, distances = geodetic.inv(
             advisory_data.loc[indices, "longitude"],
             advisory_data.loc[indices, "latitude"],
             advisory_data.loc[shifted_indices, "longitude"],
             advisory_data.loc[shifted_indices, "latitude"],
         )

-        intervals = advisory_data.loc[indices, "datetime"].diff()
-        speeds = distances / (intervals / pandas.to_timedelta(1, "s"))
-        bearings = pandas.Series(inverse_azimuths % 360, index=speeds.index)
-
-        for index in indices:
-            cluster_index = (
-                advisory_data["datetime"] == advisory_data.loc[index, "datetime"]
-            )
-            advisory_data.loc[cluster_index, "speed"] = speeds[index]
-            advisory_data.loc[cluster_index, "direction"] = bearings[index]
+        intervals = (
+            (
+                advisory_data.loc[indices, "datetime"].values
+                - advisory_data.loc[shifted_indices, "datetime"].values
+            )
+            .astype("timedelta64[s]")
+            .astype(float)
+        )
+        speeds = pandas.Series(distances / abs(intervals), index=indices)
+        bearings = pandas.Series(inverse_azimuths % 360, index=indices)
+        # use forward azimuths for negative intervals
+        bearings[intervals < 0] = pandas.Series(
+            forward_azimuths[intervals < 0] % 360, index=indices[intervals < 0]
+        )
+        bearings[pandas.isna(speeds)] = numpy.nan
+        # fill in nans carrying forward, because it is same valid time
+        # and forecast but different isotach.
+        # then fill nans backwards to handle the first time
+        speeds.ffill(inplace=True)
+        bearings.ffill(inplace=True)
```
Comment on lines +1120 to +1121:

Comment: @WPringle where would we get nan to carry forward? We already are getting all indices; we shouldn't run into a case of nan that needs to be filled forward for different isotachs. If possible, can you share an example where this happens?

Comment: Does …

Reply: For the same forecast time but different isotachs the result will be nan because the time difference is zero. So we just want to repeat the result from the first line of the same forecast time.
```diff
+        speeds.bfill(inplace=True)
+        bearings.bfill(inplace=True)
+        advisory_data["speed"] = speeds
+        advisory_data["direction"] = bearings

         data.loc[data["advisory"] == advisory] = advisory_data

     data.loc[pandas.isna(data["speed"]), "speed"] = 0

     return data

     @property
```
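The new code drops the per-unique-datetime lookup and instead pairs every row with its predecessor by rolling the whole index. A small sketch of that pairing, using hypothetical index values:

```python
import numpy

# Hypothetical DataFrame index for one advisory; each row must be paired
# with the previous row to obtain a travel distance and a time interval.
indices = numpy.array([10, 11, 12, 13])

shifted_indices = numpy.roll(indices, 1)  # [13, 10, 11, 12]: roll wraps around
shifted_indices[0] = indices[0]           # so pin the first row to itself

pairs = list(zip(indices, shifted_indices))
# each pair is (current row, previous row); the first row pairs with itself
```

The wrap-around fix mirrors the diff's `shifted_indices[0] = indices[0]`, which replaced the old `shifted_indices[0] = 0`.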
```diff
@@ -1352,7 +1372,21 @@
             or forecast.loc[valid_index, "forecast_hours"].iloc[0] == 0
         ):
             continue
-        forecast.loc[valid_index, "radius_of_maximum_winds"] = rmw
+        # make sure rolling rmw is not larger than the maximum radii of the strongest isotach
+        # this problem usually comes from the rolling average
+        max_isotach_radii = isotach_radii.loc[valid_index].iloc[-1].max()
+        if rmw < max_isotach_radii or numpy.isnan(max_isotach_radii):
+            forecast.loc[valid_index, "radius_of_maximum_winds"] = rmw
+        # in case it does not come from rolling average just set to be Vr/Vmax ratio of max_isotach_radii
+        if (
+            forecast.loc[valid_index, "radius_of_maximum_winds"].iloc[-1]
+            > max_isotach_radii
+        ):
+            forecast.loc[valid_index, "radius_of_maximum_winds"] = (
+                max_isotach_radii
+                * forecast.loc[valid_index, "isotach_radius"].iloc[-1]
+                / forecast.loc[valid_index, "max_sustained_wind_speed"].iloc[-1]
+            )

     # fill OFCL background pressure with the first entry from 0-hr CARQ background pressure (at sea level)
     forecast.loc[radp_missing, "background_pressure"] = carq_ref[
```
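The clamp added in this hunk can be condensed into a standalone function. This is a hypothetical simplification for illustration (the diff applies the same logic through DataFrame indexing, and additionally re-checks any previously stored value):

```python
import math

def clamp_rmw(rmw, max_isotach_radii, isotach_radius, max_sustained_wind_speed):
    """Hypothetical condensed form of the RMW clamp in the diff."""
    # Keep the rolling-average RMW when it lies inside the strongest
    # isotach's maximum radius, or when that radius is unavailable ...
    if rmw < max_isotach_radii or math.isnan(max_isotach_radii):
        return rmw
    # ... otherwise rescale that radius by the Vr/Vmax ratio.
    return max_isotach_radii * isotach_radius / max_sustained_wind_speed

clamp_rmw(20.0, 25.0, 64.0, 100.0)  # RMW within bound: kept at 20.0
clamp_rmw(30.0, 25.0, 64.0, 100.0)  # too large: rescaled to 25 * 64/100 = 16.0
```

Units and magnitudes here are illustrative only; the real values come from the forecast DataFrame columns named in the diff.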
Comment: @WPringle can you please explain what this does? It says that if there's no time in the dataframe smaller than the current row we're processing, then the shifted time ("previous time") should be the first time that is larger than the current time. Why?! This gives us the opposite bearing, right?

Reply: That's a good point about the bearing! This could be a bug that needs fixing.

Comment: It seems all the tests are passing. The coverage test runs into a timeout issue. Do you want me to go ahead with this merge and then later explore the bearing issue in a separate ticket, or do you want me to wait for you to test the potential bearing issue?

Reply: Thanks for this! Please wait until I double-check everything. I will let you know once I'm happy.

Reply: OK, for the bearing: when the shifted time is larger than the current time, we can use the `forward_azimuth` instead of the `inverse_azimuth`. The change has been added in the latest commit. Note this is overall a rare case.
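The actual code obtains both azimuths from pyproj's `geodetic.inv`; to illustrate why the endpoint order matters, here is a stdlib-only sketch using the spherical initial-bearing formula (a stand-in for the geodesic azimuths, with hypothetical coordinates):

```python
import math

def initial_bearing(lon1, lat1, lon2, lat2):
    """Great-circle initial bearing in degrees [0, 360), measured at the
    first point; a spherical stand-in for pyproj's geodetic azimuths."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlam = math.radians(lon2 - lon1)
    x = math.sin(dlam) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlam)
    return math.degrees(math.atan2(x, y)) % 360

# hypothetical storm fixes: motion from a (earlier) to b (later)
a = (-80.0, 25.0)
b = (-79.0, 26.0)

towards = initial_bearing(*a, *b)  # heading of motion (northeast here)
swapped = initial_bearing(*b, *a)  # endpoints reversed: off by ~180 degrees
```

When the "previous" point is actually a later fix (a negative interval), the pair of points is effectively swapped, so the bearing flips by roughly 180 degrees; switching to the forward azimuth in that rare case restores the true direction of motion.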