Is your feature request related to a problem? Please describe.
Currently, the parameters are not listed in any particular order (alphabetical?).
Below, I propose that we compute a score per parameter and rank them.
Describe the solution you'd like
I do this in the "Transform dataset" operator below, using the simulation result `d1` and the outcome of interest `cases_observable_state`:
```python
# Parameter sensitivity score
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import LinearNDInterpolator

# Get relevant values from the simulation result (d1 is the dataset passed to the operator)
n = d1["sample_id"].max()
b = np.array([d1[d1["sample_id"] == i]["persistent_b_param"].iloc[0] for i in range(n)])
rHR = np.array([d1[d1["sample_id"] == i]["persistent_rHR_param"].iloc[0] for i in range(n)])
outcomes = np.array([d1[d1["sample_id"] == i]["cases_observable_state"].max() for i in range(n)])

# Interpolate to a uniform grid
m = 100
x = np.linspace(b.min(), b.max(), m)
y = np.linspace(rHR.min(), rHR.max(), m)
xx, yy = np.meshgrid(x, y)
xy = np.c_[b, rHR]
lut2 = LinearNDInterpolator(xy, outcomes)
zz = lut2(xx, yy)

# Derivative of the "outcome of interest" with respect to each parameter
gx_all = []
for i in range(zz.shape[0]):
    gx = np.gradient(zz[i, :], 1.0 / m)
    gx_all.extend(list(gx))
gy_all = []
for j in range(zz.shape[1]):
    gy = np.gradient(zz[:, j], 1.0 / m)
    gy_all.extend(list(gy))

# Take absolute value so the scores are positive and can be sorted
gx_all = np.abs(gx_all)
gy_all = np.abs(gy_all)

# Score = median of the derivatives
sensitivity_score_b = np.nanmedian(gx_all)
sensitivity_score_rHR = np.nanmedian(gy_all)

# Uncertainty
# sensitivity_score_b_unc = np.nanstd(gx_all) / np.sqrt(len(gx_all))
# sensitivity_score_rHR_unc = np.nanstd(gy_all) / np.sqrt(len(gy_all))
sensitivity_score_b_unc = np.nanstd(gx_all)
sensitivity_score_rHR_unc = np.nanstd(gy_all)

# Bar chart of the scores with error bars
fig, ax = plt.subplots(1, 1, figsize=(8, 6))
__ = ax.plot([0.5, 2.5], [0, 0], color="k", linewidth=0.5)
__ = ax.bar([1, 2], [sensitivity_score_b, sensitivity_score_rHR], width=0.95, align="center")
__ = ax.errorbar(
    x=[1, 2],
    y=[sensitivity_score_b, sensitivity_score_rHR],
    yerr=[0.5 * sensitivity_score_b_unc, 0.5 * sensitivity_score_rHR_unc],
    color="k",
    capsize=10,
)
__ = plt.setp(ax, xlabel="Model Parameters", ylabel="Gradient-based sensitivity score", xlim=[0.5, 2.5])
__ = ax.set_xticks([1, 2], labels=["b", "rHR"])
```
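For the ranking itself, here is a minimal sketch (not part of the operator above, just my assumption of how the scores would be consumed) that collects the per-parameter scores and sorts them in descending order:

```python
# Sketch of the ranking step; the dictionary keys are the two parameter
# column names from d1, and the sort order is an assumption on my part.
scores = {
    "persistent_b_param": sensitivity_score_b,
    "persistent_rHR_param": sensitivity_score_rHR,
}

# Parameter names ordered from most to least sensitive.
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)
```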
I updated the above with `np.abs` to enforce positive values for the scores so the sorting works.
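To illustrate why the absolute value matters for the sorting (toy numbers, purely hypothetical): without it, a steeply decreasing outcome would give a large negative median gradient and rank below an insensitive parameter.

```python
# Hypothetical raw median gradients: b strongly decreasing, rHR weakly increasing.
raw = {"b": -5.0, "rHR": 0.2}

# Sorting the raw values would wrongly rank rHR above b...
print(sorted(raw, key=raw.get, reverse=True))                 # ['rHR', 'b']

# ...whereas sorting by magnitude (the np.abs step) ranks b first.
print(sorted(raw, key=lambda k: abs(raw[k]), reverse=True))   # ['b', 'rHR']
```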