feat: make synthetic runners use dataframes and rename inputs so stat… #10
Conversation
also resolves AutoResearch/autora#561
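For context, here is a minimal sketch of what a dataframe-based synthetic runner could look like after this change. The function name `run`, the condition column names `S0`/`S1`, the Weber-Fechner-style formula, and the `observation_noise`/`random_state` parameters are illustrative assumptions, not the package's confirmed interface.

```python
# Illustrative sketch only -- not the actual autora-synthetic code.
# Shows a runner that takes conditions as a pandas DataFrame and returns
# the same conditions with an observation column appended.
from typing import Optional

import numpy as np
import pandas as pd


def run(
    conditions: pd.DataFrame,
    observation_noise: float = 0.01,
    random_state: Optional[int] = None,
) -> pd.DataFrame:
    """Hypothetical Weber-Fechner-style runner: y = log(S1 / S0) + noise."""
    rng = np.random.default_rng(random_state)
    experiment_data = conditions.copy()
    experiment_data["difference_detected"] = np.log(
        experiment_data["S1"] / experiment_data["S0"]
    ) + rng.normal(0.0, observation_noise, size=len(experiment_data))
    return experiment_data


# Example usage with two stimulus-intensity conditions:
conditions = pd.DataFrame({"S0": [1.0, 2.0], "S1": [2.0, 4.0]})
print(run(conditions, random_state=42))
```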
src/autora/experiment_runner/synthetic/psychophysics/weber_fechner_law.py
…hner_law.py Co-authored-by: benwandrew <[email protected]>
Looks great; just a minor renaming suggestion for the noise in some of the models.
@@ -117,8 +119,8 @@ def experiment_runner(X: np.ndarray, added_noise_=added_noise):
        probability_a = x[1]
        probability_b = x[3]

-       expected_value_A = value_A * probability_a + rng.normal(0, added_noise_)
-       expected_value_B = value_B * probability_b + rng.normal(0, added_noise_)
+       expected_value_A = value_A * probability_a + rng.normal(0, observation_noise)
I would call this value_noise, which is specific to expected utility theory, instead of observation_noise.
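To make the suggestion concrete, here is a sketch of how the renamed parameter could read in the expected-value computation. The helper function, its signature, and the default noise level are hypothetical; only the `value_noise` name and the expected-value formula come from the diff and comment above.

```python
import numpy as np

rng = np.random.default_rng(42)  # arbitrary seed for the example


def noisy_expected_values(value_A, value_B, probability_a, probability_b,
                          value_noise=0.01):
    """Expected utilities with noise added to the values, hence `value_noise`."""
    expected_value_A = value_A * probability_a + rng.normal(0, value_noise)
    expected_value_B = value_B * probability_b + rng.normal(0, value_noise)
    return expected_value_A, expected_value_B
```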
@@ -113,8 +118,8 @@ def experiment_runner(X: np.ndarray, added_noise_=added_noise):
            x[3] ** coefficient + (1 - x[3]) ** coefficient
        ) ** (1 / coefficient)

-       expected_value_A = value_A * probability_a + rng.normal(0, added_noise_)
-       expected_value_B = value_B * probability_b + rng.normal(0, added_noise_)
+       expected_value_A = value_A * probability_a + rng.normal(0, observation_noise)
Same here, I would call it value_noise instead of observation_noise.
    Y = np.zeros((X.shape[0], 1))
    for idx, x in enumerate(X):
        similarity_A1 = x[0]
        similarity_A2 = x[1]
        similarity_B1 = x[2]
        similarity_B2 = x[3]

-       y = (similarity_A1 * focus + np.random.normal(0, added_noise_)) / (
+       y = (similarity_A1 * focus + rng.normal(0, observation_noise)) / (
I think here it is somewhat fine because we add it at the end and then normalize. You might still want to call it decision_noise because it is applied before the division.
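A sketch of the suggested `decision_noise` variant for this choice-ratio model. The helper function is hypothetical, and the denominator shown is a stand-in, since the actual normalization term is truncated in the diff above.

```python
import numpy as np

rng = np.random.default_rng(0)  # arbitrary seed for the example


def choice_probability(similarity_A1, similarity_B1, focus, decision_noise=0.01):
    """Noise is injected before the normalization, hence `decision_noise`."""
    numerator = similarity_A1 * focus + rng.normal(0, decision_noise)
    # Stand-in denominator: the real model's normalization term is cut off
    # in the diff above.
    denominator = similarity_A1 * focus + similarity_B1 * (1 - focus)
    return numerator / denominator
```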
…xperiment_runner to run
…ate' of https://github.com/AutoResearch/autora-synthetic into 9-chore-rename-input-arguments-of-runners-to-us-with-state
…me experiment_runner to run in new synthetic models/fixed wrong input statements, added new models to test
Looks great!
…e logic works