Fix bug in classification of transformers #972
Conversation
axelrod/strategy_transformers.py (Outdated)

```python
if sum(probability) > 1:
    probability = tuple([i / sum(probability) for i in probability])

if probability[0] in [0, 1]:
```
will this fail on probability=(0.0, 0.5) or the very useless (0.0, 0.0)?
I don't think so because of the normalisation step that happens before. I'm fixing some remaining failures and will take a look at the (0, 0) case! 👍
You were right, that did fall over on those cases :) In 149e2e6 I've added more tests and a fix.
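The fix under discussion can be sketched as follows. This is a hypothetical, simplified version of the logic (not the actual code from commit 149e2e6): the point is that checking `probability[0] in [0, 1]` misclassifies inputs such as `(0.0, 0.5)`, so the test should look at every entry after normalisation.

```python
def is_stochastic(probability):
    """Sketch: decide whether mixing with these probabilities is stochastic.

    Guards against the "very useless" all-zero tuple, normalises when the
    probabilities sum to more than 1, and then checks every entry rather
    than only the first.
    """
    total = sum(probability)
    if total == 0:
        # (0.0, 0.0): no mixing ever happens, so behaviour is deterministic.
        return False
    if total > 1:
        probability = tuple(p / total for p in probability)
    # Deterministic only when every probability is exactly 0 or 1.
    return any(0 < p < 1 for p in probability)
```

With this version `(0.0, 0.5)` is correctly reported as stochastic and `(0, 0, 1)` as deterministic.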
- Final transformer reclassification was not tested (or working).
- Random transformers did not update the stochastic classification.
Implement a reclassifier option that can be passed to the Transformer factory. This allows for smart reclassification of strategies based on the input parameters. Also added a test to ensure that the original class is not touched.
This was an early attempt and in essence doesn't do anything.
Do we have a test that catches the original issue with cloning these players?
@@ -349,6 +420,7 @@ def test_mixed(self):

```python
probability = (0, 0, 1)  # Note can also pass tuple
strategies = [axelrod.TitForTat, axelrod.Grudger, axelrod.Defector]
MD = MixedTransformer(probability, strategies)(axelrod.Cooperator)
self.assertFalse(MD.classifier["stochastic"])

p1 = MD()
# Against a cooperator we see that we only cooperate
```
I've been reading through this to try and wrap my head around decorators, which drive me up the wall. It has helped, but it's also hard reading. This comment that sneaked in from a copy-paste made me laugh.
Removing that now :)
> which drive me up the wall.

Tell me about it! There be dragons... :'(
I've incorporated a number of tests that check that players are now correctly classified. This was the original issue: stochastic players were not being recognised as such, and so they were ending up in the cache.

EDIT: My first guess was cloning, but as I commented later on in #971 it's nothing to do with cloning. It was just that we weren't classifying things properly. I haven't specifically incorporated a test of the exact symptom, but the new tests for the fingerprint (showing that the previous values weren't right) show that this fixes #971.
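Why misclassification matters for the cache can be sketched as follows. This is a hypothetical illustration (the names and structure are illustrative, not the axelrod API): cached match results may only be reused when both players are deterministic, so a stochastic player misclassified as deterministic poisons the cache with one arbitrary realisation.

```python
cache = {}

def play_match(p1, p2, rounds, play):
    """Run (or reuse) a match between two player descriptions.

    `p1`/`p2` are dicts with "name" and "stochastic" keys; `play` is a
    callable that actually runs the match.
    """
    key = (p1["name"], p2["name"], rounds)
    deterministic = not (p1["stochastic"] or p2["stochastic"])
    if deterministic and key in cache:
        return cache[key]          # safe: same inputs give the same outcome
    result = play(p1, p2, rounds)  # actually run the match
    if deterministic:
        cache[key] = result        # only cache repeatable outcomes
    return result
```

A player wrongly flagged as non-stochastic would take the cached path above, replaying one frozen realisation of its random behaviour in every tournament repetition.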
Very minor nitpicking stuff from me
axelrod/strategy_transformers.py (Outdated)

@@ -37,6 +39,8 @@ def __call__(self, player, opponent, action)

```python
    Any keyword arguments to pass to the wrapper
name_prefix: string, "Transformed "
    A string to prepend to the strategy and class name
reclassifier: function,
    A function to use to update the class's classifier.
```
Minor quibbles:
- Somebody new to this might reasonably ask "What class?"
- "to use" is redundant
How about: "A function which will update the classifier of the strategy being transformed"?
axelrod/strategy_transformers.py (Outdated)

@@ -229,9 +242,14 @@ def noisy_wrapper(player, opponent, action, noise=0.05):

```python
        return flip_action(action)
    return action

def noisy_reclassifier(original_classifier, noise):
    """Function to reclassify the strategy"""
    if noise not in [0, 1]:
```
Really nitpicky comment: I'd use a tuple here as there is no need for a list
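The suggested change is a one-character swap in the membership test. A sketch of the reclassifier from the diff above with the tuple applied (the surrounding logic is as shown in the hunk; only the literal changes):

```python
def noisy_reclassifier(original_classifier, noise):
    """Reclassify the strategy: with noise strictly between 0 and 1,
    every action may be flipped, so the strategy becomes stochastic.
    A tuple literal is used for the membership test, as suggested."""
    if noise not in (0, 1):
        original_classifier["stochastic"] = True
    return original_classifier
```

With `noise` of exactly 0 (never flip) or 1 (always flip) the wrapper is deterministic, so the classifier is left alone.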
axelrod/strategy_transformers.py (Outdated)

@@ -241,9 +259,15 @@ def forgiver_wrapper(player, opponent, action, p):

```python
        return random_choice(p)
    return C

def forgiver_reclassifier(original_classifier, p):
    """Function to reclassify the strategy"""
    if p not in [0, 1]:
```
same nitpicky tuple vs list comment as previous
- Better docstring description.
- Use tuples instead of lists.
Have addressed both of those @meatballs. I still often slip into the bad habit of using lists when I shouldn't... 👍
If only that were my only bad habit.....
Just a question on memory_depth for the transformers
@@ -294,9 +318,15 @@ def final_sequence(player, opponent, action, seq):

```python
        return seq[-index]
    return action

def final_reclassifier(original_classifier, seq):
```
We probably need to update the memory depth as well (also for initial transformer)
Sure, doing that now.

Also added a small extension of a test that ensures strategy transformers are completely covered by their unit tests (I assume this was being covered by another test somewhere else).

I've added reclassification of the memory depth too. I think this should go in now under the bug fix rule. If there's anything remaining we can pick it up as other issues (I don't think there is, but there might be other reclassifiers that need to be added; the main bug was being caused by the stochastic ones).
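A sketch of what the memory-depth reclassification for the Final transformer might look like (assumed for illustration, not necessarily the committed code): playing a fixed sequence of length `len(seq)` at the end of a match means the transformed strategy's memory depth can be no smaller than that sequence.

```python
def final_reclassifier(original_classifier, seq):
    """Sketch: widen the memory depth to cover the final sequence."""
    original_classifier["memory_depth"] = max(
        len(seq), original_classifier["memory_depth"]
    )
    return original_classifier
```

A strategy whose own memory depth already exceeds the sequence length is left unchanged.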
Our transformers were not reclassifying the strategies (mostly we weren't trying; in one case we were, but it was not tested and not working). See #971.

This implements the ability to pass a `reclassifier` function to the strategy transformer factory so that we can reclassify based on input parameters (for example, if using the Noisy transformer but passing noise of 0 or 1, then the classifier does not change).

I've also written such reclassifiers for all relevant transformers. EDIT: technically the Mixed strategy transformer isn't as smart as it could be, but I suggest this gets merged under the bug fix rule and an issue gets opened for that transformer. (I've set it to blanketly say that it makes strategies stochastic, which is what we want for any sensible use case.) I've done that one too now.

Closes #971
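The factory pattern described above can be sketched as follows. This is a hypothetical, heavily simplified illustration (the real StrategyTransformerFactory builds a class decorator and also takes a play-wrapping function; names here are illustrative and the wrapper logic is omitted):

```python
def make_transformer(wrapper, reclassifier=None):
    """Return a transformer: called with parameters, it yields a decorator
    that copies a strategy-like dict and, if a reclassifier was supplied,
    updates the copy's classifier from those parameters."""
    def transformer(*args):
        def decorate(strategy):
            transformed = dict(strategy)
            # Copy before reclassifying so the original class is not touched.
            transformed["classifier"] = dict(strategy["classifier"])
            if reclassifier is not None:
                transformed["classifier"] = reclassifier(
                    transformed["classifier"], *args
                )
            return transformed
        return decorate
    return transformer
```

The key design point mirrored here is the one tested in the PR: reclassification acts on a copy, so decorating a strategy never mutates the original classifier.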