The DLR loss is one of the major innovations of your work and is central to one of the four attacks used in the AutoAttack benchmark, APGD-DLR. However, when I was running tests with your framework on a couple of data sets, I noticed that AutoAttack crashes when running the APGD-DLR attack. The reason is that the DLR loss function, as defined in equation (6) of your paper, implicitly assumes that the classification problem has at least 3 classes; the targeted version presented in equation (7) assumes at least 4 classes.
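For concreteness, here is a minimal PyTorch sketch of the untargeted DLR loss as I read equation (6) (my own re-implementation, not the code from this repository). The denominator indexes the third-largest logit, which is exactly where the crash comes from; the targeted loss of equation (7) additionally uses the fourth-largest logit, hence the 4-class requirement there.

```python
import torch
import torch.nn.functional as F

def dlr_loss(logits: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Untargeted DLR loss, eq. (6): -(z_y - max_{i != y} z_i) / (z_pi1 - z_pi3)."""
    z_sorted, _ = logits.sort(dim=1, descending=True)   # z_pi1 >= z_pi2 >= ...
    y_mask = F.one_hot(y, num_classes=logits.shape[1]).bool()
    z_y = logits[y_mask]                                 # logit of the true class
    z_other = logits.masked_fill(y_mask, float('-inf')).max(dim=1).values
    # z_sorted[:, 2] is the third-largest logit: with fewer than 3 classes this
    # indexing fails, which is what makes APGD-DLR crash.
    return -(z_y - z_other) / (z_sorted[:, 0] - z_sorted[:, 2] + 1e-12)
```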
This limitation raises a number of concerns which I think should be addressed:
The AutoAttack framework itself currently issues no warning and raises no informative exception when running experiments on data sets with fewer than four classes. Instead, we get an unintuitive index-out-of-bounds error which makes no sense to someone unfamiliar with this limitation of the DLR loss.
This problem raises the question of how to run the AutoAttack benchmark on, say, binary classification problems without compromising the results. One obvious "solution" is to exclude the APGD-DLR attack from the suite for such data sets, leaving only the APGD-CE, FAB and Square attacks. However, this obviously makes the evaluation of the models weaker, and may call into question the meaningfulness of the results. Ideally, the DLR loss should be generalized to a form that still makes sense even when there are only two classes.
Thanks for bringing this up, I've never experimented with datasets with fewer than 4 classes. I think it makes a lot of sense to raise a warning stating that in such cases AA can't be run as in the original version; I'll try to add it soon.
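Roughly something along these lines, just as a first sketch of the kind of guard I have in mind (the function name is hypothetical, exact placement to be decided):

```python
import warnings

def check_num_classes_for_dlr(num_classes: int) -> None:
    # Hypothetical guard, not yet in the codebase: fail early with a clear
    # message instead of an index error deep inside APGD-DLR.
    if num_classes < 3:
        raise ValueError(
            f'APGD-DLR requires at least 3 classes, got {num_classes}; '
            'exclude it from the attack suite or use a different loss.')
    if num_classes < 4:
        warnings.warn('Targeted APGD-DLR requires at least 4 classes; '
                      'only the untargeted version can be run.')
```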
As a possible replacement, I think the simplest choice would be falling back to the margin loss, although it's not scale invariant. I'll check whether better solutions exist (as I suspect they do) and possibly integrate some into the code.
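For reference, the fallback would look roughly like this (again only a sketch, not a final implementation): the negative margin is defined for any number of classes, but without the DLR normalization it changes under rescaling of the logits.

```python
import torch
import torch.nn.functional as F

def margin_loss(logits: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Possible fallback when there are fewer than 3 classes: the negative margin
    # -(z_y - max_{i != y} z_i). Works for any number of classes >= 2, but is
    # not scale invariant, unlike the DLR loss.
    y_mask = F.one_hot(y, num_classes=logits.shape[1]).bool()
    z_y = logits[y_mask]
    z_other = logits.masked_fill(y_mask, float('-inf')).max(dim=1).values
    return -(z_y - z_other)
```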