diff --git a/_posts/2024-10-07-ifman.md b/_posts/2024-10-07-ifman.md
index 073530b..fd83473 100644
--- a/_posts/2024-10-07-ifman.md
+++ b/_posts/2024-10-07-ifman.md
@@ -112,7 +112,7 @@ We also anticipate our attack to work better with smaller training sets, as ther
Recently, influence functions have been proposed to increase the fairness of downstream models; we focus on the study by because it uses the same definition of influence as we do. In this study, influence scores from a base model are used to increase the fairness of a downstream model. Since the fairness of the downstream model is guided by influence scores, an adversary with an incentive to reduce fairness would be interested in manipulating them.
-We propose an untargeted attack for this use-case : scale the base model $\theta^{*}$ by a constant $\lambda > 0$. The malicious base model output by the model trainer is now $\theta^\prime = \lambda \cdot \theta^*$, instead of $\theta^*$. Note that for logistic regression the malicious and original base model are indistinguishable since scaling with a positive constant maintains the sign of the predictions, leading to the same accuracy.
+We propose an untargeted attack for this use case: scale the base model $\theta^*$ by a constant $\lambda > 0$. The malicious base model output by the model trainer is now $\theta^\prime = \lambda \cdot \theta^*$ instead of $\theta^*$. Note that for logistic regression the malicious and original base models are indistinguishable: scaling by a positive constant preserves the sign of the predictions, so both models achieve the same accuracy.
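+
+Below is a minimal sketch of this attack for logistic regression. It assumes a synthetic dataset, an attack strength of $\lambda = 10$, a small damping term on the Hessian, and the standard influence definition $\mathcal{I}(z, z_{\text{test}}) = -\nabla_\theta L(z_{\text{test}})^\top H^{-1} \nabla_\theta L(z)$; these specifics are illustrative assumptions, not settings from our experiments. Scaling $\theta^*$ leaves every predicted sign unchanged but shifts the influence scores the downstream fairness method would consume.
+
+```python
+# Sketch of the scaling attack: same predictions, different influence scores.
+# All data, hyperparameters, and helper names here are hypothetical.
+import numpy as np
+
+rng = np.random.default_rng(0)
+n, d = 200, 5
+X = rng.normal(size=(n, d))
+theta_true = rng.normal(size=d)
+y = np.where(X @ theta_true + 0.5 * rng.normal(size=n) > 0, 1.0, -1.0)
+
+def sigmoid(t):
+    # Logistic function; clip to avoid overflow once theta is scaled up.
+    return 1.0 / (1.0 + np.exp(-np.clip(t, -30.0, 30.0)))
+
+def grad_loss(theta, x, y):
+    # Gradient of the logistic loss log(1 + exp(-y * theta^T x)) w.r.t. theta.
+    return -y * x * sigmoid(-y * (x @ theta))
+
+def hessian(theta, X, damping=1e-3):
+    # Damped empirical Hessian of the logistic loss over the training set.
+    p = sigmoid(X @ theta)
+    W = p * (1.0 - p)
+    return (X * W[:, None]).T @ X / len(X) + damping * np.eye(X.shape[1])
+
+def influence(theta, x_tr, y_tr, x_te, y_te, X):
+    # Influence of a training point on a test point's loss.
+    H_inv = np.linalg.inv(hessian(theta, X))
+    return -grad_loss(theta, x_te, y_te) @ H_inv @ grad_loss(theta, x_tr, y_tr)
+
+# Fit a simple base model by gradient descent (stand-in for theta*).
+theta = np.zeros(d)
+for _ in range(500):
+    g = np.mean([grad_loss(theta, X[i], y[i]) for i in range(n)], axis=0)
+    theta -= 0.5 * g
+
+lam = 10.0               # attack strength lambda > 0 (illustrative value)
+theta_mal = lam * theta  # malicious base model theta' = lambda * theta*
+
+# Predictions (signs of the logits) are identical, so accuracy is unchanged...
+assert np.array_equal(np.sign(X @ theta), np.sign(X @ theta_mal))
+
+# ...but the influence score reported for the same pair of points differs.
+print(influence(theta, X[0], y[0], X[1], y[1], X),
+      influence(theta_mal, X[0], y[0], X[1], y[1], X))
+```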
#### Experimental Results