`self.Ca1.value = (d_ls @ d_l1.t())` — it seems `d_ls` is `grad_l` (the gradient produced by the classification loss) and `d_l1` is `grad_g` (the gradient produced by the fairness-related loss).
`d_ls = [grad_l, grad_g]^T` (line 68)
We want to compute the gradient `d = alpha @ d_ls`, so we only need to obtain `alpha`.
When the fairness constraint is satisfied, the optimization objective is `alpha @ d_ls @ d_ls^T`, which is consistent with the paper.
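To make the combination concrete: with two stacked gradients `d_ls = [grad_l, grad_g]^T`, an MGDA-style min-norm `alpha` has a closed form in the two-gradient case. This is only a NumPy sketch of that idea, not the actual HCO_LP solver from the repo; the function names `min_norm_alpha` and `combined_direction` are mine, not from the code:

```python
import numpy as np

def min_norm_alpha(grad_l, grad_g):
    """Closed-form coefficient minimizing ||a*grad_l + (1-a)*grad_g||^2
    over a in [0, 1] (two-gradient MGDA-style min-norm sketch)."""
    diff = grad_l - grad_g
    denom = diff @ diff
    if denom == 0.0:
        return 0.5  # gradients coincide; any convex combination is equivalent
    # Unconstrained minimizer, clipped to the simplex edge [0, 1]
    return float(np.clip((grad_g - grad_l) @ grad_g / denom, 0.0, 1.0))

def combined_direction(grad_l, grad_g):
    """Compute d = alpha @ d_ls with d_ls = [grad_l, grad_g]^T."""
    a = min_norm_alpha(grad_l, grad_g)
    alpha = np.array([a, 1.0 - a])
    d_ls = np.stack([grad_l, grad_g])   # shape (2, n_params)
    return alpha @ d_ls
```

For orthogonal gradients, e.g. `grad_l = [1, 0]` and `grad_g = [0, 1]`, this yields `alpha = [0.5, 0.5]` and `d = [0.5, 0.5]`, i.e. an equal-weight compromise direction.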
hi,
Thanks for sharing your code. I'm following up on your work. I have some questions about the code.
What does `self.Ca1` in class `HCO_LP` represent? I assume it is a variable representing the gradient? (I infer this from the `self.prob_dom` definition.)
FCFL/FUEL/hco_lp.py, line 71 in 5930200:
self.Ca1.value = (d_ls @ d_l1.t())
It seems `d_ls` is `grad_l` (the gradient produced by the classification loss) and `d_l1` is `grad_g` (the gradient produced by the fairness-related loss).
FCFL/FUEL/hco_lp.py, line 24 in 5930200