
Some questions about the code #2

Open
HqChen2021 opened this issue Jan 9, 2023 · 1 comment

@HqChen2021

Hi,

Thanks for sharing your code. I'm following up on your work and have some questions about it.

  1. What is `self.Ca1` in class `HCO_LP`? From the `self.prob_dom` definition, I assume it is a variable representing the gradient?
  2. I don't understand line 71:
     `self.Ca1.value = (d_ls @ d_l1.t()).cpu().numpy()`
     It seems `d_ls` is `grad_l` (the gradient produced by the classification loss) and `d_l1` is `grad_g` (the gradient produced by the fairness-related loss).
  3. In the same class, line 24 reads
     `constraints_dom = [self.alpha >= 0, cp.sum(self.alpha) == 1]`
     Are some constraints missing here?
@zaocan666
Copy link

`d_ls = [grad_l, grad_g]^T` (line 68).
We want to compute the combined gradient `d = alpha @ d_ls`, so we only need to solve for `alpha`.
When the fairness constraint is satisfied, the optimization objective is `alpha @ d_ls @ d_ls^T`, which is consistent with the paper.
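For the two-gradient case, the role of `alpha` can be illustrated in closed form. This is only a sketch of the idea, not the repo's code: `min_norm_alpha` and `combine` are hypothetical names, the repo solves a cvxpy problem over the simplex instead, and I assume here the objective reduces to the standard min-norm combination of the two gradients.

```python
def min_norm_alpha(g1, g2):
    """Weight a in [0, 1] minimizing ||a*g1 + (1-a)*g2||^2.

    Setting the derivative of ||g2 + a*(g1 - g2)||^2 to zero gives
    a = (g2 - g1) . g2 / ||g1 - g2||^2, clipped to the simplex [0, 1].
    """
    diff = [x - y for x, y in zip(g1, g2)]
    denom = sum(d * d for d in diff)
    if denom == 0.0:  # gradients coincide: any convex weight gives the same d
        return 0.5
    a = -sum(d * y for d, y in zip(diff, g2)) / denom
    return min(max(a, 0.0), 1.0)

def combine(g1, g2):
    # d = alpha @ d_ls for d_ls = [g1, g2]^T and alpha = (a, 1 - a)
    a = min_norm_alpha(g1, g2)
    return [a * x + (1 - a) * y for x, y in zip(g1, g2)]

grad_l = [2.0, 0.0]  # stand-in for the classification-loss gradient
grad_g = [0.0, 1.0]  # stand-in for the fairness-loss gradient
d = combine(grad_l, grad_g)
print(d)  # → [0.4, 0.8]
```

Note that the resulting `d` has equal inner product with both gradients (here 0.8), so neither loss dominates the update direction; cvxpy handles the general case where extra constraints are active.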
