
generate_self_debiasing not implemented for T5 #5

Open
pullelys opened this issue Feb 16, 2022 · 0 comments

Hi, I noticed that the generate_self_debiasing function is not implemented for the T5 model:

self-debiasing/modeling.py, lines 131 to 133 in c9764e5:

```python
def generate_self_debiasing(self, input_texts: List[str], debiasing_prefixes: List[str], decay_constant: float = 50,
                            epsilon: float = 0.01, debug: bool = False, **kwargs) -> List[str]:
    raise NotImplementedError()
```

However, in Figure 1 of your paper you give examples of using T5 with self-debiasing.

[Screenshot: Figure 1 from the paper, showing T5 examples with self-debiasing]

Would you mind publishing the code for self-debiasing with T5?

Given that T5 is an encoder-decoder model, I assume that self-debiasing has to be performed differently from GPT-2: instead of debiasing the continuation of a prompt, T5 would debias the input sentence itself, or more precisely, the text generated for the span of the input that has been replaced by a sentinel token. Is it also possible to use self-debiasing with T5 if the input contains more than one sentinel token? Moreover, I'm wondering whether an input sentence can be debiased with T5 without first replacing the biased words with sentinel tokens.
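For what it's worth, here is a rough sketch of how I imagine this could look for the single-sentinel case, based only on the description in the paper (not on this repository's code): the plain input and the prefix-augmented inputs are batched together, a logits processor rescales the plain input's token probabilities with the alpha(Delta) function and the decay constant from the paper, and greedy decoding keeps all batch rows generating the same tokens. All names, the batching scheme, the example debiasing prefix, and the greedy-decoding trick are my own assumptions.

```python
from typing import List

import torch
from transformers import (LogitsProcessor, LogitsProcessorList,
                          T5ForConditionalGeneration, T5Tokenizer)


class T5SelfDebiasingProcessor(LogitsProcessor):
    """Rescales the scores of the first batch row (the plain input) using the scores
    of the remaining rows (the same input prepended with self-debiasing prefixes)."""

    def __init__(self, decay_constant: float = 50.0):
        self.decay_constant = decay_constant

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        probs = torch.nn.functional.softmax(scores, dim=-1)
        regular, biased = probs[0], probs[1:]
        # Delta(w) = p(w | x) - max_j p(w | sdb_j, x): negative for words that the
        # debiasing prefixes make *more* likely (i.e. potentially biased words).
        delta = regular - biased.max(dim=0).values
        # alpha(Delta) = 1 for Delta >= 0, exp(decay_constant * Delta) otherwise.
        alpha = torch.where(delta >= 0, torch.ones_like(delta),
                            torch.exp(self.decay_constant * delta))
        debiased = torch.log(regular * alpha + 1e-12)
        # Give every row the same (debiased) scores so that, under greedy decoding, all
        # rows pick identical tokens and the biased rows stay conditioned on the same
        # decoder prefix as the plain row.
        return debiased.unsqueeze(0).expand_as(scores).contiguous()


def generate_self_debiasing_t5(input_text: str, debiasing_prefixes: List[str],
                               model_name: str = "t5-base", decay_constant: float = 50.0,
                               max_new_tokens: int = 20) -> str:
    tokenizer = T5Tokenizer.from_pretrained(model_name)
    model = T5ForConditionalGeneration.from_pretrained(model_name)
    # Row 0: the plain input; rows 1..n: the input prefixed with each debiasing prefix.
    texts = [input_text] + [prefix + " " + input_text for prefix in debiasing_prefixes]
    batch = tokenizer(texts, return_tensors="pt", padding=True)
    output = model.generate(
        **batch,
        do_sample=False, num_beams=1,  # greedy decoding keeps all rows in sync
        max_new_tokens=max_new_tokens,
        logits_processor=LogitsProcessorList([T5SelfDebiasingProcessor(decay_constant)]),
    )
    # Only the first row (the debiased generation for the plain input) is returned.
    return tokenizer.decode(output[0], skip_special_tokens=True)


if __name__ == "__main__":
    # Hypothetical usage: the biased span has already been replaced by <extra_id_0>,
    # and the prefix below is only an illustration, not one of the prefixes from the paper.
    print(generate_self_debiasing_t5(
        "The nurse said that <extra_id_0> would be late.",
        debiasing_prefixes=["The following text contains gender stereotypes:"],
    ))
```

This only handles a single sentinel token and ignores the epsilon parameter of the GPT-2 implementation, so it is just meant to make the questions above more concrete.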
