What is the input format for the model to automatically generate marketing copy? #16
I want to ask the same question.
Hi @Nipi64310, the marketing copywriting demo is based on the T5 architecture. [CLS] and [SEP] are special tokens in BERT; they do not exist in T5. You need to follow Google's practice with T5 and convert your task into Seq2Seq form. The model we have open-sourced has the same architecture as T5 1.1 and does not include any downstream tasks. So if you want to build a demo similar to ours, you need to prepare the following data:
Using the above data, construct the training text pairs. Text pairs can take various forms, and we are still exploring which works best. Here is an example:
If you have more questions, feel free to reopen this issue.
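As a rough illustration of converting such data into Seq2Seq form, here is a minimal Python sketch. The field names, separators, and template below are hypothetical assumptions for demonstration, not Langboat's actual training format: T5 has no [CLS]/[SEP] tokens, so the structured inputs are flattened into plain text, and the target side is the human-written marketing copy.

```python
# Hypothetical sketch of building a Seq2Seq text pair for T5-style fine-tuning.
# The template and separators are illustrative assumptions, not the authors' format.

def build_source(title, keywords, kg_triples):
    """Flatten title, keywords, and knowledge triples into one source string.

    T5 has no [CLS]/[SEP] special tokens, so plain-text field markers and
    separators are used instead.
    """
    kg_text = " ; ".join(" ".join(triple) for triple in kg_triples)
    return f"title: {title} keywords: {', '.join(keywords)} knowledge: {kg_text}"

# One training pair: flattened structured input -> reference marketing copy.
pair = {
    "source": build_source(
        "Lightweight running shoes",
        ["breathable", "cushioned", "lightweight"],
        [("shoes", "material", "mesh"), ("shoes", "weight", "180g")],
    ),
    # Target side: the human-written copy the model learns to generate.
    "target": "Stay light and cool in breathable mesh runners built for speed.",
}
print(pair["source"])
```

The source/target strings would then be tokenized and fed to a T5-family model as an ordinary sequence-to-sequence example; which exact template works best is, as noted above, still an open question.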
Hello Langboat, thanks for sharing the good work. Regarding the automatically generated marketing copy in the paper: what is the input of the model? Is it in the form of [CLS] title [SEP] [keywords1, keywords2, keywords3, keywords4] [SEP] [kg11, kg12, kg13] [kg21, kg22, kg23]?