Loading unprocessed corpus documents with CTM and Optimizer #46
Comments
Thanks for opening this specific issue, because I had lost track of the question. Yes, I confirm that there's currently no way to load the unpreprocessed corpus. A possibility could be to add an additional column representing the unpreprocessed text. This could be mandatory (although it's not necessary if one doesn't use CTM) or it could be optional. If it's optional, this can create some confusion (how do we recognize which column represents the unpreprocessed text, the labels, etc.?), unless we provide a header for the .tsv file. Happy to discuss if you want. Unfortunately my time to dedicate to this project has been reduced lately, so I may be slow to respond. However, I think OCTIS can be useful for the community and I'm trying to keep it alive :)
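For illustration, a header-based .tsv corpus along these lines could be loaded as in the sketch below. The column names (`text`, `unpreprocessed_text`, `label`) and the `load_corpus` helper are hypothetical, not part of the current OCTIS format:

```python
import csv

# Hypothetical .tsv layout with a header row, e.g.:
#   text	unpreprocessed_text	label
#   quick brown fox jump	The quick brown fox jumps.	animals

def load_corpus(path):
    """Sketch: read a header-based .tsv corpus into parallel lists.

    Column names are assumptions for illustration; OCTIS does not
    currently define this format.
    """
    preprocessed, unpreprocessed, labels = [], [], []
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f, delimiter="\t")
        for row in reader:
            preprocessed.append(row["text"])
            # The extra column is optional: fall back to the preprocessed
            # text when no raw version is available.
            unpreprocessed.append(row.get("unpreprocessed_text", row["text"]))
            labels.append(row.get("label"))
    return preprocessed, unpreprocessed, labels
```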
Following up on this because I stumbled on the same issue (I think) and want to double-check that I understand correctly. I need to do hyperparameter optimization and model comparison for multiple CTMs, and I want to pass unpreprocessed text to the transformer part of the pipeline while passing preprocessed text to the neural topic model. It seems that at the moment this is not supported here, so I have to stick to manually trying different hyperparameter combinations and computing metrics through https://github.com/MilaNLProc/contextualized-topic-models, correct? Amazing work, by the way 🙏
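For reference, the manual route looks roughly like the sketch below, which uses the contextualized-topic-models package to feed raw text to the sentence transformer and preprocessed text to the bag-of-words side. The embedding model name, the corpus, and the hyperparameter values are placeholders, and the exact API may vary across versions:

```python
from contextualized_topic_models.models.ctm import CombinedTM
from contextualized_topic_models.utils.data_preparation import TopicModelDataPreparation

# Parallel lists: raw documents for the transformer, preprocessed
# documents for the bag-of-words input (placeholder data).
raw_docs = ["The quick brown fox jumps over the lazy dog."]
bow_docs = ["quick brown fox jump lazy dog"]

# The embedding model name is a placeholder; pick one suited to your data.
tp = TopicModelDataPreparation("paraphrase-distilroberta-base-v2")
training_dataset = tp.fit(text_for_contextual=raw_docs, text_for_bow=bow_docs)

ctm = CombinedTM(
    bow_size=len(tp.vocab),   # vocabulary built from the preprocessed text
    contextual_size=768,      # embedding dimensionality of the chosen model
    n_components=50,          # number of topics; a hyperparameter to tune
)
ctm.fit(training_dataset)
print(ctm.get_topic_lists(10))  # top-10 words per topic
```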
Thanks Roberta! :) Yes, that is correct. My suggestion is to first try hyperparameter configurations that "usually" work well. You can find some reference values in these papers:
Moreover, make sure you select an appropriate pre-trained model for generating the contextualized representations of the documents. In this paper we noticed that this has an impact on the results. The pre-processing is also quite important: CTM seems to work better with smaller vocabularies. Hope it helps :) Silvia
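Until OCTIS supports this natively, a manual comparison loop could look like the following sketch. It assumes the `training_dataset`, `tp`, and `bow_docs` variables from the previous sketch; the hyperparameter grid is a placeholder, and topic coherence is computed with gensim's CoherenceModel as one possible metric:

```python
from itertools import product

from gensim.corpora import Dictionary
from gensim.models.coherencemodel import CoherenceModel
from contextualized_topic_models.models.ctm import CombinedTM

# Tokenized preprocessed corpus for the coherence computation.
tokenized = [doc.split() for doc in bow_docs]
dictionary = Dictionary(tokenized)

# Placeholder grid; reference values from the CTM papers are a good start.
grid = {"n_components": [25, 50], "num_epochs": [20, 50]}

results = []
for n_topics, epochs in product(grid["n_components"], grid["num_epochs"]):
    ctm = CombinedTM(bow_size=len(tp.vocab), contextual_size=768,
                     n_components=n_topics, num_epochs=epochs)
    ctm.fit(training_dataset)
    coherence = CoherenceModel(
        topics=ctm.get_topic_lists(10),  # top-10 words per topic
        texts=tokenized,
        dictionary=dictionary,
        coherence="c_npmi",
    ).get_coherence()
    results.append(((n_topics, epochs), coherence))

# Rank configurations by NPMI coherence (higher is better).
print(sorted(results, key=lambda r: r[1], reverse=True))
```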
Thanks for the super quick reply and the pointers! :)
Description
I've asked this at #29, but decided to open a new issue because this is a more specific scenario. So, here it is:
Originally posted by @lffloyd in #29 (comment)
What I Did
I took a look at the docs.