Medical MVP: VQA+Captum demo #45
The authors of 1 suggest that 2 gives good results. The code is publicly available at 3, but the requirements are pretty high: 64 GB of memory (I'm assuming VRAM) and 4 GPUs, according to the authors. Where are we going to train our models? @bcebere @tudorcebere
I can provide access to a 2080 Ti, and when it is really critical, I can request access to a Tesla V100. I don't think I can get anything close to 64 GB of VRAM, though.
I was thinking about the AWS free credit that was listed on the hackathon page. I think we could also use the Google Compute Engine $300 free credit 1 if we want to train something that needs 64 GB of VRAM; unfortunately, I've already used my free GCE credit. I've opened a ticket for this (#46). This issue also depends on #46.
See #44 for more details about the dataset and network architecture.
The task is:
Nice to have:
3. Federate the model and run the same Captum demo. (depends on #41)
4. Find a stronger architecture (maybe based on transformers) and make a demo with it.
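To make the Captum part of the demo concrete, here is a minimal pure-Python sketch of integrated gradients, the attribution method behind Captum's `IntegratedGradients`. The toy scalar "model" and all function names here are illustrative assumptions, not the project's actual VQA model; the real demo would call `captum.attr.IntegratedGradients` on a PyTorch model instead.

```python
def grad(f, x, i, eps=1e-6):
    """Numerical partial derivative of f at x with respect to coordinate i
    (central difference). A real model would use autograd instead."""
    xp = list(x); xp[i] += eps
    xm = list(x); xm[i] -= eps
    return (f(xp) - f(xm)) / (2 * eps)

def integrated_gradients(f, x, baseline=None, steps=100):
    """Approximate IG_i = (x_i - b_i) * integral over alpha in [0, 1] of
    df/dx_i at b + alpha * (x - b), using a midpoint Riemann sum."""
    b = baseline or [0.0] * len(x)
    attrs = []
    for i in range(len(x)):
        total = 0.0
        for k in range(1, steps + 1):
            alpha = (k - 0.5) / steps  # midpoint of each sub-interval
            point = [b[j] + alpha * (x[j] - b[j]) for j in range(len(x))]
            total += grad(f, point, i)
        attrs.append((x[i] - b[i]) * total / steps)
    return attrs

# Toy "model" standing in for the VQA network: f(x) = x0^2 + 2*x1.
f = lambda x: x[0] ** 2 + 2 * x[1]
attrs = integrated_gradients(f, [3.0, 1.0])
```

By the completeness property, the attributions should sum to `f(x) - f(baseline)` (here 11.0), which is a handy sanity check for whichever attribution method the demo ends up using.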