ViT-B classifier added with first 30 labels #7842
Conversation
Walkthrough

The recent updates introduce a serverless deployment of a PyTorch-based Vision Transformer (ViT-B) model using Nuclio. The changes encompass new configurations, function definitions, and a class for model handling, streamlining image classification tasks in a cloud environment.
Actionable comments posted: 3
Out of diff range and nitpick comments (1)
serverless/pytorch/omerferhatt/vit-b/nuclio/function-gpu.yaml (1)

Lines 1-88: Ensure GPU resource allocation is optimized. The configuration specifies the allocation of one GPU. It is important to monitor GPU utilization to ensure the resource is used efficiently: over-provisioning can lead to unnecessary costs, while under-provisioning can affect performance. Consider implementing monitoring tools to track GPU usage and adjust the allocation based on actual usage patterns.
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Hello, I am not sure we can use models from torchvision. Sources:
@bsekachev I can update the model without weights, and it will be used as a template for classifier serverless models since there are none. Regarding the …
class ModelHandler:
    weights = tv.models.ViT_B_16_Weights.DEFAULT
Let's explicitly specify what model will be used: IMAGENET1K_V1
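A minimal sketch of the suggested pinning, assuming the handler keeps the tv alias for torchvision used in the snippet above:

```python
import torchvision as tv

class ModelHandler:
    # Pin the exact checkpoint instead of relying on the DEFAULT alias,
    # so the deployed weights do not change silently across torchvision releases.
    weights = tv.models.ViT_B_16_Weights.IMAGENET1K_V1
```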
@@ -0,0 +1,88 @@
metadata:
The path should be serverless/pytorch/torchvision/vit-b/nuclio, as this model is part of torchvision.
context.logger.info("Init context...100%") | ||
|
||
def handler(context, event): | ||
context.logger.info("Run ViT-B model") |
I do not think we need these logs in production.
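One possible adjustment, sketched under the assumption that the rest of the handler body stays as in the diff (the ellipsis stands for the original decode/infer/response steps, which are not shown here); Nuclio's Python logger also exposes a debug level, so the message could be demoted instead of deleted:

```python
def handler(context, event):
    # Demote (or simply delete) the per-request log so production output stays quiet.
    context.logger.debug("Run ViT-B model")
    ...
```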
"confidence": str(score), | ||
"label": context.user_data.labels[class_id], | ||
"type": "tag", | ||
"objectType": "tag", |
This field is not necessary, as discussed in another pull request.
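For illustration, a sketch of the response entry with the redundant key dropped; the surrounding results list name is assumed, while class_id and score come from the inference step shown elsewhere in the diff:

```python
results = [{
    # "objectType" removed per review; "type": "tag" already identifies the result.
    "confidence": str(score),
    "label": context.user_data.labels[class_id],
    "type": "tag",
}]
```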
class_id = prediction.argmax().item()
score = prediction[class_id].item()
category_name = self.weights.meta["categories"][class_id]
return (class_id, category_name, score)
Please put the return inside the context manager.
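A sketch of the requested change, assuming the method is the handler's infer step, that self.model holds the ViT-B network, and that the forward pass follows the usual torchvision classification pattern (none of which is shown in this diff fragment):

```python
import torch

class ModelHandler:
    def infer(self, image):
        with torch.no_grad():
            prediction = self.model(image).squeeze(0).softmax(0)
            class_id = prediction.argmax().item()
            score = prediction[class_id].item()
            category_name = self.weights.meta["categories"][class_id]
            # Returning here keeps all tensor work inside the no_grad block,
            # as requested in the review.
            return class_id, category_name, score
```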
Hello @omerferhatt, do you have any plans regarding the pull request?
I will close the pull request now.
I'm going to try to finish this week, thanks.
Motivation and context
Since there is no classifier model in the repo, I added one to make it easier for general use and to serve as a draft. I think this can be useful for testing too.
#3896 (comment)
How has this been tested?
Checklist
- I submit my changes into the develop branch
- I have increased versions of npm packages if it is necessary (cvat-canvas, cvat-core, cvat-data and cvat-ui)
License
- I submit my code changes under the same MIT License that covers the project. Feel free to contact the maintainers if that's a concern.
Summary by CodeRabbit
- New Features
- Enhancements
- Functionality