I am training YOLOv2 with a MobileNet backend on the WIDER FACE dataset. val_mAP is very small (0.008), and val_loss is not decreasing either.

If the objects are very small, will changing the grid size help? And can the grid size be changed for YOLOv2 with a MobileNet backend?

Is any pre-processing of the annotated dataset (resizing the images and anchors to 224x224) required before training?

If you have trained YOLOv2 with a MobileNet backend on any standard dataset, please post the results.

Here is the loss, val_loss and val_mAP pattern:
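One pre-processing step that often matters for small-object datasets like WIDER FACE (and that is not mentioned above) is recomputing the anchor boxes for the target dataset, since the default YOLOv2 anchors were fit to VOC/COCO object sizes. A minimal sketch of the usual k-means-with-IoU-distance procedure, assuming the boxes are given as (width, height) pairs in grid-cell units (function names here are illustrative, not from any particular repo):

```python
import numpy as np

def iou_wh(box, anchors):
    # IoU between one (w, h) box and each anchor, treating all
    # boxes as sharing the same top-left corner (shape only).
    inter = np.minimum(box[0], anchors[:, 0]) * np.minimum(box[1], anchors[:, 1])
    union = box[0] * box[1] + anchors[:, 0] * anchors[:, 1] - inter
    return inter / union

def kmeans_anchors(boxes, k=5, iters=100, seed=0):
    # k-means on (w, h) pairs with 1 - IoU as the distance,
    # the standard way YOLOv2 anchors are derived from a dataset.
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), size=k, replace=False)]
    for _ in range(iters):
        # assign each box to the anchor with the highest IoU
        assign = np.array([np.argmax(iou_wh(b, anchors)) for b in boxes])
        # move each anchor to the mean of its assigned boxes
        new = np.array([boxes[assign == i].mean(axis=0) if np.any(assign == i)
                        else anchors[i] for i in range(k)])
        if np.allclose(new, anchors):
            break
        anchors = new
    return anchors

# Example: boxes in grid units, e.g. (w_px / img_w) * grid_w for a
# 13x13 grid. WIDER FACE faces would cluster toward small values.
boxes = np.random.default_rng(1).uniform(0.2, 6.0, size=(500, 2))
print(kmeans_anchors(boxes, k=5))
```

The resulting k anchors replace the defaults in the model config; with face-sized anchors the loss assignment stops penalizing every prediction, which is one plausible cause of a val_mAP stuck near zero.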
I ran MobileNet+YOLOv2 on VOC2007; the mAP is about 30%, which is much lower than MobileNetV2+YOLOv3. I don't know why.