test: Update landmark image #2394
Conversation
Codecov Report
@@            Coverage Diff            @@
##             master    #2394   +/-   ##
=========================================
  Coverage     92.55%   92.55%
  Complexity     4503     4503
=========================================
  Files           312      312
  Lines         13497    13497
=========================================
  Hits          12492    12492
  Misses         1005     1005

Continue to review full report at Codecov.
I'm tired of having to constantly fix these system tests as the machine learning results evolve. Is there a way we could avoid asserting against specific results and just check that the expected values exist?
Approved nonetheless, just something to consider.
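A minimal sketch of the looser assertion style being suggested, assuming JUnit 4 and the google-cloud-vision Java client. The test class, image URI, and score bounds are illustrative assumptions, not the repository's actual fixtures:

```java
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import com.google.cloud.vision.v1.AnnotateImageRequest;
import com.google.cloud.vision.v1.AnnotateImageResponse;
import com.google.cloud.vision.v1.EntityAnnotation;
import com.google.cloud.vision.v1.Feature;
import com.google.cloud.vision.v1.Image;
import com.google.cloud.vision.v1.ImageAnnotatorClient;
import com.google.cloud.vision.v1.ImageSource;
import java.util.Collections;
import java.util.List;
import org.junit.Test;

public class DetectLandmarkIT {

  // Hypothetical fixture; the real system test's landmark image may differ.
  private static final String IMAGE_URI =
      "gs://cloud-samples-data/vision/landmark/pofa.jpg";

  @Test
  public void detectLandmarks_returnsPlausibleAnnotations() throws Exception {
    try (ImageAnnotatorClient client = ImageAnnotatorClient.create()) {
      AnnotateImageRequest request =
          AnnotateImageRequest.newBuilder()
              .setImage(
                  Image.newBuilder()
                      .setSource(ImageSource.newBuilder().setImageUri(IMAGE_URI)))
              .addFeatures(Feature.newBuilder().setType(Feature.Type.LANDMARK_DETECTION))
              .build();

      AnnotateImageResponse response =
          client.batchAnnotateImages(Collections.singletonList(request)).getResponses(0);
      List<EntityAnnotation> landmarks = response.getLandmarkAnnotationsList();

      // Instead of pinning an exact description or score, which drifts as the
      // model evolves, assert that the expected fields are present and sane.
      assertFalse("expected at least one landmark annotation", landmarks.isEmpty());
      for (EntityAnnotation landmark : landmarks) {
        assertFalse(landmark.getDescription().isEmpty());
        assertTrue(landmark.getScore() > 0f && landmark.getScore() <= 1f);
      }
    }
  }
}
```

Presence-and-range checks like these survive routine model updates; an assertion such as `assertEquals("Eiffel Tower", landmarks.get(0).getDescription())` breaks every time the service's top result shifts.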
Agreed. I've been toying around with the idea of a bot that will open PRs to update ML system tests on our behalf across the googleapis repositories. Let me know if you're interested in building out the idea.