Deploying on OpenShift leads to a Quay error for the first deployment #15415
Comments
/cc @geoand
@gsmet what version of OpenShift are you doing this on?
So this was the old error with OpenShift 3. The problem with 4 is slightly different: it complains about the manifest. Unfortunately, for my last deployment, I didn't get the error in the console (dunno why) and the build pod in OpenShift is gone. I'll take note of the error next Sunday.
Gotcha - just to check, you are using the Quarkus mvn command to do the deploy, right? I have a hunch what could be causing it. Let me check.
Okay, so just for my own sanity, to keep track of what happens:
will generate a
which definitely gets bumped every week. I'm a bit confused why that gives an error, since the image stream policy should be
It's as if OpenShift does actually use the cached sha but then remotely fetches the manifest, which of course won't exist anymore... kinda defeating the whole purpose of image streams. I must be missing something here.
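In case it helps with debugging, the cached digest for a tag can be compared against what the image stream reports; a rough sketch, where the stream name openjdk-11 is only a placeholder:

```shell
# Show the image stream's tags and the sha256 digests the cluster currently has cached
oc describe is/openjdk-11

# Resolve the digest recorded for a single tag
oc get istag openjdk-11:latest -o jsonpath='{.image.metadata.name}'
```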
Yes, I use the OpenShift extension and a simple Maven command. You might be on the right path, as IIRC the message complains about the manifest not being found. In this very case, I think we would want to upgrade to the latest version, as we are using floating tags to push security fixes.
I wonder if a "fix" is to do as documented in https://docs.openshift.com/container-platform/4.7/openshift_images/image-streams-manage.html#images-imagestreams-update-tag_image-streams-managing to force an update or even have it scheduled:
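The snippet that followed here was not captured in this extract; the scheduled-import variant from that doc page looks roughly like this (registry and stream names are placeholders):

```shell
# Schedule periodic re-imports so the image stream keeps tracking the floating upstream tag
oc tag registry.access.redhat.com/ubi8/openjdk-11:latest openjdk-11:latest --scheduled=true
```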
Note: these are workarounds for the problem, and I don't grok why there is a pull of images that the cluster already knows... that should not happen.
Yes, but if you used the same cluster to push previously, why is it fetching the manifest to check? It will quite often fail on a floating tag... so there is either some bug in how OpenShift does this or we configured it wrong :)
Long shot: could there be a policy on the cluster pruning the images somehow, causing the cache to be partially fulfilled? I.e. the sha is there but the image is gone, so it tries pulling the manifest using the sha rather than the tag?
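If pruning is the suspect, one quick check is whether cluster-level pruning is configured at all; a sketch assuming OpenShift 4.x:

```shell
# Inspect the cluster's image pruning configuration (schedule, keepTagRevisions, etc.)
oc get imagepruner/cluster -o yaml
```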
@maxandersen so I can confirm I get the error every time the image is refreshed. Here is the error I get:
I fix the issue by executing the following command:
and then restart the deployment. BTW @iocanel, I used to have the error in the terminal and now I have to go to the OpenShift console to get the error. I get this in the terminal:
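The exact commands are not captured in this extract; based on the image-streams doc linked earlier, the refresh-and-restart sequence was presumably something along these lines (the stream and app names are placeholders, and a DeploymentConfig is assumed):

```shell
# Force a one-off re-import so the image stream points at a digest that still exists upstream
oc import-image openjdk-11:latest --confirm

# Then roll out the deployment again
oc rollout latest dc/my-app
```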
Is this still an issue or can we close it?
I haven't verified it is gone - last I checked it still happened, but it's been a while ;/
@iocanel one of your latest fixes should take care of this, right?
Not intentionally, no!
Okay then, let's leave it open.
Is this still an issue with Quarkus 3.8?
Let's close for now.
Nearly every time I deploy my Quarkus application to an OpenShift cluster using the OpenShift extension, I get the following error on the OpenShift side for my first attempt:
The second attempt succeeds with:
As you can see, the hash is not the same. As the Quarkus images are rebuilt often, and I don't deploy that often, maybe that could explain things.
I deploy my application with:
and I use the quarkus-openshift extension.
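The command itself was stripped from this extract; with the quarkus-openshift extension a typical deploy looks roughly like this (the flags shown are the common defaults, not necessarily the exact invocation used here):

```shell
# Add the extension to the project, if not already present
./mvnw quarkus:add-extension -Dextensions="openshift"

# Build and deploy to the cluster currently targeted by `oc login`
./mvnw clean package -Dquarkus.kubernetes.deploy=true
```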
Another issue, slightly related: I have absolutely no feedback about the error; the Maven build is green:
The error is only visible in the OpenShift UI.
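For what it's worth, the build error can also be pulled from the CLI instead of the web console; a sketch, with my-app standing in for the real BuildConfig name:

```shell
# Follow the logs of the latest build for the application's BuildConfig
oc logs -f bc/my-app

# Or list recent builds and inspect a failed one directly
oc get builds
oc logs build/my-app-1
```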
@iocanel this is the issue we discussed on Zulip the other day. You wanted a GH issue, here we go :).