package-server pod keeps crashing #598
Comments
I'm seeing the same issue using the OKD manifests on oc cluster up.

Thanks for the bug report! We have been working on our CI and some things slipped through. This is fixed in master, but we haven't yet cut a release that includes the fix.

Hi @ecordell I'm still seeing this on the 0.8.0 manifests from the latest master (7afcd1e today at 3:22 EST). I believe this is the same issue, but I'm recording the stack trace here in case it's useful: https://pastebin.com/raw/bqyTF7Q6. Let me know if you'd like me to move it to a separate issue.

@brian-avery The images in the 0.8.0 manifests haven't been updated yet to include the fix for the sporadic panic. I believe this will be resolved when we cut a new release.

Did this problem carry over to configmap-registry-server? Seeing this in master (last night):

@smarterclayton does it stay crashed? I haven't seen it fail permanently, but it's designed to crash quickly on start if it can't find what it needs. It should resolve itself; if it doesn't, that's a new bug. One way it can fail is if the catalog data is incorrect, so that's the most likely culprit.
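One way to tell a transient start-up crash from a permanent crashloop is to watch the pod's restart count and pull the logs from the previous container run. A minimal sketch, assuming OLM is installed in the `olm` namespace and the pod carries an `app=package-server` label (both are assumptions, not confirmed by this thread):

```shell
# Watch restart counts; a steadily climbing RESTARTS column means a crashloop,
# while a few early restarts that then stop match the "resolves itself" behavior.
# Namespace and label are assumptions; adjust to your install.
kubectl -n olm get pods -l app=package-server -w

# Fetch logs from the previous (crashed) container to capture the panic/stack trace.
kubectl -n olm logs -l app=package-server --previous
```

If the `--previous` logs show a panic about catalog data, that would point at the "incorrect catalog data" culprit mentioned above.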
All the info I have is in the linked job.
For it to show up here it has to have been crashing for a while.
@smarterclayton We just merged an e2e test to verify that the rh-operators pod starts up and doesn't crashloop: #643. It passed OLM's e2e (with the new test). The error itself looks like OLM is generating a bad role/rolebinding for the rh-operators pod that it creates. I've only ever seen that specific error once, on a branch of OLM, and it was resolved before that branch was merged into master (it was a repeatable bug in our CI). I mention this because this isn't the first time it has looked like non-master-branch OLM code ended up in OpenShift's master, so I'm wondering whether there may be bugs in the way images get tagged into releases.

The latest release (0.8.1) contains all of the package-server fixes that were causing the issues here.
package-server panics regularly.

I installed OLM this way (with today's OLM master):

kubectl create -f deploy/upstream/manifests/latest/
./scripts/run_console_local.sh
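After installing from the manifests, a quick way to check whether package-server came up cleanly is to list the OLM pods and inspect the deployment's events. The `olm` namespace and the `package-server` deployment name are assumptions based on the upstream manifests, not confirmed here:

```shell
# List OLM pods and their status; a CrashLoopBackOff here reproduces this report.
# Namespace is an assumption; adjust to where the manifests installed OLM.
kubectl -n olm get pods

# Describe the package-server deployment for events and restart reasons.
# Deployment name is an assumption; adjust if the manifests use a different one.
kubectl -n olm describe deploy/package-server
```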