Unable to deploy OLM on minishift #705

Closed
leseb opened this issue Feb 8, 2019 · 4 comments

leseb (Contributor) commented Feb 8, 2019

Env:

[leseb@tarox~/operator-lifecycle-manager][master !] minishift status
Minishift:  Running
Profile:    minishift
OpenShift:  Running (openshift v3.11.0+948efc6-96)
DiskUsage:  11% of 39G (Mounted On: /mnt/sda1)
CacheUsage: 1.863 GB (used by oc binary, ISO or cached images)

Error log after running make run-local-shift:

[leseb@tarox~/operator-lifecycle-manager][master !] make run-local-shift
go: finding github.com/googleapis/gnostic/OpenAPIv2 latest
go: finding github.com/googleapis/gnostic/compiler latest
go: finding github.com/google/gofuzz latest
go: finding github.com/emicklei/go-restful/log latest
go: finding github.com/google/btree latest
go: finding github.com/modern-go/concurrent latest
. ./scripts/build_local_shift.sh
-- Starting profile 'minishift'
The 'minishift' VM is already running.
Logged into "https://192.168.42.248:8443" as "system:admin" using existing credentials.

You have access to the following projects and can switch between them with 'oc project <projectname>':

    default
    kube-dns
    kube-proxy
    kube-public
    kube-system
  * myproject
    openshift
    openshift-apiserver
    openshift-controller-manager
    openshift-core-operators
    openshift-infra
    openshift-node
    openshift-service-cert-signer
    openshift-web-console

Using project "myproject".
Sending build context to Docker daemon 77.39 MB
Step 1/3 : FROM golang:1.11
 ---> 901414995ecd
Step 2/3 : WORKDIR /go/src/github.com/operator-framework/operator-lifecycle-manager
 ---> Using cache
 ---> 34ea57fdac84
Step 3/3 : COPY . .
 ---> b06482cd582e
Removing intermediate container 24fb4d455f3f
Successfully built b06482cd582e
mkdir -p build/resources
. ./scripts/package-release.sh 1.0.0 build/resources Documentation/install/local-values-shift.yaml
wrote /tmp/tmp.Flk2FRskCM/chart/olm/templates/0000_50_00-namespace.yaml
wrote /tmp/tmp.Flk2FRskCM/chart/olm/templates/0000_50_14-olm-operators.configmap.yaml
wrote /tmp/tmp.Flk2FRskCM/chart/olm/templates/0000_50_02-clusterserviceversion.crd.yaml
wrote /tmp/tmp.Flk2FRskCM/chart/olm/templates/0000_50_03-installplan.crd.yaml
wrote /tmp/tmp.Flk2FRskCM/chart/olm/templates/0000_50_04-subscription.crd.yaml
wrote /tmp/tmp.Flk2FRskCM/chart/olm/templates/0000_50_05-catalogsource.crd.yaml
wrote /tmp/tmp.Flk2FRskCM/chart/olm/templates/0000_50_13-operatorgroup.crd.yaml
wrote /tmp/tmp.Flk2FRskCM/chart/olm/templates/0000_50_01-olm-operator.serviceaccount.yaml
wrote /tmp/tmp.Flk2FRskCM/chart/olm/templates/0000_50_12-aggregated.clusterrole.yaml
wrote /tmp/tmp.Flk2FRskCM/chart/olm/templates/0000_50_10-olm-operator.deployment.yaml
wrote /tmp/tmp.Flk2FRskCM/chart/olm/templates/0000_50_11-catalog-operator.deployment.yaml
wrote /tmp/tmp.Flk2FRskCM/chart/olm/templates/0000_50_15-olm-operators.catalogsource.yaml
wrote /tmp/tmp.Flk2FRskCM/chart/olm/templates/0000_50_16-operatorgroup-default.yaml
wrote /tmp/tmp.Flk2FRskCM/chart/olm/templates/0000_50_17-packageserver.subscription.yaml
. ./scripts/install_local.sh local build/resources
namespace "local" created
namespace "local" configured
clusterrole.authorization.openshift.io "system:controller:operator-lifecycle-manager" created
serviceaccount "olm-operator-serviceaccount" created
clusterrolebinding.authorization.openshift.io "olm-operator-binding-local" created
customresourcedefinition.apiextensions.k8s.io "clusterserviceversions.operators.coreos.com" created
customresourcedefinition.apiextensions.k8s.io "installplans.operators.coreos.com" created
customresourcedefinition.apiextensions.k8s.io "subscriptions.operators.coreos.com" created
customresourcedefinition.apiextensions.k8s.io "catalogsources.operators.coreos.com" created
deployment.apps "olm-operator" created
deployment.apps "catalog-operator" created
clusterrole.rbac.authorization.k8s.io "aggregate-olm-edit" created
clusterrole.rbac.authorization.k8s.io "aggregate-olm-view" created
customresourcedefinition.apiextensions.k8s.io "operatorgroups.operators.coreos.com" created
configmap "olm-operators" replaced
catalogsource.operators.coreos.com "olm-operators" created
operatorgroup.operators.coreos.com "global-operators" created
operatorgroup.operators.coreos.com "olm-operators" created
subscription.operators.coreos.com "packageserver" created
Waiting for rollout to finish: 0 of 1 updated replicas are available...

error: deployment "olm-operator" exceeded its progress deadline
make: *** [Makefile:68: run-local-shift] Error 1

However, oc get deployments indicates the deployment is running but not available:

[leseb@tarox~] oc get deployments --all-namespaces
NAMESPACE                       NAME                                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
local                           catalog-operator                         1         1         1            0           17m
local                           olm-operator                             1         1         1            0           18m
openshift-core-operators        openshift-service-cert-signer-operator   1         1         1            1           26m
openshift-core-operators        openshift-web-console-operator           1         1         1            1           24m
openshift-service-cert-signer   apiservice-cabundle-injector             1         1         1            1           25m
openshift-service-cert-signer   service-serving-cert-signer              1         1         1            1           25m
openshift-web-console           webconsole                               1         1         1            1           24m
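
For completeness, the pods behind the deployment can be inspected as well; I did not capture that output, but the standard commands would be something like (using the app=olm-operator label from the pod template):

oc -n local get pods -l app=olm-operator
oc -n local describe pod -l app=olm-operator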

Description of the resource:

[leseb@tarox~] oc describe deployment.apps/olm-operator -n local
Name:                   olm-operator
Namespace:              local
CreationTimestamp:      Fri, 08 Feb 2019 15:27:51 +0100
Labels:                 app=olm-operator
Annotations:            deployment.kubernetes.io/revision=1
                        kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app":"olm-operator"},"name":"olm-operator","namespace":"local"},"sp...
Selector:               app=olm-operator
Replicas:               1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           app=olm-operator
  Service Account:  olm-operator-serviceaccount
  Containers:
   olm-operator:
    Image:      quay.io/coreos/olm:local
    Port:       8080/TCP
    Host Port:  0/TCP
    Command:
      /bin/olm
    Args:
      -watchedNamespaces
      local
      -debug
      -writeStatusName

    Liveness:   http-get http://:8080/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:  http-get http://:8080/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      OPERATOR_NAMESPACE:   (v1:metadata.namespace)
      OPERATOR_NAME:       olm-operator
    Mounts:                <none>
  Volumes:                 <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      False   MinimumReplicasUnavailable
  Progressing    False   ProgressDeadlineExceeded
OldReplicaSets:  <none>
NewReplicaSet:   olm-operator-6ffbdc7886 (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  21m   deployment-controller  Scaled up replica set olm-operator-6ffbdc7886 to 1
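
The events only show the initial scale-up, so the container logs are the natural next check; the readiness probe is an HTTP GET on :8080/healthz, so a crash loop or a failing probe should show up there (a follow-up check, output not captured here):

oc -n local get pods
oc -n local logs <olm-operator-pod-name>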

Could this simply be a timing issue?
Is my minishift version too old? (minishift does not have any 4.0 alpha AFAIK)

Thanks.

leseb changed the title from "unable to deploy OLM on minishift" to "Unable to deploy OLM on minishift" on Feb 8, 2019
njhale (Member) commented Feb 9, 2019

@leseb We've been meaning to remove the minishift scripts from OLM. AFAIK it will not be available in 4.0+. If you just need to run OLM, I suggest make run-local with minikube. If you need all of OpenShift, I would try the installer - I believe there are some instructions at try.openshift.com as well.
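
Roughly, the flow would be (a sketch, not the exact contents of the scripts):

minikube start
make run-local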

leseb (Contributor, Author) commented Feb 11, 2019

Thanks for your response. It seems to start fine on minikube v1.13.2, so perhaps we should update the build_local.sh script to this version, or are there any reasons to stick with 1.11? Thanks.

njhale (Member) commented Feb 12, 2019

1.11 is the version of Kubernetes that minikube is deploying. I believe our client modules depend on 1.11 and we haven't upgraded yet.
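
If you want to pin minikube to what the client modules expect, something like this should work (the exact 1.11 patch release is arbitrary):

minikube start --kubernetes-version=v1.11.7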

leseb (Contributor, Author) commented Feb 12, 2019

@njhale Thanks!

leseb closed this as completed on Feb 12, 2019
leseb added a commit to leseb/operator-lifecycle-manager that referenced this issue Feb 13, 2019
As per operator-framework#705 (comment), it has been decided to drop support for minishift.

Signed-off-by: Sébastien Han <[email protected]>