This repository has been archived by the owner on Jan 24, 2023. It is now read-only.
Maybe I'm wrong, but aren't the ACS clusters eventually created using the acs-engine scripts?
If yes, which acs-engine release is used for which region?
Or is it always the latest, but with an RPv1/RPv2 switch?
I'm asking because the recently introduced SecurityContextDeny admission flag gave us trouble, and I want to know which regions are affected by it.
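For anyone who wants to check whether a particular cluster is affected, here is a minimal sketch that tests an apiserver `--admission-control` value for the flag in question. The value below is pasted from the uksouth dump in this issue; on a live master you would substitute the value from your own apiserver process.

```shell
# The --admission-control value as seen on the uksouth apiserver (pasted
# from this issue; replace with the value from your own cluster's master).
ADMISSION="NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,DenyEscalatingExec,AlwaysPullImages,SecurityContextDeny"

# Split the comma-separated list into lines and look for an exact match.
if echo "$ADMISSION" | tr ',' '\n' | grep -qx SecurityContextDeny; then
  echo "SecurityContextDeny is enabled"
else
  echo "SecurityContextDeny is not enabled"
fi
```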
Is this a request for help?:
Yes
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT
Orchestrator and version (e.g. Kubernetes, DC/OS, Swarm)
Kubernetes 1.7.7
$ az --version
azure-cli (2.0.25)
What happened:
I deployed two Kubernetes ACS clusters into two different locations, and the clusters turned out to be configured quite differently.
We can install our software in one location but not in the other, specifically because the SecurityContextDeny admission controller is enabled in one cluster and not in the other.
This is how the API server was started in each location:
uksouth:
/hyperkube
apiserver
--address=0.0.0.0
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,DenyEscalatingExec,AlwaysPullImages,SecurityContextDeny
--advertise-address=10.240.255.5
--allow-privileged=true
--anonymous-auth=false
--audit-log-maxage=30
--audit-log-maxbackup=10
--audit-log-maxsize=100
--audit-log-path=/var/log/apiserver/audit.log
--authorization-mode=Node,RBAC
--client-ca-file=/etc/kubernetes/certs/ca.crt
--cloud-config=/etc/kubernetes/azure.json
--cloud-provider=azure
--etcd-cafile=/etc/kubernetes/certs/ca.crt
--etcd-certfile=/etc/kubernetes/certs/etcdclient.crt
--etcd-keyfile=/etc/kubernetes/certs/etcdclient.key
--etcd-quorum-read=true
--etcd-servers=https://127.0.0.1:2379
--insecure-port=8080
--kubelet-client-certificate=/etc/kubernetes/certs/client.crt
--kubelet-client-key=/etc/kubernetes/certs/client.key
--profiling=false
--repair-malformed-updates=false
--secure-port=443
--service-account-key-file=/etc/kubernetes/certs/apiserver.key
--service-account-lookup=true
--service-cluster-ip-range=10.0.0.0/16
--storage-backend=etcd3
--tls-cert-file=/etc/kubernetes/certs/apiserver.crt
--tls-private-key-file=/etc/kubernetes/certs/apiserver.key
--v=4
westeurope:
/hyperkube
apiserver
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota
--address=0.0.0.0
--allow-privileged
--insecure-port=8080
--secure-port=443
--cloud-provider=azure
--cloud-config=/etc/kubernetes/azure.json
--service-cluster-ip-range=10.0.0.0/16
--etcd-servers=http://127.0.0.1:2379
--etcd-quorum-read=true
--advertise-address=10.240.255.15
--tls-cert-file=/etc/kubernetes/certs/apiserver.crt
--tls-private-key-file=/etc/kubernetes/certs/apiserver.key
--client-ca-file=/etc/kubernetes/certs/ca.crt
--service-account-key-file=/etc/kubernetes/certs/apiserver.key
--storage-backend=etcd2
--v=4
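To make the delta that blocks our install explicit, here is a sketch that diffs the admission controllers of the two clusters. The lists are pasted from the dumps above (the file paths are just placeholders); note that the dumps also differ in other flags, e.g. `--storage-backend` (etcd3 vs. etcd2) and `--authorization-mode`.

```shell
# Admission controllers from the uksouth apiserver, one per line.
cat > /tmp/uksouth-admission.txt <<'EOF'
NamespaceLifecycle
LimitRanger
ServiceAccount
DefaultStorageClass
ResourceQuota
DenyEscalatingExec
AlwaysPullImages
SecurityContextDeny
EOF

# Admission controllers from the westeurope apiserver.
cat > /tmp/westeurope-admission.txt <<'EOF'
NamespaceLifecycle
LimitRanger
ServiceAccount
DefaultStorageClass
ResourceQuota
EOF

# Show only the controllers that differ between the two regions.
# diff exits non-zero when the files differ, so swallow that status.
diff /tmp/uksouth-admission.txt /tmp/westeurope-admission.txt || true
```

The diff shows that DenyEscalatingExec, AlwaysPullImages, and SecurityContextDeny are only enabled in uksouth.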
What you expected to happen:
The same cluster configuration in both regions when created with the same command.
How to reproduce it (as minimally and precisely as possible):
az acs create \
    --orchestrator-type kubernetes \
    --orchestrator-version "1.7.7" \
    --name abcd \
    --resource-group efgh \
    --client-secret ${CLIENT_SECRET} \
    --service-principal ${SERVICE_PRINCIPAL} \
    --agent-count 3 \
    --agent-vm-size Standard_D12 \
    --location [uksouth|westeurope]
Anything else we need to know: