Commit

Merge branch 'master' into master
Dan Gorst authored Apr 12, 2018
2 parents 34e6490 + d0518d9 commit 2349511
Showing 4 changed files with 137 additions and 20 deletions.
127 changes: 114 additions & 13 deletions README.md
@@ -12,7 +12,7 @@ Provide IAM credentials to containers running inside a kubernetes cluster based

Traditionally in AWS, service level isolation is done using IAM roles. IAM roles are attributed through instance
profiles and are accessible by services through the transparent usage by the aws-sdk of the ec2 metadata API.
When using the aws-sdk, a call is made to the EC2 metadata API which provides temporary credentials
that are then used to make calls to the AWS service.

## Problem statement
@@ -26,8 +26,8 @@ IAM roles. This is not acceptable from a security perspective.

The solution is to redirect the traffic that is going to the ec2 metadata API for docker containers to a container
running on each instance, make a call to the AWS API to retrieve temporary credentials and return these to the caller.
Other calls will be proxied to the EC2 metadata API. This container will need to run with host networking enabled
so that it can call the EC2 metadata API itself.

## Usage

@@ -80,9 +80,8 @@ role. See this [StackOverflow post](http://stackoverflow.com/a/33850060) for mor
### kube2iam daemonset

Run the kube2iam container as a daemonset (so that it runs on each worker) with `hostNetwork: true`.
The kube2iam daemon and iptables rule (see below) need to run before all other pods that would require
access to AWS resources.

```yaml
apiVersion: extensions/v1beta1
# @@ -103,6 +102,12 @@ spec: (lines collapsed in the diff view)
          name: kube2iam
          args:
            - "--base-role-arn=arn:aws:iam::123456789012:role/"
            - "--node=$(NODE_NAME)"
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          ports:
            - containerPort: 8181
              hostPort: 8181
```

@@ -111,7 +116,7 @@ spec:
### iptables
To prevent containers from directly accessing the EC2 metadata API and gaining unwanted access to AWS resources,
the traffic to `169.254.169.254` must be proxied for docker containers.

@@ -137,6 +142,7 @@ different than `docker0` depending on which virtual network you use e.g.
* for CNI, use `cni0`
* for weave, use `weave`
* for flannel, use `cni0`
* for [kube-router](https://github.com/cloudnativelabs/kube-router), use `kube-bridge`
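The iptables command itself is collapsed in this diff view; as a sketch (the interface, metadata port, and kube2iam host port below are assumptions that depend on your cluster):

```bash
# DNAT metadata traffic from containers on the bridge to kube2iam on the host.
# HOST_IP is the instance's primary IP; 8181 is kube2iam's default port.
iptables \
  --table nat \
  --append PREROUTING \
  --protocol tcp \
  --destination 169.254.169.254 \
  --dport 80 \
  --in-interface docker0 \
  --jump DNAT \
  --to-destination "$HOST_IP:8181"
```

Swap `docker0` for whichever bridge from the list above matches your network plugin.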

```yaml
apiVersion: extensions/v1beta1
# @@ -159,11 +165,16 @@ spec: (lines collapsed in the diff view)
            - "--base-role-arn=arn:aws:iam::123456789012:role/"
            - "--iptables=true"
            - "--host-ip=$(HOST_IP)"
            - "--node=$(NODE_NAME)"
          env:
            - name: HOST_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          ports:
            - containerPort: 8181
              hostPort: 8181
```
@@ -267,9 +278,10 @@

```yaml
metadata:
  name: default
```

_Note:_ You can also use glob-based matching for namespace restrictions, which works nicely with the path-based
namespacing supported for AWS IAM roles.

Example: to allow all roles prefixed with `my-custom-path/` to be assumed by pods in the default namespace, the
default namespace would be annotated as follows:

@@ -282,20 +294,108 @@

```yaml
metadata:
  name: default
```

### RBAC Setup

This is the bare minimum RBAC setup required to get kube2iam working correctly when your cluster uses RBAC.

First we need to create a service account.

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube2iam
  namespace: kube-system
```

Next we need to set up the role and binding for the process.

```yaml
---
apiVersion: v1
items:
  - apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: ClusterRole
    metadata:
      name: kube2iam
    rules:
      - apiGroups: [""]
        resources: ["namespaces", "pods"]
        verbs: ["get", "watch", "list"]
  - apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: ClusterRoleBinding
    metadata:
      name: kube2iam
    subjects:
      - kind: ServiceAccount
        name: kube2iam
        namespace: kube-system
    roleRef:
      kind: ClusterRole
      name: kube2iam
      apiGroup: rbac.authorization.k8s.io
kind: List
```

You will notice this lives in the kube-system namespace to allow for easier separation between system services and other services.

Here is what a kube2iam daemonset yaml might look like.

```yaml
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube2iam
  namespace: kube-system
  labels:
    app: kube2iam
spec:
  template:
    metadata:
      labels:
        name: kube2iam
    spec:
      serviceAccountName: kube2iam
      hostNetwork: true
      containers:
        - image: jtblin/kube2iam:latest
          imagePullPolicy: Always
          name: kube2iam
          args:
            - "--app-port=8181"
            - "--base-role-arn=arn:aws:iam::xxxxxxx:role/"
            - "--iptables=true"
            - "--host-ip=$(HOST_IP)"
            - "--host-interface=weave"
            - "--verbose"
          env:
            - name: HOST_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          ports:
            - containerPort: 8181
              hostPort: 8181
              name: http
          securityContext:
            privileged: true
```

### Debug

By using the `--debug` flag you can enable some extra features that make debugging easier:

* `/debug/store` endpoint enabled to dump knowledge of namespaces and role association.

### Base ARN auto discovery

By using the `--auto-discover-base-arn` flag, kube2iam will auto discover the base ARN via the EC2 metadata service.

### Using the EC2 instance role as default role

By using the `--auto-discover-default-role` flag, kube2iam will auto discover the base ARN and the IAM role attached to
the instance and use it as the fallback role when the annotation is not set.
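The fallback behaviour can be sketched as follows (the function and role names here are hypothetical, not kube2iam's actual code):

```go
package main

import "fmt"

// roleForPod returns the role from the pod annotation, falling back to the
// instance's default role when the annotation is not set.
func roleForPod(annotated, defaultRole string) string {
	if annotated != "" {
		return annotated
	}
	return defaultRole
}

func main() {
	fmt.Println(roleForPod("", "instance-default-role"))
	fmt.Println(roleForPod("app-role", "instance-default-role"))
}
```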

### Options
@@ -324,6 +424,7 @@ Usage of ./build/bin/darwin/kube2iam:
--insecure Kubernetes server should be accessed without verifying the TLS. Testing only
--iptables Add iptables rule (also requires --host-ip)
--remove-iptables-on-exit Attempt to remove iptables rule on exit (also requires --iptables)
--log-format string Log format (text/json) (default "text")
--log-level string Log level (default "info")
--metadata-addr string Address for the ec2 metadata (default "169.254.169.254")
--namespace-key string Namespace annotation key used to retrieve the IAM roles allowed (value in annotation should be json array) (default "iam.amazonaws.com/allowed-roles")
@@ -338,7 +439,7 @@ Usage of ./build/bin/darwin/kube2iam:
* Build and push dev image to docker hub: `make docker-dev DOCKER_REPO=<your docker hub username>`
* Update `deployment.yaml` as needed
* Deploy to local kubernetes cluster: `kubectl create -f deployment.yaml` or
  `kubectl delete -f deployment.yaml && kubectl create -f deployment.yaml`
* Expose as service: `kubectl expose deployment kube2iam --type=NodePort`
* Retrieve the services url: `minikube service kube2iam --url`
* Test your changes e.g. `curl -is $(minikube service kube2iam --url)/healthz`
8 changes: 7 additions & 1 deletion cmd/main.go
@@ -33,8 +33,10 @@ func addFlags(s *server.Server, fs *pflag.FlagSet) {
fs.BoolVar(&s.NamespaceRestriction, "namespace-restrictions", false, "Enable namespace restrictions")
fs.StringVar(&s.NamespaceKey, "namespace-key", s.NamespaceKey, "Namespace annotation key used to retrieve the IAM roles allowed (value in annotation should be json array)")
fs.StringVar(&s.HostIP, "host-ip", s.HostIP, "IP address of host")
fs.StringVar(&s.NodeName, "node", s.NodeName, "Name of the node where kube2iam is running")
fs.DurationVar(&s.BackoffMaxInterval, "backoff-max-interval", s.BackoffMaxInterval, "Max interval for backoff when querying for role.")
fs.DurationVar(&s.BackoffMaxElapsedTime, "backoff-max-elapsed-time", s.BackoffMaxElapsedTime, "Max elapsed time for backoff when querying for role.")
fs.StringVar(&s.LogFormat, "log-format", s.LogFormat, "Log format (text/json)")
fs.StringVar(&s.LogLevel, "log-level", s.LogLevel, "Log level")
fs.BoolVar(&s.Verbose, "verbose", false, "Verbose")
fs.BoolVar(&s.Version, "version", false, "Print the version and exits")
@@ -56,6 +58,10 @@ func main() {
log.SetLevel(logLevel)
}

if strings.ToLower(s.LogFormat) == "json" {
log.SetFormatter(&log.JSONFormatter{})
}

if s.Version {
version.PrintVersionAndExit()
}
@@ -108,7 +114,7 @@ func main() {

signalChan := make(chan os.Signal)
go func() {
if err := s.Run(s.APIServer, s.APIToken, s.NodeName, s.Insecure); err != nil {
log.Errorf("%s", err)
signalChan <- syscall.SIGABRT // On error, just quit now by faking a signal
}
11 changes: 8 additions & 3 deletions k8s/k8s.go
@@ -27,11 +27,16 @@ type Client struct {
namespaceIndexer cache.Indexer
podController *cache.Controller
podIndexer cache.Indexer
nodeName string
}

// Returns a cache.ListWatch that gets all changes to pods.
func (k8s *Client) createPodLW() *cache.ListWatch {
fieldSelector := selector.Everything()
if k8s.nodeName != "" {
	fieldSelector = selector.OneTermEqualSelector("spec.nodeName", k8s.nodeName)
}
return cache.NewListWatchFromClient(k8s.CoreV1().RESTClient(), "pods", v1.NamespaceAll, fieldSelector)
}

// WatchForPods watches for pod changes.
@@ -119,7 +124,7 @@ func (k8s *Client) NamespaceByName(namespaceName string) (*v1.Namespace, error)
}

// NewClient returns a new kubernetes client.
func NewClient(host, token, nodeName string, insecure bool) (*Client, error) {
var config *rest.Config
var err error
if host != "" && token != "" {
@@ -138,5 +143,5 @@ func NewClient(host, token string, insecure bool) (*Client, error) {
if err != nil {
return nil, err
}
return &Client{Clientset: client, nodeName: nodeName}, nil
}
11 changes: 8 additions & 3 deletions server/server.go
@@ -28,6 +28,7 @@ const (
defaultCacheSyncAttempts = 10
defaultIAMRoleKey = "iam.amazonaws.com/role"
defaultLogLevel = "info"
defaultLogFormat = "text"
defaultMaxElapsedTime = 2 * time.Second
defaultMaxInterval = 1 * time.Second
defaultMetadataAddress = "169.254.169.254"
@@ -46,10 +47,12 @@ type Server struct {
MetadataAddress string
HostInterface string
HostIP string
NodeName string
NamespaceKey string
LogLevel string
LogFormat string
AddIPTablesRule bool
RemoveIPTablesRuleOnExit bool
AutoDiscoverBaseArn bool
AutoDiscoverDefaultRole bool
Debug bool
@@ -265,8 +268,8 @@ func write(logger *log.Entry, w http.ResponseWriter, s string) {
}

// Run runs the specified Server.
func (s *Server) Run(host, token, nodeName string, insecure bool) error {
	k, err := k8s.NewClient(host, token, nodeName, insecure)
if err != nil {
return err
}
@@ -293,6 +296,7 @@ func (s *Server) Run(host, token string, insecure bool) error {
// This is a potential security risk if enabled in some clusters, hence the flag
r.Handle("/debug/store", appHandler(s.debugStoreHandler))
}
r.Handle("/{version}/meta-data/iam/security-credentials", appHandler(s.securityCredentialsHandler))
r.Handle("/{version}/meta-data/iam/security-credentials/", appHandler(s.securityCredentialsHandler))
r.Handle("/{version}/meta-data/iam/security-credentials/{role:.*}", appHandler(s.roleHandler))
r.Handle("/healthz", appHandler(s.healthHandler))
@@ -310,6 +314,7 @@ func NewServer() *Server {
IAMRoleKey: defaultIAMRoleKey,
BackoffMaxInterval: defaultMaxInterval,
LogLevel: defaultLogLevel,
LogFormat: defaultLogFormat,
MetadataAddress: defaultMetadataAddress,
NamespaceKey: defaultNamespaceKey,
}
