[update-readme] update README.md file and remove node modules #31

Merged
49 changes: 48 additions & 1 deletion README.md
@@ -183,7 +183,7 @@ Get the config from terraform output, and save it to a yaml file:

```bash
terraform output config-map > config-map-aws-auth.yaml
```

Apply the config map to EKS:
Configure the AWS CLI with a user account that has appropriate access to the cluster.
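A minimal sketch of that configuration; the prompted values shown as comments are placeholders, not values from this repository:

```bash
# Configure credentials for an IAM user that is allowed to manage the cluster.
aws configure
# AWS Access Key ID [None]: <access-key-id>
# AWS Secret Access Key [None]: <secret-access-key>
# Default region name [None]: <region>
# Default output format [None]: json
```

With credentials in place, apply the config map to the EKS cluster: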

```bash
kubectl apply -f config-map-aws-auth.yaml
```

@@ -195,6 +195,53 @@ You can verify the worker nodes are joining the cluster

```bash
kubectl get nodes --watch
```

### Authorize users to access the cluster

Initially, only the machine that deployed the cluster can access it. To authorize additional users, the `aws-auth` ConfigMap needs to be modified using the steps below:

* Open the `aws-auth` ConfigMap for editing on the machine that was used to deploy the EKS cluster:

```bash
sudo kubectl edit -n kube-system configmap/aws-auth
```

* Add the following configuration to that file, replacing the placeholders:


```yaml
mapUsers: |
  - userarn: arn:aws:iam::111122223333:user/<username>
    username: <username>
    groups:
      - system:masters
```

So, the final configuration would look like this:

```yaml
apiVersion: v1
data:
  mapRoles: |
    - rolearn: arn:aws:iam::555555555555:role/devel-worker-nodes-NodeInstanceRole-74RF4UBDUKL6
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/<username>
      username: <username>
      groups:
        - system:masters
```
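After saving and exiting the editor, you can confirm the change was stored with a quick read-only check (a small sketch using standard `kubectl` commands):

```bash
# Print the stored ConfigMap and check that the new mapUsers entry is present.
kubectl get configmap aws-auth -n kube-system -o yaml
```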

* Once the user mapping is added to the configuration, create a cluster role binding for that user:

```bash
kubectl create clusterrolebinding ops-user-cluster-admin-binding-<username> --clusterrole=cluster-admin --user=<username>
```

Replace the `<username>` placeholder with the IAM user name.
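For example, assuming the IAM user is named `jane` and the cluster is called `eks-demo` (both hypothetical values), the binding and a quick access check from that user's machine might look like this:

```bash
# Bind the hypothetical user "jane" to the cluster-admin role
kubectl create clusterrolebinding ops-user-cluster-admin-binding-jane \
  --clusterrole=cluster-admin --user=jane

# From jane's machine: point kubectl at the cluster and confirm access
aws eks update-kubeconfig --name eks-demo --region us-west-2
kubectl auth can-i '*' '*'   # should print "yes" for cluster-admin
kubectl get nodes
```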

### Cleaning up

You can destroy this cluster entirely by running:
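A minimal sketch, assuming the same Terraform workflow used to create the cluster:

```bash
# Tear down all resources managed by this Terraform configuration.
terraform destroy
```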
1 change: 0 additions & 1 deletion node_modules/.bin/esparse

This file was deleted.

1 change: 0 additions & 1 deletion node_modules/.bin/esvalidate

This file was deleted.

1 change: 0 additions & 1 deletion node_modules/.bin/flat

This file was deleted.

1 change: 0 additions & 1 deletion node_modules/.bin/is-ci

This file was deleted.

1 change: 0 additions & 1 deletion node_modules/.bin/js-yaml

This file was deleted.

1 change: 0 additions & 1 deletion node_modules/.bin/rc

This file was deleted.

1 change: 0 additions & 1 deletion node_modules/.bin/release-it

This file was deleted.

1 change: 0 additions & 1 deletion node_modules/.bin/semver

This file was deleted.

1 change: 0 additions & 1 deletion node_modules/.bin/shjs

This file was deleted.

1 change: 0 additions & 1 deletion node_modules/.bin/uuid

This file was deleted.

1 change: 0 additions & 1 deletion node_modules/.bin/which

This file was deleted.

1 change: 0 additions & 1 deletion node_modules/.bin/window-size

This file was deleted.

265 changes: 0 additions & 265 deletions node_modules/@iarna/toml/CHANGELOG.md

This file was deleted.

14 changes: 0 additions & 14 deletions node_modules/@iarna/toml/LICENSE

This file was deleted.
