Migrating from aws-auth identity mapping
Customers who already use EKS may be managing IAM principal access to their cluster with the aws-auth ConfigMap mechanism. This section demonstrates how you can migrate entries from this older mechanism to cluster access entries.
An IAM role named eks-workshop-admins has been pre-configured in the EKS cluster for a group that needs EKS administrative permissions. Let's check the aws-auth ConfigMap:
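To view it, assuming you already have kubectl access to the cluster:

$ kubectl -n kube-system get configmap aws-auth -o yaml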
apiVersion: v1
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::1234567890:role/eksctl-eks-workshop-nodegroup-defa-NodeInstanceRole-acgt4WAVfXAA
      username: system:node:{{EC2PrivateDNSName}}
    - groups:
      - system:masters
      rolearn: arn:aws:iam::1234567890:role/eks-workshop-admins
      username: cluster-admin
  mapUsers: |
    []
kind: ConfigMap
metadata:
  creationTimestamp: "2024-05-09T15:21:57Z"
  name: aws-auth
  namespace: kube-system
  resourceVersion: "5186190"
  uid: 2a1f9dc7-e32d-44e5-93b3-e5cf7790d95e
Impersonate this IAM role to check its access:
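One way to do this, as a sketch assuming the cluster is named eks-workshop and your current IAM principal is allowed to assume the eks-workshop-admins role, is to update your kubeconfig to use the role ARN:

$ aws eks update-kubeconfig --name eks-workshop \
    --role-arn arn:aws:iam::1234567890:role/eks-workshop-admins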
With this role we should be able to list pods in any namespace, for example:
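Assuming the sample application runs in a namespace named carts:

$ kubectl get pod -n carts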
NAME READY STATUS RESTARTS AGE
carts-6d4478747c-vvzhm 1/1 Running 0 5m54s
carts-dynamodb-d9f9f48b-k5v99 1/1 Running 0 15d
Next, delete the aws-auth ConfigMap entry for this IAM role. We'll use eksctl for convenience:
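A sketch of the eksctl command, again assuming the cluster is named eks-workshop:

$ eksctl delete iamidentitymapping --cluster eks-workshop \
    --arn arn:aws:iam::1234567890:role/eks-workshop-admins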
Now if we try the same command as before we will be denied access:
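Repeating the pod listing from earlier, with the same assumed carts namespace:

$ kubectl get pod -n carts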
error: You must be logged in to the server (Unauthorized)
Let's add an access entry to give the cluster admins access to the cluster again:
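A sketch using the AWS CLI, with the same assumed cluster name:

$ aws eks create-access-entry --cluster-name eks-workshop \
    --principal-arn arn:aws:iam::1234567890:role/eks-workshop-admins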
Now we can associate the AmazonEKSClusterAdminPolicy access policy with this principal:
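For example, scoping the policy to the entire cluster:

$ aws eks associate-access-policy --cluster-name eks-workshop \
    --principal-arn arn:aws:iam::1234567890:role/eks-workshop-admins \
    --policy-arn arn:aws:eks::aws:cluster-access-policies/AmazonEKSClusterAdminPolicy \
    --access-scope type=cluster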
Test that access is working again:
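Running the same pod listing as before:

$ kubectl get pod -n carts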
NAME READY STATUS RESTARTS AGE
carts-6d4478747c-vvzhm 1/1 Running 0 5m54s
carts-dynamodb-d9f9f48b-k5v99 1/1 Running 0 15d