
I have locked myself out after modifying the config map. Is there any way around this?

This happened after I modified the aws-auth ConfigMap with:

kubectl edit -n kube-system configmap/aws-auth

Now I am getting this error when using the IAM role that was used to create the cluster:

Error from server (Forbidden): pods is forbidden: User "USERNAME" cannot list resource "pods" in API group "" in the namespace "default"
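For reference, this is the general shape EKS expects for the aws-auth ConfigMap (the account ID and role names below are placeholders, not from the question). Note that each mapRoles entry needs a groups list, since RBAC permissions are granted to the groups the entry maps to:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # Worker node role (hypothetical ARN) - required for nodes to join
    - rolearn: arn:aws:iam::111122223333:role/EKSNodeRole
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
    # Additional admin role (hypothetical ARN) - without a groups list,
    # the mapped identity can authenticate but gets no permissions
    - rolearn: arn:aws:iam::111122223333:role/EKSAdminRole
      username: admin
      groups:
        - system:masters
```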
  • The IAM user or IAM role that you created the EKS cluster with still has full cluster-admin access to it. That ARN is granted access in the cluster RBAC during the initial build, so your ConfigMap change cannot revoke it. Use that identity to access your cluster and apply a correctly configured ConfigMap so the other role can list pods in the default namespace. – marcincuber Jul 19 '19 at 21:17
  • @MC_ I'm in a similar situation, where the IAM role which created the cluster has no privileges. I can authenticate fine, but the authenticated user has no privileges. I haven't found any documentation saying this shouldn't be possible. – groner Jul 23 '19 at 19:34
  • This is really strange, because there is no way to remove permissions for the user or role that created the EKS cluster. I shall test it myself and provide an update later this week. – marcincuber Jul 23 '19 at 20:08
  • @MC_ It may be helpful for you to know that I accidentally created mapRole entries with no groups field, and the only rolebindings I created were to groups. – groner Jul 23 '19 at 20:45
  • Just want to confirm that I tested it thoroughly. Even when I removed the ConfigMap entirely, which meant I lost the worker nodes etc., the IAM role that I created the EKS cluster with still had access to the control plane. Additionally, I removed that role and then created it again, and everything worked as expected. Can't replicate your issues, sorry :(. – marcincuber Jul 29 '19 at 17:09
  • 1
    I've managed to do this as well after what seemed like a trivial edit. None of the users listed can access the cluster anymore, including the user that originally created it. I haven't found a workaround yet and will most likely end up re-creating the EKS cluster instead. – tobias.mcnulty Sep 03 '19 at 19:41
  • 1
    Did you manage to fix this problem? We have also locked ourselves out of our cluster. Setting up a new cluster would be very tedious and I find it quite unbelievable that there is no work-around. – joe Feb 10 '20 at 13:16

0 Answers