
I am building a Kubernetes cluster and I was wondering if there is a way to hide the master node(s) from `kubectl get nodes`, like EKS, AKS, etc. do. The purpose is to give the user full admin control except over the master nodes/components. I guess there is a way with Kubernetes RBAC somehow, but I can't find anything relevant yet.

I also tried to disallow the kubelet on the master as suggested here (How do managed Kubernetes providers hide the master nodes?), but the node just appears as "Not Ready".

Dam
  • I think kubectl lists the nodes that have certain labels. If you remove that label, they wouldn't show up in `kubectl get nodes`. – Jose Armesto Jan 27 '20 at 11:01
  • here is the result of `kubectl get nodes --show-labels`: ip-172-31-1-201 Ready controlplane,etcd 3h8m v1.17.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=ip-172-31-1-201,kubernetes.io/os=linux,node-role.kubernetes.io/controlplane=true,node-role.kubernetes.io/etcd=true. Not sure which one I should delete :/ – Dam Jan 27 '20 at 13:06
  • Can you provide the output of `kubectl -v=8 get nodes` on the cloud provider where you only get worker nodes? – Arghya Sadhu Jan 27 '20 at 14:12
  • @ArghyaSadhu there are no cloud providers, it's on-premise (VMs) – Dam Jan 27 '20 at 14:51

3 Answers


The key indeed is the kubelet component of Kubernetes.
I suspect managed Kubernetes offerings do not run the kubelet on master hosts at all.
You can do the same on your DIY cluster to prove it.

The main goal of the kubelet is to run Pods.
If you don't need to run Pods on a host, you don't start the kubelet.
Control Plane components can run as systemd services or static containers instead.
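
This is, for instance, how Kubernetes The Hard Way (linked in a comment below) bootstraps its controllers: the Control Plane binaries run as plain systemd services and no kubelet is started on those hosts. A minimal sketch of such a unit; the flag list is heavily trimmed and the paths are illustrative, not taken from this thread:

# Sketch: run kube-apiserver as a plain systemd service instead of a
# static Pod, so no kubelet is needed on the master host.
cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
  --etcd-servers=https://127.0.0.1:2379 \\
  --service-cluster-ip-range=10.32.0.0/24 \\
  --client-ca-file=/var/lib/kubernetes/ca.pem \\
  --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
  --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now kube-apiserver

Because no kubelet runs on such a host, nothing ever registers it as a Node, so it simply never appears in `kubectl get nodes`.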

There is an alpha feature to self-host the Control Plane components (i.e. run them as Pods): https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/self-hosting/
So in the future they may start running the kubelet on master hosts, but there is no need for that now.

The kubelet is the primary “node agent” that runs on each node. It can register the node with the apiserver using one of: the hostname; a flag to override the hostname; or specific logic for a cloud provider.

https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/

When the kubelet flag --register-node is true (the default), the kubelet will attempt to register itself with the API server. This is the preferred pattern, used by most distros.

https://kubernetes.io/docs/concepts/architecture/nodes/#self-registration-of-nodes
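
To illustrate the flags from those quotes: a hypothetical kubelet invocation that opts out of self-registration (the flags are real kubelet flags; the paths and hostname are placeholders):

# With --register-node=false the kubelet still runs static Pods from
# disk, but it never creates a Node object in the API server, so the
# host does not show up in `kubectl get nodes`.
kubelet \
  --kubeconfig=/var/lib/kubelet/kubeconfig \
  --config=/var/lib/kubelet/config.yaml \
  --register-node=false \
  --hostname-override=master-1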

Ivan
  • thanks for the response. How would you disable the kubelet "at all"? I mean, I install my k8s master with "kubeadm init", I don't install and run "systemctl start kubelet", yet my node still registers and remains as a "Not Ready" node, so the registering part is still there – Dam Jan 28 '20 at 13:38
  • I haven't tested it but I think one option is to skip kubelet phases like this: `kubeadm init ... --skip-phases kubelet-start,bootstrap-token,kubelet-finalize` and `kubeadm join ... --skip-phases kubelet-start` – Ivan Jan 28 '20 at 20:02
  • Alternatively you can pass a config to kubeadm with a custom kubelet section that tells the kubelet to skip node registration (the analogue of the --register-node arg for kubelet) – Ivan Jan 28 '20 at 20:04
  • This might be just enough: `echo "KUBELET_EXTRA_ARGS=--register-node=false" | sudo tee -a /etc/default/kubelet` – Ivan Jan 28 '20 at 20:23
  • I've tried it and it is an epic fail with `kubeadm init ... --skip-phases kubelet-start`, as it configures the control plane services to run as static pods via the kubelet, hence it requires a running kubelet. – Ivan Jan 28 '20 at 20:28
  • And tested with KUBELET_EXTRA_ARGS - it also didn't work for me. It seems the design of the whole `kubeadm` setup is heavily dependent on `kubelet`, so you are better off with something more custom. For example Kelsey doesn't start `kubelet` on master hosts: https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/08-bootstrapping-kubernetes-controllers.md#start-the-controller-services – Ivan Jan 28 '20 at 20:44
  • yea, thanks for the testing. Anyway, it looks like a simple "kubectl delete node" does the job nicely. I am still checking whether there are limitations or impacts with this command. – Dam Jan 29 '20 at 09:48

After you have created a cluster, you can run the command below to delete the master node:

kubectl delete node master-node-name

After you do this you can no longer see that master node in `kubectl get nodes`, but you should still be able to interact with the cluster normally. Only the node's entry in etcd gets deleted.
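
A quick sanity check (the node name is illustrative):

# Before: the master is listed
kubectl get nodes

# Remove only the master's Node object; the control plane processes on
# the host keep running
kubectl delete node master-node-name

# After: only worker nodes are listed, and the API server still answers
kubectl get nodes
kubectl get pods --all-namespaces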

Another way to achieve the same is to configure the kubelet not to register the node via the `--register-node=false` flag and administer it manually.
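
On a kubeadm-based install, that flag can be injected through the KUBELET_EXTRA_ARGS mechanism Ivan tried in the comments above (the file path below is the Debian/Ubuntu package default; as noted there, your mileage may vary):

# Append the flag to the kubelet's environment file and restart it.
# RPM-based systems use /etc/sysconfig/kubelet instead.
echo 'KUBELET_EXTRA_ARGS=--register-node=false' | sudo tee -a /etc/default/kubelet
sudo systemctl restart kubelet

# An already-registered Node object is not removed automatically:
kubectl delete node master-node-name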

I believe this is what the managed Kubernetes service providers do internally.

Arghya Sadhu
  • the command works pretty well, thanks! I confirmed the node does not appear anymore via kubectl and, for testing purposes, I was still able to deploy pods and services. I also tried to add an extra worker node and deploy new apps on it, without problems. So it looks ok! If anyone knows about limitations or impacts of kubectl delete node, please share with us – Dam Jan 29 '20 at 09:46
  • @Dam can you please provide the result of a cluster restart after getting rid of the entry in etcd? – Vit Jan 29 '20 at 10:38
  • @VKR I did not restart the cluster – Dam Jan 29 '20 at 12:45
  • @Dam ok, thanks for letting me know. Please share the results once that happens :) – Vit Jan 29 '20 at 12:56

How would you disable the kubelet "at all"? I mean, I install my k8s master with "kubeadm init", I don't install and run "systemctl start kubelet", yet my node still registers and remains as a "Not Ready" node, so the registering part is still there.

If you've set up your Kubernetes cluster using kubeadm, the kubelet is required on all nodes, including the master, as kubeadm deploys the vast majority of key cluster components, such as kube-apiserver, kube-controller-manager and kube-scheduler, as Pods in the kube-system namespace (you can list them with kubectl get pods -n kube-system). In other words: you cannot run a kubeadm cluster without a running kubelet on your master node. Without it, none of the system Pods forming your Kubernetes cluster can be deployed. See also this section in the official Kubernetes documentation.
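
You can see this dependency directly on a kubeadm master: the kubelet watches a directory of static Pod manifests and turns them into the control plane (the paths and file names below are the kubeadm defaults):

# Static Pod manifests the kubelet reads on a kubeadm master:
ls /etc/kubernetes/manifests
# etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml

# Their mirror Pods as seen through the API:
kubectl get pods -n kube-system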

As to self-hosting the Kubernetes control plane mentioned by @Ivan, better read carefully in the official docs how it really works:

kubeadm allows you to experimentally create a self-hosted Kubernetes control plane. This means that key components such as the API server, controller manager, and scheduler run as DaemonSet pods configured via the Kubernetes API instead of static pods configured in the kubelet via static files.

Nowhere is it written that you don't need the kubelet on the master node at present. On the contrary, it says that when using the (currently experimental) self-hosted Kubernetes control plane approach in kubeadm:

key components such as the API server, controller manager, and scheduler run as DaemonSet Pods configured via the Kubernetes API instead of static Pods configured in the kubelet via static files.

So again: in both approaches the key cluster components run as Pods. DaemonSets are configured via the Kubernetes API, but they are still Pods; and static Pods configured via static files (the current kubeadm approach) still need a kubelet on the master node that can read those files and create the Pods declared in them.
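
The link between the two is the staticPodPath setting in the kubelet's configuration, which on a kubeadm node points at the manifests directory (kubeadm default paths shown):

# The kubelet config written by kubeadm declares where static Pod
# manifests are read from:
grep staticPodPath /var/lib/kubelet/config.yaml
# staticPodPath: /etc/kubernetes/manifests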

mario