
1 - I'm reading the documentation and I'm slightly confused by the wording. It says:

ClusterIP: Exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default ServiceType.

NodePort: Exposes the service on each Node’s IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service will route, is automatically created. You’ll be able to contact the NodePort service, from outside the cluster, by requesting <NodeIP>:<NodePort>.

LoadBalancer: Exposes the service externally using a cloud provider’s load balancer. NodePort and ClusterIP services, to which the external load balancer will route, are automatically created.

Does the NodePort service type still use the ClusterIP but just at a different port, which is open to external clients? So in this case is <NodeIP>:<NodePort> the same as <ClusterIP>:<NodePort>?

Or is the NodeIP actually the IP found when you run kubectl get nodes and not the virtual IP used for the ClusterIP service type?

2 - Also in the diagram from the link below:

http://kubernetes.io/images/docs/services-iptables-overview.svg

Is there any particular reason why the Client is inside the Node? I assumed it would need to be inside a Cluster in the case of a ClusterIP service type.

If the same diagram was drawn for NodePort, would it be valid to draw the client completely outside both the Node and Cluster, or am I completely missing the point?

– mohan08p, AmazingBergkamp

7 Answers


A ClusterIP exposes the following:

  • spec.clusterIP:spec.ports[*].port

You can only access this service while inside the cluster. It is accessible at its spec.clusterIP on the ports listed in spec.ports[*].port. If a spec.ports[*].targetPort is set, traffic is routed from the port to the targetPort on the pods. The CLUSTER-IP you get when calling kubectl get services is the IP assigned to this service internally within the cluster.
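As a concrete sketch putting these fields together (the service name and pod label are hypothetical, not from the question), a ClusterIP service might look like:

apiVersion: v1
kind: Service
metadata:
  name: my-clusterip-service   # hypothetical name
spec:
  type: ClusterIP              # the default, so this line could be omitted
  selector:
    app: my-app                # hypothetical pod label
  ports:
  - port: 80                   # spec.ports[*].port
    targetPort: 8080           # spec.ports[*].targetPort

Inside the cluster this is reachable at <CLUSTER-IP>:80, and traffic is forwarded to port 8080 on the matching pods.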

A NodePort exposes the following:

  • <NodeIP>:spec.ports[*].nodePort
  • spec.clusterIP:spec.ports[*].port

If you access this service on a nodePort from the node's external IP, the request is routed to spec.clusterIP:spec.ports[*].port, which in turn routes it to your spec.ports[*].targetPort, if set. This service can also be accessed in the same way as a ClusterIP service.

Your NodeIPs are the external IP addresses of the nodes. You cannot access your service from spec.clusterIP:spec.ports[*].nodePort.
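The same sketch for a NodePort service (again with hypothetical names; if nodePort is omitted, Kubernetes picks one from its configured range for you):

apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service    # hypothetical name
spec:
  type: NodePort
  selector:
    app: my-app                # hypothetical pod label
  ports:
  - port: 80                   # spec.ports[*].port (the cluster-internal port)
    targetPort: 8080           # spec.ports[*].targetPort
    nodePort: 30080            # spec.ports[*].nodePort; omit to have one auto-assigned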

A LoadBalancer exposes the following:

  • spec.loadBalancerIP:spec.ports[*].port
  • <NodeIP>:spec.ports[*].nodePort
  • spec.clusterIP:spec.ports[*].port

You can access this service from your load balancer's IP address, which routes your request to a nodePort, which in turn routes the request to the clusterIP port. You can also access this service as you would a NodePort or ClusterIP service.
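And a LoadBalancer variant (hypothetical; loadBalancerIP is only honored by some cloud providers, otherwise the provider assigns the address itself):

apiVersion: v1
kind: Service
metadata:
  name: my-lb-service          # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: my-app                # hypothetical pod label
  loadBalancerIP: 1.2.3.4      # spec.loadBalancerIP; requests a specific address, if supported
  ports:
  - port: 80
    targetPort: 8080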

– kellanburket

  • Could you comment on how `externalIPs` changes the equation here? Specifically, it's possible to assign an `externalIPs` array to a `ClusterIP`-type Service, and then the service becomes accessible on the external IP, too? When would you choose this over a NodePort? – Bosh Apr 23 '17 at 21:48
  • The question doesn't mention externalIPs; I think you would probably best be served by posting this as a new question. – kellanburket Apr 24 '17 at 17:36
  • This post is actually more useful in clarifying these differences than the official Kubernetes documentation itself. – adrpino Dec 23 '17 at 13:16
  • @kellanburket, how does this work: `spec.clusterIp`? Can ClusterIP be explicitly mentioned in service.yaml? And similarly `spec.loadBalancerIp`? – samshers Aug 16 '19 at 17:45
  • You made my day with your answer, thank you a lot! (As a side note, in 2020 the networking documentation is still a little bit obscure.) – user430191 May 26 '20 at 17:42

To clarify, for anyone looking for the difference between the three on a simpler level: you can expose your service with minimal exposure via ClusterIP (within the k8s cluster), wider exposure via NodePort (within the internal network, external to the k8s cluster), or LoadBalancer (the external world, or whatever you defined in your LB).

ClusterIP exposure < NodePort exposure < LoadBalancer exposure

  • ClusterIP
    Exposes the service within the k8s cluster at ip/name:port
  • NodePort
    Exposes the service on the internal network's VMs, also external to the k8s cluster, at ip/name:port
  • LoadBalancer
    Exposes the service to the external world, or whatever you defined in your LB.
– Tomer Ben David

ClusterIP: Services are reachable by pods/services in the Cluster
If I make a service called myservice in the default namespace of type: ClusterIP, then the following predictable static DNS address for the service will be created:

myservice.default.svc.cluster.local (or just myservice.default, or by pods in the default namespace just "myservice" will work)

And that DNS name can only be resolved by pods and services inside the cluster.
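A quick way to check that resolution, using a throwaway pod (the pod name and busybox image tag are just examples):

$ kubectl run -it --rm dnstest --image=busybox:1.36 --restart=Never -- nslookup myservice.default.svc.cluster.local

The same lookup from a machine outside the cluster will fail, since the cluster DNS isn't reachable there.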

NodePort: Services are reachable by clients on the same LAN, i.e. clients who can reach the k8s host nodes (as well as by pods/services in the cluster). (Note: for security, your k8s host nodes should be on a private subnet, so clients on the internet won't be able to reach this service.)
If I make a service called mynodeportservice in the mynamespace namespace of type: NodePort on a 3-node Kubernetes cluster, then a service of type: ClusterIP will be created, and it'll be reachable by clients inside the cluster at the following predictable static DNS address:

mynodeportservice.mynamespace.svc.cluster.local (or just mynodeportservice.mynamespace)

For each port that mynodeportservice listens on, a nodePort in the range 30000-32767 will be randomly chosen, so that external clients outside the cluster can hit the ClusterIP service that exists inside the cluster. Let's say that our 3 k8s host nodes have IPs 10.10.10.1, 10.10.10.2, and 10.10.10.3, the Kubernetes service is listening on port 80, and the nodePort picked at random is 31852.

A client that exists outside of the cluster could visit 10.10.10.1:31852, 10.10.10.2:31852, or 10.10.10.3:31852 (as the nodePort is listened for by every Kubernetes host node). kube-proxy will then forward the request to mynodeportservice's port 80.
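For example, a client outside the cluster could hit the service through any of the three nodes (addresses and port taken from the example above):

$ curl http://10.10.10.1:31852
$ curl http://10.10.10.3:31852   # any node works; kube-proxy routes to the backing pods either way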

LoadBalancer: Services are reachable by everyone connected to the internet* (a common architecture is that the L4 LB is publicly accessible on the internet, by putting it in a DMZ or giving it both a private and a public IP, while the k8s host nodes are on a private subnet).
(Note: This is the only service type that doesn't work in 100% of Kubernetes implementations, like bare-metal Kubernetes; it works when Kubernetes has cloud provider integrations.)

If you make mylbservice, then an L4 LB VM will be spawned (a ClusterIP service and a NodePort service will be implicitly spawned as well). This time our nodePort is 30222. The idea is that the L4 LB has a public IP of 1.2.3.4, and it load-balances and forwards traffic to the 3 k8s host nodes that have private IP addresses (10.10.10.1:30222, 10.10.10.2:30222, 10.10.10.3:30222); kube-proxy then forwards it to the service of type ClusterIP that exists inside the cluster.
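Once the cloud provider has provisioned the balancer, its address shows up when you query the service (mylbservice being the hypothetical name from above):

$ kubectl get service mylbservice

The EXTERNAL-IP column shows the load balancer's address (1.2.3.4 in this example); it stays <pending> on clusters without a cloud provider integration.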


You also asked: Does the NodePort service type still use the ClusterIP? Yes*
Or is the NodeIP actually the IP found when you run kubectl get nodes? Also Yes*

Let's draw a parallel between fundamentals:
A container is inside a pod. A pod is inside a replicaset. A replicaset is inside a deployment.
Well, similarly:
A ClusterIP service is part of a NodePort service. A NodePort service is part of a LoadBalancer service.


In that diagram you showed, the Client would be a pod inside the cluster.

– neokyle
  • Based on your follow up questions I was under the impression that you wanted to know how traffic entered into the cluster. I took the liberty to make a Q&A on that if you're interested. https://stackoverflow.com/questions/52241501/how-does-traffic-flow-inside-a-kubernetes-cluster/52241503#52241503 – neokyle Sep 09 '18 at 05:11
  • Hey, really good explanation. I am wondering about the LoadBalancer: it will forward any traffic to a NodeIP:NodePort (the node that is next in the round robin), but how does the call proceed on that node? How does the node port know that this is a service call and that it should distribute it via the kube-proxy to the virtual IP of the service? Will the kube-proxy make a simple port forward? – ItFreak Jul 12 '19 at 11:06
  • kube-proxy plays 3 main roles: 1. make services exist/work by making the iptables on the node match the desired state of services in etcd. 2. be responsible for mapping node port to service to pod (my understanding is it does this via iptables), plus port remappings. 3. make sure each pod has a unique IP. The nodePort traffic could enter on 1 node, but the service definitions exist in the iptables of every node/services exist on every node, pods are usually on a virtualized overlay network, and nodes double as routers, so although traffic comes in on 1 node it gets routed to a pod existing on another node. – neokyle Jul 12 '19 at 13:39
  • Knowing how it works on a level deeper than this is pointless, because Kubernetes is made of modular pieces, and like how Linux has flavors/distros that all work a little differently with some overarching themes, each k8s distro is slightly different. For example, the Cilium CNI is looking to replace kube-proxy entirely, which means how it works behind the scenes is a moving target, thus not worth understanding unless you're actually contributing to the project/trying to fix a bug. – neokyle Jul 12 '19 at 13:42
  • Is there a way to contact you? I am writing a bachelor thesis about security in k8s and would love to learn about the internal functions of the proxy, e.g. how it distributes IP addresses to nodes and pods and how services get their virtual IP. – ItFreak Jul 16 '19 at 07:02

Let's assume you created an Ubuntu VM on your local machine. Its IP address is 192.168.1.104.

You log in to the VM and install Kubernetes. Then you create a pod running an nginx image.

1- If you want to access this nginx pod inside your VM, you will create a ClusterIP service bound to that pod, for example:

$ kubectl expose deployment nginxapp --name=nginxclusterip --port=80 --target-port=8080

Then in your browser you can type the IP address of nginxclusterip with port 80, like:

http://10.152.183.2:80

2- If you want to access this nginx pod from your host machine, you will need to expose your deployment with NodePort. For example:

$ kubectl expose deployment nginxapp --name=nginxnodeport --port=80 --target-port=8080 --type=NodePort

Now from your host machine you can access nginx like:

http://192.168.1.104:31865/
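The 31865 above wasn't chosen by hand; it's the nodePort Kubernetes assigned. If you want to look it up instead of reading it off the dashboard, you can do:

$ kubectl get service nginxnodeport

and read the PORT(S) column, which shows the mapping (e.g. 80:31865/TCP).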

In my dashboard they appear as:

[screenshot: the nginxclusterip and nginxnodeport services in the Kubernetes dashboard]

Below is a diagram showing the basic relationship:

[diagram: how the NodePort and ClusterIP services relate to the VM and the pod]

– Teoman shipahi
  1. clusterIP: IP accessible inside the cluster (across nodes within the cluster).
nodeA: pod1 => clusterIP1, pod2 => clusterIP2
nodeB: pod3 => clusterIP3

pod3 can talk to pod1 via their clusterIP network.

  2. nodePort: to make pods accessible from outside the cluster via nodeIP:nodePort, it will create/keep the clusterIP above as its clusterIP network.
nodeA => nodeIPA : nodeportX
nodeB => nodeIPB : nodeportX

You might access the service on pod1 via either nodeIPA:nodeportX or nodeIPB:nodeportX. Either way will work, because kube-proxy (which is installed on each node) will receive your request and distribute it [redirect it (an iptables term)] across nodes using the clusterIP network.
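If you want to see kube-proxy's hand in this (assuming the default iptables proxy mode; these chain names are kube-proxy internals and could change between versions), you can peek at the NAT rules on any node:

# Rules kube-proxy maintains for service clusterIPs:
$ sudo iptables -t nat -L KUBE-SERVICES -n
# Rules for node ports:
$ sudo iptables -t nat -L KUBE-NODEPORTS -n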

  3. LoadBalancer

Basically just puts an LB in front, so that inbound traffic is distributed to nodeIPA:nodeportX and nodeIPB:nodeportX, then continues with process flow number 2 above.


And do not forget the "new" service type (from the k8s docs):

ExternalName: Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up.

Note: You need either kube-dns version 1.7 or CoreDNS version 0.0.8 or higher to use the ExternalName type.
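A minimal sketch of such a service (the service name is made up for illustration; the externalName value is the one from the docs quote):

apiVersion: v1
kind: Service
metadata:
  name: my-external-service    # hypothetical name
spec:
  type: ExternalName
  externalName: foo.bar.example.com

Pods that resolve my-external-service get back a CNAME for foo.bar.example.com; no proxying and no cluster IP are involved.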

– tal47

I thought I'd share the issue I was facing for a whole day, and its solution.

Brief: I am running 2 VMs as a 2-node cluster: 1 master node and 1 worker node. A deployment is running on the worker node. I wanted to curl from the master node so that I could get a response from my application running inside a pod on the worker node. For that, I deployed a service which exposed that set of pods inside the cluster.

Issue: After deploying the service, kubectl get service provided me with the ClusterIP of that service and a port (BTW, I used type NodePort, not ClusterIP, when writing the service.yaml). But when curling that IP address and port, it just hung and after some time timed out.

Solution: Then I looked at the hierarchy: first I need to contact the node on which the service's pods are located, on the port given by the NodePort (i.e. the one between 30000-32767). So first I ran kubectl get nodes -o wide to get the internal IP address of the required node (mine was 10.0.1.4), then kubectl get service -o wide to get the port (the one between 30000-32767), and curled it. So my curl command was curl http://10.0.1.4:30669 and I was able to get the output.
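In command form, that sequence was (IP and port are the ones from my setup above):

$ kubectl get nodes -o wide      # INTERNAL-IP column -> 10.0.1.4
$ kubectl get service -o wide    # PORT(S) column -> the 30000-32767 port, 30669 here
$ curl http://10.0.1.4:30669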

– ayush