
I was reading k8s docs at https://kubernetes.io/docs/concepts/services-networking/service/.

NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.

So, I tried checking it practically. In this LoadBalancer Service, 31724 is the NodePort and 80 is the Service port. According to the docs, the NodePort should be open to the outside world and the Service port should not. But with nmap, I found the reverse to be true.

NAME      TYPE           CLUSTER-IP      EXTERNAL-IP               PORT(S)                  AGE
gen       LoadBalancer   10.200.32.132   10.44.9.162,10.44.9.163   80:31724/TCP,443:30039/TCP   20d
$ nmap -p 80 10.44.9.162

Starting Nmap 6.40 ( http://nmap.org ) at 2021-04-08 12:33 UTC
mass_dns: warning: Unable to determine any DNS servers. Reverse DNS is disabled. Try using --system-dns or specify valid servers with --dns-servers
Nmap scan report for 10.44.9.162
Host is up (0.00061s latency).
PORT   STATE SERVICE
80/tcp open  http

Nmap done: 1 IP address (1 host up) scanned in 0.02 seconds

$ nmap -p 31724 10.44.9.162

Starting Nmap 6.40 ( http://nmap.org ) at 2021-04-08 12:33 UTC
mass_dns: warning: Unable to determine any DNS servers. Reverse DNS is disabled. Try using --system-dns or specify valid servers with --dns-servers
Nmap scan report for 10.44.9.162
Host is up (0.00044s latency).
PORT      STATE  SERVICE
31724/tcp closed unknown

Nmap done: 1 IP address (1 host up) scanned in 0.03 seconds

I am surely missing something. Please help me understand this. Thanks!


Follow-up:
I know a follow-up should be a different question, but this seemed like the right place.
I created a NodePort Service and repeated the same test, and this time it behaved exactly as the docs describe.

object-controller-np      NodePort       10.200.32.240   <none>         7203:31206/TCP                                             5s

NodeIP (from eth0 on the node):

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
inet 10.46.104.20 ...
$ nmap -p 7203 10.46.104.20

Starting Nmap 6.40 ( http://nmap.org ) at 2021-04-09 07:01 UTC
mass_dns: warning: Unable to determine any DNS servers. Reverse DNS is disabled. Try using --system-dns or specify valid servers with --dns-servers
Nmap scan report for 10.46.104.20
Host is up (0.00052s latency).
PORT     STATE  SERVICE
7203/tcp closed unknown

Nmap done: 1 IP address (1 host up) scanned in 0.03 seconds

$ nmap -p 31206 10.46.104.20

Starting Nmap 6.40 ( http://nmap.org ) at 2021-04-09 07:01 UTC
mass_dns: warning: Unable to determine any DNS servers. Reverse DNS is disabled. Try using --system-dns or specify valid servers with --dns-servers
Nmap scan report for 10.46.104.20
Host is up (0.00050s latency).
PORT      STATE SERVICE
31206/tcp open  unknown

Earlier I tried with LoadBalancer, as I thought it was kind of a superset of NodePort.
Question: Why is the NodePort's behaviour exactly opposite between a NodePort-type Service and a LoadBalancer-type Service?
From a popular answer:

NodePort
If you access this service on a nodePort from the node's external IP, it will route the request to spec.clusterIp:spec.ports[*].port, which will in turn route it to your spec.ports[*].targetPort, if set. This service can also be accessed in the same way as ClusterIP. ...
LoadBalancer
You can access this service from your load balancer's IP address, which routes your request to a nodePort, which in turn routes the request to the clusterIP port.

So, for a NodePort Service, the request flows like this:

NodeIP:NodePort -> ClusterIP:Port -> Pod:TargetPort

Above, Port is the port specified as port in the YAML; it exposes the Service on that port within the cluster. TargetPort is the port specified as targetPort in the YAML; it is the port the Service forwards requests to, i.e. the one your pod listens on.
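
For illustration, here is a minimal sketch of a NodePort Service manifest showing where the three ports live. The port and nodePort values are taken from my follow-up service above; the name, selector, and targetPort are made up:

apiVersion: v1
kind: Service
metadata:
  name: object-controller-np   # hypothetical manifest for the follow-up service
spec:
  type: NodePort
  selector:
    app: object-controller     # assumed pod label
  ports:
  - port: 7203                 # Port: the Service listens here in-cluster (ClusterIP:7203)
    targetPort: 8080           # TargetPort: the port the pod listens on (assumed)
    nodePort: 31206            # NodePort: opened on every node's IP (NodeIP:31206)
    protocol: TCP

If nodePort is omitted, Kubernetes allocates one automatically from the default 30000-32767 range.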

For a LoadBalancer, the behaviour I expected was:

ExternalLBIP:NodePort --(load-balanced across nodes)--> NodeIP:NodePort -> ClusterIP:Port -> Pod:TargetPort

What I see is:

ExternalLBIP:NodePort -> (doesn't work)

And instead, what works is:

ExternalLBIP:Port --(load-balanced across nodes)--> NodeIP:Port -> ClusterIP:Port -> Pod:TargetPort

1 Answer


What you're seeing is correct, because the IP you're hitting with nmap is the IP of the load balancer created by the Service of type LoadBalancer, which is meant to be open on port 80 (and/or 443). The NodePort, on the other hand, is opened on the worker nodes themselves and is reached via a node's IP.

The service you've deployed here is of type LoadBalancer and not NodePort.

For further reading, check this out.

Answer to the follow-up question:

Both flows you wrote down are incorrect: the one you expected and the one you say actually works.

ExternalLBIP:NodePort: if you look at the post you shared, <NodePort> is reachable via <NodeIP>, not via the external IP of the LB.

ExternalLBIP:Port --(load-balanced across nodes)--> NodeIP:Port: the LB routes requests to a NodePort, so the second hop is NodeIP:NodePort, not NodeIP:Port.

So, to answer your question: the behaviour of the NodePort in a NodePort-type Service and a LoadBalancer-type Service is not opposite. You just need to remember that a NodePort is reachable only via a NodeIP.

In service type NodePort:

NodeIP:NodePort -> ClusterIP:Port -> Pod:TargetPort

In service type LoadBalancer:

ExternalIPofLB:Port -> NodeIP:NodePort -> ClusterIP:Port -> Pod:TargetPort
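
As a sketch, here is the same mapping in a Service of type LoadBalancer, using the port numbers from your gen service (the name, selector, and targetPort are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: gen                 # hypothetical manifest for the question's service
spec:
  type: LoadBalancer
  selector:
    app: gen                # assumed pod label
  ports:
  - port: 80                # ExternalIPofLB:80 (and ClusterIP:80)
    targetPort: 8080        # Pod:8080 (assumed pod port)
    nodePort: 31724         # NodeIP:31724, the hop the LB forwards to
    protocol: TCP

This is why the LB IP answered on 80 and not on 31724: 80 is the port the load balancer listens on, while 31724 is opened only on the nodes behind it.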

E.g., from a running Service of type LoadBalancer:

kubectl get svc -n <namespace> <service-name>
NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP                                                               PORT(S)          AGE
<service-name>-**********   LoadBalancer   172.20.96.130   a4b63c833c2***************d4-1996967498.<region>.elb.amazonaws.com   8443:31010/TCP   8m49s

As you can see in the snippet below, the request is forwarded from the LB port (8443) to the NodePort (31010).

[Screenshot: listener configuration of the ELB]
