[root@kubemaster ~]# kubectl get pods -o wide
NAME                             READY   STATUS    RESTARTS   AGE   IP             NODE          NOMINATED NODE   READINESS GATES
pod1deployment-c8b9c74cb-hkxmq   1/1     Running   0          12s   192.168.90.1   kubeworker1   <none>           <none>

[root@kubemaster ~]# kubectl logs pod1deployment-c8b9c74cb-hkxmq
2020/05/16 23:29:56 Server listening on port 8080

[root@kubemaster ~]# kubectl get service -o wide
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE   SELECTOR
kubernetes    ClusterIP   10.96.0.1        <none>        443/TCP   13m   <none>
pod1service   ClusterIP   10.101.174.159   <none>        80/TCP    16s   creator=sai

Curl on master node:

[root@kubemaster ~]# curl -m 2 -v -s http://10.101.174.159:80
* About to connect() to 10.101.174.159 port 80 (#0)
*   Trying 10.101.174.159...
* Connection timed out after 2001 milliseconds
* Closing connection 0

Curl on worker node 1 is successful for the cluster IP (this is the node where the pod is running):

[root@kubemaster ~]# ssh kubeworker1 curl -m 2 -v -s http://10.101.174.159:80
Hello, world!
Version: 1.0.0
Hostname: pod1deployment-c8b9c74cb-hkxmq

Curl fails on the other worker node as well:

[root@kubemaster ~]# ssh kubeworker2 curl -m 2 -v -s http://10.101.174.159:80
* About to connect() to 10.101.174.159 port 80 (#0)
*   Trying 10.101.174.159...
* Connection timed out after 2001 milliseconds
* Closing connection 0
hariK
confused genius
  • Could you please try `curl -m 2 -v -s http://pod1service.default.svc` from the master node? – hariK May 16 '20 at 19:16
  • If you're trying to reach a service from outside the cluster proper (even from the console of one of your nodes), would a NodePort service fit your needs better? – David Maze May 16 '20 at 19:32
  • `[root@kubemaster ~]# curl -m 2 -v http://pod1service.default.svc` gives: `* Could not resolve host: pod1service.default.svc; Unknown error * Closing connection 0` – confused genius May 17 '20 at 05:26
  • David, yes that's true, but the actual problem in our setup is "not able to reach the cluster IP from within the cluster". – confused genius May 17 '20 at 05:32
  • David, please clarify this for me: should serviceip:port be accessible only from inside the pods, or also from the k8s nodes? – confused genius May 17 '20 at 06:58
  • This should be helpful for you [here](https://stackoverflow.com/questions/41509439/whats-the-difference-between-clusterip-nodeport-and-loadbalancer-service-types). – Crou May 18 '20 at 14:07

1 Answer


First of all, you should always use the Service DNS name instead of cluster/dynamic IPs to access a deployed application. The Service DNS name is `<service-name>.<service-namespace>.svc.cluster.local`, where `cluster.local` is the default cluster domain unless it has been changed.

Now, coming to the service accessibility: it may be a DNS issue. Check the kube-dns pod logs in the kube-system namespace, and also try to curl the service from a standalone pod to see whether that works.
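A sketch of the DNS log check mentioned above; it assumes the DNS pods carry the default `k8s-app=kube-dns` label, which both kube-dns and CoreDNS deployments use out of the box:

```shell
# List the DNS pods (CoreDNS and kube-dns both carry this label by default)
kubectl get pods -n kube-system -l k8s-app=kube-dns

# Tail their logs and look for lookup errors or crash loops
kubectl logs -n kube-system -l k8s-app=kube-dns --tail=50
```

If the DNS pods are not Running, or the logs show resolution errors, that would also explain the `Could not resolve host: pod1service.default.svc` failure from the master node.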

```shell
kubectl run --generator=run-pod/v1 bastion --image=busybox

# busybox images ship with sh (not bash) and wget (not curl)
kubectl exec -it bastion -- sh

wget -qO- http://pod1service.default.svc.cluster.local
```

If that doesn't work either, the next questions would be: where is the cluster running, and how was it created?
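Since the cluster IP only works on the node where the pod runs, it is also worth verifying that kube-proxy is healthy on every node and has actually programmed the service rules there. A sketch, assuming a kubeadm-style cluster where kube-proxy runs as a DaemonSet with the `k8s-app=kube-proxy` label and uses iptables mode:

```shell
# kube-proxy runs as a DaemonSet; one Running replica is expected per node
kubectl get pods -n kube-system -l k8s-app=kube-proxy -o wide

# On a node where curl times out, confirm iptables has rules for the service IP
ssh kubeworker2 "iptables-save | grep 10.101.174.159"
```

If the grep on a failing node returns nothing while the working node has matching `KUBE-SERVICES` entries, the problem is kube-proxy (or the CNI/firewall setup) on that node rather than DNS.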

redzack