
I am trying to figure out how I can connect a TCP load balancer with an HTTP/HTTPS load balancer in GCP.

I have installed Kong on a GKE cluster, and it creates a TCP load balancer.

Now if I have multiple GKE clusters with Kong, they will each have their own TCP load balancer.

From a user perspective, I would then need to do DNS load balancing, which I don't think is always fruitful.

So I'm trying to figure out whether I can use Cloud CDN, NEGs, and/or an HTTP/HTTPS load balancer to act as a front end for Kong's TCP load balancer.

Is it possible, and are there any alternatives? Thanks!

Nik

1 Answer


There are several options you can follow depending on your needs and what you are trying to do, but if you must use Kong inside each GKE cluster and handle your SSL certs yourself, then:

TCP Proxy LB

(Optional) You can deploy a GKE NodePort service instead of a LoadBalancer service for your Kong deployment. Since you are trying to unify all your Kong services, having an individual load balancer exposed to the public internet per cluster can work, but you will be paying for every extra external IP address you use.
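As a sketch, a Kong proxy Service of type `NodePort` could look like the following (the service name, namespace, labels, and pinned `nodePort` values are illustrative assumptions, not Kong's actual manifest):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kong-proxy        # illustrative name
  namespace: kong
spec:
  type: NodePort           # instead of LoadBalancer: no external IP per cluster
  selector:
    app: kong              # must match your Kong pod labels
  ports:
    - name: kong-proxy
      port: 80
      targetPort: 8000     # Kong's default proxy port
      nodePort: 30080      # pinned so the external LB config stays stable
    - name: kong-proxy-tls
      port: 443
      targetPort: 8443
      nodePort: 30443
```

Pinning the `nodePort` (rather than letting Kubernetes pick one from the 30000-32767 range) keeps the externally created load balancer and health check configuration stable across redeployments.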

You can manually deploy a TCP Proxy Load Balancer that uses the same GKE instance groups and port as your NodePort / current load balancer (behind the scenes). You would need to set up a backend for each GKE cluster node pool you are currently using, across all the GKE clusters where you deploy your Kong service.
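A minimal sketch of that manual setup with `gcloud`, assuming a pinned `nodePort` of 30080 and an instance group per cluster node pool (all resource names, zones, and ports here are placeholders you would replace with your own):

```shell
# Tell the instance group which port the named service listens on (the nodePort)
gcloud compute instance-groups set-named-ports gke-cluster1-node-pool \
  --named-ports=kong:30080 --zone=us-central1-a

# TCP health check against the nodePort
gcloud compute health-checks create tcp kong-tcp-hc --port=30080

# Global backend service for the TCP proxy LB
gcloud compute backend-services create kong-backend \
  --global --protocol=TCP --port-name=kong --health-checks=kong-tcp-hc

# Repeat add-backend for each GKE cluster's instance group
gcloud compute backend-services add-backend kong-backend \
  --global --instance-group=gke-cluster1-node-pool \
  --instance-group-zone=us-central1-a

# TCP proxy and a single global forwarding rule (one external IP in total)
gcloud compute target-tcp-proxies create kong-tcp-proxy \
  --backend-service=kong-backend
gcloud compute forwarding-rules create kong-fr \
  --global --target-tcp-proxy=kong-tcp-proxy --ports=443
```

The key point is that one global forwarding rule fronts backends from every cluster, so you only pay for a single external IP.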

HTTP(S) LB

You can use NodePorts or, the same as with the TCP Proxy LB, take advantage of your current LoadBalancer setup to use as backends, with the addition of NEGs in case you want to use those.

You would need to deploy and maintain this manually, but you can also configure your SSL certificates here (if you plan to provide HTTPS connections), since client TLS termination happens at this load balancer.

The advantage here is that you can leave SSL certificate renewal to GCP (once configured), and you can also use Cloud CDN to reduce latency and costs; as of today, this feature can only be used with an HTTP(S) LB.
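A sketch of the HTTP(S) LB variant with a Google-managed certificate and Cloud CDN enabled (again assuming a backend health check named `kong-http-hc` already exists and that `kong.example.com` stands in for your real domain; all names are illustrative):

```shell
# Google-managed certificate: GCP handles renewal once DNS points at the LB
gcloud compute ssl-certificates create kong-cert \
  --domains=kong.example.com --global

# HTTP backend service with Cloud CDN enabled
gcloud compute backend-services create kong-https-backend \
  --global --protocol=HTTP --port-name=kong \
  --health-checks=kong-http-hc --enable-cdn

# URL map, HTTPS proxy (TLS terminates here), and global forwarding rule
gcloud compute url-maps create kong-url-map \
  --default-service=kong-https-backend
gcloud compute target-https-proxies create kong-https-proxy \
  --url-map=kong-url-map --ssl-certificates=kong-cert
gcloud compute forwarding-rules create kong-https-fr \
  --global --target-https-proxy=kong-https-proxy --ports=443
```

As with the TCP proxy setup, you would `add-backend` once per cluster's instance group (or NEG, if you go that route).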

Frank
  • Thanks for the info, Frank. NodePort is definitely the best option and opens up multiple possibilities. Are you aware of any setting in Kong that will create the service of type NodePort rather than LoadBalancer? Otherwise I have to manipulate the Kong k8s manifest and will have to maintain it in the future as well. – Nik Dec 31 '20 at 18:58
  • I think I found it: it's `proxy.type`, and I can set it to NodePort. That should do it; I need to give this a try. – Nik Dec 31 '20 at 19:04
  • BTW, can you please elaborate on what you mean by "take advantage (same thing as TCP Proxy LB) from your current Load Balancer setup to use as backends"? – Nik Dec 31 '20 at 19:48
  • I meant that when you use the GKE LoadBalancer service type, you get a similar setup to `NodePort`, with the exception that you have an additional external IP address. So you can use a TCP Proxy LB or an HTTP(S) LB when you have a GKE LoadBalancer service type as a backend. I would use `NodePort` for this, but you have the final say on your architecture. – Frank Dec 31 '20 at 20:14
  • I think it makes sense to have Kong as a service of type NodePort act as a backend to an HTTPS LB. – Nik Dec 31 '20 at 21:55
  • If this answer was helpful, please mark it as an accepted answer in order for other users to benefit from it – Frank Jan 04 '21 at 17:44
  • Absolutely, it clarified a lot of things. – Nik Jan 05 '21 at 19:16
  • The HTTP LB needs a health check, and I don't see any health check endpoint being offered by the Kong service. Any idea? – Nik Jan 06 '21 at 20:44
  • If you can `curl` a specific node at that specific port (where your Kong service is running) and get an HTTP 200 response, then I would recommend [creating](https://cloud.google.com/load-balancing/docs/health-checks) an HTTP health check against said `nodePort`. If you don't get a 200 response, then you can use a TCP health check. But since you are creating an LB from outside your GKE cluster, you must also take care of creating / providing your own configured health check. – Frank Jan 06 '21 at 22:31
  • I have created the health check already, and it's unhealthy. What I meant was that I don't see the Kong service exposing any endpoint that could return 200. – Nik Jan 07 '21 at 07:30
  • Then in this scenario, I would use a [TCP health check](https://cloud.google.com/load-balancing/docs/health-checks#create-hc) and point it at the active `NodePort` for this specific Kong service. Take into consideration that by default a `NodePort` service is assigned a port in the [30000 - 32767](https://cloud.google.com/kubernetes-engine/docs/concepts/service#service_of_type_nodeport) range, and it's different each time unless manually specified, so you must use that port for your LB health check. – Frank Jan 07 '21 at 23:58
  • What I don't understand is why I need to account for the port. Looking at the example https://cloud.google.com/kubernetes-engine/docs/how-to/standalone-neg, the NEG should expose the port that the service is exposing. Or am I missing something? I have implemented it referencing the link, but the health check doesn't work. – Nik Jan 17 '21 at 22:44
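The health-check steps discussed in the comments above can be sketched as follows (assuming a Kong proxy service named `kong-proxy` in namespace `kong` with a port named `kong-proxy`; names are illustrative):

```shell
# Look up the nodePort Kubernetes assigned (30000-32767 unless pinned)
kubectl get svc kong-proxy -n kong \
  -o jsonpath='{.spec.ports[?(@.name=="kong-proxy")].nodePort}'

# Create a TCP health check against that nodePort (e.g. 30080)
gcloud compute health-checks create tcp kong-tcp-hc --port=30080

# Firewall rule so GCP health-check probe ranges can reach the nodes
gcloud compute firewall-rules create allow-lb-health-checks \
  --source-ranges=130.211.0.0/22,35.191.0.0/16 \
  --allow=tcp:30080
```

A health check showing unhealthy despite the service working is often the firewall rule missing: the probes originate from Google's documented health-check ranges, not from your own IPs, so those ranges must be allowed to reach the node port.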