
I'm trying to set up Identity-Aware Proxy (IAP) for my backend services, part of which resides in GCP and the rest on-prem, according to the instructions given in the following links: Enabling IAP for on-premises apps and Overview of IAP for on-premises apps.

After following the guide I ended up in a partial state: the service running in GCP and serving at an HTTPS endpoint is perfectly accessible via IAP. However, the app running on-prem is not reachable through the pods* or the external load balancer.

Current Architecture followed:

[architecture diagram]

Steps Followed

On GCP project

  • Created a VPC network with one subnet in a single region (asia-southeast1 in my case)

  • Used IAP connector https://github.com/GoogleCloudPlatform/iap-connector

  • Configured the mapping for 2 domains:

    For app in GCP

  • source: gcp.domain.com

  • destination: app1.domain.com (serving at https endpoint)

    For the app on-prem (simulated in another GCP project)

  • source: onprem.domain.com

  • destination: app2.domain.com (serving at an HTTPS endpoint but not exposed to the internet)

  • Configured a VPN tunnel between both projects so the networks get peered

  • Enabled IAP for the load balancer created by the deployment.

  • Added the corresponding accounts with the IAP-secured Web App User role to allow access to the services.
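
For reference, the connector mapping ended up looking roughly like this. This is a sketch in the style of the iap-connector sample values file; the exact field names may differ between versions of the connector:

```yaml
# Sketch of the IAP connector routing mapping, in the style of the
# iap-connector sample config; exact field names may vary by version.
mapping:
  - name: gcp-app
    source: gcp.domain.com         # hostname IAP protects
    destination: app1.domain.com   # HTTPS backend running in GCP
  - name: onprem-app
    source: onprem.domain.com
    destination: app2.domain.com   # internal HTTPS endpoint, reachable only over the VPN
```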

On-prem

  • Created a VPC network with one subnet in the same region (asia-southeast1)

  • Created a VM in that VPC and region

  • Assigned the VM to an instance group

  • Created an internal HTTPS load balancer with the instance group as its backend

  • Secured the load balancer with SSL

  • Set up a VPN tunnel to the first project

What I have tried

  • Logged in to pods and pinged other pods. All pods were reachable.
  • Logged in to nodes and tested the remote VM on ports 80 and 443. Both are reachable.
  • Tried to reach the remote VM from inside the pods. Not reachable.
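
The reachability checks above boil down to a TCP connect test. This is a small Python sketch of the kind of check I ran (the helper name and the host/port values are illustrative, not from my actual scripts):

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        # create_connection resolves the host and completes the TCP handshake.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers timeouts, refused connections, and unreachable networks.
        return False

# In my setup: from a node, ports 80/443 on the remote VM connect fine;
# from inside a pod, the same check fails.
```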

Expected Behaviour:

  • The user requests app1.domain.com through the load balancer; IAP authenticates and authorizes the user with OAuth and grants access to the web app.
  • The user requests app2.domain.com through the load balancer; IAP authenticates and authorizes the user with OAuth and grants access to the web app running on-prem.

Actual Behaviour

  • A request to app1.domain.com prompts the OAuth screen; after authenticating, the website is returned to the user.
  • A request to app2.domain.com prompts the OAuth screen; after authenticating, the browser returns 503 - "No healthy upstream".

Note:

  • I am using a separate GCP project to simulate on-premises.
  • Both projects are peered via a VPN tunnel.
  • Both peered projects have subnets in the same region.
  • I used an internal HTTPS load balancer in my on-prem project to make my VM visible to my host project, so that the external load balancer can route requests to the VM's HTTPS endpoint.

* I suspect that if the pods could reach the remote VM, the problem might well be resolved. It's just a wild guess.
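
Following up on that guess: one thing I still want to rule out on the GKE side is pod-IP masquerading. If pod egress toward the VPN-peered range leaves the node with the pod IP (no SNAT to the node IP) and the remote side has no route back for the pod CIDR, then nodes would reach the VM while pods time out, which matches what I see. On GKE this is governed by the ip-masq-agent ConfigMap; a sketch with illustrative CIDRs (not my actual ranges):

```yaml
# kube-system ip-masq-agent config sketch; the CIDRs below are illustrative.
# Destinations NOT listed in nonMasqueradeCIDRs get SNATed to the node IP,
# so removing the VPN-peered range from this list makes pod traffic leave
# with a node IP that the remote network already knows how to route back.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    nonMasqueradeCIDRs:
      - 10.4.0.0/14   # cluster pod range (illustrative)
    masqLinkLocal: false
    resyncInterval: 60s
```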

Thank you so much, guys. I'm looking forward to your responses.

Devopsception
  • The latest point is interesting `pinged remote VM from inside the pods. Not reachable`. Do you see network traffic going into your on prem network? My frequent mistake: I forget to define the route back in my on prem router. – guillaume blaquiere Aug 30 '20 at 18:21
  • Hi @guillaumeblaquiere thank you for replying. The traffic is flowing through node from my gcp project to on-prem project. There's a bitnami lampstack running on VM which i'm able to curl on both ports. – Devopsception Aug 31 '20 at 04:17
  • Is your test VM deployed in the same VPC and subnet as the GKE cluster? GKE doesn't add additional routing for initiated flow. If the traffic works through the VPN for the same originator (VPC/Subnet) it mustn't be a problem for the pods! – guillaume blaquiere Aug 31 '20 at 08:10
  • As I have used the VPN between two projects, google is expected to create a VPC peering on our behalf as a matter of fact that we know the VM is reachable through Node via internal IP, that means they are on the same network. Exactly, it should not be the problem for the pods do you think it could be DNS queries not being forwarded to the pods? – Devopsception Aug 31 '20 at 10:56
  • Do you see traffic on pods of your app2 domain? You should see this on the request header. Or activate the network logs to track this. Network, it's not funny to debug! – guillaume blaquiere Aug 31 '20 at 11:38
  • My bad. I think the picture is not exactly resembling what i have mentioned above. So, My app2 is running in a VM which belongs to an instance group. I have used the internal https load balancer which accepts traffic from Tunnel and sends to the instance group. My VM is receiving traffic i can track on logs "only" if i'm performing http request from the node of the cluster which is in my first project. The moment I ssh-ed inside pod that's end of the line nothing i'm able to do. – Devopsception Aug 31 '20 at 16:39
  • Hello. Could you include the correct and complete scheme of your use case? What is the configuration of your VPN between projects. Please write the exact steps you've taken to get into that state. It will allow easier troubleshooting and could shed some light on potential misconfiguration. – Dawid Kruk Sep 01 '20 at 11:56
  • @DawidKruk Hi thanks for replying. I have updated the description and schema can you please refer to it and suggest something. Thank you. – Devopsception Sep 01 '20 at 14:31
  • I'm glad that you edited your question but to be able to fully replicate your scenario and help you troubleshoot you will need to post configs that you used to create your setup. – Dawid Kruk Sep 01 '20 at 15:18

0 Answers