
My setup is a bare-metal cluster running Kubernetes 1.17. I'm using Traefik 2 (2.3.2) as a reverse proxy, and to get failover for my machines I use kube-keepalived-vip [1].

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-keepalived-vip-config
  namespace: kube-system
data:
  172.111.222.33: kube-system/traefik2-ingress-controller

My Traefik service is therefore of the default type ClusterIP and references the virtual IP managed by kube-keepalived-vip as an external IP:

---
apiVersion: v1
kind: Service
metadata:
  name: traefik2-ingress-controller
  namespace: kube-system
spec:
  selector:
    app: traefik2-ingress-controller
  ports:
    - protocol: TCP
      name: web
      port: 80
    - protocol: TCP
      name: webs
      port: 443
  externalIPs:
    - 172.111.222.33

This works as it is. Now I want to restrict some of my applications so that they are only accessible from a specific subnet inside my network. Since my requests pass through kube-keepalived-vip and also kube-proxy, the client IP on the incoming request is no longer that of the actual client. But as far as I understood the documentation, the original client IP should be set in the X-Forwarded-For header. So my middleware looks like this:

internal-ip-whitelist:
  ipWhiteList:
    sourceRange:
      - 10.0.0.0/8 # my subnet
      - 60.120.180.240 # my public ip
    ipStrategy:
      depth: 2 # take the second entry of X-Forwarded-For, counted from the right
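
For completeness, since the cluster uses Traefik's Kubernetes provider, the same whitelist can also be expressed as a Middleware CRD and attached to an IngressRoute. This is only a minimal sketch, assuming the Traefik 2.3 CRDs (traefik.containo.us/v1alpha1); the application name, host and namespace are placeholders, not taken from my actual setup:

---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: internal-ip-whitelist
  namespace: kube-system
spec:
  ipWhiteList:
    sourceRange:
      - 10.0.0.0/8       # my subnet
      - 60.120.180.240   # my public ip
    ipStrategy:
      depth: 2           # second X-Forwarded-For entry, counted from the right
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: my-app                     # hypothetical application route
  namespace: kube-system
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`app.example.internal`)   # placeholder host
      kind: Rule
      middlewares:
        - name: internal-ip-whitelist
      services:
        - name: my-app             # hypothetical backend service
          port: 80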

Now every request to the ingresses this middleware is attached to is rejected. I checked the Traefik logs and saw that the requests contain some X-Forwarded-* headers, but there is no X-Forwarded-For :(
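
(For reference, in case someone wants to reproduce this check: Traefik drops header fields from the access log by default, so to see the X-Forwarded-* headers there the static configuration needs roughly the following. This is a generic snippet, not my exact config:)

accessLog:
  fields:
    headers:
      defaultMode: keep   # log all request headers, including X-Forwarded-*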

Does anyone have experience with this and can point me to my error? Is there perhaps something wrong with my Kubernetes setup? Or is something missing in my kube-keepalived-vip config?

Thanks in advance!

[1] https://github.com/aledbf/kube-keepalived-vip


1 Answer


For everyone stumbling upon this, I managed to fix my problem in the meantime.

The main problem is kube-proxy. By default, all service traffic is routed through it, and depending on your CNI provider (I use Flannel), the source IP of the calling client is lost in the process.

Kubernetes provides a way around that by setting .spec.externalTrafficPolicy to Local (https://kubernetes.io/docs/concepts/services-networking/service/#aws-nlb-support), which preserves the client source IP. But this is not supported for ClusterIP services.
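
For illustration (a minimal sketch, not my exact manifest), the field only exists on NodePort and LoadBalancer services, where it looks like this:

spec:
  type: LoadBalancer              # or NodePort; ClusterIP does not support this field
  externalTrafficPolicy: Local    # keep the original client source IP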

So I got around that by using MetalLB (https://metallb.universe.tf/), which provides load-balancer support for bare-metal clusters. After setting it up with the virtual IP that had previously been assigned to the keepalived container, I changed the Traefik service to type LoadBalancer and requested that one IP from MetalLB.
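
Roughly, the result looks like this. This is a sketch rather than my exact manifests, assuming a MetalLB version of that time that is configured through a ConfigMap in the metallb-system namespace:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
  namespace: metallb-system
data:
  config: |
    address-pools:
      - name: default
        protocol: layer2
        addresses:
          - 172.111.222.33/32   # the virtual IP previously handled by keepalived
---
apiVersion: v1
kind: Service
metadata:
  name: traefik2-ingress-controller
  namespace: kube-system
spec:
  type: LoadBalancer
  loadBalancerIP: 172.111.222.33   # request the single IP from the MetalLB pool
  externalTrafficPolicy: Local     # so the client IP reaches Traefik
  selector:
    app: traefik2-ingress-controller
  ports:
    - protocol: TCP
      name: web
      port: 80
    - protocol: TCP
      name: webs
      port: 443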
