
I'm trying to accomplish a VERY common task for an application:

Assign a certificate and secure it with TLS/HTTPS.

I've spent nearly a day scouring through the documentation and trying multiple tactics to get this working, but nothing has worked for me.

Initially, I set up nginx-ingress on EKS using Helm by following the docs here: https://github.com/nginxinc/kubernetes-ingress. I tried to get the sample app (cafe) working using the following config:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
spec:
  tls:
  - hosts:
    - cafe.example.com
    secretName: cafe-secret
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /tea
        backend:
          serviceName: tea-svc
          servicePort: 80
      - path: /coffee
        backend:
          serviceName: coffee-svc
          servicePort: 80

The ingress and all supporting services/deployments worked fine, but one major thing is missing: the ingress doesn't have an associated address/ELB:

NAME           HOSTS                 ADDRESS   PORTS     AGE
cafe-ingress   cafe.example.com                80, 443   12h

Service LoadBalancers create ELB resources, e.g.:

testnodeapp    LoadBalancer   172.20.4.161     a64b46f3588fe...   80:32107/TCP     13h

However, the Ingress is not creating an address. How do I get an Ingress controller exposed externally on EKS to handle TLS/HTTPS?

– Ken J
2 Answers


I've replicated every step necessary to get up and running on EKS with a secure ingress. I hope this helps anyone else who wants to get their application onto EKS quickly and securely.

To get up and running on EKS:

  1. Deploy EKS using the CloudFormation template here. Keep in mind that I've restricted access with CidrIp: 193.22.12.32/32; change this to suit your needs.

  2. Install the client tools. Follow the guide here.

  3. Configure the client. Follow the guide here.

  4. Enable the worker nodes. Follow the guide here (a sketch of the ConfigMap that step applies follows this list).
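For reference, enabling the worker nodes comes down to applying an aws-auth ConfigMap that maps your worker nodes' instance role into the cluster. A minimal sketch; the role ARN below is a placeholder you must replace with your own:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # Replace with the instance role ARN from your worker node stack
    - rolearn: arn:aws:iam::111122223333:role/eks-worker-node-role
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes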

You can verify that the cluster is up and running and you are pointing to it by running:

kubectl get svc

Now launch a test application behind the nginx ingress.

NOTE: Everything is placed under the ingress-nginx namespace. Ideally this would be templated to build under different namespaces, but for the purposes of this example it works.

Deploy nginx-ingress:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml

Fetch rbac.yml from here. Run:

kubectl apply -f rbac.yml
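The contents of the linked rbac.yml aren't reproduced here (and, as a comment below notes, mandatory.yaml may already include the RBAC objects, which would make this step redundant). A typical binding for the controller's ServiceAccount looks roughly like this; the object names are assumptions:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
- kind: ServiceAccount
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx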

Have a certificate and key ready for testing. Create the necessary secret like so:

kubectl create secret tls cafe-secret --key mycert.key --cert mycert.crt -n ingress-nginx
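If you don't have a certificate and key handy, a self-signed pair is fine for testing; for example, with openssl (the CN below is an example hostname):

openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout mycert.key -out mycert.crt \
  -subj "/CN=cafe.example.com"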

Copy coffee.yaml from here. Copy coffee-ingress.yaml from here. Update the domain you want to run this under, then apply them:

kubectl apply -f coffee.yaml
kubectl apply -f coffee-ingress.yaml
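The linked manifests aren't reproduced here, but the ingress is along the lines of the one in the question, placed in the ingress-nginx namespace and pointed at the TLS secret created above. The hostname and service name below are assumptions; substitute your own:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: coffee-ingress
  namespace: ingress-nginx
spec:
  tls:
  - hosts:
    - coffee.example.com
    secretName: cafe-secret
  rules:
  - host: coffee.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: coffee-svc
          servicePort: 80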

Update the CNAME for your domain to point to the ADDRESS shown by:

kubectl get ing -n ingress-nginx -o wide
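Once the CNAME is in place, you can confirm it resolves from your machine (substitute your own domain):

dig +short cafe.example.com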

Refresh your DNS cache and test the domain. You should get a secure page with request stats. I've replicated this multiple times, so if it fails for you, check the steps, config, and certificate. Also, check the logs on the nginx-ingress-controller-* pod:

kubectl logs pod/nginx-ingress-controller-*********** -n ingress-nginx

That should give you some indication of what's wrong.
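The asterisks above stand in for the pod's generated suffix; to find the exact pod name, list the pods in the namespace:

kubectl get pods -n ingress-nginx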

– Ken J
  • Looks like https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml has the ClusterRole and Binding baked into it. Not sure that your rbac.yml is needed. @ken-j can you confirm? – sgraham Sep 15 '18 at 07:06
  • When I do this, the ADDRESS field is blank. – Dan Tenenbaum Jan 04 '20 at 21:18

To make an Ingress resource work, the cluster must have an Ingress controller configured.

This is unlike other types of controllers, which typically run as part of the kube-controller-manager binary and are started automatically as part of cluster creation.

For EKS with Helm, you may try:

helm registry install quay.io/coreos/alb-ingress-controller-helm
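Note that the helm registry subcommand comes from the app-registry (appr) plugin, which isn't bundled with Helm. If the command is missing, you may need to install the plugin first, along the lines of:

helm plugin install https://github.com/app-registry/appr-helm-plugin.git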

Next, configure the Ingress resource:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: 'true'
spec:
  rules:
  - host: YOUR_DOMAIN
    http:
      paths:
      - path: /
        backend:
          serviceName: ingress-example-test
          servicePort: 80
  tls:
  - secretName: custom-tls-cert
    hosts:
    - YOUR_DOMAIN

Apply the config:

kubectl create -f ingress.yaml

Next, create the secret with TLS certificates:

kubectl create secret tls custom-tls-cert --key /path/to/tls.key --cert /path/to/tls.crt

and reference it in the Ingress definition:

tls:
  - secretName: custom-tls-cert
    hosts:
    - YOUR_DOMAIN

The following example configuration shows how to configure the Ingress controller:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  labels:
    k8s-app: nginx-ingress-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: nginx-ingress-controller
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-controller
    spec:
      # hostNetwork makes it possible to use ipv6 and to preserve the source IP correctly regardless of docker configuration
      # however, it is not a hard dependency of the nginx-ingress-controller itself and it may cause issues if port 10254 already is taken on the host
      # that said, since hostPort is broken on CNI (https://github.com/kubernetes/kubernetes/issues/31307) we have to use hostNetwork where CNI is used
      # like with kubeadm
      # hostNetwork: true
      terminationGracePeriodSeconds: 60
      containers:
      - image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.17.1
        name: nginx-ingress-controller
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 1
        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443
        env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --publish-service=$(POD_NAMESPACE)/nginx-ingress-lb
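Note that the --publish-service flag above references a Service named nginx-ingress-lb that this answer does not define (the comments below point this out). A minimal sketch of such a Service, assuming the kube-system namespace, might be:

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-lb
  namespace: kube-system
  labels:
    k8s-app: nginx-ingress-controller
spec:
  type: LoadBalancer
  selector:
    k8s-app: nginx-ingress-controller
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443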

Next, apply the above configuration; then you can check the service for an exposed external IP:

kubectl get service nginx-controller -n kube-system

An external IP is an address that terminates at one of the Kubernetes nodes, as set up by externally configured routing mechanisms. When it is configured within a service definition, traffic that reaches a node is redirected to a service endpoint.

The Kubernetes documentation provides more examples.

– d0bry
  • Your answer is unclear. nginx-controller is not defined in yaml. How is nginx-controller created? – Ken J Jul 17 '18 at 14:00
  • You don't have a namespace defined for kube-system either. alb-ingress-controller-helm does not create this service. – Ken J Jul 17 '18 at 20:38