Exposing Kubernetes apps with ingress-gce in GKE

How to make apps running in Google Kubernetes Engine publicly accessible through the default ingress controller, with HTTPS enabled via managed certificates, and how to use frontendconfig and backendconfig objects to add useful features to the ingress controller.

Overview

When working with Google Kubernetes Engine (GKE), we don't necessarily have to deploy a custom ingress controller to expose our apps. There is already an ingress controller called ingress-gce running on the cluster master nodes.

As we don't have any control over a GKE cluster's master nodes, we can't see what is running on them and therefore won't see the ingress-gce pods.

When we create ingress objects without a specific configuration indicating which ingress controller they are intended for, ingress-gce will inspect them and apply the rules they define.

Be aware that each time you create an ingress object that is successfully processed by the ingress-gce, a Google Cloud HTTP(S) loadbalancer (external to the cluster) is also created.

HTTP(S) traffic arriving at that loadbalancer will then be automatically forwarded into the cluster according to the rules defined by the ingress object.

When an HTTP(S) request's Host header value (for instance www.hackerstack.org) matches an ingress rule processed by ingress-gce, that request is forwarded to the cluster internal service defined by that rule.

Since our apps' pods are reachable through those cluster internal services, they can then be reached from outside the cluster through the external HTTP(S) loadbalancer.
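
For illustration, once such an ingress exists (we create one in the next sections), the resulting loadbalancer can be observed with standard tooling; something along these lines, assuming the myapp namespace used later in this article:

# The ADDRESS column shows the public IP of the Google Cloud HTTP(S) loadbalancer
kubectl get ingress -n myapp

# The loadbalancer resources programmed by ingress-gce are also visible on the Google Cloud side
gcloud compute forwarding-rules list
gcloud compute url-maps list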

Creating the ingress object

Here is the manifest for creating an ingress object that is meant to be processed by the ingress-gce ingress controller in GKE:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  namespace: myapp
  annotations:
    # Use an already existing global static IP as the IP address of the HTTP/HTTPS
    # loadbalancer that will be created for this ingress object
    kubernetes.io/ingress.global-static-ip-name: "my-static-public-ip"
    # Use an already existing managed certificate for rules that use TLS
    networking.gke.io/managed-certificates: "my-managed-certificate"
    # Use the frontendconfig called 'redirect-http-to-https' to redirect http to https
    networking.gke.io/v1beta1.FrontendConfig: "redirect-http-to-https" 
spec:
  rules:
  - host: myapp.example.local
    http:
      paths:
      - backend:
          service:
            name: myapp
            port:
              number: 80
        path: /
        pathType: Prefix
  tls:
  - secretName: "my-managed-certificate"
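
The manifest above assumes the global static IP my-static-public-ip already exists; a minimal sketch of reserving it could look like this (the address name simply matches the annotation above):

# Reserve a global static external IP address for the HTTP(S) loadbalancer
gcloud compute addresses create my-static-public-ip --global

# Display the reserved address (to be used in the DNS record of the app domain)
gcloud compute addresses describe my-static-public-ip --global --format="value(address)"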
  • We will see how to create the managed certificate and frontendconfig used inside the ingress manifest next, and explain a few things about them
  • Regarding the global static IP address, a minimal reservation sketch is shown right after the manifest above; here is the documentation for creating it: gcp-static-external-ip-address-reservation
  • Here is a list of ingress related annotations we can use inside the ingress object manifest for additional features: gke-ingress-gce-ingress-object-annotations
  • For details about all available fields we can use inside the ingress object manifest, have a look at kubernetes-api-ref-ingress-v1
  • As you can see inside the ingress object manifest, we defined a rule for HTTP(S) requests with a Host header corresponding to myapp.example.local. The backend section defines the cluster internal service name and port to which the requests matching the rule will be forwarded.
  • In order for this to work properly, the backend service must expose a NodePort, which services of type NodePort or LoadBalancer do (we use LoadBalancer here). Next we show the manifest for creating that backend service.

Creating the backend service

  • The backend service(s) to which ingress-gce will route eligible HTTP(S) traffic must expose a NodePort when using GKE, so they are of type NodePort or LoadBalancer (a plain ClusterIP service, which other ingress controllers support, only works here with container-native load balancing through NEGs)
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: myapp
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  selector:
    app: myapp
  ports:
    - name: http
      port: 80           # port on which the service will be reachable
      targetPort: 8080   # backend pods port to which the service will forward requests
  type: LoadBalancer
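
The selector and targetPort above assume pods labeled app: myapp and listening on port 8080. A minimal, hypothetical deployment matching those assumptions might look like this (the image reference is purely illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp              # matches the service selector
    spec:
      containers:
      - name: myapp
        image: registry.example.com/myapp:latest   # illustrative image reference
        ports:
        - containerPort: 8080   # matches the service targetPort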

Creating the managed certificate

Here is the manifest we use to create the managed TLS certificate for the public domain name from which our Kubernetes apps will be reached:

apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: "my-managed-certificate"
  namespace: myapp
spec:
  domains:
    - myapp.example.local   # must match the host used in the ingress rules

Before creating the managed certificate object, we have to make sure the domain name for which we are generating the TLS certificate points to the public IP address of the cluster external HTTP(S) loadbalancer (the global static IP we mentioned previously).

Once this prerequisite is met, the TLS certificate will be generated and made available to the HTTP(S) loadbalancer. It is free and will be renewed automatically when necessary.
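
As a quick sanity check of both points, we can verify that the domain resolves to the reserved IP and then watch the provisioning status reported by the ManagedCertificate resource:

# Should return the global static IP reserved earlier
dig +short myapp.example.local

# The certificate status should eventually move from Provisioning to Active
kubectl describe managedcertificate my-managed-certificate -n myapp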

Frontendconfig and backendconfig

Frontendconfig and backendconfig are custom resources available only in GKE. They can be used to enable additional features when using ingress-gce (the default ingress controller in GKE).

Frontendconfig

When we enable frontendconfig features, they apply to all the backend services defined inside the ingress object, and therefore to all applications reachable through those backend services.

Here is an example manifest we use to create frontendconfig resources:

apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: redirect-http-to-https
  namespace: myapp
spec:
  redirectToHttps:
    enabled: true
    responseCodeName: FOUND   # redirect with a 302 (Found) status code

This frontendconfig, once referenced (we will see how next) inside an ingress object meant to be processed by the ingress-gce ingress controller, will enable HTTP to HTTPS redirection for all hosts specified in that ingress object's rules. The redirection rule is automatically created on the cluster external HTTP(S) loadbalancer associated with the ingress.

In addition to HTTP to HTTPS redirection, there are other ingress features that can be enabled through frontendconfig resources. Have a look at the FrontendConfig section in gke-ingress-features for a complete list.
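
As one example, a frontendconfig can also reference a Google Cloud SSL policy for the loadbalancer frontend; here is a sketch, assuming an already existing SSL policy with the hypothetical name my-ssl-policy:

apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: my-frontend-config
  namespace: myapp
spec:
  sslPolicy: my-ssl-policy   # name of an existing Google Cloud SSL policy
  redirectToHttps:
    enabled: true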

To use an existing frontendconfig resource to enable specific features, we specify the name of that frontendconfig inside the ingress object metadata.annotations section as follows:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  (...)
  annotations:
    (...)
    # Redirect HTTP to HTTPS
    networking.gke.io/v1beta1.FrontendConfig: "redirect-http-to-https" 
    (...)
spec:
  rules:
  - host: myapp.example.local
    (...)
  tls:
  - secretName: "my-managed-certificate"
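
Once the ingress is updated, the redirection can be checked with a plain HTTP request; given the frontendconfig above, the response should carry the configured 302 status and a Location header pointing to HTTPS:

# -I sends a HEAD request and prints only the response headers
curl -I http://myapp.example.local
# Expected: a 302 (Found) response with Location: https://myapp.example.local/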

Backendconfig

When we enable backendconfig features, they apply only to the specific backend services we choose, and therefore only to the applications reachable through those backend services.

Here is an example manifest we use to create backendconfig resources:

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  namespace: myapp
  name: my-cloud-armor-rules
spec:
  securityPolicy:
    name: "cloud-armor-default" # an already existing cloud armor security policy

This backendconfig, once used by one or more of the ingress resource backend services, will enable an existing Google Cloud Armor security policy on those backend services, for instance to restrict access to specific IP addresses or to block some well-known web application attacks.
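
The manifest above assumes the Cloud Armor security policy cloud-armor-default already exists. A minimal sketch of creating such a policy and restricting access to a single, purely illustrative IP range could be:

# Create the security policy referenced by the backendconfig
gcloud compute security-policies create cloud-armor-default \
    --description "Restrict access to the ingress backends"

# Allow a specific source IP range...
gcloud compute security-policies rules create 1000 \
    --security-policy cloud-armor-default \
    --src-ip-ranges "203.0.113.0/24" \
    --action allow

# ...and turn the default rule (lowest priority) into a deny
gcloud compute security-policies rules update 2147483647 \
    --security-policy cloud-armor-default \
    --action deny-403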

In addition to enabling Google Cloud Armor features for specific ingress backends, there are other ingress features that can be enabled through backendconfig resources. Have a look at the BackendConfig section in gke-ingress-features for a complete list.
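
As another illustration, here is a sketch of a backendconfig combining a backend timeout with a custom health check (names and values are illustrative):

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backend-tuning
  namespace: myapp
spec:
  timeoutSec: 60            # backend service timeout, in seconds
  healthCheck:
    type: HTTP
    requestPath: /healthz   # path the loadbalancer health check will probe
    port: 8080              # pod port to probe
    checkIntervalSec: 15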

To use an existing backendconfig resource to enable specific features for chosen ingress backend services, we specify the name of that backendconfig inside the chosen backend service's metadata.annotations section as follows:

apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/backend-config: '{"default": "my-cloud-armor-rules"}'
  (...)
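
If we only want the backendconfig applied to specific service ports rather than to all of them through the default key, the same annotation also accepts a per-port mapping, keyed by the port name or number declared in the service spec (here the http port name defined earlier):

apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/backend-config: '{"ports": {"http": "my-cloud-armor-rules"}}'
  (...)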

That's all. We now know how to publicly expose our Kubernetes applications in GKE using the default ingress controller, and how to use frontendconfig and backendconfig custom resources to enable additional ingress features.