Setting up Ingress fanout for Kubernetes

Last updated on September 21, 2022
Introduction

A Kubernetes ingress provides external access to the services in a cluster, typically over HTTP/HTTPS. Ingress makes it easy to define load balancing and routing rules that direct traffic to specific services based on various conditions. In this article, we will set up a Kubernetes ingress with routing rules where different URL paths point to different services. This is commonly referred to as ingress fanout:
Ingress Fanout Diagram
In the diagram above, traffic first hits the ingress, which then directs it to a particular service depending on the route. This setup provides several benefits:
  • High-traffic services can be scaled horizontally on their own, instead of scaling the entire API.
  • Services run independently of each other, so the API as a whole is more robust when a single service fails.
All of the configs used in this tutorial are available on GitHub.

Configuring Services

First, we will set up two Hello World deployments: two services running the d3or/hello-world-path image. The app reads the environment variable APP_NAME and displays it when you visit a URL path of the service, which will make it easy to tell the two services apart later on.

Now we will create the configs for both of our services. The config for our first deployment, hello-app1.yaml:
apiVersion: v1
kind: Service
metadata:
  name: hello-world-path-app1
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 3000
  selector:
    app: hello-world-path-app1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-path-app1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world-path-app1
  template:
    metadata:
      labels:
        app: hello-world-path-app1
    spec:
      containers:
      - name: hello-world-path
        image: d3or/hello-world-path
        ports:
        - containerPort: 3000
        env:
        - name: APP_NAME
          value: app1
Then apply:
kubectl apply -f hello-app1.yaml
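Before moving on, you can confirm the rollout succeeded. These are standard kubectl subcommands; the deployment name and label match the manifest above (this requires a live cluster, so adjust as needed for your environment):

```shell
# Wait until all 3 replicas of the first deployment are available
kubectl rollout status deployment/hello-world-path-app1

# List the pods matched by the service's selector
kubectl get pods -l app=hello-world-path-app1
```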
Next, we will create the config for the second deployment, hello-app2.yaml:
apiVersion: v1
kind: Service
metadata:
  name: hello-world-path-app2
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 3000
  selector:
    app: hello-world-path-app2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-path-app2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world-path-app2
  template:
    metadata:
      labels:
        app: hello-world-path-app2
    spec:
      containers:
      - name: hello-world-path
        image: d3or/hello-world-path
        ports:
        - containerPort: 3000
        env:
        - name: APP_NAME
          value: app2
Then apply:
kubectl apply -f hello-app2.yaml
Now verify that both services are running:
kubectl get service
You should see something like this:
NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
hello-world-path-app1     ClusterIP   10.245.114.21    <none>        80/TCP    24s
hello-world-path-app2     ClusterIP   10.245.251.43    <none>        80/TCP    10s
Both of our services are of type ClusterIP and are exposed inside the cluster on port 80.

Nginx Ingress Controller Installation

Next, we will set up the NGINX Ingress Controller. There are two ways to install it: with Helm, or with the manifest from the Kubernetes NGINX Ingress Controller's GitHub repository. For brevity, we will use the manifest:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/do/deploy.yaml
This will create a cloud load balancer for your cluster (this particular manifest targets DigitalOcean; the repository provides variants for other providers). You can check the status of its creation with:
kubectl get services -n ingress-nginx -o wide
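The manifest installs everything into the ingress-nginx namespace, where the controller's LoadBalancer Service is named ingress-nginx-controller. Its address appears in the EXTERNAL-IP column once the cloud load balancer has been provisioned (it shows <pending> until then):

```shell
# Watch the controller's Service until EXTERNAL-IP is assigned;
# this can take a minute or two on most cloud providers
kubectl get service ingress-nginx-controller -n ingress-nginx --watch
```

Once an address is assigned, point your domain's A record at it.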
Ingress Configuration

Now we will create an Ingress resource that tells the controller which service should receive traffic for each path. Name this file hello-world-path-ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-fanout
spec:
  ingressClassName: nginx
  rules:
  ## Replace "test.modulenft.xyz" with your domain!
  - host: "test.modulenft.xyz"
    http:
      paths:
      - path: "/app1"
        pathType: Prefix
        backend:
          service:
            name: hello-world-path-app1
            port:
              number: 80
      - path: "/app2"
        pathType: Prefix
        backend:
          service:
            name: hello-world-path-app2
            port:
              number: 80
In the config, we set the host field to the domain that points at our load balancer. We then define different paths that route to different services:
http:
  paths:
  - path: "/app1"
    pathType: Prefix
    backend:
      service:
        name: hello-world-path-app1
        port:
          number: 80
  - path: "/app2"
    pathType: Prefix
    backend:
      service:
        name: hello-world-path-app2
        port:
          number: 80
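Note that pathType: Prefix matches element-wise on path segments: /app1 matches /app1 and /app1/greet, but not /app10. A rough sketch of the matching rule (an illustration only, not the controller's actual implementation):

```python
def match_prefix(rule_path, request_path):
    """Element-wise prefix match, as with pathType: Prefix."""
    rule = [s for s in rule_path.split("/") if s]
    request = [s for s in request_path.split("/") if s]
    return request[:len(rule)] == rule

def route(request_path):
    # Rule table mirroring hello-world-path-ingress.yaml
    rules = [("/app1", "hello-world-path-app1"),
             ("/app2", "hello-world-path-app2")]
    # Of all matching rules, the longest prefix wins
    matches = [(path, svc) for path, svc in rules
               if match_prefix(path, request_path)]
    return max(matches, key=lambda m: len(m[0]))[1] if matches else None

print(route("/app1/greet"))  # hello-world-path-app1
print(route("/app10"))       # None: segment match, not a string prefix
```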
As a result, test.modulenft.xyz/app1 points to hello-world-path-app1, and test.modulenft.xyz/app2 points to hello-world-path-app2.

Once you have created your ingress resource config, apply it:
kubectl apply -f hello-world-path-ingress.yaml
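If DNS for your domain has not propagated yet, you can still exercise the routing by sending requests to the load balancer directly and setting the Host header. Here <EXTERNAL-IP> is a placeholder for the address reported by the ingress-nginx Service:

```shell
# Each path should be answered by its matching backend
curl -H "Host: test.modulenft.xyz" http://<EXTERNAL-IP>/app1
curl -H "Host: test.modulenft.xyz" http://<EXTERNAL-IP>/app2
```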
Now, if you navigate to test.modulenft.xyz/app1 you should see app1, and at test.modulenft.xyz/app2 you should see app2, since each service displays its own APP_NAME value.
With this setup, you can scale individual services exactly where the traffic demands it and build a more robust API.