
Exploring MetalLB for load balancing in scaled workloads in Kubernetes

Published on Jul 03

Note: If you haven’t worked with Deployments/DaemonSets/ReplicaSets, I highly encourage you to check out my previous blog post.

In Kubernetes, we can scale workloads very easily. There are a couple of different resources we can utilise to create replicas: ReplicaSets, Deployments and DaemonSets. Once a workload has been scaled, it's important that we can balance traffic across those replicas. Kubernetes can load balance ClusterIP services internally, but for traffic coming from the outside world, things get a little trickier.
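As a taste of what that looks like, scaling a Deployment (here the nginx-app Deployment we'll create later in this post) up to three replicas is a single command:

[vdeborger@node-01 ~]$ kubectl scale deployment nginx-app --replicas=3
deployment.apps/nginx-app scaled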

Before we get into actually load balancing the traffic to replicas, we’ll take a deeper look into what service types are available in Kubernetes.

  • NodePort: this service type exposes the service on a static port (by default between 30000 and 32767) on each worker node. External traffic can reach the service by connecting to the IP address of any node on that port, after which it gets forwarded to the service.
  • ClusterIP: this is the default service type; it exposes the service on an internal IP address which is only reachable from within the cluster. This service type has the capability to load balance the traffic sent to it.
  • ExternalName: this is the odd duck in the list of service types; instead of using selectors, it maps the service to a DNS name by returning a CNAME record.
  • LoadBalancer: this service type is the one we'll be looking at more in-depth in this post. In a cloud environment, it's capable of automatically provisioning a load balancer which distributes the incoming traffic across our replicas. It dynamically assigns an external IP address to the service and handles the load balancing of traffic. (A minimal manifest showing how these types are selected follows this list.)
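All of these types are selected through the same spec.type field. As a minimal, purely illustrative sketch (the name "example-service" and the "app: example" label are placeholders), this is all it takes to switch a service between types; an ExternalName service would instead set spec.externalName and drop the selector:

apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  type: NodePort # or ClusterIP (the default) or LoadBalancer
  selector:
    app: example
  ports:
    - port: 80
      targetPort: 8080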

Now that we know which service types exist in Kubernetes, it's time to take a deeper dive into LoadBalancer services. In this post, we'll be working in an on-premise environment, without any cloud provider load balancing features. In order to be able to create load balancers in such an environment, we'll be using MetalLB, a load balancer implementation for bare metal Kubernetes clusters.

Preparing kube-proxy

Assuming you're using kube-proxy in IPVS mode in your cluster, we need to change kube-proxy's configuration before we can install MetalLB. If you're not using IPVS mode, you can go straight to the installation. If you're not sure, you can check the mode by executing kubectl get configmap -n kube-system kube-proxy -o jsonpath="{.data['config\.conf']}" | grep mode. Unless its value is "ipvs", you can skip this part.
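For reference, on a cluster running in IPVS mode, that check reads the mode straight from the kube-proxy ConfigMap and prints something like this:

[vdeborger@node-01 ~]$ kubectl get configmap -n kube-system kube-proxy -o jsonpath="{.data['config\.conf']}" | grep mode
mode: "ipvs"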

To edit the kube-proxy config of your cluster, execute kubectl edit configmap -n kube-system kube-proxy and set “strictARP” to true:

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  strictARP: true

That should be it; you can always double-check the value by re-running the earlier kubectl get configmap command.
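One thing to keep in mind: kube-proxy only reads this configuration at startup, so the change takes effect once its pods are recreated. On a kubeadm-based cluster, where kube-proxy runs as a DaemonSet in the kube-system namespace, a rolling restart does the trick:

kubectl rollout restart daemonset kube-proxy -n kube-system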

MetalLB installation

Now that you’re ready to install MetalLB, we’ll get right on it. Installing MetalLB is as easy as applying the latest manifest file.

export LATEST_VERSION=$(curl -s https://api.github.com/repos/metallb/metallb/releases/latest | grep \"tag_name\" | cut -d : -f 2,3 | tr -d \" | tr -d , | tr -d " ")
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/$LATEST_VERSION/config/manifests/metallb-native.yaml

This will create a couple of resources in your cluster, all in the metallb-system namespace. Some of the most noteworthy resources created by the manifest are the following:

  • A deployment called “controller”; this is the cluster-wide component that’s responsible for allocating IP addresses, configuring the load balancer, dynamically updating configurations and performing health checks.
  • A daemonset called “speaker”; this component is deployed on each node and is responsible for ensuring that external traffic can reach the services within the Kubernetes cluster.
  • A couple of service accounts along with RBAC permissions which are necessary for the components to function.
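If you're scripting the installation, you can also block until those components are ready. MetalLB's pods carry the app=metallb label, so a kubectl wait one-liner (with a timeout generous enough for the image pulls) works well:

kubectl wait --namespace metallb-system \
  --for=condition=ready pod \
  --selector=app=metallb \
  --timeout=120s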

You can verify the deployment of the components by executing the following command:

[vdeborger@node-01 ~]$ kubectl get pods -n metallb-system
NAME                          READY   STATUS    RESTARTS   AGE
controller-595f88d88f-52vzq   1/1     Running   0          1m13s
speaker-fr8xk                 1/1     Running   0          1m13s
speaker-qs45k                 1/1     Running   0          1m13s
speaker-z9rvx                 1/1     Running   0          1m13s

If the components are in a stable “Running” state, the deployment of MetalLB is complete. We can now continue on our journey and take a look at what we need to configure in order to use MetalLB to provision on-premise load balancers.

Usage

In preparation for creating a service of the "LoadBalancer" type, there are a few crucial configurations that require our attention. These configurations are necessary for MetalLB to function: they define the pool of IP addresses it can hand out and ensure that we can reach the load balancer's IP address once a service has been assigned one.

IPAddressPools

One of the things we need to configure is an IPAddressPool. This resource defines a range of IP addresses which MetalLB can allocate to services of the "LoadBalancer" type.

I've added a configuration below, defining an IP address pool named "production". This pool contains IP addresses ranging from 10.252.252.100 to 10.252.252.200. Modify these settings to match your own infrastructure.

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: production
  namespace: metallb-system
spec:
  addresses:
    - 10.252.252.100-10.252.252.200
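Save the manifest (I'll assume a file name of ipaddresspool.yaml here) and apply it to the cluster:

[vdeborger@node-01 ~]$ kubectl apply -f ipaddresspool.yaml
ipaddresspool.metallb.io/production created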

Once you've created a MetalLB IP address pool, it's time to make sure that we can reach the services via the IP addresses provided by MetalLB. There are two ways to do this: using BGP or using L2 (ARP/NDP). In my case, I'm not running any BGP routers in my environment, so I'll be working with L2. However, if you are using BGP in your environment, you can easily change the "L2Advertisement" to a "BGPAdvertisement".

In the Kubernetes manifest below, I’ve configured an L2Advertisement for my “production” pool which I created in the previous manifest.

apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: external
  namespace: metallb-system
spec:
  ipAddressPools:
    - production
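This manifest gets applied the same way (again assuming a file name, l2advertisement.yaml in this case):

[vdeborger@node-01 ~]$ kubectl apply -f l2advertisement.yaml
l2advertisement.metallb.io/external created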

Service

Once the IPAddressPool and Advertisement have been configured, we can define a service which uses MetalLB to load balance its traffic. There are two important items in a Kubernetes service that uses a MetalLB load balancer:

  • The type of the service should be set to "LoadBalancer".
  • An annotation called "metallb.universe.tf/address-pool" can be added to pin the service to a specific address pool; without it, MetalLB assigns an address from any available pool.

You can find an example of a Kubernetes service manifest below;

apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    metallb.universe.tf/address-pool: production
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: my-cool-application
  type: LoadBalancer

Once the service has been created, you can verify that the service has received an IP address using the kubectl get service command.

[vdeborger@node-01 ~]$ kubectl get service nginx
NAME    TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)        AGE
nginx   LoadBalancer   10.106.252.134   10.252.252.100   80:30821/TCP   1m4s

As we can see in the "EXTERNAL-IP" column, our service has received the IP address "10.252.252.100", which is the first IP in our IPAddressPool.
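If a service ever stays stuck at <pending> in that column instead of receiving an address, kubectl describe service is the first place to look: MetalLB records its allocation decisions as events on the service. The exact wording varies between MetalLB versions, but it looks roughly like this (output trimmed to the events section):

[vdeborger@node-01 ~]$ kubectl describe service nginx
...
Events:
  Type    Reason        Age   From                Message
  ----    ------        ----  ----                -------
  Normal  IPAllocated   12s   metallb-controller  Assigned IP ["10.252.252.100"]
  Normal  nodeAssigned  12s   metallb-speaker     announcing from node "node-02" with protocol "layer2"

Connecting to our service's IP address won't work just yet, though, since we don't have an application running behind it. Let's change that.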

Testing our setup

Let’s create a Kubernetes Deployment with a demo application that showcases the capabilities of MetalLB. For this purpose, we’ll use NGINX as an example application.

Within this demo application, we'll include an index page that shows the pod and node name on which the NGINX instance is running. By accessing this page, you'll gain visibility into the underlying infrastructure and a better understanding of how workloads get distributed across a Kubernetes cluster.

Using an init container, we'll generate an index page on a shared volume; by mounting that volume in the NGINX container, we'll be able to see the pod and node name in its responses.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-cool-application
  template:
    metadata:
      labels:
        app: my-cool-application
    spec:
      initContainers:
      - name: init-nginx
        image: busybox:1.28
        command:
        - "sh"
        - "-c"
        - "echo \"POD: \${POD_NAME}\t NODE: \${NODE_NAME}\" > /opt/nginx/index.html"
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        volumeMounts:
        - name: nginx-data
          mountPath: /opt/nginx
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-data
          mountPath: /usr/share/nginx/html
      volumes:
      - name: nginx-data
        emptyDir: {}
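Deploying it is, once again, a matter of applying the manifest (assuming it's saved as nginx-app.yaml):

[vdeborger@node-01 ~]$ kubectl apply -f nginx-app.yaml
deployment.apps/nginx-app created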

Once the demo application has been deployed and has had enough time to pull its container images, we can access it. To do this, copy the service's load balancer IP address and open it in your browser (or query it with curl).

You should get back a response telling you which pod served the request and which node that pod runs on. To really show off the load balancer, we can use a simple while loop that requests the page every second, letting us watch the load being spread across our replicas.

[vdeborger@node-01 ~]$ while true; do sleep 1; curl 10.252.252.100; done
POD: nginx-app-78d447cb48-z524x	 NODE: node-02
POD: nginx-app-78d447cb48-f2ljl	 NODE: node-02
POD: nginx-app-78d447cb48-b7kxm	 NODE: node-03
POD: nginx-app-78d447cb48-z524x	 NODE: node-02
POD: nginx-app-78d447cb48-b7kxm	 NODE: node-03
POD: nginx-app-78d447cb48-f2ljl	 NODE: node-02

Awesome 🙌! We have successfully load balanced our traffic across the three replicas of our demo application. One nuance worth knowing: in L2 mode, MetalLB itself doesn't spread the traffic. It makes a single node attract all traffic for the service IP, and kube-proxy then distributes the requests across the replicas. The end result is the same: each replica receives its fair share of traffic, and no single replica gets overloaded.
