Building a demo application in Golang
Published on Nov 04
For an upcoming guest lecture, I needed a demo application to visualize pod creation as the application’s load increased, using a Horizontal Pod Autoscaler. I couldn’t find one to my liking, so I decided to build my own while learning some more Golang and Angular.
This small demo application is made up of two components: a backend that communicates with the Kubernetes API and a frontend that displays the data retrieved by the backend.
Backend
This is the part that talks to the Kubernetes API. Since I wanted to use Golang for the backend, this was fairly easy: Kubernetes has an official Go client (client-go) that provides functions for interacting with the cluster’s API.
As I need to be able to get a list of Pods from the backend to the frontend, I chose to use Gin to provide the API endpoints and serve the static frontend (but more on that later).
Connecting to the cluster
Since the application will run inside a Kubernetes cluster, the InClusterConfig() function can be used to configure the clientset. A clientset is a group of “clients” for the various Kubernetes API groups and versions (e.g. CoreV1 for Pods, AppsV1 for Deployments, etc.). Each API group is responsible for managing a specific set of objects (e.g. Services, ReplicaSets, Pods, etc.).
In code, that looks something like this:
import (
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

// createClient builds a clientset using the in-cluster configuration
// (the ServiceAccount token mounted into the Pod).
func createClient() (*kubernetes.Clientset, error) {
    config, err := rest.InClusterConfig()
    if err != nil {
        return nil, err
    }

    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        return nil, err
    }

    return clientset, nil
}

func main() {
    client, err := createClient()
    if err != nil {
        panic(err.Error())
    }

    // Do stuff with client now
    _ = client
}
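Just to illustrate the API group idea: the same clientset exposes one client per group, so listing Deployments goes through AppsV1() the same way Pods go through CoreV1(). A quick sketch (not part of the demo app; it assumes the usual context, fmt and metav1 imports):

// List all Deployments in the "default" namespace via the AppsV1 API group client.
deployments, err := client.AppsV1().Deployments("default").List(context.TODO(), metav1.ListOptions{})
if err != nil {
    panic(err.Error())
}
fmt.Printf("found %d deployments\n", len(deployments.Items))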
Retrieving related pods
In order to get related Pods, the first thing that needs to happen is to identify the current Pod in which the application is running. This can be done by defining POD_NAME and POD_NAMESPACE environment variables in the Pod definition. Luckily, Kubernetes can inject that info into the Pod like this:
env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
This next part is where I was scratching my head a little. As previously discussed, the application runs in a Pod which is part of a Deployment/ReplicaSet (otherwise it can’t be scaled). I wasn’t getting far by just looking at the Pod’s YAML, so I started going through some kubectl cheat sheets and found a command that gave me an idea: kubectl get pods --selector='<label_name> in (<label_value>)'. It lists all Pods that carry a specific label, and, as it happens, a Deployment uses label selectors to match Pods to the right ReplicaSet. That means the current Pod’s labels can be used to find its siblings.
So, getting into some code, the first thing I do is retrieve the current Pod, which is fairly easy using the environment variables I set earlier.
podNamespace := os.Getenv("POD_NAMESPACE")
podName := os.Getenv("POD_NAME")
pod, err := clientset.CoreV1().Pods(podNamespace).Get(context.TODO(), podName, metav1.GetOptions{})
if err != nil {
panic(err.Error())
}
After that, I can take the labels set on that specific Pod and list all other Pods that have the same labels defined.
// Only look up related Pods if this Pod is managed by a controller (e.g. a ReplicaSet).
// corev1 is "k8s.io/api/core/v1".
var podList *corev1.PodList
for _, ownerRef := range pod.OwnerReferences {
    if ownerRef.Controller != nil && *ownerRef.Controller {
        // Use the current Pod's labels as a selector to find its sibling Pods.
        labelSelector := labels.Set(pod.Labels).AsSelector().String()
        podList, err = clientset.CoreV1().Pods(podNamespace).List(context.TODO(), metav1.ListOptions{
            LabelSelector: labelSelector,
        })
        if err != nil {
            panic(err.Error())
        }
    }
}
From that point, I just loop over the podList variable (which contains all Pods that have the same labels as the current Pod) and add the information I want to a list that I later return through the API endpoint.
for _, p := range podList.Items {
    replica := Replica{
        Name:     p.Name,
        NodeName: p.Spec.NodeName,
        Status:   string(p.Status.Phase),
        Current:  p.Name == podName, // mark the Pod that served this request
    }
    // StartTime is a pointer and is nil while a Pod is still pending.
    if p.Status.StartTime != nil {
        replica.StartTime = p.Status.StartTime.String()
    }
    pods = append(pods, replica)
}
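The Replica type itself isn’t shown in these snippets; a minimal definition that matches the fields used above could look like this (the JSON tags are my assumption, chosen to match the camelCase keys the frontend reads):

// Replica holds the Pod details that get returned to the frontend.
type Replica struct {
    Name      string `json:"name"`
    NodeName  string `json:"nodeName"`
    Status    string `json:"status"`
    StartTime string `json:"startTime"`
    Current   bool   `json:"current"`
}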
API endpoints
Alright, all the data has now been gathered, so it’s time to expose it through API endpoints. As previously explained, I am using Gin to keep things simple, and simple is exactly what Gin is: it does what it needs to do without a lot of bells and whistles. So, setting up the basic API endpoints is as easy as this:
router := gin.Default()
api := router.Group("/api")
// Ping endpoint
api.GET("/ping", func(context *gin.Context) {
context.String(http.StatusOK, "pong")
})
// Pods endpoint
api.GET("/pods", func(context *gin.Context) {
context.JSON(http.StatusOK, gin.H{
"replicas": getRelatedPods(config), // <-- this function retrieves the Pods with the same label selectors as the current pod and returns them in a list
})
})
The code above sets up Gin in its default configuration, adds an /api endpoint group (which means that all endpoints in this group start with /api/), and registers a /ping endpoint for health checks and a /pods endpoint which returns the list of related Pods.
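As mentioned earlier, Gin also serves the static frontend. A minimal sketch of that could look like the following; the ./dist paths and the port are assumptions on my part, the exact setup in the repo may differ:

// Serve the built Angular frontend next to the API.
router.Static("/assets", "./dist/assets")   // bundled JS/CSS produced by the Angular build
router.StaticFile("/", "./dist/index.html") // the frontend entry point

// Start the HTTP server.
router.Run(":8080")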
Frontend
For the frontend, I wanted to create a clean, user-friendly interface that is also responsive. Since I wanted to use this project as a chance to learn something new, I chose to use Angular for the frontend. As for the interface, I took inspiration from ShadCN’s modern UI components, recreating them using TailwindCSS.
The frontend’s purpose is to display data on pods dynamically, fetching the data that the backend retrieved. This required an API connection and some custom code to handle data refreshes, all of which I achieved by creating Angular services and leveraging the RxJS package.
Retrieving the pods from the backend
To manage the backend requests, I created an Angular service. Angular services are single-instance classes that can be injected into components as needed, which is perfect for this. The ApiService sends a GET request to the /pods
endpoint, which is handled by the backend, and retrieves the pod data.
Here’s what that looks like:
import { Injectable } from "@angular/core";
import { HttpClient } from "@angular/common/http";
import { Observable } from "rxjs";
import { environment } from "../environments/environment";
@Injectable({
providedIn: "root",
})
export class ApiService {
private baseUrl = environment.baseUrl;
private apiUrl = `${this.baseUrl}/pods`;
constructor(private http: HttpClient) {}
getData(): Observable<any> {
return this.http.get<any>(this.apiUrl);
}
}
This service is configured to send requests to the backend’s API, and Angular’s dependency injection allows me to reference ApiService directly in components where the data is needed. By encapsulating API calls in a service, I can easily reuse it across different parts of the application if I want to expand or change functionality later on.
Displaying the pods
Once the data is retrieved, the next step is rendering it in the UI. For this, I created an Angular component, using Angular’s ngOnInit
lifecycle hook to fetch data on component initialization.
To keep the data up to date, I added a periodic refresh using RxJS’s interval function, which calls the fetchData method every five seconds. This allows the frontend to provide users with “real-time” insights into the pods without needing manual refreshes.
import { Component, OnInit } from "@angular/core";
import { CommonModule } from "@angular/common";
import { interval } from "rxjs";
import { ApiService } from "./api.service"; // path depends on where the service lives

@Component({
  selector: "app-root",
  standalone: true,
  imports: [CommonModule],
  templateUrl: "./app.component.html", // assuming the template shown below lives next to the component
})
export class AppComponent implements OnInit {
  replicas: any[] = [];

  constructor(private apiService: ApiService) {}

  ngOnInit() {
    this.fetchData();
    interval(5000).subscribe(() => this.fetchData()); // Refreshes the data every 5 seconds
  }

  private fetchData() {
    this.apiService.getData().subscribe((response) => {
      this.replicas = response.replicas.map((replica: any) => ({
        ...replica,
        timeSince: this.calculateTimeSince(replica.startTime), // This function calculates the time since the pod was started
      }));
    });
  }
}
With fetchData called at intervals, a user can see their pods scale up/down in near real-time, making it easy to observe scaling events as they happen.
For the visual part, I created a simple grid layout that displays the pod information in a card-like format. TailwindCSS helped me out a lot with that, making responsive design a lot easier. Creating the cards themselves was simple - well, after spending some time figuring out how to loop through the replicas array.
Angular’s *ngFor
directive let me loop through the replicas array to dynamically generate cards for each pod. I used conditional styling to indicate the current pod visually, making it stand out with a border color change.
<div class="grid grid-cols-1 md:grid-cols-2 lg:grid-cols-3 xl:grid-cols-4 gap-3">
<div
*ngFor="let replica of replicas"
[ngClass]="
replica.current
? 'border-2 border-blue-500'
: 'border border-neutral-200 dark:border-neutral-700'
"
class="replica bg-white dark:bg-neutral-900 rounded-lg shadow-sm overflow-hidden transition-all hover:shadow-md p-4"
>
<div class="flex flex-col mb-4">
<div class="flex items-center justify-between gap-2">
<h2 class="text-lg font-semibold whitespace-nowrap overflow-hidden text-ellipsis dark:text-white">
{{ replica.name }}
</h2>
<span
*ngIf="replica.current"
class="bg-blue-500 text-white text-xs font-semibold px-2 py-1 rounded-full"
>
Current
</span>
</div>
<p class="text-sm text-neutral-600 dark:text-neutral-400">
Status: {{ replica.status }}
</p>
</div>
<div class="flex flex-col dark:text-white">
<span class="text-sm">
<strong>Node:</strong> {{ replica.nodeName }}
</span>
<span class="text-sm">
<strong>Uptime:</strong> {{ replica.timeSince }}
</span>
</div>
</div>
</div>
Making it work in the cluster
One last thing I needed was to set up the right permissions. Since the backend talks to the Kubernetes API, the Pod needs the right RBAC permissions, granted through a ServiceAccount that the Deployment’s pod template references via serviceAccountName:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pod-viewer
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-viewer
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pod-viewer
subjects:
  - kind: ServiceAccount
    name: pod-viewer
    namespace: default
roleRef:
  kind: ClusterRole
  name: pod-viewer
  apiGroup: rbac.authorization.k8s.io
In action
After deploying the Pod (the full Kubernetes manifest can be found here), there are only two things left: autoscaling the Pod and load balancing traffic to it. I created a post about KEDA autoscaling and one on load balancing using MetalLB.
When the application is under load (real-world load, or load generated with a tool like baton), it’ll start scaling and you should see the demo application update as shown below.
The code for this application (together with the Docker image) can be found here.