Installing NGINX Ingress Controller onto K8s Cluster:
In this article, we are going to deploy an NGINX Ingress Controller with Helm on our on-premises (bare-metal) Kubernetes cluster. Because this is an on-prem setup with no cloud load balancer, we will expose the ingress controller through a NodePort service. The ingress controller adds a layer of abstraction to traffic routing, accepting traffic from outside the Kubernetes platform and load balancing it to Pods running inside the platform.
What we want to achieve:
Client sends a request to http://demo.k8s.local/
DNS or /etc/hosts resolves demo.k8s.local to the HAProxy + Keepalived VIP
HAProxy receives the traffic on port 80: the frontend http_front listens on *:80 and load balances TCP traffic to the backend servers (worker01:32080, worker02:32080, worker03:32080)
Ingress Controller (NGINX) receives request on NodePort 32080. It matches the Host header (demo.k8s.local) and path (/) with the Ingress rule.
Ingress forwards to the internal Service, which forwards to the demo Pod's port 80. The Pod responds with "It works!" and the response travels back along the same path (an example request follows).
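Once all the pieces below are in place, that whole path can be exercised with a single request from any machine that can reach the VIP. The address is only a placeholder here; substitute your own HAProxy + Keepalived VIP:
curl -H "Host: demo.k8s.local" http://<HAProxy-VIP>/
# expected response: the httpd default page, "It works!"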
A. Add NGINX Helm Repo
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
B. Install NGINX Ingress Controller:
Because I use a bare-metal / VM-based setup for my K8s cluster (an on-prem environment with no cloud provider, so no LoadBalancer service), I need to install the ingress controller with a NodePort service so it can be accessed from outside. The command below exposes HTTP on port 32080 and HTTPS on port 32443.
This lets you reach the ingress controller via http://<any-worker-node-ip>:32080
kubectl create namespace ingress-nginx
helm install nginx-ingress ingress-nginx/ingress-nginx \
--namespace ingress-nginx \
--set controller.service.type=NodePort \
--set controller.service.nodePorts.http=32080 \
--set controller.service.nodePorts.https=32443 \
--set controller.publishService.enabled=false \
--set rbac.create=true \
--set controller.serviceAccount.create=true \
--set controller.serviceAccount.name=nginx-ingress-serviceaccount
Verify
kubectl get svc -n ingress-nginx
From the output we can summarize the following:
Service: nginx-ingress-ingress-nginx-controller
Type: NodePort
HTTP Port: 32080
HTTPS Port: 32443
Cluster-IP: 10.98.198.163 (used internally)
EXTERNAL-IP: <none> (expected for a NodePort service)
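Before wiring up HAProxy, it is worth a quick sanity check that the controller answers on the NodePort directly. The worker IP below is from my lab; adjust it to yours. A 404 from the default backend is expected at this point because no Ingress rules exist yet, and /healthz should return 200 (this is the path the HAProxy health checks will use later):
curl -i http://192.168.1.31:32080/
# HTTP/1.1 404 Not Found   <- default backend, so the controller itself is reachable
curl -i http://192.168.1.31:32080/healthz
# HTTP/1.1 200 OK          <- health endpoint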
C. Routing traffic through HAProxy VIP to the NodePorts:
Routing client traffic through the HAProxy VIP to the NodePorts is the better approach for this setup because:
- High Availability via HAProxy + Keepalived VIP
- Central entry point for HTTP(S) ingress traffic
- Easy to configure wildcard TLS, certbot, or SNI-based routing at ingress
- Great fit for on-prem / VM setups
- Decouples external exposure from internal node IPs
Log on to your HAProxy nodes and add the following to the end of the /etc/haproxy/haproxy.cfg config file.
#---------#--------- Ingress Traffic ----------#----------#
frontend http_front
bind *:80
mode tcp
option tcplog
default_backend ingress_http
backend ingress_http
mode tcp
balance roundrobin
option httpchk GET /healthz
http-check expect status 200
server worker01 192.168.1.31:32080 check
server worker02 192.168.1.32:32080 check
server worker03 192.168.1.33:32080 check
frontend https_front
bind *:443
mode tcp
option tcplog
default_backend ingress_https
backend ingress_https
mode tcp
balance roundrobin
server worker01 192.168.1.31:32443 check
server worker02 192.168.1.32:32443 check
server worker03 192.168.1.33:32443 check
#---------#--------- Ingress Traffic ----------#----------#
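Before reloading, you can validate the syntax with HAProxy's built-in check mode:
haproxy -c -f /etc/haproxy/haproxy.cfg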
systemctl reload haproxy
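After the reload, a quick loop from the HAProxy node confirms that the /healthz path used by the ingress_http health checks returns 200 on every worker (same IPs as in the config above):
for w in 192.168.1.31 192.168.1.32 192.168.1.33; do
  curl -s -o /dev/null -w "$w -> %{http_code}\n" http://$w:32080/healthz
done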
D. Test Ingress Access via HAProxy VIP
To verify that the ingress controller is reachable through the HAProxy VIP, I need to deploy a test application and expose it via an Ingress.
Deploy a Test App
kubectl create deployment demo --image=httpd --port=80
kubectl expose deployment demo --port=80 --target-port=80 --type=ClusterIP
Save the following as demo-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: demo-ingress
namespace: default
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
ingressClassName: nginx
rules:
- host: demo.k8s.local
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: demo
port:
number: 80
kubectl apply -f demo-ingress.yaml
Create a DNS record for the demo.k8s.local domain that points to the HAProxy VIP, or simply modify your hosts file (C:\Windows\System32\drivers\etc\hosts).
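For a Linux or macOS client, the equivalent is a one-line append to /etc/hosts; the VIP below is only an example, use your actual HAProxy + Keepalived VIP:
echo "192.168.1.30  demo.k8s.local" | sudo tee -a /etc/hosts   # 192.168.1.30 = example VIP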
Scale NGINX Ingress to 3 Replicas (One Per Worker):
This tells Helm to deploy 3 replicas of the ingress controller Pod under the same Service. The --reuse-values flag keeps the NodePort settings from the initial install, so the upgrade does not reset them.
helm upgrade nginx-ingress ingress-nginx/ingress-nginx \
--namespace ingress-nginx \
--reuse-values \
--set controller.replicaCount=3
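If you want to strictly guarantee one controller Pod per worker node, the chart also supports running the controller as a DaemonSet instead of a scaled Deployment; a sketch of that alternative (again with --reuse-values) would be:
helm upgrade nginx-ingress ingress-nginx/ingress-nginx \
--namespace ingress-nginx \
--reuse-values \
--set controller.kind=DaemonSet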
Verify the ingress Pods:
kubectl get pods -n ingress-nginx -o wide
Pick any controller pod and run the commands below. You should see "It works!"
kubectl exec -n ingress-nginx -it <nginx-ingress-pod> -- /bin/sh
curl -H "Host: demo.k8s.local" http://127.0.0.1
Verify that the demo Service exists and resolves correctly to the Pod on port 80:
kubectl get svc demo
kubectl get endpoints demo
kubectl get pods -l app=demo -o wide
This confirms the Service works inside the cluster (you should see "It works!" as below):
kubectl run tmp-shell --rm -i -t --image=busybox -- /bin/sh
/ # wget -qO- http://demo
Finally, we can browse http://demo.k8s.local from our management laptop and see the "It works!" page.
By deploying the NGINX Ingress Controller with NodePort access and fronting it with a highly available HAProxy + Keepalived VIP, we achieved a production-grade ingress setup for our on-premises Kubernetes cluster. This design ensures reliable and scalable HTTP routing while keeping external traffic entry centralized and resilient. With proper DNS or hosts file configuration, services behind the ingress can now be accessed smoothly via friendly domain names like demo.k8s.local, enabling a clean and efficient development or staging environment.
Enable HTTPS for Your Ingress:
Since our domain is an internal one, we cannot get a public certificate from Let's Encrypt or any other public CA. However, we can use a self-signed certificate.
On my management laptop, I already have OpenSSL installed. I create a self-signed certificate and save it to "C:\Program Files\OpenSSL-Win64\bin\mycerts"
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout "C:\Program Files\OpenSSL-Win64\bin\mycerts\demo.key" -out "C:\Program Files\OpenSSL-Win64\bin\mycerts\demo.crt" -subj "/CN=demo.k8s.local/O=demo"
Then create a Kubernetes TLS secret
kubectl create secret tls demo-tls --cert="C:\Program Files\OpenSSL-Win64\bin\mycerts\demo.crt" --key="C:\Program Files\OpenSSL-Win64\bin\mycerts\demo.key" --namespace=default
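To confirm the secret was created correctly in the default namespace:
kubectl get secret demo-tls -n default
kubectl describe secret demo-tls -n default
# the Data section should list tls.crt and tls.key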
Modify demo-ingress.yaml like this
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: demo-ingress
namespace: default
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
ingressClassName: nginx
tls:
- hosts:
- demo.k8s.local
secretName: demo-tls # <-- match the TLS secret name
rules:
- host: demo.k8s.local
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: demo
port:
number: 80
Apply the updated Ingress
kubectl apply -f demo-ingress.yaml
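Then confirm the Ingress picked up the TLS block; the describe output should show demo.k8s.local terminated by the demo-tls secret:
kubectl get ingress demo-ingress
kubectl describe ingress demo-ingress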
The HAProxy config is already set up to pass both HTTP and HTTPS, so we do not need to change anything on the HAProxy side.
Because the certificate is self-signed rather than issued by a public CA, the browser will show a warning, but the site will work over HTTPS.
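From the management laptop this can be verified with curl; the -k flag skips certificate verification since the cert is self-signed:
curl -k https://demo.k8s.local/
# expected response: "It works!"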
We can now use this ingress controller for all our Kubernetes applications. It acts as a reverse proxy and load balancer at the edge of our cluster. It can handle:
- multiple domains
- path-based routing such as /app1, /grafana, etc.
- TLS termination (wildcard or individual certs)
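As an example of the path-based routing mentioned above, such rules can even be generated with kubectl instead of hand-written YAML (the service names app1-svc and grafana-svc here are hypothetical):
kubectl create ingress apps --class=nginx \
--rule="apps.k8s.local/app1*=app1-svc:80" \
--rule="apps.k8s.local/grafana*=grafana-svc:80"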