# Services & Routes

Kubernetes networking in detail: Service types, Ingress, and OpenShift Routes.
## Service Types Overview
| Type | Scope | Use Case |
|---|---|---|
| ClusterIP | Cluster-internal only | Service-to-service communication |
| NodePort | External via node IP:port | Dev/test, bare-metal |
| LoadBalancer | External via cloud LB | Production cloud workloads |
| Headless | No cluster IP, direct pod DNS | StatefulSet, service discovery |
| ExternalName | DNS alias to external service | External service integration |
### ClusterIP (Default)

Cluster-internal access only. Other services reach it via `<service>.<namespace>.svc.cluster.local`.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-api
spec:
  type: ClusterIP
  selector:
    app: backend
  ports:
  - name: http
    port: 80
    targetPort: 8080
  - name: grpc
    port: 50051
    targetPort: 50051
```
```bash
# Cluster-internal DNS
curl http://backend-api.default.svc.cluster.local
curl http://backend-api.default:80   # short form: service.namespace
curl http://backend-api:80           # same-namespace short form
```
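The Service's `selector` matches pod labels, not workload names. A Deployment whose pod template carries `app: backend` would back the Service above — a minimal sketch, with an illustrative image and the `containerPort` lining up with the Service's `targetPort`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend            # must match the Service selector
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: api
        image: registry.example.com/backend:1.0   # illustrative image
        ports:
        - containerPort: 8080   # matches the Service targetPort
```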
### NodePort

Opens a static port (30000-32767) on every node. Traffic flow: `NodeIP:NodePort` → Service → Pod.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080
```
```bash
curl http://<NODE_IP>:30080
```
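If `nodePort` is omitted, Kubernetes picks one from the 30000-32767 range automatically. A sketch of reading it back, assuming the Service above exists:

```bash
# Read the assigned nodePort (auto-assigned when not set explicitly)
kubectl get svc my-app-nodeport -o jsonpath='{.spec.ports[0].nodePort}'

# List node internal IPs to build the full URL
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}'
```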
### LoadBalancer

Provisions a cloud provider load balancer (AWS ELB/NLB, GCP Load Balancer, Azure Load Balancer).
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 443
    targetPort: 8080
    protocol: TCP
```
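Provisioning is asynchronous: the external address appears in the Service's status once the cloud provider finishes. A sketch of checking it, assuming the Service above:

```bash
# Watch until the cloud provider assigns an external address (EXTERNAL-IP column)
kubectl get svc my-app-lb -w

# AWS typically reports a hostname; GCP/Azure report an IP
kubectl get svc my-app-lb -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
kubectl get svc my-app-lb -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```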
### Headless Service

No ClusterIP is assigned (`clusterIP: None`). DNS returns the individual pod IPs instead of a virtual IP. Essential for StatefulSets and client-side service discovery.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  clusterIP: None
  selector:
    app: postgres
  ports:
  - port: 5432
    targetPort: 5432
```
```bash
# Direct pod DNS (StatefulSet pods get stable, ordinal names)
nslookup postgres-0.postgres.default.svc.cluster.local
nslookup postgres-1.postgres.default.svc.cluster.local
```
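The per-pod DNS names above only exist when a StatefulSet references the headless Service via `serviceName`. A minimal sketch, with storage and configuration omitted and an illustrative image tag:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres       # the headless Service above
  replicas: 2
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:16    # illustrative tag
        ports:
        - containerPort: 5432
```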
### ExternalName

Maps a service name to an external DNS name. No proxying is involved; cluster DNS simply returns a CNAME record.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: mydb.example.com
```
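From inside the cluster, the alias resolves as a CNAME to the external name. A sketch of verifying that, assuming the Service above and working cluster DNS:

```bash
# Should return a CNAME pointing at mydb.example.com
kubectl run dns-test --rm -it --restart=Never --image=busybox -- \
  nslookup external-db.default.svc.cluster.local
```

Because this is pure DNS with no rewriting, TLS clients must still validate certificates against `mydb.example.com`, not the Service name.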
## Ingress

Layer 7 (HTTP/HTTPS) routing. Requires an Ingress controller (ingress-nginx, Traefik, etc.) to be installed in the cluster.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "50m"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - api.example.com
    - app.example.com
    secretName: tls-secret
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: backend-api
            port:
              number: 80
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend
            port:
              number: 80
```
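Path-based fan-out on a single host is also common. With the ingress-nginx controller, a rewrite annotation can strip the matched prefix before proxying — a sketch with illustrative names:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fanout-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2   # strip the /api prefix
spec:
  ingressClassName: nginx
  rules:
  - host: example.com
    http:
      paths:
      - path: /api(/|$)(.*)
        pathType: ImplementationSpecific   # regex paths need this with ingress-nginx
        backend:
          service:
            name: backend-api
            port:
              number: 80
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend
            port:
              number: 80
```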
## OpenShift Route

The OpenShift-native alternative to Ingress, managed by the built-in HAProxy router.
```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-app
spec:
  host: my-app.apps.cluster.example.com
  to:
    kind: Service
    name: my-app-svc
    weight: 100
  port:
    targetPort: 8080
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect
```
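Routes are often created from the CLI rather than YAML. A sketch, assuming the `oc` client and the `my-app-svc` Service above:

```bash
# Edge-terminated Route with an auto-generated host under the cluster's apps domain
oc create route edge my-app --service=my-app-svc

# Or expose a Service with a plain HTTP Route
oc expose service my-app-svc
```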
### Route TLS Termination Types
| Type | TLS at Router | TLS to Pod | Use Case |
|---|---|---|---|
| edge | Yes | No (HTTP) | Most common, router handles TLS |
| passthrough | No (pass-through) | Yes | App handles its own TLS |
| reencrypt | Yes | Yes (re-encrypted) | End-to-end TLS with router cert |
```yaml
# Passthrough: TLS is forwarded untouched to the pod
spec:
  tls:
    termination: passthrough
```

```yaml
# Re-encrypt: router terminates TLS, then re-encrypts to the pod
spec:
  tls:
    termination: reencrypt
    destinationCACertificate: |
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----
```
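When `host` is omitted, the router generates `<route-name>-<namespace>.<wildcard-domain>`. On OpenShift 4.x the cluster's wildcard apps domain can be read from the cluster ingress config — a sketch, assuming read access to that resource:

```bash
oc get ingresses.config.openshift.io cluster -o jsonpath='{.spec.domain}'
```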
## Route vs Ingress
| Feature | Route | Ingress |
|---|---|---|
| Platform | OpenShift only | Any K8s |
| TLS | edge/passthrough/reencrypt | TLS secret based |
| Built-in | Yes (HAProxy) | Needs Ingress Controller |
| Weighted routing | Native | Depends on controller |
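The native weighted routing noted in the table uses `alternateBackends` on the Route for traffic splitting, e.g. a canary rollout — a sketch with illustrative names and weights:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-app-canary
spec:
  to:
    kind: Service
    name: my-app-v1
    weight: 90              # ~90% of traffic to the stable version
  alternateBackends:
  - kind: Service
    name: my-app-v2
    weight: 10              # ~10% canary traffic
  port:
    targetPort: 8080
```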
## Debugging Services
```bash
# Service and endpoint state
kubectl get svc -o wide
kubectl get endpoints my-service
kubectl describe svc my-service

# DNS and connectivity tests from a throwaway pod
kubectl run debug --image=busybox --rm -it --restart=Never -- nslookup my-service
kubectl run debug --image=curlimages/curl --rm -it --restart=Never --command -- curl http://my-service:80

# Check kube-proxy rules
kubectl get pods -n kube-system -l k8s-app=kube-proxy
iptables -t nat -L -n | grep my-service   # run on a node (iptables proxy mode)
```
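A Service with empty endpoints usually means its selector matches no Ready pod. Comparing the selector against pod labels side by side makes this quick to spot — a sketch, assuming a running cluster:

```bash
# Selector the Service is actually using
kubectl get svc my-service -o jsonpath='{.spec.selector}'

# Labels on candidate pods; label mismatches or non-Ready pods leave endpoints empty
kubectl get pods --show-labels

# EndpointSlices backing the Service
kubectl get endpointslices -l kubernetes.io/service-name=my-service
```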