Routing Traffic to Hybrid Nodes
Now that we have our EKS Hybrid Node instance connected to the cluster, we can deploy a sample workload. In this case, we will use the nginx Deployment and Ingress manifests below. In the Deployment, we use a nodeAffinity rule to tell the Kubernetes scheduler to prefer cluster nodes that carry the eks.amazonaws.com/compute-type label with the value hybrid.
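Applying the manifests creates the resources. A sketch of the command, assuming all of the manifests, including the nginx-remote Namespace and nginx Service, are saved in a single file named nginx-hybrid.yaml (a hypothetical filename):
kubectl apply -f nginx-hybrid.yaml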
namespace/nginx-remote created
service/nginx created
deployment.apps/nginx created
ingress.networking.k8s.io/nginx created
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  namespace: nginx-remote
  annotations:
    # Provision an internet-facing ALB and register pod IPs directly as targets
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx
                port:
                  number: 80
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: nginx-remote
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        nodeAffinity:
          # Prefer nodes labeled as EKS Hybrid Nodes
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              preference:
                matchExpressions:
                  - key: eks.amazonaws.com/compute-type
                    operator: In
                    values:
                      - hybrid
      containers:
        - name: nginx
          image: public.ecr.aws/nginx/nginx:1.26
          volumeMounts:
            - name: workdir
              mountPath: /usr/share/nginx/html
          resources:
            requests:
              cpu: 200m
            limits:
              cpu: 200m
          ports:
            - containerPort: 80
      initContainers:
        # Write an index.html that records the pod IP and node name
        - name: install
          image: busybox:1.28
          command: ["sh", "-c"]
          args:
            - 'echo "Connected to $(POD_IP) on $(NODE_NAME)" > /work-dir/index.html'
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          volumeMounts:
            - name: workdir
              mountPath: "/work-dir"
      volumes:
        - name: workdir
          emptyDir: {}
Let’s confirm the pods were successfully scheduled on our hybrid node.
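One way to list the pods along with the node they landed on (a sketch using kubectl custom columns; the exact command may differ):
kubectl get pods -n nginx-remote -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName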
NAME                     NODE
nginx-787d665f9b-2bcms   mi-027504c0970455ba5
nginx-787d665f9b-hgrnp   mi-027504c0970455ba5
nginx-787d665f9b-kv4x9   mi-027504c0970455ba5
Great! The three nginx pods are running on our hybrid node as expected.
Provisioning of the Application Load Balancer may take a couple of minutes. Before continuing, ensure the load balancer is in an active state. You can check the status of the load balancer with a command like the one below.
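A minimal sketch using the AWS CLI, assuming the load balancer is named k8s-nginxrem-nginx-03efa1e84c (the prefix of the DNS name shown below); the name of the ALB created in your cluster may differ:
aws elbv2 describe-load-balancers --names k8s-nginxrem-nginx-03efa1e84c --query 'LoadBalancers[0].State.Code'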
"active"
Once the Application Load Balancer is active, we can check the Address field of the Ingress to retrieve its DNS name.
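One way to read it (a sketch using jsonpath against the Ingress status):
kubectl get ingress nginx -n nginx-remote -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'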
k8s-nginxrem-nginx-03efa1e84c-012345678.us-west-2.elb.amazonaws.com
With the DNS name of the Application Load Balancer, we can access our deployment through the command line or by entering the address into a web browser. The ALB will then route the traffic to the appropriate pods based on the Ingress rules.
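For example, with curl, using the DNS name retrieved above:
curl http://k8s-nginxrem-nginx-03efa1e84c-012345678.us-west-2.elb.amazonaws.com/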
Connected to 10.53.0.5 on mi-027504c0970455ba5
In the output from curl or the browser, we can see the 10.53.0.X IP address of the pod that received the request from the load balancer, along with the name of the node it is running on: our hybrid node, identifiable by its mi- prefix.
Rerun the curl command or refresh your browser a few times and note that the pod IP changes with each request while the node name stays the same, since all of the pods are scheduled on the same remote node.
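For example, a quick loop (assuming the same DNS name as above):
for i in 1 2 3; do curl -s http://k8s-nginxrem-nginx-03efa1e84c-012345678.us-west-2.elb.amazonaws.com/; done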
Connected to 10.53.0.5 on mi-027504c0970455ba5
Connected to 10.53.0.11 on mi-027504c0970455ba5
Connected to 10.53.0.84 on mi-027504c0970455ba5
We've successfully deployed a workload to our EKS Hybrid Node, configured it to be accessed through an Application Load Balancer, and verified that the traffic is being properly routed to our pods running on the remote node.
Before we move on to explore more use cases with EKS Hybrid Nodes, let's do a little cleanup.