Routing Traffic to Hybrid Nodes

Now that we have our EKS Hybrid Node instance connected to the cluster, we can deploy a sample workload. In this case, we will use the nginx Deployment and Ingress manifests below. In the Deployment, we use a nodeAffinity rule to tell the Kubernetes scheduler to prefer nodes carrying the eks.amazonaws.com/compute-type=hybrid label, which EKS applies to hybrid nodes.
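If you want to confirm this label is present on your hybrid node before applying the manifests, a label selector query is one way to check (node names will differ in your environment):

~$kubectl get nodes -l eks.amazonaws.com/compute-type=hybrid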

~$kubectl apply -k ~/environment/eks-workshop/modules/networking/eks-hybrid-nodes/kustomize
namespace/nginx-remote created
service/nginx created
deployment.apps/nginx created
ingress.networking.k8s.io/nginx created
~/environment/eks-workshop/modules/networking/eks-hybrid-nodes/kustomize/ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  namespace: nginx-remote
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx
                port:
                  number: 80
~/environment/eks-workshop/modules/networking/eks-hybrid-nodes/kustomize/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: nginx-remote
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              preference:
                matchExpressions:
                  - key: eks.amazonaws.com/compute-type
                    operator: In
                    values:
                      - hybrid
      containers:
        - name: nginx
          image: public.ecr.aws/nginx/nginx:1.26
          volumeMounts:
            - name: workdir
              mountPath: /usr/share/nginx/html
          resources:
            requests:
              cpu: 200m
            limits:
              cpu: 200m
          ports:
            - containerPort: 80
      initContainers:
        - name: install
          image: busybox:1.28
          command: ["sh", "-c"]
          args:
            - 'echo "Connected to $(POD_IP) on $(NODE_NAME)" > /work-dir/index.html'
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          volumeMounts:
            - name: workdir
              mountPath: "/work-dir"
      volumes:
        - name: workdir
          emptyDir: {}

Each pod's init container writes its pod IP and node name into an index.html file on a shared emptyDir volume, which the nginx container then serves. Let's confirm the pods were successfully scheduled on our hybrid node:

~$kubectl get pods -n nginx-remote -o=custom-columns='NAME:.metadata.name,NODE:.spec.nodeName'
NAME                     NODE
nginx-787d665f9b-2bcms   mi-027504c0970455ba5
nginx-787d665f9b-hgrnp   mi-027504c0970455ba5
nginx-787d665f9b-kv4x9   mi-027504c0970455ba5

Great! The three nginx pods are running on our hybrid node as expected.
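You can also list the pods with -o wide to see the pod IPs that will show up in the responses later (IPs and node names will differ in your environment):

~$kubectl get pods -n nginx-remote -o wide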

tip

The provisioning of the Application Load Balancer may take a couple of minutes. Before continuing, ensure the load balancer is in an active state by checking its status with the following command:

~$aws elbv2 describe-load-balancers --query 'LoadBalancers[?contains(LoadBalancerName, `k8s-nginxrem-nginx`)] | [0].State.Code'
"active"

Once the Application Load Balancer is active, we can read its DNS name from the address recorded in the Ingress status:

~$export ADDRESS=$(kubectl get ingress -n nginx-remote nginx -o jsonpath="{.status.loadBalancer.ingress[*].hostname}{'\n'}") && echo $ADDRESS
k8s-nginxrem-nginx-03efa1e84c-012345678.us-west-2.elb.amazonaws.com

With the DNS name of the Application Load Balancer, we can access our deployment through the command line or by entering the address into a web browser. The ALB will then route the traffic to the appropriate pods based on the Ingress rules.

~$curl $ADDRESS
Connected to 10.53.0.5 on mi-027504c0970455ba5

In the output from curl (or in the browser), we can see the 10.53.0.x IP address of the pod that served the request, along with the name of our hybrid node, identifiable by its mi- prefix.

Rerun the curl command or refresh your browser a few times and note that the pod IP changes with each request while the node name stays the same, since all pods are scheduled on the same remote node.

~$curl -s $ADDRESS
Connected to 10.53.0.5 on mi-027504c0970455ba5
~$curl -s $ADDRESS
Connected to 10.53.0.11 on mi-027504c0970455ba5
~$curl -s $ADDRESS
Connected to 10.53.0.84 on mi-027504c0970455ba5

We've successfully deployed a workload to our EKS Hybrid Node, configured it to be accessed through an Application Load Balancer, and verified that the traffic is being properly routed to our pods running on the remote node.

Before we move on to explore more use cases with EKS Hybrid Nodes, let's do a little cleanup.

~$kubectl delete -k ~/environment/eks-workshop/modules/networking/eks-hybrid-nodes/kustomize --ignore-not-found=true
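
If you want to verify the cleanup, the namespace should no longer be listed once deletion finishes (this command prints nothing when it is gone):

~$kubectl get namespace nginx-remote --ignore-not-found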