
Using Amazon DynamoDB

The first step in this process is to reconfigure the carts service to use a DynamoDB table that has already been created for us. The application loads most of its configuration from a ConfigMap. Let's take a look at it:

~$kubectl -n carts get -o yaml cm carts
apiVersion: v1
data:
  AWS_ACCESS_KEY_ID: key
  AWS_SECRET_ACCESS_KEY: secret
  CARTS_DYNAMODB_CREATETABLE: "true"
  CARTS_DYNAMODB_ENDPOINT: http://carts-dynamodb:8000
  CARTS_DYNAMODB_TABLENAME: Items
kind: ConfigMap
metadata:
  name: carts
  namespace: carts
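
These values are injected into the carts container as environment variables. To see how the Deployment consumes the ConfigMap, you can inspect its container spec; assuming it references the ConfigMap through envFrom (a common pattern for this sample application, but an assumption here), this will show the reference:

~$kubectl -n carts get deployment carts -o jsonpath='{.spec.template.spec.containers[0].envFrom}{"\n"}'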

Also, check the current state of the application in your browser. A LoadBalancer-type Service named ui-nlb is provisioned in the ui namespace, through which the application's UI can be accessed.

~$kubectl -n ui get service ui-nlb -o jsonpath='{.status.loadBalancer.ingress[*].hostname}{"\n"}'
k8s-ui-uinlb-647e781087-6717c5049aa96bd9.elb.us-west-2.amazonaws.com
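
Optionally, you can confirm from the terminal that the endpoint responds before opening the browser (the load balancer can take a few minutes to become resolvable after provisioning):

~$curl -s -o /dev/null -w "%{http_code}\n" \
  "http://$(kubectl -n ui get service ui-nlb -o jsonpath='{.status.loadBalancer.ingress[*].hostname}')"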

Use the URL output by the command above to open the UI in your browser. It should show the Retail Store home page, as pictured below.

(Screenshot: Retail Store home page)

The following kustomization overwrites the ConfigMap, removing the DynamoDB endpoint configuration, which tells the AWS SDK to use the real DynamoDB service instead of our test Pod. It also sets the name of the DynamoDB table that has already been created for us, pulled from the environment variable CARTS_DYNAMODB_TABLENAME.

~/environment/eks-workshop/modules/security/eks-pod-identity/dynamo/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../../../base-application/carts
configMapGenerator:
  - name: carts
    namespace: carts
    env: config.properties
    behavior: replace
    options:
      disableNameSuffixHash: true
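
The generator reads its key/value pairs from config.properties, which sits alongside the kustomization. Judging from the envsubst step below and the ConfigMap it produces, that file plausibly contains a single placeholder entry along these lines (the exact contents are an assumption; check the file in your environment):

~/environment/eks-workshop/modules/security/eks-pod-identity/dynamo/config.properties
# Assumed contents: envsubst resolves ${CARTS_DYNAMODB_TABLENAME} before apply
CARTS_DYNAMODB_TABLENAME=${CARTS_DYNAMODB_TABLENAME}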

Let's check the value of CARTS_DYNAMODB_TABLENAME, then run Kustomize to switch the application to the real DynamoDB service:

~$echo $CARTS_DYNAMODB_TABLENAME
eks-workshop-carts
~$kubectl kustomize ~/environment/eks-workshop/modules/security/eks-pod-identity/dynamo \
| envsubst | kubectl apply -f-
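
If you'd like to preview the rendered manifests before applying them, you can run the same pipeline without the final apply stage:

~$kubectl kustomize ~/environment/eks-workshop/modules/security/eks-pod-identity/dynamo | envsubst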

This will overwrite our ConfigMap with new values:

~$kubectl -n carts get cm carts -o yaml
apiVersion: v1
data:
  CARTS_DYNAMODB_TABLENAME: eks-workshop-carts
kind: ConfigMap
metadata:
  labels:
    app: carts
  name: carts
  namespace: carts
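
Note that environment variables are injected when a container starts, so the existing carts Pods are still running with the old values. You can see this by printing the environment inside one of them (this assumes the container image provides an env binary):

~$kubectl -n carts exec deployment/carts -- env | grep CARTS_DYNAMODB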

Now, we need to recycle all the carts pods to pick up our new ConfigMap contents:

~$kubectl rollout restart -n carts deployment/carts
deployment.apps/carts restarted
~$kubectl rollout status -n carts deployment/carts --timeout=20s
Waiting for deployment "carts" rollout to finish: 1 old replicas are pending termination...
error: timed out waiting for the condition

It looks like our change failed to deploy properly. We can confirm this by looking at the Pods:

~$kubectl -n carts get pod
NAME                              READY   STATUS             RESTARTS        AGE
carts-5d486d7cf7-8qxf9            1/1     Running            0               5m49s
carts-df76875ff-7jkhr             0/1     CrashLoopBackOff   3 (36s ago)     2m2s
carts-dynamodb-698674dcc6-hw2bg   1/1     Running            0               20m

What's gone wrong?
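
A good first step is to inspect the logs and recent events for the crashing Pod (the Pod name will differ in your environment, so substitute the one from the output above):

~$kubectl -n carts logs carts-df76875ff-7jkhr
~$kubectl -n carts get events --sort-by=.lastTimestamp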