Kubernetes on AWS
References:
- AWS docs
Steps
- Upgrade your aws cli
- Set up default profile
aws sso login
- Install eksctl, kubectl
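For `aws sso login` to work, the default profile needs SSO settings in `~/.aws/config`. Running `aws configure sso` generates a profile along these lines (all values below are hypothetical placeholders):

```ini
[profile default]
sso_start_url  = https://my-sso-portal.awsapps.com/start
sso_region     = us-east-1
sso_account_id = 111122223333
sso_role_name  = AdministratorAccess
region         = us-east-1
output         = json
```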
- Create the k8s cluster using managed nodes (not Fargate, since Fargate does not support EBS persistent storage):
eksctl create cluster --name my-cluster --region region-code
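The same cluster can also be described in an eksctl config file and created with `eksctl create cluster -f cluster.yaml`. A sketch (node-group name, instance type, and size are illustrative; `my-cluster` and `region-code` are the placeholders from the command above):

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster      # placeholder, as in the command above
  region: region-code   # placeholder
managedNodeGroups:
  - name: ng-1          # illustrative values from here down
    instanceType: m5.large
    desiredCapacity: 2
```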
- Deploy a sample application
kubectl create namespace eks-sample-app
kubectl apply -f eks-sample-deployment.yaml   # yaml file is on that page
kubectl apply -f eks-sample-service.yaml   # yaml file is on that page
kubectl get all -n eks-sample-app
kubectl -n eks-sample-app describe service eks-sample-linux-service
kubectl -n eks-sample-app describe pod eks-sample-linux-deployment-xxx
kubectl exec -it eks-sample-linux-deployment-xxx -n eks-sample-app -- /bin/bash
- From the pod shell:
curl eks-sample-linux-service; cat /etc/resolv.conf
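The bare name `eks-sample-linux-service` works in curl because the pod's `/etc/resolv.conf` lists cluster search domains that DNS appends in order. A self-contained sketch of that lookup expansion, using a hypothetical resolv.conf shaped like what the in-pod `cat` typically shows:

```shell
# Hypothetical resolv.conf contents (real values come from the pod)
cat > resolv.conf <<'EOF'
search eks-sample-app.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.100.0.10
options ndots:5
EOF

# DNS tries the short name with each search suffix in order;
# the first candidate is the service's full in-cluster name
awk '/^search/ { for (i = 2; i <= NF; i++) print "eks-sample-linux-service." $i }' resolv.conf
```

The first candidate printed, `eks-sample-linux-service.eks-sample-app.svc.cluster.local`, is the name that actually resolves.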
kubectl delete namespace eks-sample-app
- Creating an IAM OIDC provider for your cluster
export cluster_name=airbyte
oidc_id=$(aws eks describe-cluster --name $cluster_name --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5)
aws iam list-open-id-connect-providers | grep $oidc_id | cut -d "/" -f4   # If empty, the OIDC provider is not set
eksctl utils associate-iam-oidc-provider --cluster $cluster_name --approve   # Now the previous command will list it
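The `cut -d '/' -f 5` above keeps the fifth slash-separated field of the issuer URL, which is the bare OIDC id. A self-contained check with a hypothetical issuer URL (the real one comes from `aws eks describe-cluster`):

```shell
# Hypothetical issuer URL; fields split on '/': "https:", "", host, "id", <oidc id>
issuer="https://oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"

# Field 5 is the bare OIDC id
oidc_id=$(echo "$issuer" | cut -d '/' -f 5)
echo "$oidc_id"   # EXAMPLED539D4633E53DE1B71EXAMPLE
```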
- Creating the Amazon EBS CSI driver IAM role
eksctl create iamserviceaccount \
  --name ebs-csi-controller-sa \
  --namespace kube-system \
  --cluster airbyte \
  --role-name AmazonEKS_EBS_CSI_DriverRole_airbyte \
  --role-only \
  --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
  --approve
eksctl create addon --name aws-ebs-csi-driver --cluster airbyte \
  --service-account-role-arn arn:aws:iam::694782716000:role/AmazonEKS_EBS_CSI_DriverRole_airbyte --force
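Under the hood, eksctl gives the role a trust policy so that only the `ebs-csi-controller-sa` service account can assume it through the cluster's OIDC provider. It looks roughly like this (`<ACCOUNT_ID>`, `<REGION>`, and `<OIDC_ID>` are placeholders; the exact generated policy may differ slightly):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<ACCOUNT_ID>:oidc-provider/oidc.eks.<REGION>.amazonaws.com/id/<OIDC_ID>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.<REGION>.amazonaws.com/id/<OIDC_ID>:sub": "system:serviceaccount:kube-system:ebs-csi-controller-sa"
        }
      }
    }
  ]
}
```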
- Deploy a sample application and verify that the CSI driver is working
git clone https://github.com/kubernetes-sigs/aws-ebs-csi-driver.git
cd aws-ebs-csi-driver/examples/kubernetes/dynamic-provisioning/
printf "parameters:\n  type: gp3\n" >> manifests/storageclass.yaml
kubectl apply -f manifests/
kubectl get pods --watch   # Wait for the app pod to be Running
kubectl get pv
kubectl describe pv pvc-xxx
kubectl exec -it app -- cat /data/out.txt
kubectl delete -f manifests/
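The append step is meant to add a `parameters:` block requesting gp3 volumes; appending it as a single line (`echo "parameters: type: gp3"`) would not be valid YAML, so the block has to land as two lines. A self-contained sketch (the StorageClass contents below mirror the repo's example and may drift from the real file):

```shell
# Recreate the example StorageClass locally (illustrative copy of the repo's manifest)
cat > storageclass.yaml <<'EOF'
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
EOF

# Append the parameters block as two properly indented YAML lines
printf "parameters:\n  type: gp3\n" >> storageclass.yaml
tail -n 2 storageclass.yaml
```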
Installing Airbyte on AWS Kubernetes
- Create a namespace, and install Airbyte:
kubectl create namespace airbyte
helm install airbyte airbyte/airbyte --version 0.45.50 --namespace airbyte --debug
- Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace airbyte -l "app.kubernetes.io/name=webapp" -o jsonpath="{.items[0].metadata.name}")
export CONTAINER_PORT=$(kubectl get pod --namespace airbyte $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl --namespace airbyte port-forward $POD_NAME 8080:$CONTAINER_PORT