
[CKA] Lightning Lab

happyso 2022. 1. 2. 16:16

Problem 01

Upgrade the current version of kubernetes from 1.19 to 1.20.0 exactly using the kubeadm utility. Make sure that the upgrade is carried out one node at a time starting with the master node. To minimize downtime, the deployment gold-nginx should be rescheduled on an alternate node before upgrading each node.

Upgrade controlplane node first and drain node node01 before upgrading it. Pods for gold-nginx should run on the controlplane node subsequently.

[My Solution]

https://v1-20.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/

## Upgrading the control plane

root@controlplane:~# kubectl drain controlplane 
node/controlplane cordoned
error: unable to drain node "controlplane", aborting command...

There are pending nodes to be drained:
 controlplane
error: cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): kube-system/kube-proxy-wvcw5, kube-system/weave-net-h42jj
root@controlplane:~# kubectl drain controlplane --ignore-daemonsets 
node/controlplane already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-proxy-wvcw5, kube-system/weave-net-h42jj
evicting pod kube-system/coredns-f9fd979d6-d4v2x
evicting pod kube-system/coredns-f9fd979d6-jftxh
pod/coredns-f9fd979d6-d4v2x evicted
pod/coredns-f9fd979d6-jftxh evicted
node/controlplane evicted

 

root@controlplane:~# apt-mark unhold kubeadm && apt-get update && apt-get install -y kubeadm=1.20.0-00 && apt-mark hold kubeadm

# since apt-get version 1.1 you can also use the following method
root@controlplane:~# apt-get update && apt-get install -y --allow-change-held-packages kubeadm=1.20.0-00

 

sudo kubeadm upgrade apply v1.20.0
# on any additional control plane nodes (if present), run `sudo kubeadm upgrade node` instead

 

root@controlplane:~# apt-mark unhold kubelet kubectl && apt-get update && apt-get install -y kubelet=1.20.0-00 kubectl=1.20.0-00 && apt-mark hold kubelet kubectl

# since apt-get version 1.1 you can also use the following method
root@controlplane:~# apt-get update && apt-get install -y --allow-change-held-packages kubelet=1.20.0-00 kubectl=1.20.0-00

 

sudo systemctl daemon-reload
sudo systemctl restart kubelet

 

# bring the controlplane node back into scheduling
kubectl uncordon controlplane

 

## Upgrading the worker node (node01)

root@controlplane:~# kubectl drain node01 --ignore-daemonsets
root@controlplane:~# ssh node01

# replace x in 1.20.x-00 with the latest patch version
root@node01:~# apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm=1.20.0-00 && \
apt-mark hold kubeadm
# since apt-get version 1.1 you can also use the following method
root@node01:~# apt-get update && \
apt-get install -y --allow-change-held-packages kubeadm=1.20.0-00

root@node01:~# sudo kubeadm upgrade node

# replace x in 1.20.x-00 with the latest patch version
root@node01:~# apt-mark unhold kubelet kubectl && \
 apt-get update && apt-get install -y kubelet=1.20.0-00 kubectl=1.20.0-00 && \
 apt-mark hold kubelet kubectl
# since apt-get version 1.1 you can also use the following method
root@node01:~# apt-get update && \
apt-get install -y --allow-change-held-packages kubelet=1.20.0-00 kubectl=1.20.0-00

 

root@node01:~# sudo systemctl daemon-reload
root@node01:~# sudo systemctl restart kubelet

root@controlplane:~# kubectl uncordon node01

 

[Answer]

Here is the solution for this task. Note that the output of these commands has not been included here.

On the controlplane node:

root@controlplane:~# kubectl drain controlplane --ignore-daemonsets
root@controlplane:~# apt update
root@controlplane:~# apt-get install kubeadm=1.20.0-00
root@controlplane:~# kubeadm upgrade plan v1.20.0
root@controlplane:~# kubeadm upgrade apply v1.20.0
root@controlplane:~# apt-get install kubelet=1.20.0-00
root@controlplane:~# systemctl daemon-reload
root@controlplane:~# systemctl restart kubelet
root@controlplane:~# kubectl uncordon controlplane 
root@controlplane:~# kubectl drain node01 --ignore-daemonsets

On the node01 node:

root@node01:~# apt update
root@node01:~# apt-get install kubeadm=1.20.0-00
root@node01:~# kubeadm upgrade node
root@node01:~# apt-get install kubelet=1.20.0-00
root@node01:~# systemctl daemon-reload
root@node01:~# systemctl restart kubelet

Back on the controlplane node:

root@controlplane:~# kubectl uncordon node01
root@controlplane:~# kubectl get pods -o wide | grep gold   # make sure the gold-nginx pod is now scheduled on the controlplane node
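To double-check the upgrade, the node versions can also be inspected (a verification step I would add, not part of the original transcript):

```shell
# Both nodes should now report VERSION v1.20.0 (the kubelet version)
kubectl get nodes
# gold-nginx pods should be Running on the controlplane node
kubectl get pods -o wide | grep gold-nginx
```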

 

Problem 02

Print the names of all deployments in the admin2406 namespace in the following format:

DEPLOYMENT CONTAINER_IMAGE READY_REPLICAS NAMESPACE
<deployment name> <container image used> <ready replica count> <namespace>

The data should be sorted in increasing order of the deployment name.

Example:
DEPLOYMENT CONTAINER_IMAGE READY_REPLICAS NAMESPACE
deploy0 nginx:alpine 1 admin2406
Write the result to the file /opt/admin2406_data.

  • Task completed?

[My Solution]

root@controlplane:~# kubectl get deployments.apps -n admin2406 -o=custom-columns='DEPLOYMENT:metadata.name','CONTAINER_IMAGE:spec.template.spec.containers[*].image','READY_REPLICAS:status.readyReplicas','NAMESPACE:metadata.namespace' --sort-by metadata.name > /opt/admin2406_data
root@controlplane:~# cat /opt/admin2406_data 
DEPLOYMENT   CONTAINER_IMAGE   READY_REPLICAS   NAMESPACE
deploy1      nginx             1                admin2406
deploy2      nginx:alpine      1                admin2406
deploy3      nginx:1.16        1                admin2406
deploy4      nginx:1.17        1                admin2406
deploy5      nginx:latest      1                admin2406
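The sort requirement can be sanity-checked locally with `sort -c`, which is silent and exits 0 when its input is already ordered (a sketch using a hypothetical sample file; on the exam you would point it at /opt/admin2406_data):

```shell
# Hypothetical sample in the same layout as /opt/admin2406_data
cat > /tmp/admin2406_data <<'EOF'
DEPLOYMENT   CONTAINER_IMAGE   READY_REPLICAS   NAMESPACE
deploy1      nginx             1                admin2406
deploy2      nginx:alpine      1                admin2406
deploy3      nginx:1.16        1                admin2406
EOF

# Skip the header row, then check the body is sorted by the first column
tail -n +2 /tmp/admin2406_data | sort -c -k1,1 && echo sorted
```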

 

Problem 03

A kubeconfig file called admin.kubeconfig has been created in /root/CKA. There is something wrong with the configuration. Troubleshoot and fix it.

  • Fix /root/CKA/admin.kubeconfig

[My Solution]

root@controlplane:~/CKA# kubectl cluster-info 
Kubernetes master is running at https://controlplane:6443
KubeDNS is running at https://controlplane:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
...
MDRqRHkySEpjVU1NSmh4dWtCCnVJc2s1cFd4dzF3YWprOVJaZ2pzcEgwTlBQN2NYdmpsdWpTTAotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://controlplane:4330   # wrong port --> change 4330 to 6443
  name: kubernetes
contexts:
- context:
...
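Once the port is corrected, the repaired file can be verified directly (a verification step, assuming the file lives at /root/CKA/admin.kubeconfig):

```shell
# Should print the cluster endpoints instead of a connection-refused error
kubectl cluster-info --kubeconfig /root/CKA/admin.kubeconfig
```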

 

Problem 04

Create a new deployment called nginx-deploy, with image nginx:1.16 and 1 replica. Next upgrade the deployment to version 1.17 using rolling update.

  • Image: nginx:1.16
  • Task: Upgrade the version of the deployment to 1.17

[My Solution]

root@controlplane:~# kubectl create deployment nginx-deploy --image=nginx:1.16 --replicas=1
deployment.apps/nginx-deploy created
root@controlplane:~# kubectl set image deployment/nginx-deploy nginx=nginx:1.17   
deployment.apps/nginx-deploy image updated
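The rolling update can be watched and confirmed afterwards (verification commands, not required by the task itself):

```shell
# Block until the rollout finishes, then inspect the revision history
kubectl rollout status deployment/nginx-deploy
kubectl rollout history deployment/nginx-deploy
# The deployment's pod template should now reference nginx:1.17
kubectl get deployment nginx-deploy -o jsonpath='{.spec.template.spec.containers[0].image}'
```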

 

Problem 05

A new deployment called alpha-mysql has been deployed in the alpha namespace. However, the pods are not running. Troubleshoot and fix the issue. The deployment should make use of the persistent volume alpha-pv to be mounted at /var/lib/mysql and should use the environment variable MYSQL_ALLOW_EMPTY_PASSWORD=1 to make use of an empty root password.

Important: Do not alter the persistent volume.

  • Troubleshoot and fix the issues

[My Solution]

root@controlplane:~# cat pvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-alpha-pvc
  namespace: alpha
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: "" # the empty string must be set explicitly, otherwise the default StorageClass is applied
  volumeName: alpha-pv

 

[Answer]

Use the kubectl describe command on the pending pod and PVC to identify the issue.
A solution manifest to create a PVC called mysql-alpha-pvc is as follows:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-alpha-pvc
  namespace: alpha
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: slow
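After applying the claim, confirm that it binds to alpha-pv and that the mysql pod comes up (a verification sketch):

```shell
# STATUS should show Bound, with VOLUME alpha-pv
kubectl -n alpha get pvc mysql-alpha-pvc
# The alpha-mysql pod should move to Running once the claim is bound
kubectl -n alpha get pods
```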

 

Problem 06

Take the backup of ETCD at the location /opt/etcd-backup.db on the controlplane node.

  • Backup saved at /opt/etcd-backup.db?

[My Solution & Answer]

root@controlplane:~# ETCDCTL_API=3 etcdctl --endpoints https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key snapshot save /opt/etcd-backup.db                    
Snapshot saved at /opt/etcd-backup.db
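The saved snapshot can be checked for integrity with `etcdctl snapshot status` (a verification step, not required by the task):

```shell
# Prints the snapshot's hash, revision, total key count, and size
ETCDCTL_API=3 etcdctl --write-out=table snapshot status /opt/etcd-backup.db
```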

 

Problem 07

Create a pod called secret-1401 in the admin1401 namespace using the busybox image. The container within the pod should be called secret-admin and should sleep for 4800 seconds.

The container should mount a read-only secret volume called secret-volume at the path /etc/secret-volume. The secret being mounted has already been created for you and is called dotfile-secret.

  • Pod created correctly?

[My Solution & Answer]

root@controlplane:~# cat pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: secret-1401
  name: secret-1401
  namespace: admin1401
spec:
  containers:
  - image: busybox
    name: secret-admin
    resources: {}
    command: ["sleep", "4800"]
    volumeMounts:
    - name: secret-volume
      mountPath: "/etc/secret-volume"
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: dotfile-secret

 

root@controlplane:~# kubectl apply -f pod.yaml 
pod/secret-1401 created
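To confirm the secret volume is mounted at the expected path (verification commands I would add):

```shell
# Pod should be Running with the secret-admin container
kubectl -n admin1401 get pod secret-1401
# The secret's keys should appear as files under /etc/secret-volume
kubectl -n admin1401 exec secret-1401 -c secret-admin -- ls /etc/secret-volume
```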