Problem 01
Create a new service account with the name pvviewer. Grant this service account access to list all PersistentVolumes in the cluster by creating an appropriate ClusterRole called pvviewer-role and a ClusterRoleBinding called pvviewer-role-binding.
Next, create a pod called pvviewer with the image redis and serviceAccount pvviewer in the default namespace.
- ServiceAccount: pvviewer
- ClusterRole: pvviewer-role
- ClusterRoleBinding: pvviewer-role-binding
- Pod: pvviewer
- Pod configured to use ServiceAccount pvviewer?
[My Solution]
root@controlplane:~# kubectl create sa pvviewer
serviceaccount/pvviewer created
root@controlplane:~# cat role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pvviewer-role
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["list"]
root@controlplane:~# kubectl create rolebinding pvviewer-role-binding --clusterrole=pvviewer-role --serviceaccount=default:pvviewer
rolebinding.rbac.authorization.k8s.io/pvviewer-role-binding created
Note: this attempt created a namespaced Role and RoleBinding, but the task asks for a ClusterRole and ClusterRoleBinding. PersistentVolumes are cluster-scoped, so the namespaced objects cannot grant the required access; the answer below uses the cluster-scoped equivalents.
kubectl run pvviewer --image=redis --dry-run=client -o yaml > pod.yaml
vi pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvviewer
spec:
  containers:
  - image: redis
    name: pvviewer
  serviceAccountName: pvviewer
kubectl apply -f pod.yaml
[Answer]
Pods authenticate to the API server using ServiceAccounts. If no serviceAccount name is specified, the default service account for the namespace is used during pod creation.
Reference: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
Now, create a service account pvviewer:
kubectl create serviceaccount pvviewer
To create a clusterrole:
kubectl create clusterrole pvviewer-role --resource=persistentvolumes --verb=list
To create a clusterrolebinding:
kubectl create clusterrolebinding pvviewer-role-binding --clusterrole=pvviewer-role --serviceaccount=default:pvviewer
Solution manifest file to create a new pod called pvviewer as follows:
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: pvviewer
  name: pvviewer
spec:
  containers:
  - image: redis
    name: pvviewer
  # Add service account name
  serviceAccountName: pvviewer
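A quick way to verify the RBAC setup (not part of the graded answer) is to impersonate the service account with kubectl auth can-i:
kubectl auth can-i list persistentvolumes --as=system:serviceaccount:default:pvviewer
This should print yes once the ClusterRole and ClusterRoleBinding are in place.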
Problem 02
List the InternalIP of all nodes of the cluster. Save the result to a file /root/CKA/node_ips.
Answer should be in the format: InternalIP of controlplane<space>InternalIP of node01 (in a single line)
[My Solution]
root@controlplane:~# kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}' > /root/CKA/node_ips
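To sanity-check the result (assuming the two-node cluster used in this exam):
cat /root/CKA/node_ips
This should print the two InternalIPs on a single line, separated by a space.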
Reference: https://kubernetes.io/docs/reference/kubectl/cheatsheet/
Problem 03
Create a pod called multi-pod with two containers.
Container 1: name: alpha, image: nginx
Container 2: name: beta, image: busybox, command: sleep 4800
Environment Variables:
Container 1: name: alpha
Container 2: name: beta
- Pod Name: multi-pod
- Container 1: alpha
- Container 2: beta
- Container beta commands set correctly?
- Container 1 Environment Value Set
- Container 2 Environment Value Set
[My Solution]
root@controlplane:~# cat multiple-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: multi-pod
spec:
  containers:
  - name: alpha
    image: nginx
    ports:
    - containerPort: 80
    env:
    - name: alpha
  - name: beta
    image: busybox
    command: ["sleep", "4800"]
    env:
    - name: beta
Note: the env entries above only declare variables named alpha and beta without values; the answer below instead sets a variable called name with the values alpha and beta.
[Answer]
Solution manifest file to create a multi-container pod multi-pod as follows:
---
apiVersion: v1
kind: Pod
metadata:
  name: multi-pod
spec:
  containers:
  - image: nginx
    name: alpha
    env:
    - name: name
      value: alpha
  - image: busybox
    name: beta
    command: ["sleep", "4800"]
    env:
    - name: name
      value: beta
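To spot-check the environment variables in each container (a quick verification, using the container names above):
kubectl exec multi-pod -c alpha -- env | grep ^name
kubectl exec multi-pod -c beta -- env | grep ^name
These should print name=alpha and name=beta respectively.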
Problem 04
Create a pod called non-root-pod, image: redis:alpine
runAsUser: 1000
fsGroup: 2000
- Pod non-root-pod fsGroup configured
- Pod non-root-pod runAsUser configured
[My Solution]
root@controlplane:~# cat non-root-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: non-root-pod
  name: non-root-pod
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
  containers:
  - image: redis:alpine
    name: non-root-pod
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
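To confirm the security context took effect (a quick check; exact output depends on the image):
kubectl exec non-root-pod -- id
The uid should be 1000 and the groups should include 2000, the fsGroup.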
Problem 05
We have deployed a new pod called np-test-1 and a service called np-test-service. Incoming connections to this service are not working. Troubleshoot and fix it.
Create NetworkPolicy, by the name ingress-to-nptest that allows incoming connections to the service over port 80.
Important: Don't delete any current objects deployed.
- Important: Don't Alter Existing Objects!
- NetworkPolicy: Applied to All sources (Incoming traffic from all pods)?
- NetworkPolicy: Correct Port?
- NetworkPolicy: Applied to correct Pod?
[My Solution]
root@controlplane:~# cat ingress.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-to-nptest
spec:
  podSelector: {}
  ingress:
  - from:
    ports:
    - protocol: TCP
      port: 80
  policyTypes:
  - Ingress
Note: the empty podSelector applies this policy to every pod in the namespace; the answer below scopes it to the np-test-1 pod.
[Answer]
Reference: https://zgundam.tistory.com/196
How to test the connection:
kubectl run test-np --image=busybox:1.28 --rm -it -- sh
nc -z -v -w 2 np-test-service 80
nc: netcat
-z: scan mode; only checks whether the connection succeeds, then closes it immediately
-v: verbose
-w secs: time out after secs seconds
Solution manifest file to create a network policy ingress-to-nptest as follows:
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-to-nptest
  namespace: default
spec:
  podSelector:
    matchLabels:
      run: np-test-1
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - protocol: TCP
      port: 80
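With the policy applied, the nc test shown above should now connect. You can also inspect the policy directly (standard kubectl; netpol is the resource's short name):
kubectl describe netpol ingress-to-nptest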
Problem 06
Taint the worker node node01 to be Unschedulable. Once done, create a pod called dev-redis, image redis:alpine, to ensure workloads are not scheduled to this worker node. Finally, create a new pod called prod-redis and image: redis:alpine with toleration to be scheduled on node01.
key: env_type, value: production, operator: Equal and effect: NoSchedule
- Key = env_type
- Value = production
- Effect = NoSchedule
- pod 'dev-redis' (no tolerations) is not scheduled on node01?
- Create a pod 'prod-redis' to run on node01
[My Solution]
root@controlplane:~# kubectl taint nodes node01 env_type=production:NoSchedule
root@controlplane:~# kubectl run dev-redis --image=redis:alpine
pod/dev-redis created
root@controlplane:~# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
dev-redis 1/1 Running 0 10s 10.50.0.4 controlplane <none> <none>
np-test-1 1/1 Running 0 5m32s 10.50.192.1 node01 <none> <none>
root@controlplane:~# cat toleration.yaml
apiVersion: v1
kind: Pod
metadata:
  name: prod-redis
spec:
  containers:
  - name: redis
    image: redis:alpine
    imagePullPolicy: IfNotPresent
  tolerations:
  - key: "env_type"
    operator: "Equal"
    value: "production"
    effect: "NoSchedule"
Problem 07
Create a pod called hr-pod in the hr namespace belonging to the production environment and frontend tier.
image: redis:alpine
Use appropriate labels and create all the required objects if it does not exist in the system already.
- hr-pod labeled with environment production?
- hr-pod labeled with tier frontend?
[My Solution]
root@controlplane:~# cat redis.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: hr-pod
    production: environment
    frontend: tier
  name: hr-pod
  namespace: hr
spec:
  containers:
  - image: redis:alpine
    name: hr-pod
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
Note: these labels have key and value swapped; the grader expects environment: production and tier: frontend, as in the answer below.
root@controlplane:~# kubectl apply -f redis.yaml
Error from server (NotFound): error when creating "redis.yaml": namespaces "hr" not found
root@controlplane:~# kubectl create ns hr
namespace/hr created
root@controlplane:~# kubectl apply -f redis.yaml
pod/hr-pod created
[Answer]
Create a namespace if it doesn't exist:
kubectl create namespace hr
and then create the hr-pod with the given details:
kubectl run hr-pod --image=redis:alpine --namespace=hr --labels=environment=production,tier=frontend
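To verify the labels (a quick check with standard kubectl flags):
kubectl get pod hr-pod -n hr --show-labels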
Problem 08
A kubeconfig file called super.kubeconfig has been created under /root/CKA. There is something wrong with the configuration. Troubleshoot and fix it.
- Fix /root/CKA/super.kubeconfig
[My Solution]
Failed; I didn't know how to approach this one.
[Answer]
root@controlplane:~/CKA# kubectl cluster-info
Kubernetes control plane is running at https://controlplane:6443
KubeDNS is running at https://controlplane:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
root@controlplane:~/CKA# vim super.kubeconfig
The server address in the kubeconfig points at port 9999; change it to 6443, the port the API server is actually listening on (as shown by kubectl cluster-info above).
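To confirm the fix (using kubectl's standard --kubeconfig flag):
kubectl cluster-info --kubeconfig=/root/CKA/super.kubeconfig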
Problem 09
We have created a new deployment called nginx-deploy. scale the deployment to 3 replicas. Has the replica's increased? Troubleshoot the issue and fix it.
- deployment has 3 replicas
[My Solution]
root@controlplane:~# kubectl scale deployment nginx-deploy --replicas=3
deployment.apps/nginx-deploy scaled
I couldn't figure out why the replica count didn't increase.
[Answer]
The component that guarantees the desired number of replicas is the kube-controller-manager.
vi /etc/kubernetes/manifests/kube-controller-manager.yaml
Fix the misspelled contro1ler (with the digit 1) to controller.
Use the command kubectl scale to increase the replica count to 3.
kubectl scale deploy nginx-deploy --replicas=3
The controller-manager is responsible for scaling up pods of a replicaset. If you inspect the control plane components in the kube-system namespace, you will see that the controller-manager is not running.
kubectl get pods -n kube-system
The command running inside the controller-manager pod is incorrect.
After fixing all the values in the file, wait for the controller-manager pod to restart.
Alternatively, you can run sed command to change all values at once:
sed -i 's/kube-contro1ler-manager/kube-controller-manager/g' /etc/kubernetes/manifests/kube-controller-manager.yaml
This will fix the issues in controller-manager yaml file.
Finally, inspect the deployment:
kubectl get deploy
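Once the controller-manager pod is Running again, the deployment should scale up (standard kubectl commands):
kubectl get pods -n kube-system | grep controller-manager
kubectl get deploy nginx-deploy
The deployment should report 3/3 ready replicas.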