2020.12.02

A probe checks whether the service inside a pod is ready,
and whether a failure occurs afterward.

Three probe methods: exec, httpGet, tcpSocket

Based on the result of the probe:

the Service object acts as an LB, and the problem of a client sending requests before the endpoint (ep) is ready can be solved by using a probe.
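A minimal sketch of that idea (the pod name, image, and paths here are hypothetical, not from the notes): the readinessProbe gates whether the pod is added to the Service endpoints, and the livenessProbe restarts the container when it keeps failing.

apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: nginx
    readinessProbe:        # pod joins the Service endpoints only while this passes
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:         # container is restarted when this keeps failing
      tcpSocket:
        port: 80
      failureThreshold: 3
      periodSeconds: 10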

DaemonSet and StatefulSet controllers
Deployment controller = the controller best suited to deploying stateless processes

The Docker image can be changed through a rolling update.
Three ways to change the Docker image:
set
edit
apply : performs an update even if the resource already exists (upsert)
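A hedged example of the three approaches (deployment, container, and file names are hypothetical):

# set: change the image in place
kubectl set image deployment/web nginx=nginx:1.19

# edit: open the live object in an editor and change spec.template.spec.containers[].image
kubectl edit deployment web

# apply: update the manifest file and re-apply it (creates the resource if it does not exist)
kubectl apply -f web-deploy.yaml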

OnDelete :
Zero-downtime service : RollingUpdate
Recreate can cause a brief interruption

Commands related to rolling updates

A rolling update's maxSurge and maxUnavailable can be configured
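A sketch of where those knobs live in a Deployment spec (the values here are only an example, not from the notes):

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # how many extra pods may be created above the desired count
      maxUnavailable: 0    # how many pods may be unavailable during the update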

Rolling update vs. blue/green deployment
Blue/green : create the new version first -> switch the Service's selector so its label points at the new version
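A minimal sketch of the selector switch, assuming a Service named web currently selecting version: blue and an already-running green Deployment labeled version: green (all names hypothetical):

# point the existing Service at the green pods in one step
kubectl patch service web -p '{"spec":{"selector":{"app":"web","version":"green"}}}'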

Volumes
data sharing
persistent storage
settings provided in a decoupled way (stored in etcd)
metadata provided

Various volume types
1. Sharing inside a pod (shared between its containers) (see the emptyDir sketch below)
emptyDir : a backend solution that is deleted together with the pod

2. Sharing outside the pod

Three different types, e.g.
NFS
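For case 1 above (sharing between the containers of one pod), a minimal emptyDir sketch; the pod and container names are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: shared-pod
spec:
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "while true; do date >> /data/out.log; sleep 5; done"]
    volumeMounts:
    - name: shared
      mountPath: /data
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep 10; tail -f /data/out.log"]
    volumeMounts:
    - name: shared
      mountPath: /data
  volumes:
  - name: shared
    emptyDir: {}       # lives as long as the pod; deleted together with the pod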

Decoupling method (not defined inside the pod manifest; managed by a separate manifest) : PV, PVC

PV
What should happen to the PVC that was bound to the PV?
What should happen to the backend storage connected to the PV?
Retain : keep it
Delete : delete it (with EBS in a cloud environment, the Delete policy is commonly used for security)
Recycle : reuse it

Volume mode
Filesystem : mounted directly by the container inside the pod
Raw Block

Access modes operate per NODE
Once (ReadWriteOnce)
Many (ReadOnlyMany, ReadWriteMany)

PVC
A PV matching the requested conditions is selected and used in a 1:1 mapping.

Once binding happens through the PVC, the pod uses it.
The container inside the pod specifies a mount point.
With the backend solution,
when a process inside the pod writes a file, it ends up in the storage mounted through the PV.
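A minimal PV/PVC/pod sketch of that flow, assuming an NFS backend; the server address, export path, and all names are hypothetical:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs
spec:
  capacity:
    storage: 1Gi
  accessModes: ["ReadWriteMany"]
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.0.0.10
    path: /export
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: pv-user
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html   # the mount point inside the container
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc-nfs                 # binds the pod to whatever PV the PVC matched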

Raw Block
the PV specifies Block mode
the PVC requests Block mode
the pod specifies a device path. All three must line up.

With dynamic provisioning, a StorageClass is used.

StorageClass example
can be marked as the default via an annotation
define several SCs (sc1, sc2, sc3)
when developers specify a storageClassName in the PVC:
create the SC
create the PVC -> request
PV is provisioned
the actual device is created
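A sketch of that flow, assuming the in-tree AWS EBS provisioner; the class name, claim name, and sizes are hypothetical:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc1
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # mark this SC as the default
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  storageClassName: sc1       # the developer only names the class; the PV/device is created on demand
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi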

ConfigMaps
Secrets
ConfigMaps and Secrets can change how an application behaves; they are provided in a decoupled way.
Mounted as volumes into a pod already started from its Docker image; the resources are key/value pairs.

Secrets : provide sensitive data encoded in Base64

ConfigMaps
Three ways to create/consume them:
environment variables
configMap reference
delivered as files through a volume
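A minimal sketch of creating a ConfigMap and consuming it both as environment variables and as files (the names and key are hypothetical):

# create from literals (could also be --from-file)
kubectl create configmap app-config --from-literal=LOG_LEVEL=debug

apiVersion: v1
kind: Pod
metadata:
  name: cm-demo
spec:
  containers:
  - name: app
    image: nginx
    envFrom:
    - configMapRef:
        name: app-config          # every key becomes an environment variable
    volumeMounts:
    - name: config
      mountPath: /etc/app         # every key becomes a file under this path
  volumes:
  - name: config
    configMap:
      name: app-config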

Secrets
Used when deploying a DB as a pod, i.e. when setting sensitive data such as the ID/password at install time.
Creating a namespace creates a default Secret.
We also checked the data by looking at the Secret's mount.
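A sketch for the DB password case (secret name, key, and env var are hypothetical); note the value is only Base64-encoded, not encrypted:

# kubectl does the base64 encoding for you
kubectl create secret generic db-auth --from-literal=password='S3cret!'

# container spec fragment: consume it as an environment variable inside the DB pod
    env:
    - name: MYSQL_ROOT_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-auth
          key: password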

Scheduler
taint : set at the node level
If a specific taint is set on a worker node,
pods avoid the tainted node and land on the ordinary nodes.
Pods can also be made to land only on the tainted nodes -> Toleration

Toleration
even so, a pod can be configured to stay on the node,
or to stay on the node only for a certain period of time

kubectl drain -> NoExecute behavior
kubectl cordon -> NoSchedule behavior

Pod-level tolerations
can be used to schedule pods only onto nodes set up with, for example, a specific GPU.

Set the value,
the operator,
and the effect.
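A sketch of that value/operator/effect trio, assuming a node tainted for GPU work (the node name and key/value are hypothetical):

# node level: only pods that tolerate gpu=true may land here
kubectl taint nodes node1 gpu=true:NoSchedule

# pod level: the matching toleration
spec:
  tolerations:
  - key: "gpu"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
    # tolerationSeconds: 300   # only with NoExecute: how long the pod may stay after the taint appears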

 

2020.12.03

Service and DNS records
1. Headless (SRV)

2. ClusterIP (A)
• Specifying --external-ip= => it is used as the inbound interface.
• The Service plays the L4 role.
• Ingress Controller (L7 role) : provides the front gate to the services.
By default, services connect pods to each other internally (cluster-internal communication).
Need to serve external clients? Register an external IP.

3. NodePort (A)
• A Service object can be switched to the NodePort type at any time : a NodePort is allocated on every member node.
• The port is the same everywhere, and inbound traffic is distributed by kube-proxy (rule-chain based).
• The client requests the domain -> DNS distributes the requests round-robin.
A random node port is opened on every node, so requests coming from outside can be received anywhere.

4. LoadBalancer (A)
• In a cloud environment, an ELB is tied to the Service.
• An FQDN is registered on the ELB.
A single endpoint in the cloud; internally the traffic rides the NodePort and is sent on to the internal nodes.

5. ExternalName (CNAME)

Ingress Controller : acts as an LB that consolidates and manages the mapping of multiple domains to internal services.


Authentication -> Authorization -> Admission control

Core elements of RBAC
1. Subjects (who accesses the resources : user, sa, group)
2. The scope of the various resources :
- cluster-level resources : pv, node ...
- namespace-scoped resources : sts, ds ...

 

What an RBAC Role is defined by
1. the resources it targets
2. and the actions (verbs) on them

 

RoleBinding : assigns the RBAC role to the subject
- create the RoleBinding -> assign

 

ClusterRoleBinding : when binding across all namespaces

RoleBinding : when binding within a specific namespace
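A minimal Role/RoleBinding sketch tying those pieces together (the role, namespace, and service account names are hypothetical):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: dev
rules:
- apiGroups: [""]                    # the resources being targeted
  resources: ["pods"]
  verbs: ["get", "list", "watch"]    # the allowed actions
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:                            # the subject being granted the role
- kind: ServiceAccount
  name: app-sa
  namespace: dev
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io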


vi hpa-dp.yaml : write the manifest

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ex-dp
  labels:
    hpa: test
spec:
  replicas: 2
  selector:            # required for apps/v1; must match the template labels
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        resources:
          requests:
            cpu: 200m
        ports:
        - containerPort: 80
          protocol: TCP

 

Create and verify the Deployment

root@master1:~/hpa# kubectl create -f hpa-dp.yaml 
deployment.apps/ex-dp created

root@master1:~/hpa# kubectl autoscale deployment ex-dp --cpu-percent=50 --min=1 --max=5
horizontalpodautoscaler.autoscaling/ex-dp autoscaled

root@master1:~/hpa# kubectl get hpa
NAME    REFERENCE          TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
ex-dp   Deployment/ex-dp   <unknown>/50%   1         5         0          12s

root@master1:~/hpa# kubectl get hpa ex-dp -o yaml
apiVersion: autoscaling/v1
	kind: HorizontalPodAutoscaler
	metadata:
	  annotations:
		autoscaling.alpha.kubernetes.io/conditions: '[{"type":"AbleToScale","status":"True","lastTransitionTime":"2020-12-04T06:35:03Z","reason":"SucceededGetScale","message":"the
		  HPA controller was able to get the target''s current scale"},{"type":"ScalingActive","status":"False","lastTransitionTime":"2020-12-04T06:35:03Z","reason":"FailedGetResourceMetric","message":"the
		  HPA was unable to compute the replica count: unable to get metrics for resource
		  cpu: no metrics returned from resource metrics API"}]'
	  creationTimestamp: "2020-12-04T06:34:48Z"
	  managedFields:
	  - apiVersion: autoscaling/v1
		fieldsType: FieldsV1
		fieldsV1:
		  f:spec:
			f:maxReplicas: {}
			f:minReplicas: {}
			f:scaleTargetRef:
			  f:apiVersion: {}
			  f:kind: {}
			  f:name: {}
			f:targetCPUUtilizationPercentage: {}
		manager: kubectl-autoscale
		operation: Update
		time: "2020-12-04T06:34:48Z"
	  - apiVersion: autoscaling/v1
		fieldsType: FieldsV1
		fieldsV1:
		  f:metadata:
			f:annotations:
			  .: {}
			  f:autoscaling.alpha.kubernetes.io/conditions: {}
		  f:status:
			f:currentReplicas: {}
		manager: kube-controller-manager
		operation: Update
		time: "2020-12-04T06:35:03Z"
	  name: ex-dp
	  namespace: default
	  resourceVersion: "454553"
	  selfLink: /apis/autoscaling/v1/namespaces/default/horizontalpodautoscalers/ex-dp
	  uid: db7c37d2-1a31-46b4-bf5f-032db6ec26d3
	spec:
	  maxReplicas: 5
	  minReplicas: 1
	  scaleTargetRef:
		apiVersion: apps/v1
		kind: Deployment
		name: ex-dp
	  targetCPUUtilizationPercentage: 50
	status:
	  currentReplicas: 2
	  desiredReplicas: 0
      
root@master1:~/hpa# kubectl describe hpa ex-dp
Name:                                                  ex-dp
	Namespace:                                             default
	Labels:                                                <none>
	Annotations:                                           <none>
	CreationTimestamp:                                     Fri, 04 Dec 2020 06:34:48 +0000
	Reference:                                             Deployment/ex-dp
	Metrics:                                               ( current / target )
	  resource cpu on pods  (as a percentage of request):  <unknown> / 50%
	Min replicas:                                          1
	Max replicas:                                          5
	Deployment pods:                                       2 current / 0 desired
	Conditions:
	  Type           Status  Reason                   Message
	  ----           ------  ------                   -------
	  AbleToScale    True    SucceededGetScale        the HPA controller was able to get the target's current scale
	  ScalingActive  False   FailedGetResourceMetric  the HPA was unable to compute the replica count: did not receive metrics for any ready pods
	Events:
	  Type     Reason                        Age                From                       Message
	  ----     ------                        ----               ----                       -------
	  Warning  FailedGetResourceMetric       28s (x2 over 43s)  horizontal-pod-autoscaler  unable to get metrics for resource cpu: no metrics returned from resource metrics API
	  Warning  FailedComputeMetricsReplicas  28s (x2 over 43s)  horizontal-pod-autoscaler  invalid metrics (1 invalid out of 1), first error is: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
	  Warning  FailedGetResourceMetric       12s                horizontal-pod-autoscaler  did not receive metrics for any ready pods
	  Warning  FailedComputeMetricsReplicas  12s                horizontal-pod-autoscaler  invalid metrics (1 invalid out of 1), first error is: failed to get cpu utilization: did not receive metrics for any ready pods

 

Open another window and keep monitoring.

# kubectl get hpa -w

 

Write a script to generate load.

vi  cpuhog 
#!/bin/bash

while :
do
x=1
done
ubuntu@master1:~$ chmod +x cpuhog 
ubuntu@master1:~$ kubectl cp cpuhog ex-dp-{tab}:/bin #@ copy it under /bin
ubuntu@master1:~$ kubectl exec ex-dp-{tab} -- /bin/cpuhog

After the load stops, check that scale-in takes place.

#@ scale-out happens via the replicas count
root@master1:~# kubectl get hpa -w
NAME    REFERENCE          TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
ex-dp   Deployment/ex-dp   0%/50%    1         5         2          95s

ex-dp   Deployment/ex-dp   41%/50%   1         5         2          4m35s
ex-dp   Deployment/ex-dp   249%/50%   1         5         2          5m36s
ex-dp   Deployment/ex-dp   249%/50%   1         5         4          5m51s
ex-dp   Deployment/ex-dp   249%/50%   1         5         5          6m6s

#@ You can see that the number of pods has increased.
NAME                                 READY   STATUS    RESTARTS   AGE
pod/ex-dp-d67954d46-5qlxj            1/1     Running   0          93s
pod/ex-dp-d67954d46-ghmhk            1/1     Running   0          78s
pod/ex-dp-d67954d46-jn77k            1/1     Running   0          7m22s
pod/ex-dp-d67954d46-mkpt7            1/1     Running   0          93s
pod/ex-dp-d67954d46-rrgn4            1/1     Running   0          7m22s


Autoscaling can automatically adjust the following:

  • Pod level
  • Node level
  • Manually : suited to handling traffic that can be predicted in advance
  • Automatically : suited to handling workloads that are hard to predict
  • Horizontal : handles the workload by adding replica pods
    - expansion is added by scaling out
  • Vertical : not applied to running pods, only applied at creation time
    - increases the capacity inside the pod

Manual scaling

  • RC, RS, Deployment, and sts controllers can be scaled out via replicas (see the sketch below).
    (A DaemonSet is fixed : it cannot be scaled out and does not support replicas.)
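A sketch of manual scale-out; the ex-dp Deployment reuses the HPA example above, the statefulset name is hypothetical:

# scale the Deployment out to 4 replicas by hand
kubectl scale deployment ex-dp --replicas=4

# the same works for rc/rs/sts, e.g.
kubectl scale statefulset web --replicas=3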

 

Automatic scaling

  • Metrics are needed to set a threshold against.
  • Create a threshold on CPU usage.
  • Launch the metrics server -> then set the limit range

HPA (Horizontal Pod Autoscaler)

  • Horizontal scaling inside k8s (HPA)
  • A controller that supports scaling
  • The kubelet gathers information about each pod and node.

The metrics most often used for HPA : CPU, QPS (queries per second)

 

Deploy pods based on a Deployment

  • Set requests and limits.
  • An HPA must be applied per controller.

How to apply an HPA with yaml (see the sketch after this list)

  • Above 50% utilization, the number of pods is raised via replicas.
  • Below 50% utilization, the number of pods is lowered via replicas.
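A sketch of the same 50% rule written as a manifest instead of `kubectl autoscale`, using the autoscaling/v1 fields that appear in the output shown earlier:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: ex-dp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ex-dp
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 50   # scale out above 50%, back in below it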

 


Setup 1 : configure metrics-server

root@ip-172-31-4-27:~# kubectl create -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
Warning: apiregistration.k8s.io/v1beta1 APIService is deprecated in v1.19+, unavailable in v1.22+; use apiregistration.k8s.io/v1 APIService
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created

Check that the pod came up properly
root@ip-172-31-4-27:~# kubectl -n kube-system get pods | grep metrics
metrics-server-68b849498d-7t5wj          1/1     Running   0          37s

 

Edit the deployment

#kubectl -n kube-system edit deployment metrics-server

spec:
   containers:
   - args:
     - --cert-dir=/tmp
     - --secure-port=4443
     - --kubelet-insecure-tls      #@ 2 lines added
     - --kubelet-preferred-address-types=InternalIP   #@ 2 lines added
     image: k8s.gcr.io/metrics-server-amd64:v0.3.6
     imagePullPolicy: IfNotPresent
     name: metrics-server

 

Check the metrics-server logs and confirm there are no errors.

root@ip-172-31-4-27:~# kubectl -n kube-system logs metrics-server-75f98fdbd5-g99j7 
I1204 05:15:45.506075       1 serving.go:312] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
I1204 05:15:45.792267       1 secure_serving.go:116] Serving securely on [::]:4443

root@ip-172-31-4-27:~# kubectl top pods --all-namespaces
NAMESPACE              NAME                                         CPU(cores)   MEMORY(bytes)   
calico-system          calico-kube-controllers-5c6f449c6f-w2pwg     1m           12Mi            
calico-system          calico-node-9zjx2                            16m          88Mi            
calico-system          calico-node-nrj94                            17m          91Mi            
calico-system          calico-typha-564cccbfc5-r7pww                1m           16Mi            
calico-system          calico-typha-564cccbfc5-znjsc                1m           17Mi            
default                idolized-mule-mariadb-master-0               2m           81Mi            
default                idolized-mule-mariadb-slave-0                2m           81Mi            
kube-system            coredns-f9fd979d6-5t4g6                      2m           8Mi             
kube-system            coredns-f9fd979d6-r9p5f                      2m           8Mi             
kube-system            etcd-ip-172-31-4-27                          16m          84Mi            
kube-system            kube-apiserver-ip-172-31-4-27                51m          407Mi           
kube-system            kube-controller-manager-ip-172-31-4-27       11m          47Mi            
kube-system            kube-proxy-28mlr                             5m           18Mi            
kube-system            kube-proxy-tn6qw                             3m           18Mi            
kube-system            kube-scheduler-ip-172-31-4-27                4m           15Mi            
kube-system            metrics-server-75f98fdbd5-g99j7              2m           10Mi            
kube-system            tiller-deploy-7b56c8dfb7-tcp8l               1m           7Mi             
kubernetes-dashboard   dashboard-metrics-scraper-7b59f7d4df-nsqn6   1m           4Mi             
kubernetes-dashboard   kubernetes-dashboard-74d688b6bc-sjprt        1m           6Mi             
tigera-operator        tigera-operator-6998c47f45-pzsc7             2m           18Mi  


root@ip-172-31-4-27:~# kubectl top nodes
NAME               CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
ip-172-31-13-180   102m         5%     706Mi           18%       
ip-172-31-4-27     205m         10%    1357Mi          35%   

 

Setup 2 : configure the Dashboard

root@ip-172-31-4-27:~# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

#@ check the svc
root@ip-172-31-4-27:~# kubectl get svc --all-namespaces | grep dash
kubernetes-dashboard   dashboard-metrics-scraper     ClusterIP   10.100.63.100   <none>        8000/TCP                 24s
kubernetes-dashboard   kubernetes-dashboard          ClusterIP   10.96.178.195   <none>        443/TCP 

#kubectl -n kubernetes-dashboard edit svc kubernetes-dashboard

selector:
type: NodePort # change it to this; to reach the dashboard you have to switch it to NodePort
status:

#@ Test connecting via the dashboard's NodePort.
root@ip-172-31-4-27:~# kubectl  get svc -n kubernetes-dashboard 
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.100.63.100   <none>        8000/TCP        67s
kubernetes-dashboard        NodePort    10.96.178.195   <none>        443:30011/TCP   67s


root@ip-172-31-4-27:~# kubectl create clusterrolebinding kubernetes-dashboard-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:kubernetes-dashboard
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-admin created

For token-based Dashboard access

root@ip-172-31-4-27:~# kubectl -n kubernetes-dashboard describe secrets kubernetes-dashboard-token-m99f2 
Name:         kubernetes-dashboard-token-m99f2
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard
kubernetes.io/service-account.uid: 9b3e254d-b87a-40ab-bfed-7e217d15504d

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1066 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6ImxoZEdBYnhGY2I2Y2k5N2RYcHBSOThYODNMb1E0ZkpORS1IU2h5Vy0xUU0ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1tOTlmMiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjliM2UyNTRkLWI4N2EtNDBhYi1iZmVkLTdlMjE3ZDE1NTA0ZCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.cLYvbRTf3uidfPi84BOA8Iw62y27g6I7D8rj8L4wPmbXWNru2ja2Jy6AS81rcPWcn6Yc6hZgk-R1IG4fEGZVhPyedPyeYkG4BOhreVpSo_mY9_3FOxL2OhsvVW_hmDtUFACx1mMRQY-ciHj-ZEFeZsq03BnqIZDjb7pAq0LnJuBONTzr6o9HoOqHlP9egH2JO5pm64NIE__jqxNGr77B3GAwq_f45iMV007nEZ-LWvrb-wLbNO91jBVnk5qGC1Uu_7wR0KdArExpYuWhRIWMqCp2VM3e3hDJnDi8foKf7O4oQuyIMPFix_XQuK1s2GD6b7mHMVSkOqS7HJ5FzPFgBg

 

You can now connect at https://15.165.170.115:30011.

  • Check pod-level state per node
    via each node's kubelet

  • cAdvisor : checks the resource state of each worker node
    Heapster/Metrics 
    K8s Dashboard 
    Grafana 
    weavescope



  • Setting Resource Requests too high wastes resources and incurs cost
  • Setting them too low can cause interruptions from an actual resource shortage, so optimizing Requests and Limits is important
  • Container resource usage needs to be monitored at the level of the real workload : continuous monitoring drives the optimization work
  • Each node's kubelet includes a cAdvisor agent, which collects per-container resource usage : the resource data is gathered into the metrics server so real-time usage can be checked
  • To collect this data centrally, enable the pod-based Heapster/Metrics service component

 

weavescope :

  • Real-time monitoring of the components is possible.
  • Monitoring is possible per process level and per component level.

 


Let's check the state of the sts. MariaDB is already installed, so inspect it.

root@ip-172-31-4-27:~/resource# kubectl get po
NAME                             READY   STATUS    RESTARTS   AGE
idolized-mule-mariadb-master-0   1/1     Running   0          81m
idolized-mule-mariadb-slave-0    1/1     Running   0          81m

root@ip-172-31-4-27:~/resource# kubectl get sts
NAME                           READY   AGE
idolized-mule-mariadb-master   1/1     81m
idolized-mule-mariadb-slave    1/1     81m

root@ip-172-31-4-27:~/resource# kubectl get sts idolized-mule-mariadb-master -o yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  creationTimestamp: "2020-12-04T03:12:33Z"
  generation: 1
  labels:
    app: mariadb
    chart: mariadb-7.3.14
    component: master
    heritage: Tiller
    release: idolized-mule
	(omitted)
	spec:
	  podManagementPolicy: OrderedReady #@ can be changed to Parallel
	  replicas: 1 #@ replica scaling is supported, like an RS
	  revisionHistoryLimit: 10
	(omitted)
	spec:
      affinity:
        podAntiAffinity: #@ anti-affinity so the master and slave pods land on different nodes
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: mariadb
                  release: idolized-mule
              topologyKey: kubernetes.io/hostname
            weight: 1
	containers:
      - env: #@ check the environment variables
        - name: MARIADB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              key: mariadb-root-password
              name: idolized-mule-mariadb
        - name: MARIADB_DATABASE
          value: my_database
        - name: MARIADB_REPLICATION_MODE
          value: master
        - name: MARIADB_REPLICATION_USER
          value: replicator
        - name: MARIADB_REPLICATION_PASSWORD
          valueFrom:
            secretKeyRef:
              key: mariadb-replication-password
              name: idolized-mule-mariadb
        image: docker.io/bitnami/mariadb:10.3.22-debian-10-r27
        imagePullPolicy: IfNotPresent
		
	livenessProbe: #@ judges whether it is currently working and reports the result.
          exec:
            command:
            - sh
            - -c
            - |
              password_aux="${MARIADB_ROOT_PASSWORD:-}"
              if [ -f "${MARIADB_ROOT_PASSWORD_FILE:-}" ]; then
                  password_aux=$(cat $MARIADB_ROOT_PASSWORD_FILE)
              fi
              mysqladmin status -uroot -p$password_aux
          failureThreshold: 3
          initialDelaySeconds: 120
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
		  
	readinessProbe: #@ the endpoint (ep) is created only when the condition below passes.
          exec:
            command:
            - sh
            - -c
            - |
              password_aux="${MARIADB_ROOT_PASSWORD:-}"
              if [ -f "${MARIADB_ROOT_PASSWORD_FILE:-}" ]; then
                  password_aux=$(cat $MARIADB_ROOT_PASSWORD_FILE)
              fi
              mysqladmin status -uroot -p$password_aux
          failureThreshold: 3
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1

	volumes: #@ two volumes are defined and in use.
      - configMap:
          defaultMode: 420
          name: idolized-mule-mariadb-master
        name: config
      - emptyDir: {} #@ emptyDir is used as the backend solution: the DB is stored in temporary storage, so when the POD is deleted the storage will be deleted with it.
        name: data
		
	root@ip-172-31-4-27:~/resource# kubectl get cm
	NAME                           DATA   AGE
	idolized-mule-mariadb-master   1      88m
	idolized-mule-mariadb-slave    1      88m
	idolized-mule-mariadb-tests    1      88m
	
	root@ip-172-31-4-27:~/resource# kubectl get cm idolized-mule-mariadb-master -o yaml
	apiVersion: v1
	data:
	  my.cnf: |-
		[mysqld]
		skip-name-resolve
		explicit_defaults_for_timestamp
		basedir=/opt/bitnami/mariadb
		plugin_dir=/opt/bitnami/mariadb/plugin
		port=3306
		socket=/opt/bitnami/mariadb/tmp/mysql.sock
		tmpdir=/opt/bitnami/mariadb/tmp
		max_allowed_packet=16M
		bind-address=0.0.0.0
		pid-file=/opt/bitnami/mariadb/tmp/mysqld.pid
		log-error=/opt/bitnami/mariadb/logs/mysqld.log
		character-set-server=UTF8
		collation-server=utf8_general_ci
        
#@ edit is provided as a decoupling method.
root@ip-172-31-4-27:~/resource# kubectl edit cm idolized-mule-mariadb-master -o yaml
Edit cancelled, no changes made.

  • Deployment is the higher-level controller above an RS
    its purpose is deploying stateless apps

    VS

    a StatefulSet's
    purpose is deploying stateful apps

 

  • Creating a Service object normally assigns a ClusterIP; a headless service is not given that ClusterIP (see the sketch below).
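A minimal headless Service sketch (the name, labels, and port are hypothetical); clusterIP: None is what suppresses the virtual IP and makes DNS return the pod records:

apiVersion: v1
kind: Service
metadata:
  name: db-headless
spec:
  clusterIP: None          # headless: no virtual IP, DNS resolves to the pod IPs
  selector:
    app: db
  ports:
  - port: 3306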

 

  • pets and cattle
    on-premises : pets
    cloud : cattle (a herd)

    the many pods inside k8s are compared to cattle

    a statefulset (sts) corresponds to pets

 

  • Each pod deployed by an sts looks at and writes to its own dedicated storage. Each also creates/uses its own PVC.
    Because the storage is persistent, it is not deleted even if the pod is deleted.
  • Setting replicas=0 terminates the pods in order (a graceful, ordered shutdown).
    Delete the STS after the ordered shutdown.
  • StatefulSet network ID : pods deployed by an sts use their own stable network identity.


Helm Setup

root@ip-172-31-4-27:~/resource# curl https://raw.githubusercontent.com/helm/helm/master/scripts/get | bash
% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
Dload  Upload   Total   Spent    Left  Speed
100  7160  100  7160    0     0  17463      0 --:--:-- --:--:-- --:--:-- 17506
Downloading https://get.helm.sh/helm-v2.17.0-linux-amd64.tar.gz
Preparing to install helm and tiller into /usr/local/bin
helm installed into /usr/local/bin/helm
tiller installed into /usr/local/bin/tiller
Run 'helm init' to configure helm.

 

Create a service account

#@ create a service account named tiller
root@ip-172-31-4-27:~/resource# kubectl create sa --namespace kube-system tiller
serviceaccount/tiller created

 

Create a clusterrolebinding resource named tiller-cluster-role for --serviceaccount=kube-system:tiller

root@ip-172-31-4-27:~/resource# kubectl create clusterrolebinding tiller-cluster-role --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller-cluster-role created

 

The tiller server comes up.

root@ip-172-31-4-27:~/resource# helm init  --service-account tiller
Creating /root/.helm 
Creating /root/.helm/repository 
Creating /root/.helm/repository/cache 
Creating /root/.helm/repository/local 
Creating /root/.helm/plugins 
Creating /root/.helm/starters 
Creating /root/.helm/cache/archive 
Creating /root/.helm/repository/repositories.yaml 
Adding stable repo with URL: https://charts.helm.sh/stable 
Adding local repo with URL: http://127.0.0.1:8879/charts 
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://v2.helm.sh/docs/securing_installation/

 

Check the tiller-deploy pod and its namespace

root@ip-172-31-4-27:~/resource# kubectl get po --all-namespaces
NAMESPACE         NAME                                       READY   STATUS    RESTARTS   AGE
kube-system       tiller-deploy-7b56c8dfb7-tcp8l             1/1     Running   0          3m28s

 

Check that tiller came up properly.

root@ip-172-31-4-27:~/resource# kubectl -n kube-system logs tiller-deploy-7b56c8dfb7-tcp8l 
[main] 2020/12/04 02:11:53 Starting Tiller v2.17.0 (tls=false)
[main] 2020/12/04 02:11:53 GRPC listening on :44134
[main] 2020/12/04 02:11:53 Probes listening on :44135
[main] 2020/12/04 02:11:53 Storage driver is ConfigMap
[main] 2020/12/04 02:11:53 Max history per release is 0

 

Verify the helm installation / update it

root@ip-172-31-4-27:~/resource# helm help
The Kubernetes package manager

To begin working with Helm, run the 'helm init' command:

$ helm init

root@ip-172-31-4-27:~/resource# helm home
/root/.helm

root@ip-172-31-4-27:~/resource# helm version
Client: &version.Version{SemVer:"v2.17.0", GitCommit:"a690bad98af45b015bd3da1a41f6218b1a451dbe", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.17.0", GitCommit:"a690bad98af45b015bd3da1a41f6218b1a451dbe", GitTreeState:"clean"}

Update to the latest version
root@ip-172-31-4-27:~/resource# helm init --upgrade
$HELM_HOME has been configured at /root/.helm.
Tiller (the Helm server-side component) has been updated to ghcr.io/helm/tiller:v2.17.0 .

root@ip-172-31-4-27:~/resource# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete.

#@ Search for database charts.
root@ip-172-31-4-27:~/resource# helm search database
NAME                         	CHART VERSION	APP VERSION            	DESCRIPTION                                                 
stable/cockroachdb           	3.0.8        	19.2.5                 	DEPRECATED -- CockroachDB is a scalable, survivable, stro...
stable/couchdb               	2.3.0        	2.3.1                  	DEPRECATED A database featuring seamless multi-master syn...
stable/dokuwiki              	6.0.11       	0.20180422.201901061035	DEPRECATED DokuWiki is a standards-compliant, simple to u...
stable/ignite                	1.2.2        	2.7.6                  	DEPRECATED - Apache Ignite is an open-source distributed ...
stable/janusgraph            	0.2.6        	1.0                    	DEPRECATED - Open source, scalable graph database.          
stable/kubedb                	0.1.3        	0.8.0-beta.2           	DEPRECATED KubeDB by AppsCode - Making running production...
stable/mariadb               	7.3.14       	10.3.22                	DEPRECATED Fast, reliable, scalable, and easy to use open...
stable/mediawiki             	9.1.9        	1.34.0                 	DEPRECATED Extremely powerful, scalable software and a fe...
stable/mongodb               	7.8.10       	4.2.4                  	DEPRECATED NoSQL document-oriented database that stores J...
stable/mongodb-replicaset    	3.17.2       	3.6                    	DEPRECATED - NoSQL document-oriented database that stores...
stable/mysql                 	1.6.9        	5.7.30                 	DEPRECATED - Fast, reliable, scalable, and easy to use op...
stable/mysqldump             	2.6.2        	2.4.1                  	DEPRECATED! - A Helm chart to help backup MySQL databases...
stable/neo4j                 	3.0.1        	4.0.4                  	DEPRECATED Neo4j is the world's leading graph database      
stable/pgadmin               	1.2.2        	4.18.0                 	pgAdmin is a web based administration tool for PostgreSQL...
stable/postgresql            	8.6.4        	11.7.0                 	DEPRECATED Chart for PostgreSQL, an object-relational dat...
stable/prisma                	1.2.4        	1.29.1                 	DEPRECATED Prisma turns your database into a realtime Gra...
stable/prometheus            	11.12.1      	2.20.1                 	DEPRECATED Prometheus is a monitoring system and time ser...
stable/rethinkdb             	1.1.4        	0.1.0                  	DEPRECATED - The open-source database for the realtime web  
stable/couchbase-operator    	1.0.4        	1.2.2                  	DEPRECATED A Helm chart to deploy the Couchbase Autonomou...
stable/hazelcast             	3.3.2        	4.0.1                  	DEPRECATED Hazelcast IMDG is the most widely used in-memo...
stable/influxdb              	4.3.2        	1.7.9                  	DEPRECATED Scalable datastore for metrics, events, and re...
stable/percona               	1.2.3        	5.7.26                 	DEPRECATED - free, fully compatible, enhanced, open sourc...
stable/percona-xtradb-cluster	1.0.8        	5.7.19                 	DEPRECATED - free, fully compatible, enhanced, open sourc...
stable/redis                 	10.5.7       	5.0.7                  	DEPRECATED Open source, advanced key-value store. It is o...


#@ If stable already exists, do not add it again
root@ip-172-31-4-27:~/resource# helm repo list
WARNING: "kubernetes-charts-incubator.storage.googleapis.com" is deprecated for "incubator" and will be deleted Nov. 13, 2020.
WARNING: You should switch to "https://charts.helm.sh/incubator"
NAME     	URL                                                        
stable   	https://charts.helm.sh/stable                              
local    	http://127.0.0.1:8879/charts                               
incubator	https://kubernetes-charts-incubator.storage.googleapis.com/

#@ A chart repo can be added
root@ip-172-31-4-27:~/resource# helm repo add  stable http://storage.googleapis.com/kubernetes-charts
"stable" has been added to your repositories


root@ip-172-31-4-27:~/resource# helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/
"incubator" has been added to your repositories

root@ip-172-31-4-27:~/resource# helm search incubator
WARNING: "kubernetes-charts-incubator.storage.googleapis.com" is deprecated for "incubator" and will be deleted Nov. 13, 2020.
WARNING: You should switch to "https://charts.helm.sh/incubator"
NAME                                          	CHART VERSION	APP VERSION                 	DESCRIPTION                                                 
incubator/artifactory                         	5.2.2        	5.2.0                       	DEPRECATED Universal Repository Manager supporting all ma...
incubator/aws-alb-ingress-controller          	1.0.4        	v1.1.8                      	DEPRECATED A Helm chart for AWS ALB Ingress Controller

 

Helm installation is complete. Let's install MariaDB.

#helm install stable/mariadb --set master.persistence.enabled=false --set slave.persistence.enabled=false

#@ Use the output above to check whether the DB was installed correctly.
root@ip-172-31-4-27:~/resource# kubectl get all --all-namespaces | grep maria
default           pod/idolized-mule-mariadb-master-0             1/1     Running   0          18m
default           pod/idolized-mule-mariadb-slave-0              1/1     Running   0          18m
default         service/idolized-mule-mariadb         ClusterIP   10.99.82.49    <none>        3306/TCP                 18m
default         service/idolized-mule-mariadb-slave   ClusterIP   10.108.83.83   <none>        3306/TCP                 18m
default     statefulset.apps/idolized-mule-mariadb-master   1/1     18m
default     statefulset.apps/idolized-mule-mariadb-slave    1/1     18m



kubectl run nasal-gerbil-mariadb-client --rm --tty -i --restart='Never' --image  docker.io/bitnami/mariadb:10.3.22-debian-10-r27 --namespace default --command -- bash
. <(helm completion bash) <= helm autocompletion

#@ helm autocompletion
root@ip-172-31-4-27:~/resource# kubectl run nasal-gerbil-mariadb-client --rm --tty -i --restart='Never' --image  docker.io/bitnami/mariadb:10.3.22-debian-10-r27 --namespace default --command -- bash
If you don't see a command prompt, try pressing enter.
I have no name!@nasal-gerbil-mariadb-client:/$ . <(helm completion bash)
bash: helm: command not found
I have no name!@nasal-gerbil-mariadb-client:/$

helm status helm_release
#@ shows the release report again.

ls -R .helm  (take a close look)
#@ inspect the charts.

root@ip-172-31-4-27:~/resource# helm list
WARNING: "kubernetes-charts-incubator.storage.googleapis.com" is deprecated for "incubator" and will be deleted Nov. 13, 2020.
WARNING: You should switch to "https://charts.helm.sh/incubator"
NAME         	REVISION	UPDATED                 	STATUS  	CHART         	APP VERSION	NAMESPACE
idolized-mule	1       	Fri Dec  4 03:12:33 2020	DEPLOYED	mariadb-7.3.14	10.3.22    	default 
#@ idolized-mule is the helm release name.



root@ip-172-31-4-27:~/resource# helm status idolized-mule
WARNING: "kubernetes-charts-incubator.storage.googleapis.com" is deprecated for "incubator" and will be deleted Nov. 13, 2020.
WARNING: You should switch to "https://charts.helm.sh/incubator"
LAST DEPLOYED: Fri Dec  4 03:12:33 2020
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                          DATA  AGE
idolized-mule-mariadb-master  1     21m
idolized-mule-mariadb-slave   1     21m
idolized-mule-mariadb-tests   1     21m


==> v1/Service
NAME                         TYPE       CLUSTER-IP    EXTERNAL-IP  PORT(S)   AGE
idolized-mule-mariadb        ClusterIP  10.99.82.49   <none>       3306/TCP  56m
idolized-mule-mariadb-slave  ClusterIP  10.108.83.83  <none>       3306/TCP  56m

  • A package manager that makes installation convenient (e.g. like apt-get)
  • Requires launching the helm server
  • A Package Manager that manages charts, the kubernetes package format
Two components
Helm Client
Tiller Server
  • 1. Helm must first be brought up on the k8s server.
  • 2. Install the client package.
  • 3. The tiller server receives the request and
  • 4. performs the work of creating the related resources.
  • From the chart format it reads the index entries, e.g. ingress entries
  •  Release : chart -> tiller server -> request to create the resources -> deploy those resources -> brings the service into a usable state

 

Helm installation method/order

  • 1. Install the package
  • 2. Launch the server
  • 3. The tiller server must run as a server inside k8s
  • 4. RBAC configuration is required
  • 5. Launch the tiller server via init
  • 6. Bring helm to the latest version via upgrade

 

 


Scenario 1

Write the rq.yaml manifest

apiVersion: v1
kind: ResourceQuota
metadata:
  name: rq
  namespace: space1
spec:
  hard:
    requests.memory: 1Gi
    requests.cpu: "1000m" # 1000 millicores = 1 core
    limits.memory: 1Gi
    limits.cpu: "1500m"

Create the namespace / create a namespace-scoped ResourceQuota

root@ip-172-31-4-27:~/resource# kubectl create ns space1
namespace/space1 created

#@ apply the ResourceQuota to the namespace
root@ip-172-31-4-27:~/resource# kubectl create -n space1 -f rq.yaml
resourcequota/rq created

#@ confirm the ResourceQuota was applied
root@ip-172-31-4-27:~/resource# kubectl describe resourcequotas --namespace=space1 
Name:            rq
Namespace:       space1
Resource         Used  Hard
--------         ----  ----
limits.cpu       0     1500m
limits.memory    0     1Gi
requests.cpu     0     1
requests.memory  0     1Gi

#@ set up a convenient alias
$ alias ksp1="kubectl -n space1 "

#@ check the ResourceQuota info in the space1 namespace
root@ip-172-31-4-27:~/resource# ksp1 get resourcequotas
NAME   AGE    REQUEST                                     LIMIT
rq     2m7s   requests.cpu: 0/1, requests.memory: 0/1Gi   limits.cpu: 0/1500m, limits.memory: 0/1Gi

 

Write the pod.yaml manifest

apiVersion: v1 # the case where no request/limit is set
kind: Pod
metadata:
  name: pod-rq
spec:
  containers:
  - name: nasamjang02
    image: nginx

 

Check that pod creation is rejected.

#@ rejection reason : must specify limits.cpu,limits.memory,requests.cpu,requests.memory
root@ip-172-31-4-27:~/resource# ksp1 create -f pod.yaml
Error from server (Forbidden): error when creating "pod.yaml": pods "pod-rq" is forbidden: failed quota: rq: must specify limits.cpu,limits.memory,requests.cpu,requests.memory

 

Create pod-rq1.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-rq
spec:
  containers:
  - name: nasamjang02
    image: nginx
    resources:          #<-------- memory settings added below
      requests:
        memory: 0.1Gi
      limits:
        memory: 0.1Gi

 

Check why pod creation is forbidden.

#@ The quota also sets CPU values, so it is rejected again.
root@ip-172-31-4-27:~/resource# ksp1 create -f pod-rq1.yaml 
Error from server (Forbidden): error when creating "pod-rq1.yaml": pods "pod-rq" is forbidden: failed quota: rq: must specify limits.cpu,requests.cpu

 

Write the pod-rq2.yaml manifest

apiVersion: v1
kind: Pod
metadata:
  name: pod-rq
spec:
  containers:
  - name: nasamjang02
    image: nginx
    resources:
      requests:
        memory: 0.1Gi
        cpu: "100m"  #<--------CPU까지 추가
      limits:
        memory: 0.1Gi
        cpu: "100m" #<--------추가

Everything satisfies the ResourceQuota, so the pod can be created

root@ip-172-31-4-27:~/resource# ksp1 create -f pod-rq2.yaml 
pod/pod-rq created

 

Update the rq.yaml manifest

apiVersion: v1
kind: ResourceQuota
metadata:
  name: rq
  namespace: space1
spec:
  hard:
    requests.memory: 1Gi
    limits.memory: 1Gi
    pods: 2     #<------ added

Add pods: 2 and modify/apply the ResourceQuota

root@ip-172-31-4-27:~/resource# ksp1 apply -f rq.yaml 
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
resourcequota/rq configured

 

Confirm that pods: 2 was applied correctly.

root@ip-172-31-4-27:~/resource# ksp1 describe resourcequotas
Name:            rq
Namespace:       space1
Resource         Used           Hard
--------         ----           ----
limits.cpu       100m           1500m
limits.memory    107374182400m  1Gi
pods             1              2
requests.cpu     100m           1
requests.memory  107374182400m  1Gi

Change the pod name in the pod-rq2.yaml manifest (to pod-rq2)

 

Confirm that pod-rq2 was created

root@ip-172-31-4-27:~/resource# ksp1 create -f pod-rq2.yaml
pod/pod-rq2 created

Change the pod name in the pod-rq2.yaml manifest again (to pod-rq3)

 

Forbidden reason for pod-rq3 : the pod count is limited to 2, so the third pod is forbidden

root@ip-172-31-4-27:~/resource# ksp1 create -f pod-rq2.yaml
Error from server (Forbidden): error when creating "pod-rq2.yaml": pods "pod-rq3" is forbidden: exceeded quota: rq, requested: pods=1, used: pods=2, limited: pods=2

 

 

Scenario 2 (pods are already deployed; check what happens when the ResourceQuota is changed)

Create a new space2 namespace

root@ip-172-31-4-27:~/resource# kubectl create ns space2
namespace/space2 created

Write the pod-rq1.yaml manifest

apiVersion: v1
kind: Pod
metadata:
  name: pod-rq1
spec:
  containers:
  - name: nasamjang02
    image: nginx
    resources:
      requests:
        memory: 1Gi
      limits:
        memory: 1Gi
$alias ksp2='kubectl -n space2 '
root@ip-172-31-4-27:~/resource# ksp2 create -f pod-rq1.yaml
pod/pod-rq1 created

Namespace space2 does not have a ResourceQuota yet.

root@ip-172-31-4-27:~/resource# ksp2 describe resourcequotas
No resources found in space2 namespace.

 

Apply the ResourceQuota to namespace space2

root@ip-172-31-4-27:~/resource# ksp2 create -f rq.yaml
resourcequota/rq created

 

Check the ResourceQuota applied to namespace space2

root@ip-172-31-4-27:~/resource# ksp2 describe resourcequotas
Name:            rq
Namespace:       space2
Resource         Used  Hard
--------         ----  ----
limits.memory    1Gi   1Gi
pods             1     2
requests.memory  1Gi   1Gi

Edit the pod-rq1.yaml manifest (what happens when you change the pod name and apply it again?)

 

The quota values are already fully used, so it is forbidden.

root@ip-172-31-4-27:~/resource# ksp2 create -f pod-rq1.yaml
Error from server (Forbidden): error when creating "pod-rq1.yaml": pods "pod-rq2" is forbidden: exceeded quota: rq, requested: limits.memory=1Gi,requests.memory=1Gi, used: limits.memory=1Gi,requests.memory=1Gi, limited: limits.memory=1Gi,requests.memory=1Gi

 

Check which error message appears this time

root@ip-172-31-4-27:~/resource# ksp2 create -f pod.yaml 
Error from server (Forbidden): error when creating "pod.yaml": pods "pod-rq" is forbidden: failed quota: rq: must specify limits.memory,requests.memory

 

Write the object-counts.yaml manifest

apiVersion: v1 
kind: ResourceQuota 
metadata: 
  name: object-counts 
spec: 
  hard: 
    configmaps: "10" 
    persistentvolumeclaims: "4"
    secrets: "10" 
    services: "10" 
    services.loadbalancers: "2"

 

root@ip-172-31-4-27:~/resource# kubectl create -f object-counts.yaml --namespace=space2
resourcequota/object-counts created

 

root@ip-172-31-4-27:~/resource# ksp2 describe resourcequotas
Name:                   object-counts
Namespace:              space2
Resource                Used  Hard
--------                ----  ----
configmaps              0     10
persistentvolumeclaims  0     4
secrets                 1     10
services                0     10
services.loadbalancers  0     2


Name:            rq
Namespace:       space2
Resource         Used  Hard
--------         ----  ----
limits.memory    1Gi   1Gi
pods             1     2
requests.memory  1Gi   1Gi

 

Scenario 3 : resource limit test

Write the limit.yaml manifest

apiVersion: v1
kind: LimitRange # define a LimitRange
metadata:
  name: lr-base
spec:
  limits:
  - type: Pod # pod-level limits
    min:
      cpu: 50m
      memory: 5Mi
    max:
      cpu: 1
      memory: 1Gi
  - type: Container
    defaultRequest: # default request values for containers
      cpu: 100m
      memory: 10Mi
    default:
       cpu: 200m
       memory: 100Mi
    min:
       cpu: 50m
       memory: 5Mi
    max:
      cpu: 1
      memory: 1Gi
    maxLimitRequestRatio:
      cpu: 4
      memory: 10

 

root@ip-172-31-4-27:~/resource# kubectl create -f limit.yaml --namespace=space2
limitrange/lr-base created

#@ check the resource
root@ip-172-31-4-27:~/resource# kubectl get -f limit.yaml --namespace=space2
NAME      CREATED AT
lr-base   2020-12-04T01:28:06Z

#@ check the pod-level and container-level info
root@ip-172-31-4-27:~/resource# kubectl describe -f limit.yaml --namespace=space2
Name:       lr-base
Namespace:  space2
Type        Resource  Min  Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---  ---  ---------------  -------------  -----------------------
Pod         cpu       50m  1    -                -              -
Pod         memory    5Mi  1Gi  -                -              -
Container   cpu       50m  1    100m             200m           4
Container   memory    5Mi  1Gi  10Mi             100Mi          10

#@ confirm forbidden due to exceeding the restricted resources (memory, CPU)
root@ip-172-31-4-27:~/resource# kubectl create -f pod.yaml --namespace=space2
Error from server (Forbidden): error when creating "pod.yaml": pods "pod-rq" is forbidden: exceeded quota: rq, requested: limits.memory=100Mi,requests.memory=10Mi, used: limits.memory=1Gi,requests.memory=1Gi, limited: limits.memory=1Gi,requests.memory=1Gi

#@ delete all pods created in namespace=space2
root@ip-172-31-4-27:~/resource# kubectl delete po --all --namespace=space2
pod "pod-rq1" deleted

#@ the resource limits are not exceeded, so the pod is created
root@ip-172-31-4-27:~/resource# kubectl create -f pod.yaml --namespace=space2
pod/pod-rq created

#@ pod.yaml did not set any limits. Check that the default LimitRange values were applied.
root@ip-172-31-4-27:~/resource# kubectl get -f pod.yaml --namespace=space2 -o yaml
	(omitted)
	:
	spec:
	  containers:
	  - image: nginx
		imagePullPolicy: Always
		name: nasamjang02
		resources:
		  limits:
			cpu: 200m
			memory: 100Mi
		  requests:
			cpu: 100m
			memory: 10Mi
            
root@ip-172-31-4-27:~/resource# ksp2 describe resourcequotas 
Name:                   object-counts
Namespace:              space2
Resource                Used  Hard
--------                ----  ----
configmaps              0     10
persistentvolumeclaims  0     4
secrets                 1     10
services                0     10
services.loadbalancers  0     2

Name:            rq
Namespace:       space2
Resource         Used   Hard
--------         ----   ----
limits.memory    100Mi  1Gi
pods             1      2
requests.memory  10Mi   1Gi

root@ip-172-31-4-27:~/resource# ksp2 describe limitranges
Name:       lr-base
Namespace:  space2
Type        Resource  Min  Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---  ---  ---------------  -------------  -----------------------
Pod         cpu       50m  1    -                -              -
Pod         memory    5Mi  1Gi  -                -              -
Container   cpu       50m  1    100m             200m           4
Container   memory    5Mi  1Gi  10Mi             100Mi          10

 

Write, run, and check the pod-rq2.yaml manifest

apiVersion: v1
kind: Pod
metadata:
  name: pod-rq-3
spec:
  containers:
  - name: nasamjang02
    image: nginx
    resources:
      requests:
        memory: 2Mi  #<--------- testing below the 5Mi minimum (LimitRange test)
        cpu: "100m"
#      limits:
#        memory: 0.1Gi
#        cpu: "100m"

 

#@ forbidden because of the min/ratio rules
root@ip-172-31-4-27:~/resource# kubectl create -f pod-rq2.yaml --namespace=space2
Error from server (Forbidden): error when creating "pod-rq2.yaml": pods "pod-rq-3" is forbidden: [minimum memory usage per Pod is 5Mi, but request is 2097152, minimum memory usage per Container is 5Mi, but request is 2Mi, memory max limit to request ratio per Container is 10, but provided ratio is 50.000000]

$ vi pod-rq2.yaml <------- change to memory: 5Mi (still violates the ratio)

root@ip-172-31-4-27:~/resource# kubectl create -f pod-rq2.yaml --namespace=space2
Error from server (Forbidden): error when creating "pod-rq2.yaml": pods "pod-rq-3" is forbidden: memory max limit to request ratio per Container is 10, but provided ratio is 20.000000

$ vi pod-rq2.yaml  <------- memory: 10Mi (ratio 10)

root@ip-172-31-4-27:~/resource# kubectl create -f pod-rq2.yaml --namespace=space2
pod/pod-rq-3 created

root@ip-172-31-4-27:~/resource# kubectl get -f pod-rq2.yaml --namespace=space2
NAME       READY   STATUS    RESTARTS   AGE
pod-rq-3   1/1     Running   0          16s

$ 
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: nasamjang02
    resources:
      limits:
        cpu: 200m
        memory: 100Mi
      requests:
        cpu: 100m
        memory: 10Mi

root@ip-172-31-4-27:~/resource# ksp2 describe resourcequotas 
Name:                   object-counts
Namespace:              space2
Resource                Used  Hard
--------                ----  ----
configmaps              0     10
persistentvolumeclaims  0     4
secrets                 1     10
services                0     10
services.loadbalancers  0     2


Name:            rq
Namespace:       space2
Resource         Used   Hard
--------         ----   ----
limits.memory    200Mi  1Gi
pods             2      2
requests.memory  20Mi   1Gi

root@ip-172-31-4-27:~/resource# ksp2 describe limitranges
Name:       lr-base
Namespace:  space2
Type        Resource  Min  Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---  ---  ---------------  -------------  -----------------------
Pod         cpu       50m  1    -                -              -
Pod         memory    5Mi  1Gi  -                -              -
Container   memory    5Mi  1Gi  10Mi             100Mi          10
Container   cpu       50m  1    100m             200m           4

  • Controlling CPU/Memory resources
  • LimitRange
  • ResourceQuota
  • request : the minimum setting
  • limit : the maximum setting
  • A yaml that deploys a db and a web server together in one pod
  • When several pods try to use the resources, the limit values take effect.
  • In other words, with only a single pod the limit has little meaning.

LimitRange

  • min, max : pods are accepted only if they fall within this range.
  • maxLimitRequestRatio : allow the limit to be up to 4x the request for CPU and up to 10x for memory.
  • Spec sizes can be set per namespace.
  • Judge the node's available resources and set limits/ranges on resource usage.
e.g.) a namespace for large pods : only large pods can be created
a namespace for small pods : only small pods can be created


ResourceQuota

  • Sets the total resource limits for a Namespace
  • The API server validates at Pod creation time; running Pods are not affected
  • Limits the number of CPU/MEM/Storage(PVC)/API objects per Namespace
  • Since Requests/Limits must be specified at Pod creation, it needs to be used together with a LimitRange
  • Activated when the apiserver's --enable-admission-plugins= flag includes ResourceQuota as one of its arguments

  • Why use an Ingress Controller : it performs L7 load balancing.
  • Supports multiple Services through a single Ingress
  • HTTP connection support
  • load balancing
  • SSL termination
  • Name-based virtual hosting
  • When external clients connect over https, the ingress manages the certificate
  • client -> https -> ingress -> http -> service

Ingress Controller scenario

- Use traefik as the ingress engine.
- In the rules, host: www.webapp1.com means requests for that host are forwarded to webapp1.
- Create two Service objects (lab, www) and distribute traffic across them.
- Distribute by path as well, e.g. /class vs /home.

1. Create two Service objects (two, so each can be tested individually)

kubectl create deployment webapp1  --image=nasamjang02/app:v1
kubectl expose deployment webapp1  --type=NodePort --port=80 #@ NodePort so it is reachable from outside
kubectl create deployment webapp2  --image=nasamjang02/app:v2
kubectl expose deployment webapp2  --type=NodePort --port=80


root@ip-172-31-4-27:~# kubectl get deploy,svc,rs,po
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/webapp1   1/1     1            1           2m33s
deployment.apps/webapp2   1/1     1            1           109s

NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/app1         ClusterIP   10.104.31.130    <none>        80/TCP         76m
service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        2d7h
service/webapp1      NodePort    10.101.171.207   <none>        80:31048/TCP   2m14s
service/webapp2      NodePort    10.98.238.77     <none>        80:31042/TCP   104s

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/webapp1-5b9448d6c4   1         1         1       2m33s
replicaset.apps/webapp2-5dc7d6fd6    1         1         1       109s

NAME                           READY   STATUS    RESTARTS   AGE
pod/webapp1-5b9448d6c4-46mbt   1/1     Running   0          2m33s
pod/webapp2-5dc7d6fd6-hsrkm    1/1     Running   0          109s

2. Grant access to the internal resources. Use RBAC to set up resource access permissions.

Write the ingress_rbac.yaml manifest

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - services # allow access to these three resources
      - endpoints
      - secrets
    verbs:
      - get # read-only: get/list/watch are allowed, no modification
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses # allow permissions on ingresses
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount # the ServiceAccount is the subject for the process inside the pod
  name: traefik-ingress-controller
  namespace: kube-system

3. Create and verify

root@ip-172-31-4-27:~/ingress# kubectl create -f ingress_rbac.yaml 
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
clusterrole.rbac.authorization.k8s.io/traefik-ingress-controller created
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
clusterrolebinding.rbac.authorization.k8s.io/traefik-ingress-controller created

4. Write the traefik-ds.yaml manifest

apiVersion: v1
kind: ServiceAccount # the ServiceAccount that will be assigned as the subject
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
      name: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller # specify the pod's ServiceAccount (the user for its process)
      terminationGracePeriodSeconds: 60
      containers:
      - image: traefik:v1.7
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80 # one container exposes two ports: 80 and 8080
          hostPort: 80 # requests arriving on host port 80 are forwarded to container port 80
        - name: admin
          containerPort: 8080
          hostPort: 8080
        securityContext: # security context settings
          capabilities: # default privileges (capabilities)
            drop:
            - ALL # drop all default privileges
            add:
            - NET_BIND_SERVICE # add back only the capability to bind the host's privileged ports
        args:
        - --api # extra args
        - --kubernetes
        - --logLevel=INFO
---
kind: Service # a Service object is also created
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: web
    - protocol: TCP
      port: 8080
      name: admin

5. Create and verify

root@ip-172-31-4-27:~/ingress# kubectl create -f traefik-ds.yaml 
serviceaccount/traefik-ingress-controller created
daemonset.apps/traefik-ingress-controller created
service/traefik-ingress-service created

6. Write the ingress_rule.yaml manifest

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-test
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: www.webapp1.com # requests for this host are forwarded to webapp1
    http:
      paths:
      - backend:
          serviceName: webapp1
          servicePort: 80
        path: /
  - host: www.webapp2.com  # requests for this host are forwarded to webapp2
    http:  
      paths: 
      - backend: 
           serviceName: webapp2 
           servicePort: 80 
        path: /

7. Create and verify

#kubectl create -f ingress_rule.yaml

	root@ip-172-31-4-27:~/ingress# kubectl create -f ingress_rule.yaml 
	Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
	ingress.extensions/ingress-test created

#curl -H "Host: www.webapp1.com" http://master_ip/
	root@ip-172-31-4-27:~/ingress# curl -H "Host: www.webapp1.com" http://172.31.4.27/
	This is app v1 test…

#curl -H "Host: www.webapp2.com" http://master_ip/
	root@ip-172-31-4-27:~/ingress# curl -H "Host: www.webapp2.com" http://172.31.4.27/
	This is app v2 test…

#kubectl get svc,ds,po -o wide --all-namespaces |grep traefik

#curl -H "Host: www.webapp1.com" http://Service_IP/
	root@ip-172-31-4-27:~/ingress# curl -H "Host: www.webapp1.com" http://172.31.13.180/
	This is app v1 test…

#curl -H "Host: www.webapp1.com" http://Service_IP/

kubectl edit -n kube-system svc traefik-ingress-service
type: NodePort 로 변경 (외부에서 접근 가능하도록)
kubectl get -n kube-system svc  (Node port확인 후 접속)

8. 외부에서 접근 확인

Service Controller
  • pod 기반의 서비스를 제공한다고 하면 pod 의 이름이 랜덤하게 생성되고 바뀜
  • 서비스는 LB 역할을 함.
  • 서비스는 LB - L4의 역할을 한다
  • Static IP 와 Port 를 클라이언트에 제공하여 내 외부에서의 접근을 허용
  • API server 를 감시하며 서비스의 endpoint 변경사항을 감지한다
  • 각 노드의 kube proxy 에서 처리
  • 버전에 따라 몇가지 mode 를 지원

iptables

  • iptables : 방화벽, NAT 기능 제공, 룰 등을 적용
  • Client -> iptable (룰 기반으로 분배) 
  • iptables를 통해 성능을 개선한다.
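
현재 클러스터의 kube-proxy mode와 서비스용 NAT 룰은 아래처럼 확인해 볼 수 있다. (kubeadm 기반 클러스터를 가정한 예시이며, app1은 뒤 시나리오에서 만드는 서비스 이름이다)

# kube-proxy mode 확인 (값이 비어 있으면 기본값인 iptables 모드)
kubectl -n kube-system get configmap kube-proxy -o yaml | grep mode

# 서비스가 만들어 내는 iptables NAT 룰 확인 (노드에서 직접 실행)
iptables -t nat -L KUBE-SERVICES -n | grep app1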

Service 와 DNS Record

  • kube-system 에 dns 기반으로 정보가 올라온다.
  • A레코드에 해당 : ClusterIP, NodePort, LB
  • CNAME 레코드 : ExternalName
  • SRV : Headless
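
예를 들어 포트 이름이 http(TCP)인 headless 서비스가 있다고 가정하면, pod 안에서 아래와 같이 레코드를 조회해 볼 수 있다. (headless-svc는 가상의 서비스 이름이며, nslookup이 -type 옵션을 지원하는 이미지 기준)

# Headless 서비스의 SRV 레코드 조회
nslookup -type=SRV _http._tcp.headless-svc.default.svc.cluster.local

# ClusterIP / NodePort / LB 서비스의 A 레코드 조회
nslookup app1.default.svc.cluster.local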

Service Controller 시나리오

1. apply.yaml 명세서 작성

apiVersion: v1
kind: Service # Service로 생성
metadata:
  name: app1
spec:
  selector:
    svc: app1
  ports:
  - port: 80
    targetPort: 80

2. app1-pod.yaml 명세서 작성

apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    svc: app1 #service와 일치시켜 연결한다
spec:
  containers:
  - name: app1-container
    image: nasamjang02/app:v1

3. client-pod.yaml 명세서 작성

apiVersion: v1
kind: Pod
metadata:
  name: client-pod
spec:
  containers:
  - name: c1
    image: rosehs00/app:k8s
    command: [ 'sh','-c','sleep 3600' ]
  nodeSelector:
    nfs: node1

4. 생성 및 확인

kubectl create -f svc
kubectl exec client-pod -it -- /bin/bash #@client-pod에서 접근

nslookup app1 #@서비스 객체의 이름:app1
	root@ip-172-31-4-27:~# kubectl exec client-pod -it -- /bin/bash 
	root@client-pod:/# nslookup app1
	Server:		10.96.0.10
	Address:	10.96.0.10#53

	Name:	app1.default.svc.cluster.local
	Address: 10.104.31.130

nslookup app1.default.svc.cluster.local
	root@client-pod:/# nslookup app1.default.svc.cluster.local
	Server:		10.96.0.10
	Address:	10.96.0.10#53

	Name:	app1.default.svc.cluster.local
	Address: 10.104.31.130

curl app1 #@ curl 을 통해 접속
curl app1.default.svc.cluster.local

	root@client-pod:/# curl app1
	This is app v1 test…
	root@client-pod:/# curl app1.default.svc.cluster.local
	This is app v1 test…

Job/CronJob Controller
  • 이전 디플로이, 데몬셋은 실시간 반응형
  • job/cronjob은 단일 실행 완료형
  • 완료 가능한 단일 실행 Job 을 관리하는 Controller
  • 한번 완료되기 전 장애 발생시 다시 시작 되며 성공적인 종료를 보장
  • 포드는 restartPolicy OnFailure 를 가지고 동작

시나리오

 

1. job-1.yaml 명세서 작성

apiVersion: batch/v1
kind: Job
metadata:
  name: test-job-1
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: job-container
        image: busybox
        command: ["/bin/sleep", "15"] # 15초짜리 job
      terminationGracePeriodSeconds: 0

 

2. 생성 및 확인

root@ip-172-31-4-27:~# kubectl create -f job-1.yaml 
job.batch/test-job-1 created
root@ip-172-31-4-27:~# kubectl get jobs.batch 
NAME         COMPLETIONS   DURATION   AGE
test-job-1   0/1           13s        13s # 1회 성공했는지 확인

root@ip-172-31-4-27:~# kubectl get jobs.batch 
NAME         COMPLETIONS   DURATION   AGE
test-job-1   0/1           16s        16s

root@ip-172-31-4-27:~# kubectl get jobs.batch 
NAME         COMPLETIONS   DURATION   AGE
test-job-1   1/1           19s        21s

root@ip-172-31-4-27:~# kubectl get jobs.batch,po 
NAME                   COMPLETIONS   DURATION   AGE
job.batch/test-job-1   1/1           19s        94s #@ 컨트롤러
NAME                   READY   STATUS      RESTARTS   AGE
pod/test-job-1-xmcs2   0/1     Completed   0          94s #@ 컨트롤러 기반의 pod 상태 확인

	#@ 15초 뒤에 pod의 상태가 0이 되고 job은 완료 했으므로 1로 변경된것을 확인한다.
	root@ip-172-31-4-27:~/job_cron# kubectl get jobs.batch,po
	NAME                   COMPLETIONS   DURATION   AGE
	job.batch/test-job-1   1/1           20s        72s

	NAME                   READY   STATUS      RESTARTS   AGE
	pod/test-job-1-t56tg   0/1     Completed   0          72s

3. job-2.yaml 명세서 작성

apiVersion: batch/v1
kind: Job
metadata:
  name: test-job-2
spec: # 3가지 param의 특징을 이해한다
  completions: 3 # 반복해서 횟수를 지정할 수 있다
#  parallelism: 2
#  activeDeadlineSeconds: 30
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: job-container
        image: busybox
        command: ["/bin/sleep", "15"]
      terminationGracePeriodSeconds: 0

 

4. 생성 및 확인

root@ip-172-31-4-27:~# kubectl create -f job-2.yaml (completions test)
root@ip-172-31-4-27:~# kubectl get jobs.batch,po 
NAME                   COMPLETIONS   DURATION   AGE
job.batch/test-job-1   1/1           19s        5m13s
job.batch/test-job-2   1/3           26s        26s # 3회 완성이 되어야 한다.

NAME                   READY   STATUS      RESTARTS   AGE
pod/test-job-1-xmcs2   0/1     Completed   0          5m13s
pod/test-job-2-jc6rm   1/1     Running     0          7s
pod/test-job-2-jgfs9   0/1     Completed   0          26s

	#@ pod가 complete 될때 마다 job이 1씩 증가됨을 확인
	root@ip-172-31-4-27:~/job_cron# kubectl get jobs.batch,po 
	NAME                   COMPLETIONS   DURATION   AGE
	job.batch/test-job-2   3/3           60s        114s

	NAME                   READY   STATUS      RESTARTS   AGE
	pod/test-job-2-gq9f5   0/1     Completed   0          114s
	pod/test-job-2-rsq65   0/1     Completed   0          74s
	pod/test-job-2-xqlhj   0/1     Completed   0          94s

 

병렬 처리 시나리오

spec:
  completions: 5 <----수정
  parallelism: 2  <----수정 # 병렬처리 테스트 : 2개의 pod 중에 Total 5번을 completions 하면 된다.
#  activeDeadlineSeconds: 30
  template:
    spec:
      restartPolicy: OnFailure
root@ip-172-31-4-27:~# kubectl create -f job-2.yaml (parallelism test)

1. job-2.yaml 명세서 수정

spec:
  completions: 5 
  parallelism: 2 
  activeDeadlineSeconds: 30   <----수정
  template:
    spec:
      restartPolicy: OnFailure

2. 수행 및 확인

root@ip-172-31-4-27:~# kubectl create -f job-2.yaml (activeDeadlineSeconds test)

	#@ 30초 마다 pod 두개가 생성 -> 삭제 되면서 job의 complete가 늘어남을 확인한다.
	root@ip-172-31-4-27:~/job_cron# kubectl get jobs.batch,po
	NAME                   COMPLETIONS   DURATION   AGE
	job.batch/test-job-2   2/5           3m32s      3m32s

	NAME                   READY   STATUS      RESTARTS   AGE
	pod/test-job-2-6tt5t   0/1     Completed   0          3m32s
	pod/test-job-2-bghnw   0/1     Completed   0          3m32s

 

CronJob 시나리오

  • 리눅스의 crontab과 비슷한 기능이다.
  • */1 : 1분 간격 -> 1분마다 job을 생성해서 pod를 생성한다.
  • */15 : 15분 간격 (1시간에 4번)
  • */20 : 20분 간격 (1시간에 3번)

1.CronJob.yaml 명세서 작성

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cron-job
spec:
  schedule: "*/1 * * * *" # 1분 마다 반복 수행
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: job-container
            image: busybox
            command: ["/bin/sleep", "15"] # 수명이 15초인 pod 생성
          terminationGracePeriodSeconds: 0

2. 생성 및 확인

root@ip-172-31-4-27:~# kubectl get -f CronJob.yaml 
NAME       SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cron-job   */1 * * * *   False     0        <none>          34s

root@ip-172-31-4-27:~# kubectl get cronjobs
NAME       SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cron-job   */1 * * * *   False     0        <none>          49s

root@ip-172-31-4-27:~# kubectl get cronjobs,jobs,po
NAME                     SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cronjob.batch/cron-job   */1 * * * *   False     0        24s             81s

NAME                            COMPLETIONS   DURATION   AGE
job.batch/cron-job-1600314240   1/1           20s        24s
job.batch/test-job-1            1/1           19s        12m
job.batch/test-job-2            3/3           59s        7m41s

NAME                            READY   STATUS      RESTARTS   AGE
pod/cron-job-1600314240-vsznn   0/1     Completed   0          24s

	#@ cron-job에 의해서 job 생성 pod 생성됨을 확인한다.
	#@ 1분마다 job이 생성되고 pod가 15초 후 종료 될때 마다 job은 complete 된다.
	root@ip-172-31-4-27:~/job_cron# kubectl get cronjobs,jobs,po
	NAME                     SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
	cronjob.batch/cron-job   */1 * * * *   False     1        10s             5m

	NAME                            COMPLETIONS   DURATION   AGE
	job.batch/cron-job-1606974420   1/1           20s        3m8s
	job.batch/cron-job-1606974480   1/1           20s        2m8s
	job.batch/cron-job-1606974540   1/1           21s        68s
	job.batch/cron-job-1606974600   0/1           8s         8s

	NAME                            READY   STATUS      RESTARTS   AGE
	pod/cron-job-1606974420-9fdg8   0/1     Completed   0          3m8s
	pod/cron-job-1606974480-mzr6x   0/1     Completed   0          2m8s
	pod/cron-job-1606974540-hbfgc   0/1     Completed   0          68s
	pod/cron-job-1606974600-p98tl   1/1     Running     0          8s

 

Suspend 시나리오

  • create : 이미 있으면 에러
  • apply : create + update
  • patch : edit와 달리 특정 속성만 수정할때 사용한다.
  • edit : 전체 속성을 수정할때 사용
  • set : 이미지 업그레이드 할때 set을 사용한다.
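
각 명령의 차이를 보여 주는 간단한 예시. (cron-job은 이 실습의 리소스이고, set image의 deployment/컨테이너 이름은 설명을 위한 가정이다)

kubectl create -f CronJob.yaml                                    # 이미 있으면 에러
kubectl apply -f CronJob.yaml                                     # 없으면 생성, 있으면 업데이트 (upsert)
kubectl patch cronjobs cron-job -p '{"spec":{"suspend":true}}'    # 특정 속성만 수정
kubectl edit cronjobs cron-job                                    # 전체 명세를 에디터로 수정
kubectl set image deployment/ex-dp container=nasamjang02/app:v2   # 이미지만 교체 (컨테이너 이름은 예시)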

1. 수정 및 확인

#kubectl patch cronjobs cron-job -p '{"spec" : {"suspend" : true }}' #일시 중지 옵션 true
# kubectl get cronjobs,jobs,po
	root@ip-172-31-4-27:~/job_cron# kubectl get cronjobs,jobs,po
	NAME                     SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
	cronjob.batch/cron-job   */1 * * * *   True      1        37s             6m27s #@ SUSPEND가 true로 변경되었다.


#kubectl patch cronjobs cron-job -p '{"spec" : {"suspend" : false }}' <-- 재시작
# kubectl get cronjobs,jobs,po
	root@ip-172-31-4-27:~/job_cron# kubectl get cronjobs,jobs,po
	NAME                     SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
	cronjob.batch/cron-job   */1 * * * *   False     1        5s              6m55s
	#@ SUSPEND가 false로 변경되었고 다시 cron-job이 재실행되는것을 확인한다.

DaemonSet Controller
  • 데몬셋 기반의 pod를 배포한다.
  • 특징 : 노드마다 하나의 pod만 배포하는 방식 (pod를 일정하게 배포)
  • 데몬셋은 스케일아웃을 지원하지 않는다.
  • 노드가 2개인 경우엔 곧 pod가 2개 배포 된다는 뜻.

DaemonSet 기본 확인

root@ip-172-31-4-27:~/controller# kubectl get po --all-namespaces -o wide | grep ip-172-31-13-180
calico-system     calico-node-nrj94                          1/1     Running   1          2d3h   172.31.13.180    ip-172-31-13-180   <none>           <none>
calico-system     calico-typha-564cccbfc5-r7pww              1/1     Running   0          161m   172.31.13.180    ip-172-31-13-180   <none>           <none>
kube-system       kube-proxy-28mlr                           1/1     Running   1          2d3h   172.31.13.180    ip-172-31-13-180   <none>           <none>


#@ daemonset들이 총 두개 있고, 노드가 두개이므로 DESIRED가 2개인 것을 확인 할 수 있다.
root@ip-172-31-4-27:~/controller# kubectl get ds --all-namespaces 
NAMESPACE       NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
calico-system   calico-node   2         2         2       2            2           kubernetes.io/os=linux   2d3h
kube-system     kube-proxy    2         2         2       2            2           kubernetes.io/os=linux   2d3h

 

DaemonSet 시나리오1

1.ds-1.yaml 명세서 작성

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-1 #ds-1으로 올릴꺼다
spec:
  selector:
    matchLabels: #상위 매치레이블 정보
      type: app
  template:
    metadata:
      labels:
        type: app
    spec:
      containers:
      - name: container
        image: nasamjang02/app:v1
        ports:
        - containerPort: 80
          hostPort: 10000

2.ds-2.yaml 명세서 작성

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-2
spec:
  selector:
    matchLabels:
      type: app
  template:
    metadata:
      labels:
        type: app
    spec:
      nodeSelector: # nodeSelector 옵션 때문에 label등록전엔 pending이 된다.
        env: dev
      containers:
      - name: container
        image: nasamjang02/app:v1
        ports:
        - containerPort: 80

3. 수행 및 확인

#@ 차이점 ds-2는 nodeSelector를 지정했다. ds-1은 안했다.
kubectl create -f ds-1.yaml
	root@ip-172-31-4-27:~/controller# kubectl create -f ds-1.yaml
	daemonset.apps/ds-1 created
	root@ip-172-31-4-27:~/controller# kubectl get po -o wide
	NAME         READY   STATUS    RESTARTS   AGE   IP               NODE               NOMINATED NODE   READINESS GATES
	ds-1-gzgjt   1/1     Running   0          29s   192.168.82.46    ip-172-31-13-180   <none>           <none>
	ds-1-lxqbb   1/1     Running   0          29s   192.168.51.222   ip-172-31-4-27     <none>           <none>
	#@ daemonset이므로 노드가 2개이므로 DESIRED가 2개 즉 pod가 2개 생성됨을 확인 각각의 노드에 배포됨을 확인한다

kubectl create -f ds-2.yaml #@nodeSelector(env: dev)에 맞는 label이 아직 없어 pending 되는 것을 확인

kubectl label nodes ip-172-31-4-27 env=dev
kubectl label nodes ip-172-31-13-180 env=prod

kubectl get po -o wide

	root@ip-172-31-4-27:~/controller# kubectl label nodes ip-172-31-4-27 env=dev
	node/ip-172-31-4-27 labeled
	root@ip-172-31-4-27:~/controller# kubectl label nodes ip-172-31-13-180 env=prod
	node/ip-172-31-13-180 labeled
	root@ip-172-31-4-27:~/controller# kubectl get po -o wide
	NAME         READY   STATUS    RESTARTS   AGE     IP               NODE               NOMINATED NODE   READINESS GATES
	ds-1-gzgjt   1/1     Running   0          3m18s   192.168.82.46    ip-172-31-13-180   <none>           <none>
	ds-1-lxqbb   1/1     Running   0          3m18s   192.168.51.222   ip-172-31-4-27     <none>           <none>
	ds-2-z4c84   1/1     Running   0          32s     192.168.51.223   ip-172-31-4-27     <none>           <none>

kubectl label nodes ip-172-31-13-180 env-
kubectl label nodes ip-172-31-13-180 env=dev
kubectl get po -o wide

	#@ worker에 label을 env=dev로 설정하니 daemonset을 통해 pod가 등록된 것을 확인 할 수 있다.
	root@ip-172-31-4-27:~/controller# kubectl label nodes ip-172-31-13-180 env-
	node/ip-172-31-13-180 labeled
	root@ip-172-31-4-27:~/controller# kubectl label nodes ip-172-31-13-180 env=dev
	node/ip-172-31-13-180 labeled
	root@ip-172-31-4-27:~/controller# kubectl get po -o wide
	NAME         READY   STATUS    RESTARTS   AGE     IP               NODE               NOMINATED NODE   READINESS GATES
	ds-1-gzgjt   1/1     Running   0          3m53s   192.168.82.46    ip-172-31-13-180   <none>           <none>
	ds-1-lxqbb   1/1     Running   0          3m53s   192.168.51.222   ip-172-31-4-27     <none>           <none>
	ds-2-2pxkf   1/1     Running   0          10s     192.168.82.47    ip-172-31-13-180   <none>           <none>
	ds-2-z4c84   1/1     Running   0          67s     192.168.51.223   ip-172-31-4-27     <none>           <none>

 

DaemonSet 시나리오2

  • kubectl edit ds ds-1 (rollingupdate test) #@ds도 rollingupdate 지원한다.
  • --image=nasamjang02/app:v2 #@이미지를 v2로 바꿔서 rollingupdate 가 진행되는지 확인한다.

1. 생성 및 확인

root@master1:~# kubectl edit ds ds-1  (Type=RollingUpdate를 ---> OnDelete로, image를 nasamjang02/app:v2로 변경)
daemonset.apps/ds-1 edited

	spec:
      containers:
      - image: nasamjang02/app:v2
        imagePullPolicy: IfNotPresent
        name: container
        ports:


	updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: OnDelete


#@TODO 이미지도 v2로 변경해 주어야 한다.

root@ip-172-31-4-27:~# kubectl get po -o wide
NAME         READY   STATUS    RESTARTS   AGE
ds-1-fptcg   1/1     Running   0          63m
ds-1-krlkw   1/1     Running   0          63m
ds-1-r2f8p   1/1     Running   0          38m
ds-2-4wdsd   1/1     Running   0          66m
ds-2-pp52m   1/1     Running   0          3m57s
secondary    1/1     Running   0          4h17m

root@ip-172-31-4-27:~/controller# curl 172.31.4.27:10000 (node_ip:hostport) #@OnDelete이므로 pod를 삭제하기 전에는 여전히 v1인 것을 확인한다.
This is app v1 test…


	#@ 이미지를 v2로 변경한 뒤 pod를 하나 삭제(OnDelete 트리거)하고 curl로 요청하면 v2가 찍힘을 확인한다.
	root@ip-172-31-4-27:~/controller# kubectl get po -o wide
	NAME         READY   STATUS    RESTARTS   AGE     IP               NODE               NOMINATED NODE   READINESS GATES
	ds-1-92tf4   1/1     Running   0          6m55s   192.168.82.49    ip-172-31-13-180   <none>           <none>
	ds-1-htf6c   1/1     Running   0          6m55s   192.168.51.226   ip-172-31-4-27     <none>           <none>
	ds-2-kgxjf   1/1     Running   0          6m36s   192.168.51.227   ip-172-31-4-27     <none>           <none>
	ds-2-nthxq   1/1     Running   0          6m36s   192.168.82.50    ip-172-31-13-180   <none>           <none>
	root@ip-172-31-4-27:~/controller# kubectl delete po ds-1-htf6c 
	pod "ds-1-htf6c" deleted
	root@ip-172-31-4-27:~/controller# curl 172.31.4.27:10000
	This is app v2 test…



root@ip-172-31-4-27:~# kubectl delete po ds-1-fptcg #@바로 트리거링이 안되는것을 확인한다.
pod "ds-1-fptcg" deleted

root@ip-172-31-4-27:~# curl 172.31.13.91:10000 #@해당 pod의 ip로 접근이 되는지 확인한다.
This is app v1 test...

ec2 instance 추가하기
ami-00099c928597181c5

 

DaemonSet 시나리오3

  • worker2 Join

1. 생성 및 확인

root@ip-172-31-4-27:~# kubeadm token create --print-join-command #@토큰 Create를 다시해야 한다. (기존 토큰은 만료되었음)
kubeadm join 172.31.8.183:6443 --token tp8ek7.57mmzy5w8vv5dzuw     --discovery-token-ca-cert-hash sha256e95285012a67b480fa97b3f9fa0c7bf5cf467722d476fc486aa0350 

root@ip-172-31-4-27:~# kubeadm token list
TOKEN                     TTL         EXPIRES                USAGES                   DESCRIPTION                                      EXTRA GROUPS

tp8ek7.57mmzy5w8vv5dzuw   23h         2020-09-18T02:31:32Z   authentication,signing   <none>                                           system:bootstrappers:kubeadm:default-node-token

worker2>kubeadm join 172.31.8.183:6443 --token tp8ek7.57mmzy5w8vv5dzuw     --discovery-token-ca-cert-hash sha256e95285012a67b480fa97b3f9fa0c7bf5cf467722d476fc486aa0350 

root@ip-172-31-4-27:~# kubectl get nodes
NAME      STATUS     ROLES    AGE     VERSION
kops-m    NotReady   <none>   22s     v1.19.1
ip-172-31-4-27   Ready      master   3d21h   v1.19.1
ip-172-31-13-180   Ready      <none>   3d16h   v1.19.1

root@ip-172-31-4-27:~# kubectl get po #@인스턴스가 추가되어 새로운 worker에 pod가 추가되는것을 확인한다.
NAME         READY   STATUS              RESTARTS   AGE
ds-1-fptcg   1/1     Running             0          24m
ds-1-krlkw   1/1     Running             0          25m
ds-1-r2f8p   0/1     ContainerCreating   0          8s

Scheduler

- 스케쥴러 컴포넌트는 master에 올라간다
- 4인방 중 하나
- kubectl run 등으로 pod 생성이 요청되면, 가용 리소스 등 여러 요소를 고려하여 스케줄러가 노드를 배정하는 역할을 한다.
- 배정을 스케쥴러가 한 후 api server로 전달한다.
- api-server는 각 worker 의 kubelet에게 전달한다.
- 스케쥴러 -> api-server -> kubelet -> 해당 node의 docker 쪽으로 내용을 전달 -> 컨테이너 생성
- 기본적으로 default scheduler를 사용한다.
- 다중 scheduler도 사용할 수 있다.
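
다중 scheduler를 쓰는 경우에는 pod 명세서의 schedulerName으로 어떤 스케줄러가 배정할지 지정할 수 있다. (my-scheduler는 별도로 배포해 두었다고 가정한 커스텀 스케줄러 이름이며, 생략하면 default-scheduler가 사용된다)

apiVersion: v1
kind: Pod
metadata:
  name: custom-scheduled-pod
spec:
  schedulerName: my-scheduler   # 생략 시 default-scheduler 사용
  containers:
  - name: container
    image: nginx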

 

Advanced Scheduling

  • nodeAffinity : 가중치(weight) 기반의 분산 스케줄링이 가능
• 예를 들어 특정 레이블을 가진 노드에(또는 그런 노드를 피해서) 배치되도록 Affinity 설정 가능
• 즉 스케줄링할 때 체크하는 조건이라고 생각하면 된다.
• Pod 명세서 자체에 스케줄 조건을 명시한다.
- preferred : 만족하는 node가 없어도 실행된다 (선호 조건)
- required : 만족하는 node가 없으면 pending 된다 (필수 조건)

 

  • podAffinity : pod 간의 친화성을 제어 (required는 엄격한 필수 조건, preferred는 선호 조건)
• 조건에 만족하는 pod가 있는 node(topology 범위)에 함께 배포
• topologyKey : 기준이 되는 node 레이블. hostname이면 같은 node에, 가용 zone 레이블이면 같은 zone에 함께 배포되도록 바운더리를 지정할 수 있다.

 

Taints and Tolerations 시나리오

1.taint를 설정하여, toleration이 있는 pod만 해당 taint가 설정된 노드에 생성되는지 확인한다
2.worker 노드에 taint를 설정한다. (node-type=prod:NoSchedule)
3.dev-pod pod 생성
4.prod-deployment deployment 생성

 

시작

1. node 레벨 taint 로 설정 (taint가 설정된 시스템에 배정 받을 수 있도록 설정 후 테스트한다.)
resource 타입 : node
effect : NoSchedule

#kubectl taint node ip-172-31-13-180 node-type=prod:NoSchedule

2. 일반 pod 배정 -> master에 배정되는지 확인 (worker에 배포가 안되고 master에 배포되는것을 확인)

dev-pod.yaml 명세서 작성

apiVersion: v1
kind: Pod
metadata:
  name: dev-pod
  labels:
    app: busybox
spec:
  containers:
  - name: dev
    image: busybox
    command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']

3. dev-pod 생성

kubectl create -f dev-pod.yaml

4. prod-deployment.yaml 명세서 작성

apiVersion: apps/v1
kind: Deployment
metadata:
  name: prod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prod
  template:
    metadata:
      labels:
        app: prod
    spec:
      containers:
      - args:
        - sleep
        - "3600"
        image: busybox
        name: main
      tolerations: # tolerations 설정을 했으므로 worker에 배포되는것을 확인한다
      - key: node-type
        operator: Equal
        value: prod
        effect: NoSchedule

5. prod-deplyment 생성

#kubectl create -f prod-deployment.yaml

root@ip-172-31-4-27:~/taint# kubectl get po -o wide
	NAME                    READY   STATUS    RESTARTS   AGE     IP               NODE               NOMINATED NODE   READINESS GATES
	dev-pod                 1/1     Running   0          2m18s   192.168.51.216   ip-172-31-4-27     <none>           <none>
	prod-86b6d56d8d-trgh9   1/1     Running   0          71s     192.168.82.37    ip-172-31-13-180   <none>           <none>
    
#kubectl describe node ip-172-31-13-180

#kubectl taint node ip-172-31-13-180 node-type-
#@ taint 해제하는 방법은 node-type-
	root@ip-172-31-4-27:~/taint# kubectl taint node ip-172-31-13-180 node-type-
	node/ip-172-31-13-180 untainted

Cordon 시나리오

1.worker에 cordon을 설정하여 스케쥴링 제외한다.
2.worker노드에 uncordon으로 해제
3.worker노드에 drain을 설정한다.(cordon 보다 강력 : 스케줄링 제외 + 현재 pod까지 제거)
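
참고로 drain은 데몬셋이 관리하는 pod(kube-proxy, calico-node 등)가 있으면 기본적으로 거부되므로, 보통 아래처럼 옵션을 함께 준다.

kubectl drain ip-172-31-13-180 --ignore-daemonsets   # 데몬셋 pod는 무시하고 나머지 pod를 비운다
kubectl uncordon ip-172-31-13-180                    # 작업이 끝나면 스케줄링을 다시 허용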

 

시작

1. 더 이상 스케줄링 하지 못하도록 설정 후 작업

#kubectl cordon ip-172-31-13-180 (이후 스케줄링에서 제외) #@ worker에 cordon을 설정하여 스케쥴링 제외
	root@ip-172-31-4-27:~/taint# kubectl cordon ip-172-31-13-180
	node/ip-172-31-13-180 cordoned
    
root@ip-172-31-4-27:~/taint# kubectl get nodes
	NAME               STATUS                     ROLES    AGE    VERSION
	ip-172-31-13-180   Ready,SchedulingDisabled   <none>   2d     v1.19.4
	ip-172-31-4-27     Ready                      master   2d1h   v1.19.4
    
#kubectl describe node ip-172-31-13-180|grep -i taint
	root@ip-172-31-4-27:~/taint# kubectl describe node ip-172-31-13-180|grep -i taint
	Taints:             node.kubernetes.io/unschedulable:NoSchedule

#kubectl uncordon ip-172-31-13-180 #node로 해제
	root@ip-172-31-4-27:~/taint# kubectl uncordon ip-172-31-13-180
	node/ip-172-31-13-180 uncordoned

#kubectl drain ip-172-31-13-180 (이후 스케줄링에서 제외 + 모든 pod를 제거) #@ cordon 보다 강력 : 스케줄링 제외 + 현재 pod까지 제거
#kubectl describe node ip-172-31-13-180|grep -i taint
#kubectl get nodes 
	root@ip-172-31-4-27:~/taint# kubectl get nodes
	NAME               STATUS                     ROLES    AGE    VERSION
	ip-172-31-13-180   Ready,SchedulingDisabled   <none>   2d     v1.19.4
	ip-172-31-4-27     Ready                      master   2d1h   v1.19.4
	
#kubectl uncordon ip-172-31-13-180  로 해제
	root@ip-172-31-4-27:~/taint# kubectl uncordon ip-172-31-13-180
	node/ip-172-31-13-180 uncordoned

#kubectl get nodes #@ 확인
	root@ip-172-31-4-27:~/taint# kubectl get nodes
	NAME               STATUS   ROLES    AGE    VERSION
	ip-172-31-13-180   Ready    <none>   2d     v1.19.4
	ip-172-31-4-27     Ready    master   2d1h   v1.19.4

NodeAffinity 시나리오

1.master와 worker노드에 label을 설정한다.
2.pod-nodeaffinity pod를 생성한다. (nodeSelectorTerms.matchExpressions으로 team1 레이블을 갖고있는 node에 생성되는지 확인한다)
3.pod-nodeaffinity.yaml  <----key를 team2로 변경, pod_name 변경
4.pod-nodeaffinity pod를 생성한다 (마스터와 워커에 pod가 생성됨을 확인. team2로 변경했으므로)
5.pod-required pod를 생성한다. (prod이라는 label이 있는지 체크한다. 요구조건에 맞는 node가 없으므로 pending 되는 것을 확인)
6.pod-prefer pod를 생성한다. (preferredDuringSchedulingIgnoredDuringExecution: #조건이 없음에도 불구하고 running 이 되는지 확인)
7.label을 생성한다. (kubectl label nodes ip-172-31-13-180 prod=true)
8.워커의 label을 변경하여 조건에 맞춰 pod-required가 pending에서 running됨을 확인

 

시작

1.master, worker에 label 설정

root@ip-172-31-4-27:~/taint# kubectl label nodes ip-172-31-4-27 team1=dev
node/ip-172-31-4-27 labeled
root@ip-172-31-4-27:~/taint# kubectl label nodes ip-172-31-13-180 team2=dev
node/ip-172-31-13-180 labeled

2.pod-nodeaffinity.yaml 명세서 작성

apiVersion: v1
kind: Pod
metadata:
 name: pod-1
spec:
 affinity:
  nodeAffinity: #노드 레벨로 조사
   requiredDuringSchedulingIgnoredDuringExecution:  #강제로 필수 조건설정 
    nodeSelectorTerms:
    - matchExpressions:
      -  {key: team1, operator: Exists} # team1 레이블을 갖고있는 node
 containers:
 - name: container
   image: nginx
 terminationGracePeriodSeconds: 0

3.pod-required.yaml 명세서 작성

apiVersion: v1
kind: Pod
metadata:
 name: pod-required
spec:
 affinity:
  nodeAffinity:
   requiredDuringSchedulingIgnoredDuringExecution:
    nodeSelectorTerms:
    - matchExpressions:
      - {key: prod, operator: Exists} # prod이라는 label이 있는지 체크한다. 요구조건에 맞는 node가 없으므로 pending 되는 것을 확인
 containers:
 - name: container
   image: nginx
 terminationGracePeriodSeconds: 0

4.pod-prefer.yaml 명세서 작성

apiVersion: v1
kind: Pod
metadata:
 name: pod-prefer
spec:
 affinity:
  nodeAffinity:
   preferredDuringSchedulingIgnoredDuringExecution: #조건이 없음에도 불구하고 running 이 되는지 확인
    - weight: 1
      preference:
       matchExpressions:
       - {key: prod, operator: Exists}
 containers:
 - name: container
   image: nginx
 terminationGracePeriodSeconds: 0

5.생성

root@ip-172-31-4-27:~/affinity# kubectl create -f pod-nodeaffinity.yaml 
pod/pod-1 created

#@ master에서 구동됨을 확인
root@ip-172-31-4-27:~/affinity# kubectl get po -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP               NODE             NOMINATED NODE   READINESS GATES
pod-1   1/1     Running   0          56s   192.168.51.217   ip-172-31-4-27   <none>           <none>

# vi pod-nodeaffinity.yaml  <----key를 team2로 변경, pod_name 변경

# kubectl create -f pod-nodeaffinity.yaml 
pod/pod-2 created
#@ 마스터와 워커에 pod가 생성됨을 확인
root@ip-172-31-4-27:~/affinity# kubectl get po -o wide
NAME    READY   STATUS    RESTARTS   AGE     IP               NODE               NOMINATED NODE   READINESS GATES
pod-1   1/1     Running   0          2m33s   192.168.51.217   ip-172-31-4-27     <none>           <none>
pod-2   1/1     Running   0          12s     192.168.82.38    ip-172-31-13-180   <none>           <none>

#@ 조건이 맞는게 없으므로 pending 되는 것을 확인
root@ip-172-31-4-27:~/affinity# kubectl get po -o wide
NAME           READY   STATUS    RESTARTS   AGE     IP               NODE               NOMINATED NODE   READINESS GATES
pod-1          1/1     Running   0          3m16s   192.168.51.217   ip-172-31-4-27     <none>           <none>
pod-2          1/1     Running   0          55s     192.168.82.38    ip-172-31-13-180   <none>           <none>
pod-required   0/1     Pending   0          8s      <none>           <none>             <none>           <none>
    
# kubectl create -f pod-prefer.yaml #@조건이 없음에도 불구하고 running 이 되는지 확인

#@조건이 없음에도 불구하고 running 이 되는지 확인
root@ip-172-31-4-27:~/affinity# kubectl get po -o wide
NAME           READY   STATUS    RESTARTS   AGE     IP               NODE               NOMINATED NODE   READINESS GATES
pod-1          1/1     Running   0          3m53s   192.168.51.217   ip-172-31-4-27     <none>           <none>
pod-2          1/1     Running   0          92s     192.168.82.38    ip-172-31-13-180   <none>           <none>
pod-prefer     1/1     Running   0          7s      192.168.82.39    ip-172-31-13-180   <none>           <none>
pod-required   0/1     Pending   0          45s     <none>           <none>             <none>           <none>

# kubectl label nodes ip-172-31-13-180 prod=true
root@ip-172-31-4-27:~/affinity# kubectl label nodes ip-172-31-13-180 prod=true
node/ip-172-31-13-180 labeled

#@ 워커의 label을 변경하여 조건에 맞춰 pod-required가 pending에서 running됨을 확인
root@ip-172-31-4-27:~/affinity# kubectl get po -o wide
NAME           READY   STATUS    RESTARTS   AGE     IP               NODE               NOMINATED NODE   READINESS GATES
pod-1          1/1     Running   0          4m56s   192.168.51.217   ip-172-31-4-27     <none>           <none>
pod-2          1/1     Running   0          2m35s   192.168.82.38    ip-172-31-13-180   <none>           <none>
pod-prefer     1/1     Running   0          70s     192.168.82.39    ip-172-31-13-180   <none>           <none>
pod-required   1/1     Running   0          108s    192.168.82.40    ip-172-31-13-180   <none>           <none>

 

Pod affinity 시나리오

1.front-end pod 생성 시도, pending 됨 (사유 nodeSelector: #해당 team에 맞는게 없어 pending 됨을 확인)
2.db pod 생성 시도,
3.label 생성 시도
kubectl label node ip-172-31-4-27 team=dev #label이 생성되면 db pod(nodeSelector: team=dev)의 스케줄링이 시작됨을 확인
kubectl label node ip-172-31-13-180 team=prod
4.db와 front-end가 동일한 node에 올라감을 확인한다.
조건에 맞는 label이 생성되어 pending -> running 됨을 확인한다.

1.db.yaml 명세서 작성

apiVersion: v1
kind: Pod
metadata:
 name: db
 labels:
  type: db
spec:
 nodeSelector: #해당 team에 맞는게 없어 pending 됨을 확인
  team: dev
 containers:
 - name: container
   image: nginx
 terminationGracePeriodSeconds: 0

2.front-end.yaml 명세서 작성

apiVersion: v1
kind: Pod
metadata:
 name: front-end
spec:
 affinity:
  podAffinity:
   requiredDuringSchedulingIgnoredDuringExecution: #required 이기 때문에 조건에 맞지 않아 pending 되는것을 확인
   - topologyKey: team
     labelSelector:
      matchExpressions:
      -  {key: type, operator: In, values: [db]} # db라는 value가 있는 lable을 체크한다
 containers:
 - name: container
   image: nginx
 terminationGracePeriodSeconds: 0

3.생성 및 확인

$ kubectl create -f front-end.yaml 
pod/front-end created
$ kubectl get po
NAME                    READY   STATUS        RESTARTS   AGE
front-end               0/1     Pending       0          6s
$ kubectl describe po 

	#pending 된 이유를 확인한다.
	root@ip-172-31-4-27:~/pod_affinity# kubectl get po
	NAME        READY   STATUS    RESTARTS   AGE
	front-end   0/1     Pending   0          9s

	#노드 셀렉터가 맞지 않아 pending이 발생
	root@ip-172-31-4-27:~/pod_affinity# kubectl describe po 
	생략
	Node-Selectors:  team=dev
	Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
					 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  19s (x2 over 19s)  default-scheduler  0/2 nodes are available: 2 node(s) didn't match node selector. #노드 셀렉터가 맞지 않아 pending이 발생

$ kubectl create -f db.yaml
pod/db created
$ kubectl get po
NAME        READY   STATUS    RESTARTS   AGE
db          0/1     Pending   0          4s
front-end   0/1     Pending   0          68s
	
	root@ip-172-31-4-27:~/pod_affinity# kubectl get po
	NAME        READY   STATUS    RESTARTS   AGE
	db          0/1     Pending   0          5s
	front-end   0/1     Pending   0          2m51s


$ kubectl label node ip-172-31-4-27 team=dev #label이 생성되면 db pod(nodeSelector: team=dev)의 스케줄링이 시작됨을 확인
node/ip-172-31-4-27 labeled

	root@ip-172-31-4-27:~/pod_affinity# kubectl label node ip-172-31-4-27 team=dev
	node/ip-172-31-4-27 labeled

$ kubectl label node ip-172-31-13-180 team=prod
node/ip-172-31-13-180 labeled

	root@ip-172-31-4-27:~/pod_affinity# kubectl label node ip-172-31-13-180 team=prod
	node/ip-172-31-13-180 labeled

$ kubectl get po
NAME        READY   STATUS    RESTARTS   AGE
db          1/1     Running   0          2m
front-end   1/1     Running   0          3m4s
$ kubectl get po -o wide
NAME        READY   STATUS    RESTARTS   AGE     IP               NODE      NOMINATED NODE   READINESS GATES
db          1/1     Running   0          2m10s   192.168.137.98   ip-172-31-4-27   <none>           <none>
front-end   1/1     Running   0          3m14s   192.168.137.99   ip-172-31-4-27   <none>           <none>

	#@db와 front-end가 동일한 node에 올라감을 확인한다.
	#@조건에 맞는 label이 생성되어 pending -> running 됨을 확인한다.
	root@ip-172-31-4-27:~/pod_affinity# kubectl get po
	NAME        READY   STATUS    RESTARTS   AGE
	db          1/1     Running   0          59s
	front-end   1/1     Running   0          3m45s

 

pod-affinity 시나리오 2

1.pod-affinity 생성 시도 
2.조건에 만족하지 않음으로 pending됨을 확인 (사유 : required -> matchExpressions -> db2라는 label의 pod가 있어야 생성된다)
3.db2 pod 생성
4.조건에 부합하여 pending되어 있던 pod-affinity가 running됨을 확인한다
5.db2라는 label의 pod가 있어야 생성된다

 

시작

1.pod-affinity.yaml 명세서 작성

apiVersion: v1
kind: Pod
metadata:
 name: pod-affinity
spec:
 affinity:
  podAffinity:
   requiredDuringSchedulingIgnoredDuringExecution:   
   - topologyKey: team
     labelSelector:
      matchExpressions:
      -  {key: type, operator: In, values: [db2]} #db2라는 label의 pod가 있어야 생성된다
 containers:
 - name: container
   image: nginx
 terminationGracePeriodSeconds: 0

2.db2.yaml 명세서 작성

apiVersion: v1
kind: Pod
metadata:
  name: db2
  labels:
     type: db2
spec:
  nodeSelector:
    team: prod
  containers:
  - name: container
    image: nginx
  terminationGracePeriodSeconds: 0

3.생성 및 확인

$ kubectl create -f pod-affinity.yaml 
pod/pod-affinity created

$ kubectl get po -o wide
NAME           READY   STATUS    RESTARTS   AGE     IP               NODE      NOMINATED NODE   READINESS GATES
db             1/1     Running   0          5m23s   192.168.137.98   ip-172-31-4-27   <none>           <none>
front-end      1/1     Running   0          6m27s   192.168.137.99   ip-172-31-4-27   <none>           <none>
pod-affinity   0/1     Pending   0          22s     <none>           <none>    <none>           <none>

	#@ 조건에 만족하지 않음으로 pending됨을 확인
	root@ip-172-31-4-27:~/pod_affinity# kubectl get po
	NAME           READY   STATUS    RESTARTS   AGE
	db             1/1     Running   0          2m47s
	front-end      1/1     Running   0          5m33s
	pod-affinity   0/1     Pending   0          7s

$ kubectl create -f db2.yaml
pod/db2 created
$ kubectl get po -o wide
NAME           READY   STATUS              RESTARTS   AGE     IP               NODE      NOMINATED NODE   READINESS GATES
db             1/1     Running             0          5m42s   192.168.137.98   ip-172-31-4-27   <none>           <none>
db2            0/1     ContainerCreating   0          4s      <none>           ip-172-31-13-180   <none>           <none>
front-end      1/1     Running             0          6m46s   192.168.137.99   ip-172-31-4-27   <none>           <none>
pod-affinity   0/1     ContainerCreating   0          41s     <none>           ip-172-31-13-180  


	#@ 조건에 부합하여 pending되어 있던 pod-affinity가 running됨을 확인한다
	#@ #db2라는 label의 pod가 있어야 생성된다
	root@ip-172-31-4-27:~/pod_affinity# kubectl get po
	NAME           READY   STATUS    RESTARTS   AGE
	db             1/1     Running   0          3m12s
	db2            1/1     Running   0          9s
	front-end      1/1     Running   0          5m58s
	pod-affinity   1/1     Running   0          32s

 

Anti-affinity 시나리오

1.primary 생성 시도, 완료
2.secondary 생성 시도, 완료
# primary는 마스터에, secondary는 워커에 배포됨을 확인
# kubectl get nodes --show-labels
# 이전에 kubectl label node ip-172-31-4-27 team=dev
# kubectl label node ip-172-31-13-180 team=prod 로 설정해 두었기 때문에 아래와 같은 결과가 나옴

 

시작

1.primary.yaml 명세서 작성

apiVersion: v1
kind: Pod
metadata:
  name: primary
  labels:
     role: primary
spec:
  nodeSelector:
    team: dev # team dev에 배포가 되도록 셋팅
  containers:
  - name: container
    image: nginx
  terminationGracePeriodSeconds: 0

2.secondary.yaml 명세서 작성

apiVersion: v1
kind: Pod
metadata:
 name: secondary
spec:
 affinity:
  podAntiAffinity: # primary와 다른 node에 배포되도록 설정
   requiredDuringSchedulingIgnoredDuringExecution:   
   - topologyKey: team
     labelSelector:
      matchExpressions:
      -  {key: role, operator: In, values: [primary]}
 containers:
 - name: container
   image: nginx
 terminationGracePeriodSeconds: 0

3. 생성 및 확인

$ kubectl create -f primary.yaml 
pod/primary created
$ kubectl get po -o wide
NAME      READY   STATUS              RESTARTS   AGE   IP       NODE      NOMINATED NODE   READINESS GATES
primary   0/1     ContainerCreating   0          4s    <none>   ip-172-31-4-27   <none>           <none>

	root@ip-172-31-4-27:~/anti_affinity# kubectl get po -o wide
	NAME      READY   STATUS    RESTARTS   AGE   IP               NODE             NOMINATED NODE   READINESS GATES
	primary   1/1     Running   0          19s   192.168.51.220   ip-172-31-4-27   <none>           <none>

$ kubectl create -f secondary.yaml 
pod/secondary created
$ kubectl get po -o wide
NAME        READY   STATUS              RESTARTS   AGE   IP                NODE      NOMINATED NODE   READINESS GATES
primary     1/1     Running             0          21s   192.168.137.100   ip-172-31-4-27   <none>           <none>
secondary   0/1     ContainerCreating   0          4s    <none>            ip-172-31-13-180   <none>           <none>

	# primary는 마스터에, secondary는 워커에 배포됨을 확인
	# 이전에 kubectl label node ip-172-31-4-27 team=dev
	# kubectl label node ip-172-31-13-180 team=prod 로 설정해 두었기 때문에 아래와 같은 결과가 나옴
	root@ip-172-31-4-27:~/anti_affinity# kubectl get po -o wide
	NAME        READY   STATUS    RESTARTS   AGE   IP               NODE               NOMINATED NODE   READINESS GATES
	primary     1/1     Running   0          41s   192.168.51.220   ip-172-31-4-27     <none>           <none>
	secondary   1/1     Running   0          7s    192.168.82.43    ip-172-31-13-180   <none>           <none>

# 배포된것 모두 삭제
	$ kubectl delete deploy --all
	$ kubectl delete po --all  
	pod "primary" deleted
	pod "secondary" deleted

 

시나리오

1.primary에 team: prod 로 nodeSelector를 변경한다. (즉, worker 노드에 생성하겠다는 의미)
2.secondary pod를 생성 시도 (podAntiAffinity 옵션으로 primary와 다른 node에 설정 옵션 있음)
3.secondary가 worker노드에 생성됨
4.primary pod 생성 시도
5.primary가 pending됨 (이유 : worker 노드에 생성되도록 nodeSelector를 설정했는데 이미 secondary가 있으므로)
6.secondary pod 삭제 시도, 완료
7.primary pod 가 pending -> running으로 변경
8.secondary pod 생성 시도, 완료

 

시작

1.primary.yaml 의 team: prod로 변경

 

2. 생성 및 확인

$ kubectl create -f secondary.yaml 
pod/secondary created
$ kubectl get po -o wide
NAME        READY   STATUS              RESTARTS   AGE   IP       NODE      NOMINATED NODE   READINESS GATES
secondary   0/1     ContainerCreating   0          4s    <none>   ip-172-31-13-180   <none>           <none>

	# primary가 없기 때문에 secondary는 아무곳에나 적용됨을 확인
	root@ip-172-31-4-27:~/anti_affinity# kubectl create -f secondary.yaml 
	pod/secondary created
	root@ip-172-31-4-27:~/anti_affinity# kubectl get po -o wide
	NAME        READY   STATUS    RESTARTS   AGE   IP              NODE               NOMINATED NODE   READINESS GATES
	secondary   1/1     Running   0          33s   192.168.82.44   ip-172-31-13-180   <none>           <none>


$ kubectl get po -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP                NODE      NOMINATED NODE   READINESS GATES
secondary   1/1     Running   0          19s   192.168.235.186   ip-172-31-13-180   <none>           <none>

$ kubectl create -f primary.yaml 
pod/primary created
$ kubectl get po -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP                NODE      NOMINATED NODE   READINESS GATES
primary     0/1     Pending   0          3s    <none>            <none>    <none>           <none>
secondary   1/1     Running   0          30s   192.168.235.186   ip-172-31-13-180   <none>           <no

	#anti 때문에 서로 다른 노드에 배정되어야 하는데 secondary때문에 primary가 pending됨을 확인한다
	root@ip-172-31-4-27:~/anti_affinity# kubectl get po -o wide
	NAME        READY   STATUS    RESTARTS   AGE   IP              NODE               NOMINATED NODE   READINESS GATES
	primary     0/1     Pending   0          5s    <none>          <none>             <none>           <none>
	secondary   1/1     Running   0          59s   192.168.82.44   ip-172-31-13-180   <none>           <none>

$kubectl describe po primary

    #생략
	Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  26s (x2 over 26s)  default-scheduler  0/2 nodes are available: 1 node(s) didn't match node selector, 1 node(s) didn't match pod affinity/anti-affinity, 1 node(s) didn't satisfy existing pods anti-affinity rules.
    

$ kubectl get nodes --show-labels 
NAME      STATUS   ROLES    AGE     VERSION   LABELS
ip-172-31-4-27   Ready    master   3d17h   v1.19.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=ip-172-31-4-27,kubernetes.io/os=linux,node-role.kubernetes.io/master=,team1=dev,team=dev
ip-172-31-13-180   Ready    <none>   3d12h   v1.19.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=ip-172-31-13-180,kubernetes.io/os=linux,node=worker,prod=true,team2=dev,team=prod

	#@ 마스터와 워커의 label 확인
	root@ip-172-31-4-27:~/anti_affinity# kubectl get nodes --show-labels 
	NAME               STATUS   ROLES    AGE    VERSION   LABELS
	ip-172-31-13-180   Ready    <none>   2d1h   v1.19.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=ip-172-31-13-180,kubernetes.io/os=linux,nfs=node2,prod=true,team2=dev,team=prod
	ip-172-31-4-27     Ready    master   2d1h   v1.19.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=ip-172-31-4-27,kubernetes.io/os=linux,nfs=node1,node-role.kubernetes.io/master=,team1=dev,team=dev

$ kubectl get po -o wide
NAME        READY   STATUS    RESTARTS   AGE     IP                NODE      NOMINATED NODE   READINESS GATES
primary     0/1     Pending   0          2m50s   <none>            <none>    <none>           <none>
secondary   1/1     Running   0          3m17s   192.168.235.186   ip-172-31-13-180   <none>           <none>

$ kubectl delete po secondary 
pod "secondary" deleted

$ kubectl get po -o wide
NAME      READY   STATUS    RESTARTS   AGE     IP                NODE      NOMINATED NODE   READINESS GATES
primary   1/1     Running   0          3m41s   192.168.235.187   ip-172-31-13-180   <none>           <none>


	#@ secondary가 삭제 되었기 때문에 primary가 배포됨을 확인한다
	root@ip-172-31-4-27:~/anti_affinity# kubectl delete po secondary 
	pod "secondary" deleted
	root@ip-172-31-4-27:~/anti_affinity# kubectl get po -o wide
	NAME      READY   STATUS    RESTARTS   AGE     IP              NODE               NOMINATED NODE   READINESS GATES
	primary   1/1     Running   0          2m29s   192.168.82.45   ip-172-31-13-180   <none>           <none>

$ kubectl create -f secondary.yaml 
pod/secondary created

$ kubectl get po -o wide
NAME        READY   STATUS              RESTARTS   AGE     IP                NODE      NOMINATED NODE   READINESS GATES
primary     1/1     Running             0          3m56s   192.168.235.187   ip-172-31-13-180   <none>           <none>
secondary   0/1     ContainerCreating   0          2s      <none>            ip-172-31-4-27   <none>

	#@ 서로 다른 노드에 정상 pod가 생성됨을 확인한다.
	root@ip-172-31-4-27:~/anti_affinity# kubectl create -f secondary.yaml 
	pod/secondary created
	root@ip-172-31-4-27:~/anti_affinity# kubectl get po -o wide
	NAME        READY   STATUS    RESTARTS   AGE    IP               NODE               NOMINATED NODE   READINESS GATES
	primary     1/1     Running   0          3m2s   192.168.82.45    ip-172-31-13-180   <none>           <none>
	secondary   1/1     Running   0          11s    192.168.51.221   ip-172-31-4-27     <none>           <none>


ConfigMaps 생성

 

  • Literal : kubectl create configmap dev-map --from-literal=A=10 --from-literal=B=20 - 문자 그대로 입력
  • file : kubectl create configmap dev-map-1 --from-file=f1 --from-file=new=f2 - f1, f2 파일에서 입력
  • dir : kubectl create configmap dev-map-2 --from-file=d1 - 디렉토리 단위로 입력. 이렇게 3가지 유형으로 ConfigMap에 값을 입력할 수 있다.

1. ConfigMap 의 Key value 전달 방법 Literal 

root@ip-172-31-4-27:~/pv# kubectl create configmap dev-map --from-literal=A=10 --from-literal=B=20
configmap/dev-map created
root@ip-172-31-4-27:~/pv# kubectl get configmaps dev-map -o yaml
apiVersion: v1
data:
  A: "10"
  B: "20"
kind: ConfigMap
metadata:
  creationTimestamp: "2020-12-02T07:24:46Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:A: {}
        f:B: {}
    manager: kubectl-create
    operation: Update
    time: "2020-12-02T07:24:46Z"
  name: dev-map
  namespace: default
  resourceVersion: "219077"
  selfLink: /api/v1/namespaces/default/configmaps/dev-map
  uid: 757fd93b-eac3-4341-8e91-d731809cf4aa

2. ConfigMap 의 Key value 전달 방법 file

root@ip-172-31-4-27:~/pv# kubectl get configmaps dev-map-1 -o yaml
apiVersion: v1
data:
  f1: |
    first
  new: |
    second
kind: ConfigMap
metadata:
  creationTimestamp: "2020-12-02T07:26:42Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:f1: {}
        f:new: {}
    manager: kubectl-create
    operation: Update
    time: "2020-12-02T07:26:42Z"
  name: dev-map-1
  namespace: default
  resourceVersion: "219410"
  selfLink: /api/v1/namespaces/default/configmaps/dev-map-1
  uid: d4daaa0c-4f65-46c5-9b42-cb7c89b77008

3. ConfigMap 의 Key value 전달 방법 dir

root@ip-172-31-4-27:~/pv/d1# cat > key1
1
root@ip-172-31-4-27:~/pv/d1# cat > key2
2
root@ip-172-31-4-27:~/pv/d1# cat > key3
3
root@ip-172-31-4-27:~/pv/d1# kubectl create configmap dev-map-2 --from-file=d1
error: error reading d1: no such file or directory
root@ip-172-31-4-27:~/pv/d1# cd ..
root@ip-172-31-4-27:~/pv# kubectl create configmap dev-map-2 --from-file=d1
configmap/dev-map-2 created
root@ip-172-31-4-27:~/pv# kubectl create configmap dev-map-2 --from-file=d1^C
root@ip-172-31-4-27:~/pv# kubectl get configmaps dev-map-2 -o yaml
apiVersion: v1
data:
  key1: |
    1
  key2: |
    2
  key3: |
    3
kind: ConfigMap
metadata:
  creationTimestamp: "2020-12-02T07:29:21Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:key1: {}
        f:key2: {}
        f:key3: {}
    manager: kubectl-create
    operation: Update
    time: "2020-12-02T07:29:21Z"
  name: dev-map-2
  namespace: default
  resourceVersion: "219857"
  selfLink: /api/v1/namespaces/default/configmaps/dev-map-2
  uid: 8f0ccb26-0044-421a-8c0f-bae7b0ec8c78

root@ip-172-31-4-27:~/pv# kubectl get cm
NAME        DATA   AGE
dev-map     2      5m33s
dev-map-1   2      3m37s
dev-map-2   3      58s
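
생성한 ConfigMap은 pod에서 환경변수로 주입하거나 volume으로 마운트해서 사용한다. 위에서 만든 dev-map을 사용한다고 가정한 스케치는 아래와 같다.

apiVersion: v1
kind: Pod
metadata:
  name: cm-test-pod
spec:
  containers:
  - name: container
    image: busybox
    command: ['sh', '-c', 'echo A=$A && cat /etc/config/A && sleep 3600']
    env:
    - name: A                      # 환경변수로 주입 (dev-map의 key A)
      valueFrom:
        configMapKeyRef:
          name: dev-map
          key: A
    volumeMounts:
    - name: config-volume          # 파일(볼륨) 형태로 마운트
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: dev-map
  terminationGracePeriodSeconds: 0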

Storage Class
  • 좀 더 비용 효율적으로 스토리지를 관리하기 위해 관리자 차원에서 필요한 메커니즘
  • 개발자 관점에서 PV 와 PVC 는 사용상의 편리성을 제공하는 반면, 관리자 관점의 storage Provisioning 의 편리성은 Storage Class 를 통해 지원
  • 관리자 관점에서 Storage Class Define 이 필요

 

volumeBindingMode: WaitForFirstConsumer

pod가 사용하는 시점까지 waiting하여 비용 효율적으로 사용가능하도록 한다.

1.NFS (백엔스토리지)를 미리 만들고
2.PV랑 연결시킨다 (운영팀 담당)
3.PVC 리소스를 만들어서 생성 (개발팀 담당)
4.최종 POD 연결
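
관리자가 정의하는 Storage Class 명세서의 한 예. (provisioner와 parameters는 환경에 따라 달라지며, 아래는 AWS EBS를 쓰는 클러스터를 가정한 스케치다. 개발자는 PVC에서 storageClassName만 지정하면 된다)

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc1
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # 디폴트 SC로 지정
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer   # pod가 사용하는 시점까지 바인딩 대기
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-sc
spec:
  storageClassName: sc1
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi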

 


PV와 PVC

  • 개발자들은 백엔솔루션이 무엇인지 상관없이 알아서 배정되는 방식 : PV, PVC
  • 백엔솔루션 부분을 별도로 분리하여 PV로 관리 개발자는 PVC 요청서만 만들면 된다.
  • 개발자는 더 쉽게 저장공간을 사용할 수 있다.
  • PVC : 사용자 정의 리소스 (Client 사이드 리소스)

1. PV

1.Retain : PV(볼륨)를 보존, 또한 Data도 보존
2.Delete : PV(볼륨) 삭제, Data도 삭제
3.Recycle : 재활용 하겠다. Data는 삭제, PV(볼륨) 보존 (PV만 또 사용하겠다는 의미)

 

  • 추상적인 볼륨 PV1, PV2, PV3를 정의
  • PVC 요청서를 요청 -> Pv2와 연결
  • POD yaml 명세서에 name을 맞춰 PVC와 연결된 상태
  • 만약 이상태에서 PVC를 삭제 하려고 시도하면? 사용하는 POD가 없어야 삭제가 가능하다.
  • 즉 위 3가지는 삭제시 policy 이다.
  • 드라이버마다 3가지 policy를 제공하는 수준이 다르므로 판단이 필요하다.

accessModes: ReadWriteOnce (한쪽에서만 마운트해서 사용하겠다는 의미)

pv.yaml 생성

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-1
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain #PVC가 삭제되어도 PV와 데이터를 유지한다
  nfs:
    path: /share
    server: ip-172-31-4-27   #<-- Edit to match master node
    readOnly: false

pv2.yaml 생성

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-2
spec:
  capacity:
    storage: 200Mi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /test

액세스 모드가 일치하는 PV와 매칭되는지 확인한다

root@ip-172-31-4-27:~/pv# kubectl create -f pv.yaml 
	persistentvolume/pv-1 created
	root@ip-172-31-4-27:~/pv# kubectl create -f pv2.yaml 
	persistentvolume/pv-2 created
	root@ip-172-31-4-27:~/pv# kubectl get pv
	NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
	pv-1   1Gi        RWX            Retain           Available                                   16s
	pv-2   200Mi      RWO            Recycle          Available                                   10s

 

PVC

- 개발자가 작성 후 바인딩해서 사용하는 방식

Access Mode

  • ReadWriteOnce
  • ReadOnlyMany
  • ReadWriteMany

PVC 가 생성시 요청 Capacity 를 수용하고 Access Mode 를 포함하는 PV 를 찾아 Binding.
Access Mode 는 Pod 단위가 아닌 Worker node 가 volume 에 동시 사용 가능한지에 대한 항목.

 

pvc.yaml 명세서 정의

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-1 #
spec:
  accessModes:
  - ReadWriteMany #요청 방법 PV에서 제공해줘야 한다
  resources:
     requests:
       storage: 100Mi #100Mi 이상 제공할 수 있어야 한다

accessMode 때문에 pv-1과 연결된것을 확인한다.

root@ip-172-31-4-27:~/pv# kubectl create -f pvc.yaml 
	persistentvolumeclaim/pvc-1 created
	root@ip-172-31-4-27:~/pv# kubectl get pv
	NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM           STORAGECLASS   REASON   AGE
	pv-1   1Gi        RWX            Retain           Bound       default/pvc-1                           107s
	pv-2   200Mi      RWO            Recycle          Available                                           101s

nfs-pod.yaml 명세서 정의

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: nginx
    name: web-container
    volumeMounts:
    - mountPath: /data
      name: cache-volume
  - image: busybox
    name: write-container
    command: ['sh','-c','echo hello k8s! && sleep 1000']
    volumeMounts:
    - mountPath: /app
      name: cache-volume
  volumes:
  - name: cache-volume
    nfs:
      server: ip-172-31-4-27  #nfs-server
      path: /share
      #type: Directory
  terminationGracePeriodSeconds: 0
root@ip-172-31-4-27:~/pv# kubectl create -f nfs-pod.yaml 
	pod/test-pd created
	root@ip-172-31-4-27:~/pv# kubectl exec test-pd -c web-container -- ls /data
	hello.txt

PV 의 Raw Block mode

pvc 생성 요청 -> 여러 PV가 있다면 volumeMode가 같은 PV를 찾아 바인딩 -> POD의 claimName이 PVC의 name과 일치하는지 확인
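
세 곳(PV의 volumeMode, PVC의 volumeMode, Pod의 volumeDevices.devicePath)이 맞아야 한다는 구조만 보여 주는 스케치. (iSCSI 타겟 주소 등은 가상의 값이다)

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-block
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  volumeMode: Block                # PV 쪽에서 Block 모드 지정
  iscsi:
    targetPortal: 10.0.0.10:3260   # 가상의 iSCSI 타겟
    iqn: iqn.2020-12.example:storage
    lun: 0
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-block
spec:
  volumeMode: Block                # PVC에서도 Block 모드로 요청
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: block-pod
spec:
  containers:
  - name: container
    image: busybox
    command: ['sh', '-c', 'sleep 3600']
    volumeDevices:                 # volumeMounts 대신 volumeDevices 사용
    - name: block-volume
      devicePath: /dev/xvda        # 컨테이너 안에서 보이는 디바이스 경로
  volumes:
  - name: block-volume
    persistentVolumeClaim:
      claimName: pvc-block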

Volumes (pod 내 공유 방법, Pod 외부 공유 방법, NFS)

• 데이터 공유 
  : pod와 pod간에 데이터 공유
• 영구적 저장 
  : volume을 영구적으로 저장
• 설정 데이터 제공 

  : decoupling 메소드 사용
• 보안상 중요 데이터 제공

메타데이타
: 컨테이너 내의 프로세스에서 메타데이터 확인 용도로 볼륨을 사용

 

Pod 내 공유 방법

emptyDir
gitRepo
  • Pod 내부 Container 가 외부 디스크 스토리지에 접근하는 방법
  • 파일 시스템은 컨테이너 이미지에서 제공(컨테이너별 독립적이며 휘발성)
  • Pod 전용으로 생성, 사용되는 디렉토리로 포드 삭제시 제거된다

emptyDir.yaml

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: nginx
    name: web-container
    volumeMounts:
    - mountPath: /data
      name: cache-volume
  - image: busybox
    name: write-container
    command: ['sh','-c','echo hello k8s! && sleep 1000']
    volumeMounts:
    - mountPath: /app
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {} #@ 백엔솔루션을 emptyDir로 정의. emptyDir은 휘발성인 것을 기억한다
	
	NAME          READY   STATUS    RESTARTS   AGE
	pod/test-pd   2/2     Running   0          72s #@ 두개가 뜬것을 확인

yaml 명세서 생성 후 삭제 테스트를 해본다

#kubectl create -f empty.yaml
# kubectl exec test-pd -c web-container -- ls /data
	root@ip-172-31-4-27:~/volumes# kubectl exec test-pd -c web-container -- df -h
	Filesystem      Size  Used Avail Use% Mounted on
	overlay          20G  3.3G   17G  18% /
	tmpfs            64M     0   64M   0% /dev
	tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
	/dev/nvme0n1p1   20G  3.3G   17G  18% /data
	shm              64M     0   64M   0% /dev/shm
	tmpfs           1.9G   12K  1.9G   1% /run/secrets/kubernetes.io/serviceaccount
	tmpfs           1.9G     0  1.9G   0% /proc/acpi
	tmpfs           1.9G     0  1.9G   0% /proc/scsi
	tmpfs           1.9G     0  1.9G   0% /sys/firmware
		#@ 백엔드로 공유를 하고 있는 상태인 것을 확인 가능

# kubectl exec test-pd -c write-container -- ls /app
# kubectl exec test-pd -c write-container -- touch /app/hello.txt #@파일 생성
# kubectl exec test-pd -c web-container -- ls /data
	root@ip-172-31-4-27:~/volumes# kubectl exec test-pd -c write-container -- touch /app/hello.txt
	root@ip-172-31-4-27:~/volumes# kubectl exec test-pd -c web-container -- ls /data
	hello.txt

# kubectl exec test-pd -c web-container -- touch /data/web.txt
# kubectl exec test-pd -c write-container -- ls /app
hello.txt
web.txt
	root@ip-172-31-4-27:~/volumes# kubectl exec test-pd -c web-container -- touch /data/web.txt
	root@ip-172-31-4-27:~/volumes# kubectl exec test-pd -c web-container -- ls /data
	hello.txt
	web.txt

# kubectl delete po test-pd 
pod "test-pd" deleted
# kubectl create -f empty.yaml 
pod/test-pd created
# kubectl get po test-pd 
NAME      READY   STATUS    RESTARTS   AGE
test-pd   2/2     Running   0          16s
# kubectl exec test-pd -c write-container -- ls /app
# kubectl exec test-pd -c web-container -- ls /data
	root@ip-172-31-4-27:~/volumes# kubectl delete po test-pd 
	pod "test-pd" deleted
	root@ip-172-31-4-27:~/volumes# kubectl exec test-pd -c web-container -- ls /data
	Error from server (NotFound): pods "test-pd" not found
	root@ip-172-31-4-27:~/volumes# kubectl create -f empty.yaml
	pod/test-pd created
	root@ip-172-31-4-27:~/volumes# kubectl exec test-pd -c web-container -- ls /data
	root@ip-172-31-4-27:~/volumes# kubectl exec test-pd -c web-container -- ls /data
	
	#@ delete 이후 다시 생성해도 휘발성이라 안보이는것을 확인한다.

 

외부 공유 방법

hostPath
  • pod 외부의 공유방식
  • 즉, pod가 삭제되도 볼륨은 존재한다.
  • 같은 worker node 위의 pod들끼리는 공유 가능 (노드 간 공유는 별도의 NAS(NFS) 필요)
  • Daemonset : 
    /var/log/ 에 로그가 쌓이는 경우
    추상적인 볼륨을 생성하고 -> 마운트path를 통해 -> /var/log 를 바라보도록 설정
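
위의 Daemonset + /var/log 패턴을 그대로 옮겨 보면 대략 아래와 같은 모양이 된다. (log-agent라는 이름과 tail 명령은 설명을 위한 가정이다)

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
spec:
  selector:
    matchLabels:
      type: log-agent
  template:
    metadata:
      labels:
        type: log-agent
    spec:
      containers:
      - name: log-agent
        image: busybox
        command: ['sh', '-c', 'tail -f /var/log/syslog']   # 호스트에 syslog가 쌓인다고 가정
        volumeMounts:
        - name: varlog
          mountPath: /var/log      # 컨테이너 안의 마운트 경로
      volumes:
      - name: varlog
        hostPath:
          path: /var/log           # 각 노드의 /var/log를 그대로 바라본다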

hostpath.yaml 명세서 작성

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: nginx
    name: web-container #@ 위와 동일하게 컨테이너 
    volumeMounts:
    - mountPath: /data
      name: cache-volume
  - image: busybox
    name: write-container
    command: ['sh','-c','echo hello k8s! && sleep 1000']
    volumeMounts:
    - mountPath: /app
      name: cache-volume
  volumes:
  - name: cache-volume
    hostPath:
      path: /share #@share 폴더 지정 자동으로 share 폴더가 생성됨을 확인

pod 생성 후 삭제

# kubectl delete po test-pd 
pod "test-pd" deleted
# kubectl create -f hostpath.yaml 
pod/test-pd created
# kubectl get po test-pd -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP                NODE              NOMINATEINESS GATES
test-pd   2/2     Running   0          11s   192.168.157.221   ip-172-31-1-145   <none>  e>

 

ip-172-31-1-145에 /share 확인하고 touch로 hostpath.txt 생성합니다
	root@ip-172-31-4-27:~/volumes# kubectl create -f hostpath.yaml
	pod/test-pd created
	root@ip-172-31-4-27:~/volumes# kubectl get po test-pd -o wide
	NAME      READY   STATUS    RESTARTS   AGE   IP              NODE               NOMINATED NODE   READINESS GATES
	test-pd   2/2     Running   0          10s   192.168.82.29   ip-172-31-13-180   <none>           <none>
	#@worker 쪽에 /share 폴더가 생성되었는지 확인한다.

# kubectl exec test-pd -c web-container -- ls /data
hostpath.txt
# kubectl exec test-pd -c write-container -- ls /app
hostpath.txt
# kubectl exec test-pd -c write-container -- touch /app/write.txt
# kubectl exec test-pd -c web-container -- ls /data
hostpath.txt
write.txt
# kubectl delete -f hostpath.yaml 
pod "test-pd" deleted
	#@삭제 이후에 /share/write.txt가 유지 되는지 확인
	
# kubectl create -f hostpath.yaml 
pod/test-pd created
	root@ip-172-31-4-27:~/volumes# kubectl delete -f hostpath.yaml
	pod "test-pd" deleted
	root@ip-172-31-4-27:~/volumes# kubectl create -f hostpath.yaml 
	pod/test-pd created
	root@ip-172-31-4-27:~/volumes# kubectl exec test-pd -c web-container -- ls /data
	write.txt
	#@ hostPath로 지정되어 있기때문에 파일이 보이는것을 확인 할 수 있다. 동일한 worker node에 생성되었기 때문이다.

root@master1:~/lab/aws-k8s-lab-yaml# kubectl get po test-pd -o wide  (다행히도 동일 시스템에 배포)
NAME      READY   STATUS    RESTARTS   AGE   IP                NODE              NOMINATEINESS GATES
test-pd   2/2     Running   0          33s   192.168.157.222   ip-172-31-1-145   <none>  e>
# kubectl exec test-pd -c write-container -- ls /app
hostpath.txt
write.txt
# kubectl exec test-pd -c web-container -- ls /data
hostpath.txt
write.txt

 

3. NFS

  • 네트워크 스토리지 사용하는 방법

nfs.yaml 명세서 작성

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: nginx
    name: web-container
    volumeMounts:
    - mountPath: /data
      name: cache-volume
  - image: busybox
    name: write-container
    command: ['sh','-c','echo hello k8s! && sleep 1000']
    volumeMounts:
    - mountPath: /app
      name: cache-volume
  volumes:
  - name: cache-volume
    nfs:
      server: ip-172-31-4-27  #nfs server IP로 변경
      path: /share
  nodeSelector:
    nfs: node1 #@nodeSelector 속성을 주어서 각각 따로따로 배포가 되도록 한다.

 

#cp nfs.yaml nfs-2.yaml
#vi nfs-2.yaml
   test-pd2로 이름변경
    nfs: node2로 Selector 변경

ip-172-31-4-27
ip-172-31-13-180

master에서 NFS server 작업

master$ sudo apt-get update && sudo apt-get install -y nfs-kernel-server
$sudo mkdir -m 1777 /share   (same as mkdir /share && chmod 1777 /share)
$sudo touch /share/hello.txt
$sudo vim /etc/exports
/share/ *(rw,sync,no_root_squash,subtree_check)
$sudo exportfs -ra 
	#@ re-export all entries from /etc/exports
$sudo exportfs   (verify)
	root@ip-172-31-4-27:~# exportfs -ra
	root@ip-172-31-4-27:~# exportfs
	/share        	<world>

NFS client setup on the master/worker nodes

worker$sudo apt-get -y install nfs-common

kubectl label node ip-172-31-4-27 nfs=node1
kubectl label node ip-172-31-13-180 nfs=node2
kubectl create -f nfs.yaml
kubectl create -f nfs-2.yaml
Check that the two pods were scheduled onto different nodes
kubectl get po -o wide 
	root@ip-172-31-4-27:~/nfs# kubectl get po -o wide
	NAME       READY   STATUS              RESTARTS   AGE   IP               NODE               NOMINATED NODE   READINESS GATES
	test-pd    2/2     Running             0          11s   192.168.51.211   ip-172-31-4-27     <none>           <none>
	test-pd2   0/2     ContainerCreating   0          6s    <none>           ip-172-31-13-180   <none>           <none>

root@master1# touch /share/nfs.txt 
root@master1# kubectl exec test-pd2 -c web-container -- ls /data
hello.txt
nfs.txt
root@master1# kubectl exec test-pd -c web-container -- ls /data #@ the pods run on different nodes but share the same NFS-backed directory
hello.txt
nfs.txt


Hands-on: a zero-downtime Blue/Green deployment scenario.

#mkdir docker/app #@ create the working directory
#cd docker/app
#vi Dockerfile #@ create the Dockerfile
FROM nginx
ADD index.html /usr/share/nginx/html #@ copy index.html into the nginx document root inside the container

 

#vi index.html 
This is app v1 test…
#docker build -t app:v1 . #@ '.' is the current directory containing the Dockerfile
	root@ip-172-31-4-27:~/docker/app# docker build -t app:v1 .
	Sending build context to Docker daemon  3.072kB
	Step 1/2 : FROM nginx
	 ---> bc9a0695f571
	Step 2/2 : ADD index.html /usr/share/nginx/html
	 ---> ba8e7526e7b9
	Successfully built ba8e7526e7b9
	Successfully tagged app:v1

 

#vi index.html 
This is app v2 test...
#docker build -t app:v2 .
	root@ip-172-31-4-27:~/docker/app# docker build -t app:v2 .
	Sending build context to Docker daemon  3.072kB
	Step 1/2 : FROM nginx
	 ---> bc9a0695f571
	Step 2/2 : ADD index.html /usr/share/nginx/html
	 ---> 4eaf1726cc8b
	Successfully built 4eaf1726cc8b
	Successfully tagged app:v2

 

#docker login #@ prompts for the Docker Hub username and password
	root@ip-172-31-4-27:~/docker/app# docker login
	Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
	Username: nasamjang02	    
	Password: 
	WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
	Configure a credential helper to remove this warning. See
	https://docs.docker.com/engine/reference/commandline/login/#credentials-store

 

#docker tag app:v1 nasamjang02/app:v1  #@ tag the local image so it can be pushed to my Docker Hub repo
			(source image)	(target repo:tag)

 

#docker push nasamjang02/app:v1
	root@ip-172-31-4-27:~/docker/app# docker push nasamjang02/app:v1
	The push refers to repository [docker.io/nasamjang02/app]
	54d08aacb5be: Pushed 
	7e914612e366: Mounted from library/nginx 
	f790aed835ee: Mounted from library/nginx 
	850c2400ea4d: Mounted from library/nginx 
	7ccabd267c9f: Mounted from library/nginx 
	f5600c6330da: Mounted from library/nginx 
	v1: digest: sha256:e2d9e2d5b24789e77bd93d2ec7aba7a862c92c7664bc44a33c11fe6395554484 size: 1569

 

#docker tag app:v2 nasamjang02/app:v2
#docker push nasamjang02/app:v2
	root@ip-172-31-4-27:~/docker/app# docker push nasamjang02/app:v2
	The push refers to repository [docker.io/nasamjang02/app]
	b52e0c763eb7: Pushed 
	7e914612e366: Layer already exists 
	f790aed835ee: Layer already exists 
	850c2400ea4d: Layer already exists 
	7ccabd267c9f: Layer already exists 
	f5600c6330da: Layer already exists 
	v2: digest: sha256:f9f42c87d39a2822b11870e75bcff6acaca60ae36dd753019ccd7614333bca3e size: 1569

 

#kubectl create deploy blue --image nasamjang02/app:v1
						#@ start the 'blue' deployment with the v1 image
	root@ip-172-31-4-27:~/docker/app# kubectl create deploy blue --image nasamjang02/app:v1
	deployment.apps/blue created

 

#kubectl expose deployment blue --port=80 --name app #@ expose the blue deployment through a service named app so clients can reach it
	root@ip-172-31-4-27:~/docker/app# kubectl expose deployment blue --port=80 --name app
	service/app exposed

#kubectl get svc
	root@ip-172-31-4-27:~/docker/app# kubectl get svc
	NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
	app          ClusterIP   10.111.169.205   <none>        80/TCP    16s
	kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   25h

#curl 10.110.252.32:80 (service IP)
This is app v1 test...
	root@ip-172-31-4-27:~/docker/app# curl 10.111.169.205:80
	This is app v1 test…
#kubectl create deploy green --image nasamjang02/app:v2
	root@ip-172-31-4-27:~/docker/app# kubectl create deploy green --image nasamjang02/app:v2
	deployment.apps/green created
	
	NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
	deployment.apps/blue    1/1     1            1           2m9s
	deployment.apps/green   1/1     1            1           21s

	NAME                               DESIRED   CURRENT   READY   AGE
	replicaset.apps/blue-6c88d658db    1         1         1       2m9s
	replicaset.apps/green-5f846655d4   1         1         1       21s

	NAME                         READY   STATUS    RESTARTS   AGE
	pod/blue-6c88d658db-2lvqn    1/1     Running   0          2m9s
	pod/green-5f846655d4-8blcs   1/1     Running   0          21s

	
#curl green-pod-ip
This is app v2 test...
#kubectl get deploy --show-labels #@ check the labels
	root@ip-172-31-4-27:~/docker/app# kubectl get deploy --show-labels
	NAME    READY   UP-TO-DATE   AVAILABLE   AGE     LABELS
	blue    1/1     1            1           3m59s   app=blue
	green   1/1     1            1           2m11s   app=green

#kubectl set selector svc app app=green #@ switch the service selector to app=green
	root@ip-172-31-4-27:~/docker/app# kubectl set selector svc app app=green
	service/app selector updated
#kubectl get ep #@ check the endpoints
	root@ip-172-31-4-27:~/docker/app# kubectl get ep
	NAME         ENDPOINTS          AGE
	app          192.168.82.26:80   3m47s
	kubernetes   172.31.4.27:6443   26h
	
#curl 10.108.70.86  (service IP)
This is app v2 test...
	root@ip-172-31-4-27:~/docker/app# curl 192.168.82.26:80
	This is app v2 test…
	
	Switch back to blue
	root@ip-172-31-4-27:~/docker/app# kubectl set selector svc app app=blue
	service/app selector updated

	Check the endpoint
	root@ip-172-31-4-27:~/docker/app# kubectl get ep
	NAME         ENDPOINTS          AGE
	app          192.168.82.25:80   4m57s
	kubernetes   172.31.4.27:6443   26h
	
	Try curl again
	root@ip-172-31-4-27:~/docker/app# curl 192.168.82.25:80
	This is app v1 test…


kubectl delete deployments.apps blue
	root@ip-172-31-4-27:~# kubectl delete deployments.apps blue
	deployment.apps "blue" deleted
	root@ip-172-31-4-27:~# kubectl delete deployments.apps green
	deployment.apps "green" deleted

Deploy nginx with a Deployment

root@ip-172-31-4-27:~# kubectl create deployment nginx --image=nginx
	deployment.apps/nginx created
	root@ip-172-31-4-27:~# kubectl get deploy
	NAME      READY   UP-TO-DATE   AVAILABLE   AGE
	example   1/1     1            1           18m
	nginx     1/1     1            1           12s

Check the deploy, service, endpoint, and pods

root@ip-172-31-4-27:~# kubectl scale deployment nginx --replicas=3
root@ip-172-31-4-27:~# kubectl get deploy,svc,ep,po
	NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
	deployment.apps/nginx   3/3     3            3           2m8s

	NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
	service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   25h

	NAME                   ENDPOINTS          AGE
	endpoints/kubernetes   172.31.4.27:6443   25h

	NAME                         READY   STATUS    RESTARTS   AGE
	pod/nginx-6799fc88d8-77kj6   1/1     Running   0          13s
	pod/nginx-6799fc88d8-8t5cp   1/1     Running   0          13s
	pod/nginx-6799fc88d8-v9m2z   1/1     Running   0          23h

Check the nginx image

root@ip-172-31-4-27:~# kubectl describe deployments nginx |grep -i image
    Image:        nginx

With --record, the change cause is stored and shows up in rollout history (so you can see exactly what changed).

root@ip-172-31-4-27:~# kubectl set image deploy nginx nginx=nginx:1.14.2 --record
	deployment.apps/nginx image updated

Check the history

root@ip-172-31-4-27:~# kubectl rollout history deployment nginx
	deployment.apps/nginx 
	REVISION  CHANGE-CAUSE
	1         <none>
	2         kubectl set image deploy nginx nginx=nginx:1.14.2 --record=true

Confirm that revision 2 uses nginx:1.14.2

root@ip-172-31-4-27:~# kubectl rollout history deployment nginx --revision=2
	deployment.apps/nginx with revision #2
	Pod Template:
	  Labels:	app=nginx
		pod-template-hash=75d64795db
	  Annotations:	kubernetes.io/change-cause: kubectl set image deploy nginx nginx=nginx:1.14.2 --record=true
	  Containers:
	   nginx:
		Image:	nginx:1.14.2
		Port:	<none>
		Host Port:	<none>
		Environment:	<none>
		Mounts:	<none>
	  Volumes:	<none>

Confirm that revision 1 uses nginx:latest

root@ip-172-31-4-27:~# kubectl rollout history deployment nginx --revision=1
	deployment.apps/nginx with revision #1
	Pod Template:
	  Labels:	app=nginx
		pod-template-hash=6799fc88d8
	  Containers:
	   nginx:
		Image:	nginx
		Port:	<none>
		Host Port:	<none>
		Environment:	<none>
		Mounts:	<none>
	  Volumes:	<none>

 

root@ip-172-31-4-27:~# kubectl get deploy,rs,po
	NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
	deployment.apps/nginx   3/3     3            3           8m33s

	NAME                               DESIRED   CURRENT   READY   AGE
	replicaset.apps/nginx-6799fc88d8   0         0         0       23h
	replicaset.apps/nginx-75d64795db   3         3         3       3m48s

	NAME                         READY   STATUS    RESTARTS   AGE
	pod/nginx-75d64795db-4qjp8   1/1     Running   0          3m30s
	pod/nginx-75d64795db-52l87   1/1     Running   0          3m48s
	pod/nginx-75d64795db-frbc8   1/1     Running   0          3m39s

 

root@ip-172-31-4-27:~# kubectl describe po nginx-75d64795db-4qjp8 |grep -i image
		Image:          nginx:1.14.2
		Image ID:       docker-pullable://nginx@sha256:f7988fb6c02e0ce69257d9bd9cf37ae20a60f1df7563c3a2a6abe24160306b8d
	  Normal  Pulling    7m34s  kubelet            Pulling image "nginx:1.14.2"
	  Normal  Pulled     7m30s  kubelet            Successfully pulled image "nginx:1.14.2" in 3.371438963s

Roll back to revision 1

root@ip-172-31-4-27:~# kubectl rollout undo deployment nginx --to-revision=1
	deployment.apps/nginx rolled back

Check the rollout status

root@ip-172-31-4-27:~# kubectl rollout status deployment nginx
	Waiting for deployment "nginx" rollout to finish: 1 old replicas are pending termination...
	Waiting for deployment "nginx" rollout to finish: 1 old replicas are pending termination...
	deployment "nginx" successfully rolled out

Confirm that the deployment is back on the original nginx:latest image

root@ip-172-31-4-27:~# kubectl describe po nginx-6799fc88d8-98x77 |grep -i image
		Image:          nginx
		Image ID:       docker-pullable://nginx@sha256:6b1daa9462046581ac15be20277a7c75476283f969cb3a61c8725ec38d3b01c3
	  Normal  Pulling    43s   kubelet            Pulling image "nginx"
	  Normal  Pulled     40s   kubelet            Successfully pulled image "nginx" in 3.241107792s

RollingUpdate

  • Mainly used when upgrading an application to a new version.

Three methods

  • set : creates a new ReplicaSet; use kubectl set image to bump the Docker image version
  • edit : editing the live resource triggers the update
  • apply : update the yaml manifest and apply it again (a sketch follows)
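For the apply route, a minimal sketch assuming the same nginx deployment used above (the labels shown are illustrative); editing the image in the file and re-applying it is what triggers the rolling update:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2      # changed from nginx; re-applying the file rolls the pods to this version

Running kubectl apply -f on the edited file upserts the deployment, and the controller then replaces the pods gradually.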

 

updateStrategy

  • OnDelete : applies to DaemonSets; updated pods are rolled out only when the old ones are deleted (a sketch follows)
  • RollingUpdate trigger : minimizes service downtime during the update
  • Recreate : deletes the existing pods first, then creates the new ones
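A minimal sketch of the OnDelete strategy on a DaemonSet (the name and image are illustrative); with this setting, an updated pod template only takes effect on a node after its existing pod is deleted by hand:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent               # illustrative name
spec:
  updateStrategy:
    type: OnDelete               # pods are replaced only when you delete them yourself
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
      - name: agent
        image: busybox           # placeholder image
        command: ['sh','-c','sleep 3600']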

Reference: Kubernetes in Action

- The application is deployed as a ReplicaSet owned by a Deployment
- The figure shows pods being rolled over sequentially from v1 -> v2
- kubectl rollout undo can roll back to the previous version

 

Reference: Kubernetes in Action

- The rolling-update speed can be tuned through strategy properties (see the sketch below)
- maxSurge : how many extra pods may be created on top of the desired count
- maxUnavailable : how many pods may be unavailable at once; with 0, every desired replica must keep running
- The v1 -> v2 update is triggered via set, edit, or apply
- Because maxUnavailable is 0, a new pod is created first and the old one is deleted afterwards (only then does the unavailable count stay at 0)
- To preserve the existing capacity, set maxSurge to at least 1 so the extra new pod can be created
- Desired replica count is the target number of serving replicas
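A minimal sketch of those two knobs on a Deployment (name and values chosen only to match the description above):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: rolling-demo             # illustrative name
spec:
  replicas: 3                    # desired replica count
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1                # at most one extra pod may exist above the desired count
      maxUnavailable: 0          # the number of serving pods never drops below the desired count
  selector:
    matchLabels:
      app: rolling-demo
  template:
    metadata:
      labels:
        app: rolling-demo
    spec:
      containers:
      - name: web
        image: nginx:1.14.2

With these values the controller always creates a new pod first and deletes an old one only after the new one is ready.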

 

Blue/Green deployment (reference: Kubernetes in Action)

- Deploys the new version without affecting the capacity of the running version at all
- Works because parent and child objects are tied together through labels and selectors
- Bring the pods up as v2 first, then switch the Service's selector to v2 (see the sketch below)
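A minimal sketch of the selector switch, assuming the two deployments carry a version label (the label keys here are illustrative; the hands-on walkthrough above used app=blue / app=green instead):

apiVersion: v1
kind: Service
metadata:
  name: app
spec:
  selector:
    app: myapp
    version: v2                  # was v1; changing this single field moves all traffic to the green pods
  ports:
  - port: 80
    targetPort: 80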

 

 

  1. LivenessProbe : detects a failed service and decides whether the container should be restarted
  2. ReadinessProbe : checks whether the service is ready and manages the Service endpoints accordingly

 

- Tells the cluster whether the service is actually able to serve traffic.

1) The service process inside the pod's container may need warm-up time, so it cannot serve requests right away.
2) The Service endpoint, however, is created as soon as the pod exists.
3) An external client calls the SVC -> the request is forwarded -> the pod is not ready yet -> an error is returned.
4) In other words, readiness has to be verified; client requests should only arrive once the pod is actually ready.
5) By defining liveness/readiness probes in the pod, the endpoint is published only after the probes succeed (a sketch of an httpGet-style probe follows).
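Besides the exec method used in the examples below, probes can also use httpGet or tcpSocket. A minimal httpGet readinessProbe sketch (the pod name, path, and timings are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-r-http               # illustrative name
  labels:
    app: readiness
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
    readinessProbe:
      httpGet:
        path: /index.html        # must return a 2xx/3xx status before the pod is added to the endpoints
        port: 80
      initialDelaySeconds: 3
      periodSeconds: 5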

 

ReadinessProbe

svc-readiness.yaml specifies app: readiness as the selector

root@master1:~# cat svc-readiness.yaml 
apiVersion: v1
kind: Service
metadata:
  name: svc-1
spec:
  selector:
    app: readiness
  ports:
  - port: 8080
    targetPort: 80

In pod-readiness.yaml:

  • the pod carries the label app: readiness
  • the hostPath volume points at /app, where index.html does not exist, so an error occurs
  • because the readinessProbe command fails, the pod is kept out of the Service's endpoints
root@master1:~# cat pod-readiness.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-r-exec
  labels:
    app: readiness
spec:
  containers:
  - name: test-readiness
    image: rosehs00/web:volume
    ports:
    - containerPort: 80
    readinessProbe:
      exec:
        command: ["cat", "/usr/share/nginx/html/index.html"]
      initialDelaySeconds: 3
      periodSeconds: 5
      successThreshold: 3
    volumeMounts:
    - name: hostpath
      mountPath: /usr/share/nginx/html
  volumes:
  - name : hostpath
    hostPath:
      path: /app

LivenessProbe

Write the svc-live.yaml manifest

apiVersion: v1
kind: Service
metadata:
  name: svc-liveness
spec:
  selector:
    app: liveness
  ports:
  - port: 8080
    targetPort: 80

Write the pod-live.yaml manifest

apiVersion: v1
kind: Pod
metadata:
  name: pod-live-2
  labels:
    app: liveness
spec:
  containers:
  - name: test-liveness
    image: rosehs00/web:volume
    ports:
    - containerPort: 80
    livenessProbe:
      exec:
        command: ["ls", "/usr/share/nginx/html/health"]
      initialDelaySeconds: 5
      periodSeconds: 3
      failureThreshold: 3
    volumeMounts:
    - name: hostpath
      mountPath: /usr/share/nginx/html
  volumes:
  - name : hostpath
    hostPath:
      path: /app

livenessProbe field:

  • uses the exec method: if the health path exists under /usr/share/nginx/html, the container is considered healthy
  • if it does not, the probe fails; put whatever command detects that the process is no longer healthy here

path field:

  • in the end, /health must exist under /app on the host for the pod to keep running normally
  • if /app is removed, the error below occurs and the restart count keeps climbing
root@ip-172-31-4-27:/app# kubectl get svc,ep,po -o wide
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE   SELECTOR
service/kubernetes     ClusterIP   10.96.0.1       <none>        443/TCP    24h   <none>
service/svc-liveness   ClusterIP   10.107.103.25   <none>        8080/TCP   32m   app=liveness

NAME                     ENDPOINTS          AGE
endpoints/kubernetes     172.31.4.27:6443   24h
endpoints/svc-liveness                      32m

NAME                         READY   STATUS             RESTARTS   AGE   IP               NODE               NOMINATED NODE   READINESS GATES
pod/nginx-6799fc88d8-v9m2z   1/1     Running            0          22h   192.168.82.18    ip-172-31-13-180   <none>           <none>
pod/pod-live-2               0/1     CrashLoopBackOff   11         32m   192.168.51.205   ip-172-31-4-27     <none>           <none>

  1. Labels: used to organize and manage cloud instances/resources
  2. Labels can also be used as a filter when querying resources
root@ip-172-31-4-27:~# kubectl get po --all-namespaces --show-labels 
NAMESPACE         NAME                                       READY   STATUS    RESTARTS   AGE     LABELS
calico-system     calico-kube-controllers-5c6f449c6f-w2pwg   1/1     Running   0          6h34m   k8s-app=calico-kube-controllers,pod-template-hash=5c6f449c6f

root@ip-172-31-4-27:~# kubectl get po --all-namespaces -l k8s-app=calico-kube-controllers,pod-template-hash=5c6f449c6f
NAMESPACE       NAME                                       READY   STATUS    RESTARTS   AGE
calico-system   calico-kube-controllers-5c6f449c6f-w2pwg   1/1     Running   0          6h35m

Add a label to the master node, then create a pod that selects it

root@ip-172-31-4-27:~# kubectl label nodes ip-172-31-4-27 disktype=ssd 
(#@ add the label to the master node)
node/ip-172-31-4-27 labeled

root@ip-172-31-4-27:~# vi simple.yml 
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
  - image: nginx
    name: hwk
  nodeSelector:
    disktype: ssd

root@ip-172-31-4-27:~# kubectl create -f simple.yml 
pod/mypod created

root@ip-172-31-4-27:~# kubectl get po
NAME                     READY   STATUS    RESTARTS   AGE
mypod                    1/1     Running   0          6s

Scale the deployment to 3 replicas, then forcibly change the label on one of the pods as a test.

What happens? => A new pod is created so that the ReplicaSet's DESIRED count is satisfied again.

root@ip-172-31-4-27:~# kubectl create -f template_deploy.yaml 
deployment.apps/example created

root@ip-172-31-4-27:~# kubectl scale deployment example --replicas=3
deployment.apps/example scaled

root@ip-172-31-4-27:~# kubectl get deployments.apps 
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
example   3/3     3            3           26s

Change the label on one of the three pods.

root@ip-172-31-4-27:~# kubectl edit po example-c4b46fd7d-96xxd

root@ip-172-31-4-27:~# kubectl get deployments.apps,rs,po -o wide --show-labels 
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES   SELECTOR      LABELS
deployment.apps/example   3/3     3            3           4m58s   nginx        nginx    app=example   app=example

Since the label no longer matches and the RS DESIRED count is 3, one more pod is created to replace the mismatched one.
What happens if the label is changed back to the original value?
The most recently created pod is removed, and the pod count stays at 3.

root@ip-172-31-4-27:~# kubectl get deployments.apps,rs,po -o wide --show-labels 
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS   IMAGES   SELECTOR      LABELS
deployment.apps/example   3/3     3            3           7m4s   nginx        nginx    app=example   app=example

NAME                                DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES   SELECTOR                                  LABELS
replicaset.apps/example-c4b46fd7d   3         3         3       7m4s    nginx        nginx    app=example,pod-template-hash=c4b46fd7d   app=example,pod-template-hash=c4b46fd7d
replicaset.apps/nginx-6799fc88d8    1         1         1       5h34m   nginx        nginx    app=nginx,pod-template-hash=6799fc88d8    app=nginx,pod-template-hash=6799fc88d8

NAME                          READY   STATUS    RESTARTS   AGE     IP               NODE               NOMINATED NODE   READINESS GATES   LABELS
pod/example-c4b46fd7d-96xxd   1/1     Running   0          6m47s   192.168.82.12    ip-172-31-13-180   <none>           <none>            app=example,pod-template-hash=c4b46fd7d
pod/example-c4b46fd7d-9pnwk   1/1     Running   0          7m4s    192.168.82.11    ip-172-31-13-180   <none>           <none>            app=example,pod-template-hash=c4b46fd7d
pod/example-c4b46fd7d-g9mm9   1/1     Running   0          6m47s   192.168.51.199   ip-172-31-4-27     <none>           <none>            app=example,pod-template-hash=c4b46fd7d
pod/mypod                     1/1     Running   0          12m     192.168.51.198   ip-172-31-4-27     <none>           <none>            <none>
pod/nginx-6799fc88d8-v9m2z    1/1     Running   0  


1) With kubectl create, the request goes to the API server, which authenticates it and then registers the object.
2) The kubelet, by contrast, can run pods directly from manifests on disk (static pods), as the configuration below shows.

root@ip-172-31-4-27:~# ps -ef | grep kubelet
root     22208     1  2 00:40 ?        00:10:30 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2

root@ip-172-31-4-27:~# more /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
...
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s

root@ip-172-31-4-27:/etc/kubernetes/manifests# ls
etcd.yaml  kube-apiserver.yaml	kube-controller-manager.yaml  kube-scheduler.yaml

The four control-plane components (etcd, kube-apiserver, kube-controller-manager, kube-scheduler) are defined here as yaml manifests.
1. The master boots -> 2. the kubelet starts -> 3. the kubelet launches the four components.

 

root@ip-172-31-4-27:/etc/kubernetes/manifests# kubectl get po -n kube-system 
NAME                                     READY   STATUS    RESTARTS   AGE
coredns-f9fd979d6-5t4g6                  1/1     Running   0          6h21m
coredns-f9fd979d6-r9p5f                  1/1     Running   0          6h21m
etcd-ip-172-31-4-27                      1/1     Running   0          6h21m
kube-apiserver-ip-172-31-4-27            1/1     Running   0          6h21m
kube-controller-manager-ip-172-31-4-27   1/1     Running   0          6h21m
kube-proxy-28mlr                         1/1     Running   0          6h3m
kube-proxy-tn6qw                         1/1     Running   0          6h21m
kube-scheduler-ip-172-31-4-27            1/1     Running   0          6h21m

You can see that the four components get the master's hostname appended to their pod names because the kubelet on that host runs them.

root@ip-172-31-4-27:~# cp simple.yml /etc/kubernetes/manifests/
root@ip-172-31-4-27:~# ls /etc/kubernetes/manifests/
etcd.yaml  kube-apiserver.yaml	kube-controller-manager.yaml  kube-scheduler.yaml  simple.yml

Simply placing a yaml manifest in this directory is enough for the pod to start immediately.

This is a static pod.

root@ip-172-31-4-27:~# kubectl get po --all-namespaces | grep 172
default           mypod-ip-172-31-4-27                       1/1     Running   0          74s
kube-system       etcd-ip-172-31-4-27                        1/1     Running   0          6h25m
kube-system       kube-apiserver-ip-172-31-4-27              1/1     Running   0          6h25m
kube-system       kube-controller-manager-ip-172-31-4-27     1/1     Running   0          6h25m
kube-system       kube-scheduler-ip-172-31-4-27              1/1     Running   0          6h25m

Use this approach when you want something to run as a system-level (static) pod.

  • Most resources belong to a specific Namespace.
  • Deleting a Namespace deletes every resource inside it.
  • The namespace also forms part of a Service's DNS domain name (e.g. <service>.<namespace>.svc.cluster.local).
root@ip-172-31-4-27:~# kubectl create ns sky 
root@ip-172-31-4-27:~# kubectl run nginx --image nginx --namespace sky

root@ip-172-31-4-27:~# kubectl get po
NAME                     READY   STATUS    RESTARTS   AGE
nginx-6799fc88d8-v9m2z   1/1     Running   0          4h35m
root@ip-172-31-4-27:~# kubectl get po -n sky
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          87s
  • The pod does not show up when listing the default namespace.
  • You have to query the sky namespace to see it.
root@ip-172-31-4-27:~# kubectl delete ns sky
namespace "sky" deleted
root@ip-172-31-4-27:~# kubectl get po -n sky
No resources found in sky namespace. #@ confirms that the pod was removed along with its namespace
root@ip-172-31-4-27:~# kubectl get ns
NAME              STATUS   AGE
calico-system     Active   5h58m
default           Active   6h13m
kube-node-lease   Active   6h13m
kube-public       Active   6h13m
kube-system       Active   6h13m
tigera-operator   Active   5h58m

 


1) If a ReplicaSet's DESIRED count is 1, what happens when its pod is deleted?

  • The ReplicaSet controller recreates the pod, because the DESIRED count of 1 must still be satisfied.

2) A pod with no controller is simply deleted and stays gone.

 

3) A pod that has a controller above it cannot be removed for good this way, so delete the parent object instead.

4) Keeping the child resources when deleting the parent (using --cascade, e.g. kubectl delete deploy <name> --cascade=false on this kubectl version)

  • Only the parent deployment controller is deleted; the child pods remain running.
  • Everything up to this point deletes resources object by object.

5) Deleting by deployment unit

1) kubectl delete -f sample-demo.yaml
  : delete based on the yaml file the resources were created from
2) kubectl delete ns test-ns
  : delete by namespace
3) kubectl delete po | deploy | rs --all
  : delete all pods / deployments / replicasets in the default namespace
4) helm uninstall release-name   (helm delete in Helm 2)
  : k8s applications can be installed with helm
  : the group of resources managed together is called a release

 


1) Command to check the environment variables inside a pod

root@ip-172-31-4-27:~# kubectl exec -it hwk-66fdbfb65d-gdwcj -- env 
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 
HOSTNAME=hwk-66fdbfb65d-gdwcj 
TERM=xterm 
WEB_HWK_SERVICE_HOST=10.107.229.55 
WEB_HWK_SERVICE_PORT=80 
WEB_HWK_PORT=tcp://10.107.229.55:80 

 

2) Command to connect directly to a pod with -it

root@ip-172-31-4-27:~# kubectl exec -it hwk-66fdbfb65d-gdwcj -it -- /bin/bash 
root@hwk-66fdbfb65d-gdwcj:/# cd /usr/share/nginx/html/ 
root@hwk-66fdbfb65d-gdwcj:/usr/share/nginx/html# ls 

 

 

