Reference: https://malwareanalysis.tistory.com/594 — "EKS Study, Week 1, Part 3: Creating EKS with eksctl" (in short: eksctl is the officially supported AWS CLI for managing EKS, much like kubectl for Kubernetes; its one drawback is that it manages only EKS and cannot manage other AWS resources).
Why not use the Terraform code that the AWS EKS Workshop uses for deployment?
⇒ To avoid requiring extra (introductory) Terraform study.
Why not use the Cloud9 IDE that the AWS EKS Workshop uses?
⇒ To keep costs as low as possible; connecting over SSH and using VSCODE works fine for the labs.
EKS is usually deployed into a PrivateSubnet behind a NAT GW. Why not in this study?
⇒ Again, to minimize cost. Note that, as a result, everything lands in the PublicSubnet.
Why EKS v1.28?
⇒ It is a version with supported EKS add-ons and many proven applications compatible with the K8S ecosystem.
1. Deploying the Base Infrastructure
Deploying from the console and deploying from the CLI produce the same result, so pick whichever is more convenient.
This post deploys from the console.
1) Deploying from the Console
Create a key pair before you start!
- Link ← opens the AWS CloudFormation page; enter the parameters and run the stack
- Parameters: the items in red below must be set; for everything else the defaults are recommended
- <<<<< EKSCTL MY EC2 >>>>>
- ClusterBaseName: base name of the EKS cluster (used as a prefix on the resources it creates); an underscore ('_') is not allowed in the EKS cluster name!
- KeyName: the SSH key pair used for EC2 access
- SgIngressSshCidr: the IP range allowed to reach the EC2 instance that runs eksctl (enter your home public IP/32)
- MyInstanceType: instance type of the EC2 host that runs eksctl (default t3.medium)
- <<<<< Region AZ >>>>> : region and availability zones
- <<<<< VPC Subnet >>>>> : VPC and subnet settings
- Then click Next > Next to deploy, and confirm the stack reaches CREATE_COMPLETE as shown below.
2) Deploying from the CLI
If you have registered credentials locally via aws configure, you can deploy from the CLI as well.
# Download the YAML file
curl -O https://s3.ap-northeast-2.amazonaws.com/cloudformation.cloudneta.net/K8S/myeks-1week.yaml
# Deploy
# aws cloudformation deploy --template-file ~/Downloads/myeks-1week.yaml --stack-name mykops --parameter-overrides KeyName=<My SSH Keyname> SgIngressSshCidr=<My Home Public IP Address>/32 --region <리전>
Example) aws cloudformation deploy --template-file ~/Downloads/myeks-1week.yaml \
--stack-name myeks --parameter-overrides KeyName=kp-gasida SgIngressSshCidr=$(curl -s ipinfo.io/ip)/32 --region ap-northeast-2
# After the CloudFormation stack completes, print the EC2 IP
aws cloudformation describe-stacks --stack-name myeks --query 'Stacks[*].Outputs[*].OutputValue' --output text
Example) 3.35.137.31
# SSH into the EC2 instance: root / ******
Example) ssh root@3.35.137.31
ssh root@$(aws cloudformation describe-stacks --stack-name myeks --query 'Stacks[*].Outputs[0].OutputValue' --output text)
root@X.Y.Z.A's password: ******
(Reference) myeks-1week.yaml contents: a VPC, 4 subnets, and 1 EC2 instance + a security group (allowing only your home IP)
In the file, type your own password in place of ******.
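One way to swap your own value in for the ****** placeholder before deploying is a quick sed pass over the file. A minimal sketch on a scratch copy of the relevant line (MYPW and the scratch file name are placeholders of my choosing, not part of the original setup):

```shell
# Substitute your own password for the ****** placeholder (hypothetical value).
MYPW='change-me'
printf "echo 'root:******' | chpasswd\n" > userdata-snippet.sh   # scratch copy of the UserData line
sed -i "s/\*\*\*\*\*\*/$MYPW/" userdata-snippet.sh
cat userdata-snippet.sh
```

The same `sed -i` invocation works against the downloaded myeks-1week.yaml.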
AWSTemplateFormatVersion: '2010-09-09'
Metadata:
  AWS::CloudFormation::Interface:
    ParameterGroups:
      - Label:
          default: "<<<<< EKSCTL MY EC2 >>>>>"
        Parameters:
          - ClusterBaseName
          - KeyName
          - SgIngressSshCidr
          - MyInstanceType
          - LatestAmiId
      - Label:
          default: "<<<<< Region AZ >>>>>"
        Parameters:
          - TargetRegion
          - AvailabilityZone1
          - AvailabilityZone2
      - Label:
          default: "<<<<< VPC Subnet >>>>>"
        Parameters:
          - VpcBlock
          - PublicSubnet1Block
          - PublicSubnet2Block
          - PrivateSubnet1Block
          - PrivateSubnet2Block
Parameters:
  ClusterBaseName:
    Type: String
    Default: myeks
    AllowedPattern: "[a-zA-Z][-a-zA-Z0-9]*"
    Description: must be a valid Allowed Pattern '[a-zA-Z][-a-zA-Z0-9]*'
    ConstraintDescription: ClusterBaseName - must be a valid Allowed Pattern
  KeyName:
    Description: Name of an existing EC2 KeyPair to enable SSH access to the instances. Linked to AWS Parameter
    Type: AWS::EC2::KeyPair::KeyName
    ConstraintDescription: must be the name of an existing EC2 KeyPair.
  SgIngressSshCidr:
    Description: The IP address range that can be used to communicate to the EC2 instances
    Type: String
    MinLength: '9'
    MaxLength: '18'
    Default: 0.0.0.0/0
    AllowedPattern: (\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})/(\d{1,2})
    ConstraintDescription: must be a valid IP CIDR range of the form x.x.x.x/x.
  MyInstanceType:
    Description: Enter t2.micro, t2.small, t2.medium, t3.micro, t3.small, t3.medium. Default is t3.medium.
    Type: String
    Default: t3.medium
    AllowedValues:
      - t2.micro
      - t2.small
      - t2.medium
      - t3.micro
      - t3.small
      - t3.medium
  LatestAmiId:
    Description: (DO NOT CHANGE)
    Type: 'AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>'
    Default: '/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2'
    AllowedValues:
      - /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2
  TargetRegion:
    Type: String
    Default: ap-northeast-2
  AvailabilityZone1:
    Type: String
    Default: ap-northeast-2a
  AvailabilityZone2:
    Type: String
    Default: ap-northeast-2c
  VpcBlock:
    Type: String
    Default: 192.168.0.0/16
  PublicSubnet1Block:
    Type: String
    Default: 192.168.1.0/24
  PublicSubnet2Block:
    Type: String
    Default: 192.168.2.0/24
  PrivateSubnet1Block:
    Type: String
    Default: 192.168.3.0/24
  PrivateSubnet2Block:
    Type: String
    Default: 192.168.4.0/24
Resources:
  # VPC
  EksVPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: !Ref VpcBlock
      EnableDnsSupport: true
      EnableDnsHostnames: true
      Tags:
        - Key: Name
          Value: !Sub ${ClusterBaseName}-VPC
  # PublicSubnets
  PublicSubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      AvailabilityZone: !Ref AvailabilityZone1
      CidrBlock: !Ref PublicSubnet1Block
      VpcId: !Ref EksVPC
      MapPublicIpOnLaunch: true
      Tags:
        - Key: Name
          Value: !Sub ${ClusterBaseName}-PublicSubnet1
        - Key: kubernetes.io/role/elb
          Value: 1
  PublicSubnet2:
    Type: AWS::EC2::Subnet
    Properties:
      AvailabilityZone: !Ref AvailabilityZone2
      CidrBlock: !Ref PublicSubnet2Block
      VpcId: !Ref EksVPC
      MapPublicIpOnLaunch: true
      Tags:
        - Key: Name
          Value: !Sub ${ClusterBaseName}-PublicSubnet2
        - Key: kubernetes.io/role/elb
          Value: 1
  InternetGateway:
    Type: AWS::EC2::InternetGateway
  VPCGatewayAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      InternetGatewayId: !Ref InternetGateway
      VpcId: !Ref EksVPC
  PublicSubnetRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref EksVPC
      Tags:
        - Key: Name
          Value: !Sub ${ClusterBaseName}-PublicSubnetRouteTable
  PublicSubnetRoute:
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref PublicSubnetRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref InternetGateway
  PublicSubnet1RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PublicSubnet1
      RouteTableId: !Ref PublicSubnetRouteTable
  PublicSubnet2RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PublicSubnet2
      RouteTableId: !Ref PublicSubnetRouteTable
  # PrivateSubnets
  PrivateSubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      AvailabilityZone: !Ref AvailabilityZone1
      CidrBlock: !Ref PrivateSubnet1Block
      VpcId: !Ref EksVPC
      Tags:
        - Key: Name
          Value: !Sub ${ClusterBaseName}-PrivateSubnet1
        - Key: kubernetes.io/role/internal-elb
          Value: 1
  PrivateSubnet2:
    Type: AWS::EC2::Subnet
    Properties:
      AvailabilityZone: !Ref AvailabilityZone2
      CidrBlock: !Ref PrivateSubnet2Block
      VpcId: !Ref EksVPC
      Tags:
        - Key: Name
          Value: !Sub ${ClusterBaseName}-PrivateSubnet2
        - Key: kubernetes.io/role/internal-elb
          Value: 1
  PrivateSubnetRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref EksVPC
      Tags:
        - Key: Name
          Value: !Sub ${ClusterBaseName}-PrivateSubnetRouteTable
  PrivateSubnet1RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PrivateSubnet1
      RouteTableId: !Ref PrivateSubnetRouteTable
  PrivateSubnet2RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PrivateSubnet2
      RouteTableId: !Ref PrivateSubnetRouteTable
  # EKSCTL-Host
  EKSEC2SG:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: eksctl-host Security Group
      VpcId: !Ref EksVPC
      Tags:
        - Key: Name
          Value: !Sub ${ClusterBaseName}-HOST-SG
      SecurityGroupIngress:
        - IpProtocol: '-1'
          CidrIp: !Ref SgIngressSshCidr
  EKSEC2:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !Ref MyInstanceType
      ImageId: !Ref LatestAmiId
      KeyName: !Ref KeyName
      Tags:
        - Key: Name
          Value: !Sub ${ClusterBaseName}-host
      NetworkInterfaces:
        - DeviceIndex: 0
          SubnetId: !Ref PublicSubnet1
          GroupSet:
            - !Ref EKSEC2SG
          AssociatePublicIpAddress: true
          PrivateIpAddress: 192.168.1.100
      BlockDeviceMappings:
        - DeviceName: /dev/xvda
          Ebs:
            VolumeType: gp3
            VolumeSize: 30
            DeleteOnTermination: true
      UserData:
        Fn::Base64:
          !Sub |
            #!/bin/bash
            hostnamectl --static set-hostname "${ClusterBaseName}-host"
            # Config Root account
            echo 'root:******' | chpasswd
            sed -i "s/^#PermitRootLogin yes/PermitRootLogin yes/g" /etc/ssh/sshd_config
            sed -i "s/^PasswordAuthentication no/PasswordAuthentication yes/g" /etc/ssh/sshd_config
            rm -rf /root/.ssh/authorized_keys
            systemctl restart sshd
            # Config convenience
            echo 'alias vi=vim' >> /etc/profile
            echo "sudo su -" >> /home/ec2-user/.bashrc
            sed -i "s/UTC/Asia\/Seoul/g" /etc/sysconfig/clock
            ln -sf /usr/share/zoneinfo/Asia/Seoul /etc/localtime
            # Install Packages
            yum -y install tree jq git htop
            # Install kubectl & helm
            cd /root
            curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/1.28.5/2024-01-04/bin/linux/amd64/kubectl
            install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
            curl -s https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
            # Install eksctl
            curl -sL "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_Linux_amd64.tar.gz" | tar xz -C /tmp
            mv /tmp/eksctl /usr/local/bin
            # Install aws cli v2
            curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
            unzip awscliv2.zip >/dev/null 2>&1
            ./aws/install
            complete -C '/usr/local/bin/aws_completer' aws
            echo 'export AWS_PAGER=""' >>/etc/profile
            export AWS_DEFAULT_REGION=${AWS::Region}
            echo "export AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION" >> /etc/profile
            # Install YAML Highlighter
            wget https://github.com/andreazorzetto/yh/releases/download/v0.4.0/yh-linux-amd64.zip
            unzip yh-linux-amd64.zip
            mv yh /usr/local/bin/
            # Install krew
            curl -L https://github.com/kubernetes-sigs/krew/releases/download/v0.4.4/krew-linux_amd64.tar.gz -o /root/krew-linux_amd64.tar.gz
            tar zxvf krew-linux_amd64.tar.gz
            ./krew-linux_amd64 install krew
            export PATH="$PATH:/root/.krew/bin"
            echo 'export PATH="$PATH:/root/.krew/bin"' >> /etc/profile
            # Install kube-ps1
            echo 'source <(kubectl completion bash)' >> /etc/profile
            echo 'alias k=kubectl' >> /etc/profile
            echo 'complete -F __start_kubectl k' >> /etc/profile
            git clone https://github.com/jonmosco/kube-ps1.git /root/kube-ps1
            cat <<"EOT" >> /root/.bash_profile
            source /root/kube-ps1/kube-ps1.sh
            KUBE_PS1_SYMBOL_ENABLE=false
            function get_cluster_short() {
              echo "$1" | cut -d . -f1
            }
            KUBE_PS1_CLUSTER_FUNCTION=get_cluster_short
            KUBE_PS1_SUFFIX=') '
            PS1='$(kube_ps1)'$PS1
            EOT
            # Install krew plugin
            kubectl krew install ctx ns get-all neat # ktop df-pv mtail tree
            # Install Docker
            amazon-linux-extras install docker -y
            systemctl start docker && systemctl enable docker
            # CLUSTER_NAME
            export CLUSTER_NAME=${ClusterBaseName}
            echo "export CLUSTER_NAME=$CLUSTER_NAME" >> /etc/profile
            # Create SSH Keypair
            ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
Outputs:
  eksctlhost:
    Value: !GetAtt EKSEC2.PublicIp
2. Connecting to the Working EC2 and Basic Setup
Once the resources above have been created successfully, you can connect to the working EC2 instance.
# SSH into the EC2: root / ****** (password)
Example) ssh root@3.35.137.31 (the EC2 public IP)
ssh root@$(aws cloudformation describe-stacks --stack-name myeks --query 'Stacks[*].Outputs[0].OutputValue' --output text)
root@X.Y.Z.A's password: ******
# Actual connection
ssh root@***.***.***.***
root@***.***.***.***'s password:
Last login: Fri Mar 8 22:52:48 2024 from ***.***.***.***
, #_
~\_ ####_ Amazon Linux 2
~~ \_#####\
~~ \###| AL2 End of Life is 2025-06-30.
~~ \#/ ___
~~ V~' '->
~~~ / A newer version of Amazon Linux is available!
~~._. _/
_/ _/ Amazon Linux 2023, GA and supported until 2028-03-15.
_/m/' https://aws.amazon.com/linux/amazon-linux-2023/
9 package(s) needed for security, out of 9 available
Run "sudo yum update" to apply all updates.
- Basic checks: configure IAM User credentials, verify the VPC, and set variables
Issue credentials for an AWS root account or IAM User in advance.
# Check before configuring any credentials
[root@myeks-host ~]# aws ec2 describe-instances
Unable to locate credentials. You can configure credentials by running "aws configure".
# Configure IAM User credentials: for lab convenience, enter the credentials of an IAM User with administrator privileges
[root@myeks-host ~]# aws configure
AWS Access Key ID [None]: AKIAVL*************
AWS Secret Access Key [None]: 2ms6I**************************
Default region name [None]: ap-northeast-2
Default output format [None]: json
# Confirm the credentials are applied: list instance info
[root@myeks-host ~]# aws ec2 describe-instances
{
    "Reservations": [
        {
            "Groups": [],
            "Instances": [
                {
                    "AmiLaunchIndex": 0,
                    "ImageId": "ami-01dc121da1bd8cc87",
                    "InstanceId": "i-00af3d622d0f64f2f",
                    "InstanceType": "t3.medium",
                    "KeyName": "aews",
                    "LaunchTime": "2024-03-08T13:38:40+00:00",
                    "Monitoring": {
                        "State": "disabled"
                    },
                    "Placement": {
                        "AvailabilityZone": "ap-northeast-2a",
                        "GroupName": "",
                        "Tenancy": "default"
                    },
...
# Check the VPC where EKS will be deployed
[root@myeks-host ~]# aws ec2 describe-vpcs --filters "Name=tag:Name,Values=$CLUSTER_NAME-VPC" | jq
{
  "Vpcs": [
    {
      "CidrBlock": "192.168.0.0/16",
      "DhcpOptionsId": "dopt-086cdccfd7bf99005",
      "State": "available",
      "VpcId": "vpc-0ef2cec8f7b1d24f5",
      "OwnerId": "368373242244",
      "InstanceTenancy": "default",
      "CidrBlockAssociationSet": [
        {
          "AssociationId": "vpc-cidr-assoc-01c4487653dbf9a53",
          "CidrBlock": "192.168.0.0/16",
          "CidrBlockState": {
            "State": "associated"
...
[root@myeks-host ~]# aws ec2 describe-vpcs --filters "Name=tag:Name,Values=$CLUSTER_NAME-VPC" | jq '.Vpcs[]'
[root@myeks-host ~]# aws ec2 describe-vpcs --filters "Name=tag:Name,Values=$CLUSTER_NAME-VPC" | jq '.Vpcs[].VpcId'
[root@myeks-host ~]# aws ec2 describe-vpcs --filters "Name=tag:Name,Values=$CLUSTER_NAME-VPC" | jq -r '.Vpcs[].VpcId'
vpc-0ef2cec8f7b1d24f5
[root@myeks-host ~]# export VPCID=$(aws ec2 describe-vpcs --filters "Name=tag:Name,Values=$CLUSTER_NAME-VPC" | jq -r .Vpcs[].VpcId)
[root@myeks-host ~]# echo "export VPCID=$VPCID" >> /etc/profile
[root@myeks-host ~]# echo $VPCID
vpc-0ef2cec8f7b1d24f5
# Check the subnets in the VPC where EKS will be deployed
aws ec2 describe-subnets --filters "Name=vpc-id,Values=$VPCID" --output json | jq
aws ec2 describe-subnets --filters "Name=vpc-id,Values=$VPCID" --output yaml
## Check the public subnet IDs
[root@myeks-host ~]# aws ec2 describe-subnets --filters Name=tag:Name,Values="$CLUSTER_NAME-PublicSubnet1" | jq
{
  "Subnets": [
    {
      "AvailabilityZone": "ap-northeast-2a",
      "AvailabilityZoneId": "apne2-az1",
      "AvailableIpAddressCount": 250,
      "CidrBlock": "192.168.1.0/24",
      "DefaultForAz": false,
      "MapPublicIpOnLaunch": true,
      "MapCustomerOwnedIpOnLaunch": false,
      "State": "available",
      "SubnetId": "subnet-024d5feb2f2a26cb6",
...
[root@myeks-host ~]# aws ec2 describe-subnets --filters Name=tag:Name,Values="$CLUSTER_NAME-PublicSubnet1" --query "Subnets[0].[SubnetId]" --output text
subnet-024d5feb2f2a26cb6
[root@myeks-host ~]# export PubSubnet1=$(aws ec2 describe-subnets --filters Name=tag:Name,Values="$CLUSTER_NAME-PublicSubnet1" --query "Subnets[0].[SubnetId]" --output text)
[root@myeks-host ~]# export PubSubnet2=$(aws ec2 describe-subnets --filters Name=tag:Name,Values="$CLUSTER_NAME-PublicSubnet2" --query "Subnets[0].[SubnetId]" --output text)
[root@myeks-host ~]# echo "export PubSubnet1=$PubSubnet1" >> /etc/profile
[root@myeks-host ~]# echo "export PubSubnet2=$PubSubnet2" >> /etc/profile
[root@myeks-host ~]# echo $PubSubnet1
subnet-024d5feb2f2a26cb6
[root@myeks-host ~]# echo $PubSubnet2
subnet-0131e5ca4fb788e95
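The `echo "export …" >> /etc/profile` pattern used above is what makes `$VPCID` and the subnet variables survive a fresh login shell. A minimal local sketch of the same mechanism (the temp file stands in for /etc/profile, and the subnet ID is hypothetical):

```shell
PROFILE=$(mktemp)                               # stand-in for /etc/profile
echo 'export PubSubnet1=subnet-0example' >> "$PROFILE"
. "$PROFILE"                                    # what a login shell does at startup
echo "$PubSubnet1"                              # -> subnet-0example
```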
3. Practice
# eksctl help
eksctl
eksctl create
eksctl create cluster --help
eksctl create nodegroup --help
eksctl utils schema | jq
# Check the currently supported versions
eksctl create cluster -h | grep version
# Create an EKS cluster, without a node group
[root@myeks-host ~]# eksctl create cluster --name myeks --region=ap-northeast-2 --without-nodegroup --dry-run | yh
accessConfig:
  authenticationMode: API_AND_CONFIG_MAP
apiVersion: eksctl.io/v1alpha5
availabilityZones:
- ap-northeast-2a
- ap-northeast-2c
- ap-northeast-2b
cloudWatch:
  clusterLogging: {}
iam:
  vpcResourceControllerPolicy: true
  withOIDC: false
kind: ClusterConfig
kubernetesNetworkConfig:
  ipFamily: IPv4
metadata:
  name: myeks
  region: ap-northeast-2
  version: "1.29"
privateCluster:
  enabled: false
  skipEndpointCreation: false
vpc:
  autoAllocateIPv6: false
  cidr: 192.168.0.0/16
  clusterEndpoints:
    privateAccess: false
    publicAccess: true
  manageSharedNodeSecurityGroupRules: true
  nat:
    gateway: Single
# Create an EKS cluster, without a node group, restricted to AZs 2a and 2c
[root@myeks-host ~]# eksctl create cluster --name myeks --region=ap-northeast-2 --without-nodegroup --zones=ap-northeast-2a,ap-northeast-2c --dry-run | yh
accessConfig:
  authenticationMode: API_AND_CONFIG_MAP
apiVersion: eksctl.io/v1alpha5
availabilityZones:
- ap-northeast-2a
- ap-northeast-2c
cloudWatch:
  clusterLogging: {}
iam:
  vpcResourceControllerPolicy: true
  withOIDC: false
kind: ClusterConfig
kubernetesNetworkConfig:
  ipFamily: IPv4
metadata:
  name: myeks
  region: ap-northeast-2
  version: "1.29"
privateCluster:
  enabled: false
  skipEndpointCreation: false
vpc:
  autoAllocateIPv6: false
  cidr: 192.168.0.0/16
  clusterEndpoints:
    privateAccess: false
    publicAccess: true
  manageSharedNodeSecurityGroupRules: true
  nat:
    gateway: Single
# Create an EKS cluster + managed node group (name, instance type, EBS volume size) in AZs 2a/2c, with a custom VPC CIDR
[root@myeks-host ~]# eksctl create cluster --name myeks --region=ap-northeast-2 --nodegroup-name=mynodegroup --node-type=t3.medium --node-volume-size=30 \
--zones=ap-northeast-2a,ap-northeast-2c --vpc-cidr=172.20.0.0/16 --dry-run | yh
accessConfig:
  authenticationMode: API_AND_CONFIG_MAP
apiVersion: eksctl.io/v1alpha5
availabilityZones:
- ap-northeast-2a
- ap-northeast-2c
cloudWatch:
  clusterLogging: {}
iam:
  vpcResourceControllerPolicy: true
  withOIDC: false
kind: ClusterConfig
kubernetesNetworkConfig:
  ipFamily: IPv4
managedNodeGroups:
- amiFamily: AmazonLinux2
  desiredCapacity: 2
  disableIMDSv1: true
  disablePodIMDS: false
  iam:
    withAddonPolicies:
      albIngress: false
      appMesh: false
      appMeshPreview: false
      autoScaler: false
      awsLoadBalancerController: false
      certManager: false
      cloudWatch: false
      ebs: false
      efs: false
      externalDNS: false
      fsx: false
      imageBuilder: false
      xRay: false
  instanceSelector: {}
  instanceType: t3.medium
  labels:
    alpha.eksctl.io/cluster-name: myeks
    alpha.eksctl.io/nodegroup-name: mynodegroup
  maxSize: 2
  minSize: 2
  name: mynodegroup
  privateNetworking: false
  releaseVersion: ""
  securityGroups:
    withLocal: null
    withShared: null
  ssh:
    allow: false
    publicKeyPath: ""
  tags:
    alpha.eksctl.io/nodegroup-name: mynodegroup
    alpha.eksctl.io/nodegroup-type: managed
  volumeIOPS: 3000
  volumeSize: 30
  volumeThroughput: 125
  volumeType: gp3
metadata:
  name: myeks
  region: ap-northeast-2
  version: "1.29"
privateCluster:
  enabled: false
  skipEndpointCreation: false
vpc:
  autoAllocateIPv6: false
  cidr: 172.20.0.0/16
  clusterEndpoints:
    privateAccess: false
    publicAccess: true
  manageSharedNodeSecurityGroupRules: true
  nat:
    gateway: Single
# Create an EKS cluster + managed node group (AMI: Ubuntu 20.04, name, instance type, EBS volume size, SSH access allowed) in AZs 2a/2c, with a custom VPC CIDR
[root@myeks-host ~]# eksctl create cluster --name myeks --region=ap-northeast-2 --nodegroup-name=mynodegroup --node-type=t3.medium --node-volume-size=30 \
--zones=ap-northeast-2a,ap-northeast-2c --vpc-cidr=172.20.0.0/16 --ssh-access --node-ami-family Ubuntu2004 --dry-run | yh
accessConfig:
  authenticationMode: API_AND_CONFIG_MAP
apiVersion: eksctl.io/v1alpha5
availabilityZones:
- ap-northeast-2a
- ap-northeast-2c
cloudWatch:
  clusterLogging: {}
iam:
  vpcResourceControllerPolicy: true
  withOIDC: false
kind: ClusterConfig
kubernetesNetworkConfig:
  ipFamily: IPv4
managedNodeGroups:
- amiFamily: Ubuntu2004
  desiredCapacity: 2
  disableIMDSv1: true
  disablePodIMDS: false
  iam:
    withAddonPolicies:
      albIngress: false
      appMesh: false
      appMeshPreview: false
      autoScaler: false
      awsLoadBalancerController: false
      certManager: false
      cloudWatch: false
      ebs: false
      efs: false
      externalDNS: false
      fsx: false
      imageBuilder: false
      xRay: false
  instanceSelector: {}
  instanceType: t3.medium
  labels:
    alpha.eksctl.io/cluster-name: myeks
    alpha.eksctl.io/nodegroup-name: mynodegroup
  maxSize: 2
  minSize: 2
  name: mynodegroup
  privateNetworking: false
  releaseVersion: ""
  securityGroups:
    withLocal: null
    withShared: null
  ssh:
    allow: true
    publicKeyPath: ~/.ssh/id_rsa.pub
  tags:
    alpha.eksctl.io/nodegroup-name: mynodegroup
    alpha.eksctl.io/nodegroup-type: managed
  volumeIOPS: 3000
  volumeSize: 30
  volumeThroughput: 125
  volumeType: gp3
metadata:
  name: myeks
  region: ap-northeast-2
  version: "1.29"
privateCluster:
  enabled: false
  skipEndpointCreation: false
vpc:
  autoAllocateIPv6: false
  cidr: 172.20.0.0/16
  clusterEndpoints:
    privateAccess: false
    publicAccess: true
  manageSharedNodeSecurityGroupRules: true
  nat:
    gateway: Single
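All of the dry runs above print a complete ClusterConfig manifest, so instead of ever-longer flag lists you can capture the output to a file and create the cluster with `-f`. A minimal sketch (the fields are copied from the dry-run output above; actually creating the cluster is left commented out):

```shell
# Declare the cluster in a file instead of flags (values from the dry runs above).
cat > cluster.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: myeks
  region: ap-northeast-2
  version: "1.28"
availabilityZones:
  - ap-northeast-2a
  - ap-northeast-2c
EOF
grep '^kind:' cluster.yaml
# eksctl create cluster -f cluster.yaml   # would create the cluster for real
```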
4. Deployment Hands-on
# Check the variables
[root@myeks-host ~]# echo $AWS_DEFAULT_REGION
ap-northeast-2
[root@myeks-host ~]# echo $CLUSTER_NAME
myeks
[root@myeks-host ~]# echo $VPCID
vpc-0ef2cec8f7b1d24f5
[root@myeks-host ~]# echo $PubSubnet1,$PubSubnet2
subnet-024d5feb2f2a26cb6,subnet-0131e5ca4fb788e95
# Optional [terminal 1]: monitor EC2 instance creation
while true; do aws ec2 describe-instances --query "Reservations[*].Instances[*].{PublicIPAdd:PublicIpAddress,PrivateIPAdd:PrivateIpAddress,InstanceName:Tags[?Key=='Name']|[0].Value,Status:State.Name}" --filters Name=instance-state-name,Values=running --output text ; echo "------------------------------" ; sleep 1; done
aws ec2 describe-instances --query "Reservations[*].Instances[*].{PublicIPAdd:PublicIpAddress,PrivateIPAdd:PrivateIpAddress,InstanceName:Tags[?Key=='Name']|[0].Value,Status:State.Name}" --filters Name=instance-state-name,Values=running --output table
# Review the EKS cluster & managed node group configuration before deploying
eksctl create cluster --name $CLUSTER_NAME --region=$AWS_DEFAULT_REGION --nodegroup-name=$CLUSTER_NAME-nodegroup --node-type=t3.medium \
--node-volume-size=30 --vpc-public-subnets "$PubSubnet1,$PubSubnet2" --version 1.28 --ssh-access --external-dns-access --dry-run | yh
...
vpc:
  autoAllocateIPv6: false
  cidr: 192.168.0.0/16
  clusterEndpoints:
    privateAccess: false
    publicAccess: true
  id: vpc-0505d154771a3dfdf
  manageSharedNodeSecurityGroupRules: true
  nat:
    gateway: Disable
  subnets:
    public:
      ap-northeast-2a:
        az: ap-northeast-2a
        cidr: 192.168.1.0/24
        id: subnet-0d98bee5a7c0dfcc6
      ap-northeast-2c:
        az: ap-northeast-2c
        cidr: 192.168.2.0/24
        id: subnet-09dc49de8d899aeb7
# Deploy the EKS cluster & managed node group: takes about 15 minutes in total
[root@myeks-host ~]# eksctl create cluster --name $CLUSTER_NAME --region=$AWS_DEFAULT_REGION --nodegroup-name=$CLUSTER_NAME-nodegroup --node-type=t3.medium \
--node-volume-size=30 --vpc-public-subnets "$PubSubnet1,$PubSubnet2" --version 1.28 --ssh-access --external-dns-access --verbose 4
2024-03-08 23:31:13 [▶] Setting credentials expiry window to 30 minutes
2024-03-08 23:31:13 [▶] role ARN for the current session is "arn:aws:iam::************:user/aews"
2024-03-08 23:31:13 [ℹ] eksctl version 0.173.0
2024-03-08 23:31:13 [ℹ] using region ap-northeast-2
...
4-1) Verifying the Deployment
- Check in AWS CloudFormation
- Check in AWS EKS
- Check in AWS EC2
- Check the AWS EC2 ASG
4-2) Checking EKS Information
# Check the krew plugins
(aews@myeks:N/A) [root@myeks-host ~]# kubectl krew list
PLUGIN VERSION
ctx v0.9.5
get-all v1.3.8
krew v0.4.4
neat v2.0.3
ns v0.9.5
(aews@myeks:N/A) [root@myeks-host ~]# kubectl ctx
aews@myeks.ap-northeast-2.eksctl.io
(aews@myeks:N/A) [root@myeks-host ~]# kubectl ns
default
kube-node-lease
kube-public
kube-system
(aews@myeks:N/A) [root@myeks-host ~]# kubectl ns default
Context "aews@myeks.ap-northeast-2.eksctl.io" modified.
Active namespace is "default".
(aews@myeks:default) [root@myeks-host ~]# kubectl get-all  # list all resources in all namespaces
W0308 23:57:32.125309 6626 warnings.go:70] v1 ComponentStatus is deprecated in v1.19+
NAME NAMESPACE AGE
componentstatus/etcd-0 <unknown>
componentstatus/controller-manager <unknown>
componentstatus/scheduler <unknown>
.....
# Check the EKS cluster info
(aews@myeks:default) [root@myeks-host ~]# kubectl cluster-info
Kubernetes control plane is running at https://76EC4BC184B1C32800EAB3619F024B26.sk1.ap-northeast-2.eks.amazonaws.com
CoreDNS is running at https://76EC4BC184B1C32800EAB3619F024B26.sk1.ap-northeast-2.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
(aews@myeks:default) [root@myeks-host ~]# eksctl get cluster
NAME REGION EKSCTL CREATED
myeks ap-northeast-2 True
(aews@myeks:default) [root@myeks-host ~]# aws eks describe-cluster --name $CLUSTER_NAME | jq
{
  "cluster": {
    "name": "myeks",
    "arn": "arn:aws:eks:ap-northeast-2:****************:cluster/myeks",
(aews@myeks:default) [root@myeks-host ~]# aws eks describe-cluster --name $CLUSTER_NAME | jq -r .cluster.endpoint
https://76EC4BC184B1C32800EAB3619F024B26.sk1.ap-northeast-2.eks.amazonaws.com
## dig lookup: which resource owns this IP?
APIDNS=$(aws eks describe-cluster --name $CLUSTER_NAME | jq -r .cluster.endpoint | cut -d '/' -f 3)
dig +short $APIDNS
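The `cut -d '/' -f 3` above simply strips the `https://` scheme so dig gets a bare hostname. A quick local check with a made-up endpoint (the hostname here is hypothetical):

```shell
# Field 1 is "https:", field 2 is the empty string between the slashes,
# so field 3 is the hostname.
ENDPOINT='https://EXAMPLE1234.sk1.ap-northeast-2.eks.amazonaws.com'
echo "$ENDPOINT" | cut -d '/' -f 3
```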
# Try the EKS API: https://<domain or IP>/version is reachable even from outside!
curl -k -s $(aws eks describe-cluster --name $CLUSTER_NAME | jq -r .cluster.endpoint)
curl -k -s $(aws eks describe-cluster --name $CLUSTER_NAME | jq -r .cluster.endpoint)/version | jq
# Check the EKS node group info
eksctl get nodegroup --cluster $CLUSTER_NAME --name $CLUSTER_NAME-nodegroup
aws eks describe-nodegroup --cluster-name $CLUSTER_NAME --nodegroup-name $CLUSTER_NAME-nodegroup | jq
# Check node info: OS and container runtime
kubectl get node --label-columns=node.kubernetes.io/instance-type,eks.amazonaws.com/capacityType,topology.kubernetes.io/zone
kubectl get node --label-columns=node.kubernetes.io/instance-type
kubectl get node
kubectl get node -owide
kubectl get node -v=6
I0423 02:00:38.691443 5535 loader.go:374] Config loaded from file: /root/.kube/config
I0423 02:00:39.519097 5535 round_trippers.go:553] GET https://C813D20E6263FBDC356E60D2971FCBA7.gr7.ap-northeast-2.eks.amazonaws.com/api/v1/nodes?limit=500 200 OK in 818 milliseconds
...
# Check the nodes' capacityType
kubectl get node --label-columns=eks.amazonaws.com/capacityType
NAME STATUS ROLES AGE VERSION CAPACITYTYPE
ip-192-168-1-76.ap-northeast-2.compute.internal Ready <none> 19m v1.24.11-eks-a59e1f0 ON_DEMAND
ip-192-168-2-21.ap-northeast-2.compute.internal Ready <none> 19m v1.24.11-eks-a59e1f0 ON_DEMAND
# Check the authentication info
cat /root/.kube/config
kubectl config view
kubectl ctx
aws eks get-token --cluster-name $CLUSTER_NAME --region $AWS_DEFAULT_REGION
# Check pod info
kubectl get pod -n kube-system
kubectl get pod -n kube-system -o wide
kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system aws-node-m2dgw 1/1 Running 0 7m50s
kube-system aws-node-pcghp 1/1 Running 0 7m51s
kube-system coredns-dc4979556-87wk2 1/1 Running 0 14m
kube-system coredns-dc4979556-l99f9 1/1 Running 0 14m
kube-system kube-proxy-hzcvz 1/1 Running 0 7m50s
kube-system kube-proxy-m629f 1/1 Running 0 7m51s
# List all resources in the kube-system namespace
kubectl get-all -n kube-system
# List all resources in all namespaces
kubectl get-all
# Check the container image of every pod
kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" | tr -s '[[:space:]]' '\n' | sort | uniq -c
2 602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/amazon-k8s-cni:v1.15.1-eksbuild.1
2 602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/amazon/aws-network-policy-agent:v1.0.4-eksbuild.1
2 602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/eks/coredns:v1.10.1-eksbuild.4
2 602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/eks/kube-proxy:v1.28.2-minimal-eksbuild.2
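The image-counting pipeline above is plain text processing: `tr` puts one image name per line, `sort` groups duplicates together, and `uniq -c` counts them. The same pipeline on made-up input:

```shell
# "img-a" appears twice across the two lines, "img-b" once (hypothetical names).
printf 'img-a img-b\nimg-a\n' | tr -s '[:space:]' '\n' | sort | uniq -c
```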
# Try pulling a container image from AWS ECR
docker pull 602401143452.dkr.ecr.ap-northeast-2.amazonaws.com/eks/coredns:v1.10.1-eksbuild.4
4-3) Detailed Node Information
- SSH into the nodes
# Check node IPs and set private-IP variables
(aews@myeks:default) [root@myeks-host ~]# aws ec2 describe-instances --query "Reservations[*].Instances[*].{PublicIPAdd:PublicIpAddress,PrivateIPAdd:PrivateIpAddress,InstanceName:Tags[?Key=='Name']|[0].Value,Status:State.Name}" --filters Name=instance-state-name,Values=running --output table
----------------------------------------------------------------------------
| DescribeInstances |
+-----------------------------+----------------+---------------+-----------+
| InstanceName | PrivateIPAdd | PublicIPAdd | Status |
+-----------------------------+----------------+---------------+-----------+
| myeks-myeks-nodegroup-Node | 192.168.2.55 | 3.3*.***.** | running |
| myeks-myeks-nodegroup-Node | 192.168.1.187 | 43.***.**.** | running |
| myeks-host | 192.168.1.100 | 52.**.***.** | running |
+-----------------------------+----------------+---------------+-----------+
kubectl get node --label-columns=topology.kubernetes.io/zone
kubectl get node --label-columns=topology.kubernetes.io/zone --selector=topology.kubernetes.io/zone=ap-northeast-2a
kubectl get node --label-columns=topology.kubernetes.io/zone --selector=topology.kubernetes.io/zone=ap-northeast-2c
N1=$(kubectl get node --label-columns=topology.kubernetes.io/zone --selector=topology.kubernetes.io/zone=ap-northeast-2a -o jsonpath={.items[0].status.addresses[0].address})
N2=$(kubectl get node --label-columns=topology.kubernetes.io/zone --selector=topology.kubernetes.io/zone=ap-northeast-2c -o jsonpath={.items[0].status.addresses[0].address})
echo $N1, $N2
echo "export N1=$N1" >> /etc/profile
echo "export N2=$N2" >> /etc/profile
# From the eksctl host, ping a node IP or a coredns pod IP
ping <IP>
ping -c 1 $N1
ping -c 1 $N2
# Check the node security group ID
aws ec2 describe-security-groups --filters Name=group-name,Values=*nodegroup* --query "SecurityGroups[*].[GroupId]" --output text
NGSGID=$(aws ec2 describe-security-groups --filters Name=group-name,Values=*nodegroup* --query "SecurityGroups[*].[GroupId]" --output text)
echo $NGSGID
echo "export NGSGID=$NGSGID" >> /etc/profile
# Add a rule to the node security group allowing access from the eksctl host to the nodes (and pods)
aws ec2 authorize-security-group-ingress --group-id $NGSGID --protocol '-1' --cidr 192.168.1.100/32
# From the eksctl host, ping a node IP or a coredns pod IP again
(aews@myeks:default) [root@myeks-host ~]# ping -c 2 $N1
PING 192.168.1.187 (192.168.1.187) 56(84) bytes of data.
64 bytes from 192.168.1.187: icmp_seq=1 ttl=255 time=0.234 ms
64 bytes from 192.168.1.187: icmp_seq=2 ttl=255 time=0.166 ms
# SSH into the worker nodes
ssh -i ~/.ssh/id_rsa ec2-user@$N1 hostname
ssh -i ~/.ssh/id_rsa ec2-user@$N2 hostname
(aews@myeks:default) [root@myeks-host ~]# ssh ec2-user@$N2
Last login: Thu Mar 7 07:15:20 2024 from 54.240.230.190
, #_
~\_ ####_ Amazon Linux 2
~~ \_#####\
~~ \###| AL2 End of Life is 2025-06-30.
~~ \#/ ___
~~ V~' '->
~~~ / A newer version of Amazon Linux is available!
~~._. _/
_/ _/ Amazon Linux 2023, GA and supported until 2028-03-15.
_/m/' https://aws.amazon.com/linux/amazon-linux-2023/
[ec2-user@ip-192-168-2-55 ~]$ exit
ssh ec2-user@$N2
exit
- Checking node network information
# Confirm the AWS VPC CNI is in use
(aews@myeks:default) [root@myeks-host ~]# kubectl -n kube-system get ds aws-node
NAME       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
aws-node   2         2         2       2            2           <none>          36m
(aews@myeks:default) [root@myeks-host ~]# kubectl describe daemonset aws-node --namespace kube-system | grep Image | cut -d "/" -f 2
amazon-k8s-cni-init:v1.15.1-eksbuild.1
amazon-k8s-cni:v1.15.1-eksbuild.1
amazon
# Check pod IPs
(aews@myeks:default) [root@myeks-host ~]# kubectl get pod -n kube-system -o wide
NAME                       READY   STATUS    RESTARTS   AGE   IP              NODE                                               NOMINATED NODE   READINESS GATES
aws-node-78l7g 2/2 Running 0 31m 192.168.2.55 ip-192-168-2-55.ap-northeast-2.compute.internal <none> <none>
aws-node-f49c7 2/2 Running 0 31m 192.168.1.187 ip-192-168-1-187.ap-northeast-2.compute.internal <none> <none>
coredns-56dfff779f-mcd2q 1/1 Running 0 37m 192.168.1.76 ip-192-168-1-187.ap-northeast-2.compute.internal <none> <none>
coredns-56dfff779f-zltdv 1/1 Running 0 37m 192.168.1.104 ip-192-168-1-187.ap-northeast-2.compute.internal <none> <none>
kube-proxy-4zgj7 1/1 Running 0 31m 192.168.2.55 ip-192-168-2-55.ap-northeast-2.compute.internal <none> <none>
kube-proxy-vvfwt 1/1 Running 0 31m 192.168.1.187 ip-192-168-1-187.ap-northeast-2.compute.internal <none> <none>
(aews@myeks:default) [root@myeks-host ~]# kubectl get pod -n kube-system -l k8s-app=kube-dns -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-56dfff779f-mcd2q 1/1 Running 0 37m 192.168.1.76 ip-192-168-1-187.ap-northeast-2.compute.internal <none> <none>
coredns-56dfff779f-zltdv 1/1 Running 0 37m 192.168.1.104 ip-192-168-1-187.ap-northeast-2.compute.internal <none> <none>
# Check node info
(aews@myeks:default) [root@myeks-host ~]# for i in $N1 $N2; do echo ">> node $i <<"; ssh ec2-user@$i hostname; echo; done
>> node 192.168.1.187 <<
ip-192-168-1-187.ap-northeast-2.compute.internal
>> node 192.168.2.55 <<
ip-192-168-2-55.ap-northeast-2.compute.internal
(aews@myeks:default) [root@myeks-host ~]# for i in $N1 $N2; do echo ">> node $i <<"; ssh ec2-user@$i sudo ip -c addr; echo; done
>> node 192.168.1.187 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 02:91:1d:02:2f:03 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.187/24 brd 192.168.1.255 scope global dynamic eth0
valid_lft 3406sec preferred_lft 3406sec
inet6 fe80::91:1dff:fe02:2f03/64 scope link
valid_lft forever preferred_lft forever
3: enid97a0bd974f@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default
link/ether da:15:bb:47:3b:49 brd ff:ff:ff:ff:ff:ff link-netns cni-3edc0922-837e-366e-61b2-1e8a4ada39b6
inet6 fe80::d815:bbff:fe47:3b49/64 scope link
valid_lft forever preferred_lft forever
4: enic995060da37@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc noqueue state UP group default
link/ether 3a:ef:14:cc:4a:23 brd ff:ff:ff:ff:ff:ff link-netns cni-40748679-75b6-ef98-244a-ef23c7ad8632
inet6 fe80::38ef:14ff:fecc:4a23/64 scope link
valid_lft forever preferred_lft forever
5: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 02:5f:3e:4b:85:73 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.69/24 brd 192.168.1.255 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::5f:3eff:fe4b:8573/64 scope link
valid_lft forever preferred_lft forever
>> node 192.168.2.55 <<
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 0a:8a:b8:45:b6:eb brd ff:ff:ff:ff:ff:ff
inet 192.168.2.55/24 brd 192.168.2.255 scope global dynamic eth0
valid_lft 3319sec preferred_lft 3319sec
inet6 fe80::88a:b8ff:fe45:b6eb/64 scope link
valid_lft forever preferred_lft forever
(aews@myeks:default) [root@myeks-host ~]# for i in $N1 $N2; do echo ">> node $i <<"; ssh ec2-user@$i sudo ip -c route; echo; done
>> node 192.168.1.187 <<
default via 192.168.1.1 dev eth0
169.254.169.254 dev eth0
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.187
192.168.1.76 dev enid97a0bd974f scope link
192.168.1.104 dev enic995060da37 scope link
>> node 192.168.2.55 <<
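The host routes above show how the AWS VPC CNI wires pods into the VPC: each pod IP gets a /32 route pointing at its veth interface (the `eni…` devices). A minimal sketch, using a hypothetical `pod_route_ips` helper, that pulls the pod IPs out of such `ip route` output:

```shell
# Hypothetical helper: print pod IPs from "ip route" output.
# The VPC CNI adds one host route per pod, pointing at the pod's veth (eni*).
pod_route_ips() {
  awk '$2 == "dev" && $3 ~ /^eni/ { print $1 }'
}

# Demo with the sample routes shown above:
routes='default via 192.168.1.1 dev eth0
169.254.169.254 dev eth0
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.187
192.168.1.76 dev enid97a0bd974f scope link
192.168.1.104 dev enic995060da37 scope link'
printf '%s\n' "$routes" | pod_route_ips
```

The printed IPs match the coredns pod IPs from `kubectl get pod -o wide` above, confirming that pod traffic is routed node-locally via the veth pair.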
(aews@myeks:default) [root@myeks-host ~]# for i in $N1 $N2; do echo ">> node $i <<"; ssh ec2-user@$i sudo iptables -t nat -S; echo; done
>> node 192.168.1.187 <<
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N AWS-CONNMARK-CHAIN-0
............
- Node cgroup version : v1(tmpfs), v2(cgroup2fs) - Link
for i in $N1 $N2; do echo ">> node $i <<"; ssh ec2-user@$i stat -fc %T /sys/fs/cgroup/; echo; done
- Check node process information
for i in $N1 $N2; do echo ">> node $i <<"; ssh ec2-user@$i sudo systemctl status kubelet; echo; done
for i in $N1 $N2; do echo ">> node $i <<"; ssh ec2-user@$i sudo pstree; echo; done
for i in $N1 $N2; do echo ">> node $i <<"; ssh ec2-user@$i ps afxuwww; echo; done
for i in $N1 $N2; do echo ">> node $i <<"; ssh ec2-user@$i ps axf |grep /usr/bin/containerd; echo; done
for i in $N1 $N2; do echo ">> node $i <<"; ssh ec2-user@$i ls /etc/kubernetes/manifests/; echo; done
for i in $N1 $N2; do echo ">> node $i <<"; ssh ec2-user@$i ls /etc/kubernetes/kubelet/; echo; done
for i in $N1 $N2; do echo ">> node $i <<"; ssh ec2-user@$i cat /etc/kubernetes/kubelet/kubelet-config.json; echo; done
- Check node storage information
for i in $N1 $N2; do echo ">> node $i <<"; ssh ec2-user@$i lsblk; echo; done
for i in $N1 $N2; do echo ">> node $i <<"; ssh ec2-user@$i df -hT /; echo; done
- Check EKS owned ENIs
# Where is the peer address for kubelet and kube-proxy communication?
for i in $N1 $N2; do echo ">> node $i <<"; ssh ec2-user@$i sudo ss -tnp; echo; done
...
ESTAB 0 0 192.168.1.187:60820 3.38.48.142:443 users:(("kube-proxy",pid=3140,fd=7))
...
...
ESTAB 0 0 192.168.2.55:43680 3.37.155.27:443 users:(("kube-proxy",pid=3143,fd=7))
...
# For reference
APIDNS=$(aws eks describe-cluster --name $CLUSTER_NAME | jq -r .cluster.endpoint | cut -d '/' -f 3)
dig +short $APIDNS
# [Terminal] Start a bash session in one aws-node daemonset pod and keep it open
kubectl exec daemonsets/aws-node -it -n kube-system -c aws-eks-nodeagent -- bash
# Where is the peer address of the connection added by the exec above? >> Check that IP among the AWS network interfaces (ENIs)
for i in $N1 $N2; do echo ">> node $i <<"; ssh ec2-user@$i sudo ss -tnp; echo; done
...
ESTAB 0 0 [::ffff:192.168.1.187]:10250 [::ffff:192.168.1.148]:43662 users:(("kubelet",pid=2929,fd=11))
...
- EKS owned ENI: the worker nodes in the managed node group are owned by my account, but the attached ENI (NIC) is owned by AWS (the EKS-managed control plane side)
- Check EC2 metadata (IAM role): details are covered in the security week - Link
# Connect to node 1
ssh ec2-user@$N1
----------------
# Check IMDSv2 metadata
TOKEN=`curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"`
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/iam/security-credentials/ ;echo
eksctl-myeks-nodegroup-myeks-nodeg-NodeInstanceRole-4nyRq1rhLJLd
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/iam/security-credentials/eksctl-myeks-nodegroup-myeks-nodeg-NodeInstanceRole-4nyRq1rhLJLd | jq
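IMDSv2, as used above, is a two-step flow: a PUT request first mints a session token (the TTL header must be between 1 and 21600 seconds), and every subsequent metadata GET presents that token. A small sketch, using a hypothetical helper that is not part of any AWS tooling, that clamps a requested TTL into the valid range before building the token request:

```shell
# Hypothetical helper: IMDSv2 session-token TTLs are valid between
# 1 and 21600 seconds (6 hours); clamp a requested TTL into that range.
imds_ttl_clamp() {
  ttl=$1
  if [ "$ttl" -lt 1 ]; then ttl=1; fi
  if [ "$ttl" -gt 21600 ]; then ttl=21600; fi
  echo "$ttl"
}

imds_ttl_clamp 99999
imds_ttl_clamp 3600
# TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
#   -H "X-aws-ec2-metadata-token-ttl-seconds: $(imds_ttl_clamp 21600)")
```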
4-4) EKS Security Groups
Think about how each security group is applied! ⇒ If the rules do not display properly in Chrome, check them in Safari.
# Check security group IDs and group names (GroupName, not the Name tag!)
aws ec2 describe-security-groups --query 'SecurityGroups[*].[GroupId, GroupName]' --output text
sg-0d0f2645067c469ff eksctl-myeks-cluster-ClusterSharedNodeSecurityGroup-12YGG7TN6AEZB
sg-06d88a2342ac57013 default
sg-0eec130da40e36d74 eksctl-myeks-cluster-ControlPlaneSecurityGroup-MD990KLX17T3
sg-062868d8fb1081c86 eks-cluster-sg-myeks-104368993
sg-05c2bd151c8c02eeb myeks-EKSEC2SG-UO8FZQ60N6GN
sg-0c2fd66d9be730b2c default
sg-0f985e1c506bcc2be eksctl-myeks-nodegroup-myeks-nodegroup-remoteAccess
ex)
sg-08fb7735ecb810e51 eksctl-myeks-cluster-ControlPlaneSecurityGroup-1SDT1PGKXDVKG # Used when accessing the control plane; Communication between the control plane and worker nodegroups
sg-081a71090edaf3ca0 eksctl-myeks-cluster-ClusterSharedNodeSecurityGroup-146FE70CAM0TG # Allows managed and self-managed nodes to communicate with each other; Communication between all nodes in the cluster
sg-0efe39249c676d721 eks-cluster-sg-myeks-104368993 # Allows the control plane, managed nodes, and self-managed nodes to all communicate with each other; EKS created security group applied to ENI that is attached to EKS Control Plane master nodes, as well as any managed workloads
sg-03cf625dccddaf53f eksctl-myeks-nodegroup-myeks-nodegroup-remoteAccess # Allows SSH access to the node group
sg-03d74ab2fe08e1c7d myeks-EKSEC2SG-1SVMDAAJO9FFC # Working EC2: eksctl-host security group
# Inspect each security group
aws ec2 describe-security-groups --group-ids sg-08fb7735ecb810e51 --output yaml | yh
aws ec2 describe-security-groups --group-ids sg-081a71090edaf3ca0 --output yaml | yh
aws ec2 describe-security-groups --group-ids sg-0efe39249c676d721 --output yaml | yh
aws ec2 describe-security-groups --group-ids sg-03cf625dccddaf53f --output yaml | yh
https://logonme.net/tech/aews_w1/#%EB%B3%B4%EC%95%88%EA%B7%B8%EB%A3%B9_%EC%8B%AC%ED%99%94
4-5) (Reference) Lab deployment command collection
# Download the YAML file
curl -O https://s3.ap-northeast-2.amazonaws.com/cloudformation.cloudneta.net/K8S/myeks-1week.yaml
# Deploy the base environment
aws cloudformation deploy --template-file ~/Downloads/myeks-1week.yaml --stack-name myeks --parameter-overrides KeyName=kp-gasida SgIngressSshCidr=$(curl -s ipinfo.io/ip)/32 --region ap-northeast-2
# SSH in
ssh -i ~/.ssh/kp-gasida.pem ec2-user@$(aws cloudformation describe-stacks --stack-name myeks --query 'Stacks[*].Outputs[0].OutputValue' --output text)
# Configure IAM user credentials: for lab convenience, enter the credentials of an IAM user with administrator privileges
aws configure
# Check the VPC where EKS will be deployed
export VPCID=$(aws ec2 describe-vpcs --filters "Name=tag:Name,Values=$CLUSTER_NAME-VPC" | jq -r .Vpcs[].VpcId)
echo "export VPCID=$VPCID" >> /etc/profile
export PubSubnet1=$(aws ec2 describe-subnets --filters Name=tag:Name,Values="$CLUSTER_NAME-PublicSubnet1" --query "Subnets[0].[SubnetId]" --output text)
export PubSubnet2=$(aws ec2 describe-subnets --filters Name=tag:Name,Values="$CLUSTER_NAME-PublicSubnet2" --query "Subnets[0].[SubnetId]" --output text)
echo "export PubSubnet1=$PubSubnet1" >> /etc/profile
echo "export PubSubnet2=$PubSubnet2" >> /etc/profile
# Deploy the EKS cluster
eksctl create cluster --name $CLUSTER_NAME --region=$AWS_DEFAULT_REGION --nodegroup-name=$CLUSTER_NAME-nodegroup --node-type=t3.medium \
--node-volume-size=30 --vpc-public-subnets "$PubSubnet1,$PubSubnet2" --version 1.28 --ssh-access --external-dns-access --verbose 4
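The same `eksctl create cluster` flags can also be kept as a declarative ClusterConfig file, which is easier to review and version-control. This is a sketch only: the subnet IDs are placeholders, and `eksctl create cluster --dry-run` with the flags above can emit the exact YAML for your environment.

```shell
# Sketch: the CLI flags above expressed as a declarative ClusterConfig.
# NOTE: the subnet IDs below are placeholders, not real resources.
cat > myeks-cluster.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: myeks
  region: ap-northeast-2
  version: "1.28"
vpc:
  subnets:
    public:
      ap-northeast-2a: { id: subnet-PLACEHOLDER1 }
      ap-northeast-2c: { id: subnet-PLACEHOLDER2 }
managedNodeGroups:
  - name: myeks-nodegroup
    instanceType: t3.medium
    volumeSize: 30
    ssh:
      allow: true
EOF

# Then: eksctl create cluster -f myeks-cluster.yaml
```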
---
# Check node IPs and set private-IP variables
N1=$(kubectl get node --label-columns=topology.kubernetes.io/zone --selector=topology.kubernetes.io/zone=ap-northeast-2a -o jsonpath={.items[0].status.addresses[0].address})
N2=$(kubectl get node --label-columns=topology.kubernetes.io/zone --selector=topology.kubernetes.io/zone=ap-northeast-2c -o jsonpath={.items[0].status.addresses[0].address})
echo "export N1=$N1" >> /etc/profile
echo "export N2=$N2" >> /etc/profile
# Add a rule to the node security group so eksctl-host can reach the nodes (and pods)
NGSGID=$(aws ec2 describe-security-groups --filters Name=group-name,Values=*nodegroup* --query "SecurityGroups[*].[GroupId]" --output text)
echo "export NGSGID=$NGSGID" >> /etc/profile
aws ec2 authorize-security-group-ingress --group-id $NGSGID --protocol '-1' --cidr 192.168.1.100/32
# SSH into the worker nodes
ssh -i ~/.ssh/id_rsa ec2-user@$N1 hostname
ssh -i ~/.ssh/id_rsa ec2-user@$N2 hostname
5. Management Convenience
- kubectl auto-completion and alias: already configured during the installation check
# Configure auto-completion and the k alias
source <(kubectl completion bash)
alias k=kubectl
complete -F __start_kubectl k
- Install krew, the kubectl CLI plugin manager: already installed - Link
# Install
curl -fsSLO https://github.com/kubernetes-sigs/krew/releases/download/v0.4.4/krew-linux_amd64.tar.gz
tar zxvf krew-linux_amd64.tar.gz
./krew-linux_amd64 install krew
tree -L 3 /root/.krew/bin
# Verify krew
kubectl krew
kubectl krew update
kubectl krew search
kubectl krew list
- Install and use kube-ctx and kube-ns via krew: already installed
- kube-ctx: switch between Kubernetes contexts
# Install
kubectl krew install ctx
# List contexts
kubectl ctx
# Switch context
kubectl ctx <your context name>
- kube-ns: switch namespaces (a namespace is a virtual cluster within a single cluster)
# Install
kubectl krew install ns
# List namespaces
kubectl ns
# Terminal 1
watch kubectl get pod
# Switch to the kube-system namespace
kubectl ns kube-system
# Switch back to the default namespace
kubectl ns -
or
kubectl ns default
- Check installed krew plugins
kubectl krew list
- Install and use other krew plugins: df-pv get-all ktop neat oomd view-secret
# Install
kubectl krew install df-pv get-all ktop neat oomd view-secret # mtail tree
# Use get-all
kubectl get-all
kubectl get-all -n kube-system
# Use ktop
kubectl ktop
# Use oomd
kubectl oomd
# Use df-pv
kubectl df-pv
# Use view-secret: decode secrets
kubectl view-secret
- Install and use kube-ps1: already installed
# Install and configure
git clone https://github.com/jonmosco/kube-ps1.git /root/kube-ps1
cat <<"EOT" >> /root/.bash_profile
source /root/kube-ps1/kube-ps1.sh
KUBE_PS1_SYMBOL_ENABLE=true
function get_cluster_short() {
echo "$1" | cut -d . -f1
}
KUBE_PS1_CLUSTER_FUNCTION=get_cluster_short
KUBE_PS1_SUFFIX=') '
PS1='$(kube_ps1)'$PS1
EOT
# Apply: exit the shell sessions and log back in
exit
exit
# Select the default namespace
kubectl ns default
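The `get_cluster_short` function registered in the kube-ps1 profile above simply keeps the first dot-separated label of the cluster name, so the prompt stays compact. The cluster name below is illustrative:

```shell
# get_cluster_short (from the kube-ps1 profile above) keeps only the
# first dot-separated label of the cluster name for a compact prompt.
get_cluster_short() {
  echo "$1" | cut -d . -f1
}

get_cluster_short "myeks.ap-northeast-2.eksctl.io"
```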
6. Basic Usage
- Lab: the declarative (idempotent) model
# Terminal 1 (monitoring)
watch -d 'kubectl get pod'
# Terminal 2
# Deploy a Deployment (3 pods)
kubectl create deployment my-webs --image=gcr.io/google-samples/kubernetes-bootcamp:v1 --replicas=3
kubectl get pod -w
# Scale pods up and down
kubectl scale deployment my-webs --replicas=6 && kubectl get pod -w
kubectl scale deployment my-webs --replicas=3
kubectl get pod
# Force-delete all pods: a rough look at desired state and the declarative model ⇒ what happens?
kubectl delete pod --all && kubectl get pod -w
kubectl get pod
# Delete the Deployment after the lab
kubectl delete deploy my-webs
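What the lab above demonstrates is the controller reconciliation loop: the Deployment's controller continuously compares the desired replica count against what it observes and acts on the difference, which is why force-deleted pods are recreated immediately. A toy sketch of that decision, not the actual controller code:

```shell
# Toy sketch of reconciliation: compare desired vs. observed replica
# counts and report the corrective action a controller would take.
reconcile() {
  desired=$1
  observed=$2
  if [ "$observed" -lt "$desired" ]; then
    echo "create $((desired - observed)) pod(s)"
  elif [ "$observed" -gt "$desired" ]; then
    echo "delete $((observed - desired)) pod(s)"
  else
    echo "in sync"
  fi
}

reconcile 3 0   # the moment after 'kubectl delete pod --all'
reconcile 3 3
```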
- Service/pod (Mario game) deployment test with a CLB
# Terminal 1 (monitoring)
watch -d 'kubectl get pod,svc'
# Deploy the Super Mario Deployment
curl -s -O https://raw.githubusercontent.com/gasida/PKOS/main/1/mario.yaml
kubectl apply -f mario.yaml
cat mario.yaml | yh
# Verify the deployment: confirm the CLB is created
kubectl get deploy,svc,ep mario
# Access the Mario game: open the CLB address in a browser
kubectl get svc mario -o jsonpath={.status.loadBalancer.ingress[0].hostname} | awk '{ print "Mario URL = http://"$1 }'
- Check the resources (pods, services) in the AWS EKS management console → check the ELB (CLB) in the AWS EC2 console
- Cleanup: kubectl delete -f mario.yaml
- Check container info on the nodes with the three containerd clients: ctr, nerdctl, crictl - Link
# Check the ctr version
ssh ec2-user@$N1 ctr --version
ctr github.com/containerd/containerd v1.7.11
# ctr help
ssh ec2-user@$N1 ctr
NAME:
ctr -
__
_____/ /______
/ ___/ __/ ___/
/ /__/ /_/ /
\___/\__/_/
containerd CLI
...
DESCRIPTION:
ctr is an unsupported debug and administrative client for interacting with the containerd daemon.
Because it is unsupported, the commands, options, and operations are not guaranteed to be backward compatible or stable from release to release of the containerd project.
COMMANDS:
plugins, plugin provides information about containerd plugins
version print the client and server versions
containers, c, container manage containers
content manage content
events, event display containerd events
images, image, i manage images
leases manage leases
namespaces, namespace, ns manage namespaces
pprof provide golang pprof outputs for containerd
run run a container
snapshots, snapshot manage snapshots
tasks, t, task manage tasks
install install a new package
oci OCI tools
shim interact with a shim directly
help, h Shows a list of commands or help for one command
# List namespaces
ssh ec2-user@$N1 sudo ctr ns list
NAME LABELS
k8s.io
# List containers
for i in $N1 $N2; do echo ">> node $i <<"; ssh ec2-user@$i sudo ctr -n k8s.io container list; echo; done
CONTAINER IMAGE RUNTIME
28b6a15c475e32cd8777c1963ba684745573d0b6053f80d2d37add0ae841eb45 602401143452.dkr.ecr-fips.us-east-1.amazonaws.com/eks/pause:3.5 io.containerd.runc.v2
4f266ebcee45b133c527df96499e01ec0c020ea72785eb10ef63b20b5826cf7c 602401143452.dkr.ecr-fips.us-east-1.amazonaws.com/eks/pause:3.5 io.containerd.runc.v2
...
# List container images
for i in $N1 $N2; do echo ">> node $i <<"; ssh ec2-user@$i sudo ctr -n k8s.io image list --quiet; echo; done
...
# List tasks
for i in $N1 $N2; do echo ">> node $i <<"; ssh ec2-user@$i sudo ctr -n k8s.io task list; echo; done
...
## Example) check a task's PID (3706)
ssh ec2-user@$N1 sudo ps -c 3706
PID CLS PRI TTY STAT TIME COMMAND
3099 TS 19 ? Ssl 0:01 kube-proxy --v=2 --config=/var/lib/kube-proxy-config/config --hostname-override=ip-192-168-1-229.ap-northeast-2.compute.internal
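`ctr task list` prints `TASK PID STATUS` columns, so the PID used above comes straight from the second column. A small sketch, using a hypothetical `task_pid` helper and an illustrative sample row, that extracts the PID for a given container ID:

```shell
# Hypothetical helper: extract the PID column for a given task ID
# from 'ctr task list' output (columns: TASK PID STATUS).
task_pid() {
  awk -v id="$1" '$1 == id { print $2 }'
}

# Illustrative sample output:
sample='TASK                                                              PID     STATUS
28b6a15c475e32cd8777c1963ba684745573d0b6053f80d2d37add0ae841eb45  3706    RUNNING'
printf '%s\n' "$sample" | task_pid 28b6a15c475e32cd8777c1963ba684745573d0b6053f80d2d37add0ae841eb45
```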
- Using a public ECR repository: public repos are configured via us-east-1 - Link
# Authenticate to public ECR
aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws
cat /root/.docker/config.json | jq
# Check basic public registry info
aws ecr-public describe-registries --region us-east-1 | jq
# Create a public repo
NICKNAME=<your nickname>
NICKNAME=gasida
aws ecr-public create-repository --repository-name $NICKNAME/nginx --region us-east-1
# Verify the created public repo
aws ecr-public describe-repositories --region us-east-1 | jq
REPOURI=$(aws ecr-public describe-repositories --region us-east-1 | jq -r .repositories[].repositoryUri)
echo $REPOURI
# Tag the image
docker pull nginx:alpine
docker images
docker tag nginx:alpine $REPOURI:latest
docker images
# Push the image
docker push $REPOURI:latest
# Run a pod
kubectl run mynginx --image $REPOURI
kubectl get pod
kubectl delete pod mynginx
# Delete the public image
aws ecr-public batch-delete-image \
--repository-name $NICKNAME/nginx \
--image-ids imageTag=latest \
--region us-east-1
# Delete the public repo
aws ecr-public delete-repository --repository-name $NICKNAME/nginx --force --region us-east-1
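A public ECR image reference has the shape `public.ecr.aws/<registry-alias>/<repository>:<tag>`, which is what `$REPOURI` held above. A sketch, using a hypothetical helper and a placeholder alias, that builds such a reference:

```shell
# Hypothetical helper: build a public ECR image reference
# public.ecr.aws/<registry-alias>/<repository>:<tag>.
ecr_public_uri() {
  alias=$1
  repo=$2
  tag=${3:-latest}   # default to the :latest tag
  echo "public.ecr.aws/${alias}/${repo}:${tag}"
}

ecr_public_uri a1b2c3d4 gasida/nginx   # a1b2c3d4 is a placeholder alias
```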
- Scale (add or remove nodes in) the managed node group - Link
# Optional [Terminal 1] monitor EC2 instance creation
#aws ec2 describe-instances --query "Reservations[*].Instances[*].{PublicIPAdd:PublicIpAddress,PrivateIPAdd:PrivateIpAddress,InstanceName:Tags[?Key=='Name']|[0].Value,Status:State.Name}" --filters Name=instance-state-name,Values=running --output table
while true; do aws ec2 describe-instances --query "Reservations[*].Instances[*].{PublicIPAdd:PublicIpAddress,PrivateIPAdd:PrivateIpAddress,InstanceName:Tags[?Key=='Name']|[0].Value,Status:State.Name}" --filters Name=instance-state-name,Values=running --output text ; echo "------------------------------" ; sleep 1; done
# Check the EKS node group
eksctl get nodegroup --cluster $CLUSTER_NAME --name $CLUSTER_NAME-nodegroup
# Scale from 2 to 3 nodes
eksctl scale nodegroup --cluster $CLUSTER_NAME --name $CLUSTER_NAME-nodegroup --nodes 3 --nodes-min 3 --nodes-max 6
# Check the nodes
kubectl get nodes -o wide
kubectl get nodes -l eks.amazonaws.com/nodegroup=$CLUSTER_NAME-nodegroup
# Scale from 3 back to 2 nodes: takes a while to apply
aws eks update-nodegroup-config --cluster-name $CLUSTER_NAME --nodegroup-name $CLUSTER_NAME-nodegroup --scaling-config minSize=2,maxSize=2,desiredSize=2
- Check the current endpoint state
# [Monitor 1] watch for changes after the update
APIDNS=$(aws eks describe-cluster --name $CLUSTER_NAME | jq -r .cluster.endpoint | cut -d '/' -f 3)
dig +short $APIDNS
while true; do dig +short $APIDNS ; echo "------------------------------" ; date; sleep 1; done
# [Monitor 2] check the node-to-control-plane connections
N1=$(kubectl get node --label-columns=topology.kubernetes.io/zone --selector=topology.kubernetes.io/zone=ap-northeast-2a -o jsonpath={.items[0].status.addresses[0].address})
N2=$(kubectl get node --label-columns=topology.kubernetes.io/zone --selector=topology.kubernetes.io/zone=ap-northeast-2c -o jsonpath={.items[0].status.addresses[0].address})
while true; do ssh ec2-user@$N1 sudo ss -tnp | egrep 'kubelet|kube-proxy' ; echo ; ssh ec2-user@$N2 sudo ss -tnp | egrep 'kubelet|kube-proxy' ; echo "------------------------------" ; date; sleep 1; done
# Change to Public (IP-restricted) + Private: applied about 8 minutes after the update
aws eks update-cluster-config --region $AWS_DEFAULT_REGION --name $CLUSTER_NAME --resources-vpc-config endpointPublicAccess=true,publicAccessCidrs="$(curl -s ipinfo.io/ip)/32",endpointPrivateAccess=true
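`publicAccessCidrs` accepts a comma-separated list, so several admin source IPs can be allowed at once. A sketch, using a hypothetical helper and documentation-example IPs, that joins host IPs into that format:

```shell
# Hypothetical helper: join host IPs into a comma-separated /32 CIDR
# list suitable for the publicAccessCidrs field.
join_cidrs() {
  out=""
  for ip in "$@"; do
    if [ -z "$out" ]; then
      out="${ip}/32"
    else
      out="${out},${ip}/32"
    fi
  done
  echo "$out"
}

join_cidrs 203.0.113.10 198.51.100.7   # example IPs (TEST-NET ranges)
# ... --resources-vpc-config endpointPublicAccess=true,publicAccessCidrs="$(join_cidrs <ip1> <ip2>)",endpointPrivateAccess=true
```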
- Verify kubectl
# Confirm kubectl works
kubectl get node -v=6
kubectl cluster-info
ss -tnp | grep kubectl # Check from a new terminal >> inside the VPC where EKS runs, querying the cluster endpoint domain returns private IPs
# If the dig query below does not resolve correctly, reboot the myeks host EC2 and try again
dig +short $APIDNS
# Get the EKS control plane security group ID
aws ec2 describe-security-groups --filters Name=group-name,Values=*ControlPlaneSecurityGroup* --query "SecurityGroups[*].[GroupId]" --output text
CPSGID=$(aws ec2 describe-security-groups --filters Name=group-name,Values=*ControlPlaneSecurityGroup* --query "SecurityGroups[*].[GroupId]" --output text)
echo $CPSGID
# Add a rule to the control plane security group so eksctl-host can reach the private endpoint
aws ec2 authorize-security-group-ingress --group-id $CPSGID --protocol '-1' --cidr 192.168.1.100/32
# Confirm kubectl still works
kubectl get node -v=6
kubectl cluster-info
- Check from the nodes
# Monitor: watch the TCP peer addresses change
while true; do ssh ec2-user@$N1 sudo ss -tnp | egrep 'kubelet|kube-proxy' ; echo ; ssh ec2-user@$N2 sudo ss -tnp | egrep 'kubelet|kube-proxy' ; echo "------------------------------" ; date; sleep 1; done
# kube-proxy rollout
kubectl rollout restart ds/kube-proxy -n kube-system
# For kubelet, apply by restarting it on each node with systemctl restart kubelet
for i in $N1 $N2; do echo ">> node $i <<"; ssh ec2-user@$i sudo systemctl restart kubelet; echo; done
- Check the EKS API from your personal PC
# From outside the VPC, querying the cluster endpoint domain returns public IPs
dig +short $APIDNS
* Gasida's note: even without a fully private endpoint, it seems good for convenience to configure Public + Private and restrict the source IPs allowed to use the public API (verify whether multiple IPs or CIDR ranges can be applied).
- (Advanced) Managing etcd database size on Amazon EKS clusters - Link
# monitor the metric etcd_db_total_size_in_bytes to track the etcd database size
kubectl get --raw /metrics | grep "apiserver_storage_size_bytes"
# HELP apiserver_storage_size_bytes [ALPHA] Size of the storage database file physically allocated in bytes.
# TYPE apiserver_storage_size_bytes gauge
apiserver_storage_size_bytes{cluster="etcd-0"} 2.957312e+06
# How do I identify what is consuming etcd database space?
kubectl get --raw=/metrics | grep apiserver_storage_objects |awk '$2>10' |sort -g -k 2
# HELP apiserver_storage_objects [STABLE] Number of stored objects at the time of last check split by kind.
# TYPE apiserver_storage_objects gauge
apiserver_storage_objects{resource="flowschemas.flowcontrol.apiserver.k8s.io"} 13
apiserver_storage_objects{resource="roles.rbac.authorization.k8s.io"} 15
apiserver_storage_objects{resource="rolebindings.rbac.authorization.k8s.io"} 16
apiserver_storage_objects{resource="apiservices.apiregistration.k8s.io"} 31
apiserver_storage_objects{resource="serviceaccounts"} 40
apiserver_storage_objects{resource="clusterrolebindings.rbac.authorization.k8s.io"} 67
apiserver_storage_objects{resource="events"} 67
apiserver_storage_objects{resource="clusterroles.rbac.authorization.k8s.io"} 81
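The `awk '$2>10'` filter used above simply keeps metric lines whose value (field 2) exceeds 10, and `sort -g -k 2` orders them numerically. A demo on a few sample lines (values are illustrative):

```shell
# Demo of the metric filter: keep lines whose value exceeds 10,
# then sort numerically by value.
metrics='apiserver_storage_objects{resource="events"} 67
apiserver_storage_objects{resource="secrets"} 5
apiserver_storage_objects{resource="serviceaccounts"} 40'
printf '%s\n' "$metrics" | awk '$2>10' | sort -g -k 2
```

The `secrets` line (value 5) is dropped, and the survivors print smallest-first, so the biggest consumers of etcd space end up at the bottom of the output.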
7. (After Completing the Lab) Resource Cleanup
🚨 Be sure to delete the lab resources after finishing, to minimize instance costs.
# Delete the Amazon EKS cluster (takes about 10 minutes)
(aews@myeks:default) [root@myeks-host ~]# eksctl delete cluster --name $CLUSTER_NAME
# (After confirming the cluster is deleted) delete the AWS CloudFormation stack
(aews@myeks:default) [root@myeks-host ~]# aws cloudformation delete-stack --stack-name myeks