Kubernetes manages container-based cluster deployments: hundreds of thousands of containers can communicate, discover each other, and function as expected under its control. Although it sounds complex, fundamentally it is just a software application that manages how your applications run in containers.
1. Installation with kubeadm
There are three main tools for kubernetes installation and management; they need to be installed on both master and worker nodes with the command below:
apt-get install -y kubeadm=1.18.1-00 kubelet=1.18.1-00 kubectl=1.18.1-00
1.1 kubeadm: installation of master node
kubeadm is a regular linux package that runs in the terminal console of your virtual machine or local box to install the kubernetes software. It sets up the cluster manager on the master node and also allows other VMs to join the cluster as worker nodes. Before installing kubeadm, docker needs to be installed on the VM, since containers are the basic running unit in the kubernetes cluster.
//1. update apt-get
sudo -i
apt-get update
apt-get upgrade -y
//install docker
apt-get install -y docker.io
//2. prepare the kubernetes repo by adding the deb line below to the following file
//install vim if it is not already installed on the current node
apt-get install -y vim
vim /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
apt-get update
//3 install kubeadm, kubelet, kubectl
apt-get install -y kubeadm=1.18.1-00 kubelet=1.18.1-00 kubectl=1.18.1-00
apt-mark hold kubelet kubeadm kubectl
//4 install network add on
wget https://docs.projectcalico.org/manifests/calico.yaml
//uncomment the lines below in the file; this value is used for the kubeadm init pod-network-cidr parameter
- name: CALICO_IPV4POOL_CIDR
  value: "192.168.0.0/24"
//get ens4 ip address, for example, 10.0.0.6
ip addr show
//add the host ip to host table for k8smaster
root@master:~# vim /etc/hosts
10.0.0.6 k8smaster
//create the kubeadm-config.yaml file as below in the /root folder
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: 1.18.1
controlPlaneEndpoint: "k8smaster:6443"
networking:
  podSubnet: 192.168.0.0/24
//run kubeadm init to start the cluster; the output contains the information needed to add other nodes
root@master:~# kubeadm init --config=kubeadm-config.yaml --upload-certs
root@master:~# exit
//allow a non-root user to manage the cluster
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
//apply network addon calico
sudo cp /root/calico.yaml .
kubectl apply -f calico.yaml
//verify the master node and the major kube services (calico, coredns, etcd, apiserver, kube-proxy, scheduler) are ready
kubectl get nodes
kubectl -n kube-system get pods
1.2 kubeadm: installation of worker node
//1. update apt-get and install docker
sudo -i
apt-get update
apt-get upgrade -y
//install docker
apt-get install -y docker.io
//2. prepare the kubernetes repo by adding the deb line below to the following file
//install vim if it is not already installed on the current node
apt-get install -y vim
vim /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
apt-get update
//3 install kubeadm, kubelet, kubectl
apt-get install -y kubeadm=1.18.1-00 kubelet=1.18.1-00 kubectl=1.18.1-00
apt-mark hold kubelet kubeadm kubectl
//4 run the below command on the master node to get the master node ip, for example 10.0.0.6
ip addr show ens4
//use the below command to get the token from the master
sudo kubeadm token list
//if the token has expired, use the below command to generate a new one
sudo kubeadm token create
//run the command below to get the ca cert hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
//output is (stdin)= fb5f18060d9c9363763da1e0ddf164dd1b41e640204d0e5ab01c366c3f5b944b
//5 go back to the worker node to add the master node ip as k8smaster to the host table
root@worker:~# vim /etc/hosts
//add below line
10.0.0.6 k8smaster
//6 add worker node to cluster
kubeadm join k8smaster:6443 --token xwqbt0.ig282rtsalg84ndw --discovery-token-ca-cert-hash sha256:fb5f18060d9c9363763da1e0ddf164dd1b41e640204d0e5ab01c366c3f5b944b
root@worker:~# exit
//7 run the below command on the master node to confirm the worker node has joined
student@master:~$ kubectl get nodes
1.3 cluster upgrade
1.3.1 first upgrade the kubeadm package on the master node
//get upgrade information
kubeadm upgrade plan
//download the new version of kubeadm on the master node
apt-get install kubeadm=1.19.0-00
1.3.2 upgrade cluster
//apply the kubeadm upgrade on the master node
kubeadm upgrade apply v1.19.0
1.3.3 upgrade kubelet to the new version on the master node
apt-get install kubelet=1.19.0-00
or
apt-get upgrade -y kubelet=1.19.0-00
//restart kubelet
sudo systemctl daemon-reload
sudo systemctl restart kubelet
1.3.4 upgrade kubelet on worker node
//first drain the worker node from the master node, then run "ssh workernodename" to log in to the worker node
apt-get upgrade -y kubeadm=1.19.0-00
kubeadm upgrade node
//the above command also applies to other control plane nodes (not the master control node)
apt-get upgrade -y kubelet=1.19.0-00
sudo systemctl daemon-reload
sudo systemctl restart kubelet
Finally, uncordon the worker node from the master node, as shown below.
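For reference, the drain and uncordon steps are run from the master node; the node name here is illustrative:
//before upgrading, move pods off the worker and mark it unschedulable
kubectl drain myworkernode --ignore-daemonsets
//after the upgrade, make the worker schedulable again
kubectl uncordon myworkernode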
2. Manage cluster with kubectl
After kubernetes is installed, the user can run kubectl on the master node's terminal console to manage the kubernetes cluster: create/delete nodes/pods, get info from nodes/pods, scale up and down, and so on. Internally, kubectl sends http requests to the kubernetes API server, which communicates with different services and controllers to make sure all worker nodes are in a good state.
To get the yaml schema (including the apiVersion) and field documentation for a resource type, run kubectl explain typeofresource, for example:
kubectl explain replicaset
2.1 creating pod
// create a pod with run command
kubectl run mypodname --image dockerimagename
//get pod info
kubectl get pods
kubectl get pods -o wide
kubectl describe pod podname
//delete pod
kubectl delete pod podname
//use dry run to generate a pod creation yaml file
kubectl run mytestpod --image=nginx --dry-run=client -o yaml > mypod.yaml
//generated yaml file as below
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: mytestpod
  name: mytestpod
spec:
  containers:
  - image: nginx
    name: mytestpod
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
// create a pod with a yaml file
kubectl create -f pod.yaml
//apply the content in yaml file
kubectl apply -f pod.yaml
//update pod config
kubectl edit pod podname
//select pod based on label matching
kubectl get pod --selector env=prod,bu=finance,tier=frontend
//generate a yaml file based on an existing pod
kubectl get pod mypod -o yaml > mypod.yaml
//create a pod with resource limit and request
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: app
    image: images.my-company.example/app:v4
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
  - name: log-aggregator
    image: images.my-company.example/log-aggregator:v6
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
2.2 Create ReplicaSet
A replicaset yaml file's spec section must include the pod template, selector, and replicas fields.
kubectl create -f myreplicaset.yaml
kubectl get replicaset -o wide
kubectl delete replicaset myreplicasetname
kubectl replace -f myreplicasetfilename.yaml
kubectl scale replicaset --replicas=10 myreplicasetname
kubectl describe replicaset myreplicasetname
//yaml file to create replicaset
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # modify replicas according to your case
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3
2.3 create Deployment
kubectl create deployment --image=nginx mynginx --replicas=4
kubectl create deployment --image=nginx mynginx --dry-run=client -o yaml
kubectl create deployment --image=nginx mynginx --dry-run=client -o yaml > nginx-deployment.yaml
kubectl expose deployment mynginx --port 80
kubectl get deployment mydeployment -o yaml
kubectl edit service mynginx
kubectl edit deployment mydeployment
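For reference, a minimal sketch of what the dry-run command above typically writes to nginx-deployment.yaml (exact fields vary by kubectl version):
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mynginx
  name: mynginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mynginx
  template:
    metadata:
      labels:
        app: mynginx
    spec:
      containers:
      - image: nginx
        name: nginx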
2.4 namespaces
kubectl get pods --namespace=dev
kubectl config set-context $(kubectl config current-context) --namespace=dev
kubectl get pods --all-namespaces
2.5 get statistics from pods and nodes
kubectl top node
kubectl top pod
2.6 get log
kubectl logs -f podname containername
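Other commonly used log variations (pod and label names are illustrative):
//show logs from the previous (crashed) container instance
kubectl logs mypodname --previous
//show logs from all pods matching a label
kubectl logs -l app=myapp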
2.7 rollout deployment
Rollout operations are keyed on the deployment name; a deployment can be restarted, paused, resumed, undone, etc., as shown in the commands below.
//roll back a deployment to previous change
kubectl rollout undo deployment/mydeployment
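Other rollout subcommands mentioned above (deployment name is illustrative):
kubectl rollout status deployment/mydeployment
kubectl rollout history deployment/mydeployment
kubectl rollout pause deployment/mydeployment
kubectl rollout resume deployment/mydeployment
kubectl rollout restart deployment/mydeployment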
2.8 command and args
In a pod yaml file, the command and args fields replace the Docker file's ENTRYPOINT (execution command) and CMD (arguments).
apiVersion: v1
kind: Pod
metadata:
  name: command-demo
  labels:
    purpose: demonstrate-command
spec:
  containers:
  - name: command-demo-container
    image: debian
    command: ["printenv"]
    args: ["HOSTNAME", "KUBERNETES_PORT"]
To execute a shell command in a container:
kubectl exec mypodname -c mycontainername -i -t -- mybashcommand
//run an interactive shell
kubectl exec mypodname -it -- sh
2.9 ConfigMaps for Environment variables
kubectl create configmap myconfigmap --from-literal=mykey1=myvalue1 --from-literal=mykey2=myvalue2
kubectl create configmap myconfigmap --from-file=mypropertyfile
Using a yaml file to create a config map:
kubectl create -f myconfigmapyamlfile
apiVersion: v1
kind: ConfigMap
metadata:
  name: game-config-env-file
  namespace: default
  resourceVersion: "809965"
  uid: d9d1ca5b-eb34-11e7-887b-42010a8002b8
data:
  allowed: '"true"'
  enemies: aliens
  lives: "3"
sample pod yaml file to set environment variables from a config map:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh", "-c", "env" ]
    envFrom:
    - configMapRef:
        name: special-config
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh", "-c", "env" ]
    env:
    - name: SPECIAL_LEVEL_KEY
      valueFrom:
        configMapKeyRef:
          name: special-config
          key: special.how
    - name: LOG_LEVEL
      valueFrom:
        configMapKeyRef:
          name: env-config
          key: log_level
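A config map can also be mounted as files inside a container; a minimal sketch reusing the special-config map above (pod name and mount path are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-pod
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh", "-c", "ls /etc/config" ]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: special-config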
2.10 secret for pod
kubectl create secret generic mysecret --from-literal=mykey1=myvalue1 --from-literal=mykey2=myvalue2
kubectl create secret generic mysecret --from-file=mypropertyfile
kubectl create -f mysecretfile
//base64-encode a secret value
echo -n "myvalue" | base64
//decode a base64-encoded value
echo -n 'myencodedvalue' | base64 --decode
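A minimal sketch of a secret yaml file for kubectl create -f; the data values must be base64-encoded as above (names and values are illustrative):
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  username: YWRtaW4=
  password: bXlwYXNzd29yZA==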
Set environment variables from a secret:
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:
  - name: mycontainer
    image: redis
    env:
    - name: SECRET_USERNAME
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: username
    - name: SECRET_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: password
Set all keys of a secret as environment variables:
apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh", "-c", "env" ]
    envFrom:
    - secretRef:
        name: mysecret
  restartPolicy: Never
create docker registry secret
kubectl create secret docker-registry private-reg-cred --docker-username=docker_user --docker-password=dock_password --docker-server=myprivateregistry.com:5000 --docker-email=dock_user@myprivateregistry.com
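To pull an image with that credential, reference the secret from the pod spec; a minimal sketch (pod and image names are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: private-reg-pod
spec:
  containers:
  - name: app
    image: myprivateregistry.com:5000/myapp:v1
  imagePullSecrets:
  - name: private-reg-cred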
2.11 certificate management
//get certificate signing requests
kubectl get csr
//base64-encode the csr and remove newlines
cat john.csr | base64 | tr -d "\n"
//approve a certificate signing request and create the signed certificate
kubectl certificate approve mycsrname
//deny a certificate signing request
kubectl certificate deny mycsrname
//delete a certificate signing request
kubectl delete csr mycsrname
After the certificate is signed, use the command below to view it; the generated cert is in the status/certificate field, and the content is base64-encoded, so it needs to be base64-decoded.
kubectl get csr mycsrname -o yaml
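To extract and decode the signed certificate in one step (the output file name is illustrative):
kubectl get csr mycsrname -o jsonpath='{.status.certificate}' | base64 --decode > mycert.crt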
Certificate signing request yaml file
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: john
spec:
  groups:
  - system:authenticated
  request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1ZqQ0NBVDRDQVFBd0VURVBNQTBHQTFVRUF3d0dZVzVuWld4aE1JSUJJakFOQmdrcWhraUc5dzBCQVFFRgpBQU9DQVE4QU1JSUJDZ0tDQVFFQTByczhJTHRHdTYxakx2dHhWTTJSVlRWMDNHWlJTWWw0dWluVWo4RElaWjBOCnR2MUZtRVFSd3VoaUZsOFEzcWl0Qm0wMUFSMkNJVXBGd2ZzSjZ4MXF3ckJzVkhZbGlBNVhwRVpZM3ExcGswSDQKM3Z3aGJlK1o2MVNrVHF5SVBYUUwrTWM5T1Nsbm0xb0R2N0NtSkZNMUlMRVI3QTVGZnZKOEdFRjJ6dHBoaUlFMwpub1dtdHNZb3JuT2wzc2lHQ2ZGZzR4Zmd4eW8ybmlneFNVekl1bXNnVm9PM2ttT0x1RVF6cXpkakJ3TFJXbWlECklmMXBMWnoyalVnald4UkhCM1gyWnVVV1d1T09PZnpXM01LaE8ybHEvZi9DdS8wYk83c0x0MCt3U2ZMSU91TFcKcW90blZtRmxMMytqTy82WDNDKzBERHk5aUtwbXJjVDBnWGZLemE1dHJRSURBUUFCb0FBd0RRWUpLb1pJaHZjTgpBUUVMQlFBRGdnRUJBR05WdmVIOGR4ZzNvK21VeVRkbmFjVmQ1N24zSkExdnZEU1JWREkyQTZ1eXN3ZFp1L1BVCkkwZXpZWFV0RVNnSk1IRmQycVVNMjNuNVJsSXJ3R0xuUXFISUh5VStWWHhsdnZsRnpNOVpEWllSTmU3QlJvYXgKQVlEdUI5STZXT3FYbkFvczFqRmxNUG5NbFpqdU5kSGxpT1BjTU1oNndLaTZzZFhpVStHYTJ2RUVLY01jSVUyRgpvU2djUWdMYTk0aEpacGk3ZnNMdm1OQUxoT045UHdNMGM1dVJVejV4T0dGMUtCbWRSeEgvbUNOS2JKYjFRQm1HCkkwYitEUEdaTktXTU0xMzhIQXdoV0tkNjVoVHdYOWl4V3ZHMkh4TG1WQzg0L1BHT0tWQW9FNkpsYWFHdTlQVmkKdjlOSjVaZlZrcXdCd0hKbzZXdk9xVlA3SVFjZmg3d0drWm89Ci0tLS0tRU5EIENFUlRJRklDQVRFIFJFUVVFU1QtLS0tLQo=
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
2.12 volumes and persistent storage
hostPath volume yaml sample:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /data
      # this field is optional
      type: Directory
PersistentVolume yaml sample
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
  - hard
  - nfsvers=4.1
  nfs:
    path: /tmp
    server: 172.17.0.2
PersistentVolumeClaim yaml sample
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 8Gi
  storageClassName: slow
  selector:
    matchLabels:
      release: "stable"
    matchExpressions:
    - {key: environment, operator: In, values: [dev]}
Pod yaml sample using the claim as a volume:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: myfrontend
    image: nginx
    volumeMounts:
    - mountPath: "/var/www/html"
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
StorageClass yaml file sample
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
allowVolumeExpansion: true
mountOptions:
- debug
volumeBindingMode: Immediate
3. Scheduler
3.1 nodeName
The basic scheduler uses nodeName to select the node with a matching name and runs the pod on it.
3.2 taints and tolerations
A taint allows a node to define a rule to expel all the pods it does not want, so only pods with a matching toleration can be hosted; in this case, the node is the rule decider. A taint has three effects:
NoSchedule
PreferNoSchedule
NoExecute
//set a taint on a node, which means no pod can be scheduled onto node1 unless it has a matching toleration
kubectl taint nodes node1 app=blue:NoSchedule
A pod with a matching toleration can be hosted on a node with the taint, as in the sketch below.
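A minimal sketch of a pod tolerating the app=blue:NoSchedule taint set above (pod and image names are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: mytolerantpod
spec:
  containers:
  - name: nginx
    image: nginx
  tolerations:
  - key: "app"
    operator: "Equal"
    value: "blue"
    effect: "NoSchedule"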
3.3 node selection and affinity
Node selection and affinity let a pod decide which nodes can host it, ruling out all other nodes it does not want. In this case, the pod is the decision maker.
3.3.1 a pod creation yaml file can contain a nodeSelector element, which indicates which nodes can host the pod
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disktype: ssd
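For the selector above to match, the target node must carry the label, for example:
kubectl label nodes mynodename disktype=ssd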
3.3.2 node affinity
Node affinity is configuration that uses more expressive matching rules in the pod definition to decide which nodes can host the pod, as in the sketch below.
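A minimal sketch of a pod requiring the same disktype label via node affinity (names and expressions are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: affinity-pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
  containers:
  - name: nginx
    image: nginx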
3.4 resource request and limit
A pod can specify its resource requests and limits to let the scheduler decide which node can host the pod.
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: app
    image: images.my-company.example/app:v4
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
  - name: log-aggregator
    image: images.my-company.example/log-aggregator:v6
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
3.5 node maintenance
//move the current pods on the specified node to other nodes, and mark the node as unschedulable
kubectl drain nodename --ignore-daemonsets --force
//mark the specified node as unschedulable
kubectl cordon mynode
//mark a node as schedulable again
kubectl uncordon mynodename
3.6 backup cluster data
//save an etcd snapshot
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key snapshot save /opt/snapshot-pre-boot.db
//restore to a new data folder
ETCDCTL_API=3 etcdctl snapshot restore restorefilepath --data-dir=newdatapath
//edit the etcd manifest file to point the data directory to the new data folder
vim /etc/kubernetes/manifests/etcd.yaml
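The exact fields vary by version, but in a typical kubeadm etcd manifest both the --data-dir flag and the etcd-data hostPath volume need to point at the new folder; a hedged sketch of the relevant lines (newdatapath is the restore target used above):
    - --data-dir=newdatapath
  volumes:
  - hostPath:
      path: newdatapath
      type: DirectoryOrCreate
    name: etcd-data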
4. Node
In Kubernetes, a node is a VM or a physical computer hosting pods. Kubernetes has two types of nodes: master and worker. A node can have multiple pods, and each pod can have multiple containers.
kubectl get nodes
Kubernetes scheduler decides which node will host a particular pod.
Label the nodes for pod selection
kubectl label nodes mynodename labelkey=labelvalue
To execute a bash command on a node in the cluster, use ssh.
To execute a bash command in a container in a pod, use kubectl exec -it podname -- sh
Master:
The master node manages the kubernetes cluster. It contains the kube-apiserver component to communicate with the other management components and all worker nodes.
The master node contains services that allow pods to communicate with each other and with the outside world; it also contains controllers to manage and upgrade the deployment of pods.
Worker:
Worker nodes contain one or more pods, and a pod contains one or more (docker) containers. A pod has a unique ip address and mounted storage shared by all containers within it.
5. Network configuration:
Cluster IP
A ClusterIP service groups pods with the same function and exposes them with a single ip address (the cluster IP) and port number. Requests sent to the cluster ip and port are redirected to one of the matched pods on the targetPort.
The output of
kubectl describe service myclusteripservice
includes the Endpoints field, which lists the matched pods' ip addresses and ports.
//create a service of ClusterIP to expose the pods on a particular port
kubectl expose pod myredis --port=6379 --name=redis-service --dry-run=client -o yaml
A cluster IP is used for internal communication between function providers and function consumers.
yaml file to create a ClusterIP service:
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: ClusterIP
  ports:
  - targetPort: 80
    port: 81
  selector:
    app: mybackendapp
NodePort
A NodePort service listens on a particular port on each worker node and forwards requests received on that port to one of the matched pods. The client sends requests to a node's ip address and the node port.
The NodePort definition also defines the listening port on the matched pod using the targetPort field.
yaml file to create service:
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  type: NodePort
  ports:
  - targetPort: 80   # listening port on the matched pods for handling the forwarded requests
    port: 88         # service port number to be connected by pods or other services
    nodePort: 30008  # listening port on the node for accepting external requests
  selector:
    app: myapp       # label for matching pods
yaml file for creating matched pod:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    app: myapp   # matching label for the service selector
  name: mytestpod
spec:
  containers:
  - image: nginx
    name: mytestpod
A client can send requests from anywhere to a node's external ip address and NodePort (30008) in order to reach the pod's listening port 80.
External load balancer
An external load balancer used with kubernetes can expose services externally on top of NodePort; client requests are distributed across the nodes' NodePort ports to spread the load.
Ingress (internal load balancer)
An ingress within kubernetes receives external requests and distributes them to different services' cluster IPs based on ingress rules. It is preferred over an external load balancer.
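A minimal sketch of an ingress routing requests to the backend service defined above (host name and path are illustrative):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myingress
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: backend
            port:
              number: 81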