

Kubernetes

Kubernetes Command line control

# check status
kubectl cluster-info
 
Kubernetes control plane is running at https://1E8A67830070D01D369595AAD4DAB03D.gr7.eu-central-1.eks.amazonaws.com
CoreDNS is running at https://1E8A67830070D01D369595AAD4DAB03D.gr7.eu-central-1.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
 
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
 
 
# show the system:node ClusterRoleBinding (ClusterRoleBindings are cluster-scoped, not namespaced)
kubectl get clusterrolebindings system:node -o json
 
{
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "ClusterRoleBinding",
    "metadata": {
        "annotations": {
            "rbac.authorization.kubernetes.io/autoupdate": "true"
        },
        "creationTimestamp": "2023-11-19T11:19:05Z",
        "labels": {
            "kubernetes.io/bootstrapping": "rbac-defaults"
        },
        "name": "system:node",
        "resourceVersion": "141",
        "uid": "e9d7ef15-9313-4ec0-9676-521fd79073c3"
    },
    "roleRef": {
        "apiGroup": "rbac.authorization.k8s.io",
        "kind": "ClusterRole",
        "name": "system:node"
    }
}
 
 
 
# kubeconfig is the config file which makes the cluster accessible to kubectl
# generates a "kubeconfig" in ~/.kube/config
aws eks update-kubeconfig --name alf-dev-eks-auth0-eks --alias alf-dev-eks-auth0-eks
 
 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyY0F3SUJBZ0lJYmFmWWVvbHRNWDh3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUSFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TXpFeE1Ua3hNVEV6TXpCYUZ3MHpNekV4TVRZeE1URTRNekJhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUUNldCtjeGlLNWFPV21mUit1ZUZGRVZId3JEbEdaRU5DcjBGaEx2QzBmZFRWTnJvYS9HWWRsdGIxNEMKOVF0OWErbk5pTytsWDRSUWVUMks1ZlZVUkdHaVQwV0RvR05DdFp5QkhlTXdOeThWakhtOWV4ckNnTnJ1QTVERgpGVGw5OE5vTUdYVGdjV1BKNUk4NGxSU3E4WEVCZzdJYVZNdVVMeGFyczNnb3JoN1IvaWFvdjFLNkRXYmxhODNkCk9qUkpRR28rUXFHVkFENFRieFZPYmszR21JU1ROcFp2bStUQ251MkdFcXI2MUxqWXdHNjVyb3pZbUxyd3lZNncKSU14VFdtSXIzSHlYZjc1eXRmNDkwa04xSG43T1FTNkxFSlZqdjhmMTdHMDByU2xwQTR3QlN1RlhrV3hDbFZ0QwppOUp0RFVRNzJ1QkJCNlFFVUU3T3M4eWNvTGtWQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJSYUM3WnJIZmM3UjJUY0Q5a2FDTDdIMDRsaVNUQVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQVdYM0N1TXJ2QgpjeDRIa1FnNWRPZ0thb3J4OWVheGpPRGl6L3FBT2tIOStGaUU5RnJaTUpzQmYvYzUwZ3JJMzlSNDVmQWpvSm5SCml4UTVTOEs1dmRqdUlOQ3J1L0lMcVkyY09pZG56VWowMmtlME43QXpFWDlUaXNwUkpPdXRHZlY3UjNZTUFucEMKd0ttZTJyMGpjemNvcXlWK1NxTmNyZFcvNzJqVGxuQkc2YlRPWmVZKzJWZFZkeU1VQm9JaWhPUUJQb2lia0x3awovUlZFcUxLdHRpZTVqcFRTaFlmSzdhTnB5UUprQTFxbWxqWk5nZlVkcjVmUitReHlqc2h0MG9DaTFUODM2eTRGCnFjZld0L2xjY2hjMXU2ZmJzZWlvemh1OW5ndkdNL3FVWXFVNkFVZ1daVjIzYVNzQjhqSlJna09hUlBMUXZ4anUKbjdMMkdnekUyTWkwCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://1E8A67830070D01D369595NMD4DAB03D.gr7.eu-central-1.eks.amazonaws.com
  name: arn:aws:eks:eu-central-1:123456789012:cluster/alf-dev-eks-auth0-eks
contexts:
- context:
    cluster: arn:aws:eks:eu-central-1:123456789012:cluster/alf-dev-eks-auth0-eks
    user: arn:aws:eks:eu-central-1:123456789012:cluster/alf-dev-eks-auth0-eks
  name: arn:aws:eks:eu-central-1:123456789012:cluster/alf-dev-eks-auth0-eks
current-context: arn:aws:eks:eu-central-1:123456789012:cluster/alf-dev-eks-auth0-eks
kind: Config
preferences: {}
users:
- name: arn:aws:eks:eu-central-1:123456789012:cluster/alf-dev-eks-auth0-eks
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - --region
      - eu-central-1
      - eks
      - get-token
      - --cluster-name
      - alf-dev-eks-auth0-eks
      - --output
      - json
      command: aws
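After generating the kubeconfig you can verify and switch contexts with the standard kubectl config commands; a short example (the context name is the cluster ARN written by the command above):

# list the available contexts and show the active one
kubectl config get-contexts
kubectl config current-context
 
# switch to the EKS context created above
kubectl config use-context arn:aws:eks:eu-central-1:123456789012:cluster/alf-dev-eks-auth0-eks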

Helm: What is Helm?

Helm is the package manager for Kubernetes: the standard way to find, share, and use software built for Kubernetes.

https://circleci.com/blog/what-is-helm/

Helm Charts

A Helm chart is a package that contains all the necessary resources to deploy an application to a Kubernetes cluster. This includes YAML configuration files for deployments, services, secrets, and config maps that define the desired state of your application.

Each Helm chart can be versioned and managed independently, making it easy to maintain multiple versions of an application with different configurations.

The whole idea of Helm is to split IaC into infrastructure code (templates) and parameters per environment (values files).

This is how Helm helps apply the Don't-Repeat-Yourself principle: templates are written once and reused with different values per environment.
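A minimal sketch of this split, assuming a chart directory ./mychart and one (hypothetical) values file per environment:

# same templates, different parameters per environment
helm install myapp ./mychart -f values-dev.yaml
helm install myapp ./mychart -f values-prod.yaml
 
# individual parameters can also be overridden on the command line
helm upgrade myapp ./mychart -f values-prod.yaml --set replicaCount=3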

Repository

Public HELM repository: https://artifacthub.io/

Architecture

Vagrant environment

Use the Vagrant environment for the experiments https://github.com/skipidar/Vagrant-Kubernetes

Glossary

On Windows - Don't deploy in Minikube directly

Why not Minikube on Windows: Minikube can only be started from drive C:\. Otherwise it throws an error that it does not recognize the path.

On Windows - Deploy in Linux-guest Vagrant VM - Minikube distribution

OS: Ubuntu. Kubernetes: Minikube.

For experiments, you can start a vagrant environment. See https://akos.ma/blog/vagrant-k3s-and-virtualbox/

Vagrant file

Start minikube with no vm driver, dynamic audit enabled

sudo minikube start --driver=none \
  --apiserver-ips 127.0.0.1 \
  --listen-address 0.0.0.0 \
  --apiserver-name localhost

Deleting minikube

minikube delete

install Minikube

# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|
    # The most common configuration options are documented and commented below.
    # For a complete reference, please see the online documentation at
 
    # Every Vagrant development environment requires a box. You can search for
    config.vm.box = "geerlingguy/ubuntu2004"
 
    PATH_VAGRANT_PROJECT=File.dirname(__FILE__)
 
    config.ssh.insert_key = false
 
    config.vm.provision "file", source: "#{PATH_VAGRANT_PROJECT}\\.aws\\config", destination: "/home/vagrant/.aws/config"
    config.vm.provision "file", source: "#{PATH_VAGRANT_PROJECT}\\.aws\\credentials", destination: "/home/vagrant/.aws/credentials"
     
    config.vm.network "public_network"
 
    # forwarding ports
    config.vm.network "forwarded_port", guest: 3000, host: 3000, protocol: "tcp" # node.js apps
    config.vm.network "forwarded_port", guest: 3000, host: 3000, protocol: "udp" # node.js apps
    config.vm.network "forwarded_port", guest: 4200, host: 4200, protocol: "tcp" # node.js apps
    config.vm.network "forwarded_port", guest: 4200, host: 4200, protocol: "udp" # node.js apps
    config.vm.network "forwarded_port", guest: 5000, host: 5000, protocol: "tcp" # demo auth0 apps
    config.vm.network "forwarded_port", guest: 5000, host: 5000, protocol: "udp" # demo auth0 apps
    config.vm.network "forwarded_port", guest: 8000, host: 8000, protocol: "tcp" # dev dynamo DB
    config.vm.network "forwarded_port", guest: 8000, host: 8000, protocol: "udp" # dev dynamo DB
    config.vm.network "forwarded_port", guest: 8001, host: 8001, protocol: "tcp" # dev dynamo DB
    config.vm.network "forwarded_port", guest: 8001, host: 8001, protocol: "udp" # dev dynamo DB
    config.vm.network "forwarded_port", guest: 8443, host: 8443, protocol: "tcp" # dev dynamo DB
    config.vm.network "forwarded_port", guest: 8443, host: 8443, protocol: "udp" # dev dynamo DB
    config.vm.network "forwarded_port", guest: 389, host: 389, protocol: "tcp" # demo auth0 apps
    config.vm.network "forwarded_port", guest: 389, host: 389, protocol: "udp" # demo auth0 apps
    config.vm.network "forwarded_port", guest: 636, host: 636, protocol: "tcp" # demo auth0 apps
    config.vm.network "forwarded_port", guest: 636, host: 636, protocol: "udp" # demo auth0 apps
    config.vm.network "forwarded_port", guest: 5432, host: 5432 # postgres apps
    config.vm.network "forwarded_port", guest: 8002, host: 8002 # postgres DB admin ui
    config.vm.network "forwarded_port", guest: 8003, host: 8003 # free
    config.vm.network "forwarded_port", guest: 8004, host: 8004 # free
    config.vm.network "forwarded_port", guest: 8005, host: 8005 # free
    config.vm.network "forwarded_port", guest: 8006, host: 8006 # free
    config.vm.network "forwarded_port", guest: 8007, host: 8007 # free
    config.vm.network "forwarded_port", guest: 8008, host: 8008 # free
    config.vm.network "forwarded_port", guest: 8009, host: 8009 # free
    config.vm.network "forwarded_port", guest: 3300, host: 3300, protocol: "tcp" # For ssh tunnel
    config.vm.network "forwarded_port", guest: 3300, host: 3300, protocol: "udp" # For ssh tunnel
    config.vm.network "forwarded_port", guest: 8888, host: 8888 # chronograph
    config.vm.network "forwarded_port", guest: 8092, host: 8092 # telegraph
    config.vm.network "forwarded_port", guest: 8125, host: 8125 # telegraph
    config.vm.network "forwarded_port", guest: 8094, host: 8094 # telegraph
    config.vm.network "forwarded_port", guest: 9092, host: 9092 # kapacitor
    config.vm.network "forwarded_port", guest: 8086, host: 8086 # influxdb
    config.vm.network "forwarded_port", guest: 8080, host: 8080 # typical web apps
    config.vm.network "forwarded_port", guest: 27017, host: 27017 # mongodb
    config.vm.network "forwarded_port", guest: 8081, host: 8081 # mongo-express
 
 
    # make the root disk available
    config.vm.synced_folder "c:/", "//mnt/c/"
 
    config.vm.provider "virtualbox" do |v|
        v.memory = 8112
        v.cpus = 4
    end
 
 
 
    # Use shell script to provision
    config.vm.provision "shell", inline: <<-SHELL
     
 
     
 
# adding custom scripts to the path
chmod -R +x /home/vagrant/shell
cp /home/vagrant/shell/*.sh /usr/local/bin/
 
 
 
export DEBIAN_FRONTEND=noninteractive
echo "export DEBIAN_FRONTEND=noninteractive" >> /home/vagrant/.bashrc
echo "export DEBIAN_FRONTEND=noninteractive" >> /root/.bashrc
 
 
 
#### Configure ssh jump host key ####
chmod 700 /home/vagrant/.ssh/ssh-jumphost.priv.openssh.ppk
 
 
 
#### Configure ssh for proxy usage ####
echo -e "\
Host code.privategitlabrepo.com \n \
    IdentityFile /home/vagrant/.ssh/gitlab.ppk \n \
 \n " > /home/vagrant/.ssh/config
 
 
 
# copy required file to the root home. File provisioner cant do that, permissions
cp -R /home/vagrant/.aws /root/
cp -R /home/vagrant/.ssh /root/
 
 
 
# set the time zone
sudo timedatectl set-timezone Europe/Berlin
 
 
# update
apt-get update -y
 
 
 
 
 
 
##################################################################################################################################
##################################################################################################################################
 
 
 
# brew
apt-get install -y linuxbrew-wrapper
 
# refresh session
su - $USER
 
 
 
 
 
# DOCKER
 
#Install packages to allow apt to use a repository over HTTPS:
sudo apt-get install -y \
apt-transport-https \
ca-certificates \
curl \
software-properties-common
 
 
 
 
 
 
# Add Docker’s official GPG key:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
 
# add the docker stable repository
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
 
# update
apt-get update -y
 
# install docker
apt-get install -y docker-ce
 
 
# configure the proxy for docker
sudo mkdir -p /etc/systemd/system/docker.service.d/
 
 
sudo systemctl daemon-reload
sudo systemctl restart docker
 
 
 
 
 
# DOCKER COMPOSE
 
# Docker-compose
COMPOSE_VERSION="2.12.2"
 
sudo curl -L "https://github.com/docker/compose/releases/download/v${COMPOSE_VERSION}/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo mv /usr/local/bin/docker-compose /usr/bin/docker-compose
sudo chmod +x /usr/bin/docker-compose
 
 
 
 
 
 
# Kubectl
sudo apt-get update -y && sudo apt-get install -y apt-transport-https gnupg2
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update -y
sudo apt-get install -y kubectl
 
 
 
 
# Helm (download the official installer script first - it was missing here)
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
rm ./get_helm.sh
 
 
 
 
 
 
 
# MINIKUBE
 
# Minikube Variables (this is the Minikube release version, not the Kubernetes version)
MINIKUBE_VERSION="1.23.2"
 
 
 
 
 
# Kubernetes >1.20.2 requirements
sudo apt-get install -y conntrack
 
## /usr/sbin/iptables needs to be in path for minikube driver=none
export PATH=$PATH:/usr/sbin/
 
# Install minikube
curl -sLo minikube https://storage.googleapis.com/minikube/releases/v${MINIKUBE_VERSION}/minikube-linux-amd64 2>/dev/null
chmod +x minikube
sudo cp minikube /usr/local/bin && rm minikube
 
# Start minikube with no vm driver, dynamic audit enabled
minikube start --driver=none \
  --apiserver-ips 127.0.0.1 \
  --listen-address 0.0.0.0 \
  --apiserver-name localhost
  # --feature-gates=DynamicAuditing=true \
  # --extra-config=apiserver.audit-dynamic-configuration=true \
  # --extra-config=apiserver.runtime-config=auditregistration.k8s.io/v1alpha1
 
# Assign kubeconfig
sudo cp -R /root/.kube /root/.minikube /home/vagrant/
sudo chown -R vagrant /root/.kube /root/.minikube /root /home/vagrant/.kube
 
 
 
 
 
 
 
 
 
######################################### 2
 
    SHELL
 
end

To start K8s, run:

# Start minikube with no vm driver, dynamic audit enabled
minikube start --driver=none \
  --apiserver-ips 127.0.0.1 \
  --apiserver-name localhost
  # --feature-gates=DynamicAuditing=true \
  # --extra-config=apiserver.audit-dynamic-configuration=true \
  # --extra-config=apiserver.runtime-config=auditregistration.k8s.io/v1alpha1
   

Check whether it is running via

`sudo minikube status`

minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

`sudo kubectl cluster-info`

Kubernetes control plane is running at https://localhost:8443
 
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Deploy K3s distribution

OS: Alpine. Kubernetes: K3s.

For experiments, you can start a vagrant environment. See https://akos.ma/blog/vagrant-k3s-and-virtualbox/

Vagrant file

Starts 3 machines - server, agent1 and agent2 - and installs a small Kubernetes (K3s) cluster on them.

# -*- mode: ruby -*-
# vi: set ft=ruby :
 
server_ip = "192.168.33.10"
 
agents =    {
                "agent1" => "192.168.33.11",
                "agent2" => "192.168.33.12"
            }
 
forward_webport_server = 8091
forward_webport_agents_start = 8092
 
 
server_script = <<-SHELL
    sudo -i
    apk add curl
    export INSTALL_K3S_EXEC="--bind-address=#{server_ip} --node-external-ip=#{server_ip} --flannel-iface=eth1"
    curl -sfL https://get.k3s.io | sh -
    echo "Sleeping for 5 seconds to wait for k3s to start"
    sleep 5
    cp /var/lib/rancher/k3s/server/token /vagrant_shared
    cp /etc/rancher/k3s/k3s.yaml /vagrant_shared
    SHELL
     
agent_script = <<-SHELL
    sudo -i
    apk add curl
    export K3S_TOKEN_FILE=/vagrant_shared/token
    export K3S_URL=https://#{server_ip}:6443
    export INSTALL_K3S_EXEC="--flannel-iface=eth1"
    curl -sfL https://get.k3s.io | sh -
    SHELL
 
Vagrant.configure("2") do |config|
    config.vm.box = "generic/alpine314"
 
    config.ssh.insert_key = false
   
      # make the root disk available
    config.vm.synced_folder "c:/", "//mnt/c/"
    config.vm.synced_folder "d:/", "//mnt/d/"
 
 
    config.vm.define "server", primary: true do |server|
     
        server.vm.post_up_message = "For server forward port #{forward_webport_server}"
        server.vm.network "forwarded_port", guest: 80, host: forward_webport_server # nginx
        forward_webport_server = forward_webport_server+1
     
        server.vm.network "private_network", ip: server_ip
        server.vm.synced_folder "./shared", "/vagrant_shared"
        server.vm.hostname = "server"
        server.vm.provider "virtualbox" do |vb|
          vb.memory = "2048"
          vb.cpus = "2"
        end
        server.vm.provision "shell", inline: server_script
    end
 
 
    agents.each do |agent_name, agent_ip|
        config.vm.define agent_name do |agent|
         
          agent.vm.post_up_message = "For agent forward port #{forward_webport_agents_start}"
          agent.vm.network "forwarded_port", guest: 80, host: forward_webport_agents_start
          forward_webport_agents_start = forward_webport_agents_start+1
         
          agent.vm.network "private_network", ip: agent_ip
          agent.vm.synced_folder "./shared", "/vagrant_shared"
          agent.vm.hostname = agent_name
          agent.vm.provider "virtualbox" do |vb|
            vb.memory = "2048"
            vb.cpus = "2"
          end
          agent.vm.provision "shell", inline: agent_script
        end
                 
    end
end

Validate Server

Login to the server

vagrant ssh server

Check if K3s is running

netstat -nlp|grep 6443

server:~$ sudo netstat -nlp|grep 6443
tcp        0      0 192.168.33.10:6443      0.0.0.0:*               LISTEN      2999/k3s server

Check which process is listening

ps -a | grep 2999

server:~$ ps -a | grep 2999
 2999 root      4:09 {k3s-server} /usr/local/bin/k3s server
 6784 vagrant   0:00 grep 2999

List the Kubernetes nodes to see that the agent is now also successfully registered.

server:~$ sudo k3s kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
agent1   Ready    <none>                 11h   v1.24.6+k3s1
server   Ready    control-plane,master   11h   v1.24.6+k3s1

Add Agent1 to cluster

Log in to the agent

vagrant ssh agent1

Check if K3s is running

agent1:~$ sudo netstat -nlp | grep k3
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      2932/k3s agent
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      2932/k3s agent
tcp        0      0 127.0.0.1:6444          0.0.0.0:*               LISTEN      2932/k3s agent
tcp        0      0 127.0.0.1:10256         0.0.0.0:*               LISTEN      2932/k3s agent
tcp        0      0 :::10250                :::*                    LISTEN      2932/k3s agent
unix  2      [ ACC ]     STREAM     LISTENING      12745 2932/k3s agent      /var/lib/kubelet/pod-resources/1725829875
unix  2      [ ACC ]     STREAM     LISTENING      12805 2932/k3s agent      /var/lib/kubelet/device-plugins/kubelet.sock
unix  2      [ ACC ]     STREAM     LISTENING      13608 3007/containerd     /run/k3s/containerd/containerd.sock.ttrpc
unix  2      [ ACC ]     STREAM     LISTENING      12712 3007/containerd     /run/k3s/containerd/containerd.sock

Commands and usage

Cluster info

Overview of the control plane and the cluster service URLs.

sudo kubectl cluster-info
 
Kubernetes control plane is running at https://192.168.33.10:6443
CoreDNS is running at https://192.168.33.10:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://192.168.33.10:6443/api/v1/namespaces/kube-system/services/https:metrics-server:https/proxy

K3s addons in kube-system

kube-system is the namespace for objects created by the Kubernetes system.

kube-system contains service accounts which are used to run the kubernetes controllers. These service accounts are granted significant permissions (create pods anywhere, for instance).

sudo kubectl get deploy --namespace=kube-system
 
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
local-path-provisioner   0/1     1            0           8d
metrics-server           0/1     1            0           8d
coredns                  0/1     1            0           8d
traefik                  1/1     1            1           8d

local-path-provisioner - K3s installs itself with the "Local Path Provisioner", a simple controller whose job is to create local volumes on each K3s node. If you only have one node, or you just want something simple to start learning with, local-path is ideal, since it requires no further setup.

metrics-server - the cluster resource-metrics API; it collects CPU and memory usage of nodes and pods (used by "kubectl top" and autoscaling).

coredns - the cluster DNS server; it resolves Service and Pod names inside the cluster.

traefik - a reverse proxy / ingress controller, comparable to nginx.
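Once the metrics-server deployment is ready, its resource metrics can be queried with the standard kubectl top commands, for example:

# CPU/memory usage per node and per pod, served by metrics-server
sudo kubectl top nodes
sudo kubectl top pods --all-namespaces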

Cluster configs

View the current default kubectl config, stored in ~/.kube/config

sudo kubectl config view
 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.33.10:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

List pods

sudo kubectl get pods
 
NAME        READY   STATUS    RESTARTS   AGE
webserver   1/1     Running   0          61s

Listing all pods in all namespaces

sudo kubectl get pods -A
 
NAMESPACE     NAME                                      READY   STATUS      RESTARTS     AGE
kube-system   helm-install-traefik-crd-488j5            0/1     Completed   0            2d12h
kube-system   helm-install-traefik-htfhx                0/1     Completed   1            2d12h
kube-system   svclb-traefik-ae4a9542-fvh6n              2/2     Running     2 (2d ago)   2d12h
kube-system   traefik-7cd4fcff68-bhxqp                  1/1     Running     1 (2d ago)   2d12h
kube-system   coredns-b96499967-rsc9q                   1/1     Running     1 (2d ago)   2d12h
kube-system   svclb-traefik-ae4a9542-vkbjw              2/2     Running     2 (2d ago)   2d12h
kube-system   local-path-provisioner-7b7dc8d6f5-2nzqf   1/1     Running     2 (2d ago)   2d12h
kube-system   metrics-server-668d979685-d5g48           1/1     Running     1 (2d ago)   2d12h
default       webserver-647c579b69-djl56                1/1     Running     0            2d
kube-system   svclb-traefik-ae4a9542-zz2k7              2/2     Running     2 (2d ago)   2d12h
default       webserver-647c579b69-qzpgq                1/1     Running     0            2d
default       webserver-647c579b69-tlvlc                1/1     Running     0            2d
default       curl-service-fqdn-pqlv9                   0/1     Error       0            6m36s
default       curl-service-fqdn-tqqlb                   0/1     Error       0            6m31s
default       curl-service-fqdn-wkvkl                   0/1     Error       0            6m26s
default       curl-service-fqdn-s5mc6                   0/1     Error       0            6m21s
default       curl-service-fqdn-5wp52                   0/1     Error       0            6m16s
default       curl-service-fqdn-x8bnb                   0/1     Error       0            6m11s
default       curl-service-fqdn-bjbrc                   0/1     Error       0            6m6s

List pods with detailed status

sudo kubectl get pod webserver -o yaml
 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2022-10-10T05:30:30Z"
  labels:
    app: webserver
  name: webserver
  namespace: default
  resourceVersion: "12295"
  uid: 1b773bbf-e1c4-4d5f-9b59-4d961cd94d12
spec:
  containers:
  - image: nginx:1.20.1
    imagePullPolicy: IfNotPresent
    name: webserver
    ports:
    - containerPort: 80
      protocol: TCP
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-nppmb
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: agent1
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: kube-api-access-nppmb
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2022-10-10T05:30:30Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2022-10-10T05:30:43Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2022-10-10T05:30:43Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2022-10-10T05:30:30Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: containerd://0e5f2f9084ca5ce2b07f95cb94385f48acc9da317a12b990abb31415b983311e
    image: docker.io/library/nginx:1.20.1
    imageID: docker.io/library/nginx@sha256:a98c2360dcfe44e9987ed09d59421bb654cb6c4abe50a92ec9c912f252461483
    lastState: {}
    name: webserver
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2022-10-10T05:30:43Z"
  hostIP: 192.168.33.11
  phase: Running
  podIP: 10.42.1.4
  podIPs:
  - ip: 10.42.1.4
  qosClass: BestEffort
  startTime: "2022-10-10T05:30:30Z"

Listing all pods together with the nodes they run on and their pod IPs

sudo kubectl get pods -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
webserver-845b6b6d5c-zqt7r   1/1     Running   0          23h   10.42.1.16   agent1   <none>           <none>
webserver-845b6b6d5c-754pg   1/1     Running   0          23h   10.42.0.16   server   <none>           <none>
webserver-845b6b6d5c-p7mlg   1/1     Running   0          23h   10.42.1.17   agent1   <none>           <none>

Pod events, startup errors

Learn why the pod is not healthy:

kubectl -n localnews describe pod news-backend-56c8bcdfdc-xj4xx

...
Volumes:
  kube-api-access-tgm56:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  4m1s                    default-scheduler  Successfully assigned localnews/news-backend-56c8bcdfdc-xj4xx to vagrant
  Normal   Pulling    4m                      kubelet            Pulling image "quay.io/k8snativedev/news-backend"
  Normal   Pulled     3m58s                   kubelet            Successfully pulled image "quay.io/k8snativedev/news-backend" in 1.998601423s
  Normal   Created    3m58s                   kubelet            Created container news-backend
  Normal   Started    3m58s                   kubelet            Started container news-backend
  Warning  Unhealthy  3m57s (x2 over 3m57s)   kubelet            Readiness probe failed: Get "http://172.17.0.5:8080/q/health/ready": dial tcp 172.17.0.5:8080: connect: connection refused
  Warning  Unhealthy  3m31s (x18 over 3m56s)  kubelet            Readiness probe failed: HTTP probe failed with statuscode: 503
  Warning  Unhealthy  3m31s                   kubelet            Readiness probe failed: Get "http://172.17.0.5:8080/q/health/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)

logs inside Pod

If there is an application error in a pod:

sudo kubectl -n localnews logs news-backend-native-644b689b-hfqk4 -p

...
exec ./application: exec format error

Replica Set

Creating a ReplicaSet of 3 Pods: https://github.com/Apress/Kubernetes-Native-Development/blob/main/snippets/chapter1/webserver-rs.yaml
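The linked file is not reproduced here; a minimal sketch of such a ReplicaSet manifest (field values assumed from the pod listings below) looks like this:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: webserver
spec:
  replicas: 3                  # keep 3 identical pods running
  selector:
    matchLabels:
      app: webserver
  template:                    # pod template used to create the replicas
    metadata:
      labels:
        app: webserver
    spec:
      containers:
      - name: webserver
        image: nginx:1.20.1
        ports:
        - containerPort: 80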

sudo kubectl create -f webserver-rs.yaml
replicaset.apps/webserver created
 
sudo kubectl get pods                                                                                                              
 
NAME              READY   STATUS    RESTARTS   AGE
webserver-ag3dd   1/1     Running   0          15s
webserver-wwcxx   1/1     Running   0          15s
webserver-bm8jh   1/1     Running   0          15s
 
sudo kubectl delete -f webserver-rs.yaml

Deployment

Responsibility:

  • Scaling Pods horizontally, via ReplicaSets
  • (Rolling) updating every running instance of an application
  • Rolling back all running instances of an application to another version

https://github.com/Apress/Kubernetes-Native-Development/blob/main/snippets/chapter1/webserver-deployment.yaml

Performing a Deployment named “webserver” of nginx 1.20.1. The Deployment creates a ReplicaSet and maintains it.

kubectl create -f webserver-deployment.yaml
 
sudo kubectl get deployment webserver
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
webserver   3/3     3            3           10s
 
sudo kubectl get rs
NAME                   DESIRED   CURRENT   READY   AGE
webserver-845b6b6d5c   0         0         0       10s

https://github.com/Apress/Kubernetes-Native-Development/blob/main/snippets/chapter1/webserver-deployment-1.21.0.yaml Performing a rolling update of the deployment “webserver” to a new nginx version. Via the deployment name, K8s knows what to upgrade.

At the beginning there are pods from the old ReplicaSet, which have been running for hours.

Then the deployment is updated to version “1.21.0”.

K8s starts a rolling update, starting new containers one by one (see “ContainerCreating”) and removing old containers. After some time, all old containers in the ReplicaSet have been replaced by new ones.

Only new pods are running then.

sudo kubectl get pods
NAME                         READY   STATUS    RESTARTS      AGE
webserver-845b6b6d5c-x564x   1/1     Running   1 (13h ago)   23h
webserver-845b6b6d5c-9gv6m   1/1     Running   1 (13h ago)   23h
webserver-845b6b6d5c-m2tdv   1/1     Running   1 (13h ago)   23h
 
sudo kubectl apply -f webserver-deployment-1.21.0.yaml
 
sudo kubectl get pods
NAME                         READY   STATUS              RESTARTS      AGE
webserver-845b6b6d5c-x564x   1/1     Running             1 (13h ago)   23h
webserver-845b6b6d5c-m2tdv   1/1     Running             1 (13h ago)   23h
webserver-848cf84857-dcdpb   1/1     Running             0             20s
webserver-848cf84857-cfl84   0/1     ContainerCreating   0             6s
 
sudo kubectl get rs
NAME                   DESIRED   CURRENT   READY   AGE
webserver-848cf84857   3         3         3       53s
webserver-845b6b6d5c   0         0         0       23h
  
 
sudo kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
webserver-848cf84857-dcdpb   1/1     Running   0          56s
webserver-848cf84857-cfl84   1/1     Running   0          42s
webserver-848cf84857-6tv2h   1/1     Running   0          28s

Roll back the current “webserver” deployment (nginx 1.21.0) to the previous revision, which contained nginx 1.20.1.

sudo kubectl rollout undo deployment webserver
deployment.apps/webserver rolled back
 
sudo kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
webserver-845b6b6d5c-zqt7r   1/1     Running   0          19s
webserver-845b6b6d5c-754pg   1/1     Running   0          17s
webserver-845b6b6d5c-p7mlg   1/1     Running   0          16s
 
sudo kubectl get pod webserver-845b6b6d5c-zqt7r -o yaml
apiVersion: v1
...
  - containerID: containerd://285b3dabf467dd8c68a0abe86ffe1aaac09fc216de1207c41c56a4589f782d10
    image: docker.io/library/nginx:1.20.1
    imageID: docker.io/library/nginx@sha256:a98c2360dcfe44e9987ed09d59421bb654cb6c4abe50a92ec9c912f252461483
    lastState: {}
    name: webserver
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2022-10-13T06:52:29Z"
  hostIP: 192.168.33.11
  phase: Running
  podIP: 10.42.1.16
  podIPs:
  - ip: 10.42.1.16
  qosClass: BestEffort
  startTime: "2022-10-13T06:52:28Z"

Job

A Job allows you to execute a one-time command within the cluster, using the cluster's private network. A minimal sketch of such a manifest is shown below.

Here it will call /bin/sh -c -- curl -s -f --connect-timeout 5 http://10.42.0.16

The IP of the pod to curl comes from "sudo kubectl get pods -o wide".
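The linked curl-job-podip.yaml is not reproduced here; a minimal sketch of such a Job, assuming curlimages/curl as an example image, could look like this:

apiVersion: batch/v1
kind: Job
metadata:
  name: curl
spec:
  backoffLimit: 3              # retry at most 3 times on failure
  template:
    spec:
      restartPolicy: Never     # a Job's pods must not be restarted in place
      containers:
      - name: curl
        image: curlimages/curl  # assumed example image providing curl
        command: ["/bin/sh", "-c", "--"]
        args: ["curl -s -f --connect-timeout 5 http://10.42.0.16"]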

sudo kubectl create -f curl-job-podip.yaml
job.batch/curl created
 
sudo kubectl get pods -o wide
NAME                         READY   STATUS      RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
webserver-845b6b6d5c-zqt7r   1/1     Running     0          23h   10.42.1.16   agent1   <none>           <none>
webserver-845b6b6d5c-754pg   1/1     Running     0          23h   10.42.0.16   server   <none>           <none>
webserver-845b6b6d5c-p7mlg   1/1     Running     0          23h   10.42.1.17   agent1   <none>           <none>
curl-7zgrt                   0/1     Completed   0          46s   10.42.1.18   agent1   <none>           <none>
 
sudo kubectl logs curl-7zgrt
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
 
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
 
<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Service

Responsibility:

  • Abstract access to Pods, which might die; make access independent of Pod IPs.
  • Services are not just for Pods: they can abstract access to databases, external hosts, or even other services.

https://github.com/Apress/Kubernetes-Native-Development/blob/main/snippets/chapter1/curl-job-service.yaml https://github.com/Apress/Kubernetes-Native-Development/blob/main/snippets/chapter1/webserver-svc.yaml

You can curl a service using the service IP: /bin/sh -c -- curl -s -f --connect-timeout 5 http://10.43.21.45

Or you can curl a service using its internal DNS name, i.e. the service name: /bin/sh -c -- curl -s -f --connect-timeout 5 http://webserver

A ClusterIP service is the default Kubernetes service. It gives you a service inside your cluster that other apps inside your cluster can access. There is no external access.
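A minimal sketch of such a ClusterIP Service, roughly what the linked webserver-svc.yaml contains (the selector matches the app=webserver label used above):

apiVersion: v1
kind: Service
metadata:
  name: webserver
spec:
  type: ClusterIP              # the default type; reachable only inside the cluster
  selector:
    app: webserver             # traffic is forwarded to pods with this label
  ports:
  - port: 80                   # service port
    targetPort: 80             # container port
    protocol: TCP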

sudo kubectl apply -f webserver-svc.yaml
service/webserver created
 
sudo kubectl get service
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.43.0.1     <none>        443/TCP   7d11h
webserver    ClusterIP   10.43.21.45   <none>        80/TCP    38s
 
sudo kubectl create -f curl-job-service.yaml
job.batch/curl-service created
 
sudo kubectl get pods
NAME                         READY   STATUS      RESTARTS   AGE
webserver-845b6b6d5c-zqt7r   1/1     Running     0          23h
webserver-845b6b6d5c-754pg   1/1     Running     0          23h
webserver-845b6b6d5c-p7mlg   1/1     Running     0          23h
curl-7zgrt                   0/1     Completed   0          22m
curl-service-kx97d           0/1     Completed   0          42s
 
sudo kubectl logs curl-service-kx97d
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
 
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
 
<p><em>Thank you for using nginx.</em></p>
</body>
</html>

We see that the service has its own IP address, though from a different IP range than the Pods. We could now use this IP address instead of the Pod IPs, and the Service would distribute the traffic to our web server replicas in a round-robin manner.

service > type: NodePort

https://github.com/Apress/Kubernetes-Native-Development/blob/main/snippets/chapter1/webserver-svc-nodeport.yaml

type: NodePort makes the service accessible from public node IPs, on a random port.

A NodePort service is the most primitive way to get external traffic directly to your service. NodePort, as the name implies, opens a specific port on all the Nodes (the VMs), and any traffic that is sent to this port is forwarded to the service.

We know that the app “webserver” is running on node "server". Find out the node IP of "server": 192.168.33.10.

Find out the random port (here 31516) on which the NodePort service is listening.

Then curl 192.168.33.10:31516 and get the nginx welcome page.

sudo kubectl apply -f webserver-svc-nodeport.yaml
service/webserver configured
 
# see that the service with type=NodePort serves app webserver. But on which nodes?
 
sudo kubectl get service -o yaml | grep type
    type: ClusterIP
        {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"webserver","namespace":"default"},"spec":{"ports":[{"port":80,"protocol":"TCP","targetPort":80}],"selector":{"app":"webserver"},"type":"NodePort"}}
    type: NodePort
 
# see that webserver pods are running on nodes "server" and "agent1", because the webserver-* pods match app=webserver
sudo kubectl get pods -o wide
NAME                         READY   STATUS      RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
webserver-845b6b6d5c-zqt7r   1/1     Running     0          24h   10.42.1.16   agent1   <none>           <none>
webserver-845b6b6d5c-754pg   1/1     Running     0          24h   10.42.0.16   server   <none>           <none>
webserver-845b6b6d5c-p7mlg   1/1     Running     0          24h   10.42.1.17   agent1   <none>           <none>
curl-7zgrt                   0/1     Completed   0          62m   10.42.1.18   agent1   <none>           <none>
curl-service-kx97d           0/1     Completed   0          40m   10.42.1.19   agent1   <none>           <none>
 
# I know, that app "webserver" is present on node server. find out the NODE IP of node server. 192.168.33.10
sudo kubectl get nodes -o wide
NAME     STATUS   ROLES                  AGE     VERSION        INTERNAL-IP     EXTERNAL-IP     OS-IMAGE             KERNEL-VERSION    CONTAINER-RUNTIME
agent1   Ready    <none>                 7d11h   v1.24.6+k3s1   192.168.33.11   <none>          Alpine Linux v3.14   5.10.144-0-virt   containerd://1.6.8-k3s1
server   Ready    control-plane,master   7d11h   v1.24.6+k3s1   192.168.33.10   192.168.33.10   Alpine Linux v3.14   5.10.144-0-virt   containerd://1.6.8-k3s1
 
 
sudo kubectl --namespace default get services
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.43.0.1     <none>        443/TCP        7d11h
webserver    NodePort    10.43.21.45   <none>        80:31516/TCP   26m
 
# curl node IP 192.168.33.10 and the random port 31516, on which the NodePort service is running
curl 192.168.33.10:31516
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
 
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
 
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
     

When using Vagrant, you can expose a NodePort running INSIDE the Vagrant guest by forwarding the NodePort (e.g. 31134) from the host machine to the guest machine.

After that you can open the K8s application in the browser - the request is forwarded to the node running the application:

http://127.0.0.1:31134/
http://127.0.0.1.nip.io:31134/

Ingress

https://github.com/Apress/Kubernetes-Native-Development/blob/main/snippets/chapter1/webserver-ingress.yaml

An Ingress can be compared to a reverse proxy that maps requests from a single proxy server URL to internal URLs and is usually used whenever you expose internal hosts to the outside world.

Unlike all the above examples, Ingress is actually NOT a type of Service. Instead, it sits in front of multiple services and acts as a “smart router” or entry point into your cluster.

Ingress is most useful if you want to expose multiple services under the same IP address, and these services all use the same Layer 7 protocol (typically HTTP), because an Ingress routes traffic by host name and path.

See https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0
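A minimal sketch of an Ingress that routes by host name to the webserver Service above (the host name is an assumption; in K3s the bundled Traefik acts as the default ingress controller, so no explicit ingressClassName is set here):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webserver
spec:
  rules:
  - host: webserver.192.168.33.10.nip.io   # example host resolving to the server node
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webserver                 # the ClusterIP service defined earlier
            port:
              number: 80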

Persistent Volumes and claims

Persistent Volume https://github.com/Apress/Kubernetes-Native-Development/blob/main/snippets/chapter1/webserver-pv.yaml

Persistent Volume Claim https://github.com/Apress/Kubernetes-Native-Development/blob/main/snippets/chapter1/webserver-pvc.yaml

Modify the Deployment to mount the “Persistent Volume Claim” into the container “webserver” as a volume: https://github.com/Apress/Kubernetes-Native-Development/blob/main/snippets/chapter1/webserver-pvc-deployment.yaml

# a folder for the persistent volume
mkdir /mnt/data
chmod 777 /mnt/data
 
 
sudo kubectl create -f webserver-pv.yaml
 
persistentvolume/webserver-pv created
 
 
sudo kubectl get PersistentVolume
 
NAME           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
webserver-pv   100Mi      RWO            Retain           Available           manual                  39s
 
 
sudo kubectl create -f webserver-pvc.yaml
 
persistentvolumeclaim/ws-claim created
 
 
sudo kubectl get PersistentVolumeClaim
 
NAME       STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
ws-claim   Pending                                      local-path     103s

Attaching to deployment

sudo kubectl apply -f webserver-pvc-deployment.yaml
 
deployment.apps/webserver configured
 
 
sudo kubectl get Deployment
 
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
webserver   3/3     3            3           11h
 
# ws-claim is attached
sudo kubectl get Deployment -o yaml
apiVersion: v1
items:
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    annotations:
      deployment.kubernetes.io/revision: "2"
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app":"webserver"},"name":"webserver","namespace":"default"},"spec":{"replicas":3,"selector":{"matchLabels":{"app":"webserver"}},"template":{"metadata":{"labels":{"app":"webserver"}},"spec":{"containers":[{"image":"nginx:1.20.1","name":"webserver","ports":[{"containerPort":80}],"volumeMounts":[{"mountPath":"/usr/share/nginx/html","name":"html"}]}],"volumes":[{"name":"html","persistentVolumeClaim":{"claimName":"ws-claim"}}]}}}}
    creationTimestamp: "2022-10-16T18:14:59Z"
    generation: 2
    labels:
      app: webserver
    name: webserver
    namespace: default
    resourceVersion: "2360"
    uid: 748c66e2-c029-4abb-8782-3aef7b937197
  spec:
    progressDeadlineSeconds: 600
    replicas: 3
    revisionHistoryLimit: 10
    selector:
      matchLabels:
        app: webserver
    strategy:
      rollingUpdate:
        maxSurge: 25%
        maxUnavailable: 25%
      type: RollingUpdate
    template:
      metadata:
        creationTimestamp: null
        labels:
          app: webserver
      spec:
        containers:
        - image: nginx:1.20.1
          imagePullPolicy: IfNotPresent
          name: webserver
          ports:
          - containerPort: 80
            protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
          - mountPath: /usr/share/nginx/html
            name: html
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        schedulerName: default-scheduler
        securityContext: {}
        terminationGracePeriodSeconds: 30
        volumes:
        - name: html
          persistentVolumeClaim:
            claimName: ws-claim
  status:
    availableReplicas: 3
    conditions:
    - lastTransitionTime: "2022-10-17T06:10:10Z"
      lastUpdateTime: "2022-10-17T06:10:10Z"
      message: Deployment has minimum availability.
      reason: MinimumReplicasAvailable
      status: "True"
      type: Available
    - lastTransitionTime: "2022-10-16T18:14:59Z"
      lastUpdateTime: "2022-10-17T06:10:40Z"
      message: ReplicaSet "webserver-647c579b69" has successfully progressed.
      reason: NewReplicaSetAvailable
      status: "True"
      type: Progressing
    observedGeneration: 2
    readyReplicas: 3
    replicas: 3
    updatedReplicas: 3
kind: List
metadata:
  resourceVersion: ""
 
 
 
 
sudo kubectl get PersistentVolumeClaim
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
ws-claim   Bound    pvc-175af05c-e918-45b0-941c-babaff7b38fa   100Mi      RWO            local-path     13m

Attaching “ws-claim” to a deployment https://github.com/Apress/Kubernetes-Native-Development/blob/main/snippets/chapter1/webserver-pvc-deployment.yaml

sudo kubectl get deployment
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
webserver   3/3     3            3           35h
 
sudo kubectl get deployment -o yaml
 
...
        - image: nginx:1.20.1
          imagePullPolicy: IfNotPresent
          name: webserver
          ports:
          - containerPort: 80
            protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
          - mountPath: /usr/share/nginx/html
            name: html
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        schedulerName: default-scheduler
        securityContext: {}
        terminationGracePeriodSeconds: 30
        volumes:
        - name: html
          persistentVolumeClaim:
            claimName: ws-claim

Copy a file index.html into a pod

sudo kubectl get pods
 
 
NAME                         READY   STATUS    RESTARTS   AGE
webserver-647c579b69-qzpgq   1/1     Running   0          24h
webserver-647c579b69-tlvlc   1/1     Running   0          24h
webserver-647c579b69-djl56   1/1     Running   0          24h
 
sudo kubectl cp index.html webserver-647c579b69-qzpgq:/usr/share/nginx/

ConfigMap

https://github.com/Apress/Kubernetes-Native-Development/blob/main/snippets/chapter1/webserver-configmap.yaml

https://github.com/Apress/Kubernetes-Native-Development/blob/main/snippets/chapter1/webserver-configmap-deployment.yaml

subPath:

Sometimes, it is useful to share one volume for multiple uses in a single pod. The volumeMounts.subPath property specifies a sub-path inside the referenced volume instead of its root.

https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath
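A minimal sketch of the subPath idea, assuming we only want to replace index.html inside the nginx html directory (the names are illustrative, not necessarily those of the linked files):

apiVersion: v1
kind: ConfigMap
metadata:
  name: webserver-html
data:
  index.html: |
    <html><body><h1>Hello from a ConfigMap</h1></body></html>
---
# relevant fragment of the Deployment's pod template
spec:
  containers:
  - name: webserver
    image: nginx:1.20.1
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/index.html
      subPath: index.html        # mount only this key, leaving the rest of the directory intact
  volumes:
  - name: html
    configMap:
      name: webserver-html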

Namespaces

sudo kubectl create ns dev
namespace/dev created
 
 
sudo kubectl get ns
NAME              STATUS   AGE
default           Active   2d12h
kube-system       Active   2d12h
kube-public       Active   2d12h
kube-node-lease   Active   2d12h
dev               Active   12s
 
 
sudo kubectl apply -f curl-job-service-fqdn.yaml -n dev
job.batch/curl-service-fqdn created
 
 
sudo kubectl get job -n dev
NAME                COMPLETIONS   DURATION   AGE
curl-service-fqdn   0/1           34s        34s
 
sudo kubectl describe job curl-service-fqdn -n dev
...
Events:
  Type     Reason                Age   From            Message
  ----     ------                ----  ----            -------
  Normal   SuccessfulCreate      92s   job-controller  Created pod: curl-service-fqdn-2gp48
  Normal   SuccessfulCreate      88s   job-controller  Created pod: curl-service-fqdn-qmfmg
  Normal   SuccessfulCreate      82s   job-controller  Created pod: curl-service-fqdn-zmnm6
  Normal   SuccessfulCreate      78s   job-controller  Created pod: curl-service-fqdn-w88j7
  Normal   SuccessfulCreate      73s   job-controller  Created pod: curl-service-fqdn-d45c4
  Normal   SuccessfulCreate      68s   job-controller  Created pod: curl-service-fqdn-zcz2n
  Normal   SuccessfulCreate      63s   job-controller  Created pod: curl-service-fqdn-5cvft
  Warning  BackoffLimitExceeded  58s   job-controller  Job has reached the specified backoff limit
   
sudo kubectl logs curl-service-fqdn-2gp48 -n dev
 
 
sudo kubectl delete -f curl-job-service-fqdn.yaml -n dev
job.batch "curl-service-fqdn" deleted

NetworkPolicies

NetworkPolicies – A resource to define firewall rules on the network layer for the communication between Pods. We can, for example, define rules based on different protocols, ports, Namespaces, and Pods.
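A minimal sketch of such a rule, allowing only pods with an assumed label role=curl-client to reach the webserver pods on port 80:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-curl-to-webserver
spec:
  podSelector:
    matchLabels:
      app: webserver            # the policy applies to the webserver pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: curl-client     # only pods carrying this label may connect
    ports:
    - protocol: TCP
      port: 80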

DaemonSet

DaemonSet – A set of Pods rolled out on each Kubernetes node. This makes sure that this Pod will also be placed on newly provisioned Kubernetes nodes. We discussed it briefly in line with the Kube-Proxy.
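A minimal sketch of a DaemonSet that runs one agent-style pod per node (busybox is just a placeholder image):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
      - name: node-agent
        image: busybox          # placeholder; a real agent would be a log/metrics collector
        command: ["/bin/sh", "-c", "while true; do date; sleep 60; done"]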

CronJobs

CronJobs – Are similar to Jobs but are scheduled via a cron expression, for example, if we want to start a batch job every Sunday at 9 p.m.
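A minimal sketch of a CronJob for the example mentioned above (every Sunday at 9 p.m.; the job name and image are placeholders):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: weekly-batch
spec:
  schedule: "0 21 * * 0"        # Sundays at 21:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: batch
            image: busybox
            command: ["/bin/sh", "-c", "echo running weekly batch job"]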

StatefulSets

StatefulSets – A resource similar to ReplicaSets; however, this is used for stateful applications that need, for example, a static Pod name or an individual data volume per Pod instance.
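A minimal sketch of a StatefulSet with stable names and an individual volume per Pod (the headless service "db" is assumed to exist; local-path is the K3s default storage class):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db               # headless service giving the pods stable DNS names (db-0, db-1, ...)
  replicas: 2
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: postgres:12
        env:
        - name: POSTGRES_PASSWORD
          value: example        # placeholder credential for the sketch
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:         # each pod gets its own PVC (data-db-0, data-db-1, ...)
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: local-path
      resources:
        requests:
          storage: 1Gi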

downward-api

The Downward API lets a Pod READ settings about itself from Kubernetes,

by embedding certain fields (e.g. the node IP or the namespace) into environment variables or volume files, which are populated at runtime.

https://kubernetes.io/docs/concepts/workloads/pods/downward-api/

spec:
  containers:
  - image: quay.io/k8snativedev/news-frontend
    name: news-frontend
    ports:
    - containerPort: 80
    env:
    - name: NODE_PORT
      value: "30000"
    - name: NODE_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP

REST-api

Expose the K8s API for curl

sudo kubectl proxy --port=8888 --address='0.0.0.0'

curl http://localhost:8888/apis/apps/v1/namespaces/localnews/deployments/
 
{
  "kind": "DeploymentList",
  "apiVersion": "apps/v1",
  "metadata": {
    "resourceVersion": "6036"
  },
  "items": []
}

Custom Resources

To define your own resource types, use “Custom Resources”.

Define the custom resource type via a schema in a CustomResourceDefinition (CRD):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: feedanalyses.kubdev.apress.com
spec:
  group: kubdev.apress.com
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                urls:
                  type: array
                  items:
                    type: string
  scope: Namespaced
  names:
    plural: feedanalyses
    singular: feedanalysis
    kind: FeedAnalysis
    shortNames:
    - fa
     

Apply custom resources

apiVersion: kubdev.apress.com/v1
kind: FeedAnalysis
metadata:
  name: my-feed
spec:
  urls:
    - https://www.nytimes.com/svc/collections/v1/publish/https://www.nytimes.com/section/world/rss.xml

Then you can create instances of the custom resource and list them:

kubectl -n localnews create -f snippets/chapter4/crd/my-feed-analysis.yaml

kubectl get feedanalyses -n localnews

Operators

An Operator runs and manages your K8s application, e.g. installing it, monitoring it, and scaling it up and down.

There are:

- Helm-based operators
- Ansible-based operators, which generate Ansible playbooks and apply them
- Go-based operators

Maturity of operators

Yaml syntax

The precise format of the object spec is different for every Kubernetes object, and contains nested fields specific to that object.

The Kubernetes API Reference (https://kubernetes.io/docs/reference/kubernetes-api/) can help you find the spec format for all of the objects you can create using Kubernetes.

E.g.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: postgis
  name: postgis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgis
  template:
    metadata:
      labels:
        app: postgis
    spec:
      containers:
        - env:
            - name: PGDATA
              value: /tmp/pgdata
            - name: POSTGRESQL_DATABASE
              value: news
            - name: POSTGRES_DB
              value: news
            - name: POSTGRES_PASSWORD
              value: banane
            - name: POSTGRES_USER
              value: postgres
          image: postgis/postgis:12-master
          name: postgis
          ports:
            - containerPort: 5432

Persistent Volume

A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes. It is a resource in the cluster just like a node is a cluster resource.

A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany or ReadWriteMany, see AccessModes).
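A minimal sketch of a PV/PVC pair using a hostPath volume and a manual storage class, similar in spirit to the webserver-pv/pvc files linked earlier (names and sizes are assumptions):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 100Mi
  accessModes:
  - ReadWriteOnce
  storageClassName: manual
  hostPath:
    path: /mnt/data             # directory on the node backing this volume
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: manual      # must match the PV's storage class to bind
  resources:
    requests:
      storage: 100Mi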


Tools

Helm

Install chart from subfolder k8s/helm-chart

cd parent_folder_of_k8s_helm-chart
 
helm install news-backend-dev k8s/helm-chart -n news-backend-dev --set newsbackend.deployment="off"

sudo helm list -n news-backend-dev
sudo helm list --all-namespaces
 
NAME                    NAMESPACE               REVISION        UPDATED                                 STATUS         CHART                    APP VERSION
news-backend-dev        news-backend-dev        1               2023-02-14 08:04:56.911772489 +0100 CET deployed       localnews-helm-1.0.0     1.0.0

sudo minikube service news-frontend -n news-backend-dev
 
|------------------|---------------|-------------|------------------------|
|    NAMESPACE     |     NAME      | TARGET PORT |          URL           |
|------------------|---------------|-------------|------------------------|
| news-backend-dev | news-frontend |          80 | http://10.0.2.15:31111 |
|------------------|---------------|-------------|------------------------|
🎉  Opening service news-backend-dev/news-frontend in default browser...
👉  http://10.0.2.15:31111

The structure of a helm chart

Values example

# Default values for helm-chart.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
 
localnews:
  imagePullPolicy: Always
  minikubeIp: fill-in-minikube-ip
  domain: nip.io
 
feedscraper:
  deployment: "on"
  name: feed-scraper
  replicaCount: 1
  image: quay.io/k8snativedev/feed-scraper
  imageTag: latest
  envVars:
    feeds:
      key: SCRAPER_FEEDS_URL
      value: http://feeds.bbci.co.uk/news/world/rss.xml
    backend:
      key: SCRAPER_FEED_BACKEND_HOST
      value: news-backend

Template using values

{{ if eq .Values.feedscraper.deployment "on" }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.feedscraper.name }}
spec:
  replicas: {{ .Values.feedscraper.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.feedscraper.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.feedscraper.name }}
    spec:
      containers:
        - command:
            - java
            - -jar
            - /usr/local/lib/feed-scraper.jar
          image: {{ .Values.feedscraper.image }}:{{ .Values.feedscraper.imageTag }}
          name: {{ .Values.feedscraper.name }}
          env:
            - name: {{ .Values.feedscraper.envVars.feeds.key }}
              value: {{ .Values.feedscraper.envVars.feeds.value }}
            - name: {{ .Values.feedscraper.envVars.backend.key }}
              value: {{ .Values.feedscraper.envVars.backend.value }}
{{ end }}

Create a new chart

sudo helm create phoniexnap

Odo

A tool for hot-deploying code into a running container: https://odo.dev/

Telepresence

https://www.telepresence.io/docs/latest/quick-start/

The tool intercepts part of the cluster traffic and routes it to a dedicated local container.

Install

# 1. Download the latest binary (~50 MB):
sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/latest/telepresence -o /usr/local/bin/telepresence
 
# 2. Make the binary executable:
sudo chmod a+x /usr/local/bin/telepresence

Set up telepresence

telepresence connect
telepresence list -n location-extractor-dev
cd components/location_extractor
telepresence intercept --port 5000:8081 --namespace locationextractor-dev --env-file location-extractor.env locationextractor

Now start the local python/flask container

docker build -f Dockerfile.dev -t location-extractor:dev .
docker run --name location-extractor -p 5000:5000 -v $(pwd)/src:/app/src/ --env-file location-extractor.env location-extractor:dev
