24 Feb 2023 by Simon Greaves
The following blog series contains my notes from the Udemy training course in preparation for the Certified Kubernetes Administrator (CKA) exam.
Use the code - DEVOPS15 - while registering for the CKA or CKAD exams at Linux Foundation to get a 15% discount.
You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube or one of the online Kubernetes playgrounds.
Node Types
Master Node Components
All nodes require a container runtime engine such as Docker, containerd, or rkt (Rocket). If containers are run on the master node, a container runtime engine must also be installed there.
The kubeadm tool automatically configures these master node components as pods on the master node, but you can also configure them manually.
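On a kubeadm cluster these components run as static pods, whose manifest files live under /etc/kubernetes/manifests (the same path referenced in the kube-apiserver and kube-scheduler sections below). To list them:
ls /etc/kubernetes/manifests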
Worker Nodes
Distributed key-value store.
Runs on port 2379.
Installation steps:
curl -L ${DOWNLOAD_URL}/${ETCD_VER}/etcd-${ETCD_VER}-darwin-amd64.zip -o /tmp/etcd-${ETCD_VER}-darwin-amd64.zip
unzip /tmp/etcd-${ETCD_VER}-darwin-amd64.zip -d /tmp && rm -f /tmp/etcd-${ETCD_VER}-darwin-amd64.zip
mkdir -p /tmp/etcd-download-test && mv /tmp/etcd-${ETCD_VER}-darwin-amd64/* /tmp/etcd-download-test && rm -rf /tmp/etcd-${ETCD_VER}-darwin-amd64
Verify the downloaded binaries:
/tmp/etcd-download-test/etcd --version
/tmp/etcd-download-test/etcdctl version
Running the etcd binary on its own starts the server:
./etcd
ETCD Control
./etcdctl --version
etcdctl version 3.x defaults to API version 2.
Run ./etcdctl on its own to see the available commands and version information.
You can change the API version for a single command using
ETCDCTL_API=3 {command}
or export it as an environment variable for the session using
export ETCDCTL_API=3
Note: in API v2 you ran ./etcdctl --version, whereas in v3 you just run ./etcdctl version on its own.
Set a value with
./etcdctl put key1 value1
Get a value with
./etcdctl get key1
When running the commands, specify the API version together with the certificate files that etcdctl uses to authenticate against the ETCD API server. For example:
kubectl exec etcd-master -n kube-system -- sh -c "ETCDCTL_API=3 etcdctl get / --prefix --keys-only --limit=10 --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt --key /etc/kubernetes/pki/etcd/server.key"
Responsibilities
View options
kubectl get pods -n kube-system
Available options are in the manifest yaml file.
cat /etc/kubernetes/manifests/kube-apiserver.yaml
kube-apiserver service location
cat /etc/systemd/system/kube-apiserver.service
View the running process and available options with
ps -aux | grep kube-apiserver
Composed of many controllers, including
Replication-Controller
Install
wget https://xx
The scheduler decides which pods go on which nodes only; it does not place the pods. That's the job of the kubelet (the kubelet is the captain of the ship).
View the kube-scheduler options
cat /etc/kubernetes/manifests/kube-scheduler.yaml
Process list
ps -aux | grep kube-scheduler
View available nodes
kubectl get nodes
Like a captain on the ship, the kubelet leads all activities on the node, as instructed by the scheduler.
kubeadm does not deploy kubelets; you must install them manually using the wget xxxx command.
View the running process and options with
ps -aux | grep kubelet
As an application's pod IP address can change, K8s uses services to map a link between the service, such as a database, and the current IP address.
e.g.
service: db 10.96.0.12
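As a quick illustration (assuming a service named db exists in the cluster), you can resolve the service name from a throwaway pod:
kubectl run test --image=busybox --rm -it --restart=Never -- nslookup db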
kube-proxy looks for new services and creates rules on each node to forward traffic heading to the service to the IP of the POD where it is running.
Download with wget.
View kube-proxy with
kubectl get pods -n kube-system
or
kubectl get daemonset -n kube-system
The smallest manageable object in K8s.
Usually contains one container.
A POD can contain helper containers alongside the application container.
kubectl get pods
- get a list of PODs and their running state.
All K8s pod definition files contain
apiVersion:
kind:
metadata:
spec:
Example of a file called pod-definition.yml:
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
    type: front-end
spec:
  containers:
  - name: nginx-container
    image: nginx
Definitions of entries
apiVersion:
- API version of the object being created. String value item, e.g. v1 or apps/v1
kind:
- The kind of object being created. String value item, e.g. Pod, Service, ReplicaSet, Deployment
metadata:
- Data about the object. Dictionary value item; indentation (spaces) in front of an entry makes it a child object, so in the example above app: is a child of labels:
spec:
- Specification section of the object; contents vary based on object type. spec: is a dictionary, containers is an array. The - right before the name: entry means it is the 1st item in the list.
To create the pod-definition.yml file listed above, type:
kubectl create -f pod-definition.yml
To create and run the pod from the definition file you can also use
kubectl apply -f pod-definition.yml
To create and run a pod without specifying the yml file, run
kubectl run nginx --image=nginx
To create a pod definition file with the correct format/layout, run
kubectl run redis --image=redis --dry-run=client -o yaml > redis.yaml
To see a list of pods, run
kubectl get pods
To see detailed information, run
kubectl describe pod myapppod
Create a new object. You can use either create or apply:
kubectl apply -f pod.yaml
See which node the pods are running on
kubectl get pods -o wide
To change the configuration of a running pod, run
kubectl apply -f redis.yaml
To delete a running pod and then redeploy based on a YAML, run
kubectl replace --force -f redis.yaml
There are limits on what you can edit on running Pods; only the following specifications can be edited:
spec.containers[*].image
spec.initContainers[*].image
spec.activeDeadlineSeconds
spec.tolerations
If you edit anything else on a running Pod it will fail to save; however, the edit is saved to a temporary file, which you can then use: delete the running Pod and redeploy from that yaml file.
Alternatively you can edit a Deployment (covered below) that contains the Pod definition and redeploy from that, as sketched below.
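A minimal sketch of both approaches (the pod name, deployment name, and temporary file path here are hypothetical; kubectl prints the actual temp file path when the edit fails to save):
kubectl edit pod redis                          # edit a forbidden field; the save fails and a copy is written to a temp file
kubectl delete pod redis                        # remove the running pod
kubectl create -f /tmp/kubectl-edit-xxxx.yaml   # hypothetical temp file path; recreate the pod from it
kubectl edit deployment my-deployment           # alternatively, edits to a deployment's pod template are rolled out automatically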
Replica Sets are the newer technology that replaces Replication Controllers. Fundamentally they fulfil the same purpose.
Replica Sets and Replication Controllers are defined using definition files, similar to the pods earlier. They contain the same core elements: apiVersion:, kind:, metadata:, and spec:.
Under spec: you add a template: child object and then place the pod data that is part of this replica set as a child of the template: line, except for the pod's own apiVersion: and kind:.
Example Replication Controller yml file:
apiVersion: v1
kind: ReplicationController
metadata:
  name: myapp-rc
  labels:
    app: myapp
    type: front-end
spec:
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        type: front-end
    spec:
      containers:
      - name: nginx-container
        image: nginx
The number of replicas to add is defined by the replicas: line. As this is a property of the Replication Controller's spec:, it is indented to align with the other Replication Controller spec: entries.
Example Replication Controller yml file with the number of replicas defined:
apiVersion: v1
kind: ReplicationController
metadata:
  name: myapp-rc
  labels:
    app: myapp
    type: front-end
spec:
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        type: front-end
    spec:
      containers:
      - name: nginx-container
        image: nginx
  replicas: 3
To create, run
kubectl create -f rc-definition.yml
This will create the replication controller, which in turn creates the 3 pods. To view, run
kubectl get replicationcontroller
To see pods created by the replication controller, run
kubectl get pods
Replica Sets have an additional selector: setting. This defines the pods that belong to this replica set. A replica set can manage pods that are already configured/deployed, so you have to specify which pods to select when defining the replica set. The selector must contain matchLabels: with a label to match, such as type:. As you can imagine, matchLabels: matches the labels defined in the pod.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-replicaset
  labels:
    app: myapp
    type: front-end
spec:
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        type: front-end
    spec:
      containers:
      - name: nginx-container
        image: nginx
  replicas: 3
  selector:
    matchLabels:
      type: front-end
To create, run
kubectl create -f replicaset-definition.yml
To see Replica Sets, run
kubectl get replicaset
To edit, run
kubectl edit replicaset replicasetname
If unsure of the correct format of the yml file, use the explain command.
kubectl explain replicaset
ReplicaSet shorthand in commands is rs.
kubectl get rs
To scale the number of replicas in a set, you can update the yml file with the revised number of replicas and run the replace command.
kubectl replace -f replicaset-definition.yml
Alternatively, run the scale command instead.
kubectl scale --replicas=6 -f replicaset-definition.yml
The disadvantage of this is that the definition file still only contains 3 replicas and not the revised 6.
Or you can run it against the running replica set without updating the file manually with
kubectl scale rs new-replica-set --replicas=5
A K8s object that contains the replica sets and PODs.
Controlled using YAML files.
Example
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-replicaset
  labels:
    app: myapp
    type: front-end
spec:
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        type: front-end
    spec:
      containers:
      - name: nginx-container
        image: nginx
  replicas: 3
  selector:
    matchLabels:
      type: front-end
View deployments with
kubectl get deployments
or
kubectl get deploy
To view all the deployments and PODs, run
kubectl get all
Quick ways to create objects using the run command without having to first create YAML files.
https://kubernetes.io/docs/reference/kubectl/conventions/
Create an NGINX Pod
kubectl run nginx --image=nginx
Generate POD Manifest YAML file on the screen (-o yaml). Don't create it (--dry-run). Save it to a file (> nginx.yml).
kubectl run nginx --image=nginx --dry-run=client -o yaml > nginx.yml
Create a deployment
kubectl create deployment --image=nginx nginx
Generate Deployment YAML file (-o yaml). Don't create it (--dry-run).
kubectl create deployment --image=nginx nginx --dry-run=client -o yaml
Generate Deployment YAML file (-o yaml). Don't create it (--dry-run) and save it to a file.
kubectl create deployment --image=nginx nginx --dry-run=client -o yaml > nginx-deployment.yaml
Save it to a file, make necessary changes to the file (for example, adding more replicas) and then create the deployment.
kubectl create -f nginx-deployment.yaml
OR
In k8s version 1.19+, we can specify the --replicas option to create a deployment with 4 replicas.
kubectl create deployment --image=nginx nginx --replicas=4 --dry-run=client -o yaml > nginx-deployment.yaml
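Putting these imperative commands together, a minimal sketch that creates a deployment and then exposes it (webapp is a hypothetical name):
kubectl create deployment webapp --image=nginx --replicas=2
kubectl expose deployment webapp --port=80 --type=NodePort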
Used to connect pods together by service types. Services enable connectivity between the pods.
Enables loose coupling between microservices in the deployment.
Listens to network ports on the K8s node and then forwards requests to Pods.
Has its own IP address, called the cluster IP.
Services use a selector within a service definition file that contains a label from the pod definition file.
Service types
Service Type | NodePort | ClusterIP | LoadBalancer |
---|---|---|---|
Role | Performs port redirects | A common name/IP address shared by services | Takes advantage of native cloud load balancers |
Port | 30000 - 32767 | 30000 - 32767 | 30000 - 32767 |
Definition File mapping to a single pod
In the example below, the NodePort service definition file is looking for pods with the label app: myapp. It is also looking for pods with the label type: front-end.
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  type: NodePort
  ports:
  - targetPort: 80
    port: 80
    nodePort: 30008
  selector:
    app: myapp
    type: front-end
Get services
kubectl get services
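Once the service is created you can reach it on any node's IP at the node port (the node IP below is a placeholder):
curl http://<node-ip>:30008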
Definition File mapping to multiple pods
Looks very similar to the above; the selector app: myapp is looking for all pods with the label app: myapp and it will group them all together in the NodePort service.
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  type: NodePort
  ports:
  - targetPort: 80
    port: 80
    nodePort: 30008
  selector:
    app: myapp
    type: front-end
When the pods are grouped together like this, the service acts like a built-in load balancer and will balance load across the pods (using a random algorithm, with session affinity).
If the pods map across multiple nodes in the cluster, K8s will automatically span the service across all nodes in the cluster.
In this situation, the port 30008 will be made available across any node in the K8s cluster to the same service.
Create a Service named nginx-service of type NodePort to expose pod nginx’s port 80 on port 30080 on the nodes:
kubectl expose pod nginx --type=NodePort --port=80 --name=nginx-service --dry-run=client -o yaml
Note: You use the --port=XX option here as we are telling the pod which port to open, whereas when creating services we are defining a type of port like --node-port=30008 or --target-port=80. If you want to open the service port you can use either --port=80, which will assume a protocol of TCP, or use --protocol=TCP together with --port=80. Alternatively you can shorthand this as --tcp=80:80 or --udp=53:53.
(This will automatically use the pod’s labels as selectors, but you cannot specify the node port. You have to generate a definition file and then add the node port in manually before creating the service with the pod.)
Or
kubectl create service nodeport nginx --tcp=80:80 --node-port=30080 --dry-run=client -o yaml
(This will not use the pods labels as selectors)
Both the above commands have their own challenges: one cannot accept a selector and the other cannot accept a node port. I would recommend going with the kubectl expose command. If you need to specify a node port, generate a definition file using the same command and manually input the NodePort before creating the service, as sketched below.
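A minimal sketch of that recommended workflow (the file name is arbitrary):
kubectl expose pod nginx --type=NodePort --port=80 --name=nginx-service --dry-run=client -o yaml > nginx-service.yaml
# edit nginx-service.yaml and add nodePort: 30080 under the ports entry
kubectl create -f nginx-service.yaml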
You can combine creating a POD and creating a ClusterIP service that exposes it using
kubectl run httpd --image=httpd:alpine --port=80 --expose=true
Definition file
apiVersion: v1
kind: Service
metadata:
  name: back-end
spec:
  type: ClusterIP
  ports:
  - targetPort: 80
    port: 80
  selector:
    app: myapp
    type: back-end
Get Services
kubectl get svc
Create a service named redis-service of type ClusterIP to expose pod redis on port 6379.
kubectl expose pod redis --port=6379 --name redis-service --dry-run=client -o yaml
kubectl create service clusterip redis --tcp=6379:6379 --dry-run=client -o yaml
(This will not use the pod's labels as selectors; instead it will assume the selector app=redis. You cannot pass in selectors as an option, so it does not work very well if your pod has a different label set. Generate the file and modify the selectors before creating the service.)
The LoadBalancer type allows you to take advantage of native cloud load balancers within the K8s system. Only certain cloud platforms are supported, such as GCP, AWS, or Azure.
apiVersion: v1
kind: Service
metadata:
  name: back-end
spec:
  type: LoadBalancer
  ports:
  - targetPort: 80
    port: 80
  selector:
    app: myapp
    type: back-end
The default namespace is called default.
All pods, deployments and services sit within a namespace.
Pods created by K8s for internal purposes sit in the kube-system namespace.
You can create your own namespaces to separate out all the resources; each namespace can have its own policies.
To allow cross-namespace communication you have to append the namespace to the service name.
The default domain name for K8s is cluster.local
The default domain name for services is svc.cluster.local
A dev namespace within the local cluster would therefore be dev.svc.cluster.local.
To address a service in the dev namespace from the default namespace you would connect to it via servicename.dev.svc.cluster.local.
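For example, assuming a hypothetical db-service exists in the dev namespace, a pod in the default namespace could reach it at the fully qualified name:
curl http://db-service.dev.svc.cluster.local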
To create a pod within another namespace you would run
kubectl create -f pod-definition.yaml --namespace=dev
Alternatively you can add namespace: dev to the yaml pod definition file like below.
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  namespace: dev
  labels:
    app: myapp
    type: front-end
spec:
  containers:
  - name: nginx-container
    image: nginx
Create a namespace with a yaml definition file:
apiVersion: v1
kind: Namespace
metadata:
  name: dev
Or run a create command
kubectl create namespace dev
View pods in other namespaces
kubectl get pods --namespace=dev
Alternatively
kubectl get pods -n=dev
Or
kubectl get pods --all-namespaces
Or
kubectl get pods -A
To set the namespace for all future commands in the current context, run
kubectl config set-context $(kubectl config current-context) --namespace=dev
Once complete, you can just run this to see pods in the dev namespace.
kubectl get pods
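To confirm which namespace the current context now points at, one option is:
kubectl config view --minify | grep namespace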
Limit resources in a namespace with a ResourceQuota yaml file.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: dev
spec:
  hard:
    pods: "10"
    requests.cpu: "4"
    requests.memory: 5Gi
    limits.cpu: "10"
    limits.memory: 10Gi
Create with
kubectl create -f compute-quota.yaml
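To check consumption against the quota afterwards (compute-quota being the name from the definition above):
kubectl describe resourcequota compute-quota --namespace=dev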
Step-by-step instructions; how to do something.
Create and update (edit) commands are imperative.
If you are editing images, update the yaml file and use kubectl replace --force -f nginx.yaml rather than editing a live object.
Just declaring the final destination; what to do, not how to do it.
Declarative commands carry out modifications to objects using the apply command.
kubectl apply -f nginx.yaml
Create object with declarative approach
kubectl apply -f nginx.yaml
To create multiple objects at once, run
kubectl apply -f /path/to/config-files
To update running objects, run
kubectl apply -f nginx.yaml
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands
https://kubernetes.io/docs/reference/kubectl/conventions/
Embed the expose option as part of the run command. This will create a pod and map a service to it to make it accessible.
kubectl run httpd --image=httpd:alpine --port=80 --expose=true
There is a 'last applied configuration' JSON document that contains the most recently applied configuration. It is used to compare what has changed over time and which settings should be applied to the live running YAML. It is stored as an annotation in the metadata of the live configuration YAML and is only used when the apply command is run.
When you use the apply command it compares what is in the local yaml file against what is in the K8s live object configuration in K8s memory.
Once you use the apply command, don't switch to using imperative commands, because the configuration in the local yaml file (and the last-applied configuration) won't reflect the changes made by those imperative commands.
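You can view the stored last-applied configuration for an object directly; for example, for the httpd pod shown below:
kubectl apply view-last-applied pod httpd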
Live configuration without using the apply command
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2023-01-12T10:59:19Z"
  labels:
    run: httpd
  name: httpd
  namespace: default
  resourceVersion: "856"
  uid: 6a3c37d8-163e-4f6c-a9bd-44a5af858657
spec:
  containers:
  - image: httpd:alpine
    imagePullPolicy: IfNotPresent
    name: httpd
    ports:
    - containerPort: 8080
      protocol: TCP
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-njlz4
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: controlplane
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
Live configuration when using the apply command to create the object.
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"creationTimestamp":null,"labels":{"run":"httpd"},"name":"httpd","namespace":"default"},"spec":{"containers":[{"image":"httpd:alpine","name":"httpd","ports":[{"containerPort":8080}],"resources":{}}],"dnsPolicy":"ClusterFirst","restartPolicy":"Always"},"status":{}}
  creationTimestamp: "2023-01-12T11:08:35Z"
  labels:
    run: httpd
  name: httpd
  namespace: default
  resourceVersion: "780"
  uid: 44e43f37-7a8e-4f99-a064-7fb4ec05dca2
spec:
  containers:
  - image: httpd:alpine
    imagePullPolicy: IfNotPresent
    name: httpd
    ports:
    - containerPort: 8080
      protocol: TCP
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-jrkbv
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: controlplane
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
Live configuration after the apply command was run to change the image tag to apache.
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"creationTimestamp":null,"labels":{"run":"httpd"},"name":"httpd","namespace":"default"},"spec":{"containers":[{"image":"httpd:apache","name":"httpd","ports":[{"containerPort":8080}],"resources":{}}],"dnsPolicy":"ClusterFirst","restartPolicy":"Always"},"status":{}}
  creationTimestamp: "2023-01-12T11:16:07Z"
  labels:
    run: httpd
  name: httpd
  namespace: default
  resourceVersion: "987"
  uid: ebc7e191-b612-4ec6-b32a-057791cbf8ea
spec:
  containers:
  - image: httpd:apache
    imagePullPolicy: IfNotPresent
    name: httpd
    ports:
    - containerPort: 8080
      protocol: TCP
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-swpn8
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: controlplane
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default