# Prepare environment for installing k8s offline
We need to prepare three things before we are able to install k8s offline:

- A system package repo which provides the required packages (e.g. kubeadm, containerd, crictl, runc, etc.)
- A container image repo which provides the container images (e.g. kube-apiserver, kube-proxy, etc.)
- A helm chart repo which provides the charts to deploy services on the k8s cluster (e.g. ingress-nginx).
## 1. Build a private apt package repo
As our scenario is offline installation, we need to have a private debian package repo.
For more details on how to build a private apt package repo, you can go to docs/debian_server/private_package_repo
### 1.1 Configure all vms in k8s cluster to use the private apt repo as system package repo
Suppose we already have a debian package repo built with aptly. It contains all the basic packages of
debian 11, k8s-main (kubeadm, kubelet, etc.), containerd, and docker. The url of this repo server is deb.casd.local.
To configure the vms to use it, follow the steps below:
```shell
# step1: add the repo to the source list on the target server
# open the config file
sudo vim /etc/apt/sources.list

# comment out the default config, for example:
# deb http://deb.debian.org/debian bullseye main
# deb http://security.debian.org/debian-security bullseye-security main
# deb http://deb.debian.org/debian bullseye-updates main

# add the private repo, suppose the url is deb.casd.local. we don't want to use ssl
deb [trusted=yes] http://deb.casd.local/ bullseye main

# if you want to enable ssl, you need to add the gpg key of the repo to your target server
wget -qO- http://deb.casd.local/casd_gpg_key.asc | sudo tee /etc/apt/trusted.gpg.d/casd_gpg_key.asc
deb https://deb.casd.local/ bullseye main

# step2: update the package cache on the target server
sudo apt update

# step3: install the containerd package to test
sudo apt install containerd.io
```
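When many vms need this change, the entry can be generated by a script instead of edited by hand. A minimal sketch, assuming the example repo host deb.casd.local used throughout this guide:

```shell
# generate the one-line sources.list entry for the private mirror
# (deb.casd.local is this guide's example repo host; adapt to yours)
repo_host=deb.casd.local
entry="deb [trusted=yes] http://${repo_host}/ bullseye main"
echo "${entry}"
```

You can then append it on each target, e.g. `echo "${entry}" | sudo tee /etc/apt/sources.list.d/casd.list`.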
## 2. Container image registry
As kubeadm can't pull images from the internet in an offline environment, we need to build a private image registry. For more details on how to build one, you can go to docs/container/Image_registry/harbor/02.Harbor_installation.md
The official installation guide of harbor can be found here: https://goharbor.io/docs/2.12.0/install-config/
### 2.1 Get the required container images into the image registry
To init a k8s cluster, we must have the required images in the image registry.
The below list is the minimum set of container images that you need to deploy a k8s cluster:
- k8s cluster required images:
    - pause: every Kubernetes Pod has a pause container that holds the network namespace and acts as the parent of all
      other containers in the Pod. The other containers in the Pod share its PID, network, and IPC namespaces.
      It ensures that networking and IPC resources remain stable even if the app containers restart.
    - kube-apiserver
    - kube-controller-manager
    - kube-scheduler
    - kube-proxy
    - etcd: runs etcd (a distributed database that stores the k8s cluster state).
    - coredns
- calico required images:
    - kube-controllers
    - node
    - cni
- ingress-nginx required images:
    - controller
#### 2.1.1 Mirror k8s cluster required images
You can get the complete image list that a k8s cluster (of a specific version) requires with the below command:

```shell
k8s_version=1.31.1
kubeadm config images list --kubernetes-version=$k8s_version

# output example
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/coredns/coredns:v1.11.3
registry.k8s.io/pause:3.10
registry.k8s.io/etcd:3.5.15-0
```
The official Kubernetes image registry is registry.k8s.io (it replaced the old GCR-hosted k8s.gcr.io).
The below script (k8s_img_sync.bash) pulls the k8s images from the official registry and pushes them to the casd registry:

```shell
#!/bin/bash
# before running the script, make sure to adapt the config to your registry
repo_url=reg.casd.local
project_name=k8s_image

images=(
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/coredns/coredns:v1.11.3
registry.k8s.io/pause:3.10
registry.k8s.io/etcd:3.5.15-0
)

for image_name in "${images[@]}" ; do
  docker pull "${image_name}"
  # ${image_name#*/} strips the source registry, keeping e.g. kube-apiserver:v1.31.1
  casd_image_name="${repo_url}/${project_name}/${image_name#*/}"
  docker tag "${image_name}" "${casd_image_name}"
  docker push "${casd_image_name}"
done
```
#### 2.1.2 Mirror the calico required images
The calico network addon handles all the virtual networks of the k8s cluster. The calico project is hosted on GitHub at github.com/projectcalico/calico. You need to choose the version which best fits your k8s cluster.
The calico service needs the below images to run
- docker.io/calico/cni:<version>
- docker.io/calico/node:<version>
- docker.io/calico/kube-controllers:<version>
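The three references only differ in the image name, so the list for a chosen release can be derived with a small loop. A sketch (v3.28.2 is the version used by the mirror script in the appendix; pick the one matching your cluster):

```shell
# derive the three calico image references for a chosen release
calico_version=v3.28.2
for img in cni node kube-controllers; do
  printf 'docker.io/calico/%s:%s\n' "${img}" "${calico_version}"
done
```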
### 2.2 Configure all vms in k8s cluster to use the private image registry
We built the container image registry with harbor. You need to configure the containerd daemon on all the servers in the k8s cluster to pull images from the private registry.
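A minimal sketch of the containerd side, assuming containerd 1.6+ with the `certs.d` layout enabled; reg.casd.local is this guide's example registry name:

```toml
# /etc/containerd/certs.d/reg.casd.local/hosts.toml
server = "https://reg.casd.local"

[host."https://reg.casd.local"]
  capabilities = ["pull", "resolve"]
  # for a lab registry with a self-signed certificate you may need:
  # skip_verify = true
```

This layout requires `config_path = "/etc/containerd/certs.d"` under `[plugins."io.containerd.grpc.v1.cri".registry]` in `/etc/containerd/config.toml`, followed by `sudo systemctl restart containerd`.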
### 2.3 Use private image registry to init k8s cluster
You can find the official doc of kubeadm here.
The complete list of configuration options for kubeadm init can be found here
```shell
kubeadm init \
  --apiserver-advertise-address=192.168.32.128 \
  --image-repository=reg.casd.local/k8s_images \
  --control-plane-endpoint=k8s-master \
  --kubernetes-version=v1.31.1
```

The default value of image-repository in kubeadm is registry.k8s.io. So to use our private image registry, we
need to override the default value.
We recommend using a config.yaml file to encapsulate all the k8s configurations.
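A minimal sketch of such a config.yaml, mirroring the kubeadm init flags used in this guide (check `kubeadm config print init-defaults` for the exact schema of your kubeadm version):

```yaml
# config.yaml, a sketch equivalent to the kubeadm init flags above
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.32.128
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.31.1
controlPlaneEndpoint: k8s-master
imageRepository: reg.casd.local/k8s_images
```

You then run `kubeadm init --config config.yaml` instead of passing the flags one by one.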
### 2.4 Use private image registry to init calico
Below is an example of a yaml file to deploy the calico kube controllers. If you use the private image registry, you need to change
docker.io/calico/kube-controllers:v3.25.1 to reg.casd.local/calico/kube-controllers:v3.25.1. If
authentication is required, you also need to add an imagePullSecrets spec.
```yaml
---
# Source: calico/templates/calico-kube-controllers.yaml
# See https://github.com/projectcalico/kube-controllers
apiVersion: apps/v1
kind: Deployment
metadata:
  name: calico-kube-controllers
  namespace: kube-system
  labels:
    k8s-app: calico-kube-controllers
spec:
  # The controllers can only have a single active instance.
  replicas: 1
  selector:
    matchLabels:
      k8s-app: calico-kube-controllers
  strategy:
    type: Recreate
  template:
    metadata:
      name: calico-kube-controllers
      namespace: kube-system
      labels:
        k8s-app: calico-kube-controllers
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      tolerations:
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
        - key: node-role.kubernetes.io/control-plane
          effect: NoSchedule
      serviceAccountName: calico-kube-controllers
      priorityClassName: system-cluster-critical
      containers:
        - name: calico-kube-controllers
          image: docker.io/calico/kube-controllers:v3.25.1
          imagePullPolicy: IfNotPresent
          env:
            # Choose which controllers to run.
            - name: ENABLED_CONTROLLERS
              value: node
            - name: DATASTORE_TYPE
              value: kubernetes
          livenessProbe:
            exec:
              command:
                - /usr/bin/check-status
                - -l
            periodSeconds: 10
            initialDelaySeconds: 10
            failureThreshold: 6
            timeoutSeconds: 10
          readinessProbe:
            exec:
              command:
                - /usr/bin/check-status
                - -r
            periodSeconds: 10
```
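If you keep the upstream manifest, the registry swap can be scripted instead of edited by hand. A sketch of the substitution, demonstrated on a single line (reg.casd.local is this guide's example registry):

```shell
# rewrite a docker.io/calico reference to the private registry;
# the same sed expression works on the whole manifest file
echo "image: docker.io/calico/kube-controllers:v3.25.1" \
  | sed 's#docker.io/calico#reg.casd.local/calico#g'
```

Applied in place to the full manifest: `sed -i 's#docker.io/calico#reg.casd.local/calico#g' calico.yaml`.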
## Appendix: Download the images as tar files
If you don't have an image registry, you can download the images and package them as tar files.
The below script, save_k8s_images.bash, saves all the images in the list as .tar files.
```shell
#!/bin/bash
# change the output path to the dir where you want the tar files
out_path=.

images=(
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/coredns/coredns:v1.11.3
registry.k8s.io/pause:3.10
registry.k8s.io/etcd:3.5.15-0
)

for image_name in "${images[@]}" ; do
  docker pull "${image_name}"
  # ${image_name##*/} keeps only the final path component, e.g. kube-apiserver:v1.31.1.tar
  tar_name="${out_path}/${image_name##*/}.tar"
  docker save -o "${tar_name}" "${image_name}"
done
```
The images in the .tar files still carry the original registry tag. For example, the api server image is still tagged
registry.k8s.io/kube-apiserver
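After `docker load -i kube-apiserver:v1.31.1.tar` on the offline host, the private-registry name can be derived from the original tag with the same `${image#*/}` expansion used in the mirror scripts. A sketch, assuming this guide's example project reg.casd.local/k8s_image:

```shell
image=registry.k8s.io/kube-apiserver:v1.31.1
# strip the source registry and prepend the private registry and project
private_image="reg.casd.local/k8s_image/${image#*/}"
echo "${private_image}"
```

You can then retag and push with `docker tag "${image}" "${private_image}" && docker push "${private_image}"`.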
You can use the below script to mirror the calico images.
Make sure the image version is compatible with the calico.yaml version, and make sure the calico version you want to install is compatible with your k8s cluster.
```shell
#!/bin/bash
# before running the script, make sure to adapt the config to your registry
repo_url=reg.casd.local
project_name=calico

images=(
docker.io/calico/cni:v3.28.2
docker.io/calico/node:v3.28.2
docker.io/calico/kube-controllers:v3.28.2
)

for image_name in "${images[@]}" ; do
  docker pull "${image_name}"
  # ${image_name##*/} keeps only image:tag, so the result is e.g. reg.casd.local/calico/cni:v3.28.2
  casd_image_name="${repo_url}/${project_name}/${image_name##*/}"
  docker tag "${image_name}" "${casd_image_name}"
  docker push "${casd_image_name}"
done
```
The below script downloads the calico images as tar files:
```shell
#!/bin/bash
# change the output path to the dir where you want the tar files
out_path=.

images=(
docker.io/calico/cni:v3.28.2
docker.io/calico/node:v3.28.2
docker.io/calico/kube-controllers:v3.28.2
)

for image_name in "${images[@]}" ; do
  docker pull "${image_name}"
  tar_name="${out_path}/${image_name##*/}.tar"
  docker save -o "${tar_name}" "${image_name}"
done
```