
Install Multi-Node Kubernetes Cluster

System Setup

Component           Version
OS                  RHEL 7.6
Kubernetes          1.24.0
Container Runtime   Containerd

Server Name    Role      CPU      RAM    Private IP       Internet Facing IP
kube-master1   Master    2 vCPU   2 GB   192.168.50.171   192.168.1.171
kube-worker1   Worker1   2 vCPU   2 GB   192.168.50.175   192.168.1.175
kube-worker2   Worker2   2 vCPU   2 GB   192.168.50.176   192.168.1.176

Initial Configuration

Note: Perform these steps on all Master and Worker servers

  • Login as the root user or a user with root privileges
  • Setup the hosts file with the master and worker node entries
  • # vi /etc/hosts

    192.168.50.171 kube-master1 kube-master1.linuxtechspace.com
    192.168.50.175 kube-worker1 kube-worker1.linuxtechspace.com
    192.168.50.176 kube-worker2 kube-worker2.linuxtechspace.com
  • Disable SWAP
  • # swapoff -a
    # vi /etc/fstab
Comment out the entry containing the keyword "swap"
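    Note: As a non-interactive alternative to editing the file by hand, a minimal sketch (assuming the swap line contains the keyword "swap" and is not already commented):

    # sed -i '/swap/ s/^/#/' /etc/fstab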
  • Add br_netfilter module
  • # vi /etc/modules-load.d/k8s.conf

    br_netfilter

    # modprobe br_netfilter
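    Note: To confirm the module is actually loaded, check with lsmod:

    # lsmod | grep br_netfilter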
  • Setup kernel parameters
  • # vi /etc/sysctl.conf

    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1

    # sysctl -p
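    Note: To verify the kernel parameters took effect, both values should print as 1:

    # sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward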
  • Create a user named "kadmin"
  • # useradd kadmin
  • Grant SUDO privileges to the kadmin user
  • # vi /etc/sudoers

    kadmin ALL=(ALL) NOPASSWD:ALL
  • Disable SELinux and iptables/firewalld (example commands below)
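    Note: A minimal sketch of one common way to do this on RHEL 7, assuming firewalld is the active firewall service (on production systems, open the required Kubernetes ports instead of disabling the firewall outright):

    # setenforce 0
    # sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
    # systemctl stop firewalld
    # systemctl disable firewalld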

Install Container Runtime (containerd)

Note: Perform these steps on all Master and Worker servers

  • Create YUM Repo file
  • # vi /etc/yum.repos.d/docker-ce.repo

    [docker-ce-stable]
    name=Docker CE Stable - $basearch
    baseurl=https://download.docker.com/linux/centos/$releasever/$basearch/stable
    enabled=1
    gpgcheck=1
    gpgkey=https://download.docker.com/linux/centos/gpg

    [centos-extra]
    name=CentOS extra
    baseurl=http://mirror.centos.org/centos/7/extras/x86_64/
    enabled=1
    gpgcheck=0
  • Install containerd.io package
  • # yum -y install containerd.io

    Loaded plugins: product-id, search-disabled-repos, subscription-manager
    This system is not registered with an entitlement server. You can use subscription-manager to register.
    centos-extra                                                 | 2.9 kB  00:00:00
    docker-ce-stable                                             | 3.5 kB  00:00:00
    localrepo                                                    | 3.7 kB  00:00:00
    (1/4): localrepo/group_gz                                    | 144 kB  00:00:00
    (2/4): centos-extra/primary_db                               | 247 kB  00:00:00
    (3/4): docker-ce-stable/7Server/x86_64/updateinfo            |   55 B  00:00:00
    (4/4): docker-ce-stable/7Server/x86_64/primary_db            |  78 kB  00:00:00
    Resolving Dependencies
    --> Running transaction check
    ---> Package containerd.io.x86_64 0:1.6.4-3.1.el7 will be installed
    --> Processing Dependency: container-selinux >= 2:2.74 for package: containerd.io-1.6.4-3.1.el7.x86_64
    --> Running transaction check
    ---> Package container-selinux.noarch 2:2.119.2-1.911c772.el7_8 will be installed
    --> Processing Dependency: policycoreutils-python for package: 2:container-selinux-2.119.2-1.911c772.el7_8.noarch
    --> Running transaction check
    ---> Package policycoreutils-python.x86_64 0:2.5-29.el7 will be installed
    --> Processing Dependency: setools-libs >= 3.3.8-4 for package: policycoreutils-python-2.5-29.el7.x86_64
    --> Processing Dependency: libsemanage-python >= 2.5-14 for package: policycoreutils-python-2.5-29.el7.x86_64
    --> Processing Dependency: audit-libs-python >= 2.1.3-4 for package: policycoreutils-python-2.5-29.el7.x86_64
    --> Processing Dependency: python-IPy for package: policycoreutils-python-2.5-29.el7.x86_64
    --> Processing Dependency: libqpol.so.1(VERS_1.4)(64bit) for package: policycoreutils-python-2.5-29.el7.x86_64
    --> Processing Dependency: libqpol.so.1(VERS_1.2)(64bit) for package: policycoreutils-python-2.5-29.el7.x86_64
    --> Processing Dependency: libcgroup for package: policycoreutils-python-2.5-29.el7.x86_64
    --> Processing Dependency: libapol.so.4(VERS_4.0)(64bit) for package: policycoreutils-python-2.5-29.el7.x86_64
    --> Processing Dependency: checkpolicy for package: policycoreutils-python-2.5-29.el7.x86_64
    --> Processing Dependency: libqpol.so.1()(64bit) for package: policycoreutils-python-2.5-29.el7.x86_64
    --> Processing Dependency: libapol.so.4()(64bit) for package: policycoreutils-python-2.5-29.el7.x86_64
    --> Running transaction check
    ---> Package audit-libs-python.x86_64 0:2.8.4-4.el7 will be installed
    ---> Package checkpolicy.x86_64 0:2.5-8.el7 will be installed
    ---> Package libcgroup.x86_64 0:0.41-20.el7 will be installed
    ---> Package libsemanage-python.x86_64 0:2.5-14.el7 will be installed
    ---> Package python-IPy.noarch 0:0.75-6.el7 will be installed
    ---> Package setools-libs.x86_64 0:3.3.8-4.el7 will be installed
    --> Finished Dependency Resolution

    Dependencies Resolved

    ==============================================================================
     Package                  Arch     Version                    Repository              Size
    ==============================================================================
    Installing:
     containerd.io            x86_64   1.6.4-3.1.el7              docker-ce-stable        33 M
    Installing for dependencies:
     audit-libs-python        x86_64   2.8.4-4.el7                localrepo               76 k
     checkpolicy              x86_64   2.5-8.el7                  localrepo              295 k
     container-selinux        noarch   2:2.119.2-1.911c772.el7_8  centos-extra            40 k
     libcgroup                x86_64   0.41-20.el7                localrepo               66 k
     libsemanage-python       x86_64   2.5-14.el7                 localrepo              113 k
     policycoreutils-python   x86_64   2.5-29.el7                 localrepo              456 k
     python-IPy               noarch   0.75-6.el7                 localrepo               32 k
     setools-libs             x86_64   3.3.8-4.el7                localrepo              620 k

    Transaction Summary
    ==============================================================================
    Install  1 Package (+8 Dependent packages)

    Total download size: 35 M
    Installed size: 130 M
    Downloading packages:
    (1/9): audit-libs-python-2.8.4-4.el7.x86_64.rpm              |  76 kB  00:00:00
    (2/9): checkpolicy-2.5-8.el7.x86_64.rpm                      | 295 kB  00:00:00
    (3/9): libcgroup-0.41-20.el7.x86_64.rpm                      |  66 kB  00:00:00
    (4/9): libsemanage-python-2.5-14.el7.x86_64.rpm              | 113 kB  00:00:00
    (5/9): policycoreutils-python-2.5-29.el7.x86_64.rpm          | 456 kB  00:00:00
    (6/9): setools-libs-3.3.8-4.el7.x86_64.rpm                   | 620 kB  00:00:00
    (7/9): python-IPy-0.75-6.el7.noarch.rpm                      |  32 kB  00:00:00
    (8/9): container-selinux-2.119.2-1.911c772.el7_8.noarch.rpm  |  40 kB  00:00:00
    warning: /var/cache/yum/x86_64/7Server/docker-ce-stable/packages/containerd.io-1.6.4-3.1.el7.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID 621e9f35: NOKEY
    Public key for containerd.io-1.6.4-3.1.el7.x86_64.rpm is not installed
    (9/9): containerd.io-1.6.4-3.1.el7.x86_64.rpm                |  33 MB  00:00:02
    ------------------------------------------------------------------------------
    Total                                            12 MB/s |  35 MB  00:00:02
    Retrieving key from https://download.docker.com/linux/centos/gpg
    Importing GPG key 0x621E9F35:
     Userid     : "Docker Release (CE rpm) <docker@docker.com>"
     Fingerprint: 060a 61c5 1b55 8a7f 742b 77aa c52f eb6b 621e 9f35
     From       : https://download.docker.com/linux/centos/gpg
    Running transaction check
    Running transaction test
    Transaction test succeeded
    Running transaction
      Installing : audit-libs-python-2.8.4-4.el7.x86_64                      1/9
      Installing : setools-libs-3.3.8-4.el7.x86_64                           2/9
      Installing : python-IPy-0.75-6.el7.noarch                              3/9
      Installing : libsemanage-python-2.5-14.el7.x86_64                      4/9
      Installing : checkpolicy-2.5-8.el7.x86_64                              5/9
      Installing : libcgroup-0.41-20.el7.x86_64                              6/9
      Installing : policycoreutils-python-2.5-29.el7.x86_64                  7/9
      Installing : 2:container-selinux-2.119.2-1.911c772.el7_8.noarch        8/9
    setsebool:  SELinux is disabled.
      Installing : containerd.io-1.6.4-3.1.el7.x86_64                        9/9
      Verifying  : libcgroup-0.41-20.el7.x86_64                              1/9
      Verifying  : checkpolicy-2.5-8.el7.x86_64                              2/9
      Verifying  : libsemanage-python-2.5-14.el7.x86_64                      3/9
      Verifying  : policycoreutils-python-2.5-29.el7.x86_64                  4/9
      Verifying  : 2:container-selinux-2.119.2-1.911c772.el7_8.noarch        5/9
      Verifying  : python-IPy-0.75-6.el7.noarch                              6/9
      Verifying  : containerd.io-1.6.4-3.1.el7.x86_64                        7/9
      Verifying  : setools-libs-3.3.8-4.el7.x86_64                           8/9
      Verifying  : audit-libs-python-2.8.4-4.el7.x86_64                      9/9

    Installed:
      containerd.io.x86_64 0:1.6.4-3.1.el7

    Dependency Installed:
      audit-libs-python.x86_64 0:2.8.4-4.el7               checkpolicy.x86_64 0:2.5-8.el7
      container-selinux.noarch 2:2.119.2-1.911c772.el7_8   libcgroup.x86_64 0:0.41-20.el7
      libsemanage-python.x86_64 0:2.5-14.el7               policycoreutils-python.x86_64 0:2.5-29.el7
      python-IPy.noarch 0:0.75-6.el7                       setools-libs.x86_64 0:3.3.8-4.el7

    Complete!
  • Enable container runtime (CRI plugin)
  • The package ships with containerd's "cri" plugin disabled; comment out that line so Kubernetes can use containerd as its runtime.

    # vi /etc/containerd/config.toml

    # disabled_plugins = ["cri"]
  • Restart and Enable containerd service
  • # systemctl restart containerd
    # systemctl enable containerd
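    Note: To confirm the runtime is up, check the service state and query the containerd version over its socket:

    # systemctl is-active containerd
    # ctr version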

Install Kubernetes

Note: Perform these steps on all Master and Worker servers

  • Setup YUM repo file
  • # vi /etc/yum.repos.d/kubernetes.repo

    [kubernetes]
    name=Kubernetes
    baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
    enabled=1
    gpgcheck=1
    gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
    exclude=kubelet kubeadm kubectl

    Note: The exclude line keeps a routine "yum update" from upgrading the Kubernetes packages unexpectedly; the install command below bypasses it with --disableexcludes=kubernetes.
  • Install Kubernetes packages (kubelet, kubeadm, kubectl)
  • # yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

    Loaded plugins: product-id, search-disabled-repos, subscription-manager
    This system is not registered with an entitlement server. You can use subscription-manager to register.
    kubernetes                                                   | 1.4 kB  00:00:00
    kubernetes/x86_64/primary                                    | 108 kB  00:00:00
    kubernetes                                                              797/797
    Resolving Dependencies
    --> Running transaction check
    ---> Package kubeadm.x86_64 0:1.24.0-0 will be installed
    --> Processing Dependency: kubernetes-cni >= 0.8.6 for package: kubeadm-1.24.0-0.x86_64
    --> Processing Dependency: cri-tools >= 1.19.0 for package: kubeadm-1.24.0-0.x86_64
    ---> Package kubectl.x86_64 0:1.24.0-0 will be installed
    ---> Package kubelet.x86_64 0:1.24.0-0 will be installed
    --> Processing Dependency: socat for package: kubelet-1.24.0-0.x86_64
    --> Processing Dependency: conntrack for package: kubelet-1.24.0-0.x86_64
    --> Running transaction check
    ---> Package conntrack-tools.x86_64 0:1.4.4-4.el7 will be installed
    --> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.1)(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
    --> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.0)(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
    --> Processing Dependency: libnetfilter_cthelper.so.0(LIBNETFILTER_CTHELPER_1.0)(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
    --> Processing Dependency: libnetfilter_queue.so.1()(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
    --> Processing Dependency: libnetfilter_cttimeout.so.1()(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
    --> Processing Dependency: libnetfilter_cthelper.so.0()(64bit) for package: conntrack-tools-1.4.4-4.el7.x86_64
    ---> Package cri-tools.x86_64 0:1.23.0-0 will be installed
    ---> Package kubernetes-cni.x86_64 0:0.8.7-0 will be installed
    ---> Package socat.x86_64 0:1.7.3.2-2.el7 will be installed
    --> Running transaction check
    ---> Package libnetfilter_cthelper.x86_64 0:1.0.0-9.el7 will be installed
    ---> Package libnetfilter_cttimeout.x86_64 0:1.0.0-6.el7 will be installed
    ---> Package libnetfilter_queue.x86_64 0:1.0.2-2.el7_2 will be installed
    --> Finished Dependency Resolution

    Dependencies Resolved

    ==============================================================================
     Package                  Arch     Version         Repository        Size
    ==============================================================================
    Installing:
     kubeadm                  x86_64   1.24.0-0        kubernetes       9.5 M
     kubectl                  x86_64   1.24.0-0        kubernetes       9.9 M
     kubelet                  x86_64   1.24.0-0        kubernetes        20 M
    Installing for dependencies:
     conntrack-tools          x86_64   1.4.4-4.el7     localrepo        186 k
     cri-tools                x86_64   1.23.0-0        kubernetes       7.1 M
     kubernetes-cni           x86_64   0.8.7-0         kubernetes        19 M
     libnetfilter_cthelper    x86_64   1.0.0-9.el7     localrepo         18 k
     libnetfilter_cttimeout   x86_64   1.0.0-6.el7     localrepo         18 k
     libnetfilter_queue       x86_64   1.0.2-2.el7_2   localrepo         23 k
     socat                    x86_64   1.7.3.2-2.el7   localrepo        290 k

    Transaction Summary
    ==============================================================================
    Install  3 Packages (+7 Dependent packages)

    Total download size: 66 M
    Installed size: 288 M
    Downloading packages:
    (1/10): conntrack-tools-1.4.4-4.el7.x86_64.rpm               | 186 kB  00:00:00
    warning: /var/cache/yum/x86_64/7Server/kubernetes/packages/4d300a7655f56307d35f127d99dc192b6aa4997f322234e754f16aaa60fd8906-cri-tools-1.23.0-0.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID 3e1ba8d5: NOKEY
    Public key for 4d300a7655f56307d35f127d99dc192b6aa4997f322234e754f16aaa60fd8906-cri-tools-1.23.0-0.x86_64.rpm is not installed
    (2/10): 4d300a7655f56307d35f127d99dc192b6aa4997f322234e754f16aaa60fd8906-cri-tools-1.23.0-0.x86_64.rpm | 7.1 MB 00:00:01
    (3/10): dda11ee75bc7fcb01e32512cefb8f686dc6a7383516b8b0828adb33761fe602e-kubeadm-1.24.0-0.x86_64.rpm | 9.5 MB 00:00:02
    (4/10): 0c7a02e05273d05ea82ca13546853b65fbc257dd159565ce6eb658a0bdf31c9f-kubectl-1.24.0-0.x86_64.rpm | 9.9 MB 00:00:01
    (5/10): libnetfilter_cthelper-1.0.0-9.el7.x86_64.rpm         |  18 kB  00:00:00
    (6/10): libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm          |  23 kB  00:00:00
    (7/10): socat-1.7.3.2-2.el7.x86_64.rpm                       | 290 kB  00:00:00
    (8/10): libnetfilter_cttimeout-1.0.0-6.el7.x86_64.rpm        |  18 kB  00:00:00
    (9/10): db7cb5cb0b3f6875f54d10f02e625573988e3e91fd4fc5eef0b1876bb18604ad-kubernetes-cni-0.8.7-0.x86_64.rpm | 19 MB 00:00:02
    (10/10): 363f3fbfa8b89bb978e2d089e52ba59847f143834f8ea1b559afa864d8c5c011-kubelet-1.24.0-0.x86_64.rpm | 20 MB 00:00:02
    ------------------------------------------------------------------------------
    Total                                            13 MB/s |  66 MB  00:00:05
    Retrieving key from https://packages.cloud.google.com/yum/doc/yum-key.gpg
    Importing GPG key 0x6B4097C2:
     Userid     : "Rapture Automatic Signing Key (cloud-rapture-signing-key-2022-03-07-08_01_01.pub)"
     Fingerprint: e936 7157 4236 81a4 7ec3 93c3 7325 816a 6b40 97c2
     From       : https://packages.cloud.google.com/yum/doc/yum-key.gpg
    Importing GPG key 0x307EA071:
     Userid     : "Rapture Automatic Signing Key (cloud-rapture-signing-key-2021-03-01-08_01_09.pub)"
     Fingerprint: 7f92 e05b 3109 3bef 5a3c 2d38 feea 9169 307e a071
     From       : https://packages.cloud.google.com/yum/doc/yum-key.gpg
    Importing GPG key 0x836F4BEB:
     Userid     : "gLinux Rapture Automatic Signing Key (//depot/google3/production/borg/cloud-rapture/keys/cloud-rapture-pubkeys/cloud-rapture-signing-key-2020-12-03-16_08_05.pub) <glinux-team@google.com>"
     Fingerprint: 59fe 0256 8272 69dc 8157 8f92 8b57 c5c2 836f 4beb
     From       : https://packages.cloud.google.com/yum/doc/yum-key.gpg
    Retrieving key from https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
    Importing GPG key 0x3E1BA8D5:
     Userid     : "Google Cloud Packages RPM Signing Key <gc-team@google.com>"
     Fingerprint: 3749 e1ba 95a8 6ce0 5454 6ed2 f09c 394c 3e1b a8d5
     From       : https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
    Running transaction check
    Running transaction test
    Transaction test succeeded
    Running transaction
      Installing : socat-1.7.3.2-2.el7.x86_64                                1/10
      Installing : cri-tools-1.23.0-0.x86_64                                 2/10
      Installing : libnetfilter_cthelper-1.0.0-9.el7.x86_64                  3/10
      Installing : libnetfilter_queue-1.0.2-2.el7_2.x86_64                   4/10
      Installing : kubectl-1.24.0-0.x86_64                                   5/10
      Installing : libnetfilter_cttimeout-1.0.0-6.el7.x86_64                 6/10
      Installing : conntrack-tools-1.4.4-4.el7.x86_64                        7/10
      Installing : kubelet-1.24.0-0.x86_64                                   8/10
      Installing : kubernetes-cni-0.8.7-0.x86_64                             9/10
      Installing : kubeadm-1.24.0-0.x86_64                                  10/10
      Verifying  : kubernetes-cni-0.8.7-0.x86_64                             1/10
      Verifying  : kubeadm-1.24.0-0.x86_64                                   2/10
      Verifying  : libnetfilter_cttimeout-1.0.0-6.el7.x86_64                 3/10
      Verifying  : kubectl-1.24.0-0.x86_64                                   4/10
      Verifying  : libnetfilter_queue-1.0.2-2.el7_2.x86_64                   5/10
      Verifying  : libnetfilter_cthelper-1.0.0-9.el7.x86_64                  6/10
      Verifying  : cri-tools-1.23.0-0.x86_64                                 7/10
      Verifying  : conntrack-tools-1.4.4-4.el7.x86_64                        8/10
      Verifying  : socat-1.7.3.2-2.el7.x86_64                                9/10
      Verifying  : kubelet-1.24.0-0.x86_64                                  10/10

    Installed:
      kubeadm.x86_64 0:1.24.0-0       kubectl.x86_64 0:1.24.0-0       kubelet.x86_64 0:1.24.0-0

    Dependency Installed:
      conntrack-tools.x86_64 0:1.4.4-4.el7          cri-tools.x86_64 0:1.23.0-0
      kubernetes-cni.x86_64 0:0.8.7-0               libnetfilter_cthelper.x86_64 0:1.0.0-9.el7
      libnetfilter_cttimeout.x86_64 0:1.0.0-6.el7   libnetfilter_queue.x86_64 0:1.0.2-2.el7_2
      socat.x86_64 0:1.7.3.2-2.el7

    Complete!
  • Start & Enable kubelet service
  • # systemctl start kubelet
    # systemctl enable kubelet

    Note: Until kubeadm initializes or joins the node, the kubelet restarts in a crash loop every few seconds while it waits for instructions; this is expected at this stage.

Setup Kubernetes Cluster

Note: Perform these steps only on Master node

  • Login as the non-privileged user kadmin
  • # su - kadmin
  • Create a new cluster
  • # sudo kubeadm init --control-plane-endpoint=192.168.50.171 --apiserver-advertise-address=192.168.50.171 --pod-network-cidr=192.168.0.0/16

    [init] Using Kubernetes version: v1.24.0
    [preflight] Running pre-flight checks
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Generating "ca" certificate and key
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [kube-master1.linuxtechspace.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.50.171]
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "front-proxy-ca" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] Generating "etcd/ca" certificate and key
    [certs] Generating "etcd/server" certificate and key
    [certs] etcd/server serving cert is signed for DNS names [kube-master1.linuxtechspace.com localhost] and IPs [192.168.50.171 127.0.0.1 ::1]
    [certs] Generating "etcd/peer" certificate and key
    [certs] etcd/peer serving cert is signed for DNS names [kube-master1.linuxtechspace.com localhost] and IPs [192.168.50.171 127.0.0.1 ::1]
    [certs] Generating "etcd/healthcheck-client" certificate and key
    [certs] Generating "apiserver-etcd-client" certificate and key
    [certs] Generating "sa" key and public key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Starting the kubelet
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [apiclient] All control plane components are healthy after 12.091188 seconds
    [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
    [upload-certs] Skipping phase. Please see --upload-certs
    [mark-control-plane] Marking the node kube-master1.linuxtechspace.com as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
    [mark-control-plane] Marking the node kube-master1.linuxtechspace.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]
    [bootstrap-token] Using token: d0rjy8.fezbmqce1ekfohrr
    [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
    [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
    [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy

    Your Kubernetes control-plane has initialized successfully!

    To start using your cluster, you need to run the following as a regular user:

      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config

    Alternatively, if you are the root user, you can run:

      export KUBECONFIG=/etc/kubernetes/admin.conf

    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/

    You can now join any number of control-plane nodes by copying certificate authorities
    and service account keys on each node and then running the following as root:

      kubeadm join 192.168.50.171:6443 --token d0rjy8.fezbmqce1ekfohrr \
            --discovery-token-ca-cert-hash sha256:0b156b722cbd38ed443cbc854d2f86eca42f4c9de2e6ffe26fc45dba01fefa3e \
            --control-plane

    Then you can join any number of worker nodes by running the following on each as root:

    kubeadm join 192.168.50.171:6443 --token d0rjy8.fezbmqce1ekfohrr \
            --discovery-token-ca-cert-hash sha256:0b156b722cbd38ed443cbc854d2f86eca42f4c9de2e6ffe26fc45dba01fefa3e
  • Setup the kubeconfig file (as suggested in the kubeadm init output above)
  • # mkdir -p $HOME/.kube
    # sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    # sudo chown $(id -u):$(id -g) $HOME/.kube/config
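    Note: kubectl should now be able to reach the API server; a quick sanity check:

    # kubectl cluster-info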
  • Check Status of Cluster Node
  • # kubectl get nodes -o wide

    NAME                              STATUS     ROLES           AGE    VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                                      KERNEL-VERSION          CONTAINER-RUNTIME
    kube-master1.linuxtechspace.com   NotReady   control-plane   106s   v1.24.0   192.168.50.171   <none>        Red Hat Enterprise Linux Server 7.6 (Maipo)   3.10.0-957.el7.x86_64   containerd://1.6.4
  • Check Status of Pods
  • # kubectl get pods -A -o wide

    NAMESPACE     NAME                                                      READY   STATUS    RESTARTS   AGE    IP               NODE                              NOMINATED NODE   READINESS GATES
    kube-system   coredns-6d4b75cb6d-d74kz                                  0/1     Pending   0          110s   <none>           <none>                            <none>           <none>
    kube-system   coredns-6d4b75cb6d-jjncb                                  0/1     Pending   0          110s   <none>           <none>                            <none>           <none>
    kube-system   etcd-kube-master1.linuxtechspace.com                      1/1     Running   0          2m7s   192.168.50.171   kube-master1.linuxtechspace.com   <none>           <none>
    kube-system   kube-apiserver-kube-master1.linuxtechspace.com            1/1     Running   0          2m8s   192.168.50.171   kube-master1.linuxtechspace.com   <none>           <none>
    kube-system   kube-controller-manager-kube-master1.linuxtechspace.com   1/1     Running   1          2m5s   192.168.50.171   kube-master1.linuxtechspace.com   <none>           <none>
    kube-system   kube-proxy-4765m                                          1/1     Running   0          110s   192.168.50.171   kube-master1.linuxtechspace.com   <none>           <none>
    kube-system   kube-scheduler-kube-master1.linuxtechspace.com            1/1     Running   1          2m6s   192.168.50.171   kube-master1.linuxtechspace.com   <none>           <none>

    Note: The node status shows "NotReady" and the two DNS pods are in "Pending" state because pod networking is not installed yet. We will install Calico in the next step to enable pod networking.

Install Calico (Pod Networking)

  • Disable "NoSchedule" parameter for Master node if you want it to run pods. This is optional step.
  • # kubectl taint node kube-master1.linuxtechspace.com node-role.kubernetes.io/control-plane:NoSchedule- node/kube-master1.linuxtechspace.com untainted
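    Note: As the kubeadm init output above shows, v1.24 applies both the node-role.kubernetes.io/control-plane taint and the legacy node-role.kubernetes.io/master taint. If pods still do not schedule on the Master, remove the legacy taint as well:

    # kubectl taint node kube-master1.linuxtechspace.com node-role.kubernetes.io/master:NoSchedule-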
  • Install Calico to enable pod networking
  • # kubectl create -f https://docs.projectcalico.org/manifests/calico.yaml

    configmap/calico-config created
    customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
    clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
    clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
    clusterrole.rbac.authorization.k8s.io/calico-node created
    clusterrolebinding.rbac.authorization.k8s.io/calico-node created
    daemonset.apps/calico-node created
    serviceaccount/calico-node created
    deployment.apps/calico-kube-controllers created
    serviceaccount/calico-kube-controllers created
    poddisruptionbudget.policy/calico-kube-controllers created
  • Check the status of the Master node; it should now be in Ready state.
  • # kubectl get nodes -o wide

    NAME                              STATUS   ROLES           AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                                      KERNEL-VERSION          CONTAINER-RUNTIME
    kube-master1.linuxtechspace.com   Ready    control-plane   16m   v1.24.0   192.168.50.171   <none>        Red Hat Enterprise Linux Server 7.6 (Maipo)   3.10.0-957.el7.x86_64   containerd://1.6.4
  • Check the status of the pods. All pods should be in Running state.
  • # kubectl get pods -A -o wide

    NAMESPACE     NAME                                                      READY   STATUS    RESTARTS   AGE     IP               NODE                              NOMINATED NODE   READINESS GATES
    kube-system   calico-kube-controllers-56cdb7c587-hfxv9                  1/1     Running   0          3m16s   192.168.37.130   kube-master1.linuxtechspace.com   <none>           <none>
    kube-system   calico-node-g7rvz                                         1/1     Running   0          3m16s   192.168.50.171   kube-master1.linuxtechspace.com   <none>           <none>
    kube-system   coredns-6d4b75cb6d-d74kz                                  1/1     Running   0          9m50s   192.168.37.131   kube-master1.linuxtechspace.com   <none>           <none>
    kube-system   coredns-6d4b75cb6d-jjncb                                  1/1     Running   0          9m50s   192.168.37.129   kube-master1.linuxtechspace.com   <none>           <none>
    kube-system   etcd-kube-master1.linuxtechspace.com                      1/1     Running   0          10m     192.168.50.171   kube-master1.linuxtechspace.com   <none>           <none>
    kube-system   kube-apiserver-kube-master1.linuxtechspace.com            1/1     Running   0          10m     192.168.50.171   kube-master1.linuxtechspace.com   <none>           <none>
    kube-system   kube-controller-manager-kube-master1.linuxtechspace.com   1/1     Running   1          10m     192.168.50.171   kube-master1.linuxtechspace.com   <none>           <none>
    kube-system   kube-proxy-4765m                                          1/1     Running   0          9m50s   192.168.50.171   kube-master1.linuxtechspace.com   <none>           <none>
    kube-system   kube-scheduler-kube-master1.linuxtechspace.com            1/1     Running   1          10m     192.168.50.171   kube-master1.linuxtechspace.com   <none>           <none>
  • Generate a token for Worker nodes to join the cluster
  • Note: During cluster initialization an initial token is auto-generated and can be used while it is still valid. To add worker nodes later, generate a new token and use it to join the additional workers.

    # kubeadm token create --print-join-command

    kubeadm join 192.168.50.171:6443 --token ks2mvm.7ipce60k40ekxmrn --discovery-token-ca-cert-hash sha256:0b156b722cbd38ed443cbc854d2f86eca42f4c9de2e6ffe26fc45dba01fefa3e
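    Note: Bootstrap tokens expire (the default TTL is 24 hours). Existing tokens and their expiry times can be listed with:

    # kubeadm token list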

Join Worker Node to Cluster

Note: Perform these steps on all Worker servers

  • Join both worker nodes to the cluster.
  • Note: Use the join command generated in the previous step with "kubeadm token create --print-join-command".

    # kubeadm join 192.168.50.171:6443 --token ks2mvm.7ipce60k40ekxmrn --discovery-token-ca-cert-hash sha256:0b156b722cbd38ed443cbc854d2f86eca42f4c9de2e6ffe26fc45dba01fefa3e

    [preflight] Running pre-flight checks
    [preflight] Reading configuration from the cluster...
    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Starting the kubelet
    [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.

    Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Check Cluster Status

Note: Perform these steps only on Master node

  • Check the node status
  • Note: All the nodes should be in "Ready" state.

    # kubectl get nodes -o wide

    NAME                              STATUS   ROLES           AGE     VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                                      KERNEL-VERSION          CONTAINER-RUNTIME
    kube-master1.linuxtechspace.com   Ready    control-plane   40m     v1.24.0   192.168.50.171   <none>        Red Hat Enterprise Linux Server 7.6 (Maipo)   3.10.0-957.el7.x86_64   containerd://1.6.4
    kube-worker1.linuxtechspace.com   Ready    <none>          3m18s   v1.24.0   192.168.50.175   <none>        Red Hat Enterprise Linux Server 7.6 (Maipo)   3.10.0-957.el7.x86_64   containerd://1.6.4
    kube-worker2.linuxtechspace.com   Ready    <none>          81s     v1.24.0   192.168.50.176   <none>        Red Hat Enterprise Linux Server 7.6 (Maipo)   3.10.0-957.el7.x86_64   containerd://1.6.4
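    Note: The ROLES column shows <none> for the worker nodes; this is cosmetic. If desired, add a role label so "kubectl get nodes" displays a worker role:

    # kubectl label node kube-worker1.linuxtechspace.com node-role.kubernetes.io/worker=
    # kubectl label node kube-worker2.linuxtechspace.com node-role.kubernetes.io/worker=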
  • Check the pod status
  • Note: All the pods should be in "Running" state. The calico-node pods on the freshly joined workers may show 0/1 Ready and restart a few times before settling, as seen below.

    # kubectl get pods -A -o wide

    NAMESPACE     NAME                                                      READY   STATUS    RESTARTS      AGE     IP               NODE                              NOMINATED NODE   READINESS GATES
    kube-system   calico-kube-controllers-56cdb7c587-hfxv9                  1/1     Running   0             36m     192.168.37.130   kube-master1.linuxtechspace.com   <none>           <none>
    kube-system   calico-node-6hqr4                                         0/1     Running   2 (36s ago)   3m52s   192.168.50.176   kube-worker2.linuxtechspace.com   <none>           <none>
    kube-system   calico-node-g7rvz                                         1/1     Running   0             36m     192.168.50.171   kube-master1.linuxtechspace.com   <none>           <none>
    kube-system   calico-node-jvsm8                                         0/1     Running   4 (12s ago)   5m49s   192.168.50.175   kube-worker1.linuxtechspace.com   <none>           <none>
    kube-system   coredns-6d4b75cb6d-d74kz                                  1/1     Running   0             43m     192.168.37.131   kube-master1.linuxtechspace.com   <none>           <none>
    kube-system   coredns-6d4b75cb6d-jjncb                                  1/1     Running   0             43m     192.168.37.129   kube-master1.linuxtechspace.com   <none>           <none>
    kube-system   etcd-kube-master1.linuxtechspace.com                      1/1     Running   0             43m     192.168.50.171   kube-master1.linuxtechspace.com   <none>           <none>
    kube-system   kube-apiserver-kube-master1.linuxtechspace.com            1/1     Running   0             43m     192.168.50.171   kube-master1.linuxtechspace.com   <none>           <none>
    kube-system   kube-controller-manager-kube-master1.linuxtechspace.com   1/1     Running   1             43m     192.168.50.171   kube-master1.linuxtechspace.com   <none>           <none>
    kube-system   kube-proxy-4765m                                          1/1     Running   0             43m     192.168.50.171   kube-master1.linuxtechspace.com   <none>           <none>
    kube-system   kube-proxy-r7kzw                                          1/1     Running   0             5m49s   192.168.50.175   kube-worker1.linuxtechspace.com   <none>           <none>
    kube-system   kube-proxy-xtf4p                                          1/1     Running   0             3m52s   192.168.50.176   kube-worker2.linuxtechspace.com   <none>           <none>
    kube-system   kube-scheduler-kube-master1.linuxtechspace.com            1/1     Running   1             43m     192.168.50.171   kube-master1.linuxtechspace.com   <none>           <none>

The multi-node Kubernetes cluster setup is now complete.
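
Optionally, as a quick smoke test (a minimal sketch; the deployment name nginx-test is arbitrary and used only for illustration), deploy a test workload and confirm it gets scheduled and runs on the cluster:

# kubectl create deployment nginx-test --image=nginx
# kubectl get pods -o wide
# kubectl delete deployment nginx-test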