Upgrading a Tanzu Kubernetes Grid (TKGm) 1.2.1 Environment to 1.3.0

Tanzu Kubernetes Grid (TKG) 1.3.0 was released the other day. I upgraded an existing 1.2.1 environment to 1.3.0, and these are my notes on the steps I went through.


Environment


Steps

Uploading the TKG 1.3.0 OVA Files

First, download the full set of packages needed for the TKG 1.3.0 upgrade from My VMware. Searching for "get-tkg" should bring up the My VMware TKG page as the top search result.

Upload the base OS images for the TKG nodes to vCenter and register them as virtual machine templates. In this environment I did it as follows.

$ cat tkgm-env.sh
export GOVC_USERNAME=
export GOVC_PASSWORD=
export GOVC_DATACENTER=
export GOVC_NETWORK=
export GOVC_DATASTORE=
export GOVC_RESOURCE_POOL=
export GOVC_INSECURE=1
export TEMPLATE_FOLDER=
export GOVC_URL=
$ source tkgm-env.sh
$ cat options.json
{
"DiskProvisioning": "thin"
}
$ govc import.ova --options=options.json -folder ${TEMPLATE_FOLDER} ~/Downloads/_packages/photon-3-kube-v1.20.4-vmware.1-tkg.0-2326554155028348692.ova
$ govc import.ova --options=options.json -folder ${TEMPLATE_FOLDER} ~/Downloads/_packages/photon-3-kube-v1.19.8-vmware.1-tkg.0-15338136437231643652.ova
$ govc import.ova --options=options.json -folder ${TEMPLATE_FOLDER} ~/Downloads/_packages/photon-3-kube-v1.18.16-vmware.1-tkg.0-5916237137517405506.ova
$ govc import.ova --options=options.json -folder ${TEMPLATE_FOLDER} ~/Downloads/_packages/photon-3-kube-v1.17.16-vmware.2-tkg.0-2766760546902094721.ova
$ govc import.ova --options=options.json -folder ${TEMPLATE_FOLDER} ~/Downloads/_packages/ubuntu-2004-kube-v1.20.4-vmware.1-tkg.0-16153464878630780629.ova
$ govc import.ova --options=options.json -folder ${TEMPLATE_FOLDER} ~/Downloads/_packages/ubuntu-2004-kube-v1.18.16-vmware.1-tkg.0-14744207219736322255.ova
$ govc import.ova --options=options.json -folder ${TEMPLATE_FOLDER} ~/Downloads/_packages/ubuntu-2004-kube-v1.19.8-vmware.1-tkg.0-18171857641727074969.ova
$ govc vm.markastemplate ${TEMPLATE_FOLDER}/photon-3-kube-v1.20.4
$ govc vm.markastemplate ${TEMPLATE_FOLDER}/photon-3-kube-v1.19.8
$ govc vm.markastemplate ${TEMPLATE_FOLDER}/photon-3-kube-v1.18.16
$ govc vm.markastemplate ${TEMPLATE_FOLDER}/photon-3-kube-v1.17.16
$ govc vm.markastemplate ${TEMPLATE_FOLDER}/ubuntu-2004-kube-v1.20.4
$ govc vm.markastemplate ${TEMPLATE_FOLDER}/ubuntu-2004-kube-v1.18.16
$ govc vm.markastemplate ${TEMPLATE_FOLDER}/ubuntu-2004-kube-v1.19.8
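
As an optional sanity check, the registered templates can be listed with govc afterwards (assuming TEMPLATE_FOLDER holds a full inventory path, as in the env file above):

$ govc ls ${TEMPLATE_FOLDER}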


Installing the tanzu CLI

Starting with TKG v1.3.0, the interface for operating TKG changed from the tkg CLI to the tanzu CLI. Install it by following the documented steps.

Before installing the tanzu CLI, take a backup of the existing TKG configuration files.

$ cp -r ~/.tkg ~/.tkg-20210326
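
Besides ~/.tkg, it may also be worth copying the kubeconfig files the tkg CLI has been using (the ~/.kube-tkg path is an assumption based on the default TKG 1.2.x layout):

$ cp -r ~/.kube-tkg ~/.kube-tkg-20210326
$ cp ~/.kube/config ~/.kube/config-20210326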

Now install the tanzu CLI itself.

$ tar xvf tanzu-cli-bundle-linux-amd64.tar
cli/
cli/core/
cli/core/v1.3.0/
cli/core/v1.3.0/tanzu-core-linux_amd64
cli/core/plugin.yaml
cli/cluster/
cli/cluster/v1.3.0/
cli/cluster/v1.3.0/tanzu-cluster-linux_amd64
cli/cluster/plugin.yaml
cli/login/
cli/login/v1.3.0/
cli/login/v1.3.0/tanzu-login-linux_amd64
cli/login/plugin.yaml
cli/pinniped-auth/
cli/pinniped-auth/v1.3.0/
cli/pinniped-auth/v1.3.0/tanzu-pinniped-auth-linux_amd64
cli/pinniped-auth/plugin.yaml
cli/kubernetes-release/
cli/kubernetes-release/v1.3.0/
cli/kubernetes-release/v1.3.0/tanzu-kubernetes-release-linux_amd64
cli/kubernetes-release/plugin.yaml
cli/management-cluster/
cli/management-cluster/v1.3.0/
cli/management-cluster/v1.3.0/tanzu-management-cluster-linux_amd64
cli/management-cluster/v1.3.0/test/
cli/management-cluster/v1.3.0/test/tanzu-management-cluster-test-linux_amd64
cli/management-cluster/plugin.yaml
cli/manifest.yaml
cli/ytt-linux-amd64-v0.30.0+vmware.1.gz
cli/kapp-linux-amd64-v0.33.0+vmware.1.gz
cli/imgpkg-linux-amd64-v0.2.0+vmware.1.gz
cli/kbld-linux-amd64-v0.24.0+vmware.1.gz
$ ls
cli  tanzu-cli-bundle-linux-amd64.tar  tkg  tkg-extensions-manifests-v1.2.0-vmware.1.tar.gz  tkg-extensions-v1.2.0+vmware.1  tkg-linux-amd64-v1.2.1-vmware.1.tar.gz
$ tree cli
cli
├── cluster
│   ├── plugin.yaml
│   └── v1.3.0
│       └── tanzu-cluster-linux_amd64
├── core
│   ├── plugin.yaml
│   └── v1.3.0
│       └── tanzu-core-linux_amd64
├── imgpkg-linux-amd64-v0.2.0+vmware.1.gz
├── kapp-linux-amd64-v0.33.0+vmware.1.gz
├── kbld-linux-amd64-v0.24.0+vmware.1.gz
├── kubernetes-release
│   ├── plugin.yaml
│   └── v1.3.0
│       └── tanzu-kubernetes-release-linux_amd64
├── login
│   ├── plugin.yaml
│   └── v1.3.0
│       └── tanzu-login-linux_amd64
├── management-cluster
│   ├── plugin.yaml
│   └── v1.3.0
│       ├── tanzu-management-cluster-linux_amd64
│       └── test
│           └── tanzu-management-cluster-test-linux_amd64
├── manifest.yaml
├── pinniped-auth
│   ├── plugin.yaml
│   └── v1.3.0
│       └── tanzu-pinniped-auth-linux_amd64
└── ytt-linux-amd64-v0.30.0+vmware.1.gz

13 directories, 18 files
$ sudo install cli/core/v1.3.0/tanzu-core-linux_amd64 /usr/local/bin/tanzu
$ tanzu plugin install -u --local cli all
$ tanzu plugin list
  NAME                LATEST VERSION  DESCRIPTION                                                        REPOSITORY  VERSION  STATUS
  alpha               v1.3.0          Alpha CLI commands                                                 core                 not installed
  cluster             v1.3.0          Kubernetes cluster operations                                      core        v1.3.0   installed
  login               v1.3.0          Login to the platform                                              core        v1.3.0   installed
  pinniped-auth       v1.3.0          Pinniped authentication operations (usually not directly invoked)  core        v1.3.0   installed
  kubernetes-release  v1.3.0          Kubernetes release operations                                      core        v1.3.0   installed
  management-cluster  v1.3.0          Kubernetes management cluster operations                           tkg         v1.3.0   installed
$ tanzu management-cluster import
the old providers folder /home/demo/.tkg/providers is backed up to /home/demo/.tkg/providers-20210326031826-41zs7chn
successfully imported server: tkgm-mylab

Management cluster configuration imported successfully
$ tanzu login
? Select a server tkgm-mylab          ()
✔  successfully logged in to management cluster using the kubeconfig tkgm-mylab
$ tanzu cluster list --include-management-cluster
  NAME        NAMESPACE   STATUS   CONTROLPLANE  WORKERS  KUBERNETES         ROLES       PLAN
  devsecops   default     running  1/1           1/1      v1.18.10+vmware.1  <none>      dev
  istio       default     running  1/1           1/1      v1.18.10+vmware.1  <none>      dev
  tkgm-mylab  tkg-system  running  1/1           1/1      v1.19.1+vmware.2   management  dev
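
The bundle also ships the Carvel tools (ytt, kapp, kbld, imgpkg) as gzipped binaries, as seen in the tar listing above. They are not strictly required for the upgrade itself, but if you want them on the path, installing them is straightforward (shown for ytt; the other three follow the same pattern):

$ gunzip cli/ytt-linux-amd64-v0.30.0+vmware.1.gz
$ sudo install cli/ytt-linux-amd64-v0.30.0+vmware.1 /usr/local/bin/ytt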

With the tanzu CLI installed, let's move on to upgrading the Management Cluster.

Upgrading the Management Cluster

Starting with TKG v1.3.0, Ubuntu can be used as the base OS image in addition to Photon OS. I decided to pick Ubuntu for the Management Cluster upgrade.
The Management Cluster itself did upgrade fine from Photon OS to Ubuntu, but I then hit an issue where the kubernetes-release information required for upgrading the Workload Clusters never showed up, so I ended up upgrading the Management Cluster in two passes:

  • Upgrade TKG v1.2.1 -> v1.3.0 (switching the base OS from Photon OS to Ubuntu)
  • On TKG v1.3.0, switch the Management Cluster base OS back from Ubuntu to Photon OS

I carried out the upgrade by following the official documentation.

First, check whether kapp-controller is already running on the Management Cluster and the Workload Clusters.

$ kubectl config current-context
devsecops-admin@devsecops
$ kubectl get deployments -A
NAMESPACE                NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
build-service            cert-injection-webhook     1/1     1            1           12d
build-service            secret-syncer-controller   1/1     1            1           12d
build-service            warmer-controller          1/1     1            1           12d
concourse                concourse1-web             1/1     1            1           7d3h
default                  simple-app                 1/1     1            1           47h
kpack                    kpack-controller           1/1     1            1           12d
kpack                    kpack-webhook              1/1     1            1           12d
kube-system              antrea-controller          1/1     1            1           13d
kube-system              coredns                    2/2     2            2           13d
kube-system              vsphere-csi-controller     1/1     1            1           13d
metallb-system           controller                 1/1     1            1           13d
stacks-operator-system   controller-manager         1/1     1            1           12d

$ kubectl config use-context istio-admin@istio
Switched to context "istio-admin@istio".
$ kubectl get deployments -A
NAMESPACE        NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
istio-system     grafana                  1/1     1            1           5d5h
istio-system     istio-egressgateway      1/1     1            1           5d6h
istio-system     istio-ingressgateway     1/1     1            1           5d6h
istio-system     istiod                   1/1     1            1           5d6h
istio-system     jaeger                   1/1     1            1           5d5h
istio-system     kiali                    1/1     1            1           5d5h
istio-system     prometheus               1/1     1            1           5d5h
istio-test       details-v1               1/1     1            1           5d5h
istio-test       productpage-v1           1/1     1            1           5d5h
istio-test       ratings-v1               1/1     1            1           5d5h
istio-test       reviews-v1               1/1     1            1           5d5h
istio-test       reviews-v2               1/1     1            1           5d5h
istio-test       reviews-v3               1/1     1            1           5d5h
kube-system      antrea-controller        1/1     1            1           5d6h
kube-system      coredns                  2/2     2            2           5d6h
kube-system      vsphere-csi-controller   1/1     1            1           5d6h
metallb-system   controller               1/1     1            1           5d5h

$ kubectl config use-context tkgm-mylab-admin@tkgm-mylab
Switched to context "tkgm-mylab-admin@tkgm-mylab".
$ kubectl get deployments -A
NAMESPACE                           NAME                                            READY   UP-TO-DATE   AVAILABLE   AGE
capi-kubeadm-bootstrap-system       capi-kubeadm-bootstrap-controller-manager       1/1     1            1           37d
capi-kubeadm-control-plane-system   capi-kubeadm-control-plane-controller-manager   1/1     1            1           37d
capi-system                         capi-controller-manager                         1/1     1            1           37d
capi-webhook-system                 capi-controller-manager                         1/1     1            1           37d
capi-webhook-system                 capi-kubeadm-bootstrap-controller-manager       1/1     1            1           37d
capi-webhook-system                 capi-kubeadm-control-plane-controller-manager   1/1     1            1           37d
capi-webhook-system                 capv-controller-manager                         1/1     1            1           37d
capv-system                         capv-controller-manager                         1/1     1            1           37d
cert-manager                        cert-manager                                    1/1     1            1           37d
cert-manager                        cert-manager-cainjector                         1/1     1            1           37d
cert-manager                        cert-manager-webhook                            1/1     1            1           37d
kube-system                         antrea-controller                               1/1     1            1           37d
kube-system                         coredns                                         2/2     2            2           37d
kube-system                         vsphere-csi-controller                          1/1     1            1           37d
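
Incidentally, the same check can be compressed into a single loop over the three contexts (a convenience sketch using this environment's context names; no output means kapp-controller is not running anywhere yet):

$ for ctx in devsecops-admin@devsecops istio-admin@istio tkgm-mylab-admin@tkgm-mylab; do kubectl --context ${ctx} get deployments -A | grep kapp-controller; done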


I upgraded the Management Cluster as follows.
$ tanzu cluster list --include-management-cluster
  NAME        NAMESPACE   STATUS   CONTROLPLANE  WORKERS  KUBERNETES         ROLES       PLAN
  devsecops   default     running  1/1           1/1      v1.18.10+vmware.1  <none>      dev
  istio       default     running  1/1           1/1      v1.18.10+vmware.1  <none>      dev
  tkgm-mylab  tkg-system  running  1/1           1/1      v1.19.1+vmware.2   management  dev
$ tanzu version
version: v1.3.0
buildDate: 2021-03-19
sha: 06ddc9a
$ tanzu management-cluster upgrade --os-name ubuntu
Upgrading management cluster 'tkgm-mylab' to TKG version 'v1.3.0' with Kubernetes version 'v1.20.4+vmware.1'. Are you sure? [y/N]: y
Upgrading management cluster providers...
Checking cert-manager version...
Cert-manager is already up to date
Performing upgrade...
Deleting Provider="cluster-api" Version="" TargetNamespace="capi-system"
Installing Provider="cluster-api" Version="v0.3.14" TargetNamespace="capi-system"
Deleting Provider="bootstrap-kubeadm" Version="" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="bootstrap-kubeadm" Version="v0.3.14" TargetNamespace="capi-kubeadm-bootstrap-system"
Deleting Provider="control-plane-kubeadm" Version="" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="control-plane-kubeadm" Version="v0.3.14" TargetNamespace="capi-kubeadm-control-plane-system"
Deleting Provider="infrastructure-vsphere" Version="" TargetNamespace="capv-system"
Installing Provider="infrastructure-vsphere" Version="v0.7.6" TargetNamespace="capv-system"
Management cluster providers upgraded successfully...
Upgrading management cluster kubernetes version...
Verifying kubernetes version...
Retrieving configuration for upgrade cluster...
Create InfrastructureTemplate for upgrade...
Upgrading control plane nodes...
Patching KubeadmControlPlane with the kubernetes version v1.20.4+vmware.1...
Waiting for kubernetes version to be updated for control plane nodes
Upgrading worker nodes...
Patching MachineDeployment with the kubernetes version v1.20.4+vmware.1...
Waiting for kubernetes version to be updated for worker nodes...
updating additional components: 'metadata/tkg,addons-management/kapp-controller,addons-management/tanzu-addons-manager,tkr/tkr-controller' ...
Management cluster 'tkgm-mylab' successfully upgraded to TKG version 'v1.3.0' with kubernetes version 'v1.20.4+vmware.1'

With the upgrade complete, let's check the state of the Management Cluster.
$ tanzu cluster list --include-management-cluster
  NAME        NAMESPACE   STATUS   CONTROLPLANE  WORKERS  KUBERNETES         ROLES       PLAN
  devsecops   default     running  1/1           1/1      v1.18.10+vmware.1  <none>      dev
  istio       default     running  1/1           1/1      v1.18.10+vmware.1  <none>      dev
  tkgm-mylab  tkg-system  running  1/1           1/1      v1.20.4+vmware.1   management  dev
$ kubectl get nodes -o wide
NAME                               STATUS   ROLES                  AGE     VERSION            INTERNAL-IP     EXTERNAL-IP     OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
tkgm-mylab-control-plane-vcqgk     Ready    control-plane,master   9m56s   v1.20.4+vmware.1   xxx.xxx.xxx.xxx   xxx.xxx.xxx.xxx   Ubuntu 20.04.2 LTS   5.4.0-66-generic   containerd://1.4.3
tkgm-mylab-md-0-7b477ff6d8-jvdvw   Ready    <none>                  2m42s   v1.20.4+vmware.1   xxx.xxx.xxx.xxx   xxx.xxx.xxx.xxx   Ubuntu 20.04.2 LTS   5.4.0-66-generic   containerd://1.4.3

Checking kubernetes-release, however, nothing was displayed...
$ tanzu kubernetes-release get
  NAME  VERSION  COMPATIBLE  UPGRADEAVAILABLE
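
If you hit this, it may help to inspect the TKr machinery on the management cluster directly. A couple of starting points (the tkr-system namespace and deployment name are my assumptions based on the 1.3 component layout):

$ kubectl get tanzukubernetesreleases
$ kubectl logs -n tkr-system deployment/tkr-controller-manager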

In this state the Workload Clusters could not be upgraded, so I switched the Management Cluster OS image back from Ubuntu to Photon.
$ tanzu management-cluster upgrade --os-name photon
Upgrading management cluster 'tkgm-mylab' to TKG version 'v1.3.0' with Kubernetes version 'v1.20.4+vmware.1'. Are you sure? [y/N]: y
Upgrading management cluster providers...
Checking cert-manager version...
Deleting cert-manager Version="v0.11.0"
Installing cert-manager Version="v0.16.1"
Waiting for cert-manager to be available...
Performing upgrade...
Deleting Provider="cluster-api" Version="" TargetNamespace="capi-system"
Installing Provider="cluster-api" Version="v0.3.14" TargetNamespace="capi-system"
Deleting Provider="bootstrap-kubeadm" Version="" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="bootstrap-kubeadm" Version="v0.3.14" TargetNamespace="capi-kubeadm-bootstrap-system"
Deleting Provider="control-plane-kubeadm" Version="" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="control-plane-kubeadm" Version="v0.3.14" TargetNamespace="capi-kubeadm-control-plane-system"
Deleting Provider="infrastructure-vsphere" Version="" TargetNamespace="capv-system"
Installing Provider="infrastructure-vsphere" Version="v0.7.6" TargetNamespace="capv-system"
Management cluster providers upgraded successfully...
Upgrading management cluster kubernetes version...
Verifying kubernetes version...
Retrieving configuration for upgrade cluster...
Create InfrastructureTemplate for upgrade...
Upgrading control plane nodes...
Patching KubeadmControlPlane with the kubernetes version v1.20.4+vmware.1...
Waiting for kubernetes version to be updated for control plane nodes
Upgrading worker nodes...
Patching MachineDeployment with the kubernetes version v1.20.4+vmware.1...
Waiting for kubernetes version to be updated for worker nodes...
updating additional components: 'metadata/tkg,addons-management/kapp-controller,addons-management/tanzu-addons-manager,tkr/tkr-controller' ...
Management cluster 'tkgm-mylab' successfully upgraded to TKG version 'v1.3.0' with kubernetes version 'v1.20.4+vmware.1'

The upgrade completed, and this time kubernetes-release showed up properly.
$ kubectl get nodes -o wide
NAME                              STATUS   ROLES                  AGE   VERSION            INTERNAL-IP     EXTERNAL-IP     OS-IMAGE                 KERNEL-VERSION   CONTAINER-RUNTIME
tkgm-mylab-control-plane-44mrq    Ready    control-plane,master   14m   v1.20.4+vmware.1   xxx.xxx.xxx.xxx   xxx.xxx.xxx.xxx   VMware Photon OS/Linux   4.19.174-5.ph3   containerd://1.4.3
tkgm-mylab-md-0-b88cb5777-hkcsk   Ready    <none>                 15m   v1.20.4+vmware.1   xxx.xxx.xxx.xxx   xxx.xxx.xxx.xxx   VMware Photon OS/Linux   4.19.174-5.ph3   containerd://1.4.3
$ tanzu kubernetes-release get
  NAME                       VERSION                  COMPATIBLE  UPGRADEAVAILABLE
  v1.17.16---vmware.2-tkg.1  v1.17.16+vmware.2-tkg.1  True        True
  v1.18.16---vmware.1-tkg.1  v1.18.16+vmware.1-tkg.1  True        True
  v1.19.8---vmware.1-tkg.1   v1.19.8+vmware.1-tkg.1   True        True
  v1.20.4---vmware.1-tkg.1   v1.20.4+vmware.1-tkg.1   True        False
$ tanzu cluster list --include-management-cluster
  NAME        NAMESPACE   STATUS   CONTROLPLANE  WORKERS  KUBERNETES         ROLES       PLAN
  devsecops   default     running  1/1           1/1      v1.18.10+vmware.1  <none>      dev
  istio       default     running  1/1           1/1      v1.18.10+vmware.1  <none>      dev
  tkgm-mylab  tkg-system  running  1/1           1/1      v1.20.4+vmware.1   management  dev

With the Management Cluster upgrade done, let's move on to upgrading the Workload Clusters.

Upgrading the Workload Clusters

I'll upgrade the two Workload Clusters one at a time, starting with the istio cluster.
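
Before upgrading, the CLI can also tell you which versions a given TKr can be upgraded to. A quick check (the available-upgrades subcommand is per my reading of the 1.3 CLI; verify against your version with tanzu kubernetes-release --help):

$ tanzu kubernetes-release available-upgrades get v1.19.8---vmware.1-tkg.1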

$ tanzu cluster upgrade istio --tkr v1.19.8---vmware.1-tkg.1 --os-name ubuntu
Upgrading workload cluster 'istio' to kubernetes version 'v1.19.8+vmware.1'. Are you sure? [y/N]: y
Validating configuration...
Verifying kubernetes version...
Retrieving configuration for upgrade cluster...
Create InfrastructureTemplate for upgrade...
Upgrading control plane nodes...
Patching KubeadmControlPlane with the kubernetes version v1.19.8+vmware.1...
Skipping KubeadmControlPlane patch as kubernetes versions are already same v1.19.8+vmware.1
Waiting for kubernetes version to be updated for control plane nodes
Upgrading worker nodes...
Patching MachineDeployment with the kubernetes version v1.19.8+vmware.1...
Skipping MachineDeployment patch as kubernetes versions are already same v1.19.8+vmware.1
Waiting for kubernetes version to be updated for worker nodes...
updating additional components: 'metadata/tkg,addons-management/kapp-controller' ...
Cluster 'istio' successfully upgraded to kubernetes version 'v1.19.8+vmware.1'
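
The node checks below run against the istio cluster itself. The admin context was already present in my kubeconfig; if it were not, the cluster plugin can fetch it:

$ tanzu cluster kubeconfig get istio --admin
$ kubectl config use-context istio-admin@istio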

Checking the nodes afterwards, the old worker node was left behind in SchedulingDisabled state.
$ kubectl get nodes -o wide
NAME                          STATUS                     ROLES    AGE     VERSION             INTERNAL-IP     EXTERNAL-IP     OS-IMAGE                 KERNEL-VERSION     CONTAINER-RUNTIME
istio-control-plane-dbv2w     Ready                      master   6m19s   v1.19.8+vmware.1    xxx.xxx.xxx.xxx   xxx.xxx.xxx.xxx   Ubuntu 20.04.2 LTS       5.4.0-66-generic   containerd://1.4.3
istio-control-plane-gg7zf     Ready,SchedulingDisabled   master   5d7h    v1.18.10+vmware.1   xxx.xxx.xxx.xxx   xxx.xxx.xxx.xxx   VMware Photon OS/Linux   4.19.150-1.ph3     containerd://1.4.1
istio-md-0-6df599dcd7-x848j   Ready                      <none>   5d7h    v1.18.10+vmware.1   xxx.xxx.xxx.xxx   xxx.xxx.xxx.xxx   VMware Photon OS/Linux   4.19.150-1.ph3     containerd://1.4.1
$ kubectl get nodes -o wide
NAME                          STATUS                        ROLES    AGE     VERSION             INTERNAL-IP     EXTERNAL-IP     OS-IMAGE                 KERNEL-VERSION     CONTAINER-RUNTIME
istio-control-plane-dbv2w     Ready                         master   9m40s   v1.19.8+vmware.1    xxx.xxx.xxx.xxx   xxx.xxx.xxx.xxx   Ubuntu 20.04.2 LTS       5.4.0-66-generic   containerd://1.4.3
istio-control-plane-gg7zf     NotReady,SchedulingDisabled   master   5d7h    v1.18.10+vmware.1   xxx.xxx.xxx.xxx   xxx.xxx.xxx.xxx   VMware Photon OS/Linux   4.19.150-1.ph3     containerd://1.4.1
istio-md-0-6df599dcd7-x848j   Ready                         <none>   5d7h    v1.18.10+vmware.1   xxx.xxx.xxx.xxx   xxx.xxx.xxx.xxx   VMware Photon OS/Linux   4.19.150-1.ph3     containerd://1.4.1
$ kubectl get nodes -o wide
NAME                          STATUS                     ROLES    AGE    VERSION             INTERNAL-IP     EXTERNAL-IP     OS-IMAGE                 KERNEL-VERSION     CONTAINER-RUNTIME
istio-control-plane-dbv2w     Ready                      master   86m    v1.19.8+vmware.1    xxx.xxx.xxx.xxx   xxx.xxx.xxx.xxx   Ubuntu 20.04.2 LTS       5.4.0-66-generic   containerd://1.4.3
istio-md-0-6df599dcd7-x848j   Ready,SchedulingDisabled   <none>   5d8h   v1.18.10+vmware.1   xxx.xxx.xxx.xxx   xxx.xxx.xxx.xxx   VMware Photon OS/Linux   4.19.150-1.ph3     containerd://1.4.1
istio-md-0-8797f69b7-6hg79    Ready                      <none>   73m    v1.19.8+vmware.1    xxx.xxx.xxx.xxx   xxx.xxx.xxx.xxx   Ubuntu 20.04.2 LTS       5.4.0-66-generic   containerd://1.4.3
$ kubectl get nodes -o wide
NAME                          STATUS                     ROLES    AGE    VERSION             INTERNAL-IP     EXTERNAL-IP     OS-IMAGE                 KERNEL-VERSION     CONTAINER-RUNTIME
istio-control-plane-dbv2w     Ready                      master   87m    v1.19.8+vmware.1    xxx.xxx.xxx.xxx   xxx.xxx.xxx.xxx   Ubuntu 20.04.2 LTS       5.4.0-66-generic   containerd://1.4.3
istio-md-0-6df599dcd7-x848j   Ready,SchedulingDisabled   <none>   5d8h   v1.18.10+vmware.1   xxx.xxx.xxx.xxx   xxx.xxx.xxx.xxx   VMware Photon OS/Linux   4.19.150-1.ph3     containerd://1.4.1
istio-md-0-8797f69b7-6hg79    Ready                      <none>   73m    v1.19.8+vmware.1    xxx.xxx.xxx.xxx   xxx.xxx.xxx.xxx   Ubuntu 20.04.2 LTS       5.4.0-66-generic   containerd://1.4.3

The resources running on the istio cluster themselves looked fine, so I deleted the stuck node.
$ kubectl get all -A
NAMESPACE        NAME                                                    READY   STATUS    RESTARTS   AGE
istio-system     pod/grafana-94f5bf75b-vlzkr                             1/1     Running   0          73m
istio-system     pod/istio-egressgateway-7f9744768f-8pg2r                1/1     Running   0          5d8h
istio-system     pod/istio-ingressgateway-7c89f4dbb-jsgkt                1/1     Running   0          5d8h
istio-system     pod/istiod-5749997fb9-zw7s5                             1/1     Running   0          5d8h
istio-system     pod/jaeger-5c7675974-2zpch                              1/1     Running   0          73m
istio-system     pod/kiali-d4fdb9cdb-gc94h                               1/1     Running   0          73m
istio-system     pod/prometheus-7d76687994-8jnxm                         2/2     Running   0          73m
istio-test       pod/details-v1-66b6955995-5dgvl                         2/2     Running   0          73m
istio-test       pod/productpage-v1-5d9b4c9849-7n6sh                     2/2     Running   0          73m
istio-test       pod/ratings-v1-fd78f799f-5pq4v                          2/2     Running   0          73m
istio-test       pod/reviews-v1-6549ddccc5-kqjzv                         2/2     Running   0          73m
istio-test       pod/reviews-v2-76c4865449-spck4                         2/2     Running   0          73m
istio-test       pod/reviews-v3-6b554c875-7wbfk                          2/2     Running   0          73m
kube-system      pod/antrea-agent-98hf9                                  2/2     Running   3          87m
kube-system      pod/antrea-agent-rmb8r                                  2/2     Running   0          5d8h
kube-system      pod/antrea-agent-vxttz                                  2/2     Running   0          73m
kube-system      pod/antrea-controller-5c5fb7579b-4c8t7                  1/1     Running   0          73m
kube-system      pod/coredns-5b7b55f9f8-29zxw                            1/1     Running   0          73m
kube-system      pod/coredns-5b7b55f9f8-s7z2m                            1/1     Running   0          74m
kube-system      pod/etcd-istio-control-plane-dbv2w                      1/1     Running   5          82m
kube-system      pod/kube-apiserver-istio-control-plane-dbv2w            1/1     Running   4          82m
kube-system      pod/kube-controller-manager-istio-control-plane-dbv2w   1/1     Running   0          81m
kube-system      pod/kube-proxy-27jnr                                    1/1     Running   0          74m
kube-system      pod/kube-proxy-9pc8r                                    1/1     Running   0          73m
kube-system      pod/kube-proxy-pg5mj                                    1/1     Running   0          74m
kube-system      pod/kube-scheduler-istio-control-plane-dbv2w            1/1     Running   0          82m
kube-system      pod/kube-vip-istio-control-plane-dbv2w                  1/1     Running   0          81m
kube-system      pod/vsphere-cloud-controller-manager-7472v              1/1     Running   0          82m
kube-system      pod/vsphere-csi-controller-6bd6f88f9c-wwnxx             5/5     Running   0          73m
kube-system      pod/vsphere-csi-node-2mkt7                              3/3     Running   0          73m
kube-system      pod/vsphere-csi-node-hkdlc                              3/3     Running   0          87m
kube-system      pod/vsphere-csi-node-qkb2r                              3/3     Running   0          5d8h
metallb-system   pod/controller-6568695fd8-4rx7m                         1/1     Running   0          73m
metallb-system   pod/speaker-dkrbc                                       1/1     Running   0          5d8h
metallb-system   pod/speaker-pcgcf                                       1/1     Running   0          73m
metallb-system   pod/speaker-tq2qt                                       1/1     Running   0          82m

NAMESPACE      NAME                               TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)                                                                      AGE
default        service/kubernetes                 ClusterIP      100.64.0.1       <none>           443/TCP                                                                      5d8h
istio-system   service/grafana                    ClusterIP      100.70.13.60     <none>           3000/TCP                                                                     5d7h
istio-system   service/istio-egressgateway        ClusterIP      100.70.91.13     <none>           80/TCP,443/TCP,15443/TCP                                                     5d8h
istio-system   service/istio-ingressgateway       LoadBalancer   100.64.163.178   xxx.xxx.xx.xxx   15021:32500/TCP,80:31272/TCP,443:30877/TCP,31400:30786/TCP,15443:30756/TCP   5d8h
istio-system   service/istiod                     ClusterIP      100.68.18.243    <none>           15010/TCP,15012/TCP,443/TCP,15014/TCP                                        5d8h
istio-system   service/jaeger-collector           ClusterIP      100.68.245.63    <none>           14268/TCP,14250/TCP                                                          5d7h
istio-system   service/kiali                      ClusterIP      100.64.102.154   <none>           20001/TCP,9090/TCP                                                           5d7h
istio-system   service/prometheus                 ClusterIP      100.69.42.164    <none>           9090/TCP                                                                     5d7h
istio-system   service/tracing                    ClusterIP      100.69.146.45    <none>           80/TCP                                                                       5d7h
istio-system   service/zipkin                     ClusterIP      100.65.162.115   <none>           9411/TCP                                                                     5d7h
istio-test     service/details                    ClusterIP      100.68.144.112   <none>           9080/TCP                                                                     5d8h
istio-test     service/productpage                ClusterIP      100.71.188.179   <none>           9080/TCP                                                                     5d8h
istio-test     service/ratings                    ClusterIP      100.71.193.208   <none>           9080/TCP                                                                     5d8h
istio-test     service/reviews                    ClusterIP      100.67.101.161   <none>           9080/TCP                                                                     5d8h
kube-system    service/antrea                     ClusterIP      100.66.250.52    <none>           443/TCP                                                                      5d8h
kube-system    service/cloud-controller-manager   NodePort       100.70.58.23     <none>           443:32434/TCP                                                                5d8h
kube-system    service/kube-dns                   ClusterIP      100.64.0.10      <none>           53/UDP,53/TCP,9153/TCP                                                       5d8h

NAMESPACE        NAME                                              DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                     AGE
kube-system      daemonset.apps/antrea-agent                       3         3         3       3            3           kubernetes.io/os=linux            5d8h
kube-system      daemonset.apps/kube-proxy                         3         3         3       3            3           kubernetes.io/os=linux            5d8h
kube-system      daemonset.apps/vsphere-cloud-controller-manager   1         1         1       1            1           node-role.kubernetes.io/master=   5d8h
kube-system      daemonset.apps/vsphere-csi-node                   3         3         3       3            3           <none>                            5d8h
metallb-system   daemonset.apps/speaker                            3         3         3       3            3           kubernetes.io/os=linux            5d8h

NAMESPACE        NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
istio-system     deployment.apps/grafana                  1/1     1            1           5d7h
istio-system     deployment.apps/istio-egressgateway      1/1     1            1           5d8h
istio-system     deployment.apps/istio-ingressgateway     1/1     1            1           5d8h
istio-system     deployment.apps/istiod                   1/1     1            1           5d8h
istio-system     deployment.apps/jaeger                   1/1     1            1           5d7h
istio-system     deployment.apps/kiali                    1/1     1            1           5d7h
istio-system     deployment.apps/prometheus               1/1     1            1           5d7h
istio-test       deployment.apps/details-v1               1/1     1            1           5d8h
istio-test       deployment.apps/productpage-v1           1/1     1            1           5d8h
istio-test       deployment.apps/ratings-v1               1/1     1            1           5d8h
istio-test       deployment.apps/reviews-v1               1/1     1            1           5d8h
istio-test       deployment.apps/reviews-v2               1/1     1            1           5d8h
istio-test       deployment.apps/reviews-v3               1/1     1            1           5d8h
kube-system      deployment.apps/antrea-controller        1/1     1            1           5d8h
kube-system      deployment.apps/coredns                  2/2     2            2           5d8h
kube-system      deployment.apps/vsphere-csi-controller   1/1     1            1           5d8h
metallb-system   deployment.apps/controller               1/1     1            1           5d8h

NAMESPACE        NAME                                                DESIRED   CURRENT   READY   AGE
istio-system     replicaset.apps/grafana-94f5bf75b                   1         1         1       5d7h
istio-system     replicaset.apps/istio-egressgateway-7f9744768f      1         1         1       5d8h
istio-system     replicaset.apps/istio-ingressgateway-7c89f4dbb      1         1         1       5d8h
istio-system     replicaset.apps/istiod-5749997fb9                   1         1         1       5d8h
istio-system     replicaset.apps/jaeger-5c7675974                    1         1         1       5d7h
istio-system     replicaset.apps/kiali-d4fdb9cdb                     1         1         1       5d7h
istio-system     replicaset.apps/prometheus-7d76687994               1         1         1       5d7h
istio-test       replicaset.apps/details-v1-66b6955995               1         1         1       5d8h
istio-test       replicaset.apps/productpage-v1-5d9b4c9849           1         1         1       5d8h
istio-test       replicaset.apps/ratings-v1-fd78f799f                1         1         1       5d8h
istio-test       replicaset.apps/reviews-v1-6549ddccc5               1         1         1       5d8h
istio-test       replicaset.apps/reviews-v2-76c4865449               1         1         1       5d8h
istio-test       replicaset.apps/reviews-v3-6b554c875                1         1         1       5d8h
kube-system      replicaset.apps/antrea-controller-5c5fb7579b        1         1         1       5d8h
kube-system      replicaset.apps/coredns-585d96fb45                  0         0         0       5d8h
kube-system      replicaset.apps/coredns-5b7b55f9f8                  2         2         2       74m
kube-system      replicaset.apps/coredns-f75686746                   0         0         0       74m
kube-system      replicaset.apps/vsphere-csi-controller-6bd6f88f9c   1         1         1       5d8h
metallb-system   replicaset.apps/controller-6568695fd8               1         1         1       5d8h
$ kubectl delete node istio-md-0-6df599dcd7-x848j
node "istio-md-0-6df599dcd7-x848j" deleted
$ kubectl get nodes -o wide
NAME                         STATUS   ROLES    AGE    VERSION            INTERNAL-IP     EXTERNAL-IP     OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
istio-control-plane-dbv2w    Ready    master   124m   v1.19.8+vmware.1   xxx.xxx.xxx.xxx   xxx.xxx.xxx.xxx   Ubuntu 20.04.2 LTS   5.4.0-66-generic   containerd://1.4.3
istio-md-0-8797f69b7-6hg79   Ready    <none>   111m   v1.19.8+vmware.1   xxx.xxx.xxx.xxx   xxx.xxx.xxx.xxx   Ubuntu 20.04.2 LTS   5.4.0-66-generic   containerd://1.4.3
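
As a final check on the Cluster API side, you can confirm the stale Machine object is gone from the management cluster (Machines for workload clusters live there, in the default namespace in this environment):

$ kubectl --context tkgm-mylab-admin@tkgm-mylab get machines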

Next, let's upgrade the devsecops cluster.
$ tanzu cluster upgrade devsecops --tkr v1.19.8---vmware.1-tkg.1 --os-name ubuntu
Upgrading workload cluster 'devsecops' to kubernetes version 'v1.19.8+vmware.1'. Are you sure? [y/N]: y
Validating configuration...
Verifying kubernetes version...
Retrieving configuration for upgrade cluster...
Create InfrastructureTemplate for upgrade...
Upgrading control plane nodes...
Patching KubeadmControlPlane with the kubernetes version v1.19.8+vmware.1...
Waiting for kubernetes version to be updated for control plane nodes
Upgrading worker nodes...
Patching MachineDeployment with the kubernetes version v1.19.8+vmware.1...
Waiting for kubernetes version to be updated for worker nodes...
updating additional components: 'metadata/tkg,addons-management/kapp-controller' ...
Cluster 'devsecops' successfully upgraded to kubernetes version 'v1.19.8+vmware.1'

For this cluster, the upgrade appears to have completed successfully without any manual node deletion.
$ kubectl get nodes -o wide
NAME                            STATUS   ROLES    AGE   VERSION             INTERNAL-IP     EXTERNAL-IP     OS-IMAGE                 KERNEL-VERSION   CONTAINER-RUNTIME
devsecops-control-plane-kw5wx   Ready    master   13d   v1.18.10+vmware.1   xxx.xxx.xxx.xxx   xxx.xxx.xxx.xxx   VMware Photon OS/Linux   4.19.150-1.ph3   containerd://1.4.1
devsecops-md-0-7f8445b7-42fqz   Ready    <none>   9d    v1.18.10+vmware.1   xxx.xxx.xxx.xxx   xxx.xxx.xxx.xxx   VMware Photon OS/Linux   4.19.150-1.ph3   containerd://1.4.1
$ kubectl get nodes -o wide
NAME                            STATUS   ROLES    AGE    VERSION             INTERNAL-IP     EXTERNAL-IP     OS-IMAGE                 KERNEL-VERSION     CONTAINER-RUNTIME
devsecops-control-plane-kw5wx   Ready    master   13d    v1.18.10+vmware.1   xxx.xxx.xxx.xxx   xxx.xxx.xxx.xxx   VMware Photon OS/Linux   4.19.150-1.ph3     containerd://1.4.1
devsecops-control-plane-nlffq   Ready    master   112s   v1.19.8+vmware.1    xxx.xxx.xxx.xxx   xxx.xxx.xxx.xxx   Ubuntu 20.04.2 LTS       5.4.0-66-generic   containerd://1.4.3
devsecops-md-0-7f8445b7-42fqz   Ready    <none>   9d     v1.18.10+vmware.1   xxx.xxx.xxx.xxx   xxx.xxx.xxx.xxx   VMware Photon OS/Linux   4.19.150-1.ph3     containerd://1.4.1
$ kubectl get nodes -o wide
NAME                            STATUS                        ROLES    AGE     VERSION             INTERNAL-IP     EXTERNAL-IP     OS-IMAGE                 KERNEL-VERSION     CONTAINER-RUNTIME
devsecops-control-plane-kw5wx   NotReady,SchedulingDisabled   master   13d     v1.18.10+vmware.1   xxx.xxx.xxx.xxx   xxx.xxx.xxx.xxx   VMware Photon OS/Linux   4.19.150-1.ph3     containerd://1.4.1
devsecops-control-plane-nlffq   Ready                         master   6m46s   v1.19.8+vmware.1    xxx.xxx.xxx.xxx   xxx.xxx.xxx.xxx   Ubuntu 20.04.2 LTS       5.4.0-66-generic   containerd://1.4.3
devsecops-md-0-7f8445b7-42fqz   Ready                         <none>   9d      v1.18.10+vmware.1   xxx.xxx.xxx.xxx   xxx.xxx.xxx.xxx   VMware Photon OS/Linux   4.19.150-1.ph3     containerd://1.4.1
$ kubectl get nodes -o wide
NAME                            STATUS                        ROLES    AGE     VERSION             INTERNAL-IP     EXTERNAL-IP     OS-IMAGE                 KERNEL-VERSION     CONTAINER-RUNTIME
devsecops-control-plane-kw5wx   NotReady,SchedulingDisabled   master   13d     v1.18.10+vmware.1   xxx.xxx.xxx.xxx   xxx.xxx.xxx.xxx   VMware Photon OS/Linux   4.19.150-1.ph3     containerd://1.4.1
devsecops-control-plane-nlffq   Ready                         master   8m46s   v1.19.8+vmware.1    xxx.xxx.xxx.xxx   xxx.xxx.xxx.xxx   Ubuntu 20.04.2 LTS       5.4.0-66-generic   containerd://1.4.3
devsecops-md-0-7f8445b7-42fqz   Ready                         <none>   9d      v1.18.10+vmware.1   xxx.xxx.xxx.xxx   xxx.xxx.xxx.xxx   VMware Photon OS/Linux   4.19.150-1.ph3     containerd://1.4.1
$ kubectl get nodes -o wide
NAME                            STATUS                        ROLES    AGE     VERSION             INTERNAL-IP     EXTERNAL-IP     OS-IMAGE                 KERNEL-VERSION     CONTAINER-RUNTIME
devsecops-control-plane-kw5wx   NotReady,SchedulingDisabled   master   13d     v1.18.10+vmware.1   xxx.xxx.xxx.xxx   xxx.xxx.xxx.xxx   VMware Photon OS/Linux   4.19.150-1.ph3     containerd://1.4.1
devsecops-control-plane-nlffq   Ready                         master   8m49s   v1.19.8+vmware.1    xxx.xxx.xxx.xxx   xxx.xxx.xxx.xxx   Ubuntu 20.04.2 LTS       5.4.0-66-generic   containerd://1.4.3
devsecops-md-0-7f8445b7-42fqz   Ready                         <none>   9d      v1.18.10+vmware.1   xxx.xxx.xxx.xxx   xxx.xxx.xxx.xxx   VMware Photon OS/Linux   4.19.150-1.ph3     containerd://1.4.1
$ kubectl get nodes -o wide
NAME                            STATUS   ROLES    AGE   VERSION             INTERNAL-IP     EXTERNAL-IP     OS-IMAGE                 KERNEL-VERSION     CONTAINER-RUNTIME
devsecops-control-plane-nlffq   Ready    master   10m   v1.19.8+vmware.1    xxx.xxx.xxx.xxx   xxx.xxx.xxx.xxx   Ubuntu 20.04.2 LTS       5.4.0-66-generic   containerd://1.4.3
devsecops-md-0-7f8445b7-42fqz   Ready    <none>   9d    v1.18.10+vmware.1   xxx.xxx.xxx.xxx   xxx.xxx.xxx.xxx   VMware Photon OS/Linux   4.19.150-1.ph3     containerd://1.4.1
$ kubectl get nodes -o wide
NAME                             STATUS   ROLES    AGE   VERSION             INTERNAL-IP     EXTERNAL-IP     OS-IMAGE                 KERNEL-VERSION     CONTAINER-RUNTIME
devsecops-control-plane-nlffq    Ready    master   11m   v1.19.8+vmware.1    xxx.xxx.xxx.xxx   xxx.xxx.xxx.xxx   Ubuntu 20.04.2 LTS       5.4.0-66-generic   containerd://1.4.3
devsecops-md-0-7f8445b7-42fqz    Ready    <none>   9d    v1.18.10+vmware.1   xxx.xxx.xxx.xxx   xxx.xxx.xxx.xxx   VMware Photon OS/Linux   4.19.150-1.ph3     containerd://1.4.1
devsecops-md-0-fbb5cf4d5-wptfs   Ready    <none>   34s   v1.19.8+vmware.1    xxx.xxx.xxx.xxx   xxx.xxx.xxx.xxx   Ubuntu 20.04.2 LTS       5.4.0-66-generic   containerd://1.4.3
$ kubectl get nodes -o wide
NAME                             STATUS                     ROLES    AGE   VERSION             INTERNAL-IP     EXTERNAL-IP     OS-IMAGE                 KERNEL-VERSION     CONTAINER-RUNTIME
devsecops-control-plane-nlffq    Ready                      master   11m   v1.19.8+vmware.1    xxx.xxx.xxx.xxx   xxx.xxx.xxx.xxx   Ubuntu 20.04.2 LTS       5.4.0-66-generic   containerd://1.4.3
devsecops-md-0-7f8445b7-42fqz    Ready,SchedulingDisabled   <none>   9d    v1.18.10+vmware.1   xxx.xxx.xxx.xxx   xxx.xxx.xxx.xxx   VMware Photon OS/Linux   4.19.150-1.ph3     containerd://1.4.1
devsecops-md-0-fbb5cf4d5-wptfs   Ready                      <none>   36s   v1.19.8+vmware.1    xxx.xxx.xxx.xxx   xxx.xxx.xxx.xxx   Ubuntu 20.04.2 LTS       5.4.0-66-generic   containerd://1.4.3
$ kubectl get nodes -o wide
NAME                             STATUS   ROLES    AGE   VERSION            INTERNAL-IP     EXTERNAL-IP     OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
devsecops-control-plane-nlffq    Ready    master   40h   v1.19.8+vmware.1   xxx.xxx.xxx.xxx   xxx.xxx.xxx.xxx   Ubuntu 20.04.2 LTS   5.4.0-66-generic   containerd://1.4.3
devsecops-md-0-fbb5cf4d5-wptfs   Ready    <none>   40h   v1.19.8+vmware.1   xxx.xxx.xxx.xxx   xxx.xxx.xxx.xxx   Ubuntu 20.04.2 LTS   5.4.0-66-generic   containerd://1.4.3
$ kubectl get pods -A
NAMESPACE                NAME                                                    READY   STATUS    RESTARTS   AGE
build-service            build-pod-image-fetcher-p8q7s                           5/5     Running   0          40h
build-service            cert-injection-webhook-5f6d8bf4bf-f5x8h                 1/1     Running   0          40h
build-service            secret-syncer-controller-58f8d9cf6-xpkcv                1/1     Running   0          40h
build-service            smart-warmer-image-fetcher-2zljd                        4/4     Running   0          40h
build-service            warmer-controller-9b5cbb6f6-pklcf                       1/1     Running   0          40h
concourse                concourse1-postgresql-0                                 1/1     Running   0          40h
concourse                concourse1-web-68d866988f-8xx7q                         1/1     Running   0          40h
concourse                concourse1-worker-0                                     1/1     Running   0          40h
concourse                concourse1-worker-1                                     1/1     Running   0          40h
default                  simple-app-c86dfcd6d-xbmv7                              1/1     Running   0          40h
kpack                    kpack-controller-6cccff9ddb-dn4rh                       1/1     Running   0          40h
kpack                    kpack-webhook-665498689d-6jxzz                          1/1     Running   0          40h
kube-system              antrea-agent-cxxqz                                      2/2     Running   0          40h
kube-system              antrea-agent-z9zbs                                      2/2     Running   1          40h
kube-system              antrea-controller-5c5fb7579b-xc8kk                      1/1     Running   0          40h
kube-system              coredns-5b7b55f9f8-k6hxr                                1/1     Running   0          40h
kube-system              coredns-5b7b55f9f8-q22j4                                1/1     Running   0          40h
kube-system              etcd-devsecops-control-plane-nlffq                      1/1     Running   0          40h
kube-system              kube-apiserver-devsecops-control-plane-nlffq            1/1     Running   0          40h
kube-system              kube-controller-manager-devsecops-control-plane-nlffq   1/1     Running   0          40h
kube-system              kube-proxy-c4vz4                                        1/1     Running   0          40h
kube-system              kube-proxy-kclvx                                        1/1     Running   0          40h
kube-system              kube-scheduler-devsecops-control-plane-nlffq            1/1     Running   0          40h
kube-system              kube-vip-devsecops-control-plane-nlffq                  1/1     Running   0          40h
kube-system              vsphere-cloud-controller-manager-66dpr                  1/1     Running   0          40h
kube-system              vsphere-csi-controller-6c677cb64d-7sr2m                 5/5     Running   0          40h
kube-system              vsphere-csi-node-9dn8f                                  3/3     Running   0          40h
kube-system              vsphere-csi-node-dk2jv                                  3/3     Running   0          40h
metallb-system           controller-6568695fd8-cm67z                             1/1     Running   0          40h
metallb-system           speaker-lm272                                           1/1     Running   0          40h
metallb-system           speaker-nnm8t                                           1/1     Running   0          40h
stacks-operator-system   controller-manager-7d6db5dff9-ms6hc                     1/1     Running   0          40h

Both the Management Cluster and the Workload Clusters have now been upgraded.

$ tanzu cluster list --include-management-cluster
  NAME        NAMESPACE   STATUS   CONTROLPLANE  WORKERS  KUBERNETES        ROLES       PLAN
  devsecops   default     running  1/1           1/1      v1.19.8+vmware.1  <none>      dev
  istio       default     running  1/1           1/1      v1.19.8+vmware.1  <none>      dev
  tkgm-mylab  tkg-system  running  1/1           1/1      v1.20.4+vmware.1  management  dev


Summary

Since this environment does not use the TKG Extensions, this completes the upgrade work. It went well enough here, but in practice I would recommend taking a backup of your clusters before attempting the upgrade.
I have not done the "Register Add-ons" step yet, so I plan to try that later.
