Creating a Tanzu Kubernetes Grid (TKGm) Workload Cluster using a vSphere with Tanzu Supervisor Cluster (TKGs)

A memo for my own reference. Following "Use the Tanzu CLI with a vSphere with Tanzu Supervisor Cluster", I deploy a TKGm Workload Cluster through a Supervisor Cluster.


Prerequisites

  • A vSphere with Tanzu environment has already been deployed
    • vSphere 7U2 is used
  • Tanzu Kubernetes Grid (TKGm) v1.3.0


Installing the Tanzu CLI

Install the tanzu CLI on a jumpbox (Ubuntu) that can access the vSphere with Tanzu environment.
$ tar xvf tanzu-cli-bundle-linux-amd64.tar
cli/
cli/core/
cli/core/v1.3.0/
cli/core/v1.3.0/tanzu-core-linux_amd64
cli/core/plugin.yaml
cli/cluster/
cli/cluster/v1.3.0/
cli/cluster/v1.3.0/tanzu-cluster-linux_amd64
cli/cluster/plugin.yaml
cli/login/
cli/login/v1.3.0/
cli/login/v1.3.0/tanzu-login-linux_amd64
cli/login/plugin.yaml
cli/pinniped-auth/
cli/pinniped-auth/v1.3.0/
cli/pinniped-auth/v1.3.0/tanzu-pinniped-auth-linux_amd64
cli/pinniped-auth/plugin.yaml
cli/kubernetes-release/
cli/kubernetes-release/v1.3.0/
cli/kubernetes-release/v1.3.0/tanzu-kubernetes-release-linux_amd64
cli/kubernetes-release/plugin.yaml
cli/management-cluster/
cli/management-cluster/v1.3.0/
cli/management-cluster/v1.3.0/tanzu-management-cluster-linux_amd64
cli/management-cluster/v1.3.0/test/
cli/management-cluster/v1.3.0/test/tanzu-management-cluster-test-linux_amd64
cli/management-cluster/plugin.yaml
cli/manifest.yaml
cli/ytt-linux-amd64-v0.30.0+vmware.1.gz
cli/kapp-linux-amd64-v0.33.0+vmware.1.gz
cli/imgpkg-linux-amd64-v0.2.0+vmware.1.gz
cli/kbld-linux-amd64-v0.24.0+vmware.1.gz
$ sudo install cli/core/v1.3.0/tanzu-core-linux_amd64 /usr/local/bin/tanzu
$ tanzu version
version: v1.3.0
buildDate: 2021-03-19
sha: 06ddc9a
$ tanzu plugin install --local cli all
$ tanzu plugin list
  NAME                LATEST VERSION  DESCRIPTION                                                        REPOSITORY  VERSION  STATUS
  alpha               v1.3.0          Alpha CLI commands                                                 core                 not installed
  cluster             v1.3.0          Kubernetes cluster operations                                      core        v1.3.0   installed
  login               v1.3.0          Login to the platform                                              core        v1.3.0   installed
  pinniped-auth       v1.3.0          Pinniped authentication operations (usually not directly invoked)  core        v1.3.0   installed
  kubernetes-release  v1.3.0          Kubernetes release operations                                      core        v1.3.0   installed
  management-cluster  v1.3.0          Kubernetes management cluster operations                           tkg         v1.3.0   installed
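
The bundle also contains the Carvel tools (ytt, kapp, imgpkg, kbld) seen in the tar listing above. A minimal sketch for installing ytt the same way as the tanzu binary (file name taken from the listing; repeat for kapp, imgpkg, and kbld):
$ cd cli
$ gunzip ytt-linux-amd64-v0.30.0+vmware.1.gz
$ sudo install ytt-linux-amd64-v0.30.0+vmware.1 /usr/local/bin/ytt
$ ytt version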


Registering the Supervisor Cluster

Register the Supervisor Cluster with the tanzu CLI as a Management Cluster. Beforehand, log in to the Supervisor Cluster and set the kubectl context to it.
With kubectl-vsphere v0.0.8 and later, setting the password in the KUBECTL_VSPHERE_PASSWORD environment variable avoids being prompted for it on every login.
$ kubectl-vsphere version
kubectl-vsphere: version 0.0.8, build 17570859, change 8724671
$ KUBECTL_VSPHERE_PASSWORD=xxxx kubectl-vsphere login --server=<supervisor-cluster-kube-api-vip> --vsphere-username administrator@vsphere.local --insecure-skip-tls-verify
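
kubectl-vsphere login adds a kubeconfig context named after the Supervisor Cluster address (the same name the tanzu login prompt asks for below); switch to it before running tanzu login:
$ kubectl config use-context <supervisor-cluster-kube-api-vip>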
$ tanzu login
? Select login type Local kubeconfig
? Enter path to kubeconfig (if any) /home/ubuntu/.kube/config
? Enter kube context to use <supervisor-cluster-kube-api-vip>
? Give the server a name svc
✔  successfully logged in to management cluster using the kubeconfig svc


Creating the Cluster Config

Create the configuration file used to create the Workload Cluster. I put it together while referring to "Step 2: Configure Cluster Parameters". For INFRASTRUCTURE_PROVIDER, specify tkg-service-vsphere, since specifying vsphere does not work properly here. For NAMESPACE, specify the name of the Supervisor Namespace into which the Workload Cluster will be deployed.
$ cat ~/.tanzu/tkg/cluster-config.yaml
CONTROL_PLANE_STORAGE_CLASS: default
WORKER_STORAGE_CLASS: default
DEFAULT_STORAGE_CLASS: default
CONTROL_PLANE_VM_CLASS: best-effort-xsmall
WORKER_VM_CLASS: best-effort-xsmall
SERVICE_CIDR: 100.64.0.0/13
CLUSTER_CIDR: 100.96.0.0/11
SERVICE_DOMAIN: cluster.local
NAMESPACE: tadashi
CLUSTER_PLAN: dev
INFRASTRUCTURE_PROVIDER: tkg-service-vsphere
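
The same docs page describes further optional parameters; for example, node counts can be set explicitly instead of relying on the plan defaults (illustrative values only, not used in this deployment):
CONTROL_PLANE_MACHINE_COUNT: 1
WORKER_MACHINE_COUNT: 3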


Deploying the Workload Cluster

Create the Workload Cluster with the tanzu CLI. Only the interface for creating a Tanzu Kubernetes Cluster (TKC) changes to the tanzu CLI; what gets built is still a TKC, deployed into the specified Supervisor Namespace.
First, check the available Tanzu Kubernetes Releases, one of which has to be specified at deploy time.
$ kubectl get tkr
NAME                                VERSION                          READY   COMPATIBLE   CREATED   UPDATES AVAILABLE
v1.16.12---vmware.1-tkg.1.da7afe7   1.16.12+vmware.1-tkg.1.da7afe7   True    True         24d       [1.17.17+vmware.1-tkg.1.d44d45a 1.16.14+vmware.1-tkg.1.ada4837]
v1.16.14---vmware.1-tkg.1.ada4837   1.16.14+vmware.1-tkg.1.ada4837   True    True         24d       [1.17.17+vmware.1-tkg.1.d44d45a]
v1.16.8---vmware.1-tkg.3.60d2ffd    1.16.8+vmware.1-tkg.3.60d2ffd    False   False        24d       [1.17.17+vmware.1-tkg.1.d44d45a 1.16.14+vmware.1-tkg.1.ada4837]
v1.17.11---vmware.1-tkg.1.15f1e18   1.17.11+vmware.1-tkg.1.15f1e18   True    True         24d       [1.18.15+vmware.1-tkg.1.600e412 1.17.17+vmware.1-tkg.1.d44d45a]
v1.17.11---vmware.1-tkg.2.ad3d374   1.17.11+vmware.1-tkg.2.ad3d374   True    True         24d       [1.18.15+vmware.1-tkg.1.600e412 1.17.17+vmware.1-tkg.1.d44d45a]
v1.17.13---vmware.1-tkg.2.2c133ed   1.17.13+vmware.1-tkg.2.2c133ed   True    True         24d       [1.18.15+vmware.1-tkg.1.600e412 1.17.17+vmware.1-tkg.1.d44d45a]
v1.17.17---vmware.1-tkg.1.d44d45a   1.17.17+vmware.1-tkg.1.d44d45a   True    True         24d       [1.18.15+vmware.1-tkg.1.600e412]
v1.17.7---vmware.1-tkg.1.154236c    1.17.7+vmware.1-tkg.1.154236c    True    True         24d       [1.18.15+vmware.1-tkg.1.600e412 1.17.17+vmware.1-tkg.1.d44d45a]
v1.17.8---vmware.1-tkg.1.5417466    1.17.8+vmware.1-tkg.1.5417466    True    True         24d       [1.18.15+vmware.1-tkg.1.600e412 1.17.17+vmware.1-tkg.1.d44d45a]
v1.18.10---vmware.1-tkg.1.3a6cd48   1.18.10+vmware.1-tkg.1.3a6cd48   True    True         24d       [1.19.7+vmware.1-tkg.1.fc82c41 1.18.15+vmware.1-tkg.1.600e412]
v1.18.15---vmware.1-tkg.1.600e412   1.18.15+vmware.1-tkg.1.600e412   True    True         24d       [1.19.7+vmware.1-tkg.1.fc82c41]
v1.18.5---vmware.1-tkg.1.c40d30d    1.18.5+vmware.1-tkg.1.c40d30d    True    True         24d       [1.19.7+vmware.1-tkg.1.fc82c41 1.18.15+vmware.1-tkg.1.600e412]
v1.19.7---vmware.1-tkg.1.fc82c41    1.19.7+vmware.1-tkg.1.fc82c41    True    True         24d       [1.20.2+vmware.1-tkg.1.1d4f79a]
v1.20.2---vmware.1-tkg.1.1d4f79a    1.20.2+vmware.1-tkg.1.1d4f79a    True    True         2d6h
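
The chosen TKR needs a matching virtual machine image in the content library associated with the Supervisor Cluster. This can be checked from the Supervisor Cluster context (output varies by environment):
$ kubectl get virtualmachineimages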

Next, add a configuration to customize the Workload Cluster, although in this case it did not take effect (perhaps because the containerd configuration format differs, or because such settings simply cannot be applied to a Workload Cluster (TKC) created via the Supervisor Cluster).
$ cat ~/.tanzu/tkg/providers/infrastructure-tkg-service-vsphere/ytt/containerd-multi-image-registry.yaml
#@ load("@ytt:overlay", "overlay")

#@overlay/match by=overlay.subset({"kind":"KubeadmControlPlane"})
---
spec:
  kubeadmConfigSpec:
    preKubeadmCommands:
    #@overlay/append
    - echo '  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]' >> /etc/containerd/config.toml
    #@overlay/append
    - echo '    endpoint = ["https://registry-1.docker.io"]' >> /etc/containerd/config.toml
    #@overlay/append
    - systemctl restart containerd

#@overlay/match by=overlay.subset({"kind":"KubeadmConfigTemplate"})
---
spec:
  template:
    spec:
      preKubeadmCommands:
      #@overlay/append
      - echo '  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]' >> /etc/containerd/config.toml
      #@overlay/append
      - echo '    endpoint = ["https://registry-1.docker.io"]' >> /etc/containerd/config.toml
      #@overlay/append
      - systemctl restart containerd


Deploy the Workload Cluster.
$ tanzu cluster create screwdriver --tkr v1.20.2---vmware.1-tkg.1.1d4f79a --file .tanzu/tkg/cluster-config.yaml -v 6
Using namespace from config:
You are trying to create a cluster with kubernetes version '1.20.2+vmware.1-tkg.1.1d4f79a' on vSphere with Tanzu, Please make sure virtual machine image for the same is available in the cluster content library.
Do you want to continue? [y/N]: y
Validating configuration...
Waiting for the Tanzu Kubernetes Cluster service for vSphere workload cluster
cluster is still not provisioned, retrying, retrying
cluster is still not provisioned, retrying, retrying
...(SNIP)...
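
While the create command keeps retrying, provisioning progress can also be followed from the Supervisor Cluster context, where the Workload Cluster shows up as a TanzuKubernetesCluster (short name tkc) resource:
$ kubectl get tanzukubernetescluster screwdriver -n tadashi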

Access the newly created Workload Cluster.
$ tanzu cluster kubeconfig get screwdriver --export-file pez-screwdriver-kubeconfig --admin --namespace tadashi
$ export KUBECONFIG=~/pez-screwdriver-kubeconfig
$ kubectl get nodes
NAME                                         STATUS   ROLES                  AGE   VERSION
screwdriver-control-plane-qqq55              Ready    control-plane,master   15m   v1.20.2+vmware.1
screwdriver-workers-7f86w-7cdbcbd89c-5dftz   Ready    <none>                 10m   v1.20.2+vmware.1


Logging in to the nodes

Without a Supervisor Cluster, you would log in using the SSH key registered when the Management Cluster was created; but since what we built is really a TKC, I log in instead using the method from "SSH to Tanzu Kubernetes Cluster Nodes as the System User Using a Password".
$ kubectl config use-context tadashi
$ kubectl get virtualmachines
NAME                                         AGE
screwdriver-control-plane-qqq55              28m
screwdriver-workers-7f86w-7cdbcbd89c-5dftz   22m
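
The control plane VM's IP address for the SSH step below can be pulled from the VirtualMachine status (a sketch; assumes the vmIp status field is populated once the VM is running):
$ kubectl get virtualmachine screwdriver-control-plane-qqq55 -o jsonpath='{.status.vmIp}'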
$ kubectl get secrets
default-token-swg7b               kubernetes.io/service-account-token   3      23d
scredriver-extensions-ca          kubernetes.io/tls                     3      4h7m
screwdriver-antrea                kubernetes.io/tls                     3      42m
screwdriver-auth-svc-cert         kubernetes.io/tls                     3      20m
screwdriver-ca                    Opaque                                2      29m
screwdriver-ccm-token-q8zvd       kubernetes.io/service-account-token   3      20m
screwdriver-control-plane-qffjj   cluster.x-k8s.io/secret               1      29m
screwdriver-encryption            Opaque                                1      30m
screwdriver-etcd                  Opaque                                2      29m
screwdriver-extensions-ca         kubernetes.io/tls                     3      30m
screwdriver-kubeconfig            Opaque                                1      29m
screwdriver-proxy                 Opaque                                2      29m
screwdriver-pvcsi-token-89rmw     kubernetes.io/service-account-token   3      20m
screwdriver-sa                    Opaque                                2      29m
screwdriver-ssh                   kubernetes.io/ssh-auth                1      31m
screwdriver-ssh-password          Opaque                                1      31m
screwdriver-workers-c8bj5-6qnwt   cluster.x-k8s.io/secret               1      23m
$ kubectl get secrets screwdriver-ssh-password -oyaml
apiVersion: v1
data:
  ssh-passwordkey: VGlZcFI4U3pjT1FxL1FqSGdVMFQrTUFxMUl5Z3NRN2N1S2Iway90RklBTT0=
kind: Secret
metadata:
...(SNIP)...
$ echo "VGlZcFI4U3pjT1FxL1FqSGdVMFQrTUFxMUl5Z3NRN2N1S2Iway90RklBTT0=" |base64 --decode
TiYpR8SzcOQq/QjHgU0T+MAq1IygsQ7cuKb0k/tFIAM=
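
The Secret lookup and base64 decode can also be combined into a single command:
$ kubectl get secret screwdriver-ssh-password -o jsonpath='{.data.ssh-passwordkey}' | base64 --decode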
$ ssh vmware-system-user@<control-plane-vm-ip>
$ ls /etc/containerd/
config.toml            load_gc_containers.sh
$ cat /etc/containerd/config.toml
root = "/var/lib/containerd"
state = "/run/containerd"
oom_score = 0

[grpc]
  address = "/run/containerd/containerd.sock"
  uid = 0
  gid = 0
  max_recv_message_size = 16777216
  max_send_message_size = 16777216

[debug]
  address = ""
  uid = 0
  gid = 0
  level = ""

[metrics]
  address = ""
  grpc_histogram = false

[cgroup]
  path = ""

[plugins]
  [plugins.cgroups]
    no_prometheus = false
  [plugins.cri]
    stream_server_address = "127.0.0.1"
    stream_server_port = "0"
    enable_selinux = false
    sandbox_image = "localhost:5000/vmware.io/pause:3.2"
    stats_collect_period = 10
    systemd_cgroup = false
    enable_tls_streaming = false
    max_container_log_line_size = 16384
    disable_proc_mount = false
    [plugins.cri.containerd]
      snapshotter = "overlayfs"
      no_pivot = false
      [plugins.cri.containerd.default_runtime]
        runtime_type = "io.containerd.runtime.v1.linux"
        runtime_engine = ""
        runtime_root = ""
      [plugins.cri.containerd.untrusted_workload_runtime]
        runtime_type = ""
        runtime_engine = ""
        runtime_root = ""
    [plugins.cri.cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"
      conf_template = ""
    [plugins.cri.registry]
      [plugins.cri.registry.mirrors]
        [plugins.cri.registry.mirrors."docker.io"]
          endpoint = ["https://registry-1.docker.io"]
        [plugins.cri.registry.mirrors."localhost:5000"]
           endpoint = ["http://localhost:5000"]
    [plugins.cri.x509_key_pair_streaming]
      tls_cert_file = ""
      tls_key_file = ""
  [plugins.diff-service]
    default = ["walking"]
  [plugins.linux]
    shim = "containerd-shim"
    runtime = "runc"
    runtime_root = ""
    no_shim = false
    shim_debug = false
  [plugins.opt]
    path = "/opt/containerd"
  [plugins.restart]
    interval = "10s"
  [plugins.scheduler]
    pause_threshold = 0.02
    deletion_threshold = 0
    mutation_threshold = 100
    schedule_delay = "0s"
    startup_delay = "100ms"

Treating the Supervisor Cluster as a Management Cluster, I was able to deploy a Workload Cluster with the tanzu CLI.
