Steps for Deploying Contour, Harbor, and TBS on Tanzu Kubernetes Grid (TKGm) v1.3.0

These are the steps I followed to deploy Contour, Harbor, and TBS on Tanzu Kubernetes Grid multicloud (TKGm) v1.3.0. Contour serves as the Ingress, Harbor is made reachable through it as a backend, and TBS is then deployed using that Harbor as its registry.


Environment

  • Tanzu Kubernetes Grid v1.3.0 on vSphere
  • Workload Cluster v1.20.4

Steps

Deploying TKGm

Following the documentation, first deploy the TKGm Management Cluster and then a Workload Cluster.

Set the following environment variables for the preparation work done with govc.
$ cat tkgm-env.sh
export GOVC_USERNAME=xxxx
export GOVC_PASSWORD=xxxx
export GOVC_DATACENTER=Datacenter
export GOVC_NETWORK="VM Network"
export GOVC_DATASTORE=datastore3
export GOVC_RESOURCE_POOL=/Datacenter/host/Cluster/Resources/tkgm
export GOVC_INSECURE=1
export TEMPLATE_FOLDER=/Datacenter/vm/tkgm
export GOVC_URL=xxxx
source tkgm-env.sh
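
Before importing anything, it is worth a quick check that govc can actually reach vCenter with these settings; for example:
$ govc about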

Import the previously downloaded OVAs that will be used as the node OS images for TKGm.
$ cat options.json
{
"DiskProvisioning": "thin"
}
govc import.ova --options=options.json -folder ${TEMPLATE_FOLDER} ~/Downloads/_packages/photon-3-kube-v1.20.4-vmware.1-tkg.0-2326554155028348692.ova
govc import.ova --options=options.json -folder ${TEMPLATE_FOLDER} ~/Downloads/_packages/ubuntu-2004-kube-v1.20.4-vmware.1-tkg.0-16153464878630780629.ova

Convert the imported OVAs into templates.
govc vm.markastemplate ${TEMPLATE_FOLDER}/photon-3-kube-v1.20.4
govc vm.markastemplate ${TEMPLATE_FOLDER}/ubuntu-2004-kube-v1.20.4
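
If you want to double-check the result, govc vm.info can display the imported images by inventory path; for example:
$ govc vm.info ${TEMPLATE_FOLDER}/photon-3-kube-v1.20.4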

Next, install the tanzu CLI, following the same documentation.
$ tar xvf tanzu-cli-bundle-darwin-amd64.tar
x cli/
x cli/core/
x cli/core/v1.3.0/
x cli/core/v1.3.0/tanzu-core-darwin_amd64
x cli/core/plugin.yaml
x cli/cluster/
x cli/cluster/v1.3.0/
x cli/cluster/v1.3.0/tanzu-cluster-darwin_amd64
x cli/cluster/plugin.yaml
x cli/login/
x cli/login/v1.3.0/
x cli/login/v1.3.0/tanzu-login-darwin_amd64
x cli/login/plugin.yaml
x cli/pinniped-auth/
x cli/pinniped-auth/v1.3.0/
x cli/pinniped-auth/v1.3.0/tanzu-pinniped-auth-darwin_amd64
x cli/pinniped-auth/plugin.yaml
x cli/kubernetes-release/
x cli/kubernetes-release/v1.3.0/
x cli/kubernetes-release/v1.3.0/tanzu-kubernetes-release-darwin_amd64
x cli/kubernetes-release/plugin.yaml
x cli/management-cluster/
x cli/management-cluster/v1.3.0/
x cli/management-cluster/v1.3.0/test/
x cli/management-cluster/v1.3.0/test/tanzu-management-cluster-test-darwin_amd64
x cli/management-cluster/v1.3.0/tanzu-management-cluster-darwin_amd64
x cli/management-cluster/plugin.yaml
x cli/manifest.yaml
x cli/ytt-darwin-amd64-v0.30.0+vmware.1.gz
x cli/kapp-darwin-amd64-v0.33.0+vmware.1.gz
x cli/imgpkg-darwin-amd64-v0.2.0+vmware.1.gz
x cli/kbld-darwin-amd64-v0.24.0+vmware.1.gz
$ cd cli
$ ls
cluster/                                kapp-darwin-amd64-v0.33.0+vmware.1.gz   login/                                  pinniped-auth/
core/                                   kbld-darwin-amd64-v0.24.0+vmware.1.gz   management-cluster/                     ytt-darwin-amd64-v0.30.0+vmware.1.gz
imgpkg-darwin-amd64-v0.2.0+vmware.1.gz  kubernetes-release/                     manifest.yaml
sudo install core/v1.3.0/tanzu-core-darwin_amd64 /usr/local/bin/tanzu
tanzu plugin clean
cd ..
tanzu plugin install --local cli all
$ tanzu plugin list
  NAME                LATEST VERSION  DESCRIPTION                                                        REPOSITORY  VERSION  STATUS
  alpha               v1.3.0          Alpha CLI commands                                                 core                 not installed
  cluster             v1.3.0          Kubernetes cluster operations                                      core        v1.3.0   installed
  login               v1.3.0          Login to the platform                                              core        v1.3.0   installed
  pinniped-auth       v1.3.0          Pinniped authentication operations (usually not directly invoked)  core        v1.3.0   installed
  kubernetes-release  v1.3.0          Kubernetes release operations                                      core        v1.3.0   installed
  management-cluster  v1.3.0          Kubernetes management cluster operations                           tkg         v1.3.0   installed
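
The bundle also contains the Carvel tools (ytt, kapp, imgpkg, kbld) as gzipped binaries; kbld, ytt, and kapp are needed later in the TBS section. A minimal install sketch for ytt (the other tools follow the same pattern):
$ gunzip cli/ytt-darwin-amd64-v0.30.0+vmware.1.gz
$ sudo install cli/ytt-darwin-amd64-v0.30.0+vmware.1 /usr/local/bin/ytt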

Deploying the Management Cluster

Now deploy the Management Cluster. Running the following command serves a browser UI for configuring the Management Cluster installation. Fill in the fields as guided, then interrupt the process with Ctrl+C just before hitting Deploy. Deploying straight from the UI works fine, but I wanted to drive the installation with the tanzu CLI, so I used the UI only to generate a template for the cluster config YAML.
$ tanzu management-cluster create --ui

Validating the pre-requisites...
Serving kickstart UI at http://127.0.0.1:8080
^CShutting down...

Stopped serving kickstart UI at http://127.0.0.1:8080
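
The UI saves the collected settings as a randomly named YAML file under ~/.tanzu/tkg/clusterconfigs, so it can be located with:
$ ls ~/.tanzu/tkg/clusterconfigs/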

I confirmed that a configuration file with variables like the following had been generated.
$ cat njgv0nd0ko.yaml
AVI_CA_DATA_B64: ""
AVI_CLOUD_NAME: ""
AVI_CONTROLLER: ""
AVI_DATA_NETWORK: ""
AVI_DATA_NETWORK_CIDR: ""
AVI_ENABLE: "false"
AVI_LABELS: ""
AVI_PASSWORD: ""
AVI_SERVICE_ENGINE_GROUP: ""
AVI_USERNAME: ""
CLUSTER_CIDR: 100.96.0.0/11
CLUSTER_NAME: schecter
CLUSTER_PLAN: dev
ENABLE_CEIP_PARTICIPATION: "false"
ENABLE_MHC: "true"
IDENTITY_MANAGEMENT_TYPE: none
INFRASTRUCTURE_PROVIDER: vsphere
LDAP_BIND_DN: ""
LDAP_BIND_PASSWORD: ""
LDAP_GROUP_SEARCH_BASE_DN: ""
LDAP_GROUP_SEARCH_FILTER: ""
LDAP_GROUP_SEARCH_GROUP_ATTRIBUTE: ""
LDAP_GROUP_SEARCH_NAME_ATTRIBUTE: cn
LDAP_GROUP_SEARCH_USER_ATTRIBUTE: DN
LDAP_HOST: ""
LDAP_ROOT_CA_DATA_B64: ""
LDAP_USER_SEARCH_BASE_DN: ""
LDAP_USER_SEARCH_FILTER: ""
LDAP_USER_SEARCH_NAME_ATTRIBUTE: ""
LDAP_USER_SEARCH_USERNAME: userPrincipalName
OIDC_IDENTITY_PROVIDER_CLIENT_ID: ""
OIDC_IDENTITY_PROVIDER_CLIENT_SECRET: ""
OIDC_IDENTITY_PROVIDER_GROUPS_CLAIM: ""
OIDC_IDENTITY_PROVIDER_ISSUER_URL: ""
OIDC_IDENTITY_PROVIDER_NAME: ""
OIDC_IDENTITY_PROVIDER_SCOPES: ""
OIDC_IDENTITY_PROVIDER_USERNAME_CLAIM: ""
SERVICE_CIDR: 100.64.0.0/13
TKG_HTTP_PROXY_ENABLED: "false"
VSPHERE_CONTROL_PLANE_DISK_GIB: "20"
VSPHERE_CONTROL_PLANE_ENDPOINT: xxx.xxx.xxx.xxx
VSPHERE_CONTROL_PLANE_MEM_MIB: "4096"
VSPHERE_CONTROL_PLANE_NUM_CPUS: "2"
VSPHERE_DATACENTER: /Datacenter
VSPHERE_DATASTORE: /Datacenter/datastore/datastore3
VSPHERE_FOLDER: /Datacenter/vm/tkgm
VSPHERE_NETWORK: VM Network
VSPHERE_PASSWORD: <encoded:xxxxxx>
VSPHERE_RESOURCE_POOL: /Datacenter/host/Cluster/Resources
VSPHERE_SERVER: xxxx
VSPHERE_SSH_AUTHORIZED_KEY: ssh-rsa AAAAxxxxxx
VSPHERE_TLS_THUMBPRINT: 65:xxxx:49
VSPHERE_USERNAME: xxxx
VSPHERE_WORKER_DISK_GIB: "20"
VSPHERE_WORKER_MEM_MIB: "4096"
VSPHERE_WORKER_NUM_CPUS: "2"

Based on this file, I created ~/.tanzu/tkg/cluster-config.yaml and then created the Management Cluster. Each node apparently requires a minimum of 4 GB of memory; setting anything smaller fails with an error like "Error: configuration validation failed: vSphere node size validation failed: the minimum requirement of VSPHERE_CONTROL_PLANE_MEM_MIB is 4096".
$ tanzu management-cluster create -v 6
cluster config file not provided using default config file at '~/.tanzu/tkg/cluster-config.yaml'
CEIP Opt-in status: false

Validating the pre-requisites...

vSphere 7.0 Environment Detected.

You have connected to a vSphere 7.0 environment which does not have vSphere with Tanzu enabled. vSphere with Tanzu includes
an integrated Tanzu Kubernetes Grid Service which turns a vSphere cluster into a platform for running Kubernetes workloads in dedicated
resource pools. Configuring Tanzu Kubernetes Grid Service is done through vSphere HTML5 client.

Tanzu Kubernetes Grid Service is the preferred way to consume Tanzu Kubernetes Grid in vSphere 7.0 environments. Alternatively you may
deploy a non-integrated Tanzu Kubernetes Grid instance on vSphere 7.0.
Do you want to configure vSphere with Tanzu? [y/N]: N
Would you like to deploy a non-integrated Tanzu Kubernetes Grid management cluster on vSphere 7.0? [y/N]: y
Deploying TKG management cluster on vSphere 7.0 ...
Identity Provider not configured. Some authentication features won't work.
no os options provided, selecting based on default os options

Setting up management cluster...
Validating configuration...
Using infrastructure provider vsphere:v0.7.6
Generating cluster configuration...
Setting up bootstrapper...
Fetching configuration for kind node image...
kindConfig:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
kubeadmConfigPatches:
- |
  apiVersion: kubeadm.k8s.io/v1beta2
  kind: ClusterConfiguration
  imageRepository: projects.registry.vmware.com/tkg
  etcd:
    local:
      imageRepository: projects.registry.vmware.com/tkg
      imageTag: v3.4.13_vmware.7
  dns:
    type: CoreDNS
    imageRepository: projects.registry.vmware.com/tkg
    imageTag: v1.7.0_vmware.8
nodes:
- role: control-plane
  extraMounts:
    - hostPath: /var/run/docker.sock
      containerPath: /var/run/docker.sock
Creating kind cluster: tkg-kind-c1iuph6vvhfkr3hufv30
Ensuring node image (projects.registry.vmware.com/tkg/kind/node:v1.20.4_vmware.1) ...
Pulling image: projects.registry.vmware.com/tkg/kind/node:v1.20.4_vmware.1 ...
Preparing nodes ...
Writing configuration ...
Starting control-plane ...
Installing CNI ...
Installing StorageClass ...
Waiting 2m0s for control-plane = Ready ...
Ready after 18s
Bootstrapper created. Kubeconfig: ~/.kube-tkg/tmp/config_GITd3Vva
Installing providers on bootstrapper...
Fetching providers
Installing cert-manager Version="v0.16.1"
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v0.3.14" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v0.3.14" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v0.3.14" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-vsphere" Version="v0.7.6" TargetNamespace="capv-system"
installed  Component=="cluster-api"  Type=="CoreProvider"  Version=="v0.3.14"
installed  Component=="kubeadm"  Type=="BootstrapProvider"  Version=="v0.3.14"
installed  Component=="kubeadm"  Type=="ControlPlaneProvider"  Version=="v0.3.14"
installed  Component=="vsphere"  Type=="InfrastructureProvider"  Version=="v0.7.6"
Waiting for provider cluster-api
Waiting for provider infrastructure-vsphere
Waiting for provider control-plane-kubeadm
Waiting for provider bootstrap-kubeadm
Waiting for resource capi-kubeadm-control-plane-controller-manager of type *v1.Deployment to be up and running
pods are not yet running for deployment 'capi-kubeadm-control-plane-controller-manager' in namespace 'capi-kubeadm-control-plane-system', retrying
Waiting for resource capv-controller-manager of type *v1.Deployment to be up and running
pods are not yet running for deployment 'capv-controller-manager' in namespace 'capv-system', retrying
Waiting for resource capi-kubeadm-bootstrap-controller-manager of type *v1.Deployment to be up and running
pods are not yet running for deployment 'capi-kubeadm-bootstrap-controller-manager' in namespace 'capi-kubeadm-bootstrap-system', retrying
Waiting for resource capi-controller-manager of type *v1.Deployment to be up and running
pods are not yet running for deployment 'capi-controller-manager' in namespace 'capi-system', retrying
...(SNIP)...
Waiting for resource capi-kubeadm-bootstrap-controller-manager of type *v1.Deployment to be up and running
Passed waiting on provider bootstrap-kubeadm after 40.086755833s
Waiting for resource capi-controller-manager of type *v1.Deployment to be up and running
Passed waiting on provider cluster-api after 40.098500821s
pods are not yet running for deployment 'capi-kubeadm-control-plane-controller-manager' in namespace 'capi-kubeadm-control-plane-system', retrying
pods are not yet running for deployment 'capv-controller-manager' in namespace 'capv-system', retrying
Waiting for resource capi-kubeadm-control-plane-controller-manager of type *v1.Deployment to be up and running
Passed waiting on provider control-plane-kubeadm after 50.051676043s
Waiting for resource capv-controller-manager of type *v1.Deployment to be up and running
pods are not yet running for deployment 'capv-controller-manager' in namespace 'capi-webhook-system', retrying
Passed waiting on provider infrastructure-vsphere after 55.076903476s
Success waiting on all providers.
Start creating management cluster...
patch cluster object with operation status:
	{
		"metadata": {
			"annotations": {
				"TKGOperationInfo" : "{\"Operation\":\"Create\",\"OperationStartTimestamp\":\"2021-04-01 15:59:08.951004 +0000 UTC\",\"OperationTimeout\":1800}",
				"TKGOperationLastObservedTimestamp" : "2021-04-01 15:59:08.951004 +0000 UTC"
			}
		}
	}
cluster control plane is still being initialized, retrying
...(SNIP)...
Getting secret for cluster
Waiting for resource schecter-kubeconfig of type *v1.Secret to be up and running
Saving management cluster kubeconfig into ~/.kube/config
Installing providers on management cluster...
Fetching providers
Installing cert-manager Version="v0.16.1"
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v0.3.14" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v0.3.14" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v0.3.14" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-vsphere" Version="v0.7.6" TargetNamespace="capv-system"
installed  Component=="cluster-api"  Type=="CoreProvider"  Version=="v0.3.14"
installed  Component=="kubeadm"  Type=="BootstrapProvider"  Version=="v0.3.14"
installed  Component=="kubeadm"  Type=="ControlPlaneProvider"  Version=="v0.3.14"
installed  Component=="vsphere"  Type=="InfrastructureProvider"  Version=="v0.7.6"
Waiting for provider infrastructure-vsphere
Waiting for provider bootstrap-kubeadm
Waiting for provider control-plane-kubeadm
Waiting for provider cluster-api
Waiting for resource capi-kubeadm-control-plane-controller-manager of type *v1.Deployment to be up and running
Waiting for resource capv-controller-manager of type *v1.Deployment to be up and running
pods are not yet running for deployment 'capi-kubeadm-control-plane-controller-manager' in namespace 'capi-kubeadm-control-plane-system', retrying
pods are not yet running for deployment 'capv-controller-manager' in namespace 'capv-system', retrying
Waiting for resource capi-kubeadm-bootstrap-controller-manager of type *v1.Deployment to be up and running
Waiting for resource capi-controller-manager of type *v1.Deployment to be up and running
pods are not yet running for deployment 'capi-kubeadm-bootstrap-controller-manager' in namespace 'capi-kubeadm-bootstrap-system', retrying
pods are not yet running for deployment 'capi-controller-manager' in namespace 'capi-system', retrying
pods are not yet running for deployment 'capi-kubeadm-control-plane-controller-manager' in namespace 'capi-kubeadm-control-plane-system', retrying
pods are not yet running for deployment 'capv-controller-manager' in namespace 'capv-system', retrying
pods are not yet running for deployment 'capi-kubeadm-bootstrap-controller-manager' in namespace 'capi-kubeadm-bootstrap-system', retrying
pods are not yet running for deployment 'capi-controller-manager' in namespace 'capi-system', retrying
pods are not yet running for deployment 'capi-kubeadm-control-plane-controller-manager' in namespace 'capi-kubeadm-control-plane-system', retrying
pods are not yet running for deployment 'capv-controller-manager' in namespace 'capv-system', retrying
pods are not yet running for deployment 'capi-kubeadm-bootstrap-controller-manager' in namespace 'capi-kubeadm-bootstrap-system', retrying
Waiting for resource capi-controller-manager of type *v1.Deployment to be up and running
Passed waiting on provider cluster-api after 10.13201719s
pods are not yet running for deployment 'capi-kubeadm-control-plane-controller-manager' in namespace 'capi-kubeadm-control-plane-system', retrying
pods are not yet running for deployment 'capv-controller-manager' in namespace 'capv-system', retrying
pods are not yet running for deployment 'capi-kubeadm-bootstrap-controller-manager' in namespace 'capi-kubeadm-bootstrap-system', retrying
pods are not yet running for deployment 'capv-controller-manager' in namespace 'capv-system', retrying
Waiting for resource capi-kubeadm-control-plane-controller-manager of type *v1.Deployment to be up and running
Passed waiting on provider control-plane-kubeadm after 20.07460899s
Waiting for resource capi-kubeadm-bootstrap-controller-manager of type *v1.Deployment to be up and running
Passed waiting on provider bootstrap-kubeadm after 20.128117913s
pods are not yet running for deployment 'capv-controller-manager' in namespace 'capv-system', retrying
Waiting for resource capv-controller-manager of type *v1.Deployment to be up and running
Passed waiting on provider infrastructure-vsphere after 30.075488227s
Success waiting on all providers.
Waiting for the management cluster to get ready for move...
Waiting for resource schecter of type *v1alpha3.Cluster to be up and running
Waiting for resources type *v1alpha3.MachineDeploymentList to be up and running
Waiting for resources type *v1alpha3.MachineList to be up and running
machine schecter-control-plane-7flm4 is still being provisioned, retrying
...(SNIP)...
Waiting for addons installation...
Waiting for resources type *v1alpha3.ClusterResourceSetList to be up and running
Waiting for resource antrea-controller of type *v1.Deployment to be up and running
Moving all Cluster API objects from bootstrap cluster to management cluster...
Performing move...
Discovering Cluster API objects
Moving Cluster API objects Clusters=1
Creating objects in the target cluster
Deleting objects from the source cluster
Context set for management cluster schecter as 'schecter-admin@schecter'.
Deleting kind cluster: tkg-kind-c1iuph6vvhfkr3hufv30

Management cluster created!


You can now create your first workload cluster by running the following:

  tanzu cluster create [name] -f [file]

Let's check on the Management Cluster.
$ tanzu cluster list --include-management-cluster
  NAME      NAMESPACE   STATUS   CONTROLPLANE  WORKERS  KUBERNETES        ROLES       PLAN
  schecter  tkg-system  running  1/1           1/1      v1.20.4+vmware.1  management  dev
$ tanzu management-cluster get
  NAME      NAMESPACE   STATUS   CONTROLPLANE  WORKERS  KUBERNETES        ROLES
  schecter  tkg-system  running  1/1           1/1      v1.20.4+vmware.1  management


Details:

NAME                                                           READY  SEVERITY  REASON                           SINCE  MESSAGE
/schecter                                                      True                                              6m49s
├─ClusterInfrastructure - VSphereCluster/schecter            True                                              7m50s
├─ControlPlane - KubeadmControlPlane/schecter-control-plane  True                                              6m49s
│ └─Machine/schecter-control-plane-7flm4                    True                                              7m26s
└─Workers
  └─MachineDeployment/schecter-md-0
    └─Machine/schecter-md-0-57cf744b9b-glnqw                 False  Info      WaitingForClusterInfrastructure  7m50s


Providers:

  NAMESPACE                          NAME                    TYPE                    PROVIDERNAME  VERSION  WATCHNAMESPACE
  capi-kubeadm-bootstrap-system      bootstrap-kubeadm       BootstrapProvider       kubeadm       v0.3.14
  capi-kubeadm-control-plane-system  control-plane-kubeadm   ControlPlaneProvider    kubeadm       v0.3.14
  capi-system                        cluster-api             CoreProvider            cluster-api   v0.3.14
  capv-system                        infrastructure-vsphere  InfrastructureProvider  vsphere       v0.7.6
$ tanzu management-cluster kubeconfig get --admin --export-file lab-schecter-kubeconfig
Credentials of workload cluster 'schecter' have been saved
You can now access the cluster by running 'kubectl config use-context schecter-admin@schecter' under path 'lab-schecter-kubeconfig'
export KUBECONFIG=~/lab-schecter-kubeconfig
$ kubectl get nodes
NAME                             STATUS   ROLES                  AGE   VERSION
schecter-control-plane-7flm4     Ready    control-plane,master   23m   v1.20.4+vmware.1
schecter-md-0-57cf744b9b-glnqw   Ready    <none>                 22m   v1.20.4+vmware.1

Deploying the Workload Cluster

Create a definition file for the Workload Cluster in the ~/.tanzu/tkg/clusterconfigs directory.
$ cat cluster-fender-config.yaml
AVI_ENABLE: "false"
CLUSTER_NAME: fender
CLUSTER_PLAN: dev
ENABLE_CEIP_PARTICIPATION: "false"
ENABLE_MHC: "true"
VSPHERE_CONTROL_PLANE_DISK_GIB: "40"
VSPHERE_CONTROL_PLANE_ENDPOINT: xxx.xxx.xxx.xxx
VSPHERE_CONTROL_PLANE_MEM_MIB: "4096"
VSPHERE_CONTROL_PLANE_NUM_CPUS: "2"
VSPHERE_DATACENTER: /Datacenter
VSPHERE_DATASTORE: /Datacenter/datastore/datastore3
VSPHERE_FOLDER: /Datacenter/vm/tkgm
VSPHERE_NETWORK: VM Network
VSPHERE_PASSWORD: <encoded:xxxxxx>
VSPHERE_RESOURCE_POOL: /Datacenter/host/Cluster/Resources
VSPHERE_SERVER: xxxx
VSPHERE_SSH_AUTHORIZED_KEY: ssh-rsa AAAAxxxxxx
VSPHERE_TLS_THUMBPRINT: 65:xxxx:49
VSPHERE_USERNAME: xxxx
VSPHERE_WORKER_DISK_GIB: "128"
VSPHERE_WORKER_MEM_MIB: "8192"
VSPHERE_WORKER_NUM_CPUS: "4"
CONTROL_PLANE_MACHINE_COUNT: 1
WORKER_MACHINE_COUNT: 1
OS_NAME: ubuntu
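
As a side note, the Tanzu Kubernetes releases that can be passed to the --tkr flag used below can be listed with the kubernetes-release plugin:
$ tanzu kubernetes-release get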

Once the file is ready, create the Workload Cluster with the following command.
$ tanzu cluster create fender --tkr v1.20.4---vmware.1-tkg.1 --file ~/.tanzu/tkg/clusterconfigs/cluster-fender-config.yaml -v 6
Using namespace from config:
Validating configuration...
Waiting for resource pinniped-federation-domain of type *unstructured.Unstructured to be up and running
no matches for kind "FederationDomain" in version "config.supervisor.pinniped.dev/v1alpha1", retrying
no matches for kind "FederationDomain" in version "config.supervisor.pinniped.dev/v1alpha1", retrying
no matches for kind "FederationDomain" in version "config.supervisor.pinniped.dev/v1alpha1", retrying
no matches for kind "FederationDomain" in version "config.supervisor.pinniped.dev/v1alpha1", retrying
Failed to configure Pinniped configuration for workload cluster. Please refer to the documentation to check if you can configure pinniped on workload cluster manually
Creating workload cluster 'fender'...
patch cluster object with operation status:
	{
		"metadata": {
			"annotations": {
				"TKGOperationInfo" : "{\"Operation\":\"Create\",\"OperationStartTimestamp\":\"2021-04-01 16:43:53.391895 +0000 UTC\",\"OperationTimeout\":1800}",
				"TKGOperationLastObservedTimestamp" : "2021-04-01 16:43:53.391895 +0000 UTC"
			}
		}
	}
Waiting for cluster to be initialized...
cluster control plane is still being initialized, retrying
cluster control plane is still being initialized, retrying
cluster control plane is still being initialized, retrying
cluster control plane is still being initialized, retrying
Getting secret for cluster
Waiting for resource fender-kubeconfig of type *v1.Secret to be up and running
Waiting for cluster nodes to be available...
Waiting for resource fender of type *v1alpha3.Cluster to be up and running
Waiting for resources type *v1alpha3.MachineDeploymentList to be up and running
worker nodes are still being created for MachineDeployment 'fender-md-0', DesiredReplicas=1 Replicas=1 ReadyReplicas=0 UpdatedReplicas=1, retrying
...(SNIP)...
Waiting for resources type *v1alpha3.MachineList to be up and running
machine fender-control-plane-qshw5 is still being provisioned, retrying
...(SNIP)...
Waiting for addons installation...
Waiting for resources type *v1alpha3.ClusterResourceSetList to be up and running
Waiting for resource antrea-controller of type *v1.Deployment to be up and running

Workload cluster 'fender' created

Access the Workload Cluster.
$ tanzu cluster list
  NAME    NAMESPACE  STATUS   CONTROLPLANE  WORKERS  KUBERNETES        ROLES   PLAN
  fender  default    running  1/1           1/1      v1.20.4+vmware.1  <none>  dev
$ tanzu cluster kubeconfig get fender --export-file lab-fender-kubeconfig --admin
Credentials of workload cluster 'fender' have been saved
You can now access the cluster by running 'kubectl config use-context fender-admin@fender' under path 'lab-fender-kubeconfig'
export KUBECONFIG=~/lab-fender-kubeconfig
$ kubectl get nodes
NAME                          STATUS   ROLES                  AGE   VERSION
fender-control-plane-qshw5    Ready    control-plane,master   23m   v1.20.4+vmware.1
fender-md-0-6d7c984b5-wwqcm   Ready    <none>                 22m   v1.20.4+vmware.1

Now that the cluster is deployed, let's move on to Contour.

Deploying Contour

Deploy it by following "Deploying and Managing Extensions and Shared Services". Download and extract the tkg-extensions bundle in advance.
First, apply tmc-extension-manager.yaml.
kubectl apply -f tmc-extension-manager.yaml

Change into the top of the tkg-extensions-v1.3.0+vmware.1 directory and install cert-manager.
kubectl apply -f cert-manager
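
cert-manager takes a little while to start; assuming the cert-manager namespace used by the extensions bundle, you can confirm its pods reach Running with:
$ kubectl get pods -n cert-manager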

From here, proceed according to "Implementing Ingress Control with Contour". Change into the tkg-extensions-v1.3.0+vmware.1/extensions directory and apply the following manifest.
kubectl apply -f ingress/contour/namespace-role.yaml
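
This manifest creates the tanzu-system-ingress namespace (the Secret below is created there), which can be confirmed with:
$ kubectl get ns tanzu-system-ingress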

Copy Contour's example data values file and change envoy.service.type from NodePort to LoadBalancer. Then create a Secret from the edited values file.
$ cp ingress/contour/vsphere/contour-data-values.yaml.example \
ingress/contour/vsphere/contour-data-values.yaml
$ cat ingress/contour/vsphere/contour-data-values.yaml
#@data/values
#@overlay/match-child-defaults missing_ok=True
---
infrastructure_provider: "vsphere"
contour:
  image:
    repository: projects.registry.vmware.com/tkg
envoy:
  image:
    repository: projects.registry.vmware.com/tkg
  service:
    type: "NodePort"%
vim ingress/contour/vsphere/contour-data-values.yaml
$ kubectl create secret generic contour-data-values \
--from-file=values.yaml=ingress/contour/vsphere/contour-data-values.yaml -n tanzu-system-ingress

Deploy Contour.
kubectl apply -f ingress/contour/contour-extension.yaml

After a short while, the services come up.
$ kubectl get app contour -n tanzu-system-ingress
NAME      DESCRIPTION           SINCE-DEPLOY   AGE
contour   Reconcile succeeded   11s            114s

Make a note of envoy's EXTERNAL-IP; it will be needed later.
$ kubectl get svc -n tanzu-system-ingress
NAME      TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)                      AGE
contour   ClusterIP      100.70.169.229   <none>           8001/TCP                     73s
envoy     LoadBalancer   100.68.82.104    xx.xxx.xxx.xxx   80:30006/TCP,443:30870/TCP   72s

Once Contour is deployed, you can reach the Envoy administration interface by following the steps in "Access the Envoy Administration Interface Remotely".
export KUBECONFIG=~/lab-fender-kubeconfig
ENVOY_POD=$(kubectl -n tanzu-system-ingress get pod -l app=envoy -o name | head -1)
kubectl -n tanzu-system-ingress port-forward $ENVOY_POD 9001
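
With the port-forward in place, the admin interface answers on localhost:9001; for example, its stats endpoint:
$ curl http://localhost:9001/stats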

Next, obtain the certificate Contour will use from Let's Encrypt.

Obtaining a Certificate from Let's Encrypt

Issue the certificate with the following command. This environment uses Google DNS, and the EXTERNAL-IP assigned to envoy is registered as an A record for *.<MYDOMAIN>.
$ certbot --server https://acme-v02.api.letsencrypt.org/directory \
-d "*.<MYDOMAIN>" --manual --preferred-challenges dns-01 certonly \
--work-dir ./<cluster-name>/wd \
--config-dir ./<cluster-name>/cfg \
--logs-dir ./<cluster-name>/logs

As you step through the prompts, certbot displays a DNS TXT record name and the value it expects; register that pair in Google DNS. It can take a while until the name resolves, but once the following command returns the TXT value you registered, you are good to continue.
nslookup -q=txt _acme-challenge.<MYDOMAIN>

Once the query succeeds, press Enter at the certbot prompt to finish. If the output includes "Congratulations! Your certificate and chain have been saved at:", the certificate was issued successfully.

Deploying Harbor

Next, deploy Harbor. These steps use the Helm chart provided by Bitnami, so add the Bitnami Helm repository beforehand per its instructions.
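If the Bitnami repository has not been added yet, it is the standard one:
helm repo add bitnami https://charts.bitnami.com/bitnami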
Set up the Namespace and RoleBinding for deploying Harbor.
kubectl create ns harbor
Create the TLS Secret Harbor will use. The files passed to --cert and --key are the fullchain.pem and privkey.pem generated in the steps above.
$ kubectl create secret tls harbor-tls-secret -n harbor \
--cert=./<cluster-name>/cfg/live/<MYDOMAIN>/fullchain.pem \
--key=./<cluster-name>/cfg/live/<MYDOMAIN>/privkey.pem

Update the Helm repositories.
helm repo update
$ helm search repo bitnami |grep harbor
  bitnami/harbor                              	9.8.3        	2.2.1        	Harbor is an an open source trusted cloud nativ...

Create the values.yaml for installing Harbor. Since this environment uses Contour for Ingress, the kubernetes.io/ingress.class annotation under ingress.annotations is set to contour.
$ cat values.yaml
harborAdminPassword: xxxxx
volumePermissions:
  enabled: true

service:
  type: ClusterIP
  tls:
    enabled: true
    existingSecret: harbor-tls-secret
    notaryExistingSecret: harbor-tls-secret

externalURL: registry.<MYDOMAIN>

ingress:
  enabled: true
  hosts:
    core: registry.<MYDOMAIN>
    notary: notary.<MYDOMAIN>
  annotations:
    ingress.kubernetes.io/force-ssl-redirect: "true"     # force https, even if http is requested
    kubernetes.io/ingress.class: contour                 # using Contour for ingress
    kubernetes.io/tls-acme: "true"                       # using ACME certificates for TLS

portal:
  tls:
    existingSecret: harbor-tls-secret

persistence:
  persistentVolumeClaim:
    registry:
      size: 100Gi
    trivy:
      size: 50Gi

Deploy Harbor with helm.
helm upgrade --install harbor bitnami/harbor -f values.yaml --namespace harbor

Harbor appears to be up and running.
$ kubectl get pods -n harbor
NAME                                    READY   STATUS    RESTARTS   AGE
harbor-chartmuseum-69965dcd88-w7st4     1/1     Running   0          3m9s
harbor-core-54f5cf46bc-frvjk            1/1     Running   0          3m9s
harbor-jobservice-76bdd989bf-4vzg6      1/1     Running   0          3m9s
harbor-notary-server-cc445d54d-dsw2v    1/1     Running   0          3m9s
harbor-notary-signer-798bbddc89-knrg8   1/1     Running   1          3m9s
harbor-portal-5f69977cbf-d46wk          1/1     Running   0          3m9s
harbor-postgresql-0                     1/1     Running   0          3m9s
harbor-redis-master-0                   1/1     Running   0          3m9s
harbor-registry-9f55dc7ff-5p9dd         2/2     Running   0          3m9s
harbor-trivy-0                          1/1     Running   0          3m9s

As a test, let's verify that a container image can be pushed to Harbor. A devops user was created in the Harbor UI beforehand and granted permission to push images.
docker login registry.<MYDOMAIN> -u devops -p xxxx
docker tag nginx:1.17.10 registry.<MYDOMAIN>/library/nginx:1.17.10
$ docker push registry.<MYDOMAIN>/library/nginx:1.17.10
The push refers to repository [registry.<MYDOMAIN>/library/nginx]
6c7de695ede3: Pushed
2f4accd375d9: Pushed
ffc9b21953f4: Pushed
1.17.10: digest: sha256:8269a7352a7dad1f8b3dc83284f195bac72027dd50279422d363d49311ab7d9b size: 948
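
Pulling the image back is a quick way to double-check the registry end to end:
$ docker pull registry.<MYDOMAIN>/library/nginx:1.17.10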

That confirms the installation succeeded. Using this Harbor, let's deploy TBS.

Deploying TBS

Referring to the article "Installing Tanzu Build Service on Tanzu Kubernetes Grid", install the required packages and CLIs in advance.
Log in to the Harbor created above and to registry.pivotal.io. Logging in to registry.pivotal.io requires a VMware Tanzu Network account.
docker login registry.<MYDOMAIN> -u devops -p xxxx
docker login registry.pivotal.io

Relocate the container images required for the TBS installation into Harbor. The target project was created in Harbor beforehand, with the devops user added as a member.
$ kbld relocate -f images.lock --lock-output images-relocated.lock --repository registry.<MYDOMAIN>/tanzu/build-service
...(SNIP)...
relocate | imported 15 images
Succeeded

Deploy TBS.
$ ytt -f values.yaml -f manifests/ -v docker_repository="registry.<MYDOMAIN>/tanzu/build-service" -v docker_username=devops -v docker_password=xxxx | kbld -f images-relocated.lock -f- | kapp deploy -a tanzu-build-service -f- -y

...(SNIP)...

Changes

Namespace               Name                                                            Kind                            Conds.  Age  Op      Op st.  Wait to    Rs  Ri
(cluster)               build-service                                                   Namespace                       -       -    create  -       reconcile  -   -
^                       build-service-admin-role                                        ClusterRole                     -       -    create  -       reconcile  -   -
^                       build-service-admin-role-binding                                ClusterRoleBinding              -       -    create  -       reconcile  -   -
^                       build-service-authenticated-role                                ClusterRole                     -       -    create  -       reconcile  -   -
^                       build-service-authenticated-role-binding                        ClusterRoleBinding              -       -    create  -       reconcile  -   -
^                       build-service-secret-syncer-role                                ClusterRole                     -       -    create  -       reconcile  -   -
^                       build-service-secret-syncer-role-binding                        ClusterRoleBinding              -       -    create  -       reconcile  -   -
^                       build-service-user-role                                         ClusterRole                     -       -    create  -       reconcile  -   -
^                       build-service-warmer-role                                       ClusterRole                     -       -    create  -       reconcile  -   -
^                       build-service-warmer-role-binding                               ClusterRoleBinding              -       -    create  -       reconcile  -   -
^                       builders.kpack.io                                               CustomResourceDefinition        -       -    create  -       reconcile  -   -
^                       builds.kpack.io                                                 CustomResourceDefinition        -       -    create  -       reconcile  -   -
^                       cert-injection-webhook-cluster-role                             ClusterRole                     -       -    create  -       reconcile  -   -
^                       cert-injection-webhook-cluster-role-binding                     ClusterRoleBinding              -       -    create  -       reconcile  -   -
^                       clusterbuilders.kpack.io                                        CustomResourceDefinition        -       -    create  -       reconcile  -   -
^                       clusterstacks.kpack.io                                          CustomResourceDefinition        -       -    create  -       reconcile  -   -
^                       clusterstores.kpack.io                                          CustomResourceDefinition        -       -    create  -       reconcile  -   -
^                       custom-stack-editor-role                                        ClusterRole                     -       -    create  -       reconcile  -   -
^                       custom-stack-viewer-role                                        ClusterRole                     -       -    create  -       reconcile  -   -
^                       customstacks.stacks.stacks-operator.tanzu.vmware.com            CustomResourceDefinition        -       -    create  -       reconcile  -   -
^                       defaults.webhook.cert-injection.tanzu.vmware.com                MutatingWebhookConfiguration    -       -    create  -       reconcile  -   -
^                       defaults.webhook.kpack.io                                       MutatingWebhookConfiguration    -       -    create  -       reconcile  -   -
^                       images.kpack.io                                                 CustomResourceDefinition        -       -    create  -       reconcile  -   -
^                       kpack                                                           Namespace                       -       -    create  -       reconcile  -   -
^                       kpack-controller-admin                                          ClusterRole                     -       -    create  -       reconcile  -   -
^                       kpack-controller-admin-binding                                  ClusterRoleBinding              -       -    create  -       reconcile  -   -
^                       kpack-webhook-certs-mutatingwebhookconfiguration-admin-binding  ClusterRoleBinding              -       -    create  -       reconcile  -   -
^                       kpack-webhook-mutatingwebhookconfiguration-admin                ClusterRole                     -       -    create  -       reconcile  -   -
^                       metrics-reader                                                  ClusterRole                     -       -    create  -       reconcile  -   -
^                       proxy-role                                                      ClusterRole                     -       -    create  -       reconcile  -   -
^                       proxy-rolebinding                                               ClusterRoleBinding              -       -    create  -       reconcile  -   -
^                       sourceresolvers.kpack.io                                        CustomResourceDefinition        -       -    create  -       reconcile  -   -
^                       stacks-operator-manager-role                                    ClusterRole                     -       -    create  -       reconcile  -   -
^                       stacks-operator-manager-rolebinding                             ClusterRoleBinding              -       -    create  -       reconcile  -   -
^                       stacks-operator-system                                          Namespace                       -       -    create  -       reconcile  -   -
^                       validation.webhook.kpack.io                                     ValidatingWebhookConfiguration  -       -    create  -       reconcile  -   -
build-service           build-pod-image-fetcher                                         DaemonSet                       -       -    create  -       reconcile  -   -
^                       build-service-warmer-namespace-role                             Role                            -       -    create  -       reconcile  -   -
^                       build-service-warmer-namespace-role-binding                     RoleBinding                     -       -    create  -       reconcile  -   -
^                       ca-cert                                                         ConfigMap                       -       -    create  -       reconcile  -   -
^                       canonical-registry-secret                                       Secret                          -       -    create  -       reconcile  -   -
^                       cb-service-account                                              ServiceAccount                  -       -    create  -       reconcile  -   -
^                       cert-injection-webhook                                          Deployment                      -       -    create  -       reconcile  -   -
^                       cert-injection-webhook                                          Service                         -       -    create  -       reconcile  -   -
^                       cert-injection-webhook-role                                     Role                            -       -    create  -       reconcile  -   -
^                       cert-injection-webhook-role-binding                             RoleBinding                     -       -    create  -       reconcile  -   -
^                       cert-injection-webhook-sa                                       ServiceAccount                  -       -    create  -       reconcile  -   -
^                       cert-injection-webhook-tls                                      Secret                          -       -    create  -       reconcile  -   -
^                       http-proxy                                                      ConfigMap                       -       -    create  -       reconcile  -   -
^                       https-proxy                                                     ConfigMap                       -       -    create  -       reconcile  -   -
^                       no-proxy                                                        ConfigMap                       -       -    create  -       reconcile  -   -
^                       secret-syncer-controller                                        Deployment                      -       -    create  -       reconcile  -   -
^                       secret-syncer-service-account                                   ServiceAccount                  -       -    create  -       reconcile  -   -
^                       setup-ca-certs-image                                            ConfigMap                       -       -    create  -       reconcile  -   -
^                       sleeper-image                                                   ConfigMap                       -       -    create  -       reconcile  -   -
^                       warmer-controller                                               Deployment                      -       -    create  -       reconcile  -   -
^                       warmer-service-account                                          ServiceAccount                  -       -    create  -       reconcile  -   -
kpack                   build-init-image                                                ConfigMap                       -       -    create  -       reconcile  -   -
^                       build-init-windows-image                                        ConfigMap                       -       -    create  -       reconcile  -   -
^                       canonical-registry-secret                                       Secret                          -       -    create  -       reconcile  -   -
^                       canonical-registry-serviceaccount                               ServiceAccount                  -       -    create  -       reconcile  -   -
^                       completion-image                                                ConfigMap                       -       -    create  -       reconcile  -   -
^                       completion-windows-image                                        ConfigMap                       -       -    create  -       reconcile  -   -
^                       controller                                                      ServiceAccount                  -       -    create  -       reconcile  -   -
^                       kp-config                                                       ConfigMap                       -       -    create  -       reconcile  -   -
^                       kpack-controller                                                Deployment                      -       -    create  -       reconcile  -   -
^                       kpack-controller-local-config                                   Role                            -       -    create  -       reconcile  -   -
^                       kpack-controller-local-config-binding                           RoleBinding                     -       -    create  -       reconcile  -   -
^                       kpack-webhook                                                   Deployment                      -       -    create  -       reconcile  -   -
^                       kpack-webhook                                                   Service                         -       -    create  -       reconcile  -   -
^                       kpack-webhook-certs-admin                                       Role                            -       -    create  -       reconcile  -   -
^                       kpack-webhook-certs-admin-binding                               RoleBinding                     -       -    create  -       reconcile  -   -
^                       lifecycle-image                                                 ConfigMap                       -       -    create  -       reconcile  -   -
^                       rebase-image                                                    ConfigMap                       -       -    create  -       reconcile  -   -
^                       webhook                                                         ServiceAccount                  -       -    create  -       reconcile  -   -
^                       webhook-certs                                                   Secret                          -       -    create  -       reconcile  -   -
stacks-operator-system  canonical-registry-secret                                       Secret                          -       -    create  -       reconcile  -   -
^                       controller-manager                                              Deployment                      -       -    create  -       reconcile  -   -
^                       controller-manager-metrics-service                              Service                         -       -    create  -       reconcile  -   -
^                       leader-election-role                                            Role                            -       -    create  -       reconcile  -   -
^                       leader-election-rolebinding                                     RoleBinding                     -       -    create  -       reconcile  -   -
^                       stackify-image                                                  ConfigMap                       -       -    create  -       reconcile  -   -

Op:      82 create, 0 delete, 0 update, 0 noop
Wait to: 82 reconcile, 0 delete, 0 noop

...(SNIP)...

Succeeded

Install the ClusterBuilders.
kp import -f descriptor-100.0.80.yaml
$ kp clusterbuilder list
NAME       READY    STACK                          IMAGE
base       true     io.buildpacks.stacks.bionic    registry.<MYDOMAIN>/tanzu/build-service/base@sha256:e49bfb99292dd424d41debba05b21a1abed5b806ded08a8fedfab5c5491f3434
default    true     io.buildpacks.stacks.bionic    registry.<MYDOMAIN>/tanzu/build-service/default@sha256:e49bfb99292dd424d41debba05b21a1abed5b806ded08a8fedfab5c5491f3434
full       true     io.buildpacks.stacks.bionic    registry.<MYDOMAIN>/tanzu/build-service/full@sha256:38841a236b15059a390975ea6f492d7bda0d294287d267fe95ff731993363946
tiny       true     io.paketo.stacks.tiny          registry.<MYDOMAIN>/tanzu/build-service/tiny@sha256:97b734efc85782bebdbe090c3bed0fd281ed0c0eec2dffb12d04825468f5091d
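
As a smoke test (a sketch on my part: it assumes registry credentials have been registered for the target namespace with kp secret create, and uses a placeholder Git repository), you can have TBS build an image with kp:
$ kp image create demo-app \
  --tag registry.<MYDOMAIN>/library/demo-app \
  --git https://github.com/<your-org>/<sample-app> \
  --git-revision main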


Starting with the TKGm deployment, we have now deployed Contour, Harbor, and TBS end to end.
