VMware/TKGm
Latest revision as of 18:02, 1 November 2023
Tanzu Kubernetes Grid (Multi-Cloud)
Prep TKGm Install
cd /data
curl --insecure https://ders-gitlab.dersllc.com/ders/ders-proxy/-/raw/master/STAR_dersllc_com.crt > /data/ders-star-chain.crt
curl --insecure https://ders-gitlab.dersllc.com/ders/ders-proxy/-/raw/master/AddTrustExternalCARoot.crt >> /data/ders-star-chain.crt
curl --insecure https://ders-gitlab.dersllc.com/ders/vmware-se/-/raw/main/HomeLab/DERS-CA-CERT/ders-ca.cer > /data/ders-priv-ca.crt
curl --insecure https://ders-gitlab.dersllc.com/ders/ders-proxy/-/raw/master/AddTrustExternalCARoot.crt > /data/ders-ca.crt
curl --insecure https://ders-gitlab.dersllc.com/ders/ders-proxy/-/raw/master/dersllc-new.key > /data/ders-star.key
cat ders-priv-ca.crt >> ders-star-chain.crt
cat /data/ders-priv-ca.crt >> /etc/pki/tls/certs/ca-bundle.crt
cp /data/ders-star-chain.crt /data/ders-star-chain.pem
cat /data/ders-ca.crt >> /etc/pki/tls/certs/ca-bundle.crt
systemctl restart docker
docker login harbor.dersllc.com
#arcas --load_tanzu_image_to_harbor --repo_name tanzu_210 --tkg_binaries_path /opt/vmware/arcas/tools/tanzu21.tar
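The chain-assembly steps above can fail silently (a truncated download still concatenates cleanly), so a quick sanity check with openssl is worth running before touching the system CA bundle. This is a hedged sketch that generates a throwaway self-signed certificate so it runs anywhere; in the lab you would point the checks at /data/ders-star-chain.crt instead.

```shell
# Throwaway self-signed cert as a stand-in for the assembled DERS chain
# (assumption: the real input would be /data/ders-star-chain.crt).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" -days 1 \
  -keyout /tmp/demo-ca.key -out /tmp/demo-chain.crt 2>/dev/null

# A healthy bundle has one PEM block per certificate in the chain.
grep -c 'BEGIN CERTIFICATE' /tmp/demo-chain.crt   # prints 1 here

# ...and each block should parse back to a subject line.
openssl x509 -in /tmp/demo-chain.crt -noout -subject
```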
Installing TKGm
curl https://ders-gitlab.dersllc.com/ders/vmware-se/-/raw/main/HomeLab/tkgm-on-vlan-87.json > /data/tkgm-on-vlan-87.json
arcas --env vsphere --file /data/tkgm-on-vlan-87.json --verbose --avi_configuration
arcas --env vsphere --file /data/tkgm-on-vlan-87.json --verbose --skip_precheck --tkg_mgmt_configuration --shared_service_configuration --workload_preconfig --workload_deploy --deploy_extensions
#curl https://ders-gitlab.dersllc.com/ders/vmware-se/-/raw/main/HomeLab/tkgm-on-vlan-87-2.json > /data/tkgm-on-vlan-87-2.json
#arcas --env vsphere --file /data/tkgm-on-vlan-87-2.json --verbose --avi_configuration
#arcas --env vsphere --file /data/tkgm-on-vlan-87-2.json --verbose --skip_precheck --tkg_mgmt_configuration --shared_service_configuration --workload_preconfig --workload_deploy --deploy_extensions
Installing Tanzu Package Applications
tanzu package install cert-manager -n tanzu-package-repo-global -p cert-manager.tanzu.vmware.com -v 1.7.2+vmware.1-tkg.1
tanzu package install contour -n tanzu-package-repo-global -p contour.tanzu.vmware.com -v 1.20.2+vmware.1-tkg.1 -f /data/contour-default-values.yaml
tanzu package install prometheus -n tanzu-package-repo-global -p prometheus.tanzu.vmware.com -v 2.36.2+vmware.1-tkg.1
tanzu package install grafana -n tanzu-package-repo-global -p grafana.tanzu.vmware.com -v 7.5.16+vmware.1-tkg.1
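Order matters in the installs above (cert-manager must be in place before contour, and contour before the monitoring stack). A hedged dry-run sketch that echoes the same four installs from a small table, which makes the ordering and version pins easy to audit; drop the `echo` to execute for real against a cluster with the tanzu CLI.

```shell
# Dry-run: print each `tanzu package install` in order. The package/version
# pairs are copied from the commands above; `echo` keeps this runnable
# without a tanzu CLI or a cluster.
while read -r name pkg ver extra; do
  echo tanzu package install "$name" -n tanzu-package-repo-global \
    -p "$pkg" -v "$ver" $extra
done <<'EOF' | tee /tmp/pkg-cmds.txt
cert-manager cert-manager.tanzu.vmware.com 1.7.2+vmware.1-tkg.1
contour contour.tanzu.vmware.com 1.20.2+vmware.1-tkg.1 -f /data/contour-default-values.yaml
prometheus prometheus.tanzu.vmware.com 2.36.2+vmware.1-tkg.1
grafana grafana.tanzu.vmware.com 7.5.16+vmware.1-tkg.1
EOF
```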
Setup Pinniped for Keycloak Authentication
Setup KeyCloak
1. Apache Proxy Configured as SSO.DERSLLC.COM
<VirtualHost *:443>
    ServerName sso.dersllc.com
    RequestHeader set X-Forwarded-Proto "https"
    RemoteIPHeader X-Forwarded-For
    ProxyPass / http://172.16.87.22:8080/
    ProxyPassReverse / http://172.16.87.22:8080/
</VirtualHost>
2. Run Container
docker stop keycloak
docker rm keycloak
docker run -d --name keycloak -p 8080:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin quay.io/keycloak/keycloak:20.0.2 start --proxy edge --hostname-strict=false
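Keycloak takes a while to boot, so scripting against it immediately after `docker run` tends to fail. A small wait loop avoids that; this hedged sketch is demoed against a file:// URL purely so it is self-contained — in the lab the URL would be http://172.16.87.22:8080/.

```shell
# Poll a URL until curl succeeds or we run out of tries.
wait_for_url() {
  url=$1; tries=${2:-30}; i=0
  until curl -sf -o /dev/null "$url"; do
    i=$((i + 1))
    [ "$i" -ge "$tries" ] && return 1
    sleep 1
  done
  echo "up: $url"
}

# Self-contained demo target (stand-in for http://172.16.87.22:8080/).
printf 'ready\n' > /tmp/probe.txt
wait_for_url "file:///tmp/probe.txt" | tee /tmp/wait.out
```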
3. Update the Admin Password
4. Create a Client Connection
Open the Keycloak Admin Console
Click 'Clients'
Click 'Create client'
Fill in the form with the following values:
    Client type: OpenID Connect
    Client ID: tkgm
    Name: TKGm
Click 'Next'
Check 'Client authentication'
Check 'Authorization'
Make sure 'Standard flow' is enabled
Click 'Save'
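The same client can also be created non-interactively with Keycloak's bundled kcadm.sh CLI. This is a hedged sketch, echoed rather than executed (it needs a running, authenticated Keycloak), and the attribute names are assumptions mapped from the UI fields above — verify them against the Keycloak admin docs before relying on them.

```shell
# Dry-run: echo the kcadm.sh invocation instead of executing it. Before
# running for real you would log in first, e.g.:
#   kcadm.sh config credentials --server http://localhost:8080 --realm master --user admin
echo docker exec keycloak /opt/keycloak/bin/kcadm.sh create clients -r master \
  -s clientId=tkgm -s name=TKGm -s protocol=openid-connect \
  -s publicClient=false -s standardFlowEnabled=true | tee /tmp/kcadm-cmd.txt
```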
References:
https://www.keycloak.org/getting-started/getting-started-docker
Default Pods that have curl
kube-system
1. Pod vsphere-csi-node, container vsphere-csi-node
kubectl exec -it -n kube-system vsphere-csi-node-4bf25 -c vsphere-csi-node -- bash
2. Pod vsphere-csi-controller, container vsphere-csi-controller
kubectl exec -it -n kube-system vsphere-csi-controller-74f44b74c5-2t2zh -c vsphere-csi-controller -- bash
Upgrade vSphere Cluster Hardware
Frank Escaros-Buechsel: The easiest approach would be to enable EVC on the cluster now, then add the new hosts to that cluster and migrate the VMs over with vMotion (depending on rack space and network layout, of course). This way you don't need to update any references, and you still get a more or less seamless migration.
Change TKGm node Resources (CPU / Memory / Datastore)
TKG 1.x
References:
https://veducate.co.uk/tkg-kubectl-scale-vertically/
https://vmwire.com/2021/11/22/scaling-tkgm-control-plane-nodes-vertically/
TKG 2.x
kubectl config use-context mgmt-admin@mgmt
kubectl edit Kubeadmconfigtemplates -n tkg-system tanzu-mgmt-md-0-bootstrap-72bfm
Add Trusted CA to TKG Management Cluster
TKG 1.x
kubectl config use-context mgmt-admin@mgmt
kubectl edit Kubeadmconfigtemplates -n tkg-system tanzu-mgmt-md-0-bootstrap-72bfm

spec:
  template:
    spec:
      files:
      - content: |
          -----BEGIN CERTIFICATE-----
          MIIFmjCCA4KgAwIBAgIJAKVK2W1HOS0NMA0GCSqGSIb3DQEBCwUAMGsxCzAJBgNV
          ......
          l2D4kF501KKaU73yqWjgom7C12yxow+ev+to51byrvLjKzg6CYG1a4XXvi3tPxq3
          smPi9WIsgtRqAEFQ8TmDn5XpNpaYbg==
          -----END CERTIFICATE-----
        owner: root:root
        path: /etc/ssl/certs/tkg-custom-ca.pem
        permissions: "0644"
TKG 2.x
kubectl config use-context mgmt-admin@mgmt
kubectl edit clusters shared -n default
spec:
  ..........
  topology:
    ..........
    variables:
      ..........
      - name: trust
        value:
          additionalTrustedCAs:
          - data: <BASE64 ENCODED PEM OF CA>
            name: imageRepository
          - data: <BASE64 ENCODED PEM OF CA>
            name: proxy   # <--- It is very important that the name is set to proxy
Once this is saved, Tanzu will rebuild the cluster nodes with the new CA. This is done as a rolling update, so there is no downtime.
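The `<BASE64 ENCODED PEM OF CA>` placeholders are the single-line base64 encoding of the CA's PEM file. A minimal sketch, using a dummy PEM so it runs anywhere; in the lab the input would be a file like /data/ders-priv-ca.crt (assumed path). Note that `-w0` (disable line wrapping) is a GNU coreutils flag.

```shell
# Stand-in PEM (not a real certificate) so the sketch is self-contained.
printf -- '-----BEGIN CERTIFICATE-----\nMIIFdummy\n-----END CERTIFICATE-----\n' > /tmp/demo-ca.pem

# Single-line base64, suitable for pasting into the `data:` fields above.
base64 -w0 /tmp/demo-ca.pem > /tmp/demo-ca.b64
cat /tmp/demo-ca.b64; echo
```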
References:
https://vrabbi.cloud/post/trusting-private-cas-in-tkg-2-1-clusters/
https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/2.3/using-tkg/workload-clusters-secret.html
Add another workload cluster after using SIVT to deploy TKGm
cp ~/.config/tanzu/tkg/clusterconfigs/work.yaml test.yaml
# Change the name of the cluster throughout the file.
vi test.yaml
cp ~/.config/tanzu/tkg/clusterconfigs/work.yaml ~/tmc.yaml
sed -i 's/work1/tmc/g' ~/tmc.yaml #<-- Only run the sed command if the old cluster name is unique in the file; otherwise edit the file manually!
tanzu cluster create -f ~/tmc.yaml
tanzu cluster kubeconfig get tmc --admin
kubectl config use-context tmc-admin@tmc
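The sed rename in the steps above assumes the old cluster name appears only where a cluster name belongs. A quick way to convince yourself is to run the same substitution on a throwaway file first; the keys below are illustrative stand-ins for the real cluster config.

```shell
# Throwaway stand-in for work.yaml; only the cluster name should change.
cat > /tmp/work.yaml <<'EOF'
CLUSTER_NAME: work1
CLUSTER_PLAN: dev
EOF

sed 's/work1/tmc/g' /tmp/work.yaml > /tmp/tmc.yaml
cat /tmp/tmc.yaml
# CLUSTER_NAME: tmc
# CLUSTER_PLAN: dev
```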
Set NTP server for Cluster Nodes
kubectl config use-context mgmt-admin@mgmt
kubectl edit clusters.cluster.x-k8s.io shared -n default

spec:
  topology:
    variables:
    - name: ntpServers
      value:
      - 172.16.84.21
This rebuilds all of the nodes with the NTP servers set in the chrony.conf file. It is performed as a rolling update, so there is no downtime.
REFERENCES
https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.6/vmware-tanzu-kubernetes-grid-16/GUID-mgmt-clusters-airgapped-environments.html