VMware/TKGm
Tanzu Kubernetes Grid (Multi-Cloud)
Prep TKGm Install
cd /data
curl --insecure https://ders-gitlab.dersllc.com/ders/ders-proxy/-/raw/master/STAR_dersllc_com.crt > /data/ders-star-chain.crt
curl --insecure https://ders-gitlab.dersllc.com/ders/ders-proxy/-/raw/master/AddTrustExternalCARoot.crt >> /data/ders-star-chain.crt
curl --insecure https://ders-gitlab.dersllc.com/ders/vmware-se/-/raw/main/HomeLab/DERS-CA-CERT/ders-ca.cer > /data/ders-priv-ca.crt
curl --insecure https://ders-gitlab.dersllc.com/ders/ders-proxy/-/raw/master/AddTrustExternalCARoot.crt > /data/ders-ca.crt
curl --insecure https://ders-gitlab.dersllc.com/ders/ders-proxy/-/raw/master/dersllc-new.key > /data/ders-star.key
cat ders-priv-ca.crt >> ders-star-chain.crt
cat /data/ders-priv-ca.crt >> /etc/pki/tls/certs/ca-bundle.crt
cp /data/ders-star-chain.crt /data/ders-star-chain.pem
cat /data/ders-ca.crt >> /etc/pki/tls/certs/ca-bundle.crt
systemctl restart docker
docker login harbor.dersllc.com
#arcas --load_tanzu_image_to_harbor --repo_name tanzu_210 --tkg_binaries_path /opt/vmware/arcas/tools/tanzu21.tar
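A quick sanity check before moving on is to confirm the assembled chain looks right and that Docker now trusts the Harbor registry. This is a minimal sketch; the exact curl output lines to grep for are an assumption.

# Show the leaf certificate in the assembled chain (subject, issuer, expiry)
openssl x509 -in /data/ders-star-chain.pem -noout -subject -issuer -enddate

# Confirm the OS trust store now validates the registry certificate
curl -sv https://harbor.dersllc.com/ -o /dev/null 2>&1 | grep -iE "subject:|issuer:|certificate verify"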
Installing TKGm
curl https://ders-gitlab.dersllc.com/ders/vmware-se/-/raw/main/HomeLab/tkgm-on-vlan-87.json > /data/tkgm-on-vlan-87.json
arcas --env vsphere --file /data/tkgm-on-vlan-87.json --verbose --avi_configuration
arcas --env vsphere --file /data/tkgm-on-vlan-87.json --verbose --skip_precheck --tkg_mgmt_configuration --shared_service_configuration --workload_preconfig --workload_deploy --deploy_extensions
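Once arcas finishes, it is worth confirming the clusters actually came up before installing packages. A minimal check, assuming arcas merged the admin kubeconfigs as usual:

# List the management cluster and any workload/shared clusters it manages
tanzu cluster list --include-management-cluster

# Switch to the management cluster context and confirm every node is Ready
kubectl config use-context mgmt-admin@mgmt
kubectl get nodes -o wide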
Installing Tanzu Package Applications
tanzu package install cert-manager -n tanzu-package-repo-global -p cert-manager.tanzu.vmware.com -v 1.7.2+vmware.1-tkg.1
tanzu package install contour -n tanzu-package-repo-global -p contour.tanzu.vmware.com -v 1.20.2+vmware.1-tkg.1 -f /data/contour-default-values.yaml
tanzu package install prometheus -n tanzu-package-repo-global -p prometheus.tanzu.vmware.com -v 2.36.2+vmware.1-tkg.1
tanzu package install grafana -n tanzu-package-repo-global -p grafana.tanzu.vmware.com -v 7.5.16+vmware.1-tkg.1
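The packages reconcile asynchronously, so check that each one reports "Reconcile succeeded" before depending on it:

# Reconciliation status of everything installed above
tanzu package installed list -n tanzu-package-repo-global

# Or check the underlying kapp-controller App custom resources directly
kubectl get apps -n tanzu-package-repo-global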
Setup Pinniped for Keycloak Authentication
Setup Keycloak
1. Apache Proxy Configured as SSO.DERSLLC.COM
<VirtualHost *:443>
    ServerName sso.dersllc.com
    RequestHeader set X-Forwarded-Proto "https"
    RemoteIPHeader X-Forwarded-For
    ProxyPass / http://172.16.87.22:8080/
    ProxyPassReverse / http://172.16.87.22:8080/
</VirtualHost>
2. Run Container
docker stop keycloak
docker rm keycloak
docker run -d --name keycloak -p 8080:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin quay.io/keycloak/keycloak:20.0.2 start --proxy edge --hostname-strict=false
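Before creating the client, confirm the container is healthy and reachable through the Apache proxy. A quick check, assuming DNS for sso.dersllc.com points at the proxy host:

# Keycloak logs a "started in ...s" line once it is ready
docker logs keycloak 2>&1 | tail -n 5

# Reach it directly on the container port, then through the Apache proxy
curl -sI http://172.16.87.22:8080/ | head -n 1
curl -sI https://sso.dersllc.com/ | head -n 1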
3. Update the Admin Password
4. Create a Client Connection
Open the Keycloak Admin Console
Click 'Clients'
Click 'Create client'
Fill in the form with the following values:
    Client type: OpenID Connect
    Client ID: tkgm
    Name: TKGm
Click 'Next'
Check 'Client authentication'
Check 'Authorization'
Make sure 'Standard flow' is enabled
Click 'Save'
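With client authentication enabled, Keycloak generates a client secret under the client's Credentials tab; Pinniped needs that secret plus the realm's OIDC issuer URL. A quick way to confirm the issuer is to pull the realm's discovery document (this assumes the client lives in the default 'master' realm; substitute your realm name if different):

# OIDC discovery document for the realm; the issuer value is what Pinniped points at
curl -s https://sso.dersllc.com/realms/master/.well-known/openid-configuration | grep -o '"issuer":"[^"]*"'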
References:
https://www.keycloak.org/getting-started/getting-started-docker
Default Pods that have curl
kube-system
1. vsphere-csi-node (container: vsphere-csi-node)
kubectl exec -it -n kube-system vsphere-csi-node-4bf25 -c vsphere-csi-node -- bash
2. vsphere-csi-controller (container: vsphere-csi-controller)
kubectl exec -it -n kube-system vsphere-csi-controller-74f44b74c5-2t2zh -c vsphere-csi-controller -- bash
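These shells are mainly useful for in-cluster debugging, e.g. checking that pods can reach the registry and SSO endpoints over TLS. A minimal sketch run from inside one of the containers above (the pod names will differ in your cluster):

# Reach the registry over TLS from inside the cluster and show the verification result
curl -sv https://harbor.dersllc.com/v2/ -o /dev/null 2>&1 | grep -iE "HTTP/|subject:|certificate verify"

# Check that the SSO endpoint is reachable from pods as well
curl -sI https://sso.dersllc.com/ | head -n 1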
Upgrade vSphere Cluster Hardware
Frank Escaros-Buechsel: The easiest approach would be to enable EVC on the cluster now, then add the new hosts to that cluster and migrate the VMs over with vMotion (depending on rack space and network layout, of course). This way you don't need to update any references and you still get a more or less seamless migration.
Change TKGm node Resources (CPU / Memory / Datastore)
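Both of the posts referenced below follow the same Cluster API pattern for TKG 1.x-style clusters: clone the VSphereMachineTemplate with the new sizing and point the MachineDeployment (or the KubeadmControlPlane for control plane nodes) at it, which triggers a rolling replacement. A rough sketch only; the object names work-1-worker and work-1-md-0 are placeholders for whatever your cluster actually uses:

# Work from the management cluster context
kubectl config use-context mgmt-admin@mgmt

# Clone the current worker template under a new name with bigger sizing
kubectl get vspheremachinetemplate work-1-worker -n default -o yaml > worker-template-v2.yaml
vi worker-template-v2.yaml   # change metadata.name, drop resourceVersion/uid/status, bump spec.template.spec.numCPUs / memoryMiB
kubectl apply -f worker-template-v2.yaml

# Point the MachineDeployment at the new template; Cluster API rolls the workers one by one
kubectl patch machinedeployment work-1-md-0 -n default --type merge \
  -p '{"spec":{"template":{"spec":{"infrastructureRef":{"name":"work-1-worker-v2"}}}}}'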
References:
https://veducate.co.uk/tkg-kubectl-scale-vertically/
https://vmwire.com/2021/11/22/scaling-tkgm-control-plane-nodes-vertically/
References:
https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/1.6/vmware-tanzu-kubernetes-grid-16/GUID-mgmt-clusters-airgapped-environments.html
Add Trusted CA to TKG Management Cluster (1.x)
kubectl config use-context mgmt-admin@mgmt
kubectl edit Kubeadmconfigtemplates -n tkg-system tanzu-mgmt-md-0-bootstrap-72bfm

spec:
  template:
    spec:
      files:
      - content: |
          -----BEGIN CERTIFICATE-----
          MIIFmjCCA4KgAwIBAgIJAKVK2W1HOS0NMA0GCSqGSIb3DQEBCwUAMGsxCzAJBgNV
          ......
          l2D4kF501KKaU73yqWjgom7C12yxow+ev+to51byrvLjKzg6CYG1a4XXvi3tPxq3
          smPi9WIsgtRqAEFQ8TmDn5XpNpaYbg==
          -----END CERTIFICATE-----
        owner: root:root
        path: /etc/ssl/certs/tkg-custom-ca.pem
        permissions: "0644"
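Once a node has been created (or recreated) from this template, the CA should be present on disk. A hedged check, assuming you can SSH to the node with the usual capv user for TKG on vSphere:

# Grab a node address from the management cluster
kubectl get nodes -o wide

# On the node, confirm the custom CA landed where the template writes it
ssh capv@<NODE_IP> "openssl x509 -in /etc/ssl/certs/tkg-custom-ca.pem -noout -subject -enddate"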
Add Trusted CA to TKG Management Cluster (TKG 2.x)
kubectl config use-context mgmt-admin@mgmt
kubectl edit clusters shared -n default
spec:
  ..........
  topology:
    ..........
    variables:
      ..........
      - name: trust
        value:
          additionalTrustedCAs:
          - data: <BASE64 ENCODED PEM OF CA>
            name: imageRepository
          - data: <BASE64 ENCODED PEM OF CA>
            name: proxy   # <--- It is very important that this entry is named proxy
Once this is saved, Tanzu will rebuild the cluster nodes with the new CA. This is a rolling update, so there is no downtime.
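To populate the data fields, base64-encode the CA PEM (using the private CA pulled down during prep as an example), and then watch the rollout from the management cluster:

# Produce the value for the 'data' fields (single line, no wrapping)
base64 -w 0 /data/ders-priv-ca.crt; echo

# Watch the machines roll as Tanzu replaces the nodes one at a time
kubectl get machines -n default -w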
Add another workload cluster
cp .config/tanzu/tkg/clusterconfigs/work.yaml test.yaml
# Change the name of the cluster throughout the file.
vi test.yaml
tanzu cluster create work-1 -f test.yaml
tanzu cluster kubeconfig get --admin work-1
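After the create finishes, the new cluster appears in the cluster list and the kubeconfig get command merges its admin context into your kubeconfig (the context name follows the usual <cluster>-admin@<cluster> pattern):

tanzu cluster list
kubectl config use-context work-1-admin@work-1
kubectl get nodes -o wide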
Set NTP server for Cluster Nodes
Edit the clusters.cluster.x-k8s.io object, search for ntpServers, and add the value, then save and quit (Esc, :wq). This will rebuild all of the nodes with the NTP servers set in their chrony configuration (chrony.conf).

spec:
  topology:
    variables:
    - name: ntpServers
      value:
      - 172.16.84.21
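A quick way to confirm the value was saved, and that a rebuilt node is actually using it (using the shared cluster from earlier as an example; substitute your cluster name and node IP):

# Show the ntpServers variable as stored on the cluster object
kubectl get cluster shared -n default \
  -o jsonpath='{.spec.topology.variables[?(@.name=="ntpServers")].value}'; echo

# After a node has been rebuilt, confirm chrony picked up the new server
ssh capv@<NODE_IP> "chronyc sources"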