Kubernetes Helm chart notes
Links: Azure/Kubernetes, K8s/Monitoring
Install 2019-03
- Ran the official curl and get_helm install, all fine
- When deploying a chart, got an error: Tiller is not allowed to create namespaces.
  Fixed with the commands below and helm init --upgrade:
kubectl create serviceaccount --namespace kube-system tiller
# serviceaccount "tiller" created
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
# clusterrolebinding "tiller-cluster-rule" created
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
# deployment "tiller-deploy" patched
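An alternative to patching the tiller-deploy deployment afterwards is to hand the service account to helm init directly; a minimal sketch, assuming a Helm 2.x client where --service-account is available:
# after creating the tiller serviceaccount and clusterrolebinding as above
helm init --upgrade --service-account tiller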
2019-03 add ingress and static IP
Get the k8s nodeResourceGroup from the GUI or with:
az aks show --resource-group <rgK8S> --name <clusterName> --query nodeResourceGroup -o tsv
Provision a static IP with:
az network public-ip create --resource-group <MC_K8S_from_above> --name <cluster-PublicIP> --allocation-method static
Assign the IP to the nginx ingress controller:
helm install stable/nginx-ingress --namespace kube-system --set controller.service.loadBalancerIP="52.23.23.32" --set controller.replicaCount=2
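The three steps above can be chained so the fresh address flows straight into the chart values; a sketch, assuming az exposes the new address under the publicIp.ipAddress query path (placeholders as above):
# look up the node resource group of the cluster
NODE_RG=$(az aks show --resource-group <rgK8S> --name <clusterName> --query nodeResourceGroup -o tsv)
# create the static IP there and capture its address
INGRESS_IP=$(az network public-ip create --resource-group $NODE_RG --name <cluster-PublicIP> --allocation-method static --query publicIp.ipAddress -o tsv)
# hand the address to the nginx ingress controller
helm install stable/nginx-ingress --namespace kube-system --set controller.service.loadBalancerIP="$INGRESS_IP" --set controller.replicaCount=2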
Run 2019-02
- helm install --name p1 git/helmchart/ --namespace piet --set "env=DEV" --timeout 600
- Run: helm ls --all p1; to check the status of the release
- Run: helm del --purge p1; to delete it
- helm status p1
- helm history p1
- helm rollback p1 3; roll back to revision 3
- helm rollback; rolls back to the last successfully DEPLOYED revision
- helm upgrade --debug --dry-run; render and validate without applying
- helm upgrade --install; upgrade, or install if the release does not exist yet (combined example below)
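Putting the last two together, a typical upgrade flow for the p1 release could look like this; a sketch reusing the chart path and values from the install above:
# render and validate first, without touching the cluster
helm upgrade --install p1 git/helmchart/ --namespace piet --set "env=DEV" --debug --dry-run
# then apply for real and check the result
helm upgrade --install p1 git/helmchart/ --namespace piet --set "env=DEV" --timeout 600
helm history p1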
Tricks
- Use a checksum of the ConfigMap as a pod template annotation, so that a config change alters the annotation and forces a new rollout for apps that do not pick up new configs on their own (sketch below).
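A minimal sketch of that trick in a deployment template, assuming the chart keeps its config in templates/configmap.yaml (the annotation key checksum/config is just a convention):
# in templates/deployment.yaml, under spec.template.metadata
annotations:
  checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}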
Hooks
annotation: "helm.sh/hook": " "
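For reference, a hedged example of a hook that runs a Job before install; the hook names (pre-install, post-install, pre-upgrade, post-upgrade, pre-delete, post-delete) come from the Helm docs, while the Job itself is hypothetical:
# in templates/pre-install-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-pre-install"
  annotations:
    "helm.sh/hook": pre-install
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: pre-install
          image: busybox
          command: ["sh", "-c", "echo running pre-install hook"]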