Kubernetes helm chart notes
Links: Azure/Kubernetes
Install 2019-03
- Ran the official curl / get_helm.sh install script, all fine
- When deploying a chart, got an error: Tiller is not allowed to create namespaces.
- Fixed with the commands below, followed by helm init --upgrade
kubectl create serviceaccount --namespace kube-system tiller   # serviceaccount "tiller" created
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller   # clusterrolebinding "tiller-cluster-rule" created
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'   # deployment "tiller-deploy" patched
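A quick sanity check (not in the original notes) that the patched deployment now uses the tiller service account, followed by the helm init --upgrade mentioned above:
{{{
# the patched pod template should reference the tiller service account
kubectl get deploy tiller-deploy --namespace kube-system -o yaml | grep serviceAccount
# refresh Tiller after the RBAC change
helm init --upgrade
}}}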
2019-03 add ingress and static IP
1. Get the k8s nodeResourceGroup from the GUI or with:
az aks show --resource-group <rgK8S> --name <clusterName> --query nodeResourceGroup -o tsv
2. Provision a static IP with:
az network public-ip create --resource-group <MC_K8S_from_above> --name <cluster-PublicIP> --allocation-method static
3. Assign the IP to the nginx ingress controller:
helm install stable/nginx-ingress --namespace kube-system --set controller.service.loadBalancerIP="52.23.23.32" --set controller.replicaCount=2
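The address hard-coded in the last command is the static IP allocated in step 2; a sketch (using the same placeholder names as above) for reading it back and passing it through instead:
{{{
# read back the allocated address
IP=$(az network public-ip show --resource-group <MC_K8S_from_above> --name <cluster-PublicIP> --query ipAddress -o tsv)
# hand it to the ingress controller
helm install stable/nginx-ingress --namespace kube-system \
  --set controller.service.loadBalancerIP="$IP" \
  --set controller.replicaCount=2
}}}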
Run 2019-02
- helm install --name p1 git/helmchart/ --namespace piet --set "env=DEV" --timeout 600
- Run: helm ls --all p1; to check the status of the release
- Run: helm del --purge p1; to delete it
- helm status p1
- helm history p1
- helm rollback p1 3; roll back to version 3
- helm rollback; to the last successfully DEPLOYED revision
- helm upgrade --debug --dry-run
- helm upgrade --install
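Putting the last two bullets together for the p1 release installed above (same chart path and values as earlier; a sketch, not from the original notes):
{{{
# preview what the upgrade would change without applying it
helm upgrade --install p1 git/helmchart/ --namespace piet --set "env=DEV" --debug --dry-run
# apply it; --install creates the release if it does not exist yet
helm upgrade --install p1 git/helmchart/ --namespace piet --set "env=DEV" --timeout 600
}}}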
Tricks
- Use a checksum of the ConfigMap as a pod-template annotation to force a new rollout for apps that do not pick up new configs on their own (see the Deployment sketch at the end of this section).
- hooks: https://helm.sh/docs/developing_charts/#hooks and https://youtu.be/WugC_mbbiWU?t=1043 (filled-in example right below)
annotations: "helm.sh/hook": " "
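A filled-in version of the hook annotation above, as a minimal sketch: a pre-upgrade Job (the name, image, and command are placeholders, not from the original notes):
{{{
apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-pre-upgrade"        # placeholder name
  annotations:
    "helm.sh/hook": pre-upgrade                  # run before the release is upgraded
    "helm.sh/hook-weight": "0"                   # ordering among hooks of the same type
    "helm.sh/hook-delete-policy": hook-succeeded # remove the Job once it succeeds
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: pre-upgrade
          image: busybox                         # placeholder image
          command: ["sh", "-c", "echo running pre-upgrade hook"]
}}}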
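For the ConfigMap-checksum trick in the first bullet, a minimal deployment-template sketch; it assumes the chart has a templates/configmap.yaml, and all names here are placeholders:
{{{
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "{{ .Release.Name }}-app"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: "{{ .Release.Name }}-app"
  template:
    metadata:
      labels:
        app: "{{ .Release.Name }}-app"
      annotations:
        # hash of the rendered ConfigMap: any config change changes this value,
        # which changes the pod template and forces a new rollout
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
    spec:
      containers:
        - name: app
          image: busybox                         # placeholder image
          command: ["sh", "-c", "sleep 3600"]
}}}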