Kube-downscaler only partially takes down the workload
We have a GKE cluster, and when we implemented kube-downscaler there, it only took down part of the workload. (It worked perfectly in our test cluster.)
- Version: 20.10.0
- Implemented with ArgoCD (with auto sync on)
- Total workloads = 112, of which 35 were scaled down successfully. The remaining 77 kept scaling down and back up, over and over.
- Tried multiple times, but every time the success count was limited to 35. Not sure if this number is defined somewhere?
- In our test cluster we had around 12 workloads and all were scaled down smoothly, even with ArgoCD auto sync on.
- We noticed the annotation (deployment.kubernetes.io) on the ReplicaSet was changed to "0", but for the problematic 77 workloads this number kept changing back to the original, and pods were killed and launched again.
The kube-downscaler logs just print that it will scale down the application, etc. (no errors/warnings).
The ConfigMap has the following:
DEFAULT_UPTIME: "Mon-Fri 04:00-19:00 CET"
In the deployment we have added resource restrictions:
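For context, the DEFAULT_UPTIME entry above would typically live in a ConfigMap that the kube-downscaler deployment reads as environment variables. A minimal sketch (the ConfigMap name and namespace here are hypothetical, not taken from our setup):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-downscaler        # hypothetical name
  namespace: kube-downscaler   # hypothetical namespace
data:
  # Workloads are kept up Mon-Fri 04:00-19:00 CET and scaled down otherwise
  DEFAULT_UPTIME: "Mon-Fri 04:00-19:00 CET"
```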
I have difficulty understanding your problem report. What do you mean by "it takes down the workload partially"?
Sure, I will try to explain.
We have a GKE cluster running around 112 deployments. Of these, only 35 deployments were scaled down by kube-downscaler. The remaining 77 deployments were scaled down and back up, again and again.
We are using ArgoCD for application deployment with auto sync on (for all 112 applications). For these 77 deployments, when kube-downscaler scales down the workload, ArgoCD auto sync kicks in and brings the load back to its original state. The other 35 deployments stay down the whole time.
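For what it's worth, a common way to stop ArgoCD auto sync from fighting a downscaler is to tell the Application to ignore replica-count diffs on Deployments, so kube-downscaler owns `/spec/replicas`. A minimal sketch (the Application name is a placeholder, and whether `RespectIgnoreDifferences` is needed depends on the ArgoCD version):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app                         # placeholder name
spec:
  # ... project/source/destination as in the existing Application ...
  syncPolicy:
    automated: {}                      # auto sync stays on
    syncOptions:
      - RespectIgnoreDifferences=true  # needed on newer ArgoCD for auto sync to honor the ignores
  ignoreDifferences:
    - group: apps
      kind: Deployment
      jsonPointers:
        - /spec/replicas               # let kube-downscaler control the replica count
```

This does not explain why exactly 35 of the 112 deployments stayed down, but it would at least remove the sync tug-of-war described above.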
Not sure why only some of the applications were scaled down when all have the same setup.
I hope this helps.