Partially takes down the workload #3

Open
opened 5 months ago by ank-ankur · 2 comments

Hi,

We have a GKE cluster, and when we implemented kube-downscaler it only takes down part of the workload. (It worked perfectly in our test cluster.)

Details:

  • Version: 20.10.0
  • Implemented with ArgoCD (with auto sync on)
  • Total workloads = 112, of which 35 were scaled down successfully. The remaining 77 kept scaling down and up again and again.
  • Tried multiple times, but every time the success count was limited to 35. Not sure if this number is defined somewhere?
  • In our test cluster we had around 12 workloads and all were scaled down smoothly, even with ArgoCD auto sync on.
  • We noticed the annotations (deployment.kubernetes.io) on the ReplicaSet were changed to "0", but this number kept changing back to the original for the problematic 77 workloads, and the pods were killed and launched again (see the sketch after this list):
    deployment.kubernetes.io/desired-replicas: '2'
    deployment.kubernetes.io/max-replicas: '3'
    downscaler/original-replicas: '2'
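
A minimal sketch of what a cleanly scaled-down Deployment would be expected to look like during downtime (the name is a placeholder; kube-downscaler records the previous count in the downscaler/original-replicas annotation and sets spec.replicas to 0):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-app                      # hypothetical name
      annotations:
        downscaler/original-replicas: "2"    # written by kube-downscaler
    spec:
      replicas: 0                            # set by kube-downscaler during downtime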

The kube-downscaler logs just print that it will scale down the application, etc. (no errors or warnings).

The ConfigMap has the following:
    DEFAULT_UPTIME: "Mon-Fri 04:00-19:00 CET"
    EXCLUDE_NAMESPACES: "namespace1,namespace2"
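
For completeness, a minimal sketch of the full ConfigMap manifest this implies (name and namespace are assumptions; it would be consumed by the downscaler deployment via envFrom):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kube-downscaler          # hypothetical name
      namespace: kube-downscaler     # hypothetical namespace
    data:
      DEFAULT_UPTIME: "Mon-Fri 04:00-19:00 CET"
      EXCLUDE_NAMESPACES: "namespace1,namespace2"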

In the deployment we have added resource restrictions:
    resources:
      limits:
        cpu: 200m
        memory: 400Mi
      requests:
        cpu: 100m
        memory: 200Mi
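
For context, a minimal sketch of how the ConfigMap and the resource restrictions would fit together in the kube-downscaler container spec (the image path and ConfigMap name are assumptions):

    containers:
      - name: kube-downscaler
        image: hjacobs/kube-downscaler:20.10.0   # version from above; registry path assumed
        envFrom:
          - configMapRef:
              name: kube-downscaler              # hypothetical ConfigMap name
        resources:
          limits:
            cpu: 200m
            memory: 400Mi
          requests:
            cpu: 100m
            memory: 200Mi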

Owner

I have difficulty understanding your problem report; what do you mean by "it takes down the workload partially"?

Poster

Sure, I will try to explain.

We have a GKE cluster running around 112 deployments. Of these, only 35 deployments were scaled down by kube-downscaler. The rest of the deployments (77) were scaled down and up, again and again.

We are using ArgoCD for application deployment with auto sync on (for all 112 applications), so for these 77 deployments, when kube-downscaler scales down the workload, ArgoCD auto sync kicks in and brings the workload back to its original state. The other 35 deployments stay down the whole time.
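
As an illustration of the interaction described above (not something confirmed in this thread): if ArgoCD tracks spec.replicas as part of the desired state, auto sync will keep reverting the change kube-downscaler makes. A minimal sketch of an Application spec telling ArgoCD to ignore replica-count differences on Deployments (the Application name is a placeholder; depending on the ArgoCD version, the RespectIgnoreDifferences=true sync option may also be needed for auto sync to honour it):

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: example-app              # hypothetical Application name
    spec:
      # ... existing source/destination/syncPolicy fields ...
      ignoreDifferences:
        - group: apps
          kind: Deployment
          jsonPointers:
            - /spec/replicas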

Not sure why only some of the applications were scaled down when all have the same implementation.

I hope this helps.
