Waited due to client-side throttling, not priority and fairness in vso logs #1000

Open
kamalverma1 opened this issue Jan 11, 2025 · 3 comments
Labels: bug (Something isn't working)

kamalverma1 commented Jan 11, 2025

Describe the bug
I created about 2,000 VaultStaticSecrets in a test environment to validate a production scenario where we would be adding a similar number of VaultStaticSecrets. After creating them, the Kubernetes Secrets were created as expected, but the VSO manager pod continuously shows client-side throttling logs. I could not find any relevant configuration to fix this.

I would like to know how to fix this and prevent it from causing problems in production.

Here is one of my VaultStaticSecret resources (kubectl describe output):

kubectl describe Vaultstaticsecrets my-secret-1 -n kverma1
Name:         my-secret-1
Namespace:    kverma1
Labels:       <none>
Annotations:  <none>
API Version:  secrets.hashicorp.com/v1beta1
Kind:         VaultStaticSecret
Metadata:
  Creation Timestamp:  2025-01-10T08:13:19Z
  Finalizers:
    vaultstaticsecret.secrets.hashicorp.com/finalizer
  Generation:        2
  Resource Version:  1248163987
  UID:               3e1ee715-5aae-4135-bb8d-e56df279061e
Spec:
  Destination:
    Create:     true
    Name:       kverma1-secret-1
    Overwrite:  false
    Transformation:
  Hmac Secret Data:  true
  Mount:             kverma1-kv
  Path:              testsecret
  Refresh After:     30s
  Type:              kv-v2
  Vault Auth Ref:    default
Status:
  Last Generation:  2
  Secret MAC:       u/gsXXXX/XXXXXXXXXXXXXXXXXXX8CJ2Q=
Events:             <none>

vault-secrets-operator logs:

I0111 21:29:20.849500       1 request.go:697] Waited for 1m15.081255886s due to client-side throttling, not priority and fairness, request: PUT:https://172.25.0.1:443/apis/secrets.hashicorp.com/v1beta1/namespaces/kverma1/vaultstaticsecrets/my-secret-162/status
I0111 21:29:30.899511       1 request.go:697] Waited for 1m15.124993742s due to client-side throttling, not priority and fairness, request: PUT:https://172.25.0.1:443/apis/secrets.hashicorp.com/v1beta1/namespaces/kverma1/vaultstaticsecrets/my-secret-540/status
I0111 21:29:40.949080       1 request.go:697] Waited for 1m15.092334163s due to client-side throttling, not priority and fairness, request: PUT:https://172.25.0.1:443/apis/secrets.hashicorp.com/v1beta1/namespaces/kverma1/vaultstaticsecrets/my-secret-854/status
I0111 21:29:50.949185       1 request.go:697] Waited for 1m14.985630581s due to client-side throttling, not priority and fairness, request: PUT:https://172.25.0.1:443/apis/secrets.hashicorp.com/v1beta1/namespaces/kverma1/vaultstaticsecrets/my-secret-1844/status
I0111 21:30:00.949355       1 request.go:697] Waited for 1m15.215804073s due to client-side throttling, not priority and fairness, request: PUT:https://172.25.0.1:443/apis/secrets.hashicorp.com/v1beta1/namespaces/kverma1/vaultstaticsecrets/my-secret-1592/status
I0111 21:30:10.998788       1 request.go:697] Waited for 1m14.930877226s due to client-side throttling, not priority and fairness, request: PUT:https://172.25.0.1:443/apis/secrets.hashicorp.com/v1beta1/namespaces/kverma1/vaultstaticsecrets/my-secret-1933/status
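
(For context: the "Waited ... due to client-side throttling, not priority and fairness" message is logged by client-go's request.go. Every controller-runtime based operator carries a token-bucket rate limiter on its API client, and when rest.Config.QPS/Burst are left at zero they default to QPS 5 and Burst 10. VSO 0.7.1 does not appear to expose these knobs. Below is a minimal generic controller-runtime sketch of the mechanism being discussed, not VSO's actual code; the values 200/400 are illustrative assumptions, not recommendations.)

package main

import (
	ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
	// client-go defaults: QPS 5, Burst 10 when these fields are zero.
	// With 2000 CRs refreshing every 30s, that budget is far too small.
	cfg := ctrl.GetConfigOrDie()
	cfg.QPS = 200   // sustained API requests/sec (assumed value)
	cfg.Burst = 400 // short-term burst allowance (assumed value)

	mgr, err := ctrl.NewManager(cfg, ctrl.Options{})
	if err != nil {
		panic(err)
	}
	// Controllers would be registered here before mgr.Start(...).
	_ = mgr
}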

To Reproduce
Steps to reproduce the behavior:

  1. Deploy about 2,000 VaultStaticSecret custom resources like the following (a generation sketch follows this list):
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
  name: my-secret-1254
  namespace: kverma1
spec:
  destination:
    create: true
    name: kverma1-secret-1254
    overwrite: false
  hmacSecretData: true
  mount: kverma1-kv
  path: testsecret
  refreshAfter: 30s
  type: kv-v2
  vaultAuthRef: default
  2. Custom resources used for the secrets: VaultConnection and VaultAuth for the connection to Vault.

  3. VSO Helm chart resources in values.yaml:

    controller:
      kubeRbacProxy:
        resources:
          limits:
            cpu: 500m
            memory: 512Mi
          requests:
            cpu: 200m
            memory: 256Mi
      manager:
        resources:
          limits:
            cpu: 1000m
            memory: 1Gi
          requests:
            cpu: 500m
            memory: 512Mi
      replicas: 1
  4. See the error in the vault-secrets-operator logs (the same client-side throttling messages shown above).
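
A hypothetical generator for the 2,000 test resources (the names my-secret-1 ... my-secret-2000 match the logs above; this helper is an assumption, not part of the original report):

package main

import "fmt"

func main() {
	// Emit one VaultStaticSecret manifest per index, matching the
	// spec shown in step 1.
	const tmpl = `---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
  name: my-secret-%d
  namespace: kverma1
spec:
  destination:
    create: true
    name: kverma1-secret-%d
    overwrite: false
  hmacSecretData: true
  mount: kverma1-kv
  path: testsecret
  refreshAfter: 30s
  type: kv-v2
  vaultAuthRef: default
`
	for i := 1; i <= 2000; i++ {
		fmt.Printf(tmpl, i, i)
	}
}

Pipe the output to the cluster with: go run gen.go | kubectl apply -f -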

vault-secrets-operator controller Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "8"
    meta.helm.sh/release-name: hashicorp-vault-secrets-operator
    meta.helm.sh/release-namespace: vso-namespace
  creationTimestamp: "2025-01-10T13:06:35Z"
  generation: 8
  labels:
    app.kubernetes.io/component: controller-manager
    app.kubernetes.io/instance: hashicorp-vault-secrets-operator
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault-secrets-operator
    app.kubernetes.io/version: 0.7.1
    control-plane: controller-manager
    helm.sh/chart: vault-secrets-operator-0.7.1
    helm.toolkit.fluxcd.io/name: hashicorp-vault-secrets-operator
    helm.toolkit.fluxcd.io/namespace: vso-namespace
  name: hashicorp-vault-secrets-operator-controller-manager
  namespace: vso-namespace
  resourceVersion: "1248620639"
  uid: 2dac04d1-b857-43c0-91f0-2446722af05f
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/instance: hashicorp-vault-secrets-operator
      app.kubernetes.io/name: vault-secrets-operator
      control-plane: controller-manager
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        kubectl.kubernetes.io/default-container: manager
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: hashicorp-vault-secrets-operator
        app.kubernetes.io/name: vault-secrets-operator
        control-plane: controller-manager
    spec:
      containers:
      - args:
        - --secure-listen-address=0.0.0.0:8443
        - --upstream=http://127.0.0.1:8080/
        - --logtostderr=true
        - --v=0
        env:
        - name: KUBERNETES_CLUSTER_DOMAIN
          value: cluster.local
        image: gcr.io/kubebuilder/kube-rbac-proxy:v0.15.0
        imagePullPolicy: IfNotPresent
        name: kube-rbac-proxy
        ports:
        - containerPort: 8443
          name: https
          protocol: TCP
        resources:
          limits:
            cpu: 500m
            memory: 512Mi
          requests:
            cpu: 200m
            memory: 256Mi
        securityContext:
          allowPrivilegeEscalation: false
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      - args:
        - --health-probe-bind-address=:8081
        - --metrics-bind-address=127.0.0.1:8080
        - --leader-elect
        - --backoff-initial-interval=5s
        - --backoff-max-interval=60s
        - --backoff-max-elapsed-time=0s
        - --backoff-multiplier=1.50
        - --backoff-randomization-factor=0.50
        - --zap-log-level=info
        - --zap-time-encoding=rfc3339
        - --zap-stacktrace-level=panic
        command:
        - /vault-secrets-operator
        env:
        - name: VSO_MAX_CONCURRENT_RECONCILES
          value: "100"
        image: hashicorp/vault-secrets-operator:0.7.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 8081
            scheme: HTTP
          initialDelaySeconds: 15
          periodSeconds: 20
          successThreshold: 1
          timeoutSeconds: 1
        name: manager
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: 8081
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          limits:
            cpu: "1"
            memory: 1Gi
          requests:
            cpu: 500m
            memory: 512Mi
        securityContext:
          allowPrivilegeEscalation: false
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/run/podinfo
          name: podinfo
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        runAsNonRoot: true
      serviceAccount: hashicorp-vault-secrets-operator-controller-manager
      serviceAccountName: hashicorp-vault-secrets-operator-controller-manager
      terminationGracePeriodSeconds: 120
      volumes:
      - downwardAPI:
          defaultMode: 420
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
            path: name
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.uid
            path: uid
        name: podinfo
status:
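
(Back-of-the-envelope, assuming client-go's default QPS 5 / Burst 10: 2,000 VaultStaticSecrets with refreshAfter: 30s produce up to roughly 67 reconciles per second, and VSO_MAX_CONCURRENT_RECONCILES=100 lets most of them run concurrently, yet every destination-Secret GET and status PUT funnels through a single client capped at 5 requests per second, so the wait queue grows until requests are delayed by the ~75 s seen in the logs.)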


Expected behavior
The operator should reconcile all ~2,000 VaultStaticSecrets without continuous client-side throttling delays.

Environment

  • Kubernetes version: AKS v1.29.7
  • vault-secrets-operator version: 0.7.1


kamalverma1 added the bug label Jan 11, 2025
kamalverma1 (Author) commented:

I also tried the newer Helm chart version 0.9.1, but got similar throttling logs (the waits are shorter, around 5 s, but the client-side limiter is still saturated):

I0112 19:08:15.281587       1 request.go:700] Waited for 4.993444686s due to client-side throttling, not priority and fairness, request: GET:https://172.25.0.1:443/api/v1/namespaces/kverma1/secrets/kverma1-secret-664
I0112 19:08:25.331069       1 request.go:700] Waited for 4.979504963s due to client-side throttling, not priority and fairness, request: GET:https://172.25.0.1:443/api/v1/namespaces/cluster-core/secrets/vso-cc-storage-hmac-key
I0112 19:08:35.381528       1 request.go:700] Waited for 4.992213884s due to client-side throttling, not priority and fairness, request: GET:https://172.25.0.1:443/api/v1/namespaces/cluster-core/secrets/vso-cc-storage-hmac-key
I0112 19:08:45.430980       1 request.go:700] Waited for 4.977830327s due to client-side throttling, not priority and fairness, request: GET:https://172.25.0.1:443/api/v1/namespaces/kverma1/secrets/kverma1-secret-745
I0112 19:08:55.431652       1 request.go:700] Waited for 4.983231168s due to client-side throttling, not priority and fairness, request: GET:https://172.25.0.1:443/api/v1/namespaces/cluster-core/secrets/vso-cc-storage-hmac-key
I0112 19:09:05.481162       1 request.go:700] Waited for 4.992837687s due to client-side throttling, not priority and fairness, request: GET:https://172.25.0.1:443/api/v1/namespaces/cluster-core/secrets/vso-cc-storage-hmac-key
I0112 19:09:15.481659       1 request.go:700] Waited for 4.993701058s due to client-side throttling, not priority and fairness, request: GET:https://172.25.0.1:443/api/v1/namespaces/kverma1/secrets/kverma1-secret-581
I0112 19:09:25.531130       1 request.go:700] Waited for 4.982704111s due to client-side throttling, not priority and fairness, request: GET:https://172.25.0.1:443/api/v1/namespaces/cluster-core/secrets/vso-cc-storage-hmac-key
I0112 19:09:35.531173       1 request.go:700] Waited for 4.993132129s due to client-side throttling, not priority and fairness, request: GET:https://172.25.0.1:443/api/v1/namespaces/cluster-core/secrets/vso-cc-storage-hmac-key
I0112 19:09:45.580751       1 request.go:700] Waited for 4.991770374s due to client-side throttling, not priority and fairness, request: GET:https://172.25.0.1:443/api/v1/namespaces/kverma1/secrets/kverma1-secret-584
I0112 19:09:55.581254       1 request.go:700] Waited for 4.980579992s due to client-side throttling, not priority and fairness, request: GET:https://172.25.0.1:443/api/v1/namespaces/cluster-core/secrets/vso-cc-storage-hmac-key
I0112 19:10:05.630932       1 request.go:700] Waited for 4.990519999s due to client-side throttling, not priority and fairness, request: GET:https://172.25.0.1:443/api/v1/namespaces/cluster-core/secrets/vso-cc-storage-hmac-key
I0112 19:10:15.631384       1 request.go:700] Waited for 4.991168306s due to client-side throttling, not priority and fairness, request: GET:https://172.25.0.1:443/api/v1/namespaces/kverma1/secrets/kverma1-secret-1347
I0112 19:10:25.681032       1 request.go:700] Waited for 4.990248553s due to client-side throttling, not priority and fairness, request: GET:https://172.25.0.1:443/api/v1/namespaces/kverma1/secrets/kverma1-secret-1078

kamalverma1 (Author) commented:

I see there is an existing PR related to this issue here. I am not sure it is the best solution to the throttling problem, but it would be great to see the PR merged and to confirm that it fixes this issue.

sergeyshevch commented:
@kamalverma1 We tested this PR, and together with tuning the new settings it completely solved the issue for us. My recommendation: first set QPS to -1, then watch the metrics to adjust both variables properly.
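
(For reference: "QPS to -1" works because client-go only constructs its token-bucket limiter when the effective QPS is positive; a negative value disables client-side throttling entirely, leaving only server-side API Priority and Fairness. A minimal generic controller-runtime sketch, not VSO's actual configuration surface:

package main

import ctrl "sigs.k8s.io/controller-runtime"

func main() {
	cfg := ctrl.GetConfigOrDie()
	// Negative QPS: client-go skips creating its token-bucket rate
	// limiter, so only server-side API Priority and Fairness applies.
	// Burst is ignored once the limiter is disabled.
	cfg.QPS = -1
	_ = cfg // pass cfg to ctrl.NewManager as usual
}
)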
