[AGENTCFG-13] Adding a scatter mechanism in the secrets component #34744

Open
rahulkaukuntla wants to merge 2 commits into main

Conversation

rahulkaukuntla
Contributor

@rahulkaukuntla rahulkaukuntla commented Mar 4, 2025

What does this PR do?

We want to add some randomization around refreshing secrets so that an entire fleet of agents doesn't refresh at the same time and overload the secret backend.

Motivation

Describe how you validated your changes

I ran a custom build of the agent that logs the secrets refresh interval, repeated this several times, and verified that the value was always within the range [r.refreshInterval - randDuration, r.refreshInterval + randDuration].
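For readers skimming the thread, here is a minimal, self-contained sketch of the jitter computation described above. The randDuration line mirrors the one in the diff further down; the surrounding program, the variable names, and the 10-minute scatter value are illustrative only, not the PR's code.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

func main() {
	refreshInterval := time.Hour // stands in for secret_refresh_interval
	scatter := 10 * time.Minute  // hypothetical scatter bound, for illustration only

	// Draw a jitter uniformly from [-scatter, +scatter) and apply it to the
	// interval, which lands in the range described above.
	randDuration := time.Duration(rand.Int63n(2*int64(scatter))) - scatter
	fmt.Println("jittered refresh interval:", refreshInterval+randDuration)
}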

Possible Drawbacks / Trade-offs

Additional Notes

@github-actions github-actions bot added the short review (PR is simple enough to be reviewed quickly) and team/agent-configuration labels on Mar 4, 2025
@rahulkaukuntla rahulkaukuntla added the changelog/no-changelog and qa/done (QA done before merge and regressions are covered by tests) labels on Mar 4, 2025
@agent-platform-auto-pr
Contributor

agent-platform-auto-pr bot commented Mar 4, 2025

Uncompressed package size comparison

Comparison with ancestor 369a8dbb39dc6e8601d82c8f43caaaf88d6a0a55

Diff per package
| package | diff | status | size | ancestor | threshold |
|---|---|---|---|---|---|
| datadog-heroku-agent-amd64-deb | 0.05MB | ⚠️ | 440.74MB | 440.69MB | 0.50MB |
| datadog-agent-amd64-deb | 0.04MB | ⚠️ | 815.44MB | 815.40MB | 0.50MB |
| datadog-agent-x86_64-rpm | 0.04MB | ⚠️ | 825.23MB | 825.19MB | 0.50MB |
| datadog-agent-x86_64-suse | 0.04MB | ⚠️ | 825.23MB | 825.19MB | 0.50MB |
| datadog-agent-arm64-deb | 0.03MB | ⚠️ | 806.39MB | 806.36MB | 0.50MB |
| datadog-agent-aarch64-rpm | 0.03MB | ⚠️ | 816.17MB | 816.14MB | 0.50MB |
| datadog-dogstatsd-arm64-deb | 0.00MB | | 37.97MB | 37.97MB | 0.50MB |
| datadog-dogstatsd-amd64-deb | 0.00MB | | 39.43MB | 39.43MB | 0.50MB |
| datadog-dogstatsd-x86_64-rpm | 0.00MB | | 39.51MB | 39.51MB | 0.50MB |
| datadog-dogstatsd-x86_64-suse | 0.00MB | | 39.51MB | 39.51MB | 0.50MB |
| datadog-iot-agent-amd64-deb | 0.00MB | | 62.11MB | 62.11MB | 0.50MB |
| datadog-iot-agent-x86_64-rpm | 0.00MB | | 62.18MB | 62.18MB | 0.50MB |
| datadog-iot-agent-x86_64-suse | 0.00MB | | 62.18MB | 62.18MB | 0.50MB |
| datadog-iot-agent-arm64-deb | 0.00MB | | 59.35MB | 59.35MB | 0.50MB |
| datadog-iot-agent-aarch64-rpm | 0.00MB | | 59.42MB | 59.42MB | 0.50MB |

Decision

⚠️ Warning

@agent-platform-auto-pr
Contributor

agent-platform-auto-pr bot commented Mar 4, 2025

Test changes on VM

Use this command from test-infra-definitions to manually test this PR's changes on a VM:

inv aws.create-vm --pipeline-id=57948976 --os-family=ubuntu

Note: This applies to commit c05169d

@agent-platform-auto-pr
Contributor

agent-platform-auto-pr bot commented Mar 4, 2025

Static quality checks ✅

Please find below the results from static quality gates

Successful checks

Info

| Quality gate | On disk size | On disk size limit | On wire size | On wire size limit |
|---|---|---|---|---|
| static_quality_gate_agent_deb_amd64 | 789.07MiB | 801.8MiB | 192.32MiB | 202.62MiB |
| static_quality_gate_agent_deb_arm64 | 780.49MiB | 793.14MiB | 174.06MiB | 184.51MiB |
| static_quality_gate_agent_rpm_amd64 | 789.14MiB | 801.79MiB | 194.39MiB | 205.03MiB |
| static_quality_gate_agent_rpm_arm64 | 780.42MiB | 793.09MiB | 175.85MiB | 186.44MiB |
| static_quality_gate_agent_suse_amd64 | 789.04MiB | 801.81MiB | 194.39MiB | 205.03MiB |
| static_quality_gate_agent_suse_arm64 | 780.51MiB | 793.14MiB | 175.85MiB | 186.44MiB |
| static_quality_gate_dogstatsd_deb_amd64 | 37.68MiB | 47.67MiB | 9.78MiB | 19.78MiB |
| static_quality_gate_dogstatsd_deb_arm64 | 36.29MiB | 46.27MiB | 8.49MiB | 18.49MiB |
| static_quality_gate_dogstatsd_rpm_amd64 | 37.68MiB | 47.67MiB | 9.79MiB | 19.79MiB |
| static_quality_gate_dogstatsd_suse_amd64 | 37.68MiB | 47.67MiB | 9.79MiB | 19.79MiB |
| static_quality_gate_iot_agent_deb_amd64 | 59.31MiB | 69.0MiB | 14.9MiB | 24.8MiB |
| static_quality_gate_iot_agent_deb_arm64 | 56.67MiB | 66.4MiB | 12.87MiB | 22.8MiB |
| static_quality_gate_iot_agent_rpm_amd64 | 59.31MiB | 69.0MiB | 14.92MiB | 24.8MiB |
| static_quality_gate_iot_agent_rpm_arm64 | 56.68MiB | 66.4MiB | 12.87MiB | 22.8MiB |
| static_quality_gate_iot_agent_suse_amd64 | 59.31MiB | 69.0MiB | 14.92MiB | 24.8MiB |
| static_quality_gate_docker_agent_amd64 | 873.81MiB | 886.12MiB | 293.93MiB | 304.21MiB |
| static_quality_gate_docker_agent_arm64 | 888.45MiB | 900.79MiB | 280.19MiB | 290.47MiB |
| static_quality_gate_docker_agent_jmx_amd64 | 1.05GiB | 1.06GiB | 369.06MiB | 379.33MiB |
| static_quality_gate_docker_agent_jmx_arm64 | 1.05GiB | 1.06GiB | 351.28MiB | 361.55MiB |
| static_quality_gate_docker_dogstatsd_amd64 | 45.83MiB | 55.78MiB | 17.29MiB | 27.28MiB |
| static_quality_gate_docker_dogstatsd_arm64 | 44.48MiB | 54.45MiB | 16.16MiB | 26.16MiB |
| static_quality_gate_docker_cluster_agent_amd64 | 265.01MiB | 274.78MiB | 106.37MiB | 116.28MiB |
| static_quality_gate_docker_cluster_agent_arm64 | 280.98MiB | 290.82MiB | 101.2MiB | 111.12MiB |


cit-pr-commenter bot commented Mar 4, 2025

Regression Detector

Regression Detector Results

Metrics dashboard
Target profiles
Run ID: 8082afc8-f121-46d4-9c7b-6f284e539e13

Baseline: 369a8db
Comparison: c05169d
Diff

Optimization Goals: ✅ No significant changes detected

Fine details of change detection per experiment

| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
|---|---|---|---|---|---|---|
| | quality_gate_logs | % cpu utilization | +2.71 | [-0.31, +5.72] | 1 | Logs |
| | uds_dogstatsd_to_api_cpu | % cpu utilization | +1.36 | [+0.46, +2.26] | 1 | Logs |
| | quality_gate_idle | memory utilization | +0.60 | [+0.57, +0.63] | 1 | Logs, bounds checks dashboard |
| | tcp_syslog_to_blackhole | ingress throughput | +0.43 | [+0.37, +0.50] | 1 | Logs |
| | quality_gate_idle_all_features | memory utilization | +0.28 | [+0.22, +0.35] | 1 | Logs, bounds checks dashboard |
| | file_to_blackhole_1000ms_latency | egress throughput | +0.22 | [-0.55, +0.99] | 1 | Logs |
| | file_to_blackhole_1000ms_latency_linear_load | egress throughput | +0.20 | [-0.27, +0.66] | 1 | Logs |
| | file_tree | memory utilization | +0.10 | [+0.05, +0.16] | 1 | Logs |
| | file_to_blackhole_0ms_latency_http1 | egress throughput | +0.05 | [-0.76, +0.86] | 1 | Logs |
| | file_to_blackhole_0ms_latency | egress throughput | +0.04 | [-0.74, +0.82] | 1 | Logs |
| | file_to_blackhole_300ms_latency | egress throughput | +0.02 | [-0.62, +0.65] | 1 | Logs |
| | uds_dogstatsd_to_api | ingress throughput | +0.01 | [-0.27, +0.29] | 1 | Logs |
| | tcp_dd_logs_filter_exclude | ingress throughput | +0.00 | [-0.02, +0.02] | 1 | Logs |
| | file_to_blackhole_0ms_latency_http2 | egress throughput | -0.00 | [-0.72, +0.72] | 1 | Logs |
| | file_to_blackhole_100ms_latency | egress throughput | -0.03 | [-0.70, +0.65] | 1 | Logs |
| | file_to_blackhole_500ms_latency | egress throughput | -0.10 | [-0.89, +0.68] | 1 | Logs |

Bounds Checks: ✅ Passed

| perf | experiment | bounds_check_name | replicates_passed | links |
|---|---|---|---|---|
| | file_to_blackhole_0ms_latency | lost_bytes | 10/10 | |
| | file_to_blackhole_0ms_latency | memory_usage | 10/10 | |
| | file_to_blackhole_0ms_latency_http1 | lost_bytes | 10/10 | |
| | file_to_blackhole_0ms_latency_http1 | memory_usage | 10/10 | |
| | file_to_blackhole_0ms_latency_http2 | lost_bytes | 10/10 | |
| | file_to_blackhole_0ms_latency_http2 | memory_usage | 10/10 | |
| | file_to_blackhole_1000ms_latency | memory_usage | 10/10 | |
| | file_to_blackhole_1000ms_latency_linear_load | memory_usage | 10/10 | |
| | file_to_blackhole_100ms_latency | lost_bytes | 10/10 | |
| | file_to_blackhole_100ms_latency | memory_usage | 10/10 | |
| | file_to_blackhole_300ms_latency | lost_bytes | 10/10 | |
| | file_to_blackhole_300ms_latency | memory_usage | 10/10 | |
| | file_to_blackhole_500ms_latency | lost_bytes | 10/10 | |
| | file_to_blackhole_500ms_latency | memory_usage | 10/10 | |
| | quality_gate_idle | intake_connections | 10/10 | bounds checks dashboard |
| | quality_gate_idle | memory_usage | 10/10 | bounds checks dashboard |
| | quality_gate_idle_all_features | intake_connections | 10/10 | bounds checks dashboard |
| | quality_gate_idle_all_features | memory_usage | 10/10 | bounds checks dashboard |
| | quality_gate_logs | intake_connections | 10/10 | |
| | quality_gate_logs | lost_bytes | 10/10 | |
| | quality_gate_logs | memory_usage | 10/10 | |

Explanation

Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%

Performance changes are noted in the perf column of each table:

  • ✅ = significantly better comparison variant performance
  • ❌ = significantly worse comparison variant performance
  • ➖ = no significant change in performance

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".

For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:

  1. Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.

  2. Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.

  3. Its configuration does not mark it "erratic".

CI Pass/Fail Decision

Passed. All Quality Gates passed.

  • quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_idle_all_features, bounds check intake_connections: 10/10 replicas passed. Gate passed.
  • quality_gate_idle, bounds check intake_connections: 10/10 replicas passed. Gate passed.
  • quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.

@rahulkaukuntla rahulkaukuntla marked this pull request as ready for review March 4, 2025 19:40
@rahulkaukuntla rahulkaukuntla requested a review from a team as a code owner March 4, 2025 19:40
@rahulkaukuntla rahulkaukuntla requested a review from hush-hush March 4, 2025 19:40
Comment on lines 247 to 249
// Generate a random value within the range [-r.refreshIntervalScatter, r.refreshIntervalScatter]
randDuration := time.Duration(rand.Int63n(2*int64(r.refreshIntervalScatter))) - r.refreshIntervalScatter
r.ticker = time.NewTicker(r.refreshInterval + randDuration)
Member

I'm not sure this works, since we only want to change the first tick of the ticker. Here, if we configure an interval of 1 hour and the randDuration adds 30 min, we would refresh at startTime+1h30, startTime+3h, startTime+4h30, and so on, startTime being the moment the Agent started.

What we actually want is for the first tick to happen between startTime and startTime+randDuration, and then every hour after that.
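A minimal, self-contained sketch of that behavior, using hypothetical names (refresh, refreshInterval) and a toy 2-second interval rather than the PR's actual code: a one-shot timer scatters only the first refresh, then a regular ticker takes over.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

func refresh() { fmt.Println("refresh at", time.Now().Format("15:04:05")) }

func main() {
	refreshInterval := 2 * time.Second // illustrative stand-in for secret_refresh_interval

	// Scatter only the first refresh: fire it once at a random offset within
	// the interval, then fall back to the regular cadence.
	firstOffset := time.Duration(rand.Int63n(int64(refreshInterval)))
	timer := time.NewTimer(firstOffset)

	go func() {
		<-timer.C
		refresh() // scattered first refresh

		ticker := time.NewTicker(refreshInterval)
		defer ticker.Stop()
		for range ticker.C {
			refresh() // steady-state refreshes, every refreshInterval
		}
	}()

	time.Sleep(7 * time.Second)
}

The PR appears to get the same effect with a single ticker by calling r.ticker.Reset(r.refreshInterval) after the first tick, as shown in a later hunk.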

@@ -241,7 +244,9 @@ func (r *secretResolver) startRefreshRoutine() {
if r.ticker != nil || r.refreshInterval == 0 {
return
}
r.ticker = time.NewTicker(r.refreshInterval)
// Generate a random value within the range [-r.refreshIntervalScatter, r.refreshIntervalScatter]
randDuration := time.Duration(rand.Int63n(2*int64(r.refreshIntervalScatter))) - r.refreshIntervalScatter
Member

Instead of using a new duration secret_refresh_interval_scatter, we could leverage the existing secret_refresh_interval. The idea would be to have the first refresh happen between the Agent start time and secret_refresh_interval, and then every secret_refresh_interval after that.

That way, if I configure a secret_refresh_interval of 1h, the first refresh might happen at T+32min, then at T+1h32, T+2h32, ...

We could enable this behavior by default, with a setting to disable it (maybe secret_refresh_scatter: true/false).

@@ -216,6 +218,7 @@ func (r *secretResolver) Configure(params secrets.ConfigParams) {
r.responseMaxSize = SecretBackendOutputMaxSizeDefault
}
r.refreshInterval = time.Duration(params.RefreshInterval) * time.Second
r.refreshIntervalScatter = time.Duration(params.RefreshIntervalScatter) * time.Second
Member

It would be nice to add info about the refresh configuration/behavior to the information returned by the API (see the endpoint registered here).

Member

@hush-hush hush-hush left a comment

We also need to add tests for this new feature.
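For illustration, a test along these lines (the names are hypothetical; this is not the PR's test) shows how the benbjohnson/clock mock that this PR pulls in can drive a ticker deterministically:

package scatter_test

import (
	"testing"
	"time"

	"github.com/benbjohnson/clock"
)

// Advancing the mock clock past one period fires the tick without real waiting.
func TestMockClockTicker(t *testing.T) {
	mock := clock.NewMock()
	ticker := mock.Ticker(30 * time.Minute)
	defer ticker.Stop()

	mock.Add(30 * time.Minute) // move virtual time forward by one period

	select {
	case <-ticker.C:
		// tick observed, as expected
	case <-time.After(100 * time.Millisecond):
		t.Fatal("expected a tick after advancing the mock clock")
	}
}

mock.Add advances virtual time synchronously, so the test never has to sleep for real wall-clock intervals.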

@github-actions github-actions bot added the medium review (PR review might take time) label and removed the short review (PR is simple enough to be reviewed quickly) label on Mar 5, 2025
@github-actions github-actions bot added the long review (PR is complex, plan time to review it) label and removed the medium review (PR review might take time) label on Mar 6, 2025

Go Package Import Differences

Baseline: 369a8db
Comparison: c05169d

| binary | os | arch | change |
|---|---|---|---|
| trace-agent | linux | amd64 | +1, -0: +github.com/benbjohnson/clock |
| trace-agent | linux | arm64 | +1, -0: +github.com/benbjohnson/clock |
| trace-agent | windows | amd64 | +1, -0: +github.com/benbjohnson/clock |
| trace-agent | darwin | amd64 | +1, -0: +github.com/benbjohnson/clock |
| trace-agent | darwin | arm64 | +1, -0: +github.com/benbjohnson/clock |
| heroku-trace-agent | linux | amd64 | +1, -0: +github.com/benbjohnson/clock |

Member

@hush-hush hush-hush left a comment

Added a few comments, but looks great overall. Thanks for the tests 👌

@@ -356,6 +356,7 @@ func InitConfig(config pkgconfigmodel.Setup) {
config.BindEnvAndSetDefault("secret_backend_skip_checks", false)
config.BindEnvAndSetDefault("secret_backend_remove_trailing_line_break", false)
config.BindEnvAndSetDefault("secret_refresh_interval", 0)
config.BindEnvAndSetDefault("secret_refresh_interval_scatter", true)
Member

Suggested change
config.BindEnvAndSetDefault("secret_refresh_interval_scatter", true)
config.BindEnvAndSetDefault("secret_refresh_scatter", true)

return &secretResolver{
cache: make(map[string]string),
origin: make(handleToContext),
enabled: true,
tlmSecretBackendElapsed: telemetry.NewGauge("secret_backend", "elapsed_ms", []string{"command", "exit_code"}, "Elapsed time of secret backend invocation"),
tlmSecretUnmarshalError: telemetry.NewCounter("secret_backend", "unmarshal_errors_count", []string{}, "Count of errors when unmarshalling the output of the secret binary"),
tlmSecretResolveError: telemetry.NewCounter("secret_backend", "resolve_errors_count", []string{"error_kind", "handle"}, "Count of errors when resolving a secret"),
clk: clk,
Member

When possible, it's better not to have exceptions or dedicated conditions in the code that exist only for the tests.

Here, for example, instead of requiring an extra parameter from the caller that will always be nil outside our tests, you can create a variable in the file that the tests change when needed.

For example, in this file:

// At the top of this file
var newClock = clock.New

[...]

// The current line becomes
clk: newClock(),

In the test:

func TestSomething(t *testing.T) {
	t.Cleanup(func() { newClock = clock.New })
	newClock = func() clock.Clock { return clock.NewMock() }

	newEnabledSecretResolver(tel)
[...]
}

This will allow you to remove the if on line 251.

go func() {
<-r.ticker.C
if _, err := r.Refresh(); err != nil {
log.Debug("First refresh error", "error", err)
Member

Info level would be better here, and I don't think we need to explicitly state that it's the first refresh. Also, please use log.Infof to offer better formatting.

for {
<-r.ticker.C
if _, err := r.Refresh(); err != nil {
log.Info(err)
log.Debug("Periodic refresh error", "error", err)
Member

Same here.

if _, err := r.Refresh(); err != nil {
log.Debug("First refresh error", "error", err)
}
r.ticker.Reset(r.refreshInterval)
Member

Adding a quick comment explaining what we do here would be great for future devs.

@@ -661,4 +684,9 @@ func (r *secretResolver) GetDebugInfo(w io.Writer) {
if err != nil {
fmt.Fprintf(w, "error rendering secret info: %s", err)
}

if r.refreshIntervalScatter {
fmt.Fprintf(w, "The first secret refresh will happen at a random time between the starting of the agent and the set refresh interval")
Member

Let's give more precise information to users. It might also be useful if we need to troubleshoot this feature (tracking the time of the first refresh will be useful).

Suggested change
fmt.Fprintf(w, "The first secret refresh will happen at a random time between the starting of the agent and the set refresh interval")
fmt.Fprintf(w, "'secret_refresh interval' enabled: the first refresh will happen at '<insert time here>'' and then every %s", <first refresh>, r.refreshInterval)
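As a compiling sketch of the kind of message this suggestion is driving at (the setting name, variable names, and values below are assumptions for illustration, not the PR's code):

package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical values standing in for r.refreshInterval and the scattered
	// first-refresh time chosen at startup; the setting name is also assumed.
	refreshInterval := time.Hour
	firstRefresh := time.Now().Add(23 * time.Minute)

	fmt.Fprintf(os.Stdout,
		"'secret_refresh_scatter' enabled: the first refresh will happen at %s and then every %s\n",
		firstRefresh.Format(time.RFC3339), refreshInterval)
}

In the resolver itself, the writer would be the io.Writer passed to GetDebugInfo.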

func TestStartRefreshRoutineWithScatter(t *testing.T) {
mockClock := clock.NewMock()

tel := fxutil.Test[telemetry.Component](t, nooptelemetry.Module())
Member

Let's move this line to line 831; that way the t object used is the one of each subtest, and the potential cleanup from the telemetry component can be triggered for each.

Same for mockClock: it would be better not to reuse the same clock between tests, to avoid side effects.

Comment on lines +832 to +836
originalAllowListEnabled := allowlistEnabled
allowlistEnabled = false
defer func() {
allowlistEnabled = originalAllowListEnabled
}()
Member

Suggested change
originalAllowListEnabled := allowlistEnabled
allowlistEnabled = false
defer func() {
allowlistEnabled = originalAllowListEnabled
}()
defer func(resetValue bool) {
allowlistEnabled = resetValue
}(allowlistEnabled)
allowlistEnabled = false

Comment on lines +874 to +876
if resolver.ticker == nil {
t.Fatal("Ticker was not created")
}
Member

Suggested change
if resolver.ticker == nil {
t.Fatal("Ticker was not created")
}
require.NotNil(t, resolver.ticker)

Comment on lines +879 to +911
// In scattered case, first we advance 1/4 of the refresh interval
mockClock.Add(resolver.refreshInterval / 4)

refreshHappened := false
select {
case <-refreshCalledChan:
refreshHappened = true
case <-time.After(100 * time.Millisecond):
// No refresh yet
}

if !refreshHappened {
// If no refresh yet, advance to 3/4 of the refresh interval
mockClock.Add(resolver.refreshInterval / 2)

select {
case <-refreshCalledChan:
refreshHappened = true
case <-time.After(100 * time.Millisecond):
// Still no refresh
}
}

if !refreshHappened {
// If still no refresh, advance to the full refresh interval
mockClock.Add(resolver.refreshInterval / 4)

select {
case <-refreshCalledChan:
case <-time.After(1 * time.Second):
t.Fatal("First refresh didn't occur even after full interval")
}
}
Member

Since we hard-code the seed, don't we know in advance when the refresh will happen?
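For reference, a small sketch of the point being made: with a fixed seed the drawn offset is deterministic, so the expected first-refresh time can be computed up front instead of probing several points in time (the seed and names here are illustrative):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

func main() {
	refreshInterval := time.Hour

	// A fixed seed makes the drawn offset reproducible; 42 is arbitrary.
	rng := rand.New(rand.NewSource(42))
	offset := time.Duration(rng.Int63n(int64(refreshInterval)))
	fmt.Println("deterministic first-refresh offset:", offset)
}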

Labels
  • changelog/no-changelog
  • long review (PR is complex, plan time to review it)
  • qa/done (QA done before merge and regressions are covered by tests)
  • team/agent-configuration