[AGENTCFG-13] Adding a scatter mechanism in the secrets component #34744
base: main
Conversation
Uncompressed package size comparison
Comparison with ancestor, diff per package, and decision (details collapsed)
Test changes on VM
Use this command from test-infra-definitions to manually test this PR's changes on a VM: inv aws.create-vm --pipeline-id=57948976 --os-family=ubuntu
Note: This applies to commit c05169d
Static quality checks ✅
Please find below the results from static quality gates. Successful checks (details collapsed).
Regression Detector
Regression Detector Results (Metrics dashboard)
Baseline: 369a8db
Optimization Goals: ✅ No significant changes detected
perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
---|---|---|---|---|---|---|
➖ | quality_gate_logs | % cpu utilization | +2.71 | [-0.31, +5.72] | 1 | Logs |
➖ | uds_dogstatsd_to_api_cpu | % cpu utilization | +1.36 | [+0.46, +2.26] | 1 | Logs |
➖ | quality_gate_idle | memory utilization | +0.60 | [+0.57, +0.63] | 1 | Logs bounds checks dashboard |
➖ | tcp_syslog_to_blackhole | ingress throughput | +0.43 | [+0.37, +0.50] | 1 | Logs |
➖ | quality_gate_idle_all_features | memory utilization | +0.28 | [+0.22, +0.35] | 1 | Logs bounds checks dashboard |
➖ | file_to_blackhole_1000ms_latency | egress throughput | +0.22 | [-0.55, +0.99] | 1 | Logs |
➖ | file_to_blackhole_1000ms_latency_linear_load | egress throughput | +0.20 | [-0.27, +0.66] | 1 | Logs |
➖ | file_tree | memory utilization | +0.10 | [+0.05, +0.16] | 1 | Logs |
➖ | file_to_blackhole_0ms_latency_http1 | egress throughput | +0.05 | [-0.76, +0.86] | 1 | Logs |
➖ | file_to_blackhole_0ms_latency | egress throughput | +0.04 | [-0.74, +0.82] | 1 | Logs |
➖ | file_to_blackhole_300ms_latency | egress throughput | +0.02 | [-0.62, +0.65] | 1 | Logs |
➖ | uds_dogstatsd_to_api | ingress throughput | +0.01 | [-0.27, +0.29] | 1 | Logs |
➖ | tcp_dd_logs_filter_exclude | ingress throughput | +0.00 | [-0.02, +0.02] | 1 | Logs |
➖ | file_to_blackhole_0ms_latency_http2 | egress throughput | -0.00 | [-0.72, +0.72] | 1 | Logs |
➖ | file_to_blackhole_100ms_latency | egress throughput | -0.03 | [-0.70, +0.65] | 1 | Logs |
➖ | file_to_blackhole_500ms_latency | egress throughput | -0.10 | [-0.89, +0.68] | 1 | Logs |
Bounds Checks: ✅ Passed
perf | experiment | bounds_check_name | replicates_passed | links |
---|---|---|---|---|
✅ | file_to_blackhole_0ms_latency | lost_bytes | 10/10 | |
✅ | file_to_blackhole_0ms_latency | memory_usage | 10/10 | |
✅ | file_to_blackhole_0ms_latency_http1 | lost_bytes | 10/10 | |
✅ | file_to_blackhole_0ms_latency_http1 | memory_usage | 10/10 | |
✅ | file_to_blackhole_0ms_latency_http2 | lost_bytes | 10/10 | |
✅ | file_to_blackhole_0ms_latency_http2 | memory_usage | 10/10 | |
✅ | file_to_blackhole_1000ms_latency | memory_usage | 10/10 | |
✅ | file_to_blackhole_1000ms_latency_linear_load | memory_usage | 10/10 | |
✅ | file_to_blackhole_100ms_latency | lost_bytes | 10/10 | |
✅ | file_to_blackhole_100ms_latency | memory_usage | 10/10 | |
✅ | file_to_blackhole_300ms_latency | lost_bytes | 10/10 | |
✅ | file_to_blackhole_300ms_latency | memory_usage | 10/10 | |
✅ | file_to_blackhole_500ms_latency | lost_bytes | 10/10 | |
✅ | file_to_blackhole_500ms_latency | memory_usage | 10/10 | |
✅ | quality_gate_idle | intake_connections | 10/10 | bounds checks dashboard |
✅ | quality_gate_idle | memory_usage | 10/10 | bounds checks dashboard |
✅ | quality_gate_idle_all_features | intake_connections | 10/10 | bounds checks dashboard |
✅ | quality_gate_idle_all_features | memory_usage | 10/10 | bounds checks dashboard |
✅ | quality_gate_logs | intake_connections | 10/10 | |
✅ | quality_gate_logs | lost_bytes | 10/10 | |
✅ | quality_gate_logs | memory_usage | 10/10 | |
Explanation
Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
CI Pass/Fail Decision
✅ Passed. All Quality Gates passed.
- quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_idle_all_features, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check intake_connections: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
// Generate a random value within the range [-r.refreshIntervalScatter, r.refreshIntervalScatter]
randDuration := time.Duration(rand.Int63n(2*int64(r.refreshIntervalScatter))) - r.refreshIntervalScatter
r.ticker = time.NewTicker(r.refreshInterval + randDuration)
I'm not sure this works, since we only want to change the first tick of the ticker. Here, if we configure an interval of 1 hour and randDuration adds 30 min, we would refresh at startTime+1h30, startTime+3h, startTime+4h30, ..., startTime being the moment the Agent started.
What we actually want is for the first tick to happen between startTime and startTime+randDuration, and then every hour after that.
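A minimal standalone sketch of that pattern (the helper name, error handling, and demo in main are illustrative, not the PR's code; it assumes interval and scatter are both > 0): fire the first tick after a random scattered delay, then reset the ticker to the regular interval.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// scatterRefresh runs refresh once after a random delay in (0, scatter],
// then on every interval tick after that.
func scatterRefresh(interval, scatter time.Duration, refresh func() error) *time.Ticker {
	firstDelay := time.Duration(rand.Int63n(int64(scatter))) + 1
	ticker := time.NewTicker(firstDelay)
	go func() {
		<-ticker.C
		if err := refresh(); err != nil {
			fmt.Println("first refresh error:", err)
		}
		// After the scattered first tick, fall back to the configured interval.
		ticker.Reset(interval)
		for range ticker.C {
			if err := refresh(); err != nil {
				fmt.Println("refresh error:", err)
			}
		}
	}()
	return ticker
}

func main() {
	// Short durations so the demo shows a few refreshes before exiting.
	t := scatterRefresh(200*time.Millisecond, 100*time.Millisecond, func() error {
		fmt.Println("refreshed")
		return nil
	})
	defer t.Stop()
	time.Sleep(time.Second)
}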
@@ -241,7 +244,9 @@ func (r *secretResolver) startRefreshRoutine() {
if r.ticker != nil || r.refreshInterval == 0 {
return
}
r.ticker = time.NewTicker(r.refreshInterval)
// Generate a random value within the range [-r.refreshIntervalScatter, r.refreshIntervalScatter]
randDuration := time.Duration(rand.Int63n(2*int64(r.refreshIntervalScatter))) - r.refreshIntervalScatter
Instead of using a new duration secret_refresh_interval_scatter, we could leverage the existing secret_refresh_interval. The idea would be to have the first refresh happen between the Agent start time and secret_refresh_interval, and then every secret_refresh_interval after that.
That way, if I configure a secret_refresh_interval of 1h, the first refresh might happen at T+32min and then T+1h32, T+2h32, ...
We could enable this behavior by default, with a setting to disable it (maybe secret_refresh_scatter: true/false).
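A rough sketch of that alternative, reusing the math/rand and time imports from the sketch above (the function and the secret_refresh_scatter-style flag are hypothetical names, not the PR's code): draw the first delay from the existing interval when scatter is enabled, then use the full interval afterwards.

// firstRefreshDelay returns the delay before the first secret refresh.
// scatterEnabled stands in for the proposed secret_refresh_scatter setting.
func firstRefreshDelay(interval time.Duration, scatterEnabled bool) time.Duration {
	if !scatterEnabled || interval <= 0 {
		return interval
	}
	// Random point within (0, interval]; subsequent refreshes use the full interval.
	return time.Duration(rand.Int63n(int64(interval))) + 1
}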
@@ -216,6 +218,7 @@ func (r *secretResolver) Configure(params secrets.ConfigParams) {
r.responseMaxSize = SecretBackendOutputMaxSizeDefault
}
r.refreshInterval = time.Duration(params.RefreshInterval) * time.Second
r.refreshIntervalScatter = time.Duration(params.RefreshIntervalScatter) * time.Second
It would be nice to add info about the refresh configuration/behavior to the information returned by the API (see the endpoint registered here).
We also need to add tests for this new feature.
Go Package Import Differences
Baseline: 369a8db
Added a few comments, but looks great overall. Thanks for the tests 👌
@@ -356,6 +356,7 @@ func InitConfig(config pkgconfigmodel.Setup) {
config.BindEnvAndSetDefault("secret_backend_skip_checks", false)
config.BindEnvAndSetDefault("secret_backend_remove_trailing_line_break", false)
config.BindEnvAndSetDefault("secret_refresh_interval", 0)
config.BindEnvAndSetDefault("secret_refresh_interval_scatter", true)
config.BindEnvAndSetDefault("secret_refresh_interval_scatter", true) | |
config.BindEnvAndSetDefault("secret_refresh_scatter", true) |
return &secretResolver{
cache: make(map[string]string),
origin: make(handleToContext),
enabled: true,
tlmSecretBackendElapsed: telemetry.NewGauge("secret_backend", "elapsed_ms", []string{"command", "exit_code"}, "Elapsed time of secret backend invocation"),
tlmSecretUnmarshalError: telemetry.NewCounter("secret_backend", "unmarshal_errors_count", []string{}, "Count of errors when unmarshalling the output of the secret binary"),
tlmSecretResolveError: telemetry.NewCounter("secret_backend", "resolve_errors_count", []string{"error_kind", "handle"}, "Count of errors when resolving a secret"),
clk: clk,
When possible, it's better not to have exceptions or dedicated conditions in the code that exist only for the tests.
Here, for example, instead of requiring an extra parameter from the caller that will always be nil outside our tests, you can create a variable in the file that the tests change when needed.
For example, in this file:
// At the top of this file
var newClock = clock.New
[...]
// The current line becomes
clk: newClock(),
In the test:
func TestSomething(t *testing.T) {
	t.Cleanup(func() { newClock = clock.New })
	newClock = func() clock.Clock { return clock.NewMock() }
	newEnabledSecretResolver(tel)
	[...]
}
This will allow you to remove the if on line 251.
go func() {
<-r.ticker.C
if _, err := r.Refresh(); err != nil {
log.Debug("First refresh error", "error", err)
Info level would be better here, and I don't think we need to explicitly state that it's the first refresh. Also, please use log.Infof to get better formatting.
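For example, something along these lines (the message text is illustrative, not prescribed):
if _, err := r.Refresh(); err != nil {
	log.Infof("error refreshing secrets: %v", err)
}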
for {
<-r.ticker.C
if _, err := r.Refresh(); err != nil {
log.Info(err)
log.Debug("Periodic refresh error", "error", err)
Same here.
if _, err := r.Refresh(); err != nil {
log.Debug("First refresh error", "error", err)
}
r.ticker.Reset(r.refreshInterval)
Adding a quick comment explaining what we do here would be great for future devs.
@@ -661,4 +684,9 @@ func (r *secretResolver) GetDebugInfo(w io.Writer) {
if err != nil {
fmt.Fprintf(w, "error rendering secret info: %s", err)
}

if r.refreshIntervalScatter {
fmt.Fprintf(w, "The first secret refresh will happen at a random time between the starting of the agent and the set refresh interval")
Let's give more precise information to users. It might also be useful if we need to troubleshoot this feature (tracking the time of the first refresh will be useful).
Suggested change:
- fmt.Fprintf(w, "The first secret refresh will happen at a random time between the starting of the agent and the set refresh interval")
+ fmt.Fprintf(w, "'secret_refresh_interval_scatter' enabled: the first refresh will happen at '%s' and then every %s", <first refresh>, r.refreshInterval)
func TestStartRefreshRoutineWithScatter(t *testing.T) {
mockClock := clock.NewMock()

tel := fxutil.Test[telemetry.Component](t, nooptelemetry.Module())
Let's move this line to line 831; that way, the t object used is the one of each sub-test, and the potential cleanup from the telemetry component can be triggered for each one.
Same for mockClock: it would be better not to reuse the same clock between tests, to avoid side effects.
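A sketch of the suggested shape (the sub-test names and bodies here are placeholders, not the PR's test code):
for _, name := range []string{"refresh before interval", "refresh at interval"} {
	t.Run(name, func(t *testing.T) {
		// Per sub-test setup: the telemetry component's cleanup is registered on this
		// sub-test's t, and each case gets its own mock clock (no shared state).
		tel := fxutil.Test[telemetry.Component](t, nooptelemetry.Module())
		mockClock := clock.NewMock()
		_ = tel       // build the resolver from tel here
		_ = mockClock // drive the ticker via mockClock here
	})
}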
originalAllowListEnabled := allowlistEnabled
allowlistEnabled = false
defer func() {
allowlistEnabled = originalAllowListEnabled
}()
Suggested change:
- originalAllowListEnabled := allowlistEnabled
- allowlistEnabled = false
- defer func() {
- allowlistEnabled = originalAllowListEnabled
- }()
+ defer func(resetValue bool) {
+ allowlistEnabled = resetValue
+ }(allowlistEnabled)
+ allowlistEnabled = false
if resolver.ticker == nil {
t.Fatal("Ticker was not created")
}
Suggested change:
- if resolver.ticker == nil {
- t.Fatal("Ticker was not created")
- }
+ require.NotNil(t, resolver.ticker)
// In scattered case, first we advance 1/4 of the refresh interval
mockClock.Add(resolver.refreshInterval / 4)

refreshHappened := false
select {
case <-refreshCalledChan:
refreshHappened = true
case <-time.After(100 * time.Millisecond):
// No refresh yet
}

if !refreshHappened {
// If no refresh yet, advance to 3/4 of the refresh interval
mockClock.Add(resolver.refreshInterval / 2)

select {
case <-refreshCalledChan:
refreshHappened = true
case <-time.After(100 * time.Millisecond):
// Still no refresh
}
}

if !refreshHappened {
// If still no refresh, advance to the full refresh interval
mockClock.Add(resolver.refreshInterval / 4)

select {
case <-refreshCalledChan:
case <-time.After(1 * time.Second):
t.Fatal("First refresh didn't occur even after full interval")
}
}
Since we hard-code the seed, don't we know in advance when the refresh will happen?
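For illustration (standalone snippet, seed value arbitrary): with a fixed seed, math/rand is deterministic, so the scattered delay could be computed up front rather than probed at several points in time.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

func main() {
	interval := time.Hour
	// Using the same seed as the code under test would make the delay reproducible.
	rng := rand.New(rand.NewSource(42))
	expectedDelay := time.Duration(rng.Int63n(int64(interval)))
	fmt.Println("first refresh expected after", expectedDelay)
}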
What does this PR do?
We want to add some randomization around refreshing secrets so that an entire fleet of Agents doesn't refresh at the same time and overload the secret backend.
Motivation
Describe how you validated your changes
I ran a custom version of the agent that logs the value of the secrets refresh interval a couple of times and verified that the value was within the range [r.refreshInterval - randDuration, r.refreshInterval + randDuration].
Possible Drawbacks / Trade-offs
Additional Notes