This repository has been archived by the owner on May 3, 2022. It is now read-only.
Releases · bookingcom/shipper
v0.6.0: shipper: don't lie about webhook's identity
The webhook was getting a client that had the rolloutblocks controller as its user agent. Even though rollout blocks were indeed what the client was being used for, that's not guaranteed to stay true forever. So let's make sure we give the webhook a client tied to its own identity.
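The idea above can be sketched with plain `net/http` (shipper itself builds on client-go, where a user agent is attached to the `rest.Config`; the helper and component names here are hypothetical, for illustration only):

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// identityTransport stamps every outgoing request with the owning
// component's name as the User-Agent, so server-side logs attribute
// calls to the component that actually made them.
type identityTransport struct {
	userAgent string
	base      http.RoundTripper
}

func (t identityTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	req.Header.Set("User-Agent", t.userAgent)
	return t.base.RoundTrip(req)
}

// newClientWithIdentity builds a client tied to one component's identity,
// instead of borrowing a client created for some other controller.
func newClientWithIdentity(component string) *http.Client {
	return &http.Client{Transport: identityTransport{userAgent: component, base: http.DefaultTransport}}
}

func main() {
	var seen string
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		seen = r.Header.Get("User-Agent")
	}))
	defer srv.Close()

	// A client built for the webhook identifies as "webhook", not as
	// whichever controller's client it happened to reuse.
	resp, err := newClientWithIdentity("webhook").Get(srv.URL)
	if err == nil {
		resp.Body.Close()
	}
	fmt.Println(seen)
}
```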
v0.6.0-beta.0: release controller: move cluster choosing to scheduling
This may or may not make conceptual sense (it does to me, but I could not find out why choosing clusters and scheduling a release were two separate steps in the first place, although after #166[1] I'm fairly convinced this was just a technical artifact), but it sure is convenient: we move all the error handling during the scheduling step into a single chunk of code.

This fixes an issue where errors in ChooseClusters were not reflected in any condition, so the Release object did not change during the sync, did not trigger any events, and the failure was essentially invisible to users.

As a bonus, I restored the actual testing part of this in the unit tests. We were previously just checking that ChooseClusters didn't trigger any updates, without actually checking that it was doing the right thing (choosing clusters).

[1] https://github.com/bookingcom/shipper/pull/166/files#diff-caffe52421149f1f8d77a0e7c749867dR327-R341
v0.6.0-alpha.4: capacity controller: don't retry TargetDeploymentCountError
Whenever the capacity controller can't find the exact number of deployments it expects, it should not keep on retrying: the error cannot resolve unless deployments are created or deleted to satisfy the expectation. Since the controller also listens for deployments being created and deleted, just waiting for those events is enough to trigger a new sync of the CapacityTarget, at which point the controller checks again whether the expectation is satisfied. This should eliminate a lot of spurious retries when a deployment has just been created and the change hasn't propagated to the informers yet, and it will also make the resync loop much faster when shipper has a lot of broken releases active.
v0.6.0-alpha.3: kubernetes: improve shipper's deployments
This brings the Shipper you deploy out of the box closer to the way we deploy it at Booking. It also fixes a tiny bug where the metrics port was not exposed in the `shipper` deployment.
v0.6.0-alpha.2: installation controller: also migrate CanOverride
This was an oversight on my part in #114: by default, CanOverride will always be false, but that's not always what's supposed to happen: if we're migrating an InstallationTarget that belongs to a contender release, CanOverride needs to be set to true. This reintroduces some code that was removed in #114, but it can be safely removed in the next version of shipper, as all objects will have been migrated by then.
v0.6.0-alpha.1: shipper-state-metrics: initialize rolloutblocks lister
This fixes a panic when running the collector:

```
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x116ce51]

goroutine 342 [running]:
main.ShipperStateMetrics.collectRolloutBlocks(0x1610760, 0xc00041e010, 0x1610860, 0xc00041e030, 0x1610820, 0xc00041e050, 0x16107a0, 0xc00041e070, 0x16108e0, 0xc00041e090, ...)
	/shipper/cmd/shipper-state-metrics/collector.go:351 +0x101
main.ShipperStateMetrics.Collect(0x1610760, 0xc00041e010, 0x1610860, 0xc00041e030, 0x1610820, 0xc00041e050, 0x16107a0, 0xc00041e070, 0x16108e0, 0xc00041e090, ...)
	/shipper/cmd/shipper-state-metrics/collector.go:101 +0x194
github.com/prometheus/client_golang/prometheus.(*Registry).Gather.func1()
	/shipper/vendor/github.com/prometheus/client_golang/prometheus/registry.go:434 +0x19d
created by github.com/prometheus/client_golang/prometheus.(*Registry).Gather
	/shipper/vendor/github.com/prometheus/client_golang/prometheus/registry.go:445 +0x571
```
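The shape of this bug is a struct field that was never initialized before use. A miniature sketch, with hypothetical types standing in for the real lister and collector:

```go
package main

import "fmt"

// rolloutBlockLister is a hypothetical miniature of a shared-informer lister.
type rolloutBlockLister struct{ items []string }

func (l *rolloutBlockLister) List() []string { return l.items }

// metrics mirrors the collector's shape: it holds listers it queries on
// every Collect call.
type metrics struct {
	rbLister *rolloutBlockLister
}

func (m metrics) collectRolloutBlocks() int {
	// If rbLister was never initialized, this dereferences a nil pointer
	// and the process dies with SIGSEGV, exactly as in the trace above.
	return len(m.rbLister.List())
}

func main() {
	// broken := metrics{} // rbLister is nil here...
	// broken.collectRolloutBlocks() // ...so this would panic.

	// The fix: initialize the lister before the collector is registered.
	fixed := metrics{rbLister: &rolloutBlockLister{items: []string{"block-a"}}}
	fmt.Println(fixed.collectRolloutBlocks())
}
```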
v0.6.0-alpha.0: Update chart.go (#186)
v0.5.0-beta.0: Chart repo errors are wrapped in Shipper errors fixes #152
This change aims to turn helm library errors into Shipper errors so the responsible caller can tell whether the error should be retried. Signed-off-by: Oleg Sidorov <oleg.sidorov@booking.com>
v0.5.0-alpha.6: Chart repo errors are wrapped in Shipper errors fixes #152
This change aims to turn helm library errors into Shipper errors so the responsible caller can tell whether the error should be retried. Signed-off-by: Oleg Sidorov <oleg.sidorov@booking.com>
v0.5.0: Chart repo errors are wrapped in Shipper errors fixes #152
This change aims to turn helm library errors into Shipper errors so the responsible caller can tell whether the error should be retried. Signed-off-by: Oleg Sidorov <oleg.sidorov@booking.com>