
feat: kubernetes Server Side Apply Proposal #80

Merged: 17 commits from the k8s-server-side-apply branch into strimzi:main on Aug 10, 2023

Conversation

@jfuller95 (Contributor)

This purposely does not go into implementation details, but as we understand it, it should be a relatively simple modification at the point where the current code sends the "apply" request to the Kubernetes API.

@jfuller95 force-pushed the k8s-server-side-apply branch from 26a2f1e to 5010fcf on May 18, 2023 15:09
@tombentley (Member)

@jfuller95 thanks for the proposal! I see you opened this in draft. Do you want any review now, or should we wait until you mark it as ready?

@scholzj (Member) left a comment

Thanks for the proposal. This is something that has come up multiple times in the past, so it would be great to have it. But I think this proposal is a bit too simple. I'm missing the following parts:

  • A more detailed explanation of how the implementation will look in our code base: what methods will be used, how they will be used, what needs to be changed, etc. I do not think this is as simple as you suggest.
  • An explanation of how it will deal with conflicts when someone else owns a field which Strimzi now needs to own.

@jfuller95 (Contributor, Author)

> @jfuller95 thanks for the proposal! I see you opened this in draft. Do you want any review now, or should we wait until you mark it as ready?

I was planning on letting my work team review it first before opening it up to the "Ready" state, but feel free to comment on it.

I shall be modifying it today based on the current suggestions and attempting to piece together Java for a more in-depth proposal, as requested 😄

@jfuller95 marked this pull request as ready for review on May 22, 2023 10:39
@scholzj (Member)

scholzj commented May 24, 2023

@jfuller95 So, is this now ready for review?

@jfuller95 (Contributor, Author)

Yes please

@scholzj (Member) left a comment

I left some comments. Some things it would be good to mention:

  • What impact does it have on supported Kubernetes versions? Are all the required features supported and enabled by default in Kube 1.21+? (I think they are, but it might be good to double-check and confirm in the proposal.)
  • Does this work against the Mock Kubernetes server used in the tests? If not, it would need to be addressed, and that should probably be covered in the proposal as well.
  • Do you have a working prototype of this? (Just asking out of curiosity - it is not really required for the proposal itself.)

Comment on lines 62 to 73
The [`ResourceDiff`](https://github.com/strimzi/strimzi-kafka-operator/blob/c3522cf4b17004004a676854d37ba01bb9a44800/operator-common/src/main/java/io/strimzi/operator/common/operator/resource/ResourceDiff.java#L46-L78) will need to filter only the parts of the resource that Strimzi owns, so as not to trigger a reconcile loop for externally owned resource paths.
This will allow the removal of `ignorableFields` throughout the codebase, as paths are now being selected rather than fields being filtered out.
Member

You say we need to filter things. What will we filter them against if we remove the ignorableFields?

Member

I'd guess we'd need to consume a resource's metadata.managedFields.


Currently ResourceDiff is looking at two things - the current state and the desired state. It looks at all differences, and explicitly filters out certain things that it knows other parties manage (but it breaks whenever anything else is changed, even if it's not something it should care about, like additional labels, etc.).

Instead, it should look at the differences that are related to the desired state - instead of being a filter of "is this in the ignored paths?", it becomes a filter of "is this path in the desired state?".

We don't need to worry about things that have gone away from the desired state, because the managed fields on the server side will handle it. There's no need to look at the managed fields on the client side here.
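As a rough illustration of that "filter against the desired state" idea, here is a minimal, self-contained sketch using Jackson (which the operator already uses for diffing). The class and method names are made up for this example, it treats arrays as leaf values, and it is not the actual `ResourceDiff` code:

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

import java.util.ArrayList;
import java.util.List;

public class DesiredPathFilter {
    private final List<String> desiredPaths = new ArrayList<>();

    public DesiredPathFilter(JsonNode desired) {
        collectPaths(desired, "");
    }

    // Record every JSON pointer path that the desired state explicitly sets.
    private void collectPaths(JsonNode node, String prefix) {
        if (node.isObject()) {
            node.fields().forEachRemaining(e -> collectPaths(e.getValue(), prefix + "/" + e.getKey()));
        } else {
            desiredPaths.add(prefix);
        }
    }

    // A diff entry is only relevant if its path overlaps with something the desired state declares.
    public boolean isRelevant(String diffPath) {
        return desiredPaths.stream().anyMatch(p ->
                p.equals(diffPath) || p.startsWith(diffPath + "/") || diffPath.startsWith(p + "/"));
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        JsonNode desired = mapper.readTree("{\"metadata\":{\"labels\":{\"a\":\"b\"}}}");
        DesiredPathFilter filter = new DesiredPathFilter(desired);
        System.out.println(filter.isRelevant("/metadata/labels/a"));     // true  -> the desired state sets this, reconcile
        System.out.println(filter.isRelevant("/metadata/labels/extra")); // false -> externally owned, ignore
    }
}
```

With a filter like this, a label injected by an admission controller produces no relevant diff, so no patch is sent and no reconcile loop is triggered.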

Member

> even though it's not something it should care about, like additional labels

I think you are misunderstanding the design. Not every unset field means we do not care about that field. It means that at this point we are fine with its default. And for the labels and annotations, you are actually proposing a change to the semantics here, which today are that, for example, the labels and annotations set through Strimzi should be the only ones present on the resource, i.e. any other labels and annotations not defined there should be removed.

That means that it needs to be clear from the proposal how it would work for the users who are fine with the current design and how it impacts the code and the usability. Right now, I'm not really sure what the actual impact is in some of these areas.


> I think you are misunderstanding the design. Not every unset field means we do not care about that field. It means that at this point we are fine with its default.

Mhm. Well, I think what we're saying here is that we think this design is fundamentally wrong, and it should be changed to allow the standard Kubernetes API machinery to function as expected. Strimzi should work like other operators, which ignore changes to fields they're not explicitly setting.

> which today are that, for example, the labels and annotations set through Strimzi should be the only ones present on the resource, i.e. any other labels and annotations not defined there should be removed.

This is not true. Annotations starting with cattle.io/ or field.cattle.io are silently ignored. This is not mentioned anywhere in the Strimzi API documentation.

> That means that it needs to be clear from the proposal how it would work for the users who are fine with the current design and how it impacts the code and the usability. Right now, I'm not really sure what the actual impact is in some of these areas.

Fair. Right now, anyone who's happy with the current design is clearly not suffering from a tight reconciliation loop - which means they don't have anything that is fighting with Strimzi. This means that with this new proposal there would be no change for them, as they have nothing deployed that tries to make such changes.

I think the impact on the code and usability would be very positive - instead of lots of explicit ignore lists, there would be a simple implementation that only looks at the fields Strimzi is trying to set, and the references to unrelated products like Rancher would be removed. This is a much cleaner design overall.

@tombentley (Member) left a comment

Thanks for this proposal.

I'd like to understand what other benefits (or drawbacks) there would be for users, admission hooks etc in this proposal. Right now it's suggesting some fairly deep changes with labels and annotations as the sole justification.

If you look at our CR APIs they typically copy some Kube specific configuration (e.g. affinity, tolerations etc) which then gets applied to the dependent resources (e.g. pods and services). While this proposal doesn't change that, it would enable a different mechanism for users to configure dependent resources in the future. This raises some questions:

  • If Kube were to add a feature, should we encourage users to configure dependent resources directly?
  • What if we later wanted/needed to configure that via our CR? Would those users who were already configuring the dependent resources directly have an upgrade path (I'm guessing "not really", precisely because those dependent resources might actually be configured by some other operator which doesn't know about Strimzi).
    • If that's the case, should we consider a long term path of simplifying our CR APIs to remove those fields which we're just copying around to the dependent resources?


@mikebryant

> I'd like to understand what other benefits (or drawbacks) there would be for users, admission hooks etc in this proposal. Right now it's suggesting some fairly deep changes with labels and annotations as the sole justification.

Actually, in our current usage, it's labels, annotations, and topologySpreadConstraints, at least.

The point is that if you have any cluster-wide config set using admission controllers, this currently makes the Strimzi operator unusable, because it goes into a tight loop fighting the admission controller - even when Strimzi isn't trying to set the fields in question.

This behaviour is in direct conflict with any use of mutating webhooks, and these are a core Kubernetes feature.

There's also the benefit of cleaner code within the project - there are already references to cattle.io because of this behaviour. Updating Strimzi to allow all of the annotations used by every open source project out there is going to be a constant moving target - it's much simpler to use the upstream solution to this, which is server-side apply & managedFields.

@scholzj (Member)

scholzj commented Jun 1, 2023

> I'd like to understand what other benefits (or drawbacks) there would be for users, admission hooks etc in this proposal. Right now it's suggesting some fairly deep changes with labels and annotations as the sole justification.

> Actually, in our current usage, it's labels, annotations, and topologySpreadConstraints, at least.

> The point is that if you have any cluster-wide config set using admission controllers, this currently makes the Strimzi operator unusable, because it goes into a tight loop fighting the admission controller - even when Strimzi isn't trying to set the fields in question.

> This behaviour is in direct conflict with any use of mutating webhooks, and these are a core Kubernetes feature.

> There's also the benefit of cleaner code within the project - there are already references to cattle.io because of this behaviour. Updating Strimzi to allow all of the annotations used by every open source project out there is going to be a constant moving target - it's much simpler to use the upstream solution to this, which is server-side apply & managedFields.

As I said above, I think you are misunderstanding the semantics of what the user says in the configuration and what exactly you propose to change here. When the Kafka CR defines that the topology spread constraint should not be set by not setting it, that means that the user wants the topology spread constraints to be empty - not that we do not care about them. So it is a managed field that is set to null. Maybe that is the reason why I don't think the proposal is really clear about what it proposes, what the impact is and how it should work, because you seem to be treating it as a simple bugfix when in reality it is a major change to the behavior.

I can understand the use cases for merging labels and annotations. I do not really see any benefit from merging topology spread constraints or affinity rules, and I do not understand why you would not just manage them through the custom resource. (That said, I do not think anything prevents you from injecting your own Topology Spread Constraints with admission controllers already today.)

I think the idea of managing resources through some other means, whether it's admission controllers or just manually editing them through kubectl, also raises supportability issues. When the operator does not know what is actually deployed, it is hard to manage things, debug things, help people out, etc. There are certainly situations where admission controllers make a lot of sense, but in a way it also goes against the operator pattern, since the operator can no longer take responsibility when it does not control the resources.

@mikebryant

> When the Kafka CR defines that the topology spread constraint should not be set by not setting it, that means that the user wants the topology spread constraints to be empty.

These are not the semantics of any other Kubernetes resource. Consider Deployment - it has the same semantics as what we're proposing. Any fields the user sets are set, anything else can and will be modified by admission control, and the Deployment object is fine with this. The Kafka CR semantics are the odd ones out here, across the whole Kubernetes ecosystem.

> when in reality it is a major change to the behavior.

I don't understand this - in an environment not using admission controllers or other methods to change resources, what is the change here? Could you give me an example?

> I do not really see any benefit from merging topology spread constraints or affinity rules, and I do not understand why you would not just manage them through the custom resource.

We (the platform team) want to set up sensible defaults in our cluster. We want to run the operator and let our users create things (like an individual Kafka cluster). By setting defaults using admission control, we can have these set for everything in the environment, rather than documenting "set this" and expecting every user to set it for every single thing they deploy. The difference is in scale. You're thinking about this from the perspective of a single custom resource. I'm thinking about the overhead in 400 different services using a whole bunch of different ways of doing things.

@mikebryant

> (That said, I do not think anything prevents you from injecting your own Topology Spread Constraints with admission controllers already today.)

This causes a reconcile loop, because the StrimziPodSet controller will keep seeing a diff

@scholzj (Member)

scholzj commented Jun 1, 2023

> These are not the semantics of any other Kubernetes resource. Consider Deployment - it has the same semantics as what we're proposing. Any fields the user sets are set, anything else can and will be modified by admission control, and the Deployment object is fine with this. The Kafka CR semantics are the odd ones out here, across the whole Kubernetes ecosystem.

But Strimzi does not prevent you from modifying things with the admission controller. It might trigger the admission controller, but it would not actually run in a tight loop, because it does not have any events for this.

> I don't understand this - in an environment not using admission controllers or other methods to change resources, what is the change here? Could you give me an example?

I guess you can start by explaining some of the simple situations ...

  • The Kafka CR is configured to add the labels a: b and c: d to the Service Account ... now the user removes c: d from the Kafka CR. What happens?
  • The Kafka CR is configured to add the labels a: b and c: d to the Service Account ... now the user uses kubectl or some other tool to change it to c: e. What happens?
  • The Service Account has the label a: b. The user configures the Kafka CR to set the label a: c. What happens?
  • If I stick with your examples ... the user configures pod anti-affinity for pods with the label app: pulsar. The admission controller adds pod anti-affinity for the label app: nats. What would happen?
  • The Kafka CR configures the peers that are allowed to access some port in the network policy it creates. A user uses kubectl to edit the network policy to add more peers. What happens in the next reconciliation?

Situations like this should be clearly covered in the proposal.

> We (the platform team) want to set up sensible defaults in our cluster. We want to run the operator and let our users create things (like an individual Kafka cluster). By setting defaults using admission control, we can have these set for everything in the environment, rather than documenting "set this" and expecting every user to set it for every single thing they deploy. The difference is in scale. You're thinking about this from the perspective of a single custom resource. I'm thinking about the overhead in 400 different services using a whole bunch of different ways of doing things.

For example, the topology spread constraint which was raised before is more or less useless without rack awareness in Kafka. Are you injecting that into Kafka as well? What if some users are too smart and set their own topology spread constraints in the Kafka CR? What about deleting the Kafka cluster and its data by mistake? Or misconfiguring the Kafka cluster and its topics to not have any replication (and again, making the cluster unavailable/unreliable regardless of the topology spread constraints)?

I think this is basically unrelated to this proposal ... but I wonder whether, for example, having some set of OPA Gatekeeper rules that reject Kafka clusters (at the custom resource level) without the right configuration would be a better approach, as it allows you to enforce things beyond simple modifications to Kubernetes resources. You can enforce your topology spread constraint, you can enforce rack awareness, you can enforce topic configurations to use replication, etc.

> This causes a reconcile loop, because the StrimziPodSet controller will keep seeing a diff

Can you share more details on what exactly you do and how? The StrimziPodSet controller does not really diff the Pods, and the KafkaReconciler IIRC compares only the revision. That is actually very different behavior from other resources (e.g. Services, Service Accounts, etc.). But that should not be reconciled in this way.

@AntoineDao

AntoineDao commented Jun 2, 2023

Hi @scholzj 👋 We are currently reviewing the comments left on this thread and will aim to get back to you within the next week. We appreciate the team taking time to review this proposal in detail and will aim to give you the information needed to come to a decision quickly.

Some Questions

I have a couple of questions to understand what the key sticking points are with this proposal:

  1. Are the implementation details absolutely necessary before someone can start work on this proposal? Do proposals usually come paired with a Proof of Concept Pull Request to the strimzi operator repository?
  2. Does the fact we intend to put this change behind a feature flag reduce the burden of evidence that it won't impact existing users with backwards-incompatible changes? Are feature flags acceptable as a semi-permanent solution if the Strimzi team is not ready to commit to Server Side Apply by default in the future? This would make it more of a configuration flag I guess 🎏
  3. Do you have community calls/open door meetings for contributors looking to make changes to the codebase? We want to help improve Strimzi for the better and there is a lot of context and complexity in this project that we could benefit from talking about to someone who has worked on the code base for a while.

Some Answers

I can answer a few questions asked on this thread that I think have not been answered already:

> • What impact does it have on supported Kubernetes versions? Are all the required features supported and enabled by default in Kube 1.21+? (I think they are, but it might be good to double-check and confirm in the proposal.)

The main Kubernetes change we propose is to use Server Side Apply where Strimzi currently uses a JSON Patch. This feature has been stable since 1.22 and beta since 1.18.

> • Do you have a working prototype of this? (Just asking out of curiosity - it is not really required for the proposal itself.)

No, we do not yet. This is also why we have been light on implementation details. We were hedging our bets to avoid dedicating too much dev time if this proposal was going to be rejected from the get-go. I hope my understanding is correct that the Strimzi team is interested in this feature and can see the wider benefit when it comes to enhancing the operator with newer Kubernetes features 😄

Wrapping Up Requirements

To confirm, am I right in my understanding that the key information you require to move this proposal forward is as follows:

  1. Figure out whether the mock Kubernetes server handles Server Side Apply for testing purposes. If not, propose a valid alternative.
  2. Give a detailed explanation of the specific use-case that we are trying to resolve with this proposal. This should allow you to assess whether our proposal is overkill or commensurate with the issue at hand.
  3. Provide a list of scenarios where this proposal will change the current behaviour and what we expect the new behaviour to be in each scenario.

@scholzj (Member)

scholzj commented Jun 2, 2023

> We are currently reviewing the comments left on this thread and will aim to get back to you within the next week. We appreciate the team taking time to review this proposal in detail and will aim to give you the information needed to come to a decision quickly.

No worries. And thanks for taking the time to write the proposal ;-).

> Are the implementation details absolutely necessary before someone can start work on this proposal? Do proposals usually come paired with a Proof of Concept Pull Request to the strimzi operator repository?

No, it is not absolutely required to have a PoC. In the past, we had proposals both with and without PoCs / prototypes. It always depends a bit on the author, the subject of the proposal, etc.

However, the proposal should (at least from my personal PoV) convince people not just that it is a nice idea or a useful feature, but also that it is feasible, that it will work, that it can be maintained, etc. Having a working PoC might help with this. One of the areas where a PoC is IMHO useful is that you have something in addition to the text to look into - reading code sometimes makes it easier to understand what exactly is meant by some parts of the proposal text.

But it might also be sufficient to just go into a bit more detail on what parts of the Strimzi code base would change and how. The actual changes will be fairly low-level and deep within the Strimzi code base, so understanding how it will be changed helps. As you also mention the feature gate - it would also be great to understand how you plan to integrate it, for example at what level the code will branch on the feature gate being on or off.
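For a concrete sense of what that branching might look like, a minimal sketch is below. It assumes Strimzi's existing STRIMZI_FEATURE_GATES environment variable; the gate name UseServerSideApply and the method names are hypothetical, not actual Strimzi code:

```java
import java.util.Arrays;

public class ApplyStrategy {

    // Hypothetical gate name, parsed from the existing STRIMZI_FEATURE_GATES
    // mechanism (a comma-separated list of +Gate / -Gate entries).
    static boolean useServerSideApply() {
        String gates = System.getenv().getOrDefault("STRIMZI_FEATURE_GATES", "");
        return Arrays.asList(gates.split(",")).contains("+UseServerSideApply");
    }

    // The branch could sit at the level where the generic resource operator
    // decides how to push the desired state to the Kubernetes API.
    static void apply(Object current, Object desired) {
        if (useServerSideApply()) {
            // New path: one apply request; the server merges by field ownership.
            serverSideApply(desired);
        } else {
            // Existing path: local diff against ignorable fields, then a JSON patch.
            diffAndJsonPatch(current, desired);
        }
    }

    static void serverSideApply(Object desired) { /* see the client sketches further down */ }
    static void diffAndJsonPatch(Object current, Object desired) { /* current behaviour */ }
}
```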

Honestly, I have very little practical experience with server-side apply - so I might be a bit skeptical about how well it works, based on past experiences with other similar things. So having a bit more detail on how it will work would help me understand it and better see the advantage you expect from it.

> Does the fact we intend to put this change behind a feature flag reduce the burden of evidence that it won't impact existing users with backwards-incompatible changes? Are feature flags acceptable as a semi-permanent solution if the Strimzi team is not ready to commit to Server Side Apply by default in the future? This would make it more of a configuration flag I guess 🎏

I think the feature gate helps to protect against bugs in the early implementation, against some unforeseen situations, etc. So it helps with any concerns about how well server-side apply works, about any implementation details, etc. So I think it is a good idea to have a feature gate for this.

But I do not think it helps that much with the change to the semantics of how it will work. A few people like you who are really interested in it will probably use it early, confirm that it helps with your use case, and eventually improve it before it is enabled for everyone. A few other people might try it - but most users who did not care about this until now will only run into it when the feature gate is enabled by default, and the new logic will either be fine for them or not. In other words, the incompatibility will still come sooner or later.

That is at least my experience with the other feature gates we had.

> Do you have community calls/open door meetings for contributors looking to make changes to the codebase? We want to help improve Strimzi for the better and there is a lot of context and complexity in this project that we could benefit from talking about to someone who has worked on the code base for a while.

We have bi-weekly community calls open to anyone: https://github.com/strimzi/strimzi-kafka-operator#strimzi-community-meetings ... if needed, I'm sure we can also arrange some separate call about this.

> No, we do not yet. This is also why we have been light on implementation details. We were hedging our bets to avoid dedicating too much dev time if this proposal was going to be rejected from the get-go. I hope my understanding is correct that the Strimzi team is interested in this feature and can see the wider benefit when it comes to enhancing the operator with newer Kubernetes features 😄

I guess we can first clarify the changes to the behavior and how they impact other users - for example by clarifying the scenarios I mentioned in my previous comment - and, once that is clearer, move towards the implementation details.

> Wrapping Up Requirements

> To confirm, am I right in my understanding that the key information you require to move this proposal forward is as follows:

>   1. Figure out whether the mock Kubernetes server handles Server Side Apply for testing purposes. If not, propose a valid alternative.
>   2. Give a detailed explanation of the specific use-case that we are trying to resolve with this proposal. This should allow you to assess whether our proposal is overkill or commensurate with the issue at hand.
>   3. Provide a list of scenarios where this proposal will change the current behaviour and what we expect the new behaviour to be in each scenario.

Yes, I think clarifying these points would help to understand things better.


PS: Just to be clear, this is my personal answer, not coordinated in any way with the other maintainers - in case they have different views ;-)

@jfuller95 (Contributor, Author)

Just to give an update, I am currently ingesting all the information on this page before refining the proposal 😄

@jfuller95 (Contributor, Author)

jfuller95 commented Jun 13, 2023

Thanks all for the comments, I hope the latest commit and this message can move us on a little bit.

The specific scenario that caused us issues is described in the proposal itself, but in short: Kyverno applied annotations to StatefulSets (now StrimziPodSets) which Strimzi didn't recognise and tried to remove, which Kyverno then reapplied, and so on - it kept a Kafka cluster continuously rolling. Please see the proposal for a more in-depth explanation with a log segment.

We currently have a Java engineer starting a PoC piece of work on this, so I cannot comment on the Mock Kubernetes server used in tests, nor can I share a PoC yet, but hopefully we can in due course.

Server Side Apply can be summed up as: you control the things you need/want to and let the rest go. In a Deployment, for example, you'd set the image and version but leave limits/replicas alone so that a VPA/HPA can control them as needed. This obviously requires thought about which fields you want to control and which you don't. It simplifies the code greatly, as you don't need a list of annotations to ignore, etc.
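To make that Deployment example concrete, here is a minimal sketch of "own only the fields you set" with server-side apply. It assumes the fabric8 Kubernetes client 6.x `serverSideApply()` DSL; the name, namespace and image are made up:

```java
import io.fabric8.kubernetes.api.model.apps.Deployment;
import io.fabric8.kubernetes.api.model.apps.DeploymentBuilder;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

public class PartialOwnershipExample {
    public static void main(String[] args) {
        // Only the fields we care about are set: name, selector, labels, image.
        // spec.replicas and resource limits are deliberately left unset, so another
        // manager (e.g. an HPA or VPA) can own them without anyone fighting over them.
        Deployment desired = new DeploymentBuilder()
                .withNewMetadata()
                    .withName("demo")
                    .withNamespace("default")
                .endMetadata()
                .withNewSpec()
                    .withNewSelector().addToMatchLabels("app", "demo").endSelector()
                    .withNewTemplate()
                        .withNewMetadata().addToLabels("app", "demo").endMetadata()
                        .withNewSpec()
                            .addNewContainer()
                                .withName("demo")
                                .withImage("registry.example.com/demo:1.2.3")
                            .endContainer()
                        .endSpec()
                    .endTemplate()
                .endSpec()
                .build();

        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            // The API server records this client's field manager as the owner of exactly
            // the fields present in 'desired' (visible in metadata.managedFields) and
            // leaves fields owned by other managers untouched.
            client.resource(desired).serverSideApply();
        }
    }
}
```

After this, re-applying the same unchanged desired object is effectively a no-op even if an HPA has since changed spec.replicas, which is exactly the behaviour that avoids the reconcile loop described above.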

@tombentley mentioned new features and whether Strimzi should update to set them or leave them to the user. This is hard to comment on as it would depend on the feature and desired behaviour, but in short - Strimzi should only be controlling the fields it cares about, and forcing control of them at that.

  • The Kafka CR is configured to add the labels a: b and c: d to the Service Account ... now the user removes c: d from the Kafka CR. What happens?
    The SA labels above would be in metadata.managedFields; once the user changes the CR, the next reconciliation loop would see that it owns a field that no longer exists in the "applied" config, and the label would be removed.
  • The Kafka CR is configured to add the labels a: b and c: d to the Service Account ... now the user uses kubectl or some other tool to change it to c: e. What happens?
    The SA labels above would be in metadata.managedFields and would therefore show up in the next reconciliation loop as a diff, forcing a change to re-add them.
  • The Service Account has the label a: b. The user configures the Kafka CR to set the label a: c. What happens?
    The next reconciliation loop would see the remote as a: b and the applied config as a: c, and would set it to a: c as it is within metadata.managedFields.
  • If I stick with your examples ... the user configures pod anti-affinity for pods with the label app: pulsar. The admission controller adds pod anti-affinity for the label app: nats. What would happen?
    Strimzi would force ownership of the label and set it to app: nats, overwriting the user-set config.
  • The Kafka CR configures the peers that are allowed to access some port in the network policy it creates. A user uses kubectl to edit the network policy to add more peers. What happens in the next reconciliation?
    There is no patch strategy listed, so it should replace the whole list with the desired state, thereby removing the user-added peers.

Lastly, I have also left another comment about using .serverSideApply(), as I am unsure about it.
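For reference, the force-ownership behaviour described in the answers above maps onto the "force" option of server-side apply. A sketch of what the call could look like is below; `fieldManager()` and `forceConflicts()` are assumed to be available in the fabric8 client version used (they are part of recent 6.x server-side-apply support, but this should be verified against the version Strimzi pins), and the field manager name is illustrative:

```java
import io.fabric8.kubernetes.api.model.HasMetadata;
import io.fabric8.kubernetes.client.KubernetesClient;

public class SsaApplier {
    // Sends the desired state as a server-side apply and forces ownership of any
    // conflicting fields, i.e. fields currently owned by another field manager are
    // taken over and overwritten - matching the "Strimzi would force ownership"
    // answers above. Without forcing conflicts, the API server would instead return
    // a 409 Conflict listing the other managers.
    static <T extends HasMetadata> T apply(KubernetesClient client, T desired) {
        return client.resource(desired)
                .fieldManager("strimzi-cluster-operator")   // assumed DSL method
                .forceConflicts()                            // assumed DSL method
                .serverSideApply();
    }
}
```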

@scholzj (Member) left a comment

Sorry, I was a bit busy with the upcoming release, so I got to this only now. I left some more comments, mostly about the wording. I think only the point around the use of the noop reconciliation result might be worth checking, if you have not checked it already. But otherwise it looks good to me.

@scholzj (Member) left a comment

Thanks for the proposal. I think it looks good to me now.

@PaulRMellor (Contributor) left a comment

Nice proposal. I've made a few minor suggestions for readability.

@scholzj (Member)

scholzj commented Jul 25, 2023

@tombentley @ppatierno Do you have anything more to add to this? It would be great to have your comments / approval before we move forward with it.

@ppatierno (Member) left a comment

LGTM. Nothing more from my side, thanks for this proposal!

@tombentley (Member) left a comment

I left one suggestion, but it's only a suggestion. This LGTM. Thank you for this significant effort.

@scholzj (Member)

scholzj commented Jul 31, 2023

@jfuller95 can you please look at the comment from Tom?


This now has enough votes to be approved. Let's leave it open until Tuesday (1st August) in case anyone has further comments. If not, we should merge it as approved.

@scholzj (Member)

scholzj commented Aug 9, 2023

@jfuller95 Did you get a chance to look at the last comment from Tom? It would be great if we could get it resolved and merge this.

@jfuller95 (Contributor, Author)

Sorry - I have been off for 2 weeks. Will take a look today 😄

@tombentley (Member) left a comment

LGTM, thanks!

@scholzj (Member)

scholzj commented Aug 10, 2023

Approved with 6 binding +1 votes.

Thanks for the proposal @jfuller95 (and everyone else for the comments and contributions to this proposal as well)

@scholzj merged commit 342f506 into strimzi:main on Aug 10, 2023
@jfuller95 deleted the k8s-server-side-apply branch on August 11, 2023 08:43