Conversation

@llxp (Contributor) commented on Mar 28, 2025

This PR fixes a bug where an InfobloxIPPool could be deleted even while claims were still referencing the pool, and adds log messages for Infoblox operations.

This PR can only be merged once the corresponding PR in the in-cluster provider is merged as well and go.mod is updated accordingly:
kubernetes-sigs/cluster-api-ipam-provider-in-cluster#329

@llxp requested a review from schrej as a code owner on March 28, 2025 10:51
@llxp force-pushed the feature/add-ib-logging branch from c12a6ec to a595a94 on March 28, 2025 15:11
for _, claim := range inUseClaims {
	logger.Info("still found claim in use", "claim", claim.Name)
}
if len(inUseClaims) == 0 && len(inUseAddresses) == 0 {

Member

I don't think we need to check for both here. IPAddresses won't be reconciled on their own, so even if we end up with an orphaned IPAddress there is no use in keeping the pool. Checking for claims is sufficient.
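
A minimal sketch of the simplified guard described here, assuming controller-runtime and the CAPI IPAM claim type; the package, finalizer name, and function signature are illustrative, not the PR's actual code:

```go
package pools

import (
	"github.com/go-logr/logr"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"

	ipamv1 "sigs.k8s.io/cluster-api/exp/ipam/api/v1beta1"
)

// poolProtectionFinalizer is an illustrative name, not the repository's actual finalizer.
const poolProtectionFinalizer = "ipam.cluster.x-k8s.io/pool-protection"

// reconcileDelete keeps the pool around (via its finalizer) while any
// IPAddressClaim still references it; orphaned IPAddresses alone no longer
// block deletion, matching the review suggestion above.
func reconcileDelete(logger logr.Logger, pool client.Object, inUseClaims []ipamv1.IPAddressClaim) (ctrl.Result, error) {
	if len(inUseClaims) > 0 {
		for _, claim := range inUseClaims {
			logger.Info("pool still referenced by claim, delaying deletion", "claim", claim.Name)
		}
		// Requeue happens via the claims' own events; the finalizer blocks deletion until then.
		return ctrl.Result{}, nil
	}
	controllerutil.RemoveFinalizer(pool, poolProtectionFinalizer)
	return ctrl.Result{}, nil
}
```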

Contributor Author (@llxp)

ok, gonna change that.

Contributor Author (@llxp) on Mar 28, 2025

So maybe we also don't need the (possibly not even working) DELETE webhook anymore then, right?

Member

The webhook should do the same thing as the controller. The webhook is for UX, the finalizer is unavoidable protection. See kubernetes-sigs/cluster-api#5498 for a longer discussion.
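
For context, a rough sketch of a delete webhook that mirrors the controller's claim check, assuming controller-runtime's admission validator interface; the validator type and the name-only PoolRef comparison are illustrative, not the repository's actual webhook:

```go
package webhooks

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	ipamv1 "sigs.k8s.io/cluster-api/exp/ipam/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/webhook/admission"
)

// PoolDeleteValidator is an illustrative validator, not the repository's actual webhook.
type PoolDeleteValidator struct {
	Client client.Client
}

// ValidateDelete rejects pool deletion while claims still reference the pool,
// giving users immediate feedback; the finalizer remains the real protection.
func (v *PoolDeleteValidator) ValidateDelete(ctx context.Context, obj runtime.Object) (admission.Warnings, error) {
	pool, ok := obj.(metav1.Object)
	if !ok {
		return nil, fmt.Errorf("expected a metav1.Object, got %T", obj)
	}
	claims := &ipamv1.IPAddressClaimList{}
	if err := v.Client.List(ctx, claims, client.InNamespace(pool.GetNamespace())); err != nil {
		return nil, err
	}
	for _, claim := range claims.Items {
		// Comparing by name only keeps the sketch short; a real check would
		// also match the PoolRef kind and API group.
		if claim.Spec.PoolRef.Name == pool.GetName() {
			return nil, fmt.Errorf("pool is still referenced by IPAddressClaim %s/%s", claim.Namespace, claim.Name)
		}
	}
	return nil, nil
}
```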

Comment on lines 123 to 124
err := errors.New("pool not found in infoblox ipam provider")
logger.Error(err, "the referenced pool could not be found in infoblox-provider")

Member

This is quite repetitive. Can we shorten it a bit? Maybe we just log the actual error here?

Also please don't explain that this is coming from the 'infoblox provider' in any log or error messages. If you're reading these errors/logs you're looking at the output of said provider, so there is no benefit in stating that here.

Contributor Author (@llxp)

The idea was to be able to identify which part of the provider a log message stems from, since this provider will later be wrapped directly by the in-cluster provider. Previously the error messages were exactly the same, and it took me quite some time to figure out which project an error message actually came from. So some kind of identification would be useful, I'd say.

Contributor Author (@llxp)

I've now changed the wording a bit so that no provider is mentioned and the two error messages differ slightly.

Member

Then we could consider having more precise errors in the ClaimReconciler by explicitly stating that the error occurred in the ProviderAdapter or ClaimHandler.
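
A small sketch of what that could look like, wrapping adapter errors with the component name via %w; the ProviderAdapter interface and method shown are illustrative stand-ins, not the repository's actual API:

```go
package claims

import (
	"context"
	"fmt"
)

// ProviderAdapter stands in for the provider-specific adapter interface.
type ProviderAdapter interface {
	EnsureAddress(ctx context.Context, pool string) (string, error)
}

// allocate wraps adapter errors with the component name, so the claim
// controller's logs state where a failure occurred without repeating the
// same message at every layer.
func allocate(ctx context.Context, adapter ProviderAdapter, pool string) (string, error) {
	addr, err := adapter.EnsureAddress(ctx, pool)
	if err != nil {
		return "", fmt.Errorf("ProviderAdapter: allocating address from pool %q: %w", pool, err)
	}
	return addr, nil
}
```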

@llxp force-pushed the feature/add-ib-logging branch 2 times, most recently from ec208c2 to e50105c on March 28, 2025 17:58
@llxp force-pushed the feature/add-ib-logging branch from e50105c to bb37e3e on March 28, 2025 18:01
@schrej merged commit 4b91620 into telekom:main on Jun 23, 2025
3 checks passed