delete slave pod when slave pod got provisioned but failed to run #1774
Added a call to deleteSlavePod when the computer is null. When the Jenkins master runs out of disk space, it fails to write the queue file /var/jenkins_home/queue.xml and hits other file-write errors, such as:

Failed to create agent log directory /var/jenkins_home/logs/slaves/{slavePodName}

and:

Provisioned agent {slavePodName} failed to launch
java.nio.file.FileSystemException: /var/jenkins_home/nodes/{slavePodName}: No space left on device

In this state the kubernetes plugin tries to provision a new pod every second, which exhausts cloud resources very quickly. And since no "delete" command is ever sent to the cloud, the cloud cannot remove these orphaned pods.
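For context, here is a minimal sketch of the idea, not the plugin's actual code: when the launcher sees that the provisioned node has no computer, it deletes the backing pod instead of leaving it running. The pod deletion uses the standard fabric8 KubernetesClient API; the method shape, the `launch` signature, and the `deleteSlavePod` helper here are illustrative assumptions based on the description above.

```java
// Illustrative sketch only -- not the plugin's actual implementation.
// Assumes a fabric8 KubernetesClient; the computer parameter stands in for
// the Jenkins SlaveComputer that the real launcher receives.
import io.fabric8.kubernetes.client.KubernetesClient;

public class SlavePodCleanupSketch {

    private final KubernetesClient client;

    public SlavePodCleanupSketch(KubernetesClient client) {
        this.client = client;
    }

    /**
     * Launch path. If the node's computer is null (for example because the
     * master could not persist the node due to a full disk), delete the pod
     * so the cloud does not accumulate agents that will never be used.
     */
    public void launch(String namespace, String podName, Object computer) {
        if (computer == null) {
            // Without this, the plugin keeps re-provisioning pods every second
            // and nothing ever tells the cloud to remove the failed ones.
            deleteSlavePod(namespace, podName);
            return;
        }
        // ... normal launch path ...
    }

    /** Hypothetical helper mirroring the deleteSlavePod call described above. */
    private void deleteSlavePod(String namespace, String podName) {
        client.pods().inNamespace(namespace).withName(podName).delete();
    }
}
```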
Testing done
Steps to test:
Submitter checklist