Replies: 1 comment
I've run into this before. The solution for me was to make the health checks less frequent. The locks appeared because the healthcheck was timing out (60s) and cancelling the restic operation before it finished. I've had no further issues with the following configuration:

```yaml
healthcheck:
  test: ["CMD", "rcb", "status"]
  interval: 30m
  timeout: 5m
  retries: 3
  start_period: 1m
```
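For anyone copy-pasting, a minimal sketch of where that block sits in a compose file — the service name, image, and environment values below are placeholders, not taken from this thread:

```yaml
services:
  backup:                               # placeholder service name
    image: stack-back:latest            # placeholder image reference
    environment:
      RESTIC_REPOSITORY: /restic_data   # placeholder repository
      RESTIC_PASSWORD: example-password # placeholder; use a secret in practice
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    healthcheck:
      test: ["CMD", "rcb", "status"]
      interval: 30m      # check far less often than the backup schedule
      timeout: 5m        # long enough that restic isn't killed mid-operation
      retries: 3
      start_period: 1m
```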
Hi 👋
I was using an `rcb status`-based HealthCheck configuration for my stack-back containers. This worked pretty well until, at some point (I suppose because deployments became more frequent), stale locks started to appear.
Those locks do not affect the backups themselves; however, they do affect the cleanup, i.e. the maintenance of the repository.

Since the `hostname` in the lock file is the same as the running container (not the backup runner), and the timestamp of the lock file does not correspond to the backup schedule, I started to suspect the HealthCheck. `rcb status` calls the `restic cat config` command, which by default creates a lock on the repository. There is a `--no-lock` option, which disables the lock, so I've updated my HealthChecks to use `restic cat config --no-lock` instead of `rcb status` (see the sketch below).

It would be nice to receive any thoughts/suggestions on this topic 🙃
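For reference, the updated healthcheck looks roughly like this — the interval, timeout, and retries are illustrative rather than the exact values from my compose file, and it assumes restic is on the PATH inside the container and that `RESTIC_REPOSITORY`/`RESTIC_PASSWORD` are already set (rcb needs them anyway):

```yaml
healthcheck:
  # call restic directly with --no-lock so the check never locks the repository
  test: ["CMD", "restic", "cat", "config", "--no-lock"]
  interval: 5m
  timeout: 1m
  retries: 3
```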
Cheers!