Provide a way of customizing DOCKER_PRUNE_UNTIL, DISK_MIN_AVAILABLE, and DISK_MIN_INODES for the cron job #465

Open
arturopie opened this issue Aug 3, 2018 · 4 comments · May be fixed by #639 or #623
Labels: agent health, enhancement, help wanted

Comments

arturopie commented Aug 3, 2018
Right now, there is no way to customize those values for the hourly docker-gc cron job. The defaults are not good enough for us: our Docker images are quite big, so the disk fills up very quickly. We also run a RAM drive for the runners, which requires different clean-up parameters from the builders.

I imagine one approach could be to add those variables to the CloudFormation template parameters and generate the cron job at provision time, rather than at image-build time. Another option is to generate a file at provision time that the cron job would source when it runs (see the sketch below). Thoughts?
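To make the second option concrete, here's a rough sketch of what the cron script could do, assuming a hypothetical `/etc/docker-gc.conf` written at provision time; the path, file format, and default values here are placeholders of mine, not what the stack actually ships:

```bash
#!/usr/bin/env bash
# Hypothetical docker-gc cron entry point. Provisioning would write the
# overrides file from stack parameters, e.g.:
#
#   # /etc/docker-gc.conf
#   DOCKER_PRUNE_UNTIL=4h
#   DISK_MIN_AVAILABLE=10485760   # KB
#   DISK_MIN_INODES=250000

# Source the overrides if present; otherwise fall back to baked-in defaults.
if [[ -f /etc/docker-gc.conf ]]; then
  # shellcheck source=/dev/null
  source /etc/docker-gc.conf
fi

: "${DOCKER_PRUNE_UNTIL:=1h}"
: "${DISK_MIN_AVAILABLE:=1048576}"   # KB
: "${DISK_MIN_INODES:=100000}"
```

This keeps the image-built cron job unchanged and moves only the tunables to provision time.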

lox (Contributor) commented Oct 1, 2018

Yeah, I think this would be worth adding a stack parameter for.

tspecht commented Jan 12, 2019

+1 on this. Are there any plans to add it?

lox (Contributor) commented Jul 27, 2019

This would definitely be nice to have. Would happily accept a PR.

huonw commented Nov 20, 2019

Our builds start failing while there's still a significant amount of disk free (~2GB), because our images are large. A large chunk of these images could be cleaned up automatically by docker system prune (even with the until=1h filter), but the DISK_MIN_AVAILABLE threshold is too low, so the script never ends up running the prune.

With the current 4.3 stacks, the limit seems to be 1GB, but on master it seems to be 5GB (e97921c). That would be good enough as a workaround for us. Is there a reason it isn't in the 4.3 series?
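For context, the gating behaviour I'm describing would look roughly like this; the mount point, units, and exact commands are my assumptions, not the stack's actual script:

```bash
#!/usr/bin/env bash
# Sketch of the threshold check: the prune only fires once free space has
# already dropped below DISK_MIN_AVAILABLE, so with a 1GB threshold large
# images can break builds before any cleanup ever happens.
DISK_MIN_AVAILABLE="${DISK_MIN_AVAILABLE:-1048576}"   # KB (~1GB, as on 4.3)
DOCKER_PRUNE_UNTIL="${DOCKER_PRUNE_UNTIL:-1h}"

# Free space (in KB) on the volume holding the Docker data root.
avail_kb=$(df --output=avail -k /var/lib/docker | tail -n 1)

if (( avail_kb < DISK_MIN_AVAILABLE )); then
  docker system prune --force --filter "until=${DOCKER_PRUNE_UNTIL}"
fi
```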
