fix: DRY + speed up docker build #2016
base: dev
Conversation
Hey, bro, I think it's in the dev branch, not the main branch
@akx Love to see some work on making Docker builds faster but I will note that we've had some past issues getting
Thanks for the PR, LGTM but more testing wanted here!
I couldn't find any instructions on which branch to target, but I noticed a lot of work happening on both |
What are the Docker platform combinations to target here? I can write e.g. a Makefile to build all images. :)
(same results repeated for no builtin ollama) Is there a particular e.g. CUDAness scenario in which it hasn't worked here?
As above: anything you'd particularly like me to test (within the constraints here, i.e. I'm working on Apple Silicon, no CUDA here)?
I'll go through our convos the last time I messed around with |
We would need testing with all 6 of our variants!
And test runs of GitHub Actions workflows
@akx I think there have been some significant changes as of late to our build workflow. If this PR is still applicable I encourage you to keep working on it, or if we've resolved some of the original issues that prompted you to create it perhaps it can now be closed.
@justinh-rahb The repetition that this PR was fixing is still there in the Dockerfile as far as I can see. I'll maybe revisit this once someone takes a look at my other PRs (#2041, #2233) – it's discouraging to keep rebasing them to no review or interest.
I understand your frustration, but please also remember that we're volunteers with regular jobs too, and the project has been moving very quickly. I am personally trying to get the backlog looked at right now, which is why I've been checking in with the open PRs and communicating with our contributors in the backchannels to get things moving along.
Absolutely, I'm in the same position. Sorry if I sounded a bit unkind there :)
Description
This PR:

- deduplicates the repeated `RUN` statements
- uses `uv` for installing Torch as well (it's faster!)

This should have no effect for the end user other than a possibly slightly smaller image.
For developers, this is easier to maintain.
For CI and builders, this is faster.
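As a rough illustration of the two changes above (this is a hedged sketch, not the project's actual Dockerfile – the base image, package names, and layer layout here are assumptions):

```dockerfile
# Hypothetical sketch only; stage names and packages are illustrative.
FROM python:3.11-slim AS base

# Collapse what were several near-identical RUN statements into one layer:
# fewer layers, one place to maintain the logic, and a slightly smaller image.
RUN apt-get update && \
    apt-get install -y --no-install-recommends git curl && \
    rm -rf /var/lib/apt/lists/*

# Install Torch via uv instead of plain pip – uv resolves and installs
# packages faster, which speeds up CI and local builds.
RUN pip install --no-cache-dir uv && \
    uv pip install --system torch --index-url https://download.pytorch.org/whl/cpu
```

The CPU wheel index shown is just one example; the CUDA variants would point at the corresponding PyTorch wheel index instead.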
Testing & review
I checked that an image built with and without `USE_OLLAMA` works as before.
I didn't check the CUDA configuration, since I have no CUDA-enabled Docker hardware at hand right now.
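For reviewers who want to reproduce that check, the two builds might look like the following (a sketch: the `USE_OLLAMA` build arg comes from the PR text, but the image tags and port mapping here are assumptions):

```shell
# Build without the bundled Ollama (default).
docker build -t open-webui:test .

# Build with USE_OLLAMA enabled.
docker build --build-arg USE_OLLAMA=true -t open-webui:test-ollama .

# Smoke-test that each image starts and serves the UI.
docker run --rm -p 3000:8080 open-webui:test
```

Running the same pair of builds for each of the six variants mentioned above would cover the full matrix.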
Changelog Entry
Changed