watch: support multiple containers for tar implementation #10860

Merged

milas merged 1 commit into docker:v2 from the tar-multiwrite branch on Aug 3, 2023

Conversation

milas (Contributor) commented on Aug 1, 2023

What I did
Support services with scale > 1 for the tar watch sync.

Add a "lossy" multi-writer specific to pipes that writes the tar data to each io.PipeWriter, which is connected to stdin for the tar process being exec'd in the container.

The data is written serially to each writer. This could be adjusted to do concurrent writes but that will rapidly increase the I/O load, so is not done here - in general, 99% of the time you'll be developing (and thus using watch/sync) with a single replica of a service.

If a write fails, the corresponding io.PipeWriter is removed from the active set and closed with an error.

This means that a single container copy failing won't stop writes to the others that are succeeding. Of course, they will be in an inconsistent state afterwards still, but that's a different problem.
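
For illustration, here is a minimal Go sketch of a pipe-specific lossy multi-writer along these lines. The names (`lossyMultiWriter`, `newLossyMultiWriter`, the package name) are hypothetical and are not taken from the PR; the PR's actual implementation lives in `internal/sync/writer.go` (see the coverage report below).

```go
// Hypothetical sketch of a "lossy" multi-writer for pipes; not the PR's code.
package watchsync

import (
	"errors"
	"io"
	"sync"
)

// lossyMultiWriter fans one tar stream out to one *io.PipeWriter per container.
// A pipe whose write fails is closed with that error and dropped from the set;
// the remaining pipes keep receiving data.
type lossyMultiWriter struct {
	mu      sync.Mutex
	writers []*io.PipeWriter
}

func newLossyMultiWriter(writers ...*io.PipeWriter) *lossyMultiWriter {
	return &lossyMultiWriter{writers: writers}
}

// Write copies p serially to each remaining pipe. An individual pipe failure
// is not returned to the caller; Write only errors once every pipe is gone.
func (w *lossyMultiWriter) Write(p []byte) (int, error) {
	w.mu.Lock()
	defer w.mu.Unlock()

	alive := w.writers[:0]
	for _, pw := range w.writers {
		if _, err := pw.Write(p); err != nil {
			// Surface the error to the reader side (the tar process's stdin)
			// and drop this pipe from the active set.
			pw.CloseWithError(err)
			continue
		}
		alive = append(alive, pw)
	}
	w.writers = alive

	if len(w.writers) == 0 {
		return 0, errors.New("no remaining writers")
	}
	return len(p), nil
}

// Close closes every remaining pipe so their readers see EOF.
func (w *lossyMultiWriter) Close() error {
	w.mu.Lock()
	defer w.mu.Unlock()
	for _, pw := range w.writers {
		pw.Close()
	}
	w.writers = nil
	return nil
}
```

In this sketch, each replica would get its own `io.Pipe()`, with the read end streamed to the container exec's `stdin` and the write end handed to the multi-writer.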

(not mandatory) A picture of a cute animal, if possible in relation to what you did

[Image: four cats that all look very similar]

milas requested a review from a team on August 1, 2023 at 21:25
milas self-assigned this on Aug 1, 2023
milas requested reviews from nicksieger, ndeloof, StefanScherer, ulyssessouza, glours and laurazard and removed the request for a team on August 1, 2023 at 21:25
glours (Contributor) left a comment


LGTM, thanks for the test 👍

codecov bot commented on Aug 2, 2023

Codecov Report

Patch coverage: 9.03% and project coverage change: -1.60% ⚠️

Comparison is base (8318f66) 59.88% compared to head (fb33132) 58.29%.
Report is 8 commits behind head on v2.

Additional details and impacted files
@@            Coverage Diff             @@
##               v2   #10860      +/-   ##
==========================================
- Coverage   59.88%   58.29%   -1.60%     
==========================================
  Files         118      120       +2     
  Lines       10064    10377     +313     
==========================================
+ Hits         6027     6049      +22     
- Misses       3440     3728     +288     
- Partials      597      600       +3     
| Files Changed | Coverage Δ |
|---|---|
| internal/sync/tar.go | 0.00% <0.00%> (ø) |
| pkg/compose/watch.go | 20.74% <2.73%> (-5.89%) ⬇️ |
| internal/sync/writer.go | 81.81% <81.81%> (ø) |

... and 1 file with indirect coverage changes


milas merged commit efd44de into docker:v2 on Aug 3, 2023
25 checks passed
milas deleted the tar-multiwrite branch on August 3, 2023 at 18:52