Verify pipelining is happening #1

Open
KodrAus opened this issue Nov 8, 2016 · 3 comments

KodrAus commented Nov 8, 2016

So rotor should be able to pipeline requests over a single connection. I'm assuming this works by just spinning off a new request on the same connection, but I'm not really sure.

This needs to be measured, maybe using a tool like clumsy on Windows (there's probably a *nix alternative, but I'm not aware of it) to slow the requests right down, then timing when each request starts vs when it completes.
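
A minimal sketch of that measurement, written against today's tokio rather than whatever the client ends up using; `send_request` is a hypothetical stand-in for a real pipelined request, and the artificial delay plays the role of the latency clumsy would inject (on Linux, tc's netem qdisc can add a similar delay):

```rust
use std::time::{Duration, Instant};

// Hypothetical stand-in for a pipelined request through the client; the
// sleep takes the place of the network latency a tool like clumsy adds.
async fn send_request(_id: usize) {
    tokio::time::sleep(Duration::from_millis(500)).await;
}

#[tokio::main]
async fn main() {
    let epoch = Instant::now();

    let handles: Vec<_> = (0..3)
        .map(|id| {
            tokio::spawn(async move {
                let started = epoch.elapsed();
                send_request(id).await;
                let finished = epoch.elapsed();
                // With pipelining, the start times should all sit near zero and
                // the finish times should cluster around one round trip, rather
                // than stacking up three round trips end to end.
                println!("request {}: started {:?}, finished {:?}", id, started, finished);
            })
        })
        .collect();

    for handle in handles {
        handle.await.unwrap();
    }
}
```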


KodrAus commented Nov 8, 2016

So pipelining doesn't seem to be happening, and I'm guessing that's for a couple of reasons:

  1. `futures::collect` seems to be blocking on each future. I suspect this is because `Client` doesn't implement `Clone`, so it needs to be moved and passed to one closure at a time. I'm not sure it's actually that smart about it, though...
  2. `rotor_http` isn't waking up the state machine when there's an active request in progress, either because it doesn't actually support pipelining yet, or because that bit of wakeup just isn't implemented.

I'll check these both out.


KodrAus commented Nov 8, 2016

OK, so 1 is a non-issue: `futures::collect` will poll the futures one at a time, but that doesn't matter if they're already executing on another thread somewhere.

So for concurrency, spin up all the futures first (which in this case means putting a message on a queue for each request), then poll the futures that are already in flight.
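
To illustrate the "spin up first, then poll" point, here's a small sketch using today's tokio and futures crates rather than the 2016-era `futures::collect`; `send_request` is a hypothetical stand-in for queuing a request on the connection and waiting for its response:

```rust
use futures::future::join_all;

// Hypothetical stand-in for pushing a request onto the connection's queue
// and waiting for the matching response.
async fn send_request(id: usize) -> String {
    format!("response {}", id)
}

#[tokio::main]
async fn main() {
    // Spawning eagerly puts every request in flight before anything is polled.
    let in_flight: Vec<_> = (0..3)
        .map(|id| tokio::spawn(send_request(id)))
        .collect();

    // join_all (the descendant of futures::collect) still polls the handles
    // one by one, but that ordering no longer serialises the requests
    // themselves, because they're already running on the executor.
    let responses = join_all(in_flight).await;
    println!("{:?}", responses);
}
```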

2 is correct: there isn't actually any pipelining going on because the state machine isn't being woken up while a request is in progress. I might have a look at fixing this, or just prototype a client with tokio and see what rough edges that turns up.


KodrAus commented Nov 8, 2016

Here's a neat idea: use `futures::stream::channel` to send requests to our connection pool.

We can then either handle them directly through the stream, or stick them on a queue that a bunch of connections can fight over. The queue would need to participate in back pressure.
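
A rough sketch of what that could look like, using today's `futures::channel::mpsc` as a stand-in for the old `futures::stream::channel`; the bounded buffer is what provides the back pressure, and `Request`/`handle` here are hypothetical:

```rust
use futures::channel::mpsc;
use futures::{SinkExt, StreamExt};

// Hypothetical request type; in the real client this would carry the HTTP
// method, URL and body.
#[derive(Debug)]
struct Request {
    url: String,
}

// Hypothetical stand-in for writing a request down a pooled connection.
async fn handle(req: Request) {
    println!("handling {:?}", req);
}

#[tokio::main]
async fn main() {
    // A small bound means senders are pushed back (their `send` future stays
    // pending) when the pool can't keep up.
    let (mut tx, mut rx) = mpsc::channel::<Request>(8);

    // The pool side: pull requests off the stream and dispatch them. A real
    // pool could hand them out to whichever connection is free.
    let pool = tokio::spawn(async move {
        while let Some(req) = rx.next().await {
            handle(req).await;
        }
    });

    // The client side: `send` is itself a future, so callers participate in
    // back pressure instead of queueing unboundedly.
    for i in 0..3 {
        tx.send(Request { url: format!("/docs/{}", i) }).await.unwrap();
    }
    drop(tx); // close the channel so the pool task can finish

    pool.await.unwrap();
}
```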
