Don't discard pending data frames on errors #536

Open
nox opened this issue May 3, 2021 · 9 comments

Comments

@nox
Contributor

nox commented May 3, 2021

When a server sends a DATA frame with the EOS (end-of-stream) flag set while there are still bytes expected before content-length reaches 0, h2 just discards the entire frame and its payload, so the user never gets access to those bytes at all. Instead, it should queue the error about the malformed message and yield it the next time the user polls the stream.

if stream.ensure_content_length_zero().is_err() {
    proto_err!(stream:
        "recv_data: content-length underflow; stream={:?}; len={:?}",
        stream.id,
        frame.payload().len(),
    );
    return Err(RecvError::Stream {
        id: stream.id,
        reason: Reason::PROTOCOL_ERROR,
    });
}
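To make the desired behaviour concrete, here is a rough sketch from the caller's side. drain is a hypothetical helper written for this example (flow-control capacity release is omitted for brevity); the point is simply that the bytes which did arrive should come out of poll_data before the stream error does:

use bytes::Bytes;
use futures_util::future::poll_fn;
use h2::RecvStream;

// Hypothetical helper: collect whatever data frames were delivered, and
// report the queued stream error (if any) only after they are drained.
async fn drain(body: &mut RecvStream) -> (Vec<Bytes>, Option<h2::Error>) {
    let mut chunks = Vec::new();
    loop {
        match poll_fn(|cx| body.poll_data(cx)).await {
            Some(Ok(chunk)) => chunks.push(chunk),        // bytes that did arrive
            Some(Err(err)) => return (chunks, Some(err)), // malformed-message error, yielded last
            None => return (chunks, None),                // clean end of stream
        }
    }
}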

@nox
Contributor Author

nox commented May 3, 2021

Actually the RFC makes it clear that any premature end of stream is definitely a protocol error, my bad:

A request or response that includes a payload body can include a content-length header field. A request or response is also malformed if the value of a content-length header field does not equal the sum of the DATA frame payload lengths that form the body. A response that is defined to have no payload, as described in [RFC7230], Section 3.3.2, can have a non-zero content-length header field, even though no content is included in DATA frames.

Intermediaries that process HTTP requests or responses (i.e., any intermediary not acting as a tunnel) MUST NOT forward a malformed request or response. Malformed requests or responses that are detected MUST be treated as a stream error (Section 5.4.2) of type PROTOCOL_ERROR.

So I'll just fix the hanging, once I understand how it happens.

@nox
Contributor Author

nox commented May 3, 2021

I'm curious, though, how a proxy that makes an h1 request to some origin on behalf of an h2 request it received from the client should handle a response whose body is shorter than its content-length.

@LPardue
Contributor

LPardue commented May 3, 2021

In the case where such a proxy knows that the h1 response is not chunked and has a content-length, if it sees the response end (the TCP connection close) before all of the content has arrived, it can reset the stream to the client.
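A minimal sketch of that on the h2 side (send_reset and send_data are the existing SendStream methods; finish_downstream and the remaining byte counter are made up for the example):

use bytes::Bytes;
use h2::{Reason, SendStream};

// Hypothetical proxy helper, called once the upstream h1 connection has ended.
// `remaining` is how many content-length bytes were still expected.
fn finish_downstream(remaining: u64, stream: &mut SendStream<Bytes>) -> Result<(), h2::Error> {
    if remaining > 0 {
        // Origin closed early: the response is malformed, so reset the
        // stream towards the client instead of finishing it cleanly.
        stream.send_reset(Reason::PROTOCOL_ERROR);
        Ok(())
    } else {
        // Everything was forwarded: end the stream normally.
        stream.send_data(Bytes::new(), true)
    }
}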

@nox
Contributor Author

nox commented May 3, 2021

It turns out that the hanging is probably due to the proxy or the client, so closing this.

nox closed this as completed May 3, 2021
nox changed the title from "Premature end of stream makes client unhappy" to "Don't discard pending data frames on errors" May 4, 2021
@nox
Contributor Author

nox commented May 4, 2021

Repurposed the issue.

nox reopened this May 4, 2021
@nox
Contributor Author

nox commented May 4, 2021

I wonder if we should also expose frames that are too large for the stream's window size, and those that overflow content-length.

@nox
Contributor Author

nox commented May 4, 2021

@seanmonstar Do you have any opinion on how to implement this? My initial thought was to add a new recv::Event::ContentOverflow or something like that, which triggers a stream error when RecvStream::poll_data sees it, but that seems like the wrong place to do that.

The current code really doesn't expect a user to see a frame after a stream error, so it's quite difficult to implement this change.

@seanmonstar
Member

Maybe an Event::StreamError?

@nox
Contributor Author

nox commented May 5, 2021

That's what I tried at first, but then that would mean resetting the stream in poll_data when that new event is popped from the pending_recv queue, right?
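Something like this ordering is what I have in mind. The types below are only stand-ins written for illustration, not the actual recv::Event, Reason, or pending_recv internals:

use bytes::Bytes;
use std::collections::VecDeque;

// Stand-in type: the real Reason is richer than this.
#[derive(Clone, Copy, Debug)]
struct Reason(u32);

enum Event {
    Data(Bytes),
    // Pushed after any buffered Data, so the user still sees those bytes.
    StreamError(Reason),
}

// Roughly what poll_data would do while popping pending_recv: keep handing
// out data until the queued error is reached, and only reset the stream at
// that point.
fn next_chunk(pending_recv: &mut VecDeque<Event>) -> Option<Result<Bytes, Reason>> {
    match pending_recv.pop_front()? {
        Event::Data(bytes) => Some(Ok(bytes)),
        Event::StreamError(reason) => Some(Err(reason)), // reset the stream here
    }
}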
