
add a read_events_with_timeout() #106

Open
ceyusa opened this issue Jul 18, 2018 · 16 comments

@ceyusa
Contributor

ceyusa commented Jul 18, 2018

That would imply using select() on the inotify fd, waiting either for data on the fd or for the timeout to expire.

The workaround would be to use the nix crate with Inotify's RawFd and do the select() in the application.

@hannobraun
Owner

Thank you for the suggestion, @ceyusa. I think another workaround would be to use Inotify::stream (only available in master, but I plan to release it soon), convert that into a future using Stream::into_future, and then use Tokio's FutureExt::deadline.

I'm not opposed to adding the suggested read_events_with_timeout method, but I'm a bit concerned about the duplication between the traditional methods and the new futures-based API. Long-term, I'd like to unify both, if possible.

@ceyusa
Contributor Author

ceyusa commented Jul 19, 2018

> Thank you for the suggestion, @ceyusa. I think another workaround would be to use Inotify::stream (only available in master, but I plan to release it soon), convert that into a future using Stream::into_future, and then use Tokio's FutureExt::deadline.

That makes more sense in Rust. Thanks!

Do you know if that works nowadays? If it does, I think the issue should be closed.

@hannobraun
Owner

@ceyusa

> Do you know if that works nowadays? If it does, I think the issue should be closed.

I haven't tried it, but I have no reason to believe that it won't work. I'd be interested to hear back, if you try it.

@ceyusa
Contributor Author

ceyusa commented Jul 24, 2018

I kind of did it, though it doesn't look clean to me (perhaps because I'm not used to these idioms).

I spawn a thread with this loop:

        while running.load(Ordering::Relaxed) {
            let sender_clone = sender.clone();
            let when = Instant::now() + Duration::from_secs(5);
            let future = inotify
                .event_stream()
                .into_future()
                .deadline(when)
                .then(move |result| {
                    if let Ok(result) = result {
                        let (event, _) = result;
                        if let Some(event) = event {
                            if let Ok(changed_file) = changed_file_from_event(event) {
                                sender_clone.send(changed_file).unwrap();
                            }
                        }
                    }
                    Ok(())
                });
            tokio::run(future);
        }

The funny thing is that tokio::run() spawns another thread internally. So, nested threads!

I wonder if there's a way to avoid this nesting.

@hannobraun
Owner

Unfortunately, I know very little about Tokio myself, so all I'm writing here might be wrong.

Are you sure that the call to tokio::run is required? There's nothing like that in our test case. If you want to wait until the future has finished, can't you just call Future::wait?

@ceyusa
Contributor Author

ceyusa commented Jul 27, 2018

The problem is that deadline is a Tokio's timer, and its documentation mentions:

> These types must be used from within the context of the Runtime or a timer context must be setup explicitly

In order to use FutureExt::deadline, a Runtime is required.

@hannobraun
Owner

I understand. I hope there's some better way to do this, but I don't know of one.

If anyone submitted a pull request implementing the proposed read_events_with_timeout method, I'd be inclined to accept it. I still hope we can one day have a unified future-based API that is both convenient and efficient, but that shouldn't prevent us from supporting valid use cases right now.

@ceyusa
Contributor Author

ceyusa commented Sep 4, 2018

Let me continue sharing my experiences on this issue.

I just found out that I could do what I want (a cheap thread pushing events) using tokio as the event loop, without the deadline.

    let future = inotify
        .event_stream()
        .map_err(|e| println!("inotify error: {:?}", e))
        .for_each(move |event| {
            println!("event: {:?}", event);
            Ok(())
        });

    tokio::spawn(future);

And it works great with inotify 0.5.

I decided to make a new example using tokio's runtime, but release 0.6 contains commit b1b07af, which I don't fully understand. Besides, I cannot find a way to provide a static mutable buffer to the future stream for tokio's runtime:

    let mut buffer = [0u8; 32];
    let future = inotify
        .event_stream(&mut buffer)
        .map_err(|e| println!("inotify error: {:?}", e))
        .for_each(move |event| {
            println!("event: {:?}", event);
            Ok(())
        });

    tokio::run(future); // or spawn

I got this:

error[E0597]: `buffer` does not live long enough
  --> examples/tokio-streams.rs:25:28
   |
25 |         .event_stream(&mut buffer)
   |                            ^^^^^^ borrowed value does not live long enough
...
34 | }
   | - borrowed value only lives until here
   |
   = note: borrowed value must be valid for the static lifetime...

I wonder how I can give the event stream a mutable buffer that lives long enough for the event loop.

@hannobraun
Owner

@ceyusa Thank you for your feedback, and thanks for the pull request. I don't have time to look into it right now, but I'll get back to you as soon as I can!

@hannobraun
Owner

@ceyusa

> I decided to make a new example using tokio's runtime, but release 0.6 contains commit b1b07af, which I don't fully understand.

I made this change for two reasons. First, to bring the API closer to the underlying inotify API (which works on a buffer provided by the user), but mostly, to give the user the choice on how big to make that buffer. Previously, the size was hardcoded.

I didn't realize that this would be causing problems with the Tokio API.

> Besides, I cannot find a way to add a static mutable buffer in the future stream for tokio's runtime

I see you found a way to do it now. It's really bad that something like this is required. I've opened #120.

@daniel-pfeiffer

Moin Hanno,

issue-120-workaround.rs doesn't show how to apply a timeout. Though I've spent some time in that code, I'm no tokio specialist, so I'm not sure where to add one. But worse, it uses static mut, which is being deprecated.

So I would very much like to have a low-level function as initially requested, using poll() (rather than select(), nowadays)! I guess it would set up poll() and, if it didn't time out, would then call read_events_blocking().

If you don't have time, I could try to add it.

@hannobraun
Owner

Hi Daniel,

I think you're referring to an example that was added back in 2018 and is no longer in this repository. This issue is also from 2018, which was like 4 breaking releases ago. I don't know how much that example, or anything discussed here, reflects the current API.

I also don't know what the state of the art is regarding async and timeouts, and if whatever that is should be used with inotify, or if we need our own method for the non-async API. My concern about duplication between the async and non-async APIs remains.

I don't have the expertise to provide much guidance, and have doubts about expanding the non-async API for use cases that would possibly be better served by the async one. For that reason, it would fall on you as the contributor to figure out what's right, then make your case to convince me.

I certainly don't intend to be difficult. If it turns out it makes sense to expand the non-async API, that's what we'll do. But my favorite solution would definitely be to know for certain that such a non-async function is not needed.

@daniel-pfeiffer

daniel-pfeiffer commented Apr 13, 2024

My problem is actually the following: I collect data and email it at midnight or shutdown. The latter is tricky. In a signal handler I'm not allowed to do much and exit wouldn't call Drop. So I set an AtomicBool and check it before reading events or lines. If read_events_blocking could just return an error when interrupted by a signal, all would be fine.

Otherwise it doesn't really matter whether I get a timeout, or sleep and read_events. In both cases I have only three seconds after the kill to deliver the email, before power off. So I need a rather active loop with very short sleeps, since I don't know how much time I might need in the worst case.

@hannobraun
Owner

How about this:

  1. You make a stream of inotify events using Inotify::into_event_stream.
  2. Then you make a stream over the relevant signal (or multiple streams, if you need to handle multiple signals) using Tokio.
  3. Again using Tokio, you can then select over both streams, meaning you'd get either the next inotify event or you'd be notified of the signal (no timeout required).

Do you think something like this would work for your case?

@daniel-pfeiffer

It seems kinda crazy to turn a "simple" CLI into async all over the place…

OTOH this sounds like it might work, so thanks for the detailed steps!

@hannobraun
Owner

I don't know, I think once you do signal handling, your CLI isn't that simple anymore 😄

Plus, it sounds like your use of async can be restricted to the one function that handles inotify events, so "all over the place" isn't the right characterization here, I think.
