
Remove the thread pool from future::Cache #294

Merged
merged 26 commits on Aug 22, 2023

Conversation

tatsuya6502
Member

@tatsuya6502 tatsuya6502 commented Aug 1, 2023

This PR makes the following changes to `future::Cache`:

For more details, see the migration guide at commit #16b4f899.

- Add `async-trait` crate to `future` feature.
- Duplicate the modules under `sync_base` into `future` and modify them to remove
  the thread pool (including switching to async-aware locks).
- Add iterator API to `cht::SegmentedHashMap`.
- Major API changes on `future::Cache`. See the `doc/migration-guide.md` for
  more details.
- Change the delivery mode of the eviction listener of `future::Cache` from `Queued`
  to `Immediate`.
- Update the unit and integration tests.
  - Unit tests in `future::base_cache` are still disabled, though.
- Ensure that `future::Cache` removes key-level locks after eviction notifications
  are delivered.
- Remove `async-io` crate.
- Refactor some internal methods.
- Bump the version to v0.12.0.
- Move `flush` method from `future::ConcurrentCacheExt` trait to `future::Cache`
  struct.
- Clean up dependencies and feature flags.
- Apply the same refactoring as in `future::base_cache::Inner::admit` to
  `sync_base::base_cache::Inner::admit`.
- Add `beta.1` to the version.
- Reenable a unit test.
- Update the integration tests for async runtimes.
- Make `future::Cache`'s internal `schedule_write_op` method yield to other
  async tasks when the write op channel is full.
- Strengthen the orderings of a `compare_exchange` operation in `Housekeeper` to
  ensure that other threads can see the updated value immediately.
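The "yield when the write op channel is full" behavior from the bullets above can be sketched with a synchronous analogue: a bounded std channel standing in for the write op channel, and `thread::yield_now` standing in for an async task yield. `WriteOp` is a made-up type for illustration; this is not moka's actual code.

```rust
use std::sync::mpsc::{sync_channel, TrySendError};
use std::thread;

// Made-up stand-in for an entry in the cache's write operation log.
#[derive(Debug)]
enum WriteOp {
    Upsert(u32),
}

fn main() {
    // A bounded channel plays the role of the internal write op channel.
    let (tx, rx) = sync_channel::<WriteOp>(2);
    tx.try_send(WriteOp::Upsert(1)).unwrap();
    tx.try_send(WriteOp::Upsert(2)).unwrap(); // the channel is now full

    let mut pending = Some(WriteOp::Upsert(3));
    let mut retries = 0;
    while let Some(op) = pending.take() {
        match tx.try_send(op) {
            Ok(()) => break,
            Err(TrySendError::Full(op)) => {
                // Take the op back and yield instead of spinning hot, so the
                // task draining the channel gets a chance to run.
                pending = Some(op);
                retries += 1;
                if retries == 3 {
                    rx.recv().unwrap(); // simulate the drainer making room
                }
                thread::yield_now();
            }
            Err(TrySendError::Disconnected(_)) => unreachable!(),
        }
    }
    println!("pushed after {retries} retries"); // pushed after 3 retries
}
```

Without the yield, the retry loop monopolizes the CPU exactly as described in the comment below about the stall.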
@tatsuya6502
Member Author

tatsuya6502 commented Aug 8, 2023

I removed background threads completely from future::Cache by removing the scheduled_thread_pool and async-io crates from it. The former launched N threads, where N is the number of logical CPU cores, and the latter launched one thread for firing timers.

In future::Cache, async-io was used to put a user task calling insert etc. to sleep when the internal write operation channel was full. I replaced it with a simple spin loop. However, this caused all cache operations to stall when an eviction listener was set.

I believe this happened for the following reasons:

  • The async task that was running the eviction listener gave up the CPU when it was trying to acquire a key-level lock.
    • Note that the same task was draining the write op channel.
  • Other tasks found the write op channel full, so they could not push their write operation logs to it. They went into the spin loop.
    • They never gave up the CPU because they were spinning, so the async task running the eviction listener starved.

See the full threads dump below for more details.

To fix the problem, I changed the spin loop to sometimes yield to other async tasks.

However, Rust has no common yielding API and implementation that works across async runtimes, so I worked around it with a hack using the async-lock crate's Mutex (commit #c86e675d). It seems to work at least with Tokio, but I need to check other async runtimes.
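For readers unfamiliar with the yielding trick: the general shape of a runtime-agnostic yield is a future that returns `Pending` exactly once, after waking itself so the executor polls it again. This is a sketch of that idea only (not the actual async-lock Mutex hack from the commit), with a minimal no-op-waker executor included just to run it:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

/// Returns `Pending` exactly once, after asking to be polled again.
struct YieldNow {
    yielded: bool,
}

impl Future for YieldNow {
    type Output = ();

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if self.yielded {
            Poll::Ready(())
        } else {
            self.yielded = true;
            // Schedule this task to be polled again, then give up the CPU
            // so other tasks (e.g. the one draining the channel) can run.
            cx.waker().wake_by_ref();
            Poll::Pending
        }
    }
}

// Minimal single-future executor with a no-op waker, only for the demo.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    fn raw() -> RawWaker {
        fn clone(_: *const ()) -> RawWaker { raw() }
        fn noop(_: *const ()) {}
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    let waker = unsafe { Waker::from_raw(raw()) };
    let mut cx = Context::from_waker(&waker);
    // SAFETY: `fut` is shadowed here and never moved afterwards.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

fn main() {
    let msg = block_on(async {
        // In the real code this would sit inside the retry loop of
        // schedule_write_op, between try_send attempts.
        YieldNow { yielded: false }.await;
        "resumed after yielding"
    });
    println!("{msg}"); // resumed after yielding
}
```

On a real runtime the wake would reschedule the task at the back of the queue, letting other tasks (such as the eviction-listener task) run in between.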

A full threads dump of mokabench

mokabench: cargo run --release -- --num-clients 8 --ttl 9 --ttl 3 --eviction-listener immediate

Almost all Tokio runtime workers were in the spin loop in the schedule_write_op method. Nothing was running the async task for the eviction listener. I believe it did not have a chance to get the CPU.

full thread dump
$ rust-lldb
...
(lldb) process attach --pid 65808
Process 65808 stopped
* thread #1, name = 'main', queue = 'com.apple.main-thread', stop reason = signal SIGSTOP
    frame #0: 0x000000018edab710 libsystem_kernel.dylib`__psynch_cvwait + 8
libsystem_kernel.dylib`:
->  0x18edab710 <+8>:  b.lo   0x18edab730               ; <+40>
    0x18edab714 <+12>: pacibsp
    0x18edab718 <+16>: stp    x29, x30, [sp, #-0x10]!
    0x18edab71c <+20>: mov    x29, sp
Target 0: (mokabench) stopped.
(lldb) bt all
* thread #1, name = 'main', queue = 'com.apple.main-thread', stop reason = signal SIGSTOP
  * frame #0: 0x000000018edab710 libsystem_kernel.dylib`__psynch_cvwait + 8
    frame #1: 0x000000018ede8574 libsystem_pthread.dylib`_pthread_cond_wait + 1232
    frame #2: 0x00000001004ba1a4 mokabench`std::sync::condvar::Condvar::wait::h851a2d2c92a3833d + 136
    frame #3: 0x00000001004b2c74 mokabench`tokio::runtime::park::Inner::park::hffb88b330dc44f47 + 188
    frame #4: 0x00000001004150f8 mokabench`tokio::runtime::context::blocking::BlockingRegionGuard::block_on::ha314db216325f93f + 292
    frame #5: 0x00000001003dd384 mokabench`tokio::runtime::context::runtime::enter_runtime::h7cbaa6103c291d64 + 380
    frame #6: 0x00000001003eb370 mokabench`tokio::runtime::runtime::Runtime::block_on::h2ec0e2def5245fd9 + 96
    frame #7: 0x00000001003fdf54 mokabench`mokabench::main::hd5e49fb753a23803 + 164
    frame #8: 0x00000001003d8ad0 mokabench`std::sys_common::backtrace::__rust_begin_short_backtrace::hb17800b524cd7d40 + 12
    frame #9: 0x00000001004182b8 mokabench`std::rt::lang_start::_$u7b$$u7b$closure$u7d$$u7d$::h59dde19606ec495f + 24
    frame #10: 0x00000001004ddb98 mokabench`std::rt::lang_start_internal::hdd06e3566639fc5b [inlined] core::ops::function::impls::_$LT$impl$u20$core..ops..function..FnOnce$LT$A$GT$$u20$for$u20$$RF$F$GT$::call_once::h0689b9cc840db667 at function.rs:284:13 [opt]
    frame #11: 0x00000001004ddb90 mokabench`std::rt::lang_start_internal::hdd06e3566639fc5b [inlined] std::panicking::try::do_call::h8d21b0c0c04af112 at panicking.rs:500:40 [opt]
    frame #12: 0x00000001004ddb90 mokabench`std::rt::lang_start_internal::hdd06e3566639fc5b [inlined] std::panicking::try::h618481d45c1b815c at panicking.rs:464:19 [opt]
    frame #13: 0x00000001004ddb90 mokabench`std::rt::lang_start_internal::hdd06e3566639fc5b [inlined] std::panic::catch_unwind::hbdeff70f3984ee7b at panic.rs:142:14 [opt]
    frame #14: 0x00000001004ddb90 mokabench`std::rt::lang_start_internal::hdd06e3566639fc5b [inlined] std::rt::lang_start_internal::_$u7b$$u7b$closure$u7d$$u7d$::haa4994ba13a3cd15 at rt.rs:148:48 [opt]
    frame #15: 0x00000001004ddb90 mokabench`std::rt::lang_start_internal::hdd06e3566639fc5b [inlined] std::panicking::try::do_call::h39b55541875d339a at panicking.rs:500:40 [opt]
    frame #16: 0x00000001004ddb8c mokabench`std::rt::lang_start_internal::hdd06e3566639fc5b [inlined] std::panicking::try::h93ac0a218f84acad at panicking.rs:464:19 [opt]
    frame #17: 0x00000001004ddb8c mokabench`std::rt::lang_start_internal::hdd06e3566639fc5b [inlined] std::panic::catch_unwind::h07a4f62359dfd8f0 at panic.rs:142:14 [opt]
    frame #18: 0x00000001004ddb8c mokabench`std::rt::lang_start_internal::hdd06e3566639fc5b at rt.rs:148:20 [opt]
    frame #19: 0x00000001003fe040 mokabench`main + 52
    frame #20: 0x000000018ea8ff28 dyld`start + 2236
  thread #2, name = 'tokio-runtime-worker'
    frame #0: 0x00000001003e8fd8 mokabench`moka::future::cache::Cache$LT$K$C$V$C$S$GT$::schedule_write_op::_$u7b$$u7b$closure$u7d$$u7d$::h06a142bd3b53b532 + 652
    frame #1: 0x00000001003ea418 mokabench`moka::future::cache::Cache$LT$K$C$V$C$S$GT$::insert::_$u7b$$u7b$closure$u7d$$u7d$::hd59efb62510b3db0 + 3584
    frame #2: 0x00000001003ed788 mokabench`mokabench::cache::moka_driver::async_cache::MokaAsyncCache$LT$I$GT$::insert::_$u7b$$u7b$closure$u7d$$u7d$::ha20eb1317da20840 + 324
    frame #3: 0x00000001003e3144 mokabench`_$LT$mokabench..cache..moka_driver..async_cache..MokaAsyncCache$LT$I$GT$$u20$as$u20$mokabench..cache..AsyncCacheDriver$LT$mokabench..parser..TraceEntry$GT$$GT$::get_or_insert::_$u7b$$u7b$closure$u7d$$u7d$::h09fca2b654784bf0 + 524
    frame #4: 0x000000010041bdb8 mokabench`mokabench::run_multi_tasks::_$u7b$$u7b$closure$u7d$$u7d$::_$u7b$$u7b$closure$u7d$$u7d$::_$u7b$$u7b$closure$u7d$$u7d$::h452f502428ab5500 + 840
    frame #5: 0x000000010041ac9c mokabench`tokio::loom::std::unsafe_cell::UnsafeCell$LT$T$GT$::with_mut::h9093146eed0a4214 + 72
    frame #6: 0x00000001003ead18 mokabench`tokio::runtime::task::core::Core$LT$T$C$S$GT$::poll::h2c70f850aecda3cf + 56
    frame #7: 0x000000010040b09c mokabench`tokio::runtime::task::harness::Harness$LT$T$C$S$GT$::poll::h3fa3f6ddc209d470 + 100
    frame #8: 0x00000001004bf800 mokabench`tokio::runtime::scheduler::multi_thread::worker::Context::run_task::hf7311d5e7832921f + 392
    frame #9: 0x00000001004bf0a4 mokabench`tokio::runtime::scheduler::multi_thread::worker::Context::run::hf22a3164c2df249c + 1204
    frame #10: 0x00000001004c0d38 mokabench`tokio::runtime::context::scoped::Scoped$LT$T$GT$::set::h78a11f4c8af1efe5 + 64
    frame #11: 0x00000001004b6e1c mokabench`tokio::runtime::context::runtime::enter_runtime::h3ba36611ccaeaea1 + 464
    frame #12: 0x00000001004beb5c mokabench`tokio::runtime::scheduler::multi_thread::worker::run::h57d94f6b7e40b4a9 + 88
    frame #13: 0x00000001004b2a18 mokabench`tokio::loom::std::unsafe_cell::UnsafeCell$LT$T$GT$::with_mut::h96a3c7b07057c79a + 204
    frame #14: 0x00000001004b4a80 mokabench`tokio::runtime::task::harness::Harness$LT$T$C$S$GT$::poll::h95479dc377919128 + 184
    frame #15: 0x00000001004b0b4c mokabench`tokio::runtime::blocking::pool::Inner::run::h21df40924f8b322c + 312
    frame #16: 0x00000001004adcac mokabench`std::sys_common::backtrace::__rust_begin_short_backtrace::hfd39605de895e62c + 220
    frame #17: 0x00000001004aa9d4 mokabench`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::h94af4c22891eed3d + 152
    frame #18: 0x00000001004e7b7c mokabench`std::sys::unix::thread::Thread::new::thread_start::h06dea01d0985d944 [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::h6e76824cbc2fe276 at boxed.rs:1985:9 [opt]
    frame #19: 0x00000001004e7b70 mokabench`std::sys::unix::thread::Thread::new::thread_start::h06dea01d0985d944 [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::hff5d24f551839383 at boxed.rs:1985:9 [opt]
    frame #20: 0x00000001004e7b6c mokabench`std::sys::unix::thread::Thread::new::thread_start::h06dea01d0985d944 at thread.rs:108:17 [opt]
    frame #21: 0x000000018ede7fa8 libsystem_pthread.dylib`_pthread_start + 148
  thread #3, name = 'tokio-runtime-worker'
    frame #0: 0x00000001003e8fdc mokabench`moka::future::cache::Cache$LT$K$C$V$C$S$GT$::schedule_write_op::_$u7b$$u7b$closure$u7d$$u7d$::h06a142bd3b53b532 + 656
    frame #1: 0x00000001003ea418 mokabench`moka::future::cache::Cache$LT$K$C$V$C$S$GT$::insert::_$u7b$$u7b$closure$u7d$$u7d$::hd59efb62510b3db0 + 3584
    frame #2: 0x00000001003ed788 mokabench`mokabench::cache::moka_driver::async_cache::MokaAsyncCache$LT$I$GT$::insert::_$u7b$$u7b$closure$u7d$$u7d$::ha20eb1317da20840 + 324
    frame #3: 0x00000001003e3144 mokabench`_$LT$mokabench..cache..moka_driver..async_cache..MokaAsyncCache$LT$I$GT$$u20$as$u20$mokabench..cache..AsyncCacheDriver$LT$mokabench..parser..TraceEntry$GT$$GT$::get_or_insert::_$u7b$$u7b$closure$u7d$$u7d$::h09fca2b654784bf0 + 524
    frame #4: 0x000000010041bdb8 mokabench`mokabench::run_multi_tasks::_$u7b$$u7b$closure$u7d$$u7d$::_$u7b$$u7b$closure$u7d$$u7d$::_$u7b$$u7b$closure$u7d$$u7d$::h452f502428ab5500 + 840
    frame #5: 0x000000010041ac9c mokabench`tokio::loom::std::unsafe_cell::UnsafeCell$LT$T$GT$::with_mut::h9093146eed0a4214 + 72
    frame #6: 0x00000001003ead18 mokabench`tokio::runtime::task::core::Core$LT$T$C$S$GT$::poll::h2c70f850aecda3cf + 56
    frame #7: 0x000000010040b09c mokabench`tokio::runtime::task::harness::Harness$LT$T$C$S$GT$::poll::h3fa3f6ddc209d470 + 100
    frame #8: 0x00000001004bf800 mokabench`tokio::runtime::scheduler::multi_thread::worker::Context::run_task::hf7311d5e7832921f + 392
    frame #9: 0x00000001004bf0a4 mokabench`tokio::runtime::scheduler::multi_thread::worker::Context::run::hf22a3164c2df249c + 1204
    frame #10: 0x00000001004c0d38 mokabench`tokio::runtime::context::scoped::Scoped$LT$T$GT$::set::h78a11f4c8af1efe5 + 64
    frame #11: 0x00000001004b6e1c mokabench`tokio::runtime::context::runtime::enter_runtime::h3ba36611ccaeaea1 + 464
    frame #12: 0x00000001004beb5c mokabench`tokio::runtime::scheduler::multi_thread::worker::run::h57d94f6b7e40b4a9 + 88
    frame #13: 0x00000001004b2a18 mokabench`tokio::loom::std::unsafe_cell::UnsafeCell$LT$T$GT$::with_mut::h96a3c7b07057c79a + 204
    frame #14: 0x00000001004b4a80 mokabench`tokio::runtime::task::harness::Harness$LT$T$C$S$GT$::poll::h95479dc377919128 + 184
    frame #15: 0x00000001004b0b4c mokabench`tokio::runtime::blocking::pool::Inner::run::h21df40924f8b322c + 312
    frame #16: 0x00000001004adcac mokabench`std::sys_common::backtrace::__rust_begin_short_backtrace::hfd39605de895e62c + 220
    frame #17: 0x00000001004aa9d4 mokabench`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::h94af4c22891eed3d + 152
    frame #18: 0x00000001004e7b7c mokabench`std::sys::unix::thread::Thread::new::thread_start::h06dea01d0985d944 [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::h6e76824cbc2fe276 at boxed.rs:1985:9 [opt]
    frame #19: 0x00000001004e7b70 mokabench`std::sys::unix::thread::Thread::new::thread_start::h06dea01d0985d944 [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::hff5d24f551839383 at boxed.rs:1985:9 [opt]
    frame #20: 0x00000001004e7b6c mokabench`std::sys::unix::thread::Thread::new::thread_start::h06dea01d0985d944 at thread.rs:108:17 [opt]
    frame #21: 0x000000018ede7fa8 libsystem_pthread.dylib`_pthread_start + 148
  thread #4, name = 'tokio-runtime-worker'
    frame #0: 0x000000018edab710 libsystem_kernel.dylib`__psynch_cvwait + 8
    frame #1: 0x000000018ede8574 libsystem_pthread.dylib`_pthread_cond_wait + 1232
    frame #2: 0x00000001004ba1a4 mokabench`std::sync::condvar::Condvar::wait::h851a2d2c92a3833d + 136
    frame #3: 0x00000001004b2c74 mokabench`tokio::runtime::park::Inner::park::hffb88b330dc44f47 + 188
    frame #4: 0x00000001004b44d0 mokabench`tokio::runtime::scheduler::multi_thread::park::Parker::park::hc713be26ad7a54f1 + 124
    frame #5: 0x00000001004bfcb4 mokabench`tokio::runtime::scheduler::multi_thread::worker::Context::park_timeout::h255f02e29233dc22 + 152
    frame #6: 0x00000001004bf420 mokabench`tokio::runtime::scheduler::multi_thread::worker::Context::run::hf22a3164c2df249c + 2096
    frame #7: 0x00000001004c0d38 mokabench`tokio::runtime::context::scoped::Scoped$LT$T$GT$::set::h78a11f4c8af1efe5 + 64
    frame #8: 0x00000001004b6e1c mokabench`tokio::runtime::context::runtime::enter_runtime::h3ba36611ccaeaea1 + 464
    frame #9: 0x00000001004beb5c mokabench`tokio::runtime::scheduler::multi_thread::worker::run::h57d94f6b7e40b4a9 + 88
    frame #10: 0x00000001004b2a18 mokabench`tokio::loom::std::unsafe_cell::UnsafeCell$LT$T$GT$::with_mut::h96a3c7b07057c79a + 204
    frame #11: 0x00000001004b4a80 mokabench`tokio::runtime::task::harness::Harness$LT$T$C$S$GT$::poll::h95479dc377919128 + 184
    frame #12: 0x00000001004b0b4c mokabench`tokio::runtime::blocking::pool::Inner::run::h21df40924f8b322c + 312
    frame #13: 0x00000001004adcac mokabench`std::sys_common::backtrace::__rust_begin_short_backtrace::hfd39605de895e62c + 220
    frame #14: 0x00000001004aa9d4 mokabench`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::h94af4c22891eed3d + 152
    frame #15: 0x00000001004e7b7c mokabench`std::sys::unix::thread::Thread::new::thread_start::h06dea01d0985d944 [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::h6e76824cbc2fe276 at boxed.rs:1985:9 [opt]
    frame #16: 0x00000001004e7b70 mokabench`std::sys::unix::thread::Thread::new::thread_start::h06dea01d0985d944 [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::hff5d24f551839383 at boxed.rs:1985:9 [opt]
    frame #17: 0x00000001004e7b6c mokabench`std::sys::unix::thread::Thread::new::thread_start::h06dea01d0985d944 at thread.rs:108:17 [opt]
    frame #18: 0x000000018ede7fa8 libsystem_pthread.dylib`_pthread_start + 148
  thread #5, name = 'tokio-runtime-worker'
    frame #0: 0x00000001003e8fe8 mokabench`moka::future::cache::Cache$LT$K$C$V$C$S$GT$::schedule_write_op::_$u7b$$u7b$closure$u7d$$u7d$::h06a142bd3b53b532 + 668
    frame #1: 0x00000001003ea418 mokabench`moka::future::cache::Cache$LT$K$C$V$C$S$GT$::insert::_$u7b$$u7b$closure$u7d$$u7d$::hd59efb62510b3db0 + 3584
    frame #2: 0x00000001003ed788 mokabench`mokabench::cache::moka_driver::async_cache::MokaAsyncCache$LT$I$GT$::insert::_$u7b$$u7b$closure$u7d$$u7d$::ha20eb1317da20840 + 324
    frame #3: 0x00000001003e3144 mokabench`_$LT$mokabench..cache..moka_driver..async_cache..MokaAsyncCache$LT$I$GT$$u20$as$u20$mokabench..cache..AsyncCacheDriver$LT$mokabench..parser..TraceEntry$GT$$GT$::get_or_insert::_$u7b$$u7b$closure$u7d$$u7d$::h09fca2b654784bf0 + 524
    frame #4: 0x000000010041bdb8 mokabench`mokabench::run_multi_tasks::_$u7b$$u7b$closure$u7d$$u7d$::_$u7b$$u7b$closure$u7d$$u7d$::_$u7b$$u7b$closure$u7d$$u7d$::h452f502428ab5500 + 840
    frame #5: 0x000000010041ac9c mokabench`tokio::loom::std::unsafe_cell::UnsafeCell$LT$T$GT$::with_mut::h9093146eed0a4214 + 72
    frame #6: 0x00000001003ead18 mokabench`tokio::runtime::task::core::Core$LT$T$C$S$GT$::poll::h2c70f850aecda3cf + 56
    frame #7: 0x000000010040b09c mokabench`tokio::runtime::task::harness::Harness$LT$T$C$S$GT$::poll::h3fa3f6ddc209d470 + 100
    frame #8: 0x00000001004bf800 mokabench`tokio::runtime::scheduler::multi_thread::worker::Context::run_task::hf7311d5e7832921f + 392
    frame #9: 0x00000001004bf0a4 mokabench`tokio::runtime::scheduler::multi_thread::worker::Context::run::hf22a3164c2df249c + 1204
    frame #10: 0x00000001004c0d38 mokabench`tokio::runtime::context::scoped::Scoped$LT$T$GT$::set::h78a11f4c8af1efe5 + 64
    frame #11: 0x00000001004b6e1c mokabench`tokio::runtime::context::runtime::enter_runtime::h3ba36611ccaeaea1 + 464
    frame #12: 0x00000001004beb5c mokabench`tokio::runtime::scheduler::multi_thread::worker::run::h57d94f6b7e40b4a9 + 88
    frame #13: 0x00000001004b2a18 mokabench`tokio::loom::std::unsafe_cell::UnsafeCell$LT$T$GT$::with_mut::h96a3c7b07057c79a + 204
    frame #14: 0x00000001004b4a80 mokabench`tokio::runtime::task::harness::Harness$LT$T$C$S$GT$::poll::h95479dc377919128 + 184
    frame #15: 0x00000001004b0b4c mokabench`tokio::runtime::blocking::pool::Inner::run::h21df40924f8b322c + 312
    frame #16: 0x00000001004adcac mokabench`std::sys_common::backtrace::__rust_begin_short_backtrace::hfd39605de895e62c + 220
    frame #17: 0x00000001004aa9d4 mokabench`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::h94af4c22891eed3d + 152
    frame #18: 0x00000001004e7b7c mokabench`std::sys::unix::thread::Thread::new::thread_start::h06dea01d0985d944 [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::h6e76824cbc2fe276 at boxed.rs:1985:9 [opt]
    frame #19: 0x00000001004e7b70 mokabench`std::sys::unix::thread::Thread::new::thread_start::h06dea01d0985d944 [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::hff5d24f551839383 at boxed.rs:1985:9 [opt]
    frame #20: 0x00000001004e7b6c mokabench`std::sys::unix::thread::Thread::new::thread_start::h06dea01d0985d944 at thread.rs:108:17 [opt]
    frame #21: 0x000000018ede7fa8 libsystem_pthread.dylib`_pthread_start + 148
  thread #6, name = 'tokio-runtime-worker'
    frame #0: 0x00000001003e8fd8 mokabench`moka::future::cache::Cache$LT$K$C$V$C$S$GT$::schedule_write_op::_$u7b$$u7b$closure$u7d$$u7d$::h06a142bd3b53b532 + 652
    frame #1: 0x00000001003ea418 mokabench`moka::future::cache::Cache$LT$K$C$V$C$S$GT$::insert::_$u7b$$u7b$closure$u7d$$u7d$::hd59efb62510b3db0 + 3584
    frame #2: 0x00000001003ed788 mokabench`mokabench::cache::moka_driver::async_cache::MokaAsyncCache$LT$I$GT$::insert::_$u7b$$u7b$closure$u7d$$u7d$::ha20eb1317da20840 + 324
    frame #3: 0x00000001003e3144 mokabench`_$LT$mokabench..cache..moka_driver..async_cache..MokaAsyncCache$LT$I$GT$$u20$as$u20$mokabench..cache..AsyncCacheDriver$LT$mokabench..parser..TraceEntry$GT$$GT$::get_or_insert::_$u7b$$u7b$closure$u7d$$u7d$::h09fca2b654784bf0 + 524
    frame #4: 0x000000010041bdb8 mokabench`mokabench::run_multi_tasks::_$u7b$$u7b$closure$u7d$$u7d$::_$u7b$$u7b$closure$u7d$$u7d$::_$u7b$$u7b$closure$u7d$$u7d$::h452f502428ab5500 + 840
    frame #5: 0x000000010041ac9c mokabench`tokio::loom::std::unsafe_cell::UnsafeCell$LT$T$GT$::with_mut::h9093146eed0a4214 + 72
    frame #6: 0x00000001003ead18 mokabench`tokio::runtime::task::core::Core$LT$T$C$S$GT$::poll::h2c70f850aecda3cf + 56
    frame #7: 0x000000010040b09c mokabench`tokio::runtime::task::harness::Harness$LT$T$C$S$GT$::poll::h3fa3f6ddc209d470 + 100
    frame #8: 0x00000001004bf800 mokabench`tokio::runtime::scheduler::multi_thread::worker::Context::run_task::hf7311d5e7832921f + 392
    frame #9: 0x00000001004bf0a4 mokabench`tokio::runtime::scheduler::multi_thread::worker::Context::run::hf22a3164c2df249c + 1204
    frame #10: 0x00000001004c0d38 mokabench`tokio::runtime::context::scoped::Scoped$LT$T$GT$::set::h78a11f4c8af1efe5 + 64
    frame #11: 0x00000001004b6e1c mokabench`tokio::runtime::context::runtime::enter_runtime::h3ba36611ccaeaea1 + 464
    frame #12: 0x00000001004beb5c mokabench`tokio::runtime::scheduler::multi_thread::worker::run::h57d94f6b7e40b4a9 + 88
    frame #13: 0x00000001004b2a18 mokabench`tokio::loom::std::unsafe_cell::UnsafeCell$LT$T$GT$::with_mut::h96a3c7b07057c79a + 204
    frame #14: 0x00000001004b4a80 mokabench`tokio::runtime::task::harness::Harness$LT$T$C$S$GT$::poll::h95479dc377919128 + 184
    frame #15: 0x00000001004b0b4c mokabench`tokio::runtime::blocking::pool::Inner::run::h21df40924f8b322c + 312
    frame #16: 0x00000001004adcac mokabench`std::sys_common::backtrace::__rust_begin_short_backtrace::hfd39605de895e62c + 220
    frame #17: 0x00000001004aa9d4 mokabench`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::h94af4c22891eed3d + 152
    frame #18: 0x00000001004e7b7c mokabench`std::sys::unix::thread::Thread::new::thread_start::h06dea01d0985d944 [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::h6e76824cbc2fe276 at boxed.rs:1985:9 [opt]
    frame #19: 0x00000001004e7b70 mokabench`std::sys::unix::thread::Thread::new::thread_start::h06dea01d0985d944 [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::hff5d24f551839383 at boxed.rs:1985:9 [opt]
    frame #20: 0x00000001004e7b6c mokabench`std::sys::unix::thread::Thread::new::thread_start::h06dea01d0985d944 at thread.rs:108:17 [opt]
    frame #21: 0x000000018ede7fa8 libsystem_pthread.dylib`_pthread_start + 148
  thread #7, name = 'tokio-runtime-worker'
    frame #0: 0x00000001003e8f30 mokabench`moka::future::cache::Cache$LT$K$C$V$C$S$GT$::schedule_write_op::_$u7b$$u7b$closure$u7d$$u7d$::h06a142bd3b53b532 + 484
    frame #1: 0x00000001003ea418 mokabench`moka::future::cache::Cache$LT$K$C$V$C$S$GT$::insert::_$u7b$$u7b$closure$u7d$$u7d$::hd59efb62510b3db0 + 3584
    frame #2: 0x00000001003ed788 mokabench`mokabench::cache::moka_driver::async_cache::MokaAsyncCache$LT$I$GT$::insert::_$u7b$$u7b$closure$u7d$$u7d$::ha20eb1317da20840 + 324
    frame #3: 0x00000001003e3144 mokabench`_$LT$mokabench..cache..moka_driver..async_cache..MokaAsyncCache$LT$I$GT$$u20$as$u20$mokabench..cache..AsyncCacheDriver$LT$mokabench..parser..TraceEntry$GT$$GT$::get_or_insert::_$u7b$$u7b$closure$u7d$$u7d$::h09fca2b654784bf0 + 524
    frame #4: 0x000000010041bdb8 mokabench`mokabench::run_multi_tasks::_$u7b$$u7b$closure$u7d$$u7d$::_$u7b$$u7b$closure$u7d$$u7d$::_$u7b$$u7b$closure$u7d$$u7d$::h452f502428ab5500 + 840
    frame #5: 0x000000010041ac9c mokabench`tokio::loom::std::unsafe_cell::UnsafeCell$LT$T$GT$::with_mut::h9093146eed0a4214 + 72
    frame #6: 0x00000001003ead18 mokabench`tokio::runtime::task::core::Core$LT$T$C$S$GT$::poll::h2c70f850aecda3cf + 56
    frame #7: 0x000000010040b09c mokabench`tokio::runtime::task::harness::Harness$LT$T$C$S$GT$::poll::h3fa3f6ddc209d470 + 100
    frame #8: 0x00000001004bf800 mokabench`tokio::runtime::scheduler::multi_thread::worker::Context::run_task::hf7311d5e7832921f + 392
    frame #9: 0x00000001004bf0a4 mokabench`tokio::runtime::scheduler::multi_thread::worker::Context::run::hf22a3164c2df249c + 1204
    frame #10: 0x00000001004c0d38 mokabench`tokio::runtime::context::scoped::Scoped$LT$T$GT$::set::h78a11f4c8af1efe5 + 64
    frame #11: 0x00000001004b6e1c mokabench`tokio::runtime::context::runtime::enter_runtime::h3ba36611ccaeaea1 + 464
    frame #12: 0x00000001004beb5c mokabench`tokio::runtime::scheduler::multi_thread::worker::run::h57d94f6b7e40b4a9 + 88
    frame #13: 0x00000001004b2a18 mokabench`tokio::loom::std::unsafe_cell::UnsafeCell$LT$T$GT$::with_mut::h96a3c7b07057c79a + 204
    frame #14: 0x00000001004b4a80 mokabench`tokio::runtime::task::harness::Harness$LT$T$C$S$GT$::poll::h95479dc377919128 + 184
    frame #15: 0x00000001004b0b4c mokabench`tokio::runtime::blocking::pool::Inner::run::h21df40924f8b322c + 312
    frame #16: 0x00000001004adcac mokabench`std::sys_common::backtrace::__rust_begin_short_backtrace::hfd39605de895e62c + 220
    frame #17: 0x00000001004aa9d4 mokabench`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::h94af4c22891eed3d + 152
    frame #18: 0x00000001004e7b7c mokabench`std::sys::unix::thread::Thread::new::thread_start::h06dea01d0985d944 [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::h6e76824cbc2fe276 at boxed.rs:1985:9 [opt]
    frame #19: 0x00000001004e7b70 mokabench`std::sys::unix::thread::Thread::new::thread_start::h06dea01d0985d944 [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::hff5d24f551839383 at boxed.rs:1985:9 [opt]
    frame #20: 0x00000001004e7b6c mokabench`std::sys::unix::thread::Thread::new::thread_start::h06dea01d0985d944 at thread.rs:108:17 [opt]
    frame #21: 0x000000018ede7fa8 libsystem_pthread.dylib`_pthread_start + 148
  thread #8, name = 'tokio-runtime-worker'
    frame #0: 0x00000001003e8ff8 mokabench`moka::future::cache::Cache$LT$K$C$V$C$S$GT$::schedule_write_op::_$u7b$$u7b$closure$u7d$$u7d$::h06a142bd3b53b532 + 684
    frame #1: 0x00000001003ea418 mokabench`moka::future::cache::Cache$LT$K$C$V$C$S$GT$::insert::_$u7b$$u7b$closure$u7d$$u7d$::hd59efb62510b3db0 + 3584
    frame #2: 0x00000001003ed788 mokabench`mokabench::cache::moka_driver::async_cache::MokaAsyncCache$LT$I$GT$::insert::_$u7b$$u7b$closure$u7d$$u7d$::ha20eb1317da20840 + 324
    frame #3: 0x00000001003e3144 mokabench`_$LT$mokabench..cache..moka_driver..async_cache..MokaAsyncCache$LT$I$GT$$u20$as$u20$mokabench..cache..AsyncCacheDriver$LT$mokabench..parser..TraceEntry$GT$$GT$::get_or_insert::_$u7b$$u7b$closure$u7d$$u7d$::h09fca2b654784bf0 + 524
    frame #4: 0x000000010041bdb8 mokabench`mokabench::run_multi_tasks::_$u7b$$u7b$closure$u7d$$u7d$::_$u7b$$u7b$closure$u7d$$u7d$::_$u7b$$u7b$closure$u7d$$u7d$::h452f502428ab5500 + 840
    frame #5: 0x000000010041ac9c mokabench`tokio::loom::std::unsafe_cell::UnsafeCell$LT$T$GT$::with_mut::h9093146eed0a4214 + 72
    frame #6: 0x00000001003ead18 mokabench`tokio::runtime::task::core::Core$LT$T$C$S$GT$::poll::h2c70f850aecda3cf + 56
    frame #7: 0x000000010040b09c mokabench`tokio::runtime::task::harness::Harness$LT$T$C$S$GT$::poll::h3fa3f6ddc209d470 + 100
    frame #8: 0x00000001004bf800 mokabench`tokio::runtime::scheduler::multi_thread::worker::Context::run_task::hf7311d5e7832921f + 392
    frame #9: 0x00000001004bf0a4 mokabench`tokio::runtime::scheduler::multi_thread::worker::Context::run::hf22a3164c2df249c + 1204
    frame #10: 0x00000001004c0d38 mokabench`tokio::runtime::context::scoped::Scoped$LT$T$GT$::set::h78a11f4c8af1efe5 + 64
    frame #11: 0x00000001004b6e1c mokabench`tokio::runtime::context::runtime::enter_runtime::h3ba36611ccaeaea1 + 464
    frame #12: 0x00000001004beb5c mokabench`tokio::runtime::scheduler::multi_thread::worker::run::h57d94f6b7e40b4a9 + 88
    frame #13: 0x00000001004b2a18 mokabench`tokio::loom::std::unsafe_cell::UnsafeCell$LT$T$GT$::with_mut::h96a3c7b07057c79a + 204
    frame #14: 0x00000001004b4a80 mokabench`tokio::runtime::task::harness::Harness$LT$T$C$S$GT$::poll::h95479dc377919128 + 184
    frame #15: 0x00000001004b0b4c mokabench`tokio::runtime::blocking::pool::Inner::run::h21df40924f8b322c + 312
    frame #16: 0x00000001004adcac mokabench`std::sys_common::backtrace::__rust_begin_short_backtrace::hfd39605de895e62c + 220
    frame #17: 0x00000001004aa9d4 mokabench`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::h94af4c22891eed3d + 152
    frame #18: 0x00000001004e7b7c mokabench`std::sys::unix::thread::Thread::new::thread_start::h06dea01d0985d944 [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::h6e76824cbc2fe276 at boxed.rs:1985:9 [opt]
    frame #19: 0x00000001004e7b70 mokabench`std::sys::unix::thread::Thread::new::thread_start::h06dea01d0985d944 [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::hff5d24f551839383 at boxed.rs:1985:9 [opt]
    frame #20: 0x00000001004e7b6c mokabench`std::sys::unix::thread::Thread::new::thread_start::h06dea01d0985d944 at thread.rs:108:17 [opt]
    frame #21: 0x000000018ede7fa8 libsystem_pthread.dylib`_pthread_start + 148
  thread #9, name = 'tokio-runtime-worker'
    frame #0: 0x00000001003e8fd4 mokabench`moka::future::cache::Cache$LT$K$C$V$C$S$GT$::schedule_write_op::_$u7b$$u7b$closure$u7d$$u7d$::h06a142bd3b53b532 + 648
    frame #1: 0x00000001003ea418 mokabench`moka::future::cache::Cache$LT$K$C$V$C$S$GT$::insert::_$u7b$$u7b$closure$u7d$$u7d$::hd59efb62510b3db0 + 3584
    frame #2: 0x00000001003ed788 mokabench`mokabench::cache::moka_driver::async_cache::MokaAsyncCache$LT$I$GT$::insert::_$u7b$$u7b$closure$u7d$$u7d$::ha20eb1317da20840 + 324
    frame #3: 0x00000001003e3144 mokabench`_$LT$mokabench..cache..moka_driver..async_cache..MokaAsyncCache$LT$I$GT$$u20$as$u20$mokabench..cache..AsyncCacheDriver$LT$mokabench..parser..TraceEntry$GT$$GT$::get_or_insert::_$u7b$$u7b$closure$u7d$$u7d$::h09fca2b654784bf0 + 524
    frame #4: 0x000000010041bdb8 mokabench`mokabench::run_multi_tasks::_$u7b$$u7b$closure$u7d$$u7d$::_$u7b$$u7b$closure$u7d$$u7d$::_$u7b$$u7b$closure$u7d$$u7d$::h452f502428ab5500 + 840
    frame #5: 0x000000010041ac9c mokabench`tokio::loom::std::unsafe_cell::UnsafeCell$LT$T$GT$::with_mut::h9093146eed0a4214 + 72
    frame #6: 0x00000001003ead18 mokabench`tokio::runtime::task::core::Core$LT$T$C$S$GT$::poll::h2c70f850aecda3cf + 56
    frame #7: 0x000000010040b09c mokabench`tokio::runtime::task::harness::Harness$LT$T$C$S$GT$::poll::h3fa3f6ddc209d470 + 100
    frame #8: 0x00000001004bf800 mokabench`tokio::runtime::scheduler::multi_thread::worker::Context::run_task::hf7311d5e7832921f + 392
    frame #9: 0x00000001004bf0a4 mokabench`tokio::runtime::scheduler::multi_thread::worker::Context::run::hf22a3164c2df249c + 1204
    frame #10: 0x00000001004c0d38 mokabench`tokio::runtime::context::scoped::Scoped$LT$T$GT$::set::h78a11f4c8af1efe5 + 64
    frame #11: 0x00000001004b6e1c mokabench`tokio::runtime::context::runtime::enter_runtime::h3ba36611ccaeaea1 + 464
    frame #12: 0x00000001004beb5c mokabench`tokio::runtime::scheduler::multi_thread::worker::run::h57d94f6b7e40b4a9 + 88
    frame #13: 0x00000001004b2a18 mokabench`tokio::loom::std::unsafe_cell::UnsafeCell$LT$T$GT$::with_mut::h96a3c7b07057c79a + 204
    frame #14: 0x00000001004b4a80 mokabench`tokio::runtime::task::harness::Harness$LT$T$C$S$GT$::poll::h95479dc377919128 + 184
    frame #15: 0x00000001004b0b4c mokabench`tokio::runtime::blocking::pool::Inner::run::h21df40924f8b322c + 312
    frame #16: 0x00000001004adcac mokabench`std::sys_common::backtrace::__rust_begin_short_backtrace::hfd39605de895e62c + 220
    frame #17: 0x00000001004aa9d4 mokabench`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::h94af4c22891eed3d + 152
    frame #18: 0x00000001004e7b7c mokabench`std::sys::unix::thread::Thread::new::thread_start::h06dea01d0985d944 [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::h6e76824cbc2fe276 at boxed.rs:1985:9 [opt]
    frame #19: 0x00000001004e7b70 mokabench`std::sys::unix::thread::Thread::new::thread_start::h06dea01d0985d944 [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::hff5d24f551839383 at boxed.rs:1985:9 [opt]
    frame #20: 0x00000001004e7b6c mokabench`std::sys::unix::thread::Thread::new::thread_start::h06dea01d0985d944 at thread.rs:108:17 [opt]
    frame #21: 0x000000018ede7fa8 libsystem_pthread.dylib`_pthread_start + 148
(lldb) detach
Process 65808 detached

Update the migration guide.
- Rename `eviction_listener` method of `future::CacheBuilder` to
  `async_eviction_listener`. This method takes a closure that returns a
  `ListenerFuture`.
- Add `eviction_listener` method. This method takes a closure that returns nothing.
- Update the documentation.
Replace `entry.last_modified().is_none()` with `entry.is_dirty()`.
Update the change log, readme, and migration guide.
- Rename `flush` method of `future::Cache` to `run_pending_tasks`.
- Enable unit tests in `future::base_cache`.
@tatsuya6502 tatsuya6502 force-pushed the remove-thread-pool-from-future-cache branch from 55242bf to 6e7e912 Compare August 20, 2023 02:06
@tatsuya6502
Member Author

To fix the problem, I changed the spin loop to sometimes yield to other async tasks.

However, Rust has no common, runtime-agnostic API for yielding from a future. So I did a small hack with the async-lock crate's Mutex (commit #c86e675d). It seems to work at least with Tokio, but I need to check with other async runtimes.

I also tested with async-std, and it worked well too.
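For readers unfamiliar with the hand-rolled yield pattern this comment refers to, here is a minimal, runtime-agnostic sketch (not moka's actual implementation): a future that returns `Pending` exactly once, wakes itself, and completes on the next poll, giving the executor a chance to run other ready tasks in between.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

/// A future that yields control back to the executor exactly once.
/// On the first poll it wakes itself and returns `Pending`, so the
/// executor can run other ready tasks before polling it again.
struct YieldNow {
    yielded: bool,
}

impl Future for YieldNow {
    type Output = ();

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if self.yielded {
            Poll::Ready(())
        } else {
            self.yielded = true;
            // Re-schedule ourselves so the executor polls us again soon.
            cx.waker().wake_by_ref();
            Poll::Pending
        }
    }
}

fn yield_now() -> impl Future<Output = ()> {
    YieldNow { yielded: false }
}
```

Runtimes such as Tokio provide their own `yield_now`, but a hand-rolled version like this works on any executor, which is the portability concern raised above.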

@tatsuya6502 tatsuya6502 marked this pull request as ready for review August 20, 2023 08:54
@Swatinem
Contributor

I have been looking at this PR a little, though I haven't actually run the code.
I was wondering about cancellation safety, now that all the functions are async. What would happen if a task just dropped one of the futures without polling it to completion?

@tatsuya6502
Member Author

I was wondering about cancellation safety, now that all the functions are async. What would happen if a task just dropped one of the futures without polling it to completion?

Ah, good question. Thank you! I had overlooked that.

I believe the current implementation of run_pending_tasks is not cancellation-safe. For example, if the following code at the end of run_pending_tasks does not run, important cache counters will be left inaccurate:

`moka/src/future/base_cache.rs`, lines 1336 to 1338 at 16b4f89:

```rust
self.entry_count.store(eviction_state.counters.entry_count);
self.weighted_size
    .store(eviction_state.counters.weighted_size);
```

This one is even worse. If the last line does not run, all future run_pending_tasks calls will do nothing (they will simply be skipped):

```rust
cache.run_pending_tasks(MAX_SYNC_REPEATS).await;
self.is_sync_running.store(false, Ordering::Release);
```

I will review the code and try to solve this by adding a drop guard, which ensures that some cleanup code runs when the Drop::drop method is called.

I hope there is no need to call an async fn from drop. If we do need to, we will have to use a method provided by an async runtime (e.g. block_on), and then future::Cache will have to depend on a specific async runtime. (By the way, if it has to depend on a specific async runtime anyway, I would run run_pending_tasks in a spawned task, so we would not have to worry about cancellation.)
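A drop guard along these lines might look like the following. This is a hypothetical sketch (the `SyncGuard` type and `try_acquire` helper are illustrative names, not moka's code): the flag is cleared in `Drop`, so it is reset both on normal completion and when the future holding the guard is cancelled (dropped) mid-way, and no async call is needed in drop.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

/// Hypothetical guard that clears an `is_sync_running`-style flag when
/// dropped, even if the async task holding it is cancelled mid-way.
struct SyncGuard<'a> {
    is_sync_running: &'a AtomicBool,
}

impl<'a> SyncGuard<'a> {
    /// Returns `None` if another task already holds the guard.
    fn try_acquire(flag: &'a AtomicBool) -> Option<Self> {
        flag.compare_exchange(false, true, Ordering::Acquire, Ordering::Relaxed)
            .ok()?;
        Some(SyncGuard { is_sync_running: flag })
    }
}

impl Drop for SyncGuard<'_> {
    fn drop(&mut self) {
        // Runs on normal completion *and* on cancellation (future dropped).
        self.is_sync_running.store(false, Ordering::Release);
    }
}
```

As discussed below, a guard like this can reset flags, but it cannot resume half-finished async work, which is why the Shared-future approach was pursued instead.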

@tatsuya6502
Member Author

tatsuya6502 commented Aug 20, 2023

I will review the code and try to solve this by adding a drop guard, which ensures that some cleanup code runs when the Drop::drop method is called.

I hope there is no need to call an async fn from drop. If we do need to, we will have to use a method provided by an async runtime (e.g. block_on), and then future::Cache will have to depend on a specific async runtime. ...

I spent only a quarter of an hour reviewing, but that was enough to realize that a drop guard cannot handle some cases in run_pending_tasks.

I think a Shared future (instead of a drop guard) can be used to solve our problem. By turning the future returned from run_pending_tasks into a Shared, we can avoid cancelling it when the caller's future is cancelled, and let another caller resume evaluating it.

https://docs.rs/futures-util/0.3.28/futures_util/future/trait.FutureExt.html#method.shared

Try to make `run_pending_tasks` cancellation-safe.
@tatsuya6502
Member Author

I think a Shared future (...) can be used to solve our problem. By turning the future returned from run_pending_tasks into a Shared, we can avoid cancelling it when the caller's future is cancelled, and let another caller resume evaluating it.

Here is my first attempt: 4f8eff7#diff-0e33fe68a3e87c2f155b33e6f6eaf3be8359baefebb2a61ffaa6a7da45dc434a

So far I have only checked that all existing unit tests pass. I still need to add a unit test for cancellation.

Make `run_pending_tasks` cancellation-safe.
`futures-util` v0.3.17 is the first version that supports `poll_immediate`.
@tatsuya6502
Member Author

I need to add a unit test for cancellation.

I added the test in commit #1c8e0eb and verified that run_pending_tasks works as expected. (Some CI jobs got compile errors because their version of futures-util was too old; the next commit should fix that.)

I will review again later, but I think run_pending_tasks of future::Cache is cancellation-safe now.

@tatsuya6502
Member Author

tatsuya6502 commented Aug 21, 2023

future::Cache v0.12.0 supports an async eviction listener. Some of the cache write methods, such as insert, get_with, invalidate, and run_pending_tasks, may call the eviction listener. If the async eviction listener runs for a long time, it widens the window in which the enclosing future can be cancelled.

This could be addressed by the same Shared future technique, but I am not sure it is the right thing to do. I would rather recommend that users write their async eviction listeners to return quickly, e.g. by spawning the lengthy work as a separate task.
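The "return quickly" recommendation can be sketched with std types (a hypothetical pattern, not moka's API; a real async application would use its runtime's `spawn` or an async channel instead of a thread): the listener hands the evicted entry to a background worker over a channel and returns immediately, so it never blocks the cache write path.

```rust
use std::sync::mpsc;
use std::thread;

/// Sketch: instead of doing lengthy work inside the eviction listener,
/// the listener sends the evicted entry to this channel and returns
/// immediately; the background worker drains the channel at its own pace.
fn spawn_eviction_worker() -> (mpsc::Sender<(String, u64)>, thread::JoinHandle<Vec<String>>) {
    let (tx, rx) = mpsc::channel::<(String, u64)>();
    let handle = thread::spawn(move || {
        let mut processed = Vec::new();
        // The loop ends when all senders are dropped.
        for (key, _value) in rx {
            // Lengthy work would go here (writing to disk, calling a
            // remote service, ...). We just record the key.
            processed.push(key);
        }
        processed
    });
    (tx, handle)
}
```

The eviction listener body then reduces to something like `let _ = tx.send((key, value));`, which completes almost instantly regardless of how long the real work takes.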

- Remove unsafe `Send` and incorrect `Sync` impls from internal `cht::iter::Iter`.
- Update some code comments.
@tatsuya6502
Member Author

tatsuya6502 commented Aug 22, 2023

I was reviewing the code and found that cache write operations (such as insert and get_with) may not record a write operation log if the caller (the enclosing future) has been cancelled. This leaves a newly inserted cache entry unmanaged by the cache policies (LFU filter, LRU queue, etc.). When this happens, the entry will never be evicted by the policy, although it can still be read by get, replaced by another insert, or removed by invalidate.

This issue has existed since very early versions of moka, but v0.12.0 increases the chance of it happening, as the cache now supports calling a user-supplied async eviction listener.

A common solution would be a write-ahead log: record the write op log before performing the actual insert/update operation. However, we cannot do this because it is impossible to know in advance which operation we are going to perform (an insert into, or an update of, the internal concurrent hash table (cht)). Our lock-free cht allows multiple threads to try to insert and remove the same key at the same time without taking locks, so we only learn whether our attempt turned out to be an insert or an update after it has succeeded.
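The "only known after the fact" property can be seen even in the single-threaded std HashMap API, which makes a simplified analogy (the real cht is lock-free and concurrent, which is what rules out locking around the operation): the return value of `insert` is the only way to learn whether the call was an insert or an update, so the corresponding write-op log entry cannot be written ahead of time.

```rust
use std::collections::HashMap;

/// Analogy for the cht behavior: we submit one "upsert" operation and
/// learn only from its result whether it was an insert or an update,
/// so a write-ahead log entry cannot name the operation in advance.
fn upsert(map: &mut HashMap<String, u64>, key: &str, value: u64) -> &'static str {
    match map.insert(key.to_string(), value) {
        None => "insert",       // the key was not present before
        Some(_old) => "update", // the key existed; the old value was replaced
    }
}
```

With concurrent callers racing on the same key, even inspecting the map first would not help: another thread may insert or remove the key between the check and the operation.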

@tatsuya6502
Member Author

I was reviewing the code and found that cache write operations (such as insert and get_with) may not record a write operation log if the caller (the enclosing future) has been cancelled.

I am going to address the above issue in a separate pull request (although I have not yet decided how). This pull request is already quite big, and I would like to merge it into main now rather than adding more changes.

Tweak the migration guide.
clippy 0.1.73 (680cdf8168a 2023-08-21)

@tatsuya6502 tatsuya6502 left a comment


Merging.

Labels
enhancement New feature or request
Projects
Status: Done
Development

Successfully merging this pull request may close these issues.

None yet

2 participants