feat(op2): implement fast wasm memory ops #275
Conversation
I'd like to try approaching this a different way, as the potential for footguns with this API is probably a little too high.

See the comments inline for more details, but the `#[memory]` (or `#[wasmmemory]`) argument should probably just use the raw `v8::WasmMemoryObject` on the JavaScript side, and either a `&[u8]` or a `&mut [u8]` on the Rust side.

The idea behind `op2` is that all the complex, foot-gunny stuff should happen in the op code itself, so the magic in `WasmMemory::get` should actually happen in the op-generated code.

This means that the slow call of a wasm memory parameter should require the state; the slow path would then fetch the wasm memory resource from that state, create the slice from it as we do for buffers, and pass that to the op code.
I also suspect this code might be broken when calling the op from JS, as the fastcall doesn't check whether the fastcall options' `wasm_memory` parameter is set.

I'd like to know a bit more about how these calls work from JS/WASM first: are calls to functions with fast equivalents always fast in wasm mode? If so, I think we might actually want two operating modes for wasm memory: a "virtual" op for accessing the "fast" wasm memory (maybe something like `#[memory(from_wasm)]`?), and a "non-virtual" op that uses the underlying `v8::WasmMemoryObject` directly rather than going through a resource intermediary.
```rust
// SAFETY: `v8::Local` is always a non-null pointer; the `HandleScope` is
// already on the stack, but we don't have access to it.
```
This code worries me a bit because there's no guarantee that the scope is actually on the stack when this runs: it is possible to create one of these via `WasmMemory::null()` and then call `get` on it.

As mentioned above, it would be better to hide this inside the op itself, because we can make those guarantees there.
```rust
fn op_wasm(state: &mut OpState, memory: Option<&mut [u8]>) {
  let memory = memory.unwrap_or_else(|| wasm_memory_unchecked(state));
```

```rust
#[op2(fast)]
fn op_wasm(state: &mut OpState, #[smi] rid: u32, #[memory] memory: WasmMemory) {
```
I think this particular construct is a bit too awkward. What if we make `#[memory]` a non-virtual argument that takes the `v8::WasmMemoryObject`?

We already do something similar for buffers in both fast and slow mode. It would look something like this:

```rust
#[op2(fast)]
fn op_wasm(#[memory] memory: &[u8]) {
  ...
}
```
There can be multiple `WasmMemory` objects, which is why the memory is stored in a `ResourceTable` and we need explicit access to `state` and `rid`.
Could we pass the actual `v8::WasmMemoryObject` instead of storing it in the resource table, though?
It wouldn't be a fast call then.
Why not? We allow `v8::Local<v8::Value>` in fastcalls, so it should work. Or am I missing something?
```diff
@@ -696,6 +699,8 @@ pub enum AttributeModifier {
   Bigint,
   /// #[number], for u64/usize/i64/isize indicating value is a Number
   Number,
   /// #[memory]
```
Bikeshedding a bit, but it might be better to use `#[wasmmemory]` so it's more obvious to a reader.
Closes #273