refactor!: Drop NodeClassSet #389

Merged · 3 commits merged into main on Apr 29, 2024

Conversation

mwcampbell
Contributor

I now believe that this was a bad API design decision on my part. The original rationale was explained in a blog post on optimizing node size, but I now think I went too far. It's not at all clear from real-world measurements that this approach to minimizing node size at the cost of speed and API complexity is worth it. I mention speed because, of course, it takes time to determine whether the node class can be shared, by comparing the new node class with existing entries in the BTreeSet, and this is done every time NodeBuilder::build is called. And even if we eventually gather data showing that the size versus speed tradeoff is worthwhile, I think it was a mistake on my part to expose this complexity in the API. I now think it would be better to have a global NodeClassSet behind a mutex, as we already had to do for serde support, than to expose this implementation detail to the users; we already know that it caused confusion for at least one potential user.
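To make the tradeoff concrete, here is a minimal, hypothetical sketch of the kind of class interning that NodeClassSet performed. The `NodeClass`, `ClassInterner`, and field names below are illustrative stand-ins, not the actual accesskit types; the point is only the per-build lookup cost and the shared allocation it buys.

```rust
use std::collections::BTreeSet;
use std::sync::Arc;

// Illustrative stand-in for the rarely-varying parts of a node that the
// interning scheme tried to share between nodes.
#[derive(Clone, PartialEq, Eq, PartialOrd, Ord)]
struct NodeClass {
    role: u8,
    action_bits: u32,
    property_indices: Vec<u8>,
}

// Simplified interner in the spirit of NodeClassSet: each new class is
// compared against the existing entries, and an already-interned Arc is
// reused when an identical class is found.
#[derive(Default)]
struct ClassInterner {
    classes: BTreeSet<Arc<NodeClass>>,
}

impl ClassInterner {
    fn intern(&mut self, class: NodeClass) -> Arc<NodeClass> {
        // Comparison-based O(log n) lookup; this is the speed cost paid on
        // every build in exchange for the smaller per-node footprint.
        if let Some(existing) = self.classes.get(&class) {
            return Arc::clone(existing);
        }
        let interned = Arc::new(class);
        self.classes.insert(Arc::clone(&interned));
        interned
    }
}

fn main() {
    let mut interner = ClassInterner::default();
    let a = interner.intern(NodeClass { role: 1, action_bits: 0, property_indices: vec![0, 2] });
    let b = interner.intern(NodeClass { role: 1, action_bits: 0, property_indices: vec![0, 2] });
    // Identical classes end up sharing a single allocation.
    assert!(Arc::ptr_eq(&a, &b));
}
```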

This change does increase the size of Node from 32 to 112 bytes, a difference of 80 bytes. I think this is completely acceptable; I remember from chat room discussion during my work on Node size optimization last year that people were happy when I got the size down to 128 bytes.

Looking at it another way, I would rather ship 1.0 without the API complexity of NodeClassSet, and figure out how to re-add the optimization in a backward-compatible way later if it's truly necessary, than ship with this API complexity and be stuck with it forever. My work on integrating AccessKit into GTK has led me to refocus on making AccessKit as simple and easy as possible for our users, the toolkits, and that's the primary purpose of this PR.

@mwcampbell
Contributor Author

A secondary reason for this PR is that I now know that the attempted optimization is a net loss in some cases, like the prototype Newton adapter (on the newton-prototype branch). In the current version of that adapter, Node instances are always ephemeral; they're always immediately serialized and pushed, and the adapter doesn't use accesskit_consumer at all. So in that case, calculating whether node classes can be shared, to optimize memory usage, is pure waste. For a use case like the Newton adapter, the important thing is to create, serialize, and destroy the nodes as fast as possible, and keep code size and complexity to a minimum.
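For illustration, a rough sketch of that ephemeral, push-style flow, using a hypothetical node type and serde_json as a stand-in serializer (neither is the actual Newton adapter code):

```rust
use serde::Serialize;

// Illustrative stand-in for an ephemeral node in a push-based adapter.
#[derive(Serialize)]
struct EphemeralNode {
    id: u64,
    role: &'static str,
    label: Option<String>,
}

// Build, serialize, and discard in one pass; the node is never retained,
// so any time spent deduplicating node classes here would be pure overhead.
fn push_update(nodes: &[EphemeralNode]) -> serde_json::Result<Vec<u8>> {
    serde_json::to_vec(nodes)
}
```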

@mwcampbell
Contributor Author

I added a related, smaller refactor: now we wrap the Node in Arc in the consumer library, rather than using Arc for the slice of property values in the core crate. I know I've gone back and forth a bit on this. Rationale: Copying the property values into an Arc is a waste for things like the Newton adapter, and now that Node is a bit bigger, wrapping that whole struct in Arc will reduce copying when we finally move to immutable-chunkmap.
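A rough before/after sketch of where the Arc sits, with illustrative stand-in types rather than the actual accesskit definitions:

```rust
use std::sync::Arc;

// Illustrative property type only.
#[derive(Clone)]
enum PropertyValue {
    Bool(bool),
    Text(Arc<str>),
}

// Before: the core Node shared only its property slice, so every adapter
// paid for copying the properties into an Arc, even when the node was
// serialized once and thrown away.
#[derive(Clone)]
struct NodeWithSharedProps {
    properties: Arc<[PropertyValue]>,
}

// After: the core Node owns its properties directly...
#[derive(Clone)]
struct Node {
    properties: Vec<PropertyValue>,
}

// ...and the consumer library wraps the whole node instead, so cloning an
// entry is a pointer bump. That keeps copying cheap when tree snapshots
// eventually move to an immutable map such as immutable-chunkmap.
struct ConsumerTreeEntry {
    node: Arc<Node>,
}
```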

DataTriny merged commit 1b153ed into main on Apr 29, 2024
10 checks passed
DataTriny deleted the drop-node-class-set branch on Apr 29, 2024 at 10:09
mwcampbell mentioned this pull request on Apr 29, 2024