
Add .cumulative to cumsum & cumprod docstrings #9533


Merged: 13 commits merged into pydata:main on Oct 3, 2024

Conversation

max-sixty (Collaborator)

As discussed in a couple of issues, we should be directing folks towards `.cumulative` (the only missing piece is skip_na...

I thought this was a reasonable way to have the generation script work for these; of course, open to feedback.

I also added the namedarray file to the instructions for generating.

@dcherian (Contributor)

> Note that the methods on the cumulative method are more performant and better supported

I'm surprised they're more performant than using numpy's cumprod, cumsum directly. Is this because cumulative redirects to numbagg?

@max-sixty (Collaborator, Author) commented Sep 24, 2024

Yes! :)

I don't have benchmarks at https://github.com/numbagg/numbagg, since `cumulative().sum()` just calls `move_sum` with the appropriate window, and numpy doesn't have a comparable function.

But to confirm: roughly 6.5x faster over 10 columns, 2x faster over 1 column:

```python
In [36]: A = np.random.rand(60000, 10)

In [40]: %timeit np.cumsum(A, axis=0)
2.44 ms ± 82.8 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

In [47]: %timeit numbagg.move_sum(A, window=60000, min_count=0, axis=0)
371 µs ± 16.5 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)

A = np.random.rand(60000, 1)

In [50]: %timeit np.cumsum(A, axis=0)
211 µs ± 5.34 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)

In [49]: %timeit numbagg.move_sum(A, window=60000, min_count=0, axis=0)
106 µs ± 1.62 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
```
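The identity being relied on here — that a cumulative sum is just a moving-window sum whose window spans the whole array — can be sketched in pure Python. These helpers are hypothetical stand-ins for illustration, not numbagg's actual kernels:

```python
def cumsum(xs):
    """Plain cumulative sum: out[i] = xs[0] + ... + xs[i]."""
    total, out = 0, []
    for x in xs:
        total += x
        out.append(total)
    return out

def move_sum(xs, window):
    """Sum over a trailing window of the given size.

    When window >= len(xs), every prefix is fully contained in the
    window, so this degenerates into a cumulative sum.
    """
    out = []
    for i in range(len(xs)):
        lo = max(0, i + 1 - window)
        out.append(sum(xs[lo:i + 1]))
    return out

data = [3, 1, 4, 1, 5]
assert move_sum(data, window=len(data)) == cumsum(data)  # [3, 4, 8, 9, 14]
```

This is why `cumulative().sum()` can dispatch to `move_sum` with `window` set to the axis length (and `min_count=0` so leading partial windows still produce values).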

@dcherian (Contributor)

We should clarify that, then!

For the equivalent non-numbagg benchmark you could use xarray with `use_numbagg=False`?
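That comparison could look something like the sketch below. It assumes xarray's `set_options(use_numbagg=False)` flag disables the numbagg path for `.cumulative()`, and that both numbagg and xarray are installed; the dimension names are made up for illustration:

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.random.rand(60000, 10), dims=["time", "col"])

# numbagg-backed path (the default when numbagg is installed)
fast = da.cumulative("time").sum()

# force the non-numbagg path for an apples-to-apples %timeit comparison
with xr.set_options(use_numbagg=False):
    slow = da.cumulative("time").sum()

# both paths should agree up to floating-point accumulation order
assert np.allclose(fast, slow)
```

Wrapping each of the two calls in `%timeit` would then isolate the numbagg speedup within xarray itself, rather than comparing against raw `np.cumsum`.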

@max-sixty (Collaborator, Author)

> We should clarify that, then!

In the .cumulative docstring?

max-sixty and others added 6 commits October 2, 2024 12:29
Co-authored-by: Deepak Cherian <dcherian@users.noreply.github.com>
@max-sixty max-sixty merged commit e227c0b into pydata:main Oct 3, 2024
29 checks passed
@max-sixty max-sixty deleted the see-also-cumulative branch October 3, 2024 01:15
@max-sixty (Collaborator, Author)

Merged; also improved the docs & suggested commands in `generate_aggregations.py`.
