Benchmark results #83

Open · victimsnino opened this issue Jun 1, 2023 · 4 comments
Labels: enhancement (New feature or request), infrastructure

@victimsnino

Hey! I think it would be nice to have benchmark results per commit, or at least generated once, to understand how efficient it is =)

@tcbrindle (Owner)

Hi, thanks for the suggestion! I agree that this would be good. If possible it would be nice to build benchmarks for each pull request, as we do with CodeCov reports, to make sure we aren't accidentally introducing any performance regressions.

We do build the couple of benchmarks we have as part of the CI pipeline to make sure they compile, but we don't actually run them. This wouldn't be too difficult to change, but we'd still need some way of processing the output, ideally in a way that integrates well with GitHub. Perhaps there are GitHub Actions scripts already available which do that?

Beyond that, we'd probably need quite a few more benchmark tests than the two we have at the moment. Ideally these would compare a Flux pipeline with the equivalent C++20 ranges pipeline as a baseline, and possibly with a "raw loop" version as well to see how well it compares.
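
For illustration, a three-way comparison of that sort might look something like the sketch below. This is a minimal example with assumptions: it uses nanobench as the harness, the filter/map/sum pipeline is an arbitrary stand-in rather than one of the existing benchmarks, and the Flux member-chain spelling should be checked against the current API.

```cpp
// Minimal sketch, assuming nanobench; the filter/map/sum pipeline is a
// made-up example, not one of Flux's existing benchmarks.
#define ANKERL_NANOBENCH_IMPLEMENT
#include <nanobench.h>

#include <flux.hpp>

#include <numeric>
#include <ranges>
#include <vector>

int main()
{
    std::vector<int> input(100'000);
    std::iota(input.begin(), input.end(), 0);

    auto is_even = [](int i) { return i % 2 == 0; };
    auto square = [](int i) { return static_cast<long long>(i) * i; };

    ankerl::nanobench::Bench bench;

    // Baseline: a hand-written loop over the same data.
    bench.run("raw loop", [&] {
        long long sum = 0;
        for (int i : input) {
            if (is_even(i)) { sum += square(i); }
        }
        ankerl::nanobench::doNotOptimizeAway(sum);
    });

    // The same computation as a C++20 ranges pipeline.
    bench.run("std::ranges", [&] {
        long long sum = 0;
        for (long long x : input | std::views::filter(is_even)
                                 | std::views::transform(square)) {
            sum += x;
        }
        ankerl::nanobench::doNotOptimizeAway(sum);
    });

    // The same computation as a Flux pipeline.
    bench.run("flux", [&] {
        auto sum = flux::ref(input).filter(is_even).map(square).sum();
        ankerl::nanobench::doNotOptimizeAway(sum);
    });
}
```

All three variants compute the same value, so their timings are directly comparable; doNotOptimizeAway stops the compiler from discarding the result.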

tcbrindle added the enhancement (New feature or request) and infrastructure labels on Jun 1, 2023
@victimsnino (Author)

I can recommend this one: https://github.com/benchmark-action/github-action-benchmark

It's easy to configure, but the graphs are too simple =)

Yeah, I think it has to compare against raw loops and ranges, at least to see that it doesn't add too much of a performance penalty =)
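
For what it's worth, wiring that action into CI might look roughly like the following workflow sketch. Everything here is an assumption to adapt: the build/run step, the binary name, the output path, and the `tool` value, which must match the JSON format the benchmarks actually emit (the action understands Google Benchmark's `googlecpp` format, among others).

```yaml
name: Benchmarks

on:
  push:
    branches: [main]

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Build and run the benchmarks (build steps elided; the binary name
      # and output path here are made up).
      - run: ./build/benchmarks --benchmark_format=json > benchmark_result.json

      - name: Track benchmark results
        uses: benchmark-action/github-action-benchmark@v1
        with:
          tool: 'googlecpp'          # must match the emitted JSON format
          output-file-path: benchmark_result.json
          github-token: ${{ secrets.GITHUB_TOKEN }}
          auto-push: true            # publish the result graphs to gh-pages
          alert-threshold: '150%'    # flag runs >1.5x slower than baseline
          comment-on-alert: true
```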

@DeveloperPaul123 (Contributor)

I think nanobench can output a JSON format that is compatible with pyperf, which you could use to check for performance regressions.
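
If that's the route taken, producing pyperf-compatible JSON from nanobench looks roughly like this (a sketch only: the benchmark name, workload, and output path are placeholders, and nanobench's pyperf template expects one benchmark per output file):

```cpp
// Sketch only: the benchmark name, workload, and output path are placeholders.
#define ANKERL_NANOBENCH_IMPLEMENT
#include <nanobench.h>

#include <fstream>
#include <numeric>
#include <vector>

int main()
{
    std::vector<int> data(10'000, 1);

    ankerl::nanobench::Bench bench;
    bench.epochs(100);      // more epochs -> more samples for pyperf's statistics
    bench.output(nullptr);  // suppress the default markdown table on stdout

    bench.run("flux_pipeline", [&] {
        auto sum = std::accumulate(data.begin(), data.end(), 0LL);
        ankerl::nanobench::doNotOptimizeAway(sum);
    });

    // Render the collected measurements as pyperf-compatible JSON.
    std::ofstream out("flux_pipeline.json");
    ankerl::nanobench::render(ankerl::nanobench::templates::pyperf(), bench, out);
}
```

The files from two commits can then be compared with `pyperf compare_to baseline.json candidate.json`, which reports whether the difference is statistically significant.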

@Trass3r commented Jan 17, 2024

> I can recommend this one: https://github.com/benchmark-action/github-action-benchmark

Thanks for the link! I've been looking for something like this.

> Yeah, I think it has to compare against raw loops and ranges, at least to see that it doesn't add too much of a performance penalty =)

It would also be good to have a compile-time impact benchmark; that's often overlooked.
