
function level coverage #780

Closed
mtozjh opened this issue Feb 28, 2019 · 28 comments
Labels: enhancement (New feature or request), fixed

Comments


mtozjh commented Feb 28, 2019

I need information about which functions are tested; currently there is only line coverage data. It would be great if this tool could be enhanced to report function-level coverage.

mtozjh added the enhancement label Feb 28, 2019

przecze commented May 15, 2020

I would also very much like to see such a feature added. Currently we have to use a home-brewed script that parses the normal coverage report and tries to extract information about functions. This has some issues, so "native" support from coverage.py would be great.

@IterableTrucks

So far there is only one plugin for function coverage, pytest-func-cov. But it does not support file reporting.


nedbat commented Jan 7, 2023

I need some clarification about what people mean by "function-level coverage". The pytest-func-cov plugin considers a function tested if it is called directly by a test, but not if it is called indirectly. Is that what is meant? Or are indirect calls sufficient?
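
To make the distinction concrete, a minimal hypothetical example (names invented): under the direct-call definition, api would count as tested but helper would not, even though helper runs during the test.

def helper():           # only ever called indirectly
    return 42

def api():
    return helper()

def test_api():         # calls api() directly; helper() runs only indirectly
    assert api() == 42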

But it does not support file reporting.

The pytest-func-cov README shows a report with file names, so I'm not sure what is missing?


getim commented Oct 3, 2023

We are looking to complete our current coverage report with this data as well. It looks like pytest-func-cov's definition of function coverage (a function must be called directly from a test) is different from what other sources say. Atlassian, Wikipedia and lcov's data format docs all define a function as covered if it's hit during a test at any point.

For our particular use case, having the FNF (functions found) and FNH (functions hit) numbers in the lcov report generated by coveragepy would complete the HTML reports we currently generate with genhtml from the lcov report.
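
For reference, a sketch of what those records look like in an lcov tracefile (file and function names invented): FN lists a function found, FNDA its hit count, and FNF/FNH the per-file totals.

SF:mypkg/__init__.py
FN:12,func1
FNDA:3,func1
FN:30,MyCls.method1
FNDA:0,MyCls.method1
FNF:2
FNH:1
end_of_record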

jab commented Mar 10, 2024

Being able to see what functions were covered by tests would be useful for me too, and being called indirectly should absolutely still count.

Currently we have to use some home-brewed script that parses normal coverage report and tries to extract info about function

Anything you can share here, by chance?

The pytest-func-cov plugin considers a function tested if it is called directly by a test, but not if it is called indirectly.

This makes pytest-func-cov unusable for my purposes. It also needs to be reimplemented due to a design flaw, and isn't actively maintained. Plus, it would be better to have this feature in coverage.py itself (or a coverage plugin) rather than in something external with its own separate data format, or possibly no structured data format at all.


nedbat commented Mar 10, 2024

I have some thoughts about how to implement this, and have started some of the foundation work. I'm curious if people have ideas about how to present the data? Do you have examples of other coverage tools (possibly in other languages) that do a good job showing the results?

jab commented Mar 10, 2024

That's awesome, thanks @nedbat!

A useful way of presenting this data for me would be something like:

❯ coverage report --functions --sort-by=function-coverage  # I made up these options

Function                   Stmts   Miss Branch BrPart  Cover
------------------------------------------------------------
mypkg:func1                  245      7     98      3  97.1%
mypkg:MyCls.method1           51      5      2      0  90.6%
...

...as well as an associated JSON representation. Even just a list of (function, cov_percent) pairs would be enough to answer questions like, "What are the best (and worst) covered functions?"

That said, it would be useful to have the data collection land before any presentation support does. The hard part for users is getting the data collected in the first place; once it's collected, it's easy enough for us to pull it out of the coverage database ourselves in whatever format we need. So if it makes things easier, you could ship the collection support first and leave presentation support for a follow-up release.


nedbat commented Mar 10, 2024

This is an interesting point, because the function-level data doesn't need to be stored in the database at all, just as the missing lines aren't in the database. The database captures the measured data. The reporting phase can determine what is missing, and could also calculate statistics per function.

A middle ground could be something like a JSON report that includes function data.
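
As an illustration of reporting-phase computation (not coverage.py's actual implementation), here is a minimal sketch that derives per-function statistics from the recorded data, using the public CoverageData API plus ast to find function line spans. It naively divides executed lines by a function's full line span, so docstrings and blank lines skew the numbers; a real implementation would reuse coverage.py's knowledge of which lines are executable.

import ast

from coverage import Coverage

cov = Coverage()
cov.load()                           # read the .coverage data file
data = cov.get_data()

for filename in data.measured_files():
    executed = set(data.lines(filename) or [])
    with open(filename) as f:
        tree = ast.parse(f.read(), filename=filename)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # Naive: counts every line in the def's span, including
            # docstrings and blanks, so percentages are approximate.
            span = set(range(node.lineno, node.end_lineno + 1))
            hit = len(span & executed)
            print(f"{filename}:{node.name}  {hit}/{len(span)} lines executed")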

jab commented Mar 10, 2024

Outputting JSON about function-level coverage would be great!

Just to improvise something:

❯ coverage json --rollup=functions
{
  "files": {
    "mypkg/__init__.py": {
       "functions": {
         "mypkg:func1": {"percent_covered": 1.0, ...},
         "mypkg:MyCls.method1": {"percent_covered": 1.0, ...}
       }
    }
  }
}

And if passing --show-contexts could also show which test functions caused which functions to be covered (when using dynamic_context = test_function), that would be especially awesome. But the function-level rollup would be super useful even without that.
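
For reference, that setting lives in the coverage configuration file:

# .coveragerc
[run]
dynamic_context = test_function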


nedbat commented Mar 12, 2024

If anyone wants to try some really hacked together functionality as a work in progress: install this temporary branch of coverage.py:

python3 -m pip install git+https://github.com/nedbat/coveragepy@nedbat/public-analysis#egg=coverage==0.0

After running coverage as usual, run regions.py: https://github.com/nedbat/coveragepy/blob/nedbat/public-analysis/lab/regions.py

It will list functions and classes, with their total percentage.

Keep in mind, this is temporary. I'm interested to hear if this is useful data.

jab commented Mar 23, 2024

Definitely useful, thank you!

Some prior art just for completeness:

lcov's genhtml includes function coverage by default:

[screenshot: genhtml function coverage report]

And gcovr does as well:

[screenshot: gcovr function coverage report]

(Here's a zip of the source files and the html output I used to produce those screenshots, and here's a post about those two tools, in case it's helpful.)


nedbat commented Mar 23, 2024

Thanks. I'm wondering how those tools cope with real projects with thousands of functions. How do they present the information in a useful way?

jab commented Mar 23, 2024

It looks like both allow you to view the functions in a single file at a time (and hopefully there aren't thousands of functions in a single file). In genhtml's case, that appears to be the only way to view function coverage, while gcovr also includes a view of all functions across all files (among several other views). Both tools generate trees of views that let you drill down into subdirectories (with associated rollups of coverage data per subdirectory). And you can always use lcov's --extract or --remove options to slice the data further before generating HTML from it with either tool.

It also looks like only gcovr shows the percentage of lines in a given function that were covered ("block coverage"); genhtml only shows a hit count. I think it's definitely worth including all the line-based coverage counts per function (hits, misses, total percent covered, ...).


nedbat commented Apr 15, 2024

I've implemented more real function and class reports. Please try them out: https://nedbatchelder.com/blog/202404/try_it_functionclass_coverage_report.html

kierun commented Apr 16, 2024

I ran it on pynpc and got the screenshots below. The project is small, and it is still a little difficult to see everything on one screen. Breaking it down per file would be cleaner. A summary page could show the top (or bottom?) covered items; by that, I mean the ten (say?) functions/classes that are the least covered.

Functions screenshot, shrunk to fit on one screen:

[screenshot-1]

Class screenshot, full size:

[screenshot-2]


nedbat commented Apr 16, 2024

@kierun:

Breaking it down per file would be cleaner. A summary page could have the top (or bottom?) covered. By that, I mean the top 10 (say?) functions/classes that are the least covered.

Thanks for the feedback. It might not be obvious, but the column headers are click-to-sort, so you can order the report by increasing coverage to put the 10 least-covered items at the top of the page. Also, I'll shortly be adding a checkbox to hide fully-covered items (#1384) which will help you focus on where work is needed. Does that help?

kierun commented Apr 16, 2024

[…] It might not be obvious, but the column headers are click-to-sort […]

Oh, cool! 😁 This is perfect.

Also, I'll shortly be adding a checkbox to hide fully-covered items (#1384) which will help you focus on where work is needed. Does that help?

Yes, that would be fantastic.

Thank you so much!


nedbat commented Apr 16, 2024

@kierun:

[…] It might not be obvious, but the column headers are click-to-sort […]

Oh, cool! 😁 This is perfect.

Any ideas what I can do to make the column sortability more apparent?

kierun commented Apr 16, 2024

Any ideas what I can do to make the column sortability more apparent?

Up and down chevrons might be good choices, like ▲/△ and ▼/▽. Of course, having them one above the other is CSS black voodoo magic. 🪄

  • Unsorted: △|▽
  • Descending: △|▼
  • Ascending: ▲|▽

This is where I found the glyphs, if it helps.


nedbat commented Apr 16, 2024

Interesting idea. A quick attempt shows that the glyphs don't play nicely, and it's cluttered:
[screenshot: cluttered sort glyphs in the column headers]

Maybe I can just add a line above the table: "Columns are sortable." I know: total cop-out.

kierun commented Apr 17, 2024

Yes, it is far too cluttered. Maybe just have one icon that changes shape depending on the sorting?

Maybe I can just add a line above the table: "Columns are sortable." I know: total cop-out.

If it's simple and works! 🚀 🌔 Go for it!

jab commented Apr 17, 2024

Awesome progress!

Any ideas what can I do to make the column sortability more apparent?

Rather than clutter the UI with "Columns are sortable" text, ensure that tables are always shown sorted by one of the columns and that that column's header shows the arrow indicating the sort state. Currently, tables are already initialized sorted by the first column, but for some reason the arrow is hidden, so users miss this important UI cue. The w3.org "sortable table" example is some prior art here.

Should these reports be optional (requested with a switch) or always produced?

Always produced.

Does the handling of nested functions and classes make sense?

I wasn't sure about "Inner functions are not counted as part of their outer functions". I wonder if inner functions/classes should just be considered the same as other lines in the implementation of an (outer) function/class (at least by default).

I'm wondering how [this scales to] real projects with thousands of functions...

Is it reasonable to produce one page with every function? How large does a project have to get before that’s not feasible or useful?

Large projects have their code spread across many files in many subdirectories.

To scale better, I've been thinking about whether coverage.py's views should more directly reflect a hierarchy like "directories contain files, files contain classes and functions, classes and functions contain lines".

Each time you drill down the hierarchy, it both adds more detail and removes information you don't care about, kind of like zooming in when using a maps app. Additional "layers" could similarly be enabled to add more detail at the current "zoom level" (as with traffic in a maps app). This seems like one approach to scaling the current UI to better handle more code as well as more features of that code.

Concretely, "coverage html" could generate a tree of views that reflects the way the code is organized into subdirectories. The top-level report would show the stats for each file in the top-level directory, but if there is code in any subdirectory, the top-level report would only include a single entry for each subdirectory with its aggregate stats, and the files in that subdirectory would not be shown. You could then click a subdirectory to see the view for the files in that subdirectory. (This matches genhtml and gcovr's current behavior, fwiw.)

(Currently coverage.py provides a single view for file-level coverage that shows all files in all subdirectories combined together. To focus on files in a certain directory, you currently have to use the "filter" input, which doesn't scale as well.)

With this in mind, a new per-file view would be able to show the class- and function-level coverage for just the classes and functions in a given file, in addition to the current line-level view that is generated for each file.

Finally, each directory-level view would provide a link to a report that would show the class- and function-level view for all classes and functions below that directory level (regardless of file).

Still thinking about this but didn't want to let any more time pass before I shared the rough thoughts.


nedbat commented Apr 17, 2024

I appreciate the ideas. One benefit of a flat view is that you can sort it by coverage to focus on what needs work no matter where it is.

jab commented Apr 17, 2024

One benefit of a flat view is that you can sort it by coverage to focus on what needs work no matter where it is.

Agreed. That's what I had in mind with:

Finally, each directory-level view would provide a link to a report that would show the class- and function-level view for all classes and functions below that directory level (regardless of file).

So the top-level (root directory) report would link to a function/class-level coverage report that included all functions/classes (regardless of file or sub(sub,...)directory), preserving what's currently possible. The same could be done for all files (regardless of sub(sub,...)directory), again preserving what's currently possible.


One idea I realized I omitted above: Instead of separate views for class- and function-level coverage, would it be worth considering a combined "class and function coverage" view that merged the class and function data together, and two checkboxes would allow you to toggle them each off individually if desired? (Or maybe something even more like "layers" in a maps app, but I'm not sure what.)

jab commented Apr 17, 2024

Just wanted to add, I'm guessing that reorganizing coverage.py's reports in the way I described is probably a lot more work than you were thinking of doing for this.

Since lcov/genhtml already supports function-level coverage as well as producing trees of reports that match the directory structure, would coverage lcov && genhtml ... now work for viewing function-level coverage? That is: use coverage.py to collect coverage data and convert it to lcov format, then use genhtml to generate HTML reports from the lcov data, fitting into the alternative, directory-oriented paradigm that genhtml uses. If so, that also sounds like a nice way to decouple shipping coverage.py's progress on generating function-level data from its progress on presenting the new data. A sketch of the pipeline follows.
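
Concretely, something like this (assuming a coverage.py release whose lcov output includes function records, and lcov's genhtml on the PATH; the output directory name is arbitrary):

coverage run -m pytest
coverage lcov -o coverage.lcov
genhtml coverage.lcov --output-directory lcov-html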


nedbat commented Apr 17, 2024

Thanks for the pointer to genhtml. I'll experiment with that as an escape hatch for people who want more involved reports (once the lcov output has function data).


nedbat commented Apr 18, 2024

ensure that tables are always shown sorted by one of the columns and that the header for that column shows the arrow to indicate the sort state. Currently tables are already initialized sorted by the first column, but for some reason the arrow is hidden, so users miss this important UI cue.

This is a good point. I didn't realize the initial sorting isn't indicated. I've changed the report so that it's always sorted by something, and the sorting is indicated with more obvious triangles in commit a3dff72. Thanks.


nedbat commented Apr 23, 2024

(oops, forgot to mention this issue in the changelog, but) this is now released as part of coverage 7.5.0. Try it out!

@nedbat nedbat closed this as completed Apr 23, 2024
@nedbat nedbat added the fixed label Apr 23, 2024
nedbat added a commit that referenced this issue Apr 23, 2024