Make getting test results from fixtures easier #230

Closed
pytestbot opened this issue Nov 21, 2012 · 37 comments
Labels
type: enhancement new feature or API change, should be merged into features branch

Comments

@pytestbot
Contributor

Originally reported by: Pedro Rodriguez (BitBucket: prodriguez320, GitHub: prodriguez320)


The current way is too complex for how simple this should be.


@pytestbot
Contributor Author

Original comment by Ronny Pfannschmidt (BitBucket: RonnyPfannschmidt, GitHub: RonnyPfannschmidt):


Could you give an example of what you mean?

"Getting test results from fixtures" does not compute:

fixtures do not have test results,

tests have a result,

tests use fixtures.

@pytestbot
Contributor Author

Original comment by Pedro Rodriguez (BitBucket: prodriguez320, GitHub: prodriguez320):


There is no way (well, there is, but it is complex) of reacting to a test result in finalizers (doing or reporting something extra when a test passes or fails).

Holger Krekel and I had this conversation: http://stackoverflow.com/questions/13364868/in-pytest-how-can-i-figure-out-if-a-test-failed-from-request/13375241

He told me to file an issue for it. Sorry if I was too vague.

@pytestbot pytestbot added the type: enhancement new feature or API change, should be merged into features branch label Jun 15, 2015
@pfctdayelise
Contributor

As there is now an example in the docs about how to do this (see http://pytest.org/dev/example/simple.html#making-test-result-information-available-in-fixtures ) I will close this. If someone thinks it really deserves better support, feel free to reopen.

@RonnyPfannschmidt
Member

There should be a less expensive way to track it

@flub
Member

flub commented Jul 26, 2015

The example should also probably use a hookwrapper now

@RonnyPfannschmidt
Member

true

@flub
Member

flub commented Jul 26, 2015

@RonnyPfannschmidt So looking at this, I'm not sure what you have in mind for "less expensive". I'm kind of tempted to just put the reports on the item by default; that's not hard, not that expensive I would think, and it makes using this a lot simpler. On the other hand, making the report available by default gives this usage some more official endorsement, and that may or may not be a good thing.

@RonnyPfannschmidt
Member

Reports take up memory; it's already expensive to track what junitxml tracks on huge suites, and it gets more horrific when everything remembers everything.

@arosszay

arosszay commented Jun 2, 2016

Hi guys,

[EDIT: First off, I really should thank Holger for being so kind AND taking the time and effort to provide us a way, albeit complex and indirect, but at least a possible way to check the status of a test. I have yet to read the example in detail, but I am assuming my comments here are still valid. I want to make sure that my appreciation for what you all have done to help the QA community out thus far is not missed by the frustration I lay out in my comment. Please keep this in mind as you read.]

I am a software automation QA engineer and I can tell you that having the ability in our automation framework code to determine if a previously run test has failed or has passed is CRITICAL!!! I am dumbfounded that this isn't recognized as a critical function to have available in a super easy way, as described by Pedro Rodriguez.

This is what we, automation engineers, want from pytest:

  1. Write a fixture function
  2. Within the teardown of the fixture function, we want a super easy way to check an attribute to determine if a test has failed or passed...something like (but it doesn't have to be...after all I'm not a developer; if I was, I'd probably have already made these changes in pytest myself) request.function.fail or previous_test_status or something simple like that.

What memory issues are you guys talking about? How hard is it to set an attribute somewhere indicating what the status of the last running test was? I don't understand why memory would even be a consideration here. What are you guys concerned about with this?

Lastly, I just want to say that the only reason I sound frustrated is because I am. Because I know that if pytest was a paid for product, this would never be an issue. And us poor QA guys like me, who DO NOT really know how to code, are stuck with open-source software that is just not fully up to par...and we've been waiting for years just to get a simple little function like the one we're talking about included in the tool we literally depend on 100% to do our work. (This issue was reported back in 2012!!)

If it's that much of a problem, why don't you guys just start making everyone pay for the product and actually make something awesome for the world of QA to use? Our jobs DO NOT have to suck because of you. Unfortunately, the reality is that our jobs do suck because of half-baked open-source software.

How much do you guys want to be paid to add this function? I bet you I can convince my company, a tiny bootstrapped startup even, to fork over 500 bucks to develop this simple function. Any takers on that? It would be nice to get this new functionality out the door in the next 3 weeks.

Thanks guys for all the hard work. I know it sucks working for free, but why did you choose to do that for such an important project that literally every single QA automation engineer using Selenium + Python depends on? Please, we really need pytest to work the best for us.

Thanks again for all the hard work thus far guys...and I'm serious about you guys thinking about how to start charging for this tool. It's important enough trust me...companies that are using it now will DEFINITELY pay for it. All of you will see money start coming in.

In any case, when do you think we can get this new function available in pytest?

@The-Compiler
Member

Please tone it down a bit. Blaming people for not doing the thing you want them to do in their free time isn't helping anybody. Is there a paid Python test runner which is better than pytest (since pytest is, apparently, not "up to par")? I certainly am not aware of one.

And no, it doesn't suck to work for free. I'm going to argue pytest, or Python, or a lot of other free software (as in freedom) would never be where they are if they were paid and proprietary. What sucks is reading pointless rants like this. In fact, this is a perfect way to ensure people are going to be more motivated to work on something else instead.

That being said, Merlinux (the company of @hpk42) is offering paid feature development for pytest, contracting pytest core developers to do the work. There are by far not enough companies doing so for someone to be able to work on pytest full-time. Can you change that?

@nicoddemus
Member

nicoddemus commented Jun 2, 2016

@arosszay, I understand your frustration, but I agree completely with @The-Compiler's points. It is free software, people work on it in their own time, and we always try very hard to be accepting of external contributions so people in need of a certain feature can code it themselves. I understand the latter is not an option for you guys because you mention you're not coders, but as Florian mentioned, Merlinux accepts paid feature development, so perhaps that would be a solution to your specific problem (a somewhat recent example of that was the implementation of restarting slaves after a test crash in pytest-xdist).

About keeping the reports in memory by default, it was a concern a few years ago: pytest was initially developed to run pypy's test suite which contains 20k+ tests, so keeping test results in memory was a concern back then. It might not be a problem anymore, but I particularly like pytest's current design of triggering hooks and keeping state to a minimum; this permits people to implement plugins which keep that information in memory if they want to.

Having said that and using the sample code provided, it seems easy to provide a plugin which would append that information automatically to all test items.

@astraw38

astraw38 commented Jun 2, 2016

I think probably one of the best things about pytest is the ability to extend and adapt it to your needs. If you need it to have the capability to track the last test's status, it's relatively easy to do that in a conftest.py or a plugin.

Building every request into the core functionality of the project only adds more and more WORK for the core maintainers. They care about more than just your use case.
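
As a concrete illustration of that suggestion, a minimal conftest.py that tracks the last test's status could look roughly like this (a sketch only; the LAST_TEST_STATUS name and the react_to_result fixture are invented for illustration, they are not pytest API):

# conftest.py
import pytest

LAST_TEST_STATUS = {}  # e.g. {"nodeid": ..., "outcome": ...}

def pytest_runtest_logreport(report):
    # remember the outcome of the "call" phase of whatever test just ran
    if report.when == "call":
        LAST_TEST_STATUS["nodeid"] = report.nodeid
        LAST_TEST_STATUS["outcome"] = report.outcome  # "passed", "failed" or "skipped"

@pytest.fixture
def react_to_result():
    yield
    # by the time a function-scoped teardown runs, the "call" report
    # of the current test has already been logged above
    if LAST_TEST_STATUS.get("outcome") == "failed":
        print("test failed:", LAST_TEST_STATUS["nodeid"])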

@arosszay

arosszay commented Jun 2, 2016

@The-Compiler , tone it down for what? My tone accurately conveys the situation I find myself in, so I'd rather express the truth than lie.

I understand how my complaint is a non-issue for developers. That is exactly why I said something. Because non-developers are using the pytest product.

Because pytest is open-source does not excuse it from being a product that customers use.

Yes, pytest has customers believe it or not. The customers might not be paying for the product, but we are customers nonetheless.

POINT #1) If we had the option to pay, I’m sure a huge percentage would pay for the better product rather than not pay for the current free version of the product. Why not just make pytest free in its current form…this makes it available for folks to use for free and be open-source yada yada yada…but all new development on pytest is only available for people who pay for the product? Pretty simple solution to getting money to the guys who pour their blood, sweat, and tears into this project. But who knows…I’m ignorant about a great many things, and there’s probably a bunch of factors that I’m ignorant of that make my suggestion silly. I dunno, I’m just trying to brainstorm solutions to a problem I perceive. Plus, just the idea that people are doing real hard work and not getting financial compensation for that work infuriates my soul. You guys SHOULD be getting paid. I’m shocked to find out that you prefer otherwise!!! …that being said, I will look into the link provided about the pay-for-service option with pytest. Thank you for informing me of this :)

POINT #2) How would you feel if the hotel you booked…the only hotel in the city that you need to travel to that is (because there is no competition and another hotel doesn’t exist)…how would you feel if that hotel didn’t include a bathroom for your use? Would you be a satisfied customer of said hotel? Probably not. Most likely, you would be thinking how it is possible that a hotel was built with zero bathrooms. Well, maybe the developer of the hotel had different things in mind. The customers that actually use the hotel, however, have other concerns…like wanting to use a bathroom in the place of temporary residence they booked.

Furthermore, how would you then feel if the developer of said hotel basically told you to f*** off after you filed said complaint with the hotel? He recommended, “go build your own bathrooms in the hotel if you really feel strongly about it..this is an open source hotel project after all, don’t you get it bro?” My guess is that you would feel pretty miffed. Why? Because you won’t be able to develop the skills necessary, in the time you have before you need to use the hotel, to build the damn bathrooms in it. Plus, you probably have other things to do with your life and don’t necessarily find it an amazing opportunity to learn how to build bathrooms in hotels.

Given this analogy, I hope you are more capable of empathizing with my situation. I displayed my empathy in my comment for folks who work on open-source code (i.e. the product developers). I expect product developers to show empathy towards their customers. @The-Compiler, why have contempt for customers of your product? Being a product developer, isn’t it in your best interest to delight your customers as opposed to chastising them, disregarding their concerns, and telling them that communicating their concerns will result in their concerns being ignored as a conscious decision?

You are right, I’m sure it’s very free (freedom) to work on open-source projects where a boss can’t fire you for being a complete asshole. You are definitely lucky there. I’m sure your attitude is 100% kosher at the job that provides you with a paycheck, however. I’m sure at that job, delighting your customers is taken for granted. If you want to be a good product developer, I suggest you bring that kosher attitude to all products you develop. You might be surprised with a, “Thank you for making pytest the best f***ing tool ever” comment instead of what I gave you today. But keep being upset at your customers for telling you what they want. Maybe that’s the kind of human being you want to be.

Holger, @hpk42, again…my UTMOST thanks and appreciation for giving us at least an example we can go by to achieve the functionality that us lowly non-coder-skilled automation engineers need to use in order to delight our customers (i.e. the companies that hire us to make sure their software development’s quality is up to par) as best as possible.

Am I frustrated that pytest isn’t perfect in my eyes? Yes. Is that frustration okay? Yes. Should product developers be upset that I’m frustrated? No. Should they be concerned? If they care about their customers, I think so. Do I blame anyone working on pytest for it not being perfect according to my needs? Most certainly not. I understand things are not ideal. Should a customer voice their frustration and concerns to the developers of a product they depend on? In a free and democratic society, I certainly hope so. Should those customer frustrations and concerns be well received by product developers? Yes.

@arosszay

arosszay commented Jun 2, 2016

@astraw38, well said and I agree. Unfortunately for a guy like me, the thought of extending pytest to adapt to my needs is similar to the thought of climbing Mt. Everest for someone who has never climbed a mountain before (i.e. the thought is kind-of terrifying).

Building every request into the core functionality would not be a smart business decision, no matter what business you are in. I don’t recall asking for that though. From my POV, what I am asking for is obviously a core function…much like the bathroom in the hotel analogy I provided above. If developers of pytest don’t see this as a core function…that’s totally cool. I’m here telling you that customers of your product DO see what I’m asking for as a core function.

For the record, my use case applies to all companies that are using the Python bindings of Selenium within the context of an in-house custom built GUI automation framework. I’d be floored to be informed I’m the only guy that finds themselves in this situation.

@RonnyPfannschmidt
Member

Please stop acting like we are not fulfilling a contract.

You are verbally abusing volunteers without any right whatsoever, as far as I can tell.

If that is how you want to carry yourself, please do it somewhere where you pay the people for taking it.

We are not paid to take that, and our free time is valuable to us; we do not want to have it wasted by someone acting entitled.

@arosszay

arosszay commented Jun 2, 2016

@RonnyPfannschmidt, I concede to your point if you agree that I am not a customer of pytest. Otherwise, I am not being abusive and I am well within my rights. I will concede my point and agree that I am abusive if in fact folks like me should not and are not considered customers of open-source products like pytest.

If that reality becomes evident to me, based on feedback from you guys, I will happily shut the f*** up, as I have no rights here. But please, help me get on the same page so I'm not wasting my time nor yours.

@The-Compiler
Member

As long as you don't pay for pytest development, you are not a customer of pytest, and nobody has any obligation to work on anything except on whatever they feel like working on.

The best way to get a feature implemented in a project full of volunteers is convincing them it's something they want to work on - either by asking nicely, paying someone for it, or contributing it yourself.

@RonnyPfannschmidt
Member

@arosszay for the points of your list

  1. if pytest was not completely free as in freedom, I would never have joined the effort and turned into someone who does its maintenance and releasing; neither would many of the core contributors
  2. comparing free open-source software to a hotel gives off a very explicit impression of your understanding - I can't even begin to describe my sick gut feeling given a comparison so horrific and misfit

@nicoddemus
Member

Guys, let's all take a breather, I think this discussion is getting a little too heated. 😬

Let's recapitulate: the problem in this issue has been around for a few years now, but I think there are two important points to consider here:

  1. The core developers are not entirely sure this functionality should be in the core; and
  2. There is already a 3-line workaround that does the job.

I'm pretty sure 2) is the reason why this didn't get more attention: a simple workaround for a not-so-common issue is usually OK (and yes, this is not as common as one might think). We (maintainers) are usually wary of introducing new functionality, because we risk introducing new issues for (the many) existing pytest users. A functionality that might seem obvious and critical for some might introduce bugs or be a nuisance to others.

That's not to say we (maintainers) don't care about users, on the contrary. The problem is we all have limited time, so we must choose what issues to tackle on the free time we have to work on pytest. If an issue has a simple workaround (as this one) it is usually put in the background.

Having said that, if a user politely demonstrates why the workaround is a problem, people are usually more willing to help them (maintainers and users alike).

@arosszay you never before commented on the urgency of this issue for you, and why you couldn't apply the workaround pointed out by @pfctdayelise. In our minds, this is a small enhancement with a known workaround that has been put in the background for more urgent issues. I'm sure if you had expressed your concern in a more polite manner before you got so frustrated (I'm assuming you have been feeling like that for a while), someone might have stepped in and helped you. I understand your frustration, but a little politeness here and there goes a long way. 😁

To summarize, and to avoid letting this become a flame war, @arosszay could be helped to implement the workaround in his infrastructure, or even be guided to write a plugin and publish it on PyPI. I would gladly help you in this regard, btw.

@arosszay

arosszay commented Jun 2, 2016

@nicoddemus, well dang. Now my response is pointless, but I will keep my thoughts here in the comment for the record and just because I have a stupid ego that cares that I spent the last 20 minutes responding to @The-Compiler. Feel free to read those thoughts below the ---- cutoff. For the record though, I acknowledge that everything written below the ------cutoff is invalid now after @nicoddemus's response, and should be considered 'comedy material' for your reading pleasure.

In the meantime, @nicoddemus, your reply was actually constructive, so thank you for that. I'm serious. If you guys knew me, you'd know I'm the guy in the office that gets fired up rather quickly about stuff...that's just my personality and it's all good. Kinda like being gay, but just high energy instead. I can't choose this. This is something I am naturally. Choosing something different is equivalent to the gay person 'choosing' to be straight. Unfortunately, your assumption that I was frustrated by this for a long time is wrong....I just straight freaked out on the spot after discovering this functionality wasn't available by default. Again...that's my personality...I understand that other people find it too aggressive and what not. I apologize for being me, if that helps you guys communicate with me.

I tried to anticipate negative reaction to my frustration best I could with my EDIT, but clearly it was a failed effort. It's okay, we all live and learn and us engaging in this thread hopefully added some kind of value for each of us as human beings. @nicoddemus, with everything you said in your comment...I now REALLY understand the volunteers perspective on my concern. THANK YOU FOR THAT!!! Super helpful :)

With that knowledge now in my possession, I officially retract my request for this feature being added in pytest. I will, as I was going to do anyway, take a thorough and detailed look at the example workaround provided by Holger. Hopefully I'll be able to figure things out and all will be good in the land. If not, @nicoddemus, I'll humbly consider asking for help...which, for some reason, I thought would be a much bigger ask than implementing a feature...but, hey, we're all learning!

Thanks again for the helpful information and context.

cutoff-----------------------------------------------------------------------------------------------------------------cutoff

@The-Compiler, okay thank you for that. Now I understand.

I wrongly assumed that volunteers would be motivated to implement a feature if they discovered the users of the product they built were frustrated with their product. I wrongly assumed you guys would react the same way I would if I were you. No worries, it's all good. I gave it my best shot. More importantly, however, I will heed your good advice, and the next time I find myself in a situation where I need something from volunteers on an open-source project, I promise you I will:

#1) not assume they are product developers, but keep in mind that they are volunteers ONLY

#2) communicate with volunteers by asking nicely for the things that will make my employer happy

#3) pay someone to develop things that I, myself, need (this is not one of those times, but the job I personally hired ChopDawg to do for me is!) I will also inform my employer of the difficulties I'm having and they will determine whether or not it's worth the cost to pay for development of pytest.

As much as I'd love to, I will not however, contribute developing this feature for pytest, I do not have the skills to do that and I am unwilling to take the effort to learn the necessary information to be able to do that. That's a ridiculous request of a non-developer. Again...just as ridiculous as to ask the guy who booked the hotel room to go and build his own bathroom. I understand that it's possible for me to jump in and do what I want, but I am telling you I can't, but that still doesn't relieve me of my dependence on your work and efforts. What's a brother to do yo? Nothing obviously. I have no power except to rant and rave in a comment thread hoping someone will feel my pain and help me out..but this is the last of it. Never again. I understand I'm not a customer now. I wish things were different. I wish we lived in a good world where goodness thrives and people help each other. I know better though, especially evidenced by the responses to my comment, that I live in an evil world where evil thrives and people are reluctant to lend a helping hand. An evil world where the only thing people are willing to do are the things that benefit themselves in some way. Never-mind if said person created something that other people now increasingly depend on. That's not MY PROBLEM after all. Gee, thanks fellow human!

@The-Compiler
Member

Thanks @nicoddemus for cooling things down, and apologies from my side as well if things got heated up a bit too much. I also wrote my initial response before/during (?) your edit, so I never saw that in the first place.

It's not that we don't want to help you or we don't care about issues people are having with pytest - but as @nicoddemus said: With 300 open issues, and a handful of active core developers who do this in their spare time and probably all have other hobbies as well, there's only so much we can do... 😉

@nicoddemus
Member

Just to get the ball rolling: to attach the report objects to test items, all you need to do is add this to your root conftest.py file:

import pytest

@pytest.hookimpl(hookwrapper=True, tryfirst=True)
def pytest_runtest_makereport(item, call):
    # execute all other hooks to obtain the report object
    outcome = yield
    rep = outcome.get_result()
    # attach the report of each phase ("setup", "call", "teardown") to the item
    setattr(item, "rep_" + rep.when, rep)

Now your fixtures can inspect request.node (which is an item):

import pytest

@pytest.fixture
def log_on_failure(request):
    msgs = []
    yield msgs.append
    item = request.node
    if item.rep_call.failed:
        print('\nlogged messages:', item.nodeid)
        for m in msgs:
            print("- %s" % m)

Example:

def test_1(log_on_failure):
    log_on_failure('starting')
    assert 1 + 1 == 2
    log_on_failure('step 2')
    assert 2 * 4 == 4
    log_on_failure('step 3')
    assert 8 == 3
    log_on_failure('step 4')


def test_2(log_on_failure):
    log_on_failure('starting')
    assert 1 + 1 == 2
    log_on_failure('step 2')
    assert 2 * 2 == 4
    log_on_failure('step 3')
    assert 0
    log_on_failure('step 4')    

Running this with py.test -s you can see in the output:

test_foo.py F
logged messages: test_foo.py::test_1
- starting
- step 2

Turning that into a reusable plugin is a piece of cake if you use cookiecutter-pytest-plugin. 😉
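
For a rough idea of what the packaged form looks like (cookiecutter-pytest-plugin generates this scaffolding for you; the module and distribution names below are invented), a plugin is just a module exposing the hook, registered under the pytest11 entry point:

# pytest_report_attach.py (invented module name)
import pytest

@pytest.hookimpl(hookwrapper=True, tryfirst=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    rep = outcome.get_result()
    # attach the report of each phase ("setup", "call", "teardown") to the item
    setattr(item, "rep_" + rep.when, rep)

# setup.py (sketch)
from setuptools import setup

setup(
    name="pytest-report-attach",  # invented distribution name
    py_modules=["pytest_report_attach"],
    install_requires=["pytest"],
    entry_points={"pytest11": ["report_attach = pytest_report_attach"]},
)

Once installed (for example with pip install .), the hook is active in every test run without needing a copy in each conftest.py.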

@arosszay

@nicoddemus, since you were so amazing at helping me, I want to show my appreciation for your efforts with a financial contribution. If you are willing to give me your PayPal email, I will send over some loot for you because, honestly, your code was the only thing that got me up and running. I tried to figure it out on my own first, by looking at the example Holger provided in the pytest docs...but alas could not get it to work for the life of me. I really wanted to do it on my own since you guys gave me such a hard time over here, but I couldn't do it and ended up having to come back here and use what you provided. After having this experience, I really feel like I owe you big time, @nicoddemus, because you saved my butt!! Thank you for being so awesome!!! Hit me up with that PayPal if you want some monies...I owe you!

@nicoddemus
Member

Hey @arosszay!

A token of appreciation would certainly be welcome, although by no means required. My PayPal account is the same email address as in my profile (GH's username @ gmail). Thanks! 👍 😁

@arosszay

arosszay commented Aug 11, 2016

@nicoddemus, will you believe it?! I got the following error from PayPal: "Sorry, this recipient can’t accept personal payments." I used your GH username @ gmail, as you specified.

If you can confirm or do something about this, and update me, I will try and send you funds again. Please advise!!

@nicoddemus
Member

@arosszay could you drop me a line by email? This is off-topic for this thread. 😁

@victoragung

I'm trying to use the example here https://docs.pytest.org/en/latest/example/simple.html#making-test-result-information-available-in-fixtures

However, it doesn't seem to work when I use scope="module" on the fixture. Can I please get some pointers on how to make it work for module scope?

@nicoddemus
Member

@pappavic in a module-scope fixture, request.node is actually the module object. You will have to adapt the pytest_runtest_makereport hook to store that information in the module object instead, probably in a dict:

# conftest.py
import pytest

@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # execute all other hooks to obtain the report object
    outcome = yield
    rep = outcome.get_result()
    _test_reports = getattr(item.module, '_test_reports', {})
    _test_reports[(item.nodeid, rep.when)] = rep
    item.module._test_reports = _test_reports

# test_module_rep.py
import pprint
import pytest

def test_foo(something):
    pass

@pytest.fixture(scope='module')
def something(request):
    yield
    print()
    pprint.pprint(request.node.obj._test_reports)

Output:

module_rep\test_module_rep.py .
{('test_module_rep.py::test_foo', 'call'): <TestReport 'test_module_rep.py::test_foo' when='call' outcome='passed'>,
 ('test_module_rep.py::test_foo', 'setup'): <TestReport 'test_module_rep.py::test_foo' when='setup' outcome='passed'>}

@nicoddemus
Member

I think it is time to close this issue.

@pappavic feel free to follow up with more questions about #230 (comment). 👍

@victoragung

@nicoddemus Got it, thank you!

@toosto

toosto commented Nov 25, 2021

@nicoddemus With respect to making https://docs.pytest.org/en/latest/example/simple.html#making-test-result-information-available-in-fixtures work for module scope, you have provided a solution above. Could you please provide an alternative snippet where I would use the same hook wrapper code as written on that page, but instead access the results from the module node (request.node in a module-level fixture)?
Basically, I want to find a way to reach the test items from the module node and read the report/outcome from them, instead of setting the result on the test item's module.

I want to fetch the corresponding test results in the module's tear down code.

@flub
Member

flub commented Nov 30, 2021 via email

@toosto

toosto commented Nov 30, 2021

I want to decide whether I should tear down the module fixture's setup, based on whether the associated tests have failed.

@nicoddemus
Member

As @flub commented, you can't access the current test from a module fixture by definition. The only way to have some more fine-tuned control would be to implement pytest_runtest_teardown yourself, and from there access the module fixture data somehow (probably through a global) when your desired condition happens.

@toosto

toosto commented Nov 30, 2021

Maybe I didn't write it clearly. Apologies. Here is, hopefully, a better version of my problem:

If my module contains 4 tests (A, B, C, D), and 3 of them (A, B, C) use the same module-scope fixture (say mod_fix_1), while the 4th (D) doesn't use mod_fix_1, I want to find/check the results of tests A, B, C (because they all use mod_fix_1) in the teardown code of mod_fix_1, and perform the teardown only if all of A, B, C have passed.

I hope that makes sense.

@nicoddemus
Member

Hi @toosto,

Thanks for the explanation, I now understand better what you mean.

However, high-scope fixtures don't track which tests use them; you will need to do it yourself using hooks, as in the examples in this thread.
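
A sketch of that bookkeeping for the A/B/C/D scenario might look roughly like this (the mod_fix_1 name comes from the scenario above; the _mod_fix_1_outcomes attribute and the stand-in resource are invented for illustration):

# conftest.py
import pytest

@pytest.hookimpl(hookwrapper=True, tryfirst=True)
def pytest_runtest_makereport(item, call):
    # attach the per-phase reports to the item, as in the recipes above
    outcome = yield
    rep = outcome.get_result()
    setattr(item, "rep_" + rep.when, rep)

def pytest_runtest_teardown(item, nextitem):
    # record the outcome of every test that requested mod_fix_1
    if "mod_fix_1" in getattr(item, "fixturenames", []):
        rep = getattr(item, "rep_call", None)
        outcomes = getattr(item.module, "_mod_fix_1_outcomes", [])
        outcomes.append(rep is not None and rep.passed)
        item.module._mod_fix_1_outcomes = outcomes

# test_something.py
import pytest

@pytest.fixture(scope="module")
def mod_fix_1(request):
    resource = {"connected": True}  # stand-in for the real setup
    yield resource
    outcomes = getattr(request.module, "_mod_fix_1_outcomes", [])
    if outcomes and all(outcomes):
        # only tear down when every test that used the fixture passed
        resource["connected"] = False  # stand-in for the real cleanup

A test that errors during setup has no rep_call; the getattr default above counts that as a failure, so the teardown is skipped in that case too.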

@toosto

toosto commented Nov 30, 2021

Thank you!
