gh-74690: typing: Call _get_protocol_attrs and _callable_members_only at protocol class creation time, not during isinstance() checks #103160

Merged · 9 commits · Apr 5, 2023

Conversation

@AlexWaygood (Member) commented Apr 1, 2023

This PR proposes caching the results of _get_protocol_attrs() and _callable_members_only(), so that they only need to be computed once for each protocol class. This hugely speeds up calling isinstance() against runtime-checkable protocols on the "second call", for all kinds of subtypes of a runtime-checkable protocol. There is, however, a small behaviour change:

>>> from typing import *
>>> @runtime_checkable
... class Bar(Protocol):
...     x: int
...
>>> class Foo:
...     def __init__(self):
...         self.x = 42
...
>>> isinstance(Foo(), Bar)
True
>>> Bar.__annotations__["y"] = int
>>> isinstance(Foo(), Bar)  # Evaluates to `False` on `main`; `True` with this PR

It seems pretty unlikely that anybody would be doing that, though (monkey-patching methods or the __annotations__ dict on a protocol class itself). Do we care about the behaviour change? Is it worth documenting the behaviour change, if we do decide it's okay?

Here are the benchmark results on my machine for this PR:

Time taken for objects with a property: 1.76
Time taken for objects with a classvar: 1.69
Time taken for objects with an instance var: 2.38
Time taken for objects with no var: 7.28
Time taken for nominal subclass instances: 19.92
Time taken for registered subclass instances: 11.60

And here's the same benchmark on main:

Time taken for objects with a property: 3.14
Time taken for objects with a classvar: 3.14
Time taken for objects with an instance var: 11.57
Time taken for objects with no var: 15.26
Time taken for nominal subclass instances: 24.60
Time taken for registered subclass instances: 21.32

(The benchmark is pretty skewed towards showing a good result for caching, since it just calls isinstance() 500,000 times against the same runtime-checkable protocol.)

Benchmark script
import time
from typing import Protocol, runtime_checkable

@runtime_checkable
class HasX(Protocol):
    x: int

class Foo:
    @property
    def x(self) -> int:
        return 42

class Bar:
    x = 42

class Baz:
    def __init__(self):
        self.x = 42

class Egg: ...

class Nominal(HasX):
    def __init__(self):
        self.x = 42

class Registered: ...

HasX.register(Registered)

num_instances = 500_000
foos = [Foo() for _ in range(num_instances)]
bars = [Bar() for _ in range(num_instances)]
bazzes = [Baz() for _ in range(num_instances)]
basket = [Egg() for _ in range(num_instances)]
nominals = [Nominal() for _ in range(num_instances)]
registereds = [Registered() for _ in range(num_instances)]


def bench(objs, title):
    start_time = time.perf_counter()
    for obj in objs:
        isinstance(obj, HasX)
    elapsed = time.perf_counter() - start_time
    print(f"{title}: {elapsed:.2f}")


bench(foos, "Time taken for objects with a property")
bench(bars, "Time taken for objects with a classvar")
bench(bazzes, "Time taken for objects with an instance var")
bench(basket, "Time taken for objects with no var")
bench(nominals, "Time taken for nominal subclass instances")
bench(registereds, "Time taken for registered subclass instances")

@AlexWaygood added the type-feature, performance, DO-NOT-MERGE, stdlib, topic-typing, and 3.12 labels Apr 1, 2023
@AlexWaygood removed the request for review from gvanrossum April 1, 2023 00:04
@AlexWaygood (Member, Author)

Removing request for review from @gvanrossum since he's on vacation :)

@AlexWaygood (Member, Author)

Cc. @posita, for interest :)

@carljm (Member) commented Apr 1, 2023

Although I agree that runtime patching of a Protocol is pretty unlikely in practice, I still find this change uncomfortable, because if any runtime patching does occur, the new behavior is somewhat unpredictable and hard to specify accurately. Effectively, the state of the Protocol will be "locked in" whenever the first isinstance check against it occurs. So patches that take effect before that will be respected, and patches after that won't be.

I feel like "runtime patching of a Protocol is not ever respected" would be a defensible behavior, but this inconsistent behavior seems like something we shouldn't adopt.

The ideal behavior would probably be "runtime patching of a Protocol is not even possible and errors immediately" (and then we could safely make this optimization), but that's quite difficult or impossible to implement.

@JelleZijlstra (Member)

There is some similar caching for ABCs. I'm not too familiar with how it works, but it might be worth looking into how that cache deals with monkeypatching. If we ignore monkeypatching there, it might be fine to do the same for Protocols.

@AlexWaygood (Member, Author) commented Apr 1, 2023

I feel like "runtime patching of a Protocol is not ever respected" would be a defensible behavior, but this inconsistent behavior seems like something we shouldn't adopt.

We could just calculate the __protocol_attrs__ and __callable_proto_members_only__ at protocol-class creation time (in Protocol.__init_subclass__ or _ProtocolMeta.__new__) rather than at first-isinstance()-call-time. That would slow down protocol class creation a little, but probably not by much, and protocol class creation is only ever done once. Would you like that behaviour more?
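
For concreteness, here's a toy standalone sketch of the idea (the class and attribute names are made up for illustration and aren't what typing.py would use): collect the structural members once, when the class is created, and have __instancecheck__ only consult the precomputed set.

class _StructuralMeta(type):
    def __instancecheck__(cls, instance):
        # Only the precomputed attribute set is consulted here; nothing is
        # recomputed per isinstance() call.
        return all(hasattr(instance, attr) for attr in cls.__structural_attrs__)

class StructuralBase(metaclass=_StructuralMeta):
    __structural_attrs__ = frozenset()

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # Computed exactly once, at class-creation time.
        cls.__structural_attrs__ = frozenset(cls.__dict__.get("__annotations__", {}))

class HasX(StructuralBase):
    x: int

class Point:
    def __init__(self):
        self.x = 42

print(isinstance(Point(), HasX))    # True
print(isinstance(object(), HasX))   # False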

@carljm (Member) commented Apr 1, 2023

We could just calculate the __protocol_attrs__ and __callable_proto_members_only__ at protocol-class creation time (in __init_subclass__) rather than at first-isinstance()-call-time. That would slow down protocol class creation a little, but probably not by much, and protocol class creation is only ever done once. Would you like that behaviour more?

I would, yeah. It seems like a consistent behavior that we could document, that the state of a runtime-checkable Protocol is locked in when it is defined and runtime patching of the Protocol class has no effect.

It also seems unlikely that someone would bother to decorate a Protocol as runtime-checkable and then never runtime check anything against it, so I'm not sure how often the laziness would offer a benefit?

It's entirely possible I'm being too conservative here, though -- if @JelleZijlstra is right that there is already similar caching in ABCs with similar issues, that would be a strong signal that we don't need to care here.

@AlexWaygood (Member, Author) commented Apr 1, 2023

It also seems unlikely that someone would bother to decorate a Protocol as runtime-checkable and then never runtime check anything against it, so I'm not sure how often the laziness would offer a benefit?

Maybe a library that provides a large number of runtime-checkable protocols as utilities? Doing the calculation of these attributes at class-creation time could plausibly slow down the import of that library, and the user of the library might end up only using the classes for type annotations, never doing any isinstance() checks.

But, doesn't really feel like a big deal.

It's entirely possible I'm being too conservative here, though -- if @JelleZijlstra is right that there is already similar caching in ABCs with similar issues, that would be a strong signal that we don't need to care here.

I also have no idea how the ABC caching works, but will experiment and get back to you!

@AlexWaygood (Member, Author) commented Apr 1, 2023

Ah yes, here's the ABC caching behaviour:

>>> import collections.abc
>>> class Foo:
...     def __iter__(self):
...         while True: yield 42
...
>>> isinstance(Foo(), collections.abc.Iterable)
True
>>> iter(Foo())
<generator object Foo.__iter__ at 0x000001DBE55B7040>
>>> del Foo.__iter__
>>> isinstance(Foo(), collections.abc.Iterable)
True
>>> iter(Foo())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'Foo' object is not iterable

Given that behaviour, it could well be reasonable to slap a cache on the whole _ProtocolMeta.__instancecheck__ call, similar to the way beartype does it... though I still don't much like the idea of such a big behaviour change tbh.
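
For reference, a rough sketch of what caching the whole check could look like (purely illustrative, not what this PR does; the names here are invented):

from typing import Protocol, runtime_checkable

@runtime_checkable
class HasX(Protocol):
    x: int

# Cache the outcome of the structural check per concrete class, so repeated
# isinstance() calls against the same class become a dict lookup.
_check_cache: dict[type, bool] = {}

def cached_isinstance_hasx(obj: object) -> bool:
    cls = type(obj)
    try:
        return _check_cache[cls]
    except KeyError:
        result = isinstance(obj, HasX)  # full structural check, once per class
        _check_cache[cls] = result
        return result

The catch, and part of why it would be a big behaviour change, is that caching per concrete class gives the wrong answer for protocols with data members: two instances of the same class can differ in which instance attributes they have actually set.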

@AlexWaygood (Member, Author)

I do actually think it will make the code easier to read and understand, apart from anything else, if we do the calculation at class-creation time, rather than lazily as I currently have it in my PR. So I think I will make that change.

@AlexWaygood AlexWaygood changed the title gh-74690: typing._ProtocolMeta.__instancecheck__: Cache results of _get_protocol_attrs and _callable_members_only gh-74690: typing: Call _get_protocol_attrs and _callable_members_only at protocol class creation time, not during __isinstance() checks Apr 1, 2023
@AlexWaygood AlexWaygood changed the title gh-74690: typing: Call _get_protocol_attrs and _callable_members_only at protocol class creation time, not during __isinstance() checks gh-74690: typing: Call _get_protocol_attrs and _callable_members_only at protocol class creation time, not during isinstance() checks Apr 1, 2023
@AlexWaygood (Member, Author) commented Apr 1, 2023

Moving the calls to protocol-class-creation time makes the benchmark even faster (I've updated the numbers in my initial post).

This comes at the cost of making protocol class creation about 40% slower. I think that's actually a reasonable trade-off. Creating the protocol class only needs to be done once, whereas isinstance() checks against the protocol class may need to be done repeatedly. Moreover, a 40% slowdown sounds severe, but I doubt it will actually lead to import times getting slower anywhere. E.g. typing.py defines seven runtime-checkable protocols, but importing typing doesn't seem to get significantly slower (if at all) with this PR.
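
For anyone who wants to reproduce the class-creation measurement, something along these lines works (the iteration count is arbitrary and the numbers will vary by machine):

import timeit

setup = "from typing import Protocol, runtime_checkable"
stmt = """
@runtime_checkable
class HasX(Protocol):
    x: int
    def method(self) -> None: ...
"""
# Compare the result on main vs. this branch to see the relative slowdown.
print(timeit.timeit(stmt, setup=setup, number=10_000))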

Lib/typing.py Outdated
Comment on lines 2000 to 2006
def __init__(cls, *args, **kwargs):
    cls.__protocol_attrs__ = _get_protocol_attrs(cls)
    # PEP 544 prohibits using issubclass()
    # with protocols that have non-method members.
    cls.__callable_proto_members_only__ = all(
        callable(getattr(cls, attr, None)) for attr in cls.__protocol_attrs__
    )
@AlexWaygood (Member, Author)

The only reason I'm adding this method here rather than doing this work in Protocol.__init_subclass__ is that it seems slightly more backwards-compatible. With this PR, _ProtocolMeta.__instancecheck__ assumes that all classes with _ProtocolMeta as their metaclass will have a __protocol_attrs__ attribute. Since _ProtocolMeta is an undocumented implementation detail, it should only be Protocol and Protocol subclasses that use _ProtocolMeta as their metaclass, and if we could count on that, it would be safe to do this work in Protocol.__init_subclass__. But it's possible users have been reaching into the internals of typing.py and creating other classes that use _ProtocolMeta as their metaclass, and I don't want to risk breaking their code unnecessarily.
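
To make that concern concrete, this is the sort of (hypothetical) user code in question:

from typing import _ProtocolMeta  # undocumented implementation detail

# With the metaclass __init__ approach, this class still gets
# __protocol_attrs__ set; with Protocol.__init_subclass__, it would not,
# and __instancecheck__ could then fail for it.
class NotReallyAProtocol(metaclass=_ProtocolMeta):
    pass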

@AlexWaygood (Member, Author)

This change is clearly newsworthy, but I'm adding "skip news" for now -- I'll add a NEWS entry (and possibly a note in "What's new in 3.12") after all the performance-related PRs have been decided on (and possibly merged).

@AlexWaygood (Member, Author)

main and this PR branch are both slower after 6d59c9e, so I've updated the benchmark results in the PR description.

@carljm (Member) left a comment

Looks good to me, assuming NEWS is coming later.

Lib/typing.py (outdated review comment, resolved)
@AlexWaygood (Member, Author)

I confirmed locally that this still provides a big speedup, even with #103160 merged in.

@AlexWaygood merged commit 3246688 into python:main Apr 5, 2023
10 checks passed
@AlexWaygood deleted the protocol-attrs-cache-1 branch April 5, 2023 11:19
gaogaotiantian pushed a commit to gaogaotiantian/cpython that referenced this pull request Apr 8, 2023
…bers_only` at protocol class creation time, not during `isinstance()` checks (python#103160)
warsaw pushed a commit to warsaw/cpython that referenced this pull request Apr 11, 2023
…bers_only` at protocol class creation time, not during `isinstance()` checks (python#103160)