
Add lru_cache on parse for observe to speed up instantiation time #1344

Merged (2 commits) on Nov 11, 2020
3 changes: 2 additions & 1 deletion traits/observation/parsing.py
@@ -9,7 +9,7 @@
# Thanks for using Enthought open source!

import contextlib
from functools import reduce
from functools import lru_cache, reduce
import operator

from traits.observation import _generated_parser
@@ -254,6 +254,7 @@ def _handle_tree(tree, default_notifies=None):
tree.children, default_notifies=default_notifies)


@lru_cache(maxsize=128)
@mdickinson (Member) commented on Nov 11, 2020:
One (non-blocking) suggestion: it's probably worth pulling the 128 out into a (private) named constant, and defining that constant near the top of the file with a comment line explaining what it's for. E.g., something like:

#: Maximum number of parsed observation patterns stored in the LRU cache
_OBSERVE_PATTERN_CACHE_MAXSIZE = 128

(but I haven't put any thought into a good name, so feel free to come up with something better)
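As a minimal sketch of the suggested pattern (the toy `parse` body below is a stand-in for the real parser in `traits/observation/parsing.py`, and the constant name is the reviewer's placeholder suggestion):

```python
from functools import lru_cache

#: Maximum number of parsed observation patterns stored in the LRU cache
_OBSERVE_PATTERN_CACHE_MAXSIZE = 128


@lru_cache(maxsize=_OBSERVE_PATTERN_CACHE_MAXSIZE)
def parse(text):
    # Toy stand-in: the real function parses the observe mini-language
    # into an ObserverExpression.
    return tuple(text.split("."))
```

The decorated function exposes `parse.cache_info()`, so the effect of the constant is visible at runtime via the reported `maxsize`.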

Member:

Hmm. It's really an expression cache, not a pattern cache. Given that it's a private variable and that the observation context should be clear, maybe _EXPRESSION_CACHE_MAXSIZE?

Contributor:

I too prefer a module level constant (the truly desperate can change the value if they needed to).

Contributor:

I agree with the module level constant; unfortunately, since the cache is created via a decorator at import time (I think), there is no way to change it at run time by changing the constant - you'd need to patch the source.

Contributor:

> there is no way to change it at run time by changing the constant - you'd need to patch the source.

Ah yes you are right. Nevermind that then, let's leave that to the truly desperate.
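The import-time point can be illustrated with a small sketch (the tiny `maxsize` and toy `parse` body are illustrative only; the constant name is the one proposed earlier in the thread). `lru_cache` reads `maxsize` once, when the decorator runs, so rebinding the module constant afterwards has no effect on the existing cache:

```python
from functools import lru_cache

_EXPRESSION_CACHE_MAXSIZE = 2  # illustrative, deliberately tiny


@lru_cache(maxsize=_EXPRESSION_CACHE_MAXSIZE)  # value captured here, at decoration time
def parse(text):
    # Toy stand-in for the real parser.
    return tuple(text.split("."))


_EXPRESSION_CACHE_MAXSIZE = 100  # rebinding later changes nothing
assert parse.cache_info().maxsize == 2
```

To actually resize at runtime one would have to re-apply the decorator (e.g. `parse = lru_cache(maxsize=new_size)(parse.__wrapped__)`), which is presumably what "patch the source" alludes to.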

Contributor (Author):

yikes, I made this change locally but forgot to push... I will open a trivial separate PR adding the module level constant

def parse(text):
""" Top-level function for parsing user's text to an ObserverExpression.

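The speed-up the PR title claims comes from repeated instantiation hitting the cache: many `HasTraits` instances typically observe the same pattern string, so only the first call pays the parsing cost. A sketch of that behavior, again with a toy `parse` standing in for the real one (caching is safe in the real code only because the returned expression is shared and effectively immutable):

```python
from functools import lru_cache


@lru_cache(maxsize=128)
def parse(text):
    # Toy stand-in: the real function builds an ObserverExpression
    # from the observe mini-language text.
    return tuple(text.split("."))


# Simulate many instances observing the same pattern: one miss, then hits.
for _ in range(1000):
    parse("child.value")

info = parse.cache_info()
assert info.misses == 1 and info.hits == 999
```

After the first call, each subsequent `parse("child.value")` is a dictionary lookup rather than a full grammar parse, which is what shrinks per-instance setup time.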