Cache calls to compile_str #1516

Merged
merged 3 commits into from
Sep 10, 2021
Conversation

mdickinson
Member

This PR adds an LRU cache to the compile_str function that compiles an observe mini-language expression in text form to a collection of ObserverGraphs.

In testing on a medium-sized Traits-using application, this reduced the number of calls to ObserverExpression._as_graphs from 221 to 90 at application startup time.
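As a rough illustration of the caching approach described here, a string-to-graphs compiler can be memoized with functools.lru_cache. This is a hedged sketch: the body of compile_str below is a placeholder, not the real Traits implementation, and the call counter exists only to demonstrate the caching.

```python
from functools import lru_cache

# Instrumentation for the sketch only: counts actual compilations.
call_count = 0


@lru_cache(maxsize=None)
def compile_str(text):
    """Placeholder for the real compile_str, which parses an observe
    mini-language string into a collection of ObserverGraphs."""
    global call_count
    call_count += 1
    # Stand-in for the real parse + ObserverGraph construction.
    return tuple(text.split(":"))


compile_str("foo:bar.baz")
compile_str("foo:bar.baz")  # served from the cache; no recompilation
```

Because the mini-language strings used in decorators are typically repeated across many classes and methods, even a small cache eliminates most redundant compilations at startup.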

The PR also hoists compilation of the expression provided to an observe decorator up a level: in code like:

@observe("foo:bar.baz")
def _some_method(self, event):
    ...

compilation currently (in main) occurs when observe("foo:bar.baz") is applied to the _some_method function. With this PR, it occurs as part of the observe("foo:bar.baz") call instead. This will rarely make a difference to real code, but it does allow the possibility of sharing observe decorators without incurring multiple compilations:

observe_all_updates = observe("children:items:updated")

@observe_all_updates
def _some_method(self, event):
    ...

@observe_all_updates
def some_other_method(self, event):
    ...

Note: caching compilation of ObserverExpressions to graphs isn't useful: equality for ObserverExpressions is based on the conversion to graphs, so a conforming hash function would also need to be based on _as_graphs, and would therefore perform the exact same operation that's being cached.

No tests, because there should be no perceptible change in behaviour other than performance. (Though it would make sense to develop a benchmark test suite at some point.)

@corranwebster
Contributor

Note: caching compilation of ObserverExpressions to graphs isn't useful: equality for ObserverExpressions is based on the conversion to graphs, so a conforming hash function would also need to be based on _as_graphs, so would perform the exact same operation that's being cached.

Perhaps not for this PR, but the ObserverExpression._as_graphs could be implemented as:

def _as_graphs(self):
    if not hasattr(self, '_graphs'):
        self._graphs = ...  # do the computation
    return self._graphs

which would then effectively cache both string-based and expression-based expression objects. It wouldn't prevent double-work for equal expressions, but would prevent double-work for situations like:

complex_expr = parse('foo').then(filter(myfilter))

def dynamic_observers(obj):
    obj.observe(handler, complex_expr)
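The attribute-based memoization suggested above can be sketched as follows. The class and attribute names here are illustrative stand-ins, not Traits' actual internals, and the compile counter is instrumentation for the sketch only.

```python
class Expression:
    """Illustrative stand-in for ObserverExpression."""

    def __init__(self, text):
        self.text = text
        self.compile_count = 0  # instrumentation for the sketch only

    def _as_graphs(self):
        # Compile lazily on first access, then reuse the stored result.
        if not hasattr(self, "_graphs"):
            self.compile_count += 1
            # Stand-in for the real compilation to ObserverGraphs.
            self._graphs = tuple(self.text.split(":"))
        return self._graphs


expr = Expression("foo:bar")
expr._as_graphs()
expr._as_graphs()  # second call reuses the cached _graphs attribute
```

This caches per object, so passing the same expression object to observe repeatedly (as in the dynamic_observers example above) compiles only once, even though equal-but-distinct expression objects would still each compile independently.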

@mdickinson
Member Author

[...] the ObserverExpression._as_graphs could be implemented as [...]

Good point. Separately from that, I don't think we actually need ObserverExpression.__eq__ to go through _as_graphs; as far as I can tell, that's purely for the benefit of the test suite. I think we could reimplement __eq__ to provide simple structural equality (and add a corresponding __hash__).
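A minimal sketch of what structural equality with a matching hash might look like, assuming the expression's content can be captured as a tuple of nodes. The class and attribute names are hypothetical, not Traits' actual API.

```python
class ObserverExpression:
    """Illustrative sketch: equality and hashing based on structure,
    without compiling to graphs."""

    def __init__(self, nodes):
        # Structural content of the expression, stored immutably.
        self.nodes = tuple(nodes)

    def __eq__(self, other):
        if not isinstance(other, ObserverExpression):
            return NotImplemented
        return self.nodes == other.nodes

    def __hash__(self):
        # Consistent with __eq__: structurally equal expressions hash
        # equally, with no _as_graphs call involved.
        return hash(self.nodes)
```

With equality and hashing both derived from the same structural data, the invariant that equal objects have equal hashes holds by construction, and expression objects become safely usable as dict keys or cache keys.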

@mdickinson mdickinson merged commit 74d249c into main Sep 10, 2021
@mdickinson mdickinson deleted the metd/caching-experiment branch September 10, 2021 07:17